query
stringlengths
273
149k
pos
stringlengths
18
667
idx
int64
0
1.99k
task_name
stringclasses
1 value
Graph Neural Networks (GNNs) have received tremendous attention recently due to their power in handling graph data for different downstream tasks across different application domains. The key of GNN is its graph convolutional filters, and recently various kinds of filters are designed. However, there still lacks in-depth analysis on Whether there exists a best filter that can perform best on all graph data; Which graph properties will influence the optimal choice of graph filter; How to design appropriate filter adaptive to the graph data. In this paper, we focus on addressing the above three questions. We first propose a novel assessment tool to evaluate the effectiveness of graph convolutional filters for a given graph. Using the assessment tool, we find out that there is no single filter as a `silver bullet' that perform the best on all possible graphs. In addition, different graph structure properties will influence the optimal graph convolutional filter's design choice. Based on these findings, we develop Adaptive Filter Graph Neural Network (AFGNN), a simple but powerful model that can adaptively learn task-specific filter. For a given graph, it leverages graph filter assessment as regularization and learns to combine from a set of base filters. Experiments on both synthetic and real-world benchmark datasets demonstrate that our proposed model can indeed learn an appropriate filter and perform well on graph tasks. Graph Neural Networks (GNNs) are a family of powerful tools for representation learning on graph data, which has been drawing more and more attention over the past several years. GNNs can obtain informative node representations for a graph of arbitrary size and attributes, and has shown great effectiveness in graph-related down-stream applications, such as node classification , graph classification (b), graph matching , recommendation systems , and knowledge graphs . As GNNs have superior performance in graph-related tasks, the question as to what makes GNNs so powerful is naturally raised. Note that GNNs adopt the concept of the convolution operation into graph domain. To obtain a representation of a specific node in a GNN, the node aggregates representations of its neighbors with a convolutional filter. For a task related to graph topology, the convolutional filter can help GNN nodes to get better task-specific representations . Therefore, it is the filter that makes GNNs powerful, and thus the key to designing robust and accurate GNNs is to design proper graph convolutional filters. Recently, many GNN architectures are proposed with their own graph filter designs. However, none of them have properly answered the following fundamental questions of GNNs: Is there a best filter that works for all graphs? If not, what are the properties of graph structure that will influence the performance of graph convolutional filters? Can we design an algorithm to adaptively find the appropriate filter for a given graph? In this paper, we focus on addressing the above three questions for semi-supervised node classification task. Inspired by studies in Linear Discriminant Analysis (LDA), we propose a Graph Filter Discriminant (GFD) Score metric to measure the power of a graph convolutional filter in discriminating node representations of different classes on a specific graph. We have analyzed all the existing GNNs' filters with this assessment method to answer the three aforementioned questions. We found that no single filter design can achieve optimal on all possible graphs. 
In other words, for different graph data, we should adopt different graph convolutional filters to achieve optimal performance. We then experimentally and theoretically analyze how graph structure properties influence the optimal choice of graph convolutional filters. Based on all of our findings, we propose the Adaptive Filter Graph Neural Network (AF-GNN), which can adaptively learn a proper model for the given graph. We use the Graph Filter Discriminant Score (GFD) as a an extra loss term to guide the network to learn a good data-specific filter, which is a linear combination of a set of base filters. We show that the proposed Adaptive Filter can better capture graph topology and separate features on both real-world datasets and synthetic datasets. We highlight our main contributions as follows: • We propose an assessment tool: Graph Filter Discriminant Score, to analyze the effectiveness of graph convolutional filters. Using this tool, we find that no best filter can work for all graphs, the optimal choice of a graph convolutional filter depends on the graph data. • We propose Adaptive Filter Graph Neural Network that can adaptively learn a proper filter for a specific graph using the GFD Score as guidance. • We show that the proposed model can find better filters and achieve better performance compared to existing GNNs, on both real-word and newly created benchmark datasets. Semi-Supervised Node Classification. Let Y be the class assignment vector for all the nodes in V. C indicates the total number of classes, and Y v ∈ {1, · · ·, C} indicates the class that node v belongs to. The goal of semi-supervised node classification is to learn a mapping function f: V → {1, · · ·, C} using the labeled nodes, and predict the class labels for the unlabeled nodes, i.e., Y v = f (v), by leveraging both node features X and graph structure A. Graph Data Generator. Intuitively, semi-supervised node classification requires both node features (X) and the graph structure (A) to be correlated to the intrinsic node labels (Y) to some extent. To systematically analyze the performance of different GNN filters, we test their performance under different graph data with different properties, i.e., graphs with different X, A, Y. Intuitively, both graph topology and node features have to be correlated with the node labels, if including both can enhance the performance of node classification task. To better understand the roles played by each component, we assume the graphical model to generate a graph data is as described in Fig. 1(a). To better disclose the relationship between different graph filters and properties of different graph data, we further make assumptions on how X and A are generated when Y is given, as it is difficult to obtain those properties from real-world data. Therefore, we study simulated data to support a thorough analysis. We now describe the generation of Y, X|Y, and A|Y respectively. Generating Y: Each node is randomly assigned with a class label with probability proportional to its class size. We assume each class c is associated with n c nodes. Generating X|Y: We assume that node features are sampled from a distribution determined by their corresponding labels. For example, we can sample node features of class c from a multivariate Gaussian distribution with the parameters conditioned on class c:. For another example, we can sample node features of class c from a circular distribution with radius r c and noise noise c conditioned on c. 
Generating A|Y: We follow the most classic class-aware graph generator, i.e. stochastic block model , to generate graph structure conditioned on class labels. SBM has several simple assumptions that edges are generated via Bernoulli distributions independently and the parameter of the Bernoulli distribution is determined by the classes of the corresponding pair of nodes v i and v j, i.e., A ij |Y i, Y j ∼ Ber(p YiYj), where p YiYj is a parameter determined by the two corresponding classes. In a simple two-class case, p = p 11 = p 22 denotes the probability that the linked pair belongs to the same class, while q = p 12 = p 21 denotes the probability that the linked pair belongs to different classes. We call p+q 2 the "density of graph", which controls the overall connectivity of a graph, and we call |p − q| the "density gap", which controls how closely the graph generated by SBM correlates with labels. We assume p ≥ q in all the following sections. Degree Corrected SBM , which is a variation of SBM, adds another parameter γ to control the "power-law coefficient" of degree distribution among nodes. Figure 1 Graph Convolutional Filters. By examining various GNN designs, we find that most of the GNN operators can fit into a unified framework, i.e., for the l-th layer: which describes the three-step process that involves: a graph convolutional operation (can also be regarded as feature propagation or feature smoothing) denoted as F(G)H (l−1), a linear transformation denoted by multiplying W, and a non-linear transformation denoted by σ(·). Clearly, the graph convolutional operation F(G)H (l−1) is the key step that helps GNNs to improve performance. Thus, to design a good GNN, a powerful graph convolutional filter F(G) is crucial. We analyze the effectiveness of graph filters for existing GNNs in the following. The work of GCN first adopts the convolutional operation on graphs and use the filter F(G) =D −1/2ÃD−1/2. Here,à = A + I is the self-augmented adjacency matrix, andD = diag(d 1, ...,d n) is the corresponding degree matrix, whered i = n j=1à ij. Some studies (a;) use a filter F(G) = (D −1/2ÃD−1/2) k that is similar in form to GCN's filter, but with a pre-defined exponent k greater than one. This would help a node to obtain information from its further neighbors without redundant computation cost. Several other studies propose to use sampling to speed up GNN training (b; ; a) ), which can be considered as a sparser version of GCN's filter. Another set of GNNs consider using a learnable graph convolutional filter. For example, and both propose to use F(G) = A+ I where is a learnable parameter to augment self-loop skip connection. Graph Attention Networks proposes to assign attention weight to different nodes in a neighborhood, which can be considered as a flexible learnable graph convolutional filter. Their graph filters applied on a feature matrix X can be considered as: ∀i, j, where N i is the neighborhood of node i, α is a learnable weight vector, and || indicates concatenation. In this section, we introduce a novel assessment tool for analyzing graph convolutional filters. We first review the Fisher score, which is widely used in Linear Discriminant Analysis to quantify the linear separability of two sets of features. With the Fisher score, we propose the Graph Filter Discriminant Score metric to evaluate the graph convolutional filter on how well it can separate nodes in different classes. Fisher Score. 
When coming to non-graph data, the Fisher Score is used to assess the linear separability between two classes. Given two classes of features X (i) and X (j), the Fisher Score is defined as the ratio of the variance between the classes (inter-class distance) to the variance within the classes (inner-class distance) under the best linear projection w of the original feature: where µ (i) and µ (j) denotes the mean vector of X (i) and X (j) respectively, Σ (i) and Σ (j) denotes the variance of X (i) and X (j) respectively, and w denotes the linear projection vector which we can understand as a rotation of the coordinate system, and the max w operation is to find the best direction in which these two class of nodes are most separable. As the numerator of J indicates interclass distance and the denominator of J indicates inner-class distance a larger value of J indicates higher separability. Note that for given features, we can directly get the closed form solution of the optimal w, with which Fisher Score could be deformed as: The detailed proof is provided in appendix A.2. Graph Filter Discriminant Score. As mentioned before, the key component that empowers GNNs is the graph convolutional filter F(G). Intuitively, an effective filter should make the representations of nodes in different classes more separable. Therefore, we propose to use Fisher Scores of the node representations before and after applying the graph convolutional filter in order to evaluate this filter. For each pair of classes (i, j), we define their Fisher Difference as, which is the difference of their Fisher Score of representations after applying the filter F(G) and their Fisher Score of initial representations. We then define the GFD Score for the filter F(G) with respect to feature matrix X as follows: where n c is the number of nodes in class c. Note that the GFD Score is a weighted sum of the Fisher Difference for each pair of classes. Intuitively, the larger the GFD score, the more effective is this corresponding filter to increase the separability of node features. The Fisher Score can be extended to evaluate non-linearly separable data in addition to linearly separable data. We claim the rationale of such measure by showing that the graph convolution can actually help non-linearly separable data to be linearly separable if the graph filter is chosen properly for a given graph. As shown in Figure 2 (a)∼(d), if we use a proper filter, the convolutional operation can transform three circular distributions, which are non-linearly separable, into three linearly separable clusters. Moreover, as shown in Figure 2 (e)∼(h), even if the original features of different classes are sampled from the same distribution, the proper graph convolutional filter can help to linearly separate the data. This phenomenon shows that if the graph structure (A) is correlated with the task (Y), a proper filter alone is powerful enough to empower GNNs with non-linearity, without any non-linear activation. This phenomenon is also supported by the promising of SGC (a), which removes all the non-linear activations in the GCN architecture. Therefore, we claim that the proposed GFD is a reasonable metric to evaluate a graph filter's effectiveness, and a good graph filter for a given graph should have a higher GFD score on that graph. With the help of the assessment tool, we now examine existing filters and try to answer the two fundamental questions: Is there a best filter that works for all graphs? 
If not, what are the properties of graph data that will influence the performance of graph convolutional filters? The GFD Score we introduced in the above section can be applied to any filter on any given graph. From Table 3, we can see that most of the current GNNs fall into the following filter family: {(Â) k }, where the base is a normalized adjacency matrix, and k is the order of the filter. Note that there are some other variants of GNN filters that do not fall into this family, for example, GAT, but the analysis is similar. Without loss of generality, we focus on analyzing this filter family. The two main components of this filter family are the normalization strategy (Â) and the order to use (k). For (h) F isher = 17.8018 Figure 2: Each row corresponds to a graph. The i-th column corresponds to the feature distribution after applying filter (D −1/2ÃD−1/2) i−1. Both graphs include three classes of same size and has structure generated by SBM (p = 0.6, q = 0.03). The first graph's feature follows a circular distribution with radius = 1, 0.9, 0.8 and Gaussian noise = 0.02 for each class. The second graph's feature follows a circular distribution with radius = 1 and Gaussian noise = 0.02 for all classes. simplicity, we study the roles of these two components separately, using our assessing tool to show whether there exists an optimal choice of filter for different graph data. If an optimal choice does not exist, we determine the factors that will influence our choice of component. Through the analysis, we choose SBM and DCSBM introduced previously to generate the structures of synthetic graphs, and choose multivariate Gaussian distributions to generate features of synthetic graphs. We focusing on the structure properties that influence the optimal choice of filter. We enumerate the hyper-parameters to generate graphs with different structure properties, including the power law coefficient (γ) that controls the power law degree distribution of the graph, label ratio (n1 n2) that indicates how balance are the classes of this graph, density (p+q 2) that indicates the overall connectivity of the graph, and density gap (|p − q|) that indicates structural separability of the graph. As these properties are significant for real-world graphs, our generated synthetic graphs can cover a large range of possible graph properties, and are representative for analyzing different filters. Analyzing Filter's Normalization Strategy. We consider three normalization strategies, including row normalization D −1 A, column normalization AD −1, and symmetric normalization D −1/2 AD −1/2. We calculate GFD scores of these three graph filters for graphs generated with different parameters. As shown in Figure 3, no single normalization strategy is optimal for all graphs. Here we give an empirical explanation to this phenomenon. Note that, with the same order, each filter has the same receptive field, and different normalization strategies affect only on how to assign weights to the neighboring nodes. The row normalization strategy simply takes the mean of features of the node's neighbors. Clearly, this would help to keep every node's new representations in the same range. On the contrary, column normalization and symmetric normalization, might keep a larger representation for higher-degree nodes. Using a column-normalized adjacency matrix as the base of the graph convolutional filter is similar to the PageRank algorithm. 
While a node propagates its features to neighbors, this normalization strategy takes its degree into account. Thus, column normalization can be helpful when the when node degree plays an important role for classification. Symmetric normalization combines the properties from both the row normalization and the column normalization. Even in the case where row normalization and column normalization do not perform well, symmetric normalization still leads to promising performance. We now examine which graph properties influence our choice of the optimal normalization strategy, which may vary per graph. We find that power law coefficient γ is an important factor that influences the choice of normalization. As shown in Figure 3, when power-law coefficient γ decreases (graph's extent of power-law grows), row normalization tends to have better performance. This is because row normalization helps to keep node representations in the same range, so that large representations of high degree nodes can be avoided. Therefore, it prevents nodes with similar degrees getting closer to each other and messing the classification tasks where node degrees are not important. We also find that the label ratio (n1 n2) matters. As shown in Figure 3, when the size of each class becomes more imbalanced, column normalization tends to work better. This is because column normalization better leverages degree property during representation smoothing, as nodes in largesize classes tend to have larger representation since they are more likely to have higher degree. This can help nodes within different classes become more separable. We then analyze what would be the best order for filters. With a highorder filter, a node can obtain information from its further neighbors, and thus the amount of information it receives during the feature propagation increases. But do we always need more information under any circumstances? The answer is no. Still, we find that, for different graphs, the order that in the best performance would be different 1. Since there is no best filter order for all the cases, we explore the factors that can influence the choice of order. We find that the density of graph and density gap between two classes have a big impact. As shown in Figure 4, when the density or density gap increases, the filter with higher order tends to be a better choice. We provide an intuitive explanation for this phenomenon as follows. Note that the feature propagation scheme is based on the assumption that nodes in the same class have a closer connection. On one hand, when the density increases, the connections between nodes are closer. Therefore, high-order filters can help gather richer information and thus reduce the variance of the obtained new node representations, so it helps nodes in the same class get smoother representations. On the other hand, when the density gap decreases, for a node, the size of neighbors within the same class becomes similar to the size of neighbors within different classes. Thus conducting high-order graph convolution operations will mix the representations of all nodes regardless of classes, which will make node classification more difficult. Based on previous analysis, we now answer the last question: Can we design an algorithm to adaptively find the appropriate filter for a given graph? We develop a simple but powerful model, the Adaptive Filter Graph Neural Network (AFGNN). For a given graph, AFGNNs can learn to combine an effective filter from a set of filter bases, guided by GDF Scores. 
Adaptive Filter Graph Neural Network (AFGNN). For simplicity, we only consider finding the optimal filter for one family of graph convolutional filters: where k is the maximum order. Note that, we also include the identity matrix, which serves as a skip-connection, to maintain the original feature representation. Based on our previous analysis, for graphs that are not closely correlated to tasks (i.e., small density gap in SBM), the identity matrix will outperform all the other convolutional filters. We denote the above 3k + 1 filters as, the l-th layer of AFGNN is defined as a learnable linear combination of these filter bases: where ψ (l) is the learnable vector to combine base filters and α (l) is its softmax-normalized version. Comparing to GNNs with fixed filters such as GCN and SGC, our proposed AFGNN can adaptively learn a filter based on any given graph. As we have shown that no single fixed filter can perform optimally for all graphs, we conclude that an adaptive filter has more capacity to learn better representations. Comparing to other GNNs with learnable filters such as GAT, AFGNN is computationally cheaper and achieves similar or better performance on most existing benchmarks and our synthetic datasets (as shown in the experiment section). We leave expanding the base filter family and adding more complex filters such as GAT into our filter bases as future work. Training Loss. To train this AFGNN model, we can simply optimize the whole model via any downstream tasks, i.e., node classification. However, as most of the semi-supervised node classification datasets only contain limited training data, the enlarged filter space will make the model prone to over-fitting. Thus, we decide to add the GFD Score as an loss term into the training loss to guide the optimization of filter weights, i.e., ψ (l) and to prevent overfitting: where L CE is the cross-entropy loss of the node classification, and L GF D is defined as the cumulative negation of GFD Score for the learned adaptive filter F AF GN N (G) (l) at each layer with respect to its input representation H (l−1). During the training, we minimize L to learn the proper model. With a different choice of the weight λ for GFD loss, we can categorize our model into: AFGNN 0: With λ = 0, the model is only trained by L CE, which might be prone to over-fitting when data is not sufficient. AFGNN 1: With λ = 1, the model is trained by both L CE and L GF D simultaneously. AFGNN ∞: This case is not exactly λ = ∞, and the training process is different from the other two cases. We implement the training iteratively: we optimize the combination of base filters by training only with GFD loss L GF D, then we optimize the linear transofrmation parameter W l s with classification loss L CE. Note that the input feature H = X is invariant, we can pre-train the optimal filter for first layer and fix it. Dataset We first evaluate AFGNN on three widely used benchmark datasets: Cora, Citeseer, and Pubmed . As these datasets are not sensitive enough to differentiate the models, we need more powerful datasets that can evaluate the pros and cons of each model. Based on our findings in section 3.2, we generate two synthetic benchmarks called SmallGap and SmallRatio. SmallGap corresponds to the case in which the density gap of the graph is close to 1. This indicates that the graph structure does not correlate much to the task, thus I would be the best filter in this case. SmallRatio corresponds to the case in which the label ratio is small, i.e. 
the size of one class is clearly smaller than the other, and column normalization AD −1 is the best normalization 2. Baselines and Settings. We compare against 5 baselines, including GCN, GIN, SGC, GFNN, and GAT. To make fair comparisons, for all the baseline GNNs, we set the number of layers (or orders) to be 2, and tune the parameters including learning rate, weight decay, and number of epochs 3. For all the benchmark datasets, we follow the data split convention 2. For the synthetic dataset, we conduct 5-fold cross-validation, randomly split the nodes into 5 groups of the same size, take one group as the training set, one as the validation set and the remaining three as the test set. Each time we pick Classification Performance. As is shown in Table 1, our proposed AFGNN ∞ model can consistently achieve competitive test accuracy. On Pubmed, SmallGap, and SmallRatio, AFGNN ∞ can achieve the best among all the baseline models. On Cora and Citeseer, though GAT outperforms our proposed model a little bit, however, as shown in Table 6,7, GAT takes a longer time to train and converge, and has more memory cost as well. Also, when the given graph is simple, GAT would suffer unavoidable overfitting problem. We further compare our AFGNN 0, AFGNN 1, AFGNN ∞ to examine the role of GFD loss. The AFGNN 0 performs quite poorly on all the datasets, implying that the larger search space of the filter without GFD loss is prone to over-fitting, while AFGNN 1 and AFGNN ∞ perform much better. Also, AGFNN ∞ has superior performance compared to AFGNN 1, which indicates the GFD Score is indeed a very powerful assessment tool. Graph Filter Discriminant Analysis. We are also interested to see whether the proposed method can indeed learn the best combination of filters from the base filter family. To do so, we calculate the GFD Score of the first-layer filter learned by AFGNN 0, AFGNN 1, AFGNN ∞ and the seven base filters on the test set for each dataset. For the AFGNN models, the filter is trained with the training set for each dataset. Table 2 4 and Figure 5 show the , we can see that our proposed method can indeed learn a combined filter on all the datasets. Specifically, in all the benchmark datasets, the best base filter is (D −1Ã) 2, and our proposed adaptive filter not only picks out the best base filter but also learns a better combination. For the two synthetic datasets, where I and (ÃD −1) 2 are the best filters, our algorithm can also learn to pick them out. We thereby conclude that the proposed GFD loss can help find an appropriate filter for a given dataset. Understanding the graph convolutional filters in GNNs is very important, as it can help to determine whether a GNN will work on a given graph, and can provide important guidance for GNN design. In our paper, we focus on the semi-supervised node classification task. We first propose the Graph Filter Discriminant Score as an assessment tool for graph convolutional filter evaluation, and then apply this GFD Score to analyze a family of existing filters as a case study. Using this tool, we learn that no single fixed filter can produce optimal on all graphs. We then develop a simple but powerful GNN model: Adapative Filter Graph Neural Network, which can learn to combine a family of filters and obtain a task-specific powerful filter. We also propose to add the negative GFD Score as an extra component to the objective function, it can act as a guidance for the model to learn a more effective filter. 
Experiments show that our approach outperforms many existing GNNs on both benchmark and synthetic graphs. Graph Convolutional Filters F(G) = Q, where Q is parametric attention function of X and A Table 3 summarized the graph filters for existing GNNs. Proof According to the in linear discriminant analysis, the maximum separation occurs when w ∝ (Note that, when we want to apply this fisher linear discriminant score in our problem, the linear transformation part in our classifier (and also the linear transformation part in GNN) will help to find the best w. Thus, we can directly plug the optimum solution w * = c into this formula, here c is a scalar. Then, we'll have: Thus we completed the proof. A.3.1 EXAMPLES OF "NO BEST NORMALIZATION STRATEGY FOR ALL" Figure 6 provides two examples to show there is no best normalization strategy for all graphs. For both examples, we fix the order of filter to be 2. The first row shows a case in which row normalization is better than the other two. The corresponding graph contains 2 classes of nodes with size 500. The graph structure is generated by DCSBM with p = 0.3, q = 0.05, power law coefficient γ = −0.9. The features for two classes satisfy multivariate distribution with an identity co-variance matrix, and with mean (0.2,0.2) and respectively. In this example, we can clearly see that with other two normalization strategy, some high-degree hubs show up in the upper right corner from both class, which is harmful for classification. We generate this example to illustrate the benefit of row normalization because row normalization would be very helpful for a graph with power law degree distribution, which contains some nodes with unusually large degree (those nodes are called hubs), since it can help avoid those hubs obtaining larger representations and thus be mis-classified. The second row shows a case in which column normalization is better than the other two. The corresponding graph contains 2 classes of nodes with size 900 and 100 respectively. The graph structure is generated by SBM with p = 0.3, q = 0.2. The features for two classes satisfy multivariate distribution with an identity co-variance matrix, and with mean (-0.2,-0.2) and (0.2,0.2) respectively. We generate this example to illustrate the benefit of column normalization because under this case, we should consider taking more degree information into consideration. Therefore, column normalization would be more helpful. Figure 7 provides two examples to show there is no best order for all graphs. For both examples, we fix the normalization strategy to be row normalization, and varies order to be 2, 4, 6. The first row shows a case in which small order is better than the large ones. The corresponding graph contains 2 classes of nodes with same size 500. The graph structure is generated by SBM with p = 0.215, q = 0.2. The features for two classes satisfy multivariate distribution with an identity co-variance matrix, and with mean (0.5,0.5) and respectively. The second row shows a case in which large order is better than the smaller ones. The corresponding graph contains 2 classes of nodes with same size 500. The graph structure is generated by SBM with p = 0.75, q = 0.6. The features for two classes satisfy multivariate distribution with an identity co-variance matrix, and with mean (0.5,0.5) and respectively. 
For the curves indicating how powerlaw coefficient influence the choice of normalization in Figure 3, we generate the corresponding graphs structure by DCSBM with fixed p = 0.3, q = 0.2 and varies the powerlaw coefficient from -0.3 to 0. The graph contains two classes of nodes, and is of size 400 and 600 for each class respectively. The feature for each class satisfies multivariate normal distribution with identity co-variance matrix, and with mean and (0.2,0.2). For the curves indicating how label ratio influence the choice of normalization in Figure 3, we generate the corresponding graphs structure by SBM with fixed p = 0.3, q = 0.1 and varies the label ratio. The graph contains a total number of 1000 nodes in two classes. The feature for each class satisfies multivariate normal distribution with identity co-variance matrix, and with mean and (0.5,0.5). For the curves indicating how density influence the choice of normalization in Figure 4, we generate the corresponding graphs structure by SBM with fixed density gap p/q = 1.5 and varies the density by varying q. The graph contains two classes of node of size 500. The feature for each class satisfies multivariate normal distribution with identity co-variance matrix, and with mean and (0.5,0.5). For the curves indicating how density gap influence the choice of normalization in Figure 4, we generate the corresponding graphs structure by SBM with fixed density p + q = 0.6 and varies the density gap. The graph contains two classes of node of size 500. The feature for each class satisfies multivariate normal distribution with identity co-variance matrix, and with mean (-0.2,-0.2) and (0.2,0.2). The following flowchart (Figure 8) describes the process of how a one-layer AFGNN tackle node classification task. We reduced the dimension of feature by t-SNE . We annotate the filter and the GFD Score in title of each subfigure. Note that, identity also corresponds to the initial feature. The figure is the feature representation obtained after conduct graph convolution operation once with the corresponding filter. We use three benchmark dataset: Cora, Citeseer and Pubmed for the node classification task. Their statictics are in table4. Beside number of nodes, edges, classes, the dimension of feature, and the data split strategy, we also show the class ratio variance, which can indicates if this dataset is imbalance or not, density gap, which indicates the dependency of structure and labels, and density, which indicates the overall connectivity of a graph. We provide the degree distribution in Figure 10, and we can clearly find that these benchmark datasets has power law degree distribution. Nodes 2708 3327 19717 Edges 5429 4732 44338 Classes 7 6 3 Feature 1433 3703 500 Train 140 120 60 Validation 500 500 500 Test 1000 1000 We tune the number of epochs based on convergence performance. For learning rate and weight decay, we follows the parameter setting provides by the corresponding public implementations unless we find better parameters. The tuned parameters can be found in our code resource. We report the accuracy of node classification task for baseline models on Cora, Citeseer, and Pubmed provided by corresponding literature. Since GIN is not originally evaluated on node classification task, we do not have the reported number here. The is in Table 5. A.9 TIME AND MEMORY COST COMPARISON Both our AFGNN model and GAT model have a learnable filter. We provide time and memory complexity comparison on benchmark datasets here to compare these two models. 
As shown in Table 6, GAT's time cost is at least three times of AFGNN's time cost on both Cora and Citeseer dataset. As shown in Table 8: Performance on OAG SmallRatio Dataset because it requires too much memory cost and is not able to run on GPU. Therefore, AFGNN needs less time and memory cost than GAT. We generate a real-world dataset with imbalanced classes to justify hard cases may exist in realworld datasets. We download a large scale academic graph called Open Academic Graph (OAG), and choose two fields that have a large disparity in the number of papers: "History of ideas", which consists of 1041 papers; "Public history", which consists of 150 papers. Obviously this two classes are imbalanced, and fall in the large label ratio gap problem. We run supplementary experiment on the generated OAG graph, the experiment setting remains the same as experiment settings for synthetic graphs. Table 8 shows the experiment . To evaluate the models, we compare their F1 score for each class, the weighted average F1 score (micro F1), and the average F1 score (macro F1). Our AFGNN ∞ model shows superior performance on this dataset.
Propose an assessment framework to analyze and learn graph convolutional filter
700
scitldr
The advance of node pooling operations in Graph Neural Networks (GNNs) has lagged behind the feverish design of new message-passing techniques, and pooling remains an important and challenging endeavor for the design of deep architectures. In this paper, we propose a pooling operation for GNNs that leverages a differentiable unsupervised loss based on the minCut optimization objective. For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task (e.g., a graph classification loss), and, thanks to the minCut objective, also on the connectivity structure of the graph. Graph pooling is obtained by applying the matrix of assignment vectors to the adjacency matrix and the node features. We validate the effectiveness of the proposed pooling method on a variety of supervised and unsupervised tasks. A fundamental component in deep convolutional neural networks is the pooling operation, which replaces the output of convolutions with local summaries of nearby points and is usually implemented by maximum or average operations . State-of-the-art architectures alternate convolutions, which extrapolate local patterns irrespective of the specific location on the input signal, and pooling, which lets the ensuing convolutions capture aggregated patterns. Pooling allows to learn abstract representations in deeper layers of the network by discarding information that is superfluous for the task, and keeps model complexity under control by limiting the growth of intermediate features. Graph Neural Networks (GNNs) extend the convolution operation from regular domains, such as images or time series, to data with arbitrary topologies and unordered structures described by graphs . The development of pooling strategies for GNNs, however, has lagged behind the design of newer and more effective message-passing (MP) operations , such as graph convolutions, mainly due to the difficulty of defining an aggregated version of the original graph that supports the pooled signal. A naïve pooling strategy in GNNs is to average all nodes features , but it has limited flexibility since it does not extract local summaries of the graph structure, and no further MP operations can be applied afterwards. An alternative approach consists in pre-computing coarsened versions of the original graph and then fit the data to these deterministic structures . While this aggregation accounts for the connectivity of the graph, it ignores task-specific objectives as well as the node features. In this paper, we propose a differentiable pooling operation implemented as a neural network layer, which can be seamlessly combined with other MP layers (see Fig. 1). The parameters in the pooling layer are learned by combining the task-specific loss with an unsupervised regularization term, which optimizes a continuous relaxation of the normalized minCUT objective. The minCUT identifies dense graph components, where the nodes features become locally homogeneous after the message-passing. By gradually aggregating these components, the GNN learns to distil global properties from the graph. The proposed minCUT pooling operator (minCUTpool) yields partitions that 1) cluster together nodes which have similar features and are strongly connected on the graph, and 2) take into account the objective of the downstream task. minCUT Pooling Message-passing Figure 1: A deep GNN architecture where message-passing is followed by minCUT pooling. 
Given a graph G = {V, E}, |V| = N, and the associated adjacency matrix A ∈ R N ×N, the K-way normalized minCUT (simply referred to as minCUT) aims at partitioning V in K disjoint subsets by removing the minimum volume of edges. The problem is equivalent to maximizing where the numerator counts the edge volume within each cluster, and the denominator counts the edges between the nodes in a cluster and the rest of the graph . Let C ∈ R N ×K be a cluster assignment matrix, so that C i,j = 1 if node i belongs to cluster j, and 0 otherwise. The minCUT problem can be expressed as where D = diag(A1 N) is the degree matrix . Since problem is NP-hard, it is usually recast in a relaxed formulation that can be solved in polynomial time and guarantees a near-optimal solution : While the optimization problem is still non-convex, there exists an optimal solution Q * = U K O, where U K ∈ R N ×K contains the eigenvectors of A corresponding to the K largest eigenvalues, and O ∈ R K×K is an orthogonal transformation . Since the elements of Q * are real values rather than binary cluster indicators, the spectral clustering (SC) approach can be used to find discrete cluster assignments. In SC, the rows of Q * are treated as node representations embedded in the eigenspace of the Laplacian, and are clustered together with standard algorithms such as k-means . One of the main limitations of SC lies in the computation of the spectrum of A, which has a memory complexity of O(N 2) and a computational complexity of O(N 3). This prevents its applicability to large datasets. To deal with such scalability issues, the constrained optimization in can be solved by gradient descent algorithms that refine the solution by iterating operations whose individual complexity is O(N 2), or even O(N) . Those algorithms search the solution on the manifold induced by the orthogonality constraint on the columns of Q, by performing gradient updates along the geodesics . Alternative approaches rely on the QR factorization to constrain the space of feasible solutions , and alleviate the cost O(N 3) of the factorization by ensuring that orthogonality holds only on one minibatch at a time . Other works based on neural networks include an autoencoder trained to map the ith row of the Laplacian to the ith components of the first K eigenvectors, to avoid the spectral decomposition . use a soft orthogonality constraint to learn spectral embeddings as a volumetric reparametrization of a precomputed Laplacian eigenbase.; propose differentiable loss functions to partition generic data and process out-of-sample data at inference time. generate balanced node partitions with a GNN, but adopt an optimization that does not encourage cluster assignments to be orthogonal. Many approaches have been proposed to process graphs with neural networks, including recurrent architectures or convolutional operations inspired by filters used in graph signal processing (; . Since our focus is on graph pooling, we base our GNN implementation on a simple MP operation, which combines the features of each node with its 1st-order neighbors. To account for the initial node features, it is possible to introduce self-loops by adding a (scaled) identity matrix to the diagonal of A . Since our pooling will modify the structure of the adjacency matrix, we prefer a MP implementation that leaves the original A unaltered and accounts for the initial node features by means of skip connections. 
N ×N be the symmetrically normalized adjacency matrix and X ∈ R N ×F the matrix containing the node features. The output of the MP layer is where Θ M P = {W m, W s} are the trainable weights relative to the mixing and skip component of the layer, respectively. The minCUT pooling strategy computes a cluster assignment matrix S ∈ R N ×K by means of a multi-layer perceptron, which maps each node feature x i into the ith row of S: where Θ P ool = {W 1 ∈ R F ×H, W 2 ∈ R H×K} are trainable parameters. The softmax function guarantees that s i,j ∈ and enforces the constraints S1 K = 1 N inherited from the optimization problem in. The parameters Θ M P and Θ P ool are jointly optimized by minimizing the usual task-specific loss, as well as an unsupervised loss L u, which is composed of two terms where · F indicates the Frobenius norm. The cut loss term, L c, evaluates the minCUT given by the cluster assignment S, and is bounded by −1 ≤ L c ≤ 0. Minimizing L c encourages strongly connected nodes to be clustered together, since the inner product s i, s j increases whenã i,j is large. L c has a single maximum, reached when the numerator T r(This occurs if, for each pair of connected nodes (i.e.,ã i,j > 0), the cluster assignments are orthogonal (i.e., s i, s j = 0). L c reaches its minimum, −1, when T r(S Tà S) = T r(S TD S). This occurs when in a graph with K disconnected components the cluster assignments are equal for all the nodes in the same component and orthogonal to the cluster assignments of nodes in different components. However, L c is a non-convex function and its minimization can lead to local minima or degenerate solutions. For example, given a connected graph, a trivial optimal solution is the one that assigns all nodes to the same cluster. As a consequence of the continuous relaxation, another degenerate minimum occurs when the cluster assignments are all uniform, that is, all nodes are equally assigned to all clusters. This problem is exacerbated by prior message-passing operations, which make the node features more uniform. The orthogonality loss term, L o, penalizes the degenerate minima of L c by encouraging the cluster assignments to be orthogonal and the clusters to be of similar size. Since the two matrices in L o have unitary norm it is easy to see that 0 ≤ L o ≤ 2. Therefore, L o does not dominate over L c and the two terms can be safely summed directly (see Fig. 4 for an example). I K can be interpreted as a (rescaled) clustering matrix I K =Ŝ TŜ, whereŜ assigns exactly N/K points to each cluster. The value of the Frobenius norm between clustering matrices is not dominated by the performance on the largest clusters and, thus, can be used to optimize intra-cluster variance. Contrarily to SC methods that search for feasible solutions only within the space of orthogonal matrices, L o only introduces a soft constraint that could be violated during the learning procedure. Since L c is non-convex, the violation compromises the theoretical guarantee of convergence to the optimum of. However, we note that: 1. the cluster assignments S are well initialized: after the MP operation, the features of the connected vertices become similar and, since the MLP is a smooth function , it yields similar cluster assignments for those vertices; 2. in the GNN architecture, the minCUT objective is a regularization term and, therefore, a solution which is sub-optimal for could instead be adequate for the specific objective of the downstream task; 3. 
optimizing the task-specific loss helps the GNN to avoid the degenerate minima of L c. The coarsened version of the adjacency matrix and the graph signal are computed as where the entry x pool i,j in X pool ∈ R K×F is the weighted average value of feature j among the elements in cluster i. A pool ∈ R K×K is a symmetric matrix, whose entries a are the total number of edges between the nodes in the cluster i, while a pool i,j is the number of edges between cluster i and j. Since A pool corresponds to the numerator of L c in, the trace maximization yields clusters with many internal connections and weakly connected to each other. Hence, A pool will be a diagonal-dominant matrix, which describes a graph with self-loops much stronger than any other connection. Because self-loops hamper the propagation across adjacent nodes in the MP operations following the pooling layer, we compute the new adjacency matrixà pool by zeroing the diagonal and by applying the degree normalization where diag(·) returns the matrix diagonal. The proposed method is straightforward to implement: the cluster assignments, the loss, graph coarsening, and feature pooling are all computed with standard linear algebra operations. There are several differences between minCUTpool and classic SC methods. SC partitions the graph based on the Laplacian, but does not account for the node features. Instead, the cluster assignments s i found by minCUTpool depend on x i, which works well if connected nodes have similar features. This is a reasonable assumption in GNNs since, even in disassortative graphs (i.e., networks where dissimilar nodes are likely to be connected ), the features tend to become similar due to the MP operations. Another difference is that SC handles a single graph and is not conceived for tasks with multiple graphs to be partitioned independently. Instead, thanks to the independence of the model parameters from the number of nodes N and from the graph spectrum, minCUTpool can generalize to outof-sample data. This feature is fundamental in problems such as graph classification, where each sample is a graph with a different structure, and allows to train the model on small graphs and process larger ones at inference time. Finally, minCUTpool directly uses the soft cluster assignments rather than performing k-means afterwards. Trainable pooling methods. Similarly to our method, these approaches learn how to generate coarsened version of the graph through differentiable functions, which take as input the nodes features X and are parametrized by weights optimized on the task at hand. Diffpool is a pooling module that includes two parallel MP layers: one to compute the new node features X (t+1) and another to generate the cluster assignments S. Diffpool implements an unsupervised loss that consists of two terms. First, the link prediction term A − SS T F minimizes the Frobenius norm of the difference between the adjacency and the Gram matrix of the cluster assignments, encouraging nearby nodes to be clustered together. The second term 1 N N i=1 H(S i) minimizes the entropy of the cluster assignments to make them alike to one-hot vectors. Like minCUTpool, Diffpool clusters the vertices of annotated graphs, but yields completely different partitions, since it computes differently the clustering assignments, the coarsened adjacency matrix and, most importantly, the unsupervised loss. In Diffpool, such a loss shows pathological behaviors that are discussed later in the experiments. 
The approach dubbed Top-K pooling , learns a projection vector that is applied to each node feature to obtain a score. The nodes with the K highest scores are retained, the others are dropped. Since the top-K selection is not differentiable, the scores are also used as a gate/attention for the node features, letting the projection vector to be trained with backpropagation. Top-K is memory efficient as it avoids generating cluster assignments. To prevent A from becoming disconnected after nodes removal, Top-K drops the rows and the columns from A 2 and uses it as the new adjacency matrix. However, computing A 2 costs O(N 2) and it is inefficient to implement with sparse operations. Topological pooling methods. These methods pre-compute a pyramid of coarsened graphs, only taking into account the topology (A), but not the node features (X). During training, the node features are pooled with standard procedures and are fit into these deterministic graph structures. These methods are less flexible, but provide a stronger bias that can prevent degenerate solutions (e.g., coarsened graphs collapsing in a single node). The approach proposed by , which has been adopted also in other GNN architectures , exploits GRACLUS , a hierarchical algorithm based on SC. At each pooling level l, GRACLUS indetifies the pairs of maximally similar nodes i l and j l to be clustered together into a new vertex k (l+1). At inference phase, max-pooling is used to determine which node in the pair is kept. Fake vertices are added so that the number of nodes can be halved each time, but this injects noisy information in the graph. Node decimation is a method originally proposed in graph signal processing literature , which as been adapted also for GNNs . The nodes are partitioned in two sets, according to the signs of the Laplacian eigenvector associated to the largest eigenvalue. One of the two sets is dropped, reducing the number of nodes each time approximately by half. Kron reduction is used to compute a pyramid of coarsened Laplacians from the remaining nodes. A procedure proposed in diffuses a signal from designated nodes on the graph and stores the observed sequence of diffused components. The ing stream of information is interpreted as a time signal, where standard CNN pooling is applied. We also mention a pooling operation for coarsening binary unweighted graphs by aggregating maximal cliques . Nodes assigned to the same clique are summarized by max or average pooling and become a new node in the coarsened graph. We consider both supervised and unsupervised tasks, and compare minCUTpool with other GNN pooling strategies. The Appendix provides further details on the experiments and a schematic depiction of the architectures used in each task. In addition, the Appendix reports two additional experiments: i) graph reconstruction by means of an Auto Encoder with bottleneck, implemented with pooling and un-pooling layers, ii) an architecture with pooling for graph regression. To study the effectiveness of the proposed loss, we perform different node clustering tasks with a simple GNN composed of a single MP layer followed by a pooling layer. The GNN is trained by minimizing L u only, so that its effect is evaluated without the "interference" of a supervised loss. Clustering on synthetic networks We consider two simple graphs: the first is a network with 6 communities and the second is a regular grid. The adjacency matrix A is binary and the features X are the 2-D node coordinates. Fig. 
2 depicts the node partitions generated by SC (a, d), Diffpool (b, e), and minCUTpool (c, f). Cluster indexes for Diffpool and minCUTpool are obtained by taking the argmax of S row-wise. Compared to SC, Diffpool and minCUTpool leverage the information contained in X. minCUTpool generates very accurate and balanced partitions, demonstrating that the cluster assignment matrix S is well formed. On the other hand, Diffpool assigns some nodes to the wrong community in the first example, and produces an imbalanced partition of the grid. Image segmentation Given an image, we build a Region Adjacency Graph (Trémeau &) using as nodes the regions generated by an oversegmentation procedure . The SC technique used in this example is the recursive normalized cut , which recursively clusters the nodes until convergence. For Diffpool and minCUTpool, we include node features consisting of the average and total color in each oversegmented region. We set the number of desired clusters to K = 4. The in Fig. 3 show that minCUTpool yields a more precise segmentation. On the other hand, SC and Diffpool aggregate wrong regions and, in addition, SC finds too many segments. Clustering on citation networks We cluster the nodes of three popular citation networks: Cora, Citeseer, and Pubmed. The nodes are documents represented by sparse bag-of-words feature vectors stored in X and the binary undirected edges in A indicate citation links between documents. Each node i is labeled with the document class y i. Once the training is over, to test the quality of the partitions generated by each method we check the agreement between the cluster assignments and the true class labels. Tab. 1 reports the Completeness Score CS(ỹ, y) = 1 −, where H(·) is the entropy. The GNN architecture configured with minCUTpool achieves a higher NMI score than SC, which does not account for the node features X when generating the partitions. Our pooling operation outperforms also Diffpool, since the minimization of the unsupervised loss in Diffpool yields degenerate solutions. The pathological behavior is shown in Fig. 4, which depicts the evolution of the NMI scores as the unsupervised losses in Diffpool and minCUTpool are minimized in training. In this task, the i-th datum is a graph with N i nodes represented by a pair {A i, X i} and must be associated to the correct label y i. We test the models on different graph classification datasets. For featureless graphs, we used the node degree information and the clustering coefficient as surrogate node features. We evaluate model performance with a 10-fold train/test split, using 10% of the training set in each fold as validation for early stopping. We adopt a fixed network architecture, MP-pool-MP-pool-MP-GlobalAvgPool-softmax, where MP is the message-passing operation in with 32 hidden units. The pooling module is implemented either by Graclus, Decimation pooling, Top-K, SAGPool , Diffpool, or the proposed minCUTpool. Each pooling method is configured to drop half of the nodes in a graph (K = N/2 in Top-K, Diffpool, and minCUTpool). As baselines, we consider the popular Weisfeiler-Lehman (WL) graph kernel , a network with only MP layers (Flat), and a fully connected network (Dense). Tab. 2 reports the classification , highlighting those that are significantly better (p-value < 0.05 w.r.t. the method with the highest mean accuracy). The comparison with Flat helps to understand if a pooling operation is useful or not. 
The results of Dense, instead, help to quantify how much additional information is brought by the graph structure, with respect to the node features alone. It can be seen that minCUTpool always obtains results equal to or better than every other GNN architecture. On the other hand, some pooling procedures do not always improve the performance compared to the Flat baseline, making them not advisable to use in some cases. The WL kernel generally performs worse than the GNNs, except for the Mutagenicity dataset. This is probably because Mutagenicity has smaller graphs than the other datasets, and the adopted GNN architecture is overparametrized for this task. Interestingly, in some datasets, such as Proteins and COLLAB, it is possible to obtain fairly good classification accuracy with the Dense architecture, meaning that the graph structure only adds limited information. [Figure: training time comparison of Top-K, Diffpool, minCUTpool, Graclus, and Decimation pooling.] Graclus and Decimation are understandably the fastest methods, since the coarsened graphs are precomputed. Among the differentiable pooling methods, minCUTpool is faster than Diffpool, which uses a slower MP layer rather than an MLP to compute cluster assignments, and faster than Top-K, which computes the square of A at every forward pass. We proposed a pooling layer for GNNs that coarsens a graph by taking into account both the connectivity structure and the node features. The layer optimizes a regularization term based on the minCUT objective, which is minimized in conjunction with the task-specific loss to produce node partitions that are optimal for the task at hand. We tested the effectiveness of our pooling strategy on unsupervised node clustering tasks, by optimizing only the unsupervised clustering loss, as well as on supervised graph classification tasks on several popular benchmark datasets. Results show that minCUTpool performs significantly better than existing pooling strategies for GNNs. To compare the amount of information retained by the pooling layers in the coarsened graphs, we train an autoencoder (AE) to reconstruct an input graph signal X from its pooled version. The AE architecture is MP-MP-pool-unpool-MP-MP-MP, and is trained by minimizing the mean squared error between the original and the reconstructed graph signal, ‖X − X rec‖². All the pooling operations are configured to retain 25% of the original nodes. In Diffpool and minCUTpool, the unpool step is simply implemented by transposing the original pooling operations. Top-K does not generate a cluster assignment matrix, but returns a binary mask m ∈ {0, 1} N that indicates the nodes to drop or to retain. Therefore, an upsampling matrix U is built by dropping the columns of the identity matrix I N that correspond to a 0 in m, U = [I N]:,m==1. The unpooling operation is performed by replacing S with U in the unpool equation, and the resulting upscaled graph is a version of the original graph with zeroes in correspondence of the dropped nodes. Figs. 6 and 7 report the original graph signal X (the node features are the 2-D coordinates of the nodes) and the reconstruction X rec obtained by using the different pooling methods, for a ring graph and a regular grid graph. The reconstruction produced by Diffpool is worse for the ring graph, but is almost perfect for the grid graph, while minCUTpool yields good results in both cases. On the other hand, Top-K clearly fails in generating a coarsened representation that maintains enough information from the original graph.
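As an illustration of the Top-K unpooling described above, here is a small sketch (ours, not from the paper's code) of how the upsampling matrix U can be built from the retain/drop mask and applied to the pooled features and adjacency matrix; the function name and the toy example are hypothetical.

```python
import numpy as np

def topk_unpool(X_pool, A_pool, mask):
    """Upsample a Top-K-pooled graph back to the original node set.

    mask: binary vector of length N with 1 for retained nodes (K ones).
    The upsampling matrix U is the identity I_N with the columns of the
    dropped nodes removed, so the reconstructed signal is zero on them.
    """
    N = mask.shape[0]
    U = np.eye(N)[:, mask == 1]          # (N, K) upsampling matrix
    X_up = U @ X_pool                    # zeros in correspondence of dropped nodes
    A_up = U @ A_pool @ U.T
    return X_up, A_up

# Toy example: 4 nodes, nodes 0 and 2 retained.
mask = np.array([1, 0, 1, 0])
X_pool = np.array([[1.0, 2.0], [3.0, 4.0]])   # features of the 2 kept nodes
A_pool = np.ones((2, 2))
X_up, A_up = topk_unpool(X_pool, A_pool, mask)
```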
This experiment highlights a major issue in Top-K pooling, which retains the nodes associated with the highest K values of a score vector s, computed by projecting the node features onto a trainable vector p: s = Xp. Nodes that are connected on the graph usually share similar features, and their similarity further increases after the MP operations, which combine the features of neighboring nodes. Retaining the nodes associated with the top K scores in s corresponds to keeping those nodes that are alike and highly connected, as can be seen in Figs. 6-7. Therefore, Top-K discards entire portions of the graphs, which might contain important information. This explains why Top-K fails to recover the original graph signal when used as the bottleneck of the AE, and yields the worst performance among all GNN methods in the graph classification task. The QM9 chemical database is a collection of ≈135k small organic molecules, associated with continuous labels describing several geometric, energetic, electronic, and thermodynamic properties. Each molecule in the dataset is represented as a graph {A i, X i}, where atoms are associated to nodes, and edges represent chemical bonds. The atomic number of each atom (one-hot encoded; C, N, F, O) is taken as node feature and the type of bond (one-hot encoded; single, double, triple, aromatic) can be used as edge attribute. In this experiment, we ignore the edge attributes in order to use all pooling algorithms without modifications. The purpose of this experiment is to compare the trainable pooling methods also on a graph regression task, but it should be regarded as a proof of concept. In fact, the graphs in this dataset are extremely small (the average number of nodes is 8) and, therefore, a pooling operation is arguably not necessary. We consider a GNN with architecture MP-pool-MP-GlobalAvgPool-Dense, where pool is implemented by Top-K, Diffpool, or minCUTpool. The network is trained to predict a given chemical property from the input molecular graphs. Performance is evaluated with a 10-fold cross-validation, using 10% of the training set for validation in each split. The GNNs are trained for 50 epochs, using Adam with learning rate 5e-4, batch size 32, and ReLU activations. We use the mean squared error (MSE) as the supervised loss. The MSE obtained on the prediction of each property for the different pooling methods is reported in Tab. 3. As expected, the flat baseline with no pooling operation (MP-MP-GlobalAvgPool-Dense) yields a lower error in most cases. In contrast to the graph classification and AE tasks, Top-K achieves better results than Diffpool on average. Once again, minCUTpool significantly outperforms the other methods on each regression task and, in one case, also the flat baseline. Table 3: MSE on the graph regression task. The best results with a statistical significance of p < 0.05 are highlighted: the best overall are in bold, the best among pooling methods are underlined. For the WL kernel, we used the implementation provided in the GraKeL library. The pooling strategy based on Graclus is taken from the ChebyNets repository. Diffpool and minCUTpool are configured with 16 hidden neurons with linear activations in the MLP and MP layer, respectively, used to compute the cluster assignment matrix S. The MP layer used to compute the propagated node features X uses an ELU activation in both architectures. The learning rate for Adam is 5e-4, and the models are trained for 10000 iterations. The details of the citation networks dataset are reported in Tab. 4.
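To make the mechanism concrete, the following is a small sketch (ours) of the Top-K scoring and gating described above; the choice of tanh as the gating nonlinearity and the use of A instead of A² are simplifications of this sketch, not details taken from the original method.

```python
import numpy as np

def topk_pool(X, A, p, k):
    """Sketch of Top-K pooling: score nodes by projecting the features onto a
    trainable vector p, keep the k highest-scoring nodes, and gate their
    features with the scores so that p receives gradients through the output.
    """
    s = X @ p                              # node scores, s = Xp
    idx = np.argsort(-s)[:k]               # indices of the K highest scores
    gate = np.tanh(s[idx])[:, None]        # scores reused as a soft gate
    X_pool = X[idx] * gate
    # The original method drops rows/columns from A^2 to limit disconnection;
    # plain A is used here for brevity.
    A_pool = A[idx][:, idx]
    return X_pool, A_pool, idx
```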
We train the GNN architectures with Adam, an L 2 penalty loss with weight 1e-4, and 16 hidden units (H) both in the MLP of minCUTpool and in the internal MP of Diffpool. Mutagenicity, Proteins, DD, COLLAB, and Reddit-2k are datasets representing real-world graphs and are taken from the repository of benchmark datasets for graph kernels 4. Bench-easy and Bench-hard 5 are datasets where the node features X and the adjacency matrix A are completely uninformative if considered alone. Hence, algorithms that account only for the node features or the graph structure will fail to classify the graphs. Since Bench-easy and Bench-hard come with a train/validation/test split, the 10-fold split is not necessary to evaluate the performance. The statistics of all the datasets are reported in Tab. 5. Fig. 8 reports the schematic representation of the minCUTpool layer; Fig. 9 the GNN architecture used in the clustering and segmentation tasks; Fig. 10 the GNN architecture used in the graph classification task; Fig. 12 the GNN architecture used in the graph regression task; Fig. 11 the graph autoencoder used in the graph signal reconstruction task.
A new pooling layer for GNNs that learns how to pool nodes according to their features, the graph connectivity, and the downstream task objective.
701
scitldr
Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is the task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple yet powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. By using this particular decomposition, parameters are shared between relations, enabling multi-task learning. TuckER outperforms previous state-of-the-art models across several standard link prediction datasets. Vast amounts of information available in the world can be represented succinctly as entities and relations between them. Knowledge graphs are large, graph-structured databases which store facts in triple form (e s, r, e o), with e s and e o representing subject and object entities and r a relation. However, far from all available information is stored in existing knowledge graphs, which creates the need for algorithms that automatically infer missing facts. Knowledge graphs can be represented by a third-order binary tensor, where each element corresponds to a triple, 1 indicating a true fact and 0 indicating the unknown (either a false or a missing fact). The task of link prediction is to infer which of the 0 entries in the tensor are indeed false, and which are missing but actually true. A large number of approaches to link prediction so far have been linear, based on various methods of factorizing the third-order binary tensor BID12 BID22 BID19 BID7. Recently, state-of-the-art have been achieved using non-linear convolutional models BID3 BID0. Despite achieving very good per- formance, the fundamental problem with deep, non-linear models is that they are non-transparent and poorly understood, as opposed to more mathematically principled and widely studied tensor decomposition models. In this paper, we introduce TuckER (E stands for entities, R for relations), a simple linear model for link prediction in knowledge graphs, based on Tucker decomposition BID21 of the binary tensor of triples. Tucker decomposition factorizes a tensor into a core tensor multiplied by a matrix along each mode. In our case, rows of the matrices contain entity and relation embeddings, while entries of the core tensor determine the level of interaction between them. Due to having the core tensor, unlike simpler models, such as RESCAL, DistMult and ComplEx, where parameters for each relation are often learned separately, TuckER makes use of multi-task learning between different relations BID24. Subject and object entity embedding matrices are assumed equivalent, i.e. we make no distinction between the embeddings of an entity depending on whether it appears as a subject or as an object in a particular triple. Our experiments show that TuckER achieves state-of-the-art across all standard link prediction datasets. Several linear models for link prediction have previously been proposed. An early linear model, RESCAL BID12, optimizes a scoring function containing a bilinear product between subject and object entity vectors and a full rank matrix for each relation. RESCAL is prone to overfitting due to its large number of parameters, which increases quadratically in the embedding dimension with the number of relations in a knowledge graph. DistMult BID22 ) is a special case of RESCAL with a diagonal matrix per relation, which reduces overfitting. However, DistMult cannot model asymmetric relations. 
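To illustrate why a diagonal relation matrix cannot capture asymmetry, here is a tiny sketch (ours) of the DistMult score; the variable names are arbitrary.

```python
import numpy as np

de = 4
e_s, e_o = np.random.randn(de), np.random.randn(de)
w_r = np.random.randn(de)                  # DistMult: one diagonal matrix per relation

def distmult(e_s, w_r, e_o):
    return np.sum(e_s * w_r * e_o)         # <e_s, w_r, e_o>

# The score is invariant to swapping subject and object, so DistMult assigns
# the same score to (e_s, r, e_o) and (e_o, r, e_s): it cannot model
# asymmetric relations.
assert np.isclose(distmult(e_s, w_r, e_o), distmult(e_o, w_r, e_s))
```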
ComplEx BID19 extends DistMult to the complex domain. Subject and object entity embeddings for the same entity are complex conjugates, which enables ComplEx to model asymmetric relations. SimplE BID7 is a model based on Canonical Polyadic (CP) decomposition BID5.Scoring functions of all models described above and TuckER are summarized in Table 1. Table 1. Scoring functions of state-of-the-art link prediction models, the dimensionality of their relation parameters, and significant terms of their space complexity. de and dr are the dimensionalities of entity and relation embeddings, while ne and nr denote the number of entities and relations respectively. eo ∈ C de is the complex conjugate of eo, he s, te s ∈ R de are the head and tail entity embedding of entity es, and w r −1 ∈ R dr is the embedding of relation r −1 which is the inverse of relation r. · denotes the dot product and ×n denotes the tensor product along the n-th mode, f is a non-linear function, and W ∈ R de×de×dr is the core tensor of a Tucker decomposition. Relation Parameters Space Complexity BID22 e s, w r, e o w r ∈ R de O(n e d e + n r d e) ComplEx BID19 Re(e s, w r, e o) DISPLAYFORM0 DISPLAYFORM1 Let E denote the set of all entities and R the set of all relations present in a knowledge graph. A triple is represented as (e s, r, e o), with e s, e o ∈ E denoting subject and object entities respectively and r ∈ R the relation between them. In link prediction, we are given a subset of all true triples and the aim is to learn a scoring function φ that assigns a score s = φ(e s, r, e o) ∈ R which indicates whether a triple is true, with the ultimate goal of being able to correctly score all missing triples. The scoring function is either a specific form of tensor factorization in the case of linear models or a more complex (deep) neural network architecture for nonlinear models. Typically, a positive score for a particular triple indicates a true fact predicted by the model, while a negative score indicates a false one. Tucker decomposition, named after Ledyard R. Tucker BID20, decomposes a tensor into a set of matrices and a smaller core tensor. In a three-mode case, given the original tensor X ∈ R I×J×K, Tucker decomposition outputs a tensor Z ∈ R P ×Q×R and three matrices A ∈ R I×P, B ∈ R J×Q, C ∈ R K×R: DISPLAYFORM0 with × n indicating the tensor product along the n-th mode. Elements of the core tensor Z show the level of interaction between the different components. Typically, P, Q, R are smaller than I, J, K respectively, so Z can be thought of as a compressed version of X BID9 ). We propose a model that uses Tucker decomposition for link prediction on the binary tensor representation of a knowledge graph, with entity embedding matrix E that is equivalent for subject and object entities, i.e. E = A = C ∈ R ne×de and relation embedding matrix R = B ∈ R nr×dr, where n e and n r represent the number of entities and relations and d e and d r the dimensionality of entity and relation embedding vectors respectively. We define the scoring function for TuckER as: DISPLAYFORM0 where e s, e o ∈ R de are the rows of E representing the subject and object entity embedding vectors, w r ∈ R dr the rows of R representing the relation embedding vector and W ∈ R de×dr×de is the core tensor. We apply logistic sigmoid to each score φ(e s, r, e o) to obtain the predicted probability p of a triple being true. Visualization of the TuckER model architecture can be seen in FIG1. 
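Since the scoring-function equation above did not survive extraction cleanly, the following is a hedged sketch of how the Tucker-decomposition score with 1-N scoring could be written in PyTorch (the paper states its implementation is in PyTorch); the function name and the (d_e, d_r, d_e) mode ordering of the core tensor are assumptions of this sketch.

```python
import torch

def tucker_score(e_s, w_r, E, W):
    """Sketch of the TuckER scoring function with 1-N scoring.

    e_s: (batch, de) subject embeddings, w_r: (batch, dr) relation embeddings,
    E:   (ne, de)    entity embedding matrix (shared for subjects and objects),
    W:   (de, dr, de) core tensor; the mode ordering is an assumption.
    """
    x = torch.einsum('bi,ijk->bjk', e_s, W)    # W x_1 e_s   -> (batch, dr, de)
    x = torch.einsum('bj,bjk->bk', w_r, x)     # ... x_2 w_r -> (batch, de)
    scores = x @ E.t()                         # ... x_3 e_o for every entity
    return torch.sigmoid(scores)               # predicted probabilities, (batch, ne)
```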
The number of parameters of TuckER increases linearly with respect to entity and relation embedding dimensionality d e and d r, as the number of entities and relations increases, since the number of parameters of W depends only on the entity and relation embedding dimensionality and not on the number of entities or relations. By having the core tensor W, unlike simpler models such as DistMult, ComplEx and SimplE, TuckER does not encode all the learned knowledge into the embeddings; some is stored in the core tensor and shared between all entities and relations through multi-task learning BID24. Following the training procedure introduced by BID3, we use 1-N scoring, i.e. we simultaneously score a pair (e s, r) with all entities e o ∈ E, in contrast to 1-1 scoring, where individual triples (e s, r, e o) are trained one at a time. We assume that a knowledge graph is only locally complete by including only the non-existing triples (e s, r, ·) and (·, r, e o) of the observed pairs (e s, r) and (r, e o) as negative samples and all observed triples as positive samples. We train our model to minimize the Bernoulli negative loglikelihood loss function. A component of the loss for one subject entity and all the object entities is defined as: DISPLAYFORM0 where p ∈ R ne is the vector of predicted probabilities and y ∈ R ne is the binary label vector. We evaluate TuckER using standard link prediction datasets. FB15k BID1 ) is a subset of Freebase, a large database of real world facts. FB15k-237 BID18 was created from FB15k by removing the inverse of many relations that are present in the training set from validation and test sets. WN18 BID1 ) is a subset of WordNet, containing lexical relations between words. WN18RR BID3 ) is a subset of WN18, created by removing the inverse relations. We implement TuckER in PyTorch BID13 and make our code available on Github 1. We choose all hyper-parameters by random search based on validation set performance. For FB15k and FB15k-237, we set entity and relation embedding dimensionality to d e = d r = 200. For WN18 and WN18RR, which both contain a significantly smaller number of relations relative to the number of entities as well as a small number of relations compared to FB15k and FB15k-237, we set d e = 200 and d r = 30. We use batch normalization BID6 and dropout BID16 to speed up training. We choose the learning rate from {0.01, 0.005, 0.003, 0.001, 0.0005} and learning rate decay from {1, 0.995, 0.99}. We find the following combinations of learning rate and learning rate decay to give the best : (0.003, 0.99) for FB15k, (0.0005, 1.0) for FB15k-237, (0.005, 0.995) for WN18 and (0.01, 1.0) for WN18RR. We train the model using Adam BID8 with the batch size 128.1 https://github.com/ibalazevic/TuckERWe evaluate each triple from the test set as in BID1: for a given triple, we generate 2n e test triples by keeping the subject entity e s and relation r fixed and replacing the object entity e o with all possible entities E and vice versa. We then rank the scores obtained. We use the filtered setting, i.e. we remove all true triples apart from the currently observed test triple. For evaluation, we use the evaluation metrics used across the link prediction literature: mean reciprocal rank (MRR) and hits@k, k ∈ {1, 3, 10}. Mean reciprocal rank is the average of the inverse of a mean rank assigned to the true triple over all n e generated triples. Hits@k measures the percentage of times the true triple is ranked in the top k of the n e generated triples. 
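The loss equation above is also garbled by extraction; below is a small sketch (ours) of a Bernoulli negative log-likelihood for 1-N scoring, together with a filtered rank computation from which MRR and hits@k can be derived. The function names are ours, and numerical safeguards (e.g. clamping probabilities away from 0 and 1) are omitted.

```python
import torch

def bernoulli_nll(p, y):
    """1-N scoring loss for one (e_s, r) pair: binary cross-entropy between the
    predicted probabilities p in R^{ne} and the binary label vector y."""
    return -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()

def filtered_rank(scores, true_idx, filter_idx):
    """Rank of the true object entity, ignoring other known true triples
    (the 'filtered' setting)."""
    s = scores.clone()
    s[filter_idx] = float('-inf')       # remove other true triples from the ranking
    s[true_idx] = scores[true_idx]      # keep the score of the test triple itself
    return int((s > s[true_idx]).sum()) + 1

# MRR is the mean of 1/rank over all test triples; hits@k is the fraction of
# test triples whose filtered rank is <= k.
```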
Link prediction results on all datasets are shown in Tables 2 and 3. Overall, TuckER outperforms previous state-of-the-art models on all metrics across all datasets (apart from hits@10 on WN18). Results achieved by TuckER are not only better than those of other linear models, such as DistMult, ComplEx and SimplE, but also better than those of many more complex deep neural network and reinforcement learning architectures, e.g. MINERVA, ConvE and HypER, demonstrating the expressive power of linear models. Even though at entity embedding dimensionality d e = 200 and relation embedding dimensionality d r = 30 on WN18RR TuckER has fewer parameters (∼9.4 million) than ComplEx and SimplE (∼16.4 million), it consistently obtains better results than any of those models. We believe this is achieved by exploiting knowledge sharing between relations through the core tensor. We find that lower dropout values (0.1, 0.2) are required for datasets with a higher number of training triples per relation and thus less risk of overfitting (WN18 and WN18RR), and higher dropout values (0.3, 0.4, 0.5) are required for FB15k and FB15k-237. We further note that TuckER improves the results of all other linear models by a larger margin on datasets with a large number of relations (e.g. +14% improvement on FB15k over ComplEx, +8% improvement over SimplE on the toughest hits@1 metric), which supports our belief that TuckER makes use of the parameters shared between similar relations to improve predictions by multi-task learning. The presence of the core tensor, which allows for knowledge sharing between relations, suggests that TuckER should need a lower number of parameters for obtaining good results than ComplEx or SimplE. To test this, we re-implement ComplEx and SimplE with 1-N scoring, batch normalization and dropout for a fair comparison, perform random search to choose the best hyper-parameters and train all three models on FB15k-237 with embedding sizes d e = d r ∈ {20, 50, 100, 200}. Table 2: Link prediction results on WN18RR and FB15k-237. We report results for ComplEx-N3 BID10 at de = 115 for WN18RR and de = 400 for FB15k-237 to ensure comparability with TuckER in terms of the overall number of parameters (the original paper reports results at de = 2000). The RotatE BID17 results are reported without their self-adversarial negative sampling (see Appendix H in the original paper) for fair comparison, given that it improves the results by ∼4% and is not specific to that model only. [Table 2 body omitted: per-model MRR, Hits@10, Hits@3, and Hits@1 on WN18RR and FB15k-237, with a column indicating whether each model is linear.] FIG2 shows the obtained MRR on the test set for each model. It is important to note that at embedding dimensionalities 20, 50 and 100, TuckER has fewer parameters than ComplEx and SimplE (e.g. ComplEx and SimplE have ∼3 million and TuckER has ∼2.5 million parameters for embedding dimensionality 100). We can see that the difference between the MRRs of ComplEx, SimplE and TuckER is approximately constant for embedding sizes 100 and 200. However, for lower embedding sizes, the difference between MRRs increases, e.g. by 4.2% for embedding size 20 for ComplEx and by 9.9% for embedding size 20 for SimplE. At embedding size 20 (∼300k parameters), the performance of TuckER is almost as good as the performance of ComplEx and SimplE at embedding size 200 (∼6 million parameters), which supports our initial assumption.
In this work, we introduce TuckER, a relatively simple yet highly flexible linear model for link prediction in knowledge graphs based on the Tucker decomposition of a binary tensor of training set triples, which achieves state-of-the-art results on several standard link prediction datasets. TuckER's number of parameters grows linearly with respect to embedding dimension as the number of entities or relations in a knowledge graph increases, which makes it easily scalable to large knowledge graphs. Future work might include exploring how to incorporate knowledge on individual relation properties into the existing model.
We propose TuckER, a relatively simple but powerful linear model for link prediction in knowledge graphs, based on Tucker decomposition of the binary tensor representation of knowledge graph triples.
702
scitldr
With innovations in architecture design, deeper and wider neural network models deliver improved performance on a diverse variety of tasks. But the increased memory footprint of these models presents a challenge during training, when all intermediate layer activations need to be stored for back-propagation. Limited GPU memory forces practitioners to make sub-optimal choices: either train inefficiently with smaller batches of examples; or limit the architecture to have lower depth and width, and fewer layers at higher spatial resolutions. This work introduces an approximation strategy that significantly reduces a network's memory footprint during training, but has negligible effect on training performance and computational expense. During the forward pass, we replace activations with lower-precision approximations immediately after they have been used by subsequent layers, thus freeing up memory. The approximate activations are then used during the backward pass. This approach limits the accumulation of errors across the forward and backward pass---because the forward computation across the network still happens at full precision, and the approximation has a limited effect when computing gradients to a layer's input. Experiments, on CIFAR and ImageNet, show that using our approach with 8- and even 4-bit fixed-point approximations of 32-bit floating-point activations has only a minor effect on training and validation performance, while affording significant savings in memory usage. Deeper neural network models are able to express more complex functions, and recent have shown that with the use of residual BID7 and skip BID9 connections to address vanishing gradients, such networks can be trained effectively to leverage this additional capacity. As a , the use of deeper network architectures has become prevalent, especially for visual inference tasks BID8. The shift to larger architectures has delivered significant improvements in performance, but also increased demand on computational resources. In particular, deeper network architectures require significantly more on-device memory during training-much more so than for inference. This is because training requires retaining the computed activations of all intermediate layers since they are needed to compute gradients during the backward pass. The increased memory footprint means fewer training samples can fit in memory and be processed as a batch on a single GPU. This is inefficient: smaller batches are not able to saturate all available parallel cores, especially because computation in "deeper" architectures is distributed to be more sequential. Moreover, smaller batches also complicate the use of batch-normalization BID11, since batch statistics are now computed over fewer samples making training less stable. These considerations often force the choice of architecture to be based not just on optimality for inference, but also practical feasibility for training-for instance, deep residual networks for large images drop resolution early, so that most layers have smaller sized outputs. While prior work to address this has traded-off memory for computation BID13 BID4 BID3, their focus has been on enabling exact gradient computation. However, since stochastic gradient descent (SGD) inherently works with noisy gradients at each iteration, we propose an algorithm that computes reasonably approximate gradients, while significantly reducing a network's memory footprint and with virtually no additional computational cost. 
Our work is motivated by distributed training algorithms Figure 1: Proposed Approach. We show the computations involved in the forward and backward pass during network training for a single "pre-activation" layer, with possible residual connections. The forward pass is exact, but we discard full-precision activations right after use by subsequent layers (we store these in common global buffers, and overwrite activations once they have been used and no longer needed for forward computation). Instead, we store a low-precision approximation of the activations which occupies less memory, and use these during back-propagation. Our approach limits errors in the gradient flowing back to the input of a layer, and thus accumulation of errors across layers. Since our approximation preserves the signs of activations, most of the computations along the path back to the input are exact-with the only source of error being the use of the approximate activations while back-propagating through the variance-computation in batch-normalization.that succeed despite working with approximate and noisy gradients aggregated across multiple devices BID15 BID2; ). We propose using low-precision approximate activations-that require less memory-to compute approximate gradients during back-propagation (backprop) on a single device. Note that training with a lowerprecision of 16-instead of 32-bit floating-point representations is not un-common. But this lower precision is used for all computation, and thus allows only for a modest lowering of precision, since the approximation error builds up across the forward and then backward pass through all layers. In this work, we propose a new backprop implementation that performs the forward pass through the network at full-precision, and incurs limited approximation error during the backward pass. We use the full-precision version of a layer's activations to compute the activations of subsequent layers. However, once these activations have been used in the forward pass, our method discards them and stores a low-precision approximation instead. During the backward pass, gradients are propagated back through all the layers at full precision, but instead of using the original activations, we use their low-precision approximations. As a , we incur an approximation error at each layer when computing the gradients to the weights from multiplying the incoming gradient with the approximate activations, but ensure the error in gradients going back to the previous layer is minimal. Our experimental show that even using only 4-bit fixed-point approximations, for the original 32-bit floating-point activations, causes only minor degradation in training quality. This significantly lowers the memory required for training, which comes essentially for "free"-incurring only the negligible additional computational cost of converting activations to and from low precision representations. Our memory-efficient version of backprop is thus able to use larger batch sizes at each iteration-to fully use available parallelism and compute stable batch statistics-and makes it practical for researchers to explore the use of much larger and deeper architectures than before. A number of works focus on reducing the memory footprint of a model during inference, e.g., by compression BID6 and quantization BID10, to ensure that it can be deployed on resource-limited mobile devices, while still delivering reasonable accuracy. 
These methods still require storing full versions of model weights and activations during network training, and assume there is sufficient memory to do so. However, training requires significantly more memory than inference because of the need to store all intermediate activations. And so, memory can be a bottleneck during training, especially with the growing preference for larger and deeper network architectures. A common recourse to this has been to simply use multiple GPUs during training. But, this is inefficient due to the overhead of intra-device communication, and often under-utilizes the available parallelism on each device-computation in deeper architectures is distributed more sequentially and, without sufficient data parallelism, often does not saturate all GPU cores. A popular strategy to reduce training memory requirements is "checkpointing". Activations for only a subset of layers are stored at a time, and the rest recovered by repeating forward computations BID13 BID4. This affords memory savings with the trade-off of additional computational cost-e.g., propose a strategy that requires memory proportional to the square-root of the number of layers, while requiring up to the computational cost of an additional forward pass. In a similar vein, BID3 considered network architectures with "reversible" or invertible layers. This allows re-computing intermediate input activations of reversible layers from their outputs during the backward pass. These methods likely represent the best possible solutions if the goal is restricted to computing exact gradients. But SGD is fundamentally a noisy process, and the exact gradients computed over a batch at each iteration are already an approximation-of gradients of the model over the entire training set BID16. Researchers have posited that further approximations are possible without degrading training ability, and used this to realize gains in efficiency. For distributed training, asynchronous methods BID15 BID2 delay synchronizing models across devices to mitigate communication latency. Despite each device now working with stale models, there is no major degradation in training performance. Other methods quantize gradients to two or three levels so as to reduce communication overhead, and again find that training remains robust to such approximation. Our work also adopts an approximation strategy to gradient computation, but targets the problem of memory usage on a each device. We approximate activations, rather than gradients, with lower-precision representations, and by doing so, we are able to achieve considerable reductions in a model's memory footprint during training. Note that since our method achieves a constant factor saving in memory for back-propagation across any group of layers, it can also be employed within checkpointing to further improve memory cost. It is worth differentiating our work from those that carry out all training computations at lowerprecision BID14; BID5. This strategy allows for a modest loss in precision: from 32-to 16-bit representations. In contrast, our approach allows for much greater reduction in precision. This is because we carry out the forward pass in full-precision, and approximate activations only after they have been used by subsequent layers. Our strategy limits accumulation of errors across layers, and we are able to replace 32-bit floats with 8-and even 4-bit fixed-point approximations, with little to no effect on training performance. 
Note that performing all computation at lower-precision also has a computational advantage: due to reduction in-device memory bandwidth usage (transferring data from global device memory to registers) in BID14, and due to the use of specialized hardware in BID5. While the goal of our work is different, our strategy can be easily combined with these ideas: compressing intermediate activations to a greater degree, while also using 16-instead of 32-bit precision for computation. A neural network is composition of linear and non-linear functions that map the input to the final desired output. These functions are often organized into "layers", where each layer consists of a single linear transformation-typically a convolution or a matrix multiply-and a sequence of nonlinearities. We use the "pre-activation" definition of a layer, where we group the linear operation with the non-linearities that immediately preceed it. Consider a typical network whose l th layer applies batch-normalization and ReLU activation to its input A l:i followed by a linear transform: DISPLAYFORM0 to yield the output activations A l:o that are fed into subsequent layers. Here, each activation is a tensor with two or four dimensions: the first indexing different training examples, the last corresponding to "channels", and others to spatial location. Mean(·) and Var(·) aggregate statistics over batch and spatial dimensions, to yield vectors µ l and σ 2 l with per-channel means and variances. Element-wise addition and multiplication (denoted by •) are carried out by "broadcasting" when the tensors are not of the same size. The final operation represents the linear transformation, with × denoting matrix multiplication. This linear transform can also correspond to a convolution. Note that - FORMULA0 are defined with respect to learnable parameters γ l, β l, and W l, where γ l, β l are both vectors of the same length as the number of channels in A l, and W l denotes a matrix (for fullyconnected layers) or elements of a convolution kernel. These parameters are learned iteratively using SGD, where at each iteration, they are updated based on gradients-∇γ l, ∇β l, and ∇W l -of some loss function with respect to these parameters, computed on a batch of training samples. To compute gradients with respect to all parameters for all layers in the network, the training algorithm first computes activations for all layers in sequence, ordered such that each layer in the sequence takes as input the output from a previous layer. The loss is computed with respect to activations of the final layer, and then the training algorithm goes through all layers again in reverse sequence, using the chain rule to back-propagate gradients of this loss. For the l th layer, given the gradients ∇A l:o of the loss with respect to the output, this involves computing gradients ∇γ l, ∇β l, and ∇W l with respect to the layer's learnable parameters, as well as gradients ∇A l:i with respect to its input for further propagation. These gradients are given by: DISPLAYFORM1 where Sum(·) and Mean(·) involve aggregation over all but the last dimension, and δ(A > 0) a tensor the same size as A that is one where the values in A are positive, and zero otherwise. When the goal is to just compute the final output of the network, the activations of an intermediate layer can be discarded during the forward pass as soon as we finish processing the subsequent layer or layers that use it as input. 
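As a concrete reading of the layer definition above, here is a minimal sketch (ours) of the forward pass of one pre-activation layer, using a fully-connected transform in place of a convolution to keep it short; the epsilon term in the normalization is an assumption added for numerical stability.

```python
import numpy as np

def preactivation_layer_forward(A_in, gamma, beta, W, eps=1e-5):
    """Forward pass of one 'pre-activation' layer (batch-norm -> scale/bias ->
    ReLU -> linear transform), keeping the intermediates named as in the text.

    A_in: (batch, channels) input activations A_{l:i}.
    """
    mu = A_in.mean(axis=0)
    var = A_in.var(axis=0)
    A1 = (A_in - mu) / np.sqrt(var + eps)   # normalized activations A_{l:1}
    A2 = gamma * A1 + beta                  # A_{l:2}: the tensor later approximated and stored
    A3 = np.maximum(A2, 0.0)                # ReLU, A_{l:3}
    A_out = A3 @ W                          # linear transform, A_{l:o}
    return A_out, (A2, var)                 # A2 (to be approximated) and sigma^2 are retained
```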
However, we need to store all these intermediate activations during training because they are needed to compute gradients during back-propagation: - FORMULA1 involve not just the values of the incoming gradient, but also the values of the activations themselves. Thus, training requires enough available memory to hold the activations of all layers in the network. We begin by observing we do not necessarily need to store all intermediate activations A l:1, A l:2, and A l:3 within a layer. For example, it is sufficient to store the activation values A l:2 right before the ReLU, along with the variance vector σ 2 l (which is typically much smaller than the activations themselves). Given A l:2, we can reconstruct the other activations A l:3 and A l:3 needed in- using element-wise operations, which typically have negligible computational cost compared to the linear transform itself. Some deep learning frameworks already use such "fused" layers to conserve memory, and we consider this to be our "baseline" for memory usage. However, storing one activation tensor at full-precision for every layer still requires a considerable amount of memory. We therefore propose retaining an approximate low-precision versionà l:2 of A l:2, that requires much less memory for storage, for use in- during back-propagation. As shown in Fig. 1, we use full-precision versions of all activations during the forward pass to compute A l:o from A l:i as per - FORMULA0, and use A l:2 to compute its approximationà l:2. The full precision approximations are discarded as soon they have been used-the intermediate activations A l:1, A l:2, A l:3 are discarded as soon as the approximationà l:2 and output A l:o have been computed, and A l:o is discarded after it has been used by a subsequent layer. Thus, only the approximate activationsà l:2 and (full-precision) variance vector σ 2 l are retained in memory for back-propagation. We use a simple, computationally inexpensive approach to approximate A l:2 via a K-bit fixed-point representation for some desired value of K. Since A l:1 is normalized to be zero-mean and unitvariance, A l:2 has mean β l and variance γ 2 l. We compute an integer tensorà * l:2 from A l:2 as: DISPLAYFORM0 where · indicates the "floor" operator, and Clip K (x) = max(0, min(2 K − 1, x)). The ing integers (between 0 and 2 K − 1) can be directly stored with K-bits. When needed during backpropagation, we recover a floating-point tensor holding the approximate activationsà l:2 as: DISPLAYFORM1 This simply has the effect of clipping A l:2 to the range β l ± 3γ l (approximately, the range may be slightly asymmetric around β l because of rounding), and quantizing values in 2 K fixed-size intervals (to the median of each interval) within that range. However, crucially, this approximation ensures that the sign of each value is preserved, i.e., δ(A l:2 > 0) = δ(à l:2 > 0). Since the forward computations happen in full-precision, there is no error introduced in any of the activations A l prior to approximation. To analyze the error introduced by our approach, we then consider the effect of usingà l:2 instead of A l:2 (and equivalently,à l:1 andà l:3 derived fromà l:2) to compute gradients in-. 
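The quantization equations above are garbled by extraction; the sketch below is one plausible reading of the prose description (clip A_{l:2} to roughly β_l ± 3γ_l, split that range into 2^K uniform intervals, store the integer codes, and recover interval medians). The exact rounding and clipping details are assumptions of this sketch.

```python
import numpy as np

def quantize_activations(A2, beta, gamma, K=4):
    """K-bit fixed-point approximation of A_{l:2}: only the integer codes are stored."""
    levels = 2 ** K
    scale = levels / (6.0 * gamma)                          # clipped range has width 6*gamma
    codes = np.floor((A2 - beta + 3.0 * gamma) * scale)
    return np.clip(codes, 0, levels - 1).astype(np.uint8)   # K-bit integer storage (K <= 8 here)

def dequantize_activations(codes, beta, gamma, K=4):
    """Recover the approximate activations: the median of each quantization interval."""
    levels = 2 ** K
    return (codes.astype(np.float64) + 0.5) * (6.0 * gamma / levels) + beta - 3.0 * gamma
```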
We begin by noting that for all values of A l:2 that fall within the range β l ± 3γ l (and are therefore not clipped), the worst-case approximation error in the activations themselves is bounded by half the width of the quantization intervals: DISPLAYFORM0 where Var(·) denotes per-channel variance (and the RHS is interpreted as applying to all channels). Hence, the approximation error is a fraction of the variance in the activations themselves, and is lower for higher values of K. It is easy to see that |A l:3 −à l:2 | ≤ |A l:2 −à l:3 | since A l:3 andà l:3 are derived from A l:2 andà l:3 by clipping negative values of both to 0, which only decreases the error. Further, since A l:2 is related to A l:1 by simply scaling, the error inà l:1 is also bounded as a fraction of its variance, which is one, i.e: DISPLAYFORM1 We next examine how these errors in the activations affect the accuracy of gradient computations in-. During the first back-propagation step in through the linear transform, the gradient ∇W to the learnable transform weights will be affected by the approximation error inà l:3. However, the gradient ∇A l:2 can be computed exactly (as a function of the incoming gradient to the layer ∇A l:o), because it does not depend on the activations. Back-propagation through the ReLU in is also not affected, because it depends only on the sign of the activations, which is preserved by our approximation. When back-propagating through the scale and bias in, only the gradient ∇γ to the scale depends on the activations, but gradients to the bias β l and to A l:1 can be computed exactly. And so, although our approximation introduces some error in the computations of ∇W and ∇γ, there is no error introduced in the gradient flowing towards the input of the layer, until it reaches the batch-normalization operation in. Here, we do incur an error, but note that this is only in one of the three terms of the expression for ∇A l:i -which accounts for back-propagating through the variance computation, and is the only term that depends on the activations. Hence, while our activation approximation does introduce some errors in the gradients for the learnable weights, we limit the accumulation of these errors across layers because a majority of the computations for backpropagation to the input of each layer are exact. This is illustrated in Fig. 1, with the use of green arrows to show computations that are exact, and red arrows for those affected by the approximation. Our full training algorithm applies our approximation strategy to every layer (defined by grouping linear transforms with preceding non-linear activations) during the forward and backward pass. Skip and residual connections are handled easily, since back-propagation through these connections involves simply copying to and adding gradients from both paths, and doesn't involve the activations themselves. (Although we do not consider this in our implementation, older residual connections that are added after batch-normalization but before the ReLU can also be handled, but would require saving activations both before and after addition-in the traditional case, well as our approach).Our method is predicated on the use of ReLU activations since its gradient depends only on the sign of the activations, and can be used for other such non-linearities such as "leaky"-ReLUs. 
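For reference, one reconstruction of the two bounds stated above (half the width of a quantization interval, expressed relative to the per-channel spread of the activations) is the following; it should be read as a plausible rendering of the garbled equations, not a verbatim restatement.

```latex
% Clipped range \beta_l \pm 3\gamma_l has width 6\gamma_l, split into 2^K intervals,
% with each value mapped to the interval median:
\left| \tilde{A}_{l:2} - A_{l:2} \right| \;\le\; \tfrac{1}{2}\cdot\frac{6\gamma_l}{2^K}
  \;=\; 3 \cdot 2^{-K} \sqrt{\operatorname{Var}(A_{l:2})},
\qquad
\left| \tilde{A}_{l:1} - A_{l:1} \right| \;\le\; 3 \cdot 2^{-K}.
```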
Other activations (like sigmoid) may incur additional errors-in particular, we do not approximate the activations of the final output layer in classifier networks that go through a Soft-Max. However, since this is typically at the final layer, and computing these activations is immediately followed by back-propagating through that layer, approximating these activations offers no savings in memory. Our approach also handles average pooling by simply folding it in with the linear transform. For max-pooling, exact back-propagation through the pooling operation would require storing the argmax indices (the number of bits required to store these would depend on the max-pool receptive field size). However, since max-pool layers are used less often in recent architectures in favor of learned downsampling (ResNet architectures for image classification use max-pooling only in one layer), we instead choose not to approximate layers with max-pooling for simplicity. Given a network with L layers, our memory usage depends on connectivity for these layers. Our approach requires storing the approximate activations for each layer, each occupying reduced memory rate at a fractional rate of α < 1. During the forward pass, we also need to store, at full-precision, those activations that are yet to be used by subsequent layers. This is one layer's activations for feedforward networks, and two layers' for standard residual architectures. More generally, we will need to store activations for upto W layers, where W is the "width" of the architecture-which we define as the maximum number of outstanding layer activations that remain to be used as process layers in sequence. During back-propagation, the same amount of space is required for storing gradients till they are used by previous layers. We also need space to re-create a layer's approximate activations as full-precision tensors from the low-bit stored representation, for use in computation. Thus, assuming that all activations of layers are the same size, our algorithm requires O(W +1+αL) memory, compared to the standard requirement of O(L). This leads to substantial savings for deep networks with large L (note α = 1 /4, 1 /8 when approximating 32-bit floats with K = 8, 4 bits). We developed a library that implements the proposed method for approximate memory-efficient training, given a network architecture specification which can include residual layers (i.e., W = 2).As illustrated in Fig. 1, the method allocates a pair of global buffers for the direct and residual paths that is common to all layers. At any point during the forward pass, these buffers hold the fullprecision activations that are needed for computation of subsequent layers. The same buffers are used to store gradients during the back-ward pass. Beyond these common buffers, the library only stores the low-precision approximate activations for each layer for use during the backward-pass. Further details of our implementation are provided in the appendix. We compare our approximate training approach, with 8-and 4-bit activations, to exact training with full-precision activations as a baseline. For a fair comparison, we again only store one set of activations like our method for a group of batch-normalization, ReLU, and linear (convolution) operations. This is achieved with our library by storing A l:2 without approximation (à l:2 = A l:2).CIFAR-10 and CIFAR-100. 
We begin with comparisons on 164-layer pre-activation residual networks BID8 on CIFAR-10 and CIFAR-100 BID12, using three-layer "bottleneck" residual units and parameter-free shortcuts for all residual connections. We train the network for 64k iterations with a batch size of 128, momentum of 0.9, and weight decay of 2 × 10 −4. Following BID8, the learning rate is set to 10 −2 for the first 400 iterations, then increased to 10 −1, and dropped by a factor of 10 at 32k and 48k iterations. We use standard data-augmentation with random translation and horizontal flips. We train these networks with our approach using K = 8 and K = 4 bit approximations, and measure degradation in accuracy with respect to the baseline, repeating training for all cases with four random seeds. We visualize the evolution of training and test set error in FIG0, and report statistics of the final test error in Table 1.
[FIG0 caption: We show the evolution of training and test error for ResNet-164 models trained on CIFAR-10 and CIFAR-100 (with four different random seeds for each case) and find that the performance of approximate training closely follows that of the exact baseline. (Right) We visualize errors in the computed gradients of learnable parameters (convolution kernels) for different layers, for two snapshots of a CIFAR-100 model at the start and end of training. We plot errors between the true gradients and those computed by our approximation, averaged over 100 batches. We compare to the errors from SGD itself: the variance between the (exact) gradients computed from different batches, and find this to be 1-2 orders of magnitude higher.]
Table 1: Accuracy Comparisons on CIFAR and ImageNet. On CIFAR-10 and CIFAR-100, we report test set error statistics with ResNet-164 over four models trained with different random seeds. For ImageNet, we report 10-crop Top-5 error on the validation set with ResNet-34 and ResNet-152.
                    CIFAR-10 (median / mean ± std)   CIFAR-100 (median / mean ± std)   ImageNet ResNet-34   ImageNet ResNet-152
Exact (baseline)    5.56% / 5.54% ± 0.14             23.59% / 23.58% ± 0.35            10.06%               7.20%
8-bit (α = 1/4)     5.61% / 5.63% ± 0.14             23.63% / 23.75% ± 0.39            10.60%               7.70%
4-bit (α = 1/8)     5.63% / 5.62% ± 0.07             23.66% / 23.71% ± 0.29            10.74%               7.72%
We find that both training and test errors when using our low-memory approximation strategy closely follow those of exact back-propagation, throughout the training process. Moreover, the final median test errors of models trained with even 4-bit approximations (i.e., corresponding to α = 1/8) are higher only by 0.07% compared to those trained with exact computations. To examine the reason behind this robustness, FIG0 also visualizes the error in the final parameter gradients used to update the model. Specifically, we take two models for CIFAR-100, at the start and end of training, and then compute gradients for 100 batches with respect to the convolution kernels of all layers, both exactly and using our approximate strategy. We plot the average squared error between these gradients. We compare this approximation error to the "noise" inherent in SGD, due to the fact that each iteration considers a random batch of training examples. This is measured by the average variance between the (exact) gradients computed in the different batches. We see that our approximation error is between one and two orders of magnitude below the SGD noise for all layers, both at the start and end of training. So while we do incur an error due to approximation, this is added to the much higher error that already exists due to SGD even in exact training, and hence further degradation is limited. ImageNet. We also report results on training models for ImageNet.
Here, we consider two residual architectures: 34-layer (with two-layer units without bottlenecks) and 152-layer (with three-layer bottleneck units)-again using pre-activation parameter-free shortcuts. We train with a batch size of 256 for a total of 640k iterations with a momentum of 0.9, weight decay of 10 −4, and standard scale, color, flip, and translation augmentation. The initial learning rate is set to 10 −1 with drops by factor of 10 every 160k iterations. Table. 1 reports top-5 validation accuracy (using 10 crops at a scale of 256) for models trained using exact computation, and our approach with K = 8 and K = 4 bit approximations. Again, the drop in accuracy is relatively small: at 0.7% and 0.5% for the 34-and 152-layer models respectively, for a memory savings factor of α = 1 /8.Memory and Computational Efficiency. For the CIFAR experiments, the full 128-size batch fit on a single 1080Ti GPU for both the baseline and our method. For ImageNet with ResNet-34, our method could fit the 256-sized batch, but not the baseline-for which we used two passes with 128-size batches and averaged the gradients for each iteration. For ResNet-152, we parallelized computation across two GPUs, and again, our method could fit half a batch (size 128) on each GPU, the baseline required two passes with 64−sized batches per-GPU per-pass. In the CIFAR experiments and ImageNet with ResNet-34, the batches were large enough to saturate all GPU cores for both our method and the baseline. In this case, the running times per iteration were almost identical-with a very slight increase in our case due to the cost computing approximations: exact vs approximate (4-bit) training took 0.66 seconds vs 0.72 seconds for CIFAR-100, and 1.68 seconds vs 1.71 seconds for ImageNet ResNet-34. But for ResNet-152 on ImageNet, the 64-sized batch for exact training underutilized the available parallelism, and the time per-iteration (across two GPUs) was 2s vs 1.7s for exact vs approximate (4-bit) training. However, these represent comparisons restricted to have the same total batch size (needed to evaluate relative accuracy). For a more precise evaluation of memory usage, and the ing computational efficiency from parallelism, we considered residual networks for CIFAR-10 of various depths up to 1001 layers-and additionally for the deepest network, a version with four times as many feature channels in each layer. For each network, we measured the largest batch size that could be fit in memory with our method (with K = 4) vs the baseline, i.e., b such that a batch of b + 1 caused an out-of-memory error on a 1080Ti GPU. We also measured the corresponding wall-clock training time per sample, computed as the training time per-iteration divided by this batch size. These are summarized in TAB0. We find that in all cases, our method allows significantly larger batches to be fit in memory. Moreover, for larger networks, our method also yields a notable computational advantage since the larger batches permit full exploitation of available cores on the GPU. We introduced a new algorithm for approximate gradient computation in neural network training, that significantly reduces the amount of required on-device memory. Our experiments show that this comes at a minimal cost in terms of both quality of the learned models, and computational expense. 
With a lower memory footprint, our method allows training with larger batches in each iterationimproving efficiency and stability-and exploration of deeper architectures that were previously impractical to train. We will release our reference implementation on publication. Our method shows that SGD is reasonably robust to working with approximate activations. While we used an extremely simple approximation strategy-uniform quantization-in this work, we are interested in exploring whether more sophisticated techniques-e.g., based on random projections or vector quantization-can provide better trade-offs, especially if informed by statistics of gradients and errors from prior iterations. We are also interested in investigating whether our approach to partial approximation can be utilized in other settings, especially to reduce inter-device communication for distributed training with data or model parallelism. We implemented our approximate training algorithm using the TensorFlow library BID0. However, we only used TensorFlow's functions for individual forward and gradient computations, but not on its automatic differentiation functionality. Instead, our library allows specifying general residual network architectures, and based on this specification, creates a set of TensorFlow ops for doing forward and backward passes through each layer. We also used custom ops implemented in CUDA to handle the conversion to and from low-precision representations. Each layer's forward and backward ops are called in separate sess.run calls, and all data that needs to persist between calls-including still to be used full precision activations and gradients in the forward and backward pass, and approximate intermediate activations-are stored explicitly as Tensorflow variables. For the forward and backward passes through the network, we call these ops in sequence followed by ops to update the model parameters based on the computed gradients. We chose not to allocate and then free variables for the full-precision layer activations and gradients, since this caused memory fragmentation with Tensorflow's memory management routines. Instead, as described in Sec. 4, we used two common variables as buffers for all layers to hold activations (in the forward pass) and gradients (in the backward pass) for the direct and residual paths in the network respectively. We reuse these buffers by overwriting old activations and gradients with new ones. The size of these buffers is set based on the largest layer, and we used slices of these buffers for smaller layers.
An algorithm to reduce the amount of memory required for training deep networks, based on an approximation strategy.
703
scitldr
Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning. The problem is typically addressed using streaming algorithms which can process very large data using limited storage. Today's streaming algorithms, however, cannot exploit patterns in their input to improve performance. We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates. The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory. We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts. We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains. Classical algorithms provide formal guarantees over their performance, but often fail to leverage useful patterns in their input data to improve their output. On the other hand, deep learning models are highly successful at capturing and utilizing complex data patterns, but often lack formal error bounds. The last few years have witnessed a growing effort to bridge this gap and introduce algorithms that can adapt to data properties while delivering worst case guarantees. Deep learning modules have been integrated into the design of Bloom filters (; BID18, caching algorithms , graph optimization BID12, similarity search BID22 BID29 ) and compressive sensing BID3. This paper makes a significant step toward this vision by introducing frequency estimation streaming algorithms that automatically learn to leverage the properties of the input data. Estimating the frequencies of elements in a data stream is one of the most fundamental subroutines in data analysis. It has applications in many areas of machine learning, including feature selection BID0, ranking , semi-supervised learning BID27 and natural language processing . It has been also used for network measurements (; BID30 BID28 and security BID23 . Frequency estimation algorithms have been implemented in popular data processing libraries, such as Algebird at Twitter BID4 . They can answer practical questions like: what are the most searched words on the Internet? or how much traffic is sent between any two machines in a network?The frequency estimation problem is formalized as follows: given a sequence S of elements from some universe U, for any element i ∈ U, estimate f i, the number of times i occurs in S. If one could store all arrivals from the stream S, one could sort the elements and compute their frequencies. However, in big data applications, the stream is too large (and may be infinite) and cannot be stored. This challenge has motivated the development of streaming algorithms, which read the elements of S in a single pass and compute a good estimate of the frequencies using a limited amount of space.1 Over the last two decades, many such streaming algorithms have been developed, including Count-Sketch BID7, Count-Min BID11 ) and multistage filters . The performance guarantees of these algorithms are wellunderstood, with upper and lower bounds matching up to O(·) factors .However, such streaming algorithms typically assume generic data and do not leverage useful patterns or properties of their input. For example, in text data, the word frequency is known to be inversely correlated with the length of the word. Analogously, in network data, certain applications tend to generate more traffic than others. 
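For concreteness, here is a minimal sketch (ours, not an implementation from any of the cited libraries) of the hashing-based scheme described above, in the style of Count-Min; the class and parameter names are arbitrary.

```python
import random

class CountMin:
    """Hash each item into B buckets with d independent hash functions,
    increment the counters, and estimate a frequency as the minimum counter
    value, so that (with non-negative updates) the estimate never
    underestimates the true frequency."""

    def __init__(self, B, d, seed=0):
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(d)]
        self.tables = [[0] * B for _ in range(d)]
        self.B = B

    def _bucket(self, item, salt):
        return hash((salt, item)) % self.B

    def update(self, item, delta=1):            # deletions: call with delta=-1
        for salt, table in zip(self.salts, self.tables):
            table[self._bucket(item, salt)] += delta

    def estimate(self, item):
        return min(table[self._bucket(item, salt)]
                   for salt, table in zip(self.salts, self.tables))
```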
If such properties can be harnessed, one could design frequency estimation algorithms that are much more efficient than the existing ones. Yet, it is important to do so in a general framework that can harness various useful properties, instead of using handcrafted methods specific to a particular pattern or structure (e.g., word length, application type).

In this paper, we introduce learning-based frequency estimation streaming algorithms. Our algorithms are equipped with a learning model that enables them to exploit data properties without being specific to a particular pattern or knowing the useful property a priori. We further provide theoretical analysis of the guarantees associated with such learning-based algorithms. We focus on the important class of "hashing-based" algorithms, which includes some of the most used algorithms such as Count-Min, Count-Median and Count-Sketch. Informally, these algorithms hash data items into B buckets, count the number of items hashed into each bucket, and use the bucket value as an estimate of item frequency. The process can be repeated using multiple hash functions to improve accuracy. Hashing-based algorithms have several useful properties. In particular, they can handle item deletions, which are implemented by decrementing the respective counters. Furthermore, some of them (notably Count-Min) never underestimate the true frequencies, i.e., f̂_i ≥ f_i always holds. However, hashing algorithms lead to estimation errors due to collisions: when two elements are mapped to the same bucket, they affect each other's estimates. Although collisions are unavoidable given the space constraints, the overall error significantly depends on the pattern of collisions. For example, collisions between high-frequency elements ("heavy hitters") result in a large estimation error, and ideally should be minimized. The existing algorithms, however, use random hash functions, which means that collisions are controlled only probabilistically. Our idea is to use a small training subset of S to learn the heavy hitters. We can then assign heavy hitters their own buckets to avoid the more costly collisions. It is important to emphasize that we are learning the properties that identify heavy hitters as opposed to the identities of the heavy hitters themselves. For example, in the word frequency case, shorter words tend to be more popular. The training subset itself may miss many of the popular words, but whichever words are popular in it are likely to be short. Our objective is not to learn the identities of high-frequency words from that subset. Rather, we hope that a learning model trained on it learns that short words are more frequent, so that it can identify popular words even if they did not appear in the training data.

Our main contributions are as follows:
• We introduce learning-based frequency estimation streaming algorithms, which learn the properties of heavy hitters in their input and exploit this information to reduce errors.
• We provide performance guarantees showing that our algorithms can deliver a logarithmic factor improvement in the error bound over their non-learning counterparts. Furthermore, we show that our learning-based instantiation of Count-Min, a widely used algorithm, is asymptotically optimal among all instantiations of that algorithm. See Table 4.1 in section 4.1 for the details.
• We evaluate our learning-based algorithms using two real-world datasets: traffic load on an Internet backbone link and search query popularity.
In comparison to their non-learning counterparts, our algorithms yield performance gains that range from 18% to 71%. Frequency estimation in data streams. Frequency estimation, and the closely related problem of finding frequent elements in a data stream, are some of the most fundamental and well-studied problems in streaming algorithms, see BID9 for an overview. Hashingbased algorithms such as Count-Sketch BID7, Count-Min BID11 ) and multi-stage filters are widely used solutions for these problems. These algorithms also have close connections to sparse recovery and compressed sens-ing BID6 ), where the hashing output can be considered as a compressed representation of the input data .Several "non-hashing" algorithms for frequency estimation have been also proposed BID17 BID13; BID15. These algorithms do not possess many of the properties of hashing-based methods listed in the introduction (such as the ability to handle deletions), but they often have better accuracy/space tradeoffs. For a fair comparison, our evaluation focuses only on hashing algorithms. However, our approach for learning heavy hitters should be useful for non-hashing algorithms as well. Some papers have proposed or analyzed frequency estimation algorithms customized to data that follows Zipf Law BID7 BID10 BID15 BID16 BID21; the last algorithm is somewhat similar to the "lookup table" implementation of the heavy hitter oracle that we use as a baseline in our experiments. Those algorithms need to know the data distribution a priori, and apply only to one distribution. In contrast, our learning-based approach applies to any data property or distribution, and does not need to know that property or distribution a priori. Learning-based algorithms. Recently, researchers have begun exploring the idea of integrating machine learning models into algorithm design. In particular, researchers have proposed improving compressed sensing algorithms, either by using neural networks to improve sparse recovery algorithms BID20 BID3, or by designing linear measurements that are optimized for a particular class of vectors BID2 BID19, or both. The latter methods can be viewed as solving a problem similar to ours, as our goal is to design "measurements" of the frequency vector (f 1, f 2 . . ., f |U |) tailored to a particular class of vectors. However, the aforementioned methods need to explicitly represent a matrix of size B × |U |, where B is the number of buckets. Hence, they are unsuitable for streaming algorithms which, by definition, have space limitations much smaller than the input size. Another class of problems that benefited from machine learning is distance estimation, i.e., compression of high-dimensional vectors into compact representations from which one can estimate distances between the original vectors. Early solutions to this problem, such as Locality-Sensitive Hashing, have been designed for worst case vectors. Over the last decade, numerous methods for learning such representations have been developed BID22 BID29; BID28. Although the objective of those papers is similar to ours, their techniques are not usable in our applications, as they involve a different set of tools and solve different problems. More broadly, there have been several recent papers that leverage machine learning to design more efficient algorithms. The authors of BID12 show how to use reinforcement learning and graph embedding to design algorithms for graph optimization (e.g., TSP). 
Other learning-augmented combinatorial optimization problems are studied in (BID1). More recently, (BID18) have used machine learning to improve indexing data structures, including Bloom filters that (probabilistically) answer queries of the form "is a given element in the data set?" As in those papers, our algorithms use neural networks to learn certain properties of the input. However, we differ from those papers both in our design and theoretical analysis. Our algorithms are designed to reduce collisions between heavy items, as such collisions greatly increase errors. In contrast, in existence indices, all collisions count equally. This also leads to our theoretical analysis being very different from that in BID18.

3. PRELIMINARIES

We will use e_i := |f̂_i − f_i| to denote the estimation error for f_i. To measure the overall estimation error between the frequencies F = {f_1, f_2, · · ·, f_|U|} and their estimates F̂ = {f̂_1, f̂_2, · · ·, f̂_|U|}, we will use the expected error E_{i∼D}[e_i], where D models the distribution over the queries to the data structure. Similar to past work BID21, we assume the query distribution D is the same as the distribution of the input stream, i.e., for any j we have Pr_{i∼D}[i = j] = f_j / N, where N is the sum of all frequencies. This leads to the estimation error of F̂ with respect to F:

Err(F, F̂) := E_{i∼D}[e_i] = (1/N) · Σ_{j∈U} f_j · |f̂_j − f_j|.    (3.1)

We note that the theoretical guarantees of frequency estimation algorithms are typically phrased in the "(ε, δ)-form", e.g., Pr[|f̂_i − f_i| > εN] < δ for every i (see e.g., BID11). However, this formulation involves two objectives (ε and δ). We believe that the (single objective) expected error in Equation 3.1 is more natural from the machine learning perspective.

In this section, we recap three variants of hashing-based algorithms for frequency estimation.

Single Hash Function. The simplest approach maintains a single uniformly random hash function h : U → [B] and an array C of size B such that, at the end of the stream, C[b] = Σ_{j : h(j)=b} f_j. For each i ∈ U, the frequency estimate f̂_i is equal to C[h(i)]. Note that it is always the case that f̂_i ≥ f_i.

Count-Min. We have k distinct hash functions h_ℓ : U → [B] and an array C of size k × B. The algorithm maintains C, such that at the end of the stream we have C[ℓ, b] = Σ_{j : h_ℓ(j)=b} f_j. For each i ∈ U, the frequency estimate f̂_i is equal to min_{ℓ≤k} C[ℓ, h_ℓ(i)], and always satisfies f̂_i ≥ f_i.

Count-Sketch. Similarly to Count-Min, we have k distinct hash functions h_ℓ : U → [B] and an array C of size k × B. Additionally, in Count-Sketch, we have k sign functions g_ℓ : U → {−1, 1}, and the algorithm maintains C such that C[ℓ, b] = Σ_{j : h_ℓ(j)=b} f_j · g_ℓ(j). For each i ∈ U, the frequency estimate f̂_i is equal to the median of {g_ℓ(i) · C[ℓ, h_ℓ(i)]}_{ℓ≤k}. Note that unlike the previous two methods, here we may have f̂_i < f_i.

In our theoretical analysis we assume that the item frequencies follow the Zipf Law. That is, if we re-order the items so that their frequencies appear in a sorted order f_{i_1} ≥ f_{i_2} ≥ ... ≥ f_{i_n}, then f_{i_j} ∝ 1/j. To simplify the notation we assume that f_i = 1/i.

We aim to develop frequency estimation algorithms that exploit data properties for better performance. To do so, we learn an oracle that identifies heavy hitters, and use the oracle to assign each heavy hitter its unique bucket to avoid collisions. Other items are simply hashed using any classic frequency estimation algorithm (e.g., Count-Min, or Count-Sketch), as shown in the block diagram in Figure 4.1. This design has two useful properties: First, it allows us to augment a classic frequency estimation algorithm with learning capabilities, producing a learning-based counterpart that inherits the original guarantees of the classic algorithm. For example, if the classic algorithm is Count-Min, the resulting learning-based algorithm never underestimates the frequencies. Second, it provably reduces the estimation errors, and for the case of Count-Min it is (asymptotically) optimal.
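For concreteness, the following is a small, self-contained Python sketch of the two classic sketches recapped above, the expected-error metric of Equation 3.1, and the learned design just described (whose pseudo code appears below as Algorithm 1). This is an illustration rather than the implementation evaluated later: the class names are ours, the oracle is any caller-supplied predicate, and Python's built-in hash stands in for properly seeded pairwise-independent hash functions.

```python
import random
import statistics


class CountMin:
    """Count-Min: k hash rows of B counters; estimate = min over rows (never underestimates)."""
    def __init__(self, k, B, seed=0):
        rng = random.Random(seed)
        self.k, self.B = k, B
        self.salts = [rng.randrange(1 << 30) for _ in range(k)]
        self.C = [[0] * B for _ in range(k)]

    def _h(self, row, x):
        return hash((self.salts[row], x)) % self.B

    def update(self, x, delta=1):
        for r in range(self.k):
            self.C[r][self._h(r, x)] += delta

    def estimate(self, x):
        return min(self.C[r][self._h(r, x)] for r in range(self.k))


class CountSketch(CountMin):
    """Count-Sketch: adds random signs g_l; estimate = median of signed counters (may underestimate)."""
    def _g(self, row, x):
        return 1 if hash((x, self.salts[row], "sign")) & 1 else -1

    def update(self, x, delta=1):
        for r in range(self.k):
            self.C[r][self._h(r, x)] += self._g(r, x) * delta

    def estimate(self, x):
        return statistics.median(self._g(r, x) * self.C[r][self._h(r, x)] for r in range(self.k))


class LearnedSketch:
    """Learned variant (the logic of Algorithm 1): items flagged by the oracle get their own
    exact counters (at most B_r of them); all remaining items go to a conventional sketch."""
    def __init__(self, oracle, B, B_r, k=2, base=CountMin):
        self.oracle, self.B_r = oracle, B_r
        self.unique = {}                      # exact counters for predicted heavy hitters
        self.sketch = base(k, B - B_r)        # SketchAlg with the remaining B - B_r buckets

    def update(self, x, delta=1):
        # cap at B_r unique buckets, mirroring the B_r buckets reserved for heavy items
        if self.oracle(x) and (x in self.unique or len(self.unique) < self.B_r):
            self.unique[x] = self.unique.get(x, 0) + delta
        else:
            self.sketch.update(x, delta)

    def estimate(self, x):
        return self.unique[x] if x in self.unique else self.sketch.estimate(x)


def expected_error(freqs, estimator):
    """Expected error of Equation 3.1: (1/N) * sum_j f_j * |f^_j - f_j|."""
    N = sum(freqs.values())
    return sum(f * abs(estimator.estimate(x) - f) for x, f in freqs.items()) / N
```

For example, on word-frequency data one could pass oracle=lambda w: len(w) <= 4 to mimic the short-words-are-frequent property; in the experiments below, the oracle is instead a trained neural network with a cutoff threshold chosen on validation data.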
Algorithm 1 provides pseudo code for our design. The design assumes an oracle HH(i) that attempts to determine whether an item i is a "heavy hitter" or not. All items classified as heavy hitters are assigned to one of the B_r unique buckets reserved for heavy items. All other items are fed to the remaining B − B_r buckets using a conventional frequency estimation algorithm SketchAlg (e.g., Count-Min or Count-Sketch).

Algorithm 1 (update procedure):
  for each stream element i do
    if HH(i) predicts that i is a heavy hitter then
      if i is already stored in a unique bucket then
        increment the count of i
      else
        create a new unique bucket for i and initialize the count to 1
      end if
    else
      feed i to SketchAlg with B − B_r buckets
    end if
  end for
  end procedure

The estimation procedure is analogous. To compute f̂_i, the algorithm first checks whether i is stored in a unique bucket, and if so, reports its count. Otherwise, it queries the SketchAlg procedure. Note that if the element is stored in a unique bucket, its reported count is exact, i.e., f̂_i = f_i.

The oracle is constructed using machine learning and trained with a small subset of S. Note that the oracle learns the properties that identify heavy hitters as opposed to the identities of the heavy hitters themselves. For example, in the case of word frequency, the oracle would learn that shorter words are more frequent, so that it can identify popular words even if they did not appear in the training subset.

Our algorithms combine simplicity with strong error bounds. Below, we summarize our theoretical results, and leave all theorems, lemmas, and proofs to the appendix. In particular, Table 4.1 lists the results proven in this paper, where each row refers to a specific streaming algorithm, its corresponding error bound, and the theorem/lemma that proves the bound. First, we show (Theorem 9.11 and Theorem 9.14) that if the heavy hitter oracle is accurate, then the error of the learned variant of Count-Min is up to a logarithmic factor smaller than that of its non-learning counterpart. The improvement is maximized when B is of the same order as n (a common scenario). Furthermore, we prove that this continues to hold even if the learned oracle makes prediction errors with probability δ, as long as δ = O(1/ ln n) (Lemma 9.15). Second, we show that, asymptotically, our learned Count-Min algorithm cannot be improved any further by designing a better hashing scheme. Specifically, for the case of Learned Count-Min with a perfect oracle, our design achieves the same asymptotic error as the "Ideal Count-Min", which optimizes its hash function for the given input (Theorem 10.4). Finally, we note that the learning-augmented algorithm inherits any (ε, δ)-guarantees of the original version. Specifically, its error is not larger than that of SketchAlg with space B − B_r, for any input.

Table 4.1: Our performance bounds for different algorithms on streams with frequencies obeying Zipf Law. k is a constant (≥ 2) that refers to the number of hash functions, B is the number of buckets, and n is the number of distinct elements. The space complexity of all algorithms is the same, Θ(B). See section 9.4 for non-asymptotic versions of some of the above bounds.

Baselines. We compare our learning-based algorithms with their non-learning counterparts. Specifically, we augment Count-Min with a learned oracle using Algorithm 1, and call the learning-augmented algorithm "Learned Count-Min". We then compare Learned Count-Min with traditional Count-Min.
We also compare it with "Learned Count-Min with Ideal Oracle", where the neural-network oracle is replaced with an ideal oracle that knows the identities of the heavy hitters in the test data, and with a "Table Lookup" variant in which the heavy hitters observed in the training data are simply memorized in a lookup table; the latter comparison allows us to show the ability of Learned Count-Min to generalize and detect heavy items unseen in the training set. We repeat the evaluation where we replace Count-Min (CM) with Count-Sketch (CS) and the corresponding variants. We use validation data to select the best k for all algorithms.

Training a Heavy Hitter Oracle. We construct the heavy hitter oracle by training a neural network to predict the heaviness of an item. Note that the prediction of the network is not the final estimation. It is used in Algorithm 1 to decide whether to assign an item to a unique bucket. We train the network to predict the item counts (or the log of the counts) and minimize the squared loss of the prediction. Empirically, we found that when the counts of heavy items are a few orders of magnitude larger than the average counts (as is the case for the Internet traffic data set), predicting the log of the counts leads to more stable training and better results. Once the model is trained, we select the optimal cutoff threshold using validation data, and use the model as the oracle described in Algorithm 1.

For our first experiment, the goal is to estimate the number of packets for each network flow. A flow is a sequence of packets between two machines on the Internet. It is identified by the IP addresses of its source and destination and the application ports. Estimating the size of each flow i, i.e., the number of its packets f_i, is a basic task in network management BID24.

Model: The patterns of the Internet traffic are very dynamic, i.e., the flows with heavy traffic change frequently from one minute to the next. However, we hypothesize that the space of IP addresses should be smooth in terms of traffic load. For example, data centers at large companies and university campuses with many students tend to generate heavy traffic. Thus, though the individual flows from these sites change frequently, we could still discover regions of IP addresses with heavy traffic through a learning approach. We trained a neural network to predict the log of the packet counts for each flow. The model takes as input the IP addresses and ports in each packet. We use two RNNs to encode the source and destination IP addresses separately. The RNN takes one bit of the IP address at each step, starting from the most significant bit. We use the final states of the RNN as the feature vector for an IP address. The reason to use an RNN is that the patterns in the bits are hierarchical, i.e., the more significant bits govern larger regions in the IP space. Additionally, we use two-layer fully-connected networks to encode the source and destination ports. We then concatenate the encoded IP vectors, encoded port vectors, and the protocol type as the final features to predict the packet counts. The inference takes 2.8 microseconds per item on a single GPU without any optimizations.

Results: We plot the results for two representative test minutes (the 20th and 50th) in FIG3.2.
All plots in the figure refer to the estimation error (Equation 3.1) as a function of the used space. The space includes space for storing the buckets and the model. Since we use the same model for all test minutes, the model space is amortized over the 50-minute testing period. Second, the figure also shows that our neural-network oracle performs better than memorizing the heavy hitters in a lookup table. This is likely due to the dynamic nature of Internet traffic -i.e., the heavy flows in the training set are significantly different from those in the test data. Hence, memorization does not work well. On the other hand, our model is able to extract structures in the input that generalize to unseen test data. Third, the figure shows that our model's performance stays roughly the same from the 20th to the 50th minute (FIG3 .2b and FIG3 .2d), showing that it learns properties of the heavy items that generalize over time. Lastly, although we achieve significant improvement over Count-Min and Count-Sketch, our scheme can potentially achieve even better with an ideal oracle, as shown by the dashed green line in FIG3.2. This indicates potential gains from further optimizing the neural network model. For our second experiment, the goal is to estimate the number of times a search query appears. We use the AOL query log dataset, which consists of 21 million search queries collected from 650 thousand users over 90 days. The users are anonymized in the dataset. There are 3.8 million unique queries. Each query is a search phrase with multiple words (e.g., "periodic table element poster"). We use the first 5 days for training, the following day for validation, and estimate the number of times different search queries appear in subsequent days. The distribution of search query frequency follows the Zipfian law, as shown in FIG3 Model: Unlike traffic data, popular search queries tend to appear more consistently across multiple days. For example, "google" is the most popular search phrase in most of the days in the dataset. Simply storing the most popular words can easily construct a reasonable heavy hitter predictor. However, beyond remembering the popular words, other factors also contribute to the popularity of a search phrase that we can learn. For example, popular search phrases appearing in slightly different forms may be related to similar topics. Though not included in the AOL dataset, in general, metadata of a search query (e.g., the location of the search) can provide useful context of its popularity. To construct the heavy hitter oracle, we trained a neural network to predict the number of times a search phrase appears. To process the search phrase, we train an RNN with LSTM cells that takes characters of a search phrase as input. The final states encoded by the RNN are fed to a fully-connected layer to predict the query frequency. Our character vocabulary includes lower-case English alphabets, numbers, punctuation marks, and a token for unknown characters. We map the character IDs to embedding vectors before feeding them to the RNN 6. We choose RNN due to its effectiveness in processing sequence data BID25; ). We plot the estimation error vs. space for two representative test days (the 50th and 80th day) in FIG3.4. As before, the space includes both the bucket space and the space used by the model. The model space is amortized over the test days since the same model is used for all days. Similarly, our learned sketches outperforms their conventional counterparts. 
For Learned CountMin, compared to Count-Min, it reduces the loss by 18% at 0.5 MB and 52% at 1.0 MB FIG3.4a). For Learned Count-Sketch, compared to Count-Sketch, it reduces the loss by 24% at 0.5 MB and 71% at 1.0 MB FIG3. Further, our algorithm performs similarly for the 50th and the 80th day (FIG3 .4b and FIG3 .4d), showing that the properties it learns generalize over a long period. The figures also show an interesting difference from the Internet traffic data: memorizing the heavy hitters in a lookup table is quite effective in the low space region. This is likely because the search queries are less dynamic compared to Internet traffic (i.e., top queries in the training set are also popular on later days). However, as the algorithm is allowed more space, memorization becomes ineffective. We analyze the accuracy of the neural network heavy hitter models to better understand the on the two datasets. Specifically, we use the models to predict whether an item is a heavy hitter (top 1% in counts) or not, and plot the ROC curves in FIG3.5. The figures show that the model for the Internet traffic data has learned to predict heavy items more effectively, with an AUC score of 0.9. As for the model for search query data, the AUC score is 0.8. This also explains why we see larger improvements over non-learning algorithms in FIG3.2. In this section, we visualize the embedding spaces learned by our heavy hitter models to shed light on the properties or structures the models learned. Specifically, we take the neural network activations before the final fully-connected layer, and visualize them in a 2-dimensional space using t-SNE BID14. To illustrate the differences between heavy hitters (top 1% in counts) and the rest ("light" items), we randomly sample an equal amount of examples from both classes. We visualize the embedding space for both the Internet traffic and search query datasets. We show the embedding space learned by the model on the Internet traffic data in Figure 6.1. Each point in the scatter plot represents one Internet traffic flow. By coloring each flow with its number of packets in Figure 6.1a, we see that the model separate flows with more packets (green and yellow clusters) from flows with fewer packets (blue clusters). To understand what structure the model learns to separate these flows, we color each flow with its destination IP address in Figure 6.1b. We found that clusters with more packets are often formed by flows sharing similar destination address prefixes. Interestingly, the model learns to group flows with similar IP prefixes closer in the embedding space. For example, the dark blue cluster at the upper left of Figure 6.1b shares a destination IP address prefix "1.96.*.*". Learning this "structure" from the Internet traffic data allows the model to generalize to packets unseen in the training set. We show the embedding space learned by the model on the search query data in Figure 6.2. Each point in the scatter plot represents one search query. Similarly, the model learns to separate frequent search queries from the rest in Figure 6.2a. By coloring the queries with the number of characters in Figure 6.2b, we have multiple interesting findings. First, queries with similar length are closer in the embedding space, and the y-axis forms the dimension representing query length. Second, if we simply use the query length to predict heavy hitters, many light queries will be misclassified. The model must have learned other structures to separate heavy hitters in Figure 6.2a. 
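For reference, the visualisation procedure used in this section can be sketched in a few lines of Python (scikit-learn and matplotlib); the array names, the per-class sample size and the colouring by log count are our own choices rather than details taken from the text:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE


def plot_embedding_space(activations, counts, top_frac=0.01, n_per_class=2000, seed=0):
    """activations: (n_items, d) features from the layer before the final fully-connected layer;
    counts: (n_items,) true item frequencies. Heavy = top `top_frac` of items by count."""
    rng = np.random.default_rng(seed)
    cutoff = np.quantile(counts, 1.0 - top_frac)
    heavy = np.flatnonzero(counts >= cutoff)
    light = np.flatnonzero(counts < cutoff)
    n = min(n_per_class, len(heavy), len(light))        # equal samples from both classes
    idx = np.concatenate([rng.choice(heavy, n, replace=False),
                          rng.choice(light, n, replace=False)])
    xy = TSNE(n_components=2, init="pca", random_state=seed).fit_transform(activations[idx])
    sc = plt.scatter(xy[:, 0], xy[:, 1], c=np.log1p(counts[idx]), s=4, cmap="viridis")
    plt.colorbar(sc, label="log(1 + count)")
    plt.show()
```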
We have presented a new approach for designing frequency estimation streaming algorithms by augmenting them with a learning model that exploits data properties. We have demonstrated the benefits of our design both analytically and empirically. We envision that our work will motivate a deeper integration of learning in algorithm design, leading to more efficient algorithms. In this section, we analyze the performance of three different approaches, single (uniformly random) hash function, Count-Min sketch, and Learned Count-Min sketch when the frequency of items is from Zipfian distribution. For simplicity, we assume that the number of distinct elements n is equal to the size of the universe |U |, and f i = 1/i. We use [n] to denote the set {1 . . . n}. We also drop the normalization factor 1/N in the definition of estimation error. The following observation is useful throughout this section (in particular, in the section on nonasymptotic analysis). Observation 9.1. For sufficiently large values of n (i.e., n > 250), DISPLAYFORM0 DISPLAYFORM1 Moreover, since each bucket maintains the frequency of items that are mapped to it under h, the space complexity of this approach is proportional to the number of buckets which is Θ(B). Here, we provide an upper bound and lower bound for the expected estimation error of CountMin sketch with k hash functions and B buckets per row. In the rest of this section, for each j ∈ [n], ≤ k, we use e j, and e j respectively to denote the estimation error of f j by h and Count-Min sketch. Recall that the expected error of Count-Min sketch is defined as follows: DISPLAYFORM0 Our high-level approach is to partition the interval [0, B ln n] into m + 1 smaller intervals by a sequence of thresholds Θ(ln 1+γ ( n B)) = r 0 ≤ · · · ≤ r m = B ln n where γ is a parameter to be determined later. Formally, we define the sequence of r i s to satisfy the following property: DISPLAYFORM1 Proof: By (9.3) and assuming ln r i+1 ≥ ln(DISPLAYFORM2 1+γ > 2 ln r i for sufficiently large values of r i 7 assuming γ ≤ 3.Note that as long as ln r i+1 ≥ ln( Then, to compute (9.2), we rewrite E[e j] using the thresholds r 0, · · ·, r m as follows: DISPLAYFORM3. Proof: First we prove the following useful observation. DISPLAYFORM4 Thus, by Markov's inequality, for each item j and hash function h, DISPLAYFORM5 Now, for each h in Count-Min sketch, we bound the value of Pr(e j, ≥ t B) where t ∈ [r i, r i+1): DISPLAYFORM6 ) by (9.5) and Corollary 9.4 (9.6) Hence, for k ≥ 2, DISPLAYFORM7 by (9.5) and (9.6) Next, for each item j, we bound the contribution of each interval (DISPLAYFORM8).Proof: DISPLAYFORM9 Similarly, r i+1 B r i B m−1 q=i+1 B q dx is at most: DISPLAYFORM10 Now, we complete the error analysis of (9.4): DISPLAYFORM11 Note that (9.10) requires γ(k − 1) − 2 ≥ 1 which is satisfied by setting γ = 3/(k − 1) and k ≥ 2. Thus, for each item j, DISPLAYFORM12 Lemma 9.8. The expected error of Count-Min sketch of size k × B (with k ≥ 2) for estimating items whose frequency distribution is Zipfian is O(DISPLAYFORM13 Proof: By plugging in our upper bound on the estimation error of each item computed in (9.11) in the definition of expected estimation error of Count-Min (9.2), we have the following. DISPLAYFORM14 Next, we show a lower bound on the expected error of Count-Min sketch with B buckets (more precisely, of size (k × B/k)) for estimating the frequency of items that follow Zipf Law. Observation 9.9. 
For each item j, Pr[e j ≥ 1/(2( DISPLAYFORM15 For each item j, the probability that none of the first 2. DISPLAYFORM16 In particular, for the case B = Θ(n) and k = O, the expected error of Count-Min sketch is Θ(ln n B). Proof: The proof follows from Lemma 9.8 and 9.10. We remark that the bound in Lemma 9.8 is for the expected estimation error of Count-Min sketch of size k × B. Hence, to get the bound on the expected error of Count-Min of size k × (B k), we must replace B with B/k. Definition 9.12 (φ-HeavyHitter). Given a set of items I = {i 1, · · ·, i n} with frequencies f = f 1, · · ·, f n, an item j is a φ-HeavyHitter of I if f j ≥ φ|| f || 1. Remark 9.13. If the frequency distribution of items I is Zipfian, then the number of φ-HeavyHitters is at most 1/(φ ln n). In other words, B r ≤ (φ ln n) −1.To recall, in our Learned Count-Min sketch with parameters (B r, B), B r buckets are reserved for the frequent items returned by HH and the rest of items are fed to a Count-Min sketch of size k × where k is a parameter to be determined. We emphasize that the space complexity of Learned Count-Min sketch with parameter (B r, B) is B r + B = O(B). Theorem 9.14. The optimal expected error of Learned Count-Min sketches with parameters (B r, B) is at most DISPLAYFORM0 Proof: Since, the count of top B r frequent items are stored in their own buckets, for each j ≤ B r, e j = 0. Hence, DISPLAYFORM1 Note that the last inequality follows from the guarantee of single hash functions; in other words, setting k = 1 in the Count-Min sketch. Unlike the previous part, here we assume that we are given a noisy HeavyHitters oracle HH δ such that for each item j, Pr HH δ (j, DISPLAYFORM0 Br ln n) ≤ δ where HH 0 is an ideal HeavyHitter oracle that detects heavy items with no error. Lemma 9.15. In an optimal Learned Count-Min sketch with parameters (B r, B) and a noisy HeavyHitters oracle DISPLAYFORM1 Proof: The key observation is that each heavy item, any of B r most frequent items, may only misclassify with probability δ. Hence, for each item j classified as "not heavy", (9.12) where the first term denotes the expected contribution of the misclassified heavy items and the second term denotes the expected contribution of non-heavy items. DISPLAYFORM2 The rest of analysis is similar to the proof of Theorem 9.14. DISPLAYFORM3 Corollary 9.16. Assuming B r = Θ(B) = Θ(n) and DISPLAYFORM4 Space analysis. Here, we compute the amount of space that is required by this approach. ) with cutoff (B r reserved buckets) for estimating the frequency of items whose distribution is Zipfian is O(B).Proof: The amount of space required to store the counters corresponding to functions DISPLAYFORM5 Here, we also need to keep a mapping from the heavy items (top B r frequent items according to HH δ) to the reserved buckets B r which requires extra O(B r) space; each reserved buckets stores both the hashmap of its corresponding item and its count. In this section, we compare the non-asymptotic expected error of Count-Min sketch and out Learned Count-Min sketch with ideal HeavyHitters oracle. Throughout this section, we assume that the amount of available space to the frequency estimation algorithms is (1+α)B words. More precisely, we compare the expected error of Count-Min sketch with k hash functions and our Learned CountMin sketch with B r = αB reserved buckets. 
Recall that we computed the following bounds on the expected error of these approaches (Lemma 9.10 and Theorem 9.14): DISPLAYFORM0 In the rest of this section, we assume that B ≥ γn and then compute the minimum value of γ that guarantees DISPLAYFORM1 In other words, we compute the minimum amount of space that is required so that our Learned Count-Min sketch performs better than Count-Min sketch by a factor of at least (1 + ε). DISPLAYFORM2 Hence, we must have (0.58 DISPLAYFORM3 · ln n. By solving the corresponding quadratic equation, DISPLAYFORM4 This implies that ln γ = − ln α + 0.58 DISPLAYFORM5 Published as a conference paper at ICLR 2019 Next, we consider different values of k and show that in each case what is the minimum amount of space in which Learned CM outperforms CM by a factor of 1.06 (setting ε = 0.06).• k = 1. In this case, we are basically comparing the expected error of a single hash function and Learned Count-Min. In particular, in order to get a gap of at least (1 + ε), by a more careful analysis of Lemma 9.2, γ must satisfy the following condition: DISPLAYFORM6 To simplify it further, we require that (ln( 2 γ)+0.58) 2 ≤ 3.18·(ln 2 n−1.65) which implies that γ = Θ(1/ ln n).• k = 2. In this case, γ ≤ • k ∈ {3, 4}: In this case, γ ≤ 2 e √ (ln n)/3.5for sufficiently large values of B. Hence, we require that the total amount of available space is at least DISPLAYFORM7 for sufficiently large values of B. Hence, we require that the total amount of available space is at least DISPLAYFORM8 We also note that settings where the number of buckets is close to n are quite common in practice. Recall that the estimation error of a hash function h is defined as Err(F(I),F h (I)):= i∈I f i · (f (h(i)) − i). Note that we can rewrite Err(F(I),F h (I)) as DISPLAYFORM9 Note that in (10.1) the second term is independent of h and is a constant. Hence, an optimal hash function minimizes the first term, b∈B I f (b) 2.Suppose that an item i * with frequency at least DISPLAYFORM10 collides with a (non-empty) set of items I * ⊆ I \ {i *} under an optimal hash function h *. Since the total frequency of the items mapped to the bucket b * containing i * is greater than. Next, we define a new hash function h with smaller estimation error compared to h * which contradicts the optimality of h *: DISPLAYFORM11 Formally, Err(F(I),F h * (I)) − Err(F(I),F h (I)) = f h * (b DISPLAYFORM12 Next, we show that in any optimal hash function h * :[n] → [B] and assuming Zipfian input distribution, Θ(B) most frequent items do not collide with any other items under h *.Lemma 10.2. Suppose that B = n/γ where γ ≥ e 4.2 is a constant and lets assume that items follow Zipfian distribution. In any hash function h *:[n] → [B] with minimum estimation error, none of the B 2 ln γ most frequent items collide with any other items (i.e., they are mapped to a singleton bucket). Proof: Let i j * be the most frequent item that is not mapped to a singleton bucket under h *. If j * > B 2 ln γ then the statement holds. Suppose it is not the case and j * ≤ B 2 ln γ. Let I denote the set of items with frequency at most f j * = 1/j * (i.e., I = {i j | j ≥ j *}) and let B I denote the number of buckets that the items with index at least j * mapped to; B I = B − j * + 1. Also note that by Observation 9.1, f (I) < ln(n j *) + 1. Next, by Claim 10.1, we show that h * does not hash the items {j *, · · ·, n} to B I optimally. In particular, we show that the frequency of item j * is more than DISPLAYFORM13. 
To prove this, first we observe that the function g(j):= j · (ln(n/j) + 1) is strictly increasing in Proof: By Lemma 10.2, in any hash function with minimum estimation error, the (B 2 ln γ) most frequent items do not collide with any other items (i.e., they are mapped into a singleton bucket) where γ = n/B > e 4.2.Hence, the goal is to minimize (10.1) for the set of items I which consist of all items other than the (B 2 ln γ) most frequent items. Since the sum of squares of m items that summed to S is at least S 2 /m, the multi-set loss of any optimal hash function is at least: ) as well. DISPLAYFORM14
Data stream algorithms can be improved using deep learning, while retaining performance guarantees.
704
scitldr
Link prediction in simple graphs is a fundamental problem in which new links between nodes are predicted based on the observed structure of the graph. However, in many real-world applications, there is a need to model relationships among nodes which go beyond pairwise associations. For example, in a chemical reaction, relationship among the reactants and products is inherently higher-order. Additionally, there is need to represent the direction from reactants to products. Hypergraphs provide a natural way to represent such complex higher-order relationships. Even though Graph Convolutional Networks (GCN) have recently emerged as a powerful deep learning-based approach for link prediction over simple graphs, their suitability for link prediction in hypergraphs is unexplored -- we fill this gap in this paper and propose Neural Hyperlink Predictor (NHP). NHP adapts GCNs for link prediction in hypergraphs. We propose two variants of NHP --NHP-U and NHP-D -- for link prediction over undirected and directed hypergraphs, respectively. To the best of our knowledge, NHP-D is the first method for link prediction over directed hypergraphs. Through extensive experiments on multiple real-world datasets, we show NHP's effectiveness. The problem of link prediction in graphs has numerous applications in the fields of social network analysis BID24, knowledge bases BID30, bioinformatics BID26 to name a few. However, in many real-world problems relationships go beyond pairwise associations. For example, in chemical reactions data the relationship representing a group of chemical compounds that can react is inherently higher-order and similarly, the co-authorship relationship in a citation network is higher-order etc. Hypergraphs provide a natural way to model such higher-order complex relations. Hyperlink prediction is the problem of predicting such missing higher-order relationships in a hypergraph. Besides the higher-order relationships, modeling the direction information between these relationships is also useful in many practical applications. For example, in the chemical reactions data, in addition to predicting groups of chemical compounds which form reactants and/or products, it is also important to predict the direction between reactants and products, i.e., a group of reactants react to give a group of products. Directed hypergraphs BID12 provide a way to model the direction information in hypergraphs. Similar to the undirected hypergraphs, predicting the missing hyperlinks in a directed hypergraph is also useful in practical settings. Figure 1 illustrates the difference between modeling the chemical reactions data using undirected and directed hypergraphs. Most of the previous work on hyperlink prediction BID43 focus only on undirected hypergraphs. In this work we focus both on undirected and directed hypergraphs. Recently, Graph Convolutional Networks (GCNs) BID21 have emerged as a powerful tool for representation learning on graphs. GCNs have also been successfully applied for link prediction on normal graphs BID34 BID20. Inspired by the success of GCNs for link prediction in graphs and deep learning in general BID39, we propose a GCN-based framework for hyperlink prediction which works for both undirected and directed hypergraphs. We make the following contributions:Figure 1: Illustrating the difference between modeling chemical reactions data using undirected and directed hypergraphs. 
To the left is the undirected hypergraph, in which both the reactants and products are present in the same hyperlink. Whereas in the directed hypergraph (to the right), for a given reaction, the reactants are connected by one hyperlink and products are connected by another hyperlink and both these hyperlinks are connected by a directed link.• We propose a Graph Convolutional Networks (GCN)-based framework called Neural Hyperlink Predictor (NHP) for the problem of hyperlink prediction. To the best of our knowledge, this is the first ever deep learning based approach for this problem.• We extend the proposed NHP for the problem of hyperlink prediction in directed hypergraphs. To the best of our knowledge, this is the first ever attempt at the problem of link prediction in directed hypergraphs.• Through extensive experiments on multiple real-world datasets, we show the effectiveness of proposed NHP for link prediction in both undirected and directed hypergraphs. We have released NHP's source code at this anonymous location: https://anonymous.4open. science/repository/7d86231e-f6ba-4795-ae51-ac28d89f1521/. In this section, we briefly review related work in deep learning on graphs and link prediction on hypergraphs. Learning representations on graphs: The key advancements in learning low-dimensional node representations in graphs include matrix factorisation-based methods, random-walk based algorithms, and deep learning on graphs BID16. Our work is based on deep learning on graphs. Geometric deep learning is an umbrella phrase for emerging techniques attempting to generalise (structured) deep neural network models to non-Euclidean domains such as graphs and manifolds. The earliest attempts to generalise neural networks to graphs embed each node in an Euclidean space with a recurrent neural network (RNN) and use those embeddings as features for classification or regression of nodes or graphs BID14 BID32.A CNN-like deep neural neural network on graphs was later formulated in the spectral domain in a pioneering work BID5 by a mathematically sound definition of convolution on graph employing the analogy between the classical Fourier transforms and projections onto the eigen basis of the graph Laplacian operator BID17. Initial works proposed to learn smooth spectral multipliers of the graph Laplacian, although at high computational cost BID5 BID19. To resolve the computational bottleneck and avoid the expensive computation of eigenvectors, the ChebNet framework BID7 learns Chebyshev polynomials of the graph Laplacian (hence the name ChebNet). The graph convolutional network (GCN) BID21 ) is a simplified ChebNet framework that uses simple filters operating on 1-hop local neighborhoods of the graph. A second formulation of convolution on graph is in the spatial domain (or equivalently in the vertex domain) where the localisation property is provided by construction. One of the first formulations of a spatial CNN-like neural network on graph generalised standard molecular feature extraction methods based on circular fingerprints BID8. Subsequently, all of the above types (RNN, spectral CNN, spatial CNN on graph) were unified into a single message passing neural network (MPNN) framework and a variant of MPNN has been shown to achieve state-of-the-art on an important molecular property prediction benchmark. The reader is referred to a comprehensive literature review ) and a survey BID16 on the topic of deep learning on graphs and learning representation on graphs respectively. 
Below, we give an overview of related research in link prediction on hypergraphs where relationships go beyond pairwise. Link Prediction on hypergraphs: Machine learning on hypergraphs was introduced in a seminal work BID43 that generalised the powerful methodology of spectral clustering to hypergraphs and further inspired algorithms for hypergraph embedding and semi-supervised classification of hypernodes. Link prediction on hypergraph (hyperlink prediction) has been especially popular for social networks to predict higher-order links such as a user releases a tweet containing a hashtag BID22 and to predict metadata information such as tags, groups, labels, users for entities (images from Flickr) BID1. Techniques for hyperlink prediction on social networks include ranking for link proximity information BID22 and matrix completion on the (incomplete) incidence matrix of the hypergraph BID1 BID27. Hyperlink prediction has also been helpful to predict multi-actor collaborations BID35.In other works, a dual hypergraph has been constructed from the intial (primal) hypergraph to cast the hyperlink prediction as an instance of vertex classification problem on the dual hypergraph BID25. Coordinated matrix maximisation (CMM) predicts hyperlinks in the adjacency space with non-negative matrix factorisation and least square matching performed alternately in the vertex adjacency space. CMM uses expectation maximisation algorithm for optimisation for hyperlink prediction tasks such as predicting missing reactions of organisms' metabolic networks. Undirected hypergraph is an ordered pair H = (V, E) where V = {v 1, · · ·, v n} is a set of n hypernodes and E = {e 1, · · ·, e m} ⊆ 2 V is a set of m hyperlinks. The problem of link prediction in an incomplete undirected hypergraph H involves predicting missing hyperlinks fromĒ = 2 V − E based on the current set of observed hyperlinks E. The number of hypernodes in any given hyperlink e ∈ E can be any integer between 1 and 2 n. This variable cardinality of a hyperlink makes traditional graph-based link prediction methods infeasible because they are based on exactly two input features (those of the two nodes potentially forming a link). The variable cardinality problem also in an exponentially large inference space because the total number of potential hyperlinks is O(2 n). However, in practical cases, there is no need to consider all the hyperlinks inĒ as most of them can be easily filtered out. For example, for the task of finding missing metabolic reactions, we can restrict hyperlink prediction to all feasible reactions because the infeasible reactions seldom have biological meanings. In other cases such as predicting multi-author collaborations of academic/technical papers, hyperlinks have cardinalities less than a small number, as papers seldom have more than 6 authors. The number of restricted hyperlinks in such practical cases is not exponential and hence hyperlink prediction on the restricted set of hyperlinks becomes a feasible problem. Formally, a hyperlink prediction problem is a tuple (H, E), where H = (V, E) is a given incomplete hypergraph and E is a set of (restricted) candidate hyperlinks with E ⊆ E. The problem is to find the most likely hyperlinks missing in H from the set of hyperlinks E − E.Directed hypergraph BID12 is an ordered pair H = (V, E) where V = {v 1, · · ·, v n} is a set of n hypernodes and DISPLAYFORM0 V is a set of m directed hyperlinks. 
Each e ∈ E is denoted by (t, h), where t ⊆ V is the tail and h ⊆ V is the head, with t ≠ ∅ and h ≠ ∅. As shown in figure 1, chemical reactions can be modeled by directed hyperlinks with chemical substances forming the set V. Observe that this model is general enough to subsume previous graph models:
• an undirected hyperlink is the special case when t = h;
• a directed simple link (edge) is the special case when |t| = |h| = 1.

Similar to the undirected case, the directed hyperlink prediction problem is a tuple (H, E), where H = (V, E) is a given incomplete directed hypergraph and E is a set of candidate hyperlinks with E ⊆ E. The problem is to find the most likely hyperlinks missing in H from the set of hyperlinks E − E.

In this section we discuss the proposed approach for link prediction in hypergraphs. Our proposed NHP can predict both undirected and directed hyperlinks in a hypergraph. We start with how NHP can be used to predict undirected hyperlinks and then explain how NHP can be extended to the directed case.

Given an undirected hypergraph H = (V, E), as a first step NHP constructs the dual hypergraph H* = (V*, E*). The dual is obtained by taking V* = E as the set of hypernodes and E* = {e*_1, ..., e*_n} such that e*_i = {e ∈ E : v_i ∈ e}, with e*_i corresponding to v_i for i = 1, ..., n. The vertex-hyperedge incidence matrices of H and H* are transposes of each other. The problem of link prediction in H can be posed as a binary node classification problem in H* BID25. A label +1 on a node in H* indicates the presence of a hyperlink in H and a label of −1 indicates the absence. For the problem of semi-supervised node classification on the dual hypergraph H*, we use Graph Convolutional Networks (GCN) on the graph obtained from the clique expansion of H*. Clique expansion (BID43; BID11) is a standard and simple way of approximating a hypergraph by a graph, replacing every hyperlink of size s with an s-clique.

Graph Convolutional Network. Let G = (V, E), with N = |V|, be a simple undirected graph with the adjacency matrix A ∈ R^{N×N}, and let the data matrix be X ∈ R^{N×p}. The data matrix has p-dimensional real-valued vector representations for each node in the graph. The forward model for a simple two-layer GCN takes the following simple form:

Z = f(X, A) = softmax( Ā ReLU( Ā X Θ^(0) ) Θ^(1) ),    (1)

where Ā = D̃^{−1/2} Ã D̃^{−1/2} is the normalised adjacency matrix with Ã = A + I_N and D̃_ii = Σ_j Ã_ij, Θ^(0) ∈ R^{p×h} is an input-to-hidden weight matrix for a hidden layer with h hidden units, and Θ^(1) ∈ R^{h×r} is a hidden-to-output weight matrix. The softmax activation function is defined as softmax(x_i) = exp(x_i) / Σ_j exp(x_j) and applied row-wise. For semi-supervised multi-class classification with q classes, we minimise the cross-entropy error over the set of labeled examples V_L:

L = − Σ_{i∈V_L} Σ_{j=1}^{q} Y_{ij} ln Z_{ij}.    (2)

The weights of the graph convolutional network, viz. Θ^(0) and Θ^(1), are trained using gradient descent. We note that BID25 also follow a similar approach of constructing the dual graph followed by node classification for hyperlink prediction. However, ours is a deep learning based approach, and BID25 do not perform link prediction in directed hypergraphs.

Figure 2: (best seen in colour) The proposed NHP framework. We convert the hypergraph with the observed hyperlinks and the candidate hyperlinks into its dual, in which hyperlinks are converted into hypernodes. We then use a technique from positive unlabelled learning to get plausible negatively labelled hypernodes. The clique expansion of the hypergraph is used to approximate the hypergraph. Then a GCN is run on the graph to classify the unlabelled hypernodes.
A label of +1 on e 1 indicates presence of the e 1 in the primal. For more details, refer to section 4.Learning on hypergraphs in the positive unlabelled setting The cross-entropy objective of GCN in equation 2 inherently assumes the presence of labeled data from at least two different classes and hence cannot be directly used for the positive unlabeled setting. A challenging variant of semi-supervised learning is positive-unlabelled learning BID9 which arises in many real-world applications such as text classification, recommender systems, etc. In this setting, only a limited amount of positive examples are available and the rest are unlabelled examples. In positive unlabelled learning framework, we construct a plausible negative sampling set based on data similarity BID18. We calculate the average similarity of each unlabelled point u ∈ E − E, to the positive labelled examples in E: DISPLAYFORM4 where S represents a similarity function and f represents a map that maps an example to a ddimensional emebedding space f: E → R d. We then rank the unlabelled training examples in E − E in the ascending order of their average similarity scores. We then select the top ones in the ascending order (i.e. with the lowest similarity values) to construct the set of plausible negative examples F ⊆ E − E. The intuition here is that the set of plausible negative examples contain examples most dissimilar to the positive examples. The GCN on the hypergraph can subsequently be run by minimising the objective 2 over the positive examples in E and the plausible negative examples in F i.e. V L = E ∪ F. As explained in Section 3, hyperlink prediction in directed hypergraphs is the problem of predicting directing links which are tail-head set pairs. However, in practice the collection of the tail and head sets is also incomplete. Therefore, the problem of link prediction in directed hypergraphs also requires to predict missing tail and head sets of nodes besides predicting directed links among them. The tail and head sets of nodes can be thought of as undirected hyperlinks and the directed hyperlink is between these undirected hyperlinks. A straight forward approach for this problem would be to predict the undirected hyperlinks first and then followed by predicting the directed links between pairs. However, this sequential approach may not produce desirable as the error during the training for directed part would not have any impact on the training for the undirected part. Therefore, we propose the following joint learning scheme, DISPLAYFORM0 DISPLAYFORM1 The tail hyperlink and the corresponfding head hyperlink are separate hypernodes in the dual and form directed hyperlinks in the primal (with the direction from t to h). The set W + L consists of directed hyperlinks that currently exist in the given directed hypergraph. Note that, for loss L u, the set of positively labelled hypernodes will be, DISPLAYFORM2 We sample |W + L | = |E| hypernodes (in the dual) from the unlabelled data using the positive unlabelled approach of 3 to get the set of W − L pairs. We label these pairs negative i.e. d ij1 = 0 and DISPLAYFORM3 L are used to minimise the objective 6. To explain how D is computed, we rewrite equation 1 as: DISPLAYFORM4 We use D = g(x 1, x 2) with g being a function that takes the dual hypernode representations x 1 ∈ X and x 2 ∈ X and is parameterised by for example a simple neural network. 
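To make the pipeline of this section concrete, the following is a minimal PyTorch-style sketch, not the released NHP code, of the clique-expansion adjacency, the two-layer GCN of equation 1 (with softmax folded into the loss), and one possible instantiation of the direction scorer g as an MLP over concatenated dual-hypernode embeddings. The layer sizes, the sigmoid output, and the way the two losses are combined in the trailing comment are assumptions, since the exact joint objective is not fully recovered above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def clique_expansion_adj(hyperlinks, n):
    """Adjacency matrix of the clique expansion: every hyperlink of size s becomes an s-clique."""
    A = torch.zeros(n, n)
    for e in hyperlinks:                          # each e is an iterable of hypernode indices
        idx = torch.tensor(sorted(e))
        A[idx.unsqueeze(1), idx.unsqueeze(0)] = 1.0
    A.fill_diagonal_(0.0)
    return A


class TwoLayerGCN(nn.Module):
    """Equation 1: Z = softmax(A_bar @ ReLU(A_bar @ X @ Theta0) @ Theta1); softmax is applied by the loss."""
    def __init__(self, p, h, r):
        super().__init__()
        self.theta0 = nn.Linear(p, h, bias=False)     # Theta^(0): p -> h
        self.theta1 = nn.Linear(h, r, bias=False)     # Theta^(1): h -> r

    @staticmethod
    def normalise(A):
        A_tilde = A + torch.eye(A.size(0), device=A.device)
        d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)

    def forward(self, X, A):
        A_bar = self.normalise(A)
        H = F.relu(A_bar @ self.theta0(X))
        return A_bar @ self.theta1(H)                 # node logits / embeddings (pre-softmax)


class DirectionScorer(nn.Module):
    """One instantiation of g(x1, x2): an MLP on concatenated embeddings, scoring the direction t -> h."""
    def __init__(self, r, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * r, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x1, x2):
        return torch.sigmoid(self.mlp(torch.cat([x1, x2], dim=-1))).squeeze(-1)


# Sketch of joint training on the dual: hyperlink-presence loss plus direction loss.
# gcn, scorer = TwoLayerGCN(p, 16, r), DirectionScorer(r)
# Z = gcn(X, clique_expansion_adj(dual_hyperlinks, X.size(0)))
# loss = F.cross_entropy(Z[labelled_idx], labels) \
#      + F.binary_cross_entropy(scorer(Z[tail_idx], Z[head_idx]), direction_targets)
```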
In the experiments, we used a simple 2-layer multilayer perceptron on the concatenated embeddings x_1 || x_2, i.e., g(x_1, x_2) = MLP(x_1 || x_2). We train the GCN weights and the MLP weights end-to-end using backpropagation.

In this section, we evaluate NHP on hyperlink prediction in undirected hypergraphs. We performed two different sets of experiments whose motivations and setups are as follows.

Predicting reactions of metabolic networks: Reconstructed metabolic networks are important tools for understanding the metabolic basis of human diseases, increasing the yield of biologically engineered systems, and discovering novel drug targets. We used four metabolic network datasets and show their statistics in table 1. For each dataset, we randomly generated fake reactions according to the substance distribution of the existing reactions. So, the candidate reactions contain already existing ones and the randomly generated fake ones. The number of fake reactions generated is equal to the number of already existing ones. Given a small number of reactions, the task is to predict the other reactions.

Table 3: mean AUC (higher is better) over 10 trials. NHP achieves consistently superior performance over its baselines for all the datasets. Refer to section 5 for more details.

Table 4: mean (± std) number of hyperlinks recovered over 10 trials (higher is better) among the top ranked |∆E| hyperlinks. NHP achieves consistently superior performance over its baselines for all the datasets. Refer to section 5 for more details.

dataset                         iAF692      iHN637      iAF1260b    iJO1366     CORA        DBLP
SHC (BID43)                     248 ± 6     289 ± 4     1025 ± 4    1104 ± 19   1056 ± 14   845 ± 18
node2vec                        299 ± 10    303 ± 4     1100 ± 13   1221 ± 21   1369 ± 15   813 ± 9
CMM                             170 ± 6     225 ± 10    827 ± 1     963 ± 15    1452 ± 13   651 ± 20
GCN on star expansion (BID40)   174 ± 5     219 ± 12    649 ± 10    568 ± 18    1003 ± 14   646 ± 15
NHP-U (ours)                    313 ± 6     360 ± 5     1258 ± 9    1381 ± 9    1476 ± 20   866 ± 15

Predicting multi-author collaborations in coauthorship networks: Research collaborations in the scientific community have been extensively studied to understand team dynamics in social networks BID29 BID3 BID2 BID6. Coauthorship data provide a means to analyse research collaborations. We used cora and dblp for coauthorship data. The statistics are shown in table 2 and the construction of the datasets is pushed to the supplementary material. A coauthorship hypergraph (primal) contains each author as a hypernode and each paper represents a hyperlink connecting all the authors of the paper. The corresponding dual hypergraph considers each author as a hyperlink connecting all the papers (hypernodes) coauthored by the author. The hyperlink prediction problem is, given a small number of collaborations, to predict other collaborations among a given set of candidate collaborations.

For each dataset we randomly sampled 10% of the hypernodes to get E and then we sampled an equal number of negatively-labelled hypernodes from E − E in the positive-unlabelled setting of equation 3. To get the feature matrix X, we used random 32-dimensional Gaussian features (p = 32 in equation 1) for the metabolic networks and the bag-of-words features shown in table 2 for the coauthorship datasets. For fake papers we generated random Gaussian bag-of-words features. We used node2vec to learn a low-dimensional embedding mapping f : E → R^d with d = 128. We used the clique expansion of the dual hypergraph as the input graph to node2vec and cosine similarity to compute the similarity between two embeddings.
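The plausible-negative selection described above, which ranks unlabelled candidates by their average cosine similarity to the positives (computed on node2vec embeddings of the dual hypernodes) and keeps the least similar ones, can be sketched as follows; the function name and interface are our own:

```python
import numpy as np


def select_plausible_negatives(emb, positive_idx, unlabelled_idx, n_neg):
    """Return the n_neg unlabelled candidates least similar (on average) to the positives.
    emb: (num_candidates, d) embeddings of the dual hypernodes, e.g. from node2vec."""
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)   # unit-normalise rows
    mean_pos = emb[positive_idx].mean(axis=0)                          # average positive direction
    avg_sim = emb[unlabelled_idx] @ mean_pos                           # = mean cosine similarity
    order = np.argsort(avg_sim)                                        # ascending similarity
    return [unlabelled_idx[i] for i in order[:n_neg]]
```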
We compared NHP against the following state-of-the-art baselines for the same E as constructed above.• Spectral hypergraph Clustering (SHC) BID43: SHC outputs classification scores by f = (I − ξΘ) −1 y. We used SHC on the dual hypergraph.• node2vec : One of the most popular node embedding approaches. We note that have shown node2vec to be superior to DeepWalk BID31 and LINE BID36 and hence we compared against only node2vec. We used node2vec to embed the nodes of the clique expansion (of the dual). We then used an MLP on the embeddings with the semi-supervised objective of equation 2 in the positive unlabelled setting of equation 3.• Co-ordinated Matrix Maximisation (CMM): The matrix factorisation-based CMM technique uses the EM algorithm to determine the presence or absence of candidate hyperlinks.• GCN on star expansion BID40: PinSage BID40 ) is a GCN-based method designed to work on the (web-scale) bipartite graph of Pinterest. The Pinterest graph can be seen as the star expansion of a hypergraph BID0 with pins (hypernodes) on one side of the partition and boards (hyperlinks) on the other side. We have compared NHP against star expansion. This is essentially approximating the hypergraph with its star exapansion and then running a GCN over it (instead of the clique expansion of NHP). Similar to, we report mean AUC over 10 trials in table 3 and the mean number of hyperlinks recovered over 10 trials in the top ranked |∆E| ones in table 4. Note that ∆E ⊂ E is the set of missing hyperlinks with |∆E| = 0.9 * |E|. As we can observe, we consistently outperform the baselines in both the metrics. We believe this is because of the powerful non-linear feature extraction capability of GCNs. We also report Recall@ |∆E| for NHP-U for all datasets (to make the numbers across datasets somewhat comparable). It is got through dividing the mean number of hyperlinks recovered by ∆E. Table 6: mean AUC over 10 trials for all the datasets. Both the proposed models achieve similar . Refer to section 6 for more details. dataset iAF692 iHN637 iAF1260b iJO1366 node2vec + MLP 255 ± 5 237 ± 5 838 ± 13 902 ± 11 CMM + MLP 253 ± 9 241 ± 11 757 ± 26 848 ± 21 GCN on star expansion BID40 Table 7: mean (± std) number of hyperlinks recovered over 10 trials (higher is better) among the top ranked |∆E| hyperlinks. Both the proposed models achieve similar . Refer to section 6 for more details. We used the same four metabolic networks to construct directed hyperlinks. The metabolic reactions are encoded by stoichiometric matrices. The negative entries in a stoichiometric matrix indicate reactants and positive entries indicate products. We extracted only those reactions which have at least two substances in each of the reactant side and the product side and the statistics are shown in table 5. We labelled randomly sampled 10% of the hyperlinks in the data and use the remaining 90% unlabelled data for testing. Tables 6 and 7 show the on the datasets. NHP-D (joint) is the model proposed in 4.2. On the other hand, NHP-D (sequential) is the model which treats undirected hyperlink prediction and direction prediction separately. NHP-D (sequential) first runs a GCN on the clique expansion of the undirected hypergraph to get the node embeddings (without softmax) and then runs a multi-layer perceptron on the concatenated emebeddings to predict the directions. Table 9: mean (± std) number of hyperlinks recovered over 10 trials among the top ranked |∆E| hyperlinks. 
Positive-unlabeled learning of section 3 achieves consistently lower standard deviations than the other two. The standard deviations of random negative sampling are on the higher side. Refer to section 7 for more details. We compared against the following baselines with a multi-layer perceptron (MLP):
• node2vec + MLP: We used node2vec for the undirected hyperlink prediction part (as explained in section 5) and a 2-layer perceptron to predict the direction between hyperlinks with the joint objective of equation 4.
• Co-ordinated Matrix Maximisation (CMM) + MLP: The matrix-factorisation-based CMM technique uses the EM algorithm to determine the presence or absence of candidate hyperlinks for its underlying optimisation problem. We used a 2-layer perceptron in a sequential manner. To get a representation for a hyperlink e, we took the mean of the representations of all the hypernodes v ∈ e from the matrix W above. Note that CMM works on the primal hypergraph.
• GCN on star expansion BID40 + MLP: We used the star expansion to approximate the input hypergraph and then ran a GCN on it (instead of the clique expansion of NHP). A 2-layer perceptron is used to predict the direction between hyperlinks with the joint objective of equation 4.
As we see in the tables, both NHP-D (joint) and NHP-D (sequential) perform similarly. This can be attributed to the fact that the training data for predicting directions between hyperlinks is sparse, and hence the learned hypernode representations of both models are similar. Please note that existing approaches for link prediction on directed simple graphs cannot be trivially adopted for this problem because of the sparsity in the training data. Comparison to baselines: NHP-D outperforms the baselines on 3 out of 4 datasets. The dataset iHN637 seems to be a very challenging dataset on which each model recovers less than half the number of missing hyperlinks. In order to justify the positive-unlabeled learning of equation 3, we compared NHP-U against negative samples chosen uniformly at random. The results for all the undirected hypergraph datasets are shown in tables 8 and 9. In the tables, we have called negative samples chosen uniformly at random from E − E random negative sampling. We have called negative samples chosen through the positive-unlabeled learning of equation 3, i.e. NHP-U, positive-unlabeled learning. Note that the numbers corresponding to this row are the same as those in tables 3 and 4. In addition to the above two, we used the positive-unlabeled learning technique of equation 3 to sort the hyperedges (primal) in nondecreasing order of their similarities and then selected uniformly at random from only the first half of the sorted order (i.e. the most dissimilar hyperedges). We have called this technique mixed as, intuitively, it provides the benefits of both positive-unlabeled learning and uniform random negative sampling. More principled approaches than mixed are left for future work. As we can see in table 9, the standard deviations of random negative sampling are on the higher side. This is expected, as the particular choice made for negative samples decides the decision boundary for the binary classifier. The superior AUC values of mixed in table 8 support our intuition that it provides the benefits of both positive-unlabeled learning and uniform random negative sampling. The standard deviations of mixed are much lower, but still higher than positive-unlabeled learning.
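To make the three negative-sampling strategies concrete, here is a minimal sketch assuming each candidate hyperedge has already been assigned a scalar similarity score to the observed hyperedges (the scoring criterion of equation 3 itself is not reproduced); the function names and the top-k reading of the positive-unlabeled variant are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def random_negatives(candidates, k):
    # Random negative sampling: pick k candidates uniformly at random.
    idx = rng.choice(len(candidates), size=k, replace=False)
    return [candidates[i] for i in idx]

def pu_negatives(candidates, similarity, k):
    # Positive-unlabeled style: take the k candidates most dissimilar to observed hyperedges.
    order = np.argsort(similarity)            # nondecreasing similarity
    return [candidates[i] for i in order[:k]]

def mixed_negatives(candidates, similarity, k):
    # Mixed: sample uniformly at random from the most dissimilar half of the sorted order.
    order = np.argsort(similarity)
    half = order[: len(order) // 2]
    idx = rng.choice(half, size=k, replace=False)
    return [candidates[i] for i in idx]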
In general, summarising the results for all datasets, we believe that positive-unlabeled learning is superior to random negative sampling because of the higher confidence (lower standard deviation) of its predictions. We have introduced NHP, a novel neural approach for hyperlink prediction in both undirected and directed hypergraphs. To the best of our knowledge, this is the first neural method for hyperlink prediction in undirected hypergraphs. NHP is also the first method for hyperlink prediction in directed hypergraphs. Through extensive experiments on multiple real-world datasets, we have demonstrated NHP's effectiveness over state-of-the-art baselines. Approaches that augment GCNs with attention BID38, self-training and co-training with random walks BID23, and edge-feature learning in a dual-primal setup BID28 have recently been proposed for graph-based semi-supervised learning tasks. Our NHP framework provides the flexibility to incorporate these approaches for further improved performance. An interesting future direction is predicting hyperlinks in partial-order hypergraphs. We leave extending the NHP framework to inductive settings as part of future work.

hyperparameter            value
number of hidden units    16
number of hidden layers   2
dropout rate              0.5
L2 regularisation         5 × 10^-4
learning rate             0.01
non-linearity             ReLU
TAB1: Hyperparameters of the GCN used for all the datasets

• DBLP: We used the DBLP database v4. We filtered out papers without abstracts, and processed each abstract by tokenizing it and removing stop-words. Further, we filtered out papers with only one author. This left 540532 papers. In order to ensure that the resulting hypergraph would be sufficiently dense, we found the number of papers authored by each author and took the top 1000 authors as 'selected authors'. Then we filtered out the papers that were not authored by at least three of the selected authors. Finally, we were left with 1590 papers by 685 of the original 1000 selected authors. To extract word features from each of these abstracts, we took all words appearing in these abstracts with a frequency greater than 50. Each abstract was thus represented by a 602-dimensional bag-of-words representation. For both datasets, we randomly sample |E| fake papers according to the author distribution of the existing non-fake papers (2708 and 1590 for CORA and DBLP respectively). We randomly generated Gaussian p-dimensional features for these fake papers (1433 and 602 for CORA and DBLP respectively).
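As a sketch of this negative-candidate construction, the following illustrates sampling fake papers according to the empirical author distribution of the real papers and assigning them random Gaussian bag-of-words features; the variable names and the way author-set sizes are matched to real papers are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def make_fake_papers(real_papers, num_authors, feature_dim, n_fake):
    # real_papers: list of sets of author ids; returns fake author sets and Gaussian features.
    counts = np.zeros(num_authors)
    for authors in real_papers:
        counts[list(authors)] += 1
    probs = counts / counts.sum()             # empirical author distribution

    sizes = [len(p) for p in real_papers]
    fake_papers, fake_features = [], []
    for _ in range(n_fake):
        size = rng.choice(sizes)              # mimic the size of a real author set
        authors = rng.choice(num_authors, size=size, replace=False, p=probs)
        fake_papers.append(set(int(a) for a in authors))
        fake_features.append(rng.normal(size=feature_dim))   # random Gaussian bag-of-words
    return fake_papers, np.asarray(fake_features)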
We propose Neural Hyperlink Predictor (NHP). NHP adapts graph convolutional networks for link prediction in hypergraphs
705
scitldr
In this paper, we present a method for adversarial decomposition of text representation. This method can be used to decompose a representation of an input sentence into several independent vectors, where each vector is responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained con- trolled change of these aspects of the input sentence. For example, our model is capable of learning a continuous (rather than categorical) representation of the style of the sentence, in line with the reality of language use. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Finally, we evaluate the obtained meaning embeddings on a downstream task of para- phrase detection and show that they are significantly better than embeddings of a regular autoencoder. Despite the recent successes in using neural models for representation learning for natural language text, learning a meaningful representation of input sentences remains an open research problem. A variety of approaches, from sequence-to-sequence models that followed the work of BID37 to the more recent proposals BID2 BID29 BID8 BID25 BID36 BID5 share one common drawback. Namely, all of them encode the input sentence into just one single vector of a fixed size. One way to bypass the limitations of a single vector representation is to use an attention mechanism BID3 BID40. We propose to approach this problem differently and design a method for adversarial decomposition of the learned input representation into multiple components. Our method encodes the input sentence into several vectors, where each vector is responsible for a specific aspect of the sentence. In terms of learning different separable components of input representation, our work most closely relates to the style transfer work, which has been applied to a variety of different aspects of language, from diachronic language differences BID42 to authors' personalities BID24 and even sentiment BID17 BID13. The style transfer work effectively relies on the more classical distinction between meaning and form BID9, which accounts for the fact that multiple surface realizations are possible for the same meaning. For simplicity, we will use this terminology throughout the rest of the paper. Consider the case when we encode an input sentence into a meaning vector and a form vector. We are then able to perform a controllable change of meaning or form by a simple change applied to these vectors. For example, we can encode two sentences written in two different styles, then swap the form vectors while leaving the meaning vectors intact. We can then generate new unique sentences with the original meaning, but written in a different style. In the present work, we propose a novel model for this type of decomposition based on adversarialmotivational training and design an architecture inspired by the GANs BID14 and adversarial autoencoders BID26. In addition to the adversarial loss, we use a special motivator BID0, which, in contrast to the discriminator, is used to provide a motivational loss to encourage the model to better decomposition of the meaning and the form, as well as specific aspects of meaning. 
We make all the code publicly available on GitHub 1.We evaluate the proposed methods for learning separate aspects of input representation on the following case studies:1. Learning to separate out a representation of the specific diachronic slice of language. One may express the same meaning using the Early Modern English (e.g. What would she have?) and the contemporary English (What does she want?)2. Learning a representation for a social register BID16 -that is, subsets of language appropriate in a given context or characteristic of a certain group of speakers. These include formal and informal language, the language used in different genres (e.g., fiction vs. newspapers vs. academic texts), different dialects, and even literary idiostyles. We experiment with the registers corresponding to the titles of scientific papers vs. newspaper articles. As mentioned above, the most relevant previous work comes from the style transfer research, and it can be divided into two groups:1. Approaches that aim to generate text in a given form. For example, the task may be to produce just any verse as long as it is in the "style" of the target poet.2. Approaches that aim to induce a change in either the "form" or the "meaning" of an existing utterance. For example, "Good bye, Mr. Anderson." can be transformed to "Fare you well, good Master Anderson" BID42 ).An example of the first group is the work by BID31, who trained several separate networks on verses by different hip-hip artists. An LSTM network successfully generated verses that were stylistically similar to the verses of the target artist (as measured by cosine distance on TfIdf vectors). More complicated approaches use language models that are conditioned in some way. For example, BID24 produced product reviews with a target rating by passing the rating as an additional input at each timestep of an LSTM model. BID38 generated reviews not only with a given rating but also for a specific product. At each timestep a special context vector was provided as input, gated so as to enable the model to decide how much attention to pay to that vector and the current hidden state. BID23 used "speaker" vectors as an additional input to a conversational model, improving consistency of dialog responses. Finally, BID12 performed an extensive evaluation of conditioned language models based on "content" (theme and sentiment) and "style" (professional, personal, length, descriptiveness). Importantly, they showed that it is possible to control both "content" and "style" simultaneously. Work from the second group can further be divided into two clusters by the nature of the training data: parallel aligned corpora, or non-aligned datasets. The aligned corpora enable approaching the problem of form shift as a paraphrasing or machine translation problem. BID42 used statistical and dictionary-based systems on a dataset of original plays by Shakespeare and their contemporary translations. BID4 trained an LSTM network on 33 versions of the Bible. BID19 used a Pointer Network BID41, an architecture that was successfully applied to a wide variety of tasks BID27 BID15 BID32, to enable direct copying of the input tokens to the output. Note that these works use BLEU BID30 as the main, or even the only evaluation measure. This is only possible in cases where a parallel corpus is available. Recently, new approaches that do not require a parallel corpora were developed in both CV and NLP. 
BID17 succeeded in changing tense and sentiment of sentences with a two steps procedure based on a variational auto-encoder (VAE) BID21. After training a VAE, a discriminator and a generator are trained in an alternate manner, where the discriminator tries to correctly classify the target sentence attributes. A special loss component forces the hidden representation of the encoded sentence to not have any information about the target sentence attributes. BID28 used a VAE to produce a hidden representation of a sentence, and then modify it to match the desired form. Unlike BID17, they do not separate the form and meaning embeddings. BID34 applied a GAN to align the hidden representation of sentences from two corpora and force them to do not have any information about the form via adversarial loss. During the decoding, similarly the work by BID24, special "style" vectors are passed to the decoder at every timestep to produce a sentence with the desired properties. The model is trained using the Professor-Forcing algorithm BID22. BID20 worked directly on hidden space vectors that are constrained with the same adversarial loss instead of outputs of the generator, and use two different generators for two different "styles". Finally, BID13 proposed two models for generating sentences with the target properties using an adversarial loss, similarly to BID34 and BID20.Comparison with previous work In contrast to the proposals of BID42, BID4, BID19, our solution does not require a parallel corpus. Furthermore, unlike the model by BID34, our model works directly on representation of sentences in the hidden space. Most importantly, in contrast to the proposals by BID28, BID17, BID20, BID13, our model produces a representation for both meaning and form and does not treat the form as a categorical (in the vast majority of works, binary) variable. Although the form was represented as dense vectors in previous work, it is still just a binary feature, as they use a single pre-defined vector for each form, with all sentences of the same form assigned the same form vector. In contrast, our work treats form as a truly continuous variable, where each sentence has its own, unique, form vector. Treating meaning and form not as binary/categorical, but as continuous is more consistent with the reality of language use, since there are different degrees of overlap between the language used by different registers or in different diachronic slices. Indeed, language change is gradual, and the acceptability of expressions in a given register also forms a continuum, so one expects a substantial overlap between the grammar and vocabulary used, for example, on Twitter and by New York Times. To the best of our knowledge, this is the first model that considers linguistic form in the task of text generation as a continuous variable. One significant consequence of learning a continuous representation for form is that it allows the model to work with a large, and potentially infinite, number of forms. Note that in this case the locations of areas of specific forms in the vector style space would reflect the similarity between these forms. For example, the proposed model could be directly applied to the authorship attribution problem. In this case, each author would have their own area in the form space, and the more similar the authors are in terms of writing style, the closer these areas would be to each other. We performed preliminary experiments on this and report the in Appendix A. 
Let us formulate the problem of decomposition of text representation on an example of controlled change of linguistic form and conversion of Shakespeare plays in the original Early Modern to contemporary English. Let X a be a corpus of texts DISPLAYFORM0 and X b be a corpus of texts DISPLAYFORM1 We assume that the texts in both X a and X b has the same distribution of meaning m ∈ M. The form f, however, is different and generated from a mixture of two distributions: DISPLAYFORM2 where f a and f b are two different languages (Early Modern and contemporary English). Intuitively,we say that a sample x i has the form f a if α. The goal of dissociation meaning and form is to learn two encoders E m: X → M and E f: X → F for the meaning and form correspondingly, and the generator G: M, F → X such that ∀j ∈ {a, b}, ∀k ∈ {a, b}: DISPLAYFORM3 That is, the form of a generated sample depends exclusively on the provided f j and can be the in the same domain for two different m u and m v from two samples from different domains X a and X b.Note that, in contrast to the previously proposals, the form f is not a categorical variable but a continuous vector. This enables fine-grained controllable change of form: the original form f i is changed to reflect the form of the specific target sentence f j with its own unique α a and α b while preserving the original meaning m i.An important caveat concerns the core assumption of the similar meaning distribution in the two corpora, which is also made in all other works reviewed in Section 2. It limits the possible use of this approach to cases where the distributions are in fact similar (i.e. parallel or at least comparable corpora are available). It does not apply to many cases that could be analyzed in terms of meaning and form. For example, books for children and scholarly papers are both registers, they have their own form (i.e. specific subsets of linguistic means and structure conventions) -but there is little overlap in the content. This would make it hard even for a professional writer to turn a research paper into a fairy tale. Encoder encodes the inputs sentences into two latent vectors m and f. The Generator takes them as the input and produces the output sentence. During the training, the Discriminator is used for an adversarial loss that forces m to do not carry any information about the form, and the M otivator is used for a motivational loss that encourages f to carry the needed information about the form. Our solution is based on a widely used sequence-to-sequence framework BID37 and consists of four main parts. The encoder E encodes the inputs sequence x into two latent vectors m and f which capture the meaning and the form of the sentence correspondingly. The generator G then takes these two vectors as the input and produces a reconstruction of the original input sequencex. The encoder and generator by themselves will likely not achieve the dissociation of the meaning and form. We encourage this behavior in a way similar to Generative Adversarial Networks (GANs) BID14, which had an overwhelming success the past few years and have been proven to be a good way of enforcing a specific distribution and characteristics on the output of a model. 
Inspired by the work of BID0 and the principle of "carrot and stick" BID33, in contrast to the majority of work that promotes pure adversarial approach BID14 BID34 BID13, we propose two additional components, the discriminator D and the motivator M to force and motivate the model to learn the dissociation of the meaning and the form. Similarly to a regular GAN model, the adversarial discriminator D tries to classify the form f based on the latent meaning vector m, and the encoder E is penalized to make this task as hard as possible. Opposed to such vicious behaviour, the motivator M tries to classify the form based on the latent form vector f, as it should be done, and encourages the encoder E to make this task as simple as possible. We could apply the adversarial approach here as well and force the distribution of the form vectors to fit a mixture of Gaussians (in this particular case, a mixture of two Guassians) with another discriminator, as it is done by BID26, but we opted for the "dualistic" path of two complimentary forces. Both the encoder E and the generator G are modeled with a neural network. Gated Recurrent Unit (GRU) BID6 ) is used for E to encode the input sentence x into a hidden vector h = GRU(x).The vector h is then passed through two different fully connected layers to produce the latent vectors of the form and the meaning of the input sentence: DISPLAYFORM0 We use θ E to denote the parameters of the encoder E: W m, b m, W f, b f, and the parameters of the GRU unit. The generator G is also modelled with a GRU unit. The generator takes as input the meaning vector m and the form vector f, concatenates them, and passes trough a fully-connected layer to obtain a hidden vector z that represents both meaning and form of the original input sentence: DISPLAYFORM1 After that, we use a GRU unit to generate the output sentence as a probability distribution over the vocabulary tokens: DISPLAYFORM2 We use θ G to denote the parameters of the generator G: W z, b m, and the parameters of the used GRU. The encoder and generator are trained using the standard reconstruction loss: DISPLAYFORM3 The representation of the meaning m produced by the encoder E should not contain any information about the form f. We achieve this by using an adversarial approach. First, we train a discriminator D, consisting of several fully connected layers with ELU activation function BID7 between them, to predict the form f of a sentence by its meaning vector:f D = D(m), wheref is the score (logit) reflecting the probability of the sentence x to belong to one of the form domains. Motivated by the Wasserstein GAN BID1, we use the following loss function instead of the standard cross-entropy: DISPLAYFORM0 Thus, a successful discriminator will produce negative scoresf for sentences from X a and positive scores for sentences from X b. This discriminator is then used in an adversarial manner to provide a learning signal for the encoder and force dissociation of the meaning and form by maximizing L D: L adv (θ E) = −λ adv L D, where λ adv is a hyperparameter reflecting the strength of the adversarial loss. Note that this loss applies to the parameters of the encoder. Our experiments showed that it is enough to have just the discriminator D and the adversarial loss L adv to force the model to dissociate the form and the meaning. However, in order to achieve a better dissociation, we propose to use a motivator M and the corresponding motivational loss. 
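Before detailing the motivator, here is a minimal PyTorch-style sketch of the encoder, generator, and the shared discriminator/motivator head described above; the dimensions, activation choices, and class names are illustrative assumptions rather than the authors' exact implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # GRU encoder followed by two linear heads producing the meaning (m) and form (f) vectors.
    def __init__(self, vocab_size, emb_dim=256, hid_dim=256, m_dim=256, f_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_m = nn.Linear(hid_dim, m_dim)
        self.to_f = nn.Linear(hid_dim, f_dim)

    def forward(self, tokens):
        _, h = self.gru(self.emb(tokens))
        h = h.squeeze(0)
        return self.to_m(h), self.to_f(h)

class Generator(nn.Module):
    # Fuses m and f into an initial hidden state z and decodes tokens with a GRU.
    def __init__(self, vocab_size, emb_dim=256, hid_dim=256, m_dim=256, f_dim=64):
        super().__init__()
        self.fuse = nn.Linear(m_dim + f_dim, hid_dim)
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, m, f, dec_inputs):
        z = torch.tanh(self.fuse(torch.cat([m, f], dim=-1)))   # activation is an assumption
        o, _ = self.gru(self.emb(dec_inputs), z.unsqueeze(0))
        return self.out(o)                                      # per-step token logits

def form_scorer(in_dim, hidden=128):
    # Shared architecture for the discriminator D (fed m) and the motivator M (fed f).
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ELU(), nn.Linear(hidden, 1))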
Conceptually, the motivational loss is the opposite of the adversarial loss, hence the name. Like the discriminator D, the motivator M learns to classify the form f of the input sentence. However, its input is not the meaning vector but the form vector: f̂_M = M(f). The motivator has the same architecture as the discriminator, and the same loss function. While the adversarial loss forces the encoder E to produce a meaning vector m with no information about the form f, the motivational loss encourages E to encode this information in the form vector by minimizing the motivator loss L_M. The overall training procedure follows the methods for training GANs BID14 BID1 and consists of two stages: training the discriminator D and the motivator M, and training the encoder E and the generator G. In contrast to BID1, we do not train D and M more often than E and G. In our experiments we found that simple training in two stages is enough to achieve dissociation of the meaning and the form. The encoder and generator are trained with a loss function that combines the reconstruction loss with the losses from the discriminator and the motivator. Similarly to the evaluation of style transfer in CV, evaluation of this task is difficult. We follow the approach of BID34 and the recently proposed methods of BID13 for evaluating "transfer strength" and "content preservation". The authors showed that the proposed automatic metrics correlate with human judgment to a large degree and can serve as a proxy. Below we give an overview of these metrics. Transfer Strength. The goal of this metric is to capture whether the form has been changed successfully. To do that, a classifier C is trained on the two corpora, X_a and X_b, to recognize the linguistic "form" typical of each of them. After that, a sentence whose form/meaning was changed is passed to the classifier. The overall accuracy reflects the degree of success of changing the form/meaning. This approach is widely used in CV, and has been applied in NLP as well BID34. In our experiments we used a GRU unit followed by four fully-connected layers with ELU activation functions between them as the classifier. Content preservation. Note that transfer strength by itself does not capture the overall quality of a changed sentence. An extremely overfitted model that always produces the single most characteristic sentence of one corpus would have a high score according to this metric. Thus, we need to measure how much of the meaning was preserved while changing the form. To do that, BID13 proposed a cosine-similarity-based metric using pretrained word embeddings. First, a sentence embedding v is computed by concatenating max, mean, and min pooling over the timesteps. Next, the cosine similarity score s_i between the embedding v_i^s of the original source sentence and the embedding v_i^t of the target sentence with the changed form is computed, and the scores across the dataset are averaged to obtain the total score. The metrics described above treat the form as a categorical (in most cases, even binary) variable. This was not a problem in previous work, since the change of form could be done by just inverting the form vector. Our work, in contrast, treats the form as a continuous variable, and therefore we cannot use the proposed metrics directly. To enable a fair comparison, we propose the following procedure. For each sentence s_s in the test set from the corpus X_a we sample k = 10 random sentences from the corpus X_b of the opposite form.
After that, we encode them into their meaning m_i and form f_i vectors and average the form vectors to obtain f_avg. We then generate a new sentence with its original meaning vector m_s and the resulting form vector f_avg, and use it for evaluation. This process enables a fair comparison with the previous works that treat form as a binary variable. We performed an extensive evaluation of the proposed method on several datasets that reflect different changes of meaning, form, or specific aspects of meaning, such as sentiment polarity. Changing form: register. This experiment is conducted with a dataset of titles of scientific papers and news articles published by BID13. This dataset (referred to as "Headlines") contains titles of scientific articles crawled from online digital libraries, such as "ACM Digital Library" and "arXiv". The titles of the news articles are taken from the "News Aggregator Data Set" from the UCI Machine Learning Repository BID10. Changing form: language diachrony. Diachronic language change is explored with the dataset composed by BID42. It includes the texts of 17 plays by William Shakespeare in the original Early Modern English, and their translations into contemporary English. We randomly permuted all sentences from all plays and sampled the training, validation, and test sets. Note that this is the smallest dataset in our experiments. Previous work on style transfer for text also included experiments with changing sentiment polarity BID34 BID13. We do not report experiments with sentiment data, since the change in sentiment polarity corresponds to a change in a specific aspect of meaning, rather than form. We therefore believe the comparison with these data would not be instructive. Probably the most recent work similar to ours is the model proposed by BID13, in particular the "style-embedding" model. We implemented this model to provide a baseline for comparison. The classifier used in the transfer strength metric achieves very high accuracy (0.832 and 0.99 for the Shakespeare and Headlines datasets correspondingly). These results concur with those of BID34 and BID13, and show that the two forms in the corpora are significantly different. Following BID13, we show the results of different configurations of the sizes of the form and meaning vectors in FIG2. Namely, we report combinations of 64- and 256-dimensional vectors. Note that the size of the form vector is important. The larger the form vector, the higher the transfer strength, but the lower the content preservation. This is consistent with BID13, where a similar behaviour was observed. It is clear that the proposed method achieves significantly better transfer strength than the previously proposed model. It also has a lower content preservation score, which means that it repeats fewer exact words from the source sentence. Note that a low transfer strength together with a very high (0.9) content preservation score means that the model was not able to successfully learn to transfer the form, and the target sentence is almost identical to the source sentence. The Shakespeare dataset is the hardest for the model in terms of transfer strength, probably because it is the smallest dataset, but the proposed method, in contrast to the baseline, performs consistently well in transferring both form and meaning. Fluency of generated sentences. Note that there is no guarantee that the generated sentences would be coherent after switching the form vector.
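For reference, a minimal sketch of the content-preservation score and of this form-averaging transfer step, assuming access to pretrained word embeddings and to the trained encoder/generator as callables; the pooling choice and function names are illustrative assumptions.

import numpy as np

def sentence_embedding(word_vectors):
    # Concatenate max, mean, and min pooling of pretrained word embeddings over timesteps.
    w = np.asarray(word_vectors)                     # shape: (n_tokens, emb_dim)
    return np.concatenate([w.max(0), w.mean(0), w.min(0)])

def content_preservation(source_sents, transferred_sents, embed):
    # Average cosine similarity between source sentences and their form-transferred outputs.
    scores = []
    for s, t in zip(source_sents, transferred_sents):
        vs, vt = sentence_embedding(embed(s)), sentence_embedding(embed(t))
        scores.append(vs @ vt / (np.linalg.norm(vs) * np.linalg.norm(vt)))
    return float(np.mean(scores))

def transfer_with_avg_form(encode, generate, sentence, opposite_corpus, k=10, seed=0):
    # Swap in the average form vector of k random sentences from the opposite-form corpus.
    rng = np.random.default_rng(seed)
    m, _ = encode(sentence)
    idx = rng.choice(len(opposite_corpus), size=k, replace=False)
    f_avg = np.mean([encode(opposite_corpus[i])[1] for i in idx], axis=0)
    return generate(m, f_avg)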
In order to estimate how this switch affects the fluency of generated sentences, we trained a language model on the Shakespeare dataset and calculated the perplexity of the generated sentences using the original form vector and the average of form vectors of k random sentences from the opposite style (see subsubsection 5.1.1). While the perplexity of such sentences does go up, this change is not big (6.89 vs 9.74). To investigate the impact of the motivator, we visualized form and meaning embeddings of 1000 random samples from the Headlines dataset using t-SNE algorithm BID39 with the Multicore-TSNE library . The is presented in FIG3.There are three important observations. First, there is no clear separation in the meaning embeddings, which means that any accurate form transfer is due to the form embeddings, and the dissociation of form and meaning was successful. Second, even without the motivator the model is able to produce the form embeddings that are clustered into two group. Recall from section 4 that without the motivational loss there are no forces that influence the form embeddings, but nevertheless the model learns to separate them. However, the separation effect is much more pronounced in the presence of motivator. This explains why the motivator consistently improved transfer strength of ADNet, as shown in FIG2. 6.2 QUALITATIVE EVALUATION Table 1 and Table 2 show several examples of the successful form/meaning transfer achieved by ADNet. Table 1 presentes the of an experiment that to some extent replicates the approach taken by the authors who treat linguistic form as a binary variable BID34 BID13. The sentences the original Shakespeare plays were averaged to get the "typical" Early Modern English form vector. This averaged vector was used to decode a sentence from the modern English translation back into the original. The same was done in the opposite direction. → This man will tell us everything. (EME) I've done no more to caesar than you will do to me. (CE) → I have done no more to caesar than, you shall do to me. (EME) Table 1: Decoding of the source sentence from Early Modern English (EME) into contemporary English (CE), and vice versa. Table 2 illustrates the possibilities of ADNet on fine-grained transfer applied to the change of register. We encoded two sentences in different registers from the Headlines dataset to produce form and meaning embeddings, and then we decoded the first sentence with the meaning embedding of the second, and vice versa. As can be seen from Table 2, the model correctly captures the meaning of sentences and decodes them using the form of the source sentences. Note how the model preserves specific words and the structure of the source sentence. In particular, note how in the first example, the model decided to put the colon after the "crisis management", as the source form sentence has this syntactic structure ("A review:"). This is not possible in the previously proposed models, as they treat form as just a binary variable. A review: detection techniques for LTE system Crisis management: media practices in telecommunication management Situation management knowledge from social media A review study against intelligence internet Security flaw could not affect digital devices, experts say Semantic approach approach: current multimedia networks as modeling processes Semantic approach to event processing Security flaw to verify leaks Table 2: Flipping the meaning and the form embeddings of two sentence from different registers. 
Note the use of the colon in the first example, and the use of the "to"-constructions in the second example, consistent with the form of the source sentences. We conducted some experiments to test the assumption that the derived meaning embeddings should improve performance on downstream tasks that require understanding of the meaning of the sentences regardless of their form. We evaluated embeddings produced by ADNet, trained on the Headlines dataset, on the task of paraphrase detection. We used the SentEval toolkit BID8 and the Microsoft Research Paraphrase Corpus BID11. The F1 scores on this task for different models are presented in Table 3. Note that all models, except InferSent, are unsupervised. The InferSent model was trained on the large SNLI dataset, consisting of more than 500,000 manually annotated pairs. ADNet achieves the highest score among the unsupervised systems and outperforms the regular sequence-to-sequence autoencoder by a large margin. Table 3: F1 scores on the task of paraphrase detection using the SentEval toolkit BID8. In this paper, we presented ADNet, a new model that performs adversarial decomposition of text representation. In contrast to previous work, it does not require a parallel training corpus and works directly on hidden representations of sentences. Most importantly, it does not treat the form as a binary variable (as done in most previously proposed models), enabling a fine-grained change of the form of sentences or of specific aspects of meaning. We evaluate ADNet on two tasks: the shift of language register and diachronic language change. Our solution achieves superior results, and t-SNE visualizations of the learned meaning and style embeddings illustrate that the proposed motivational loss leads to significantly better separation of the form embeddings.
A method which learns separate representations for the meaning and the form of a sentence
706
scitldr
We were approached by a group of healthcare providers who are involved in the care of chronic patients looking for potential technologies to facilitate the process of reviewing patient-generated data during clinical visits. Aiming at understanding the healthcare providers' attitudes towards reviewing patient-generated data, we conducted a focus group with a mixed group of healthcare providers. Next, to gain the patients' perspectives, we interviewed eight chronic patients, collected a sample of their data and designed a series of visualizations representing patient data we collected. Last, we sought feedback on the visualization designs from healthcare providers who requested this exploration. We found four factors shaping patient-generated data: data & context, patient's motivation, patient's time commitment, and patient's support circle. Informed by the of our studies, we discussed the importance of designing patient-generated visualizations for individuals by considering both patient and healthcare provider rather than designing with the purpose of generalization and provided guidelines for designing future patient-generated data visualizations. Collecting patient-generated data is becoming increasingly common in chronic disease management. Patients use technological tracking tools to collect health and lifestyle data in disparate places. Both healthcare providers and patients agree that this data could be used to make smarter decisions to improve patients' quality of life and to aid providers in making decisions about patient ongoing care. There are already technological tools for tracking and visualizing health data such as sleep (e.g., ), physical activity (e.g., ), variations in weight (e.g., ), and blood sugar level (e.g., ). However, most of these tracking tools are not designed to fully meet patients and healthcare providers' expectations and do not support reviewing patient-generated data with healthcare providers during clinical visits. One way to support patients in presenting their data with healthcare providers is to visualize the patient-generated data collections effectively. Yet, we lack an understanding of what type of visualization designs can support chronic patients to present and review their health data with healthcare providers during clinical visits. To answer this question, we explored patients' and healthcare providers' perspectives on presenting and reviewing patient data. To extract healthcare provider requirements when reviewing patientgenerated data during a clinical visit, we conducted a focus group with a mixed group of healthcare providers. To uncover patient stories and their approaches to tracking and presenting their health data, we interviewed eight patients with chronic conditions who actively track their health data. Our findings revealed four factors shaping patient-generated data: data items & data context collected by patients, time commitment invested by patients to track data, patients' motivation for collecting data, and patients' support circle. Considering these four factors, we designed various visualizations representing patient-generated data collections we gathered from our patients. Instead of pursuing a single generalized visualization design, we designed individually tailored visualizations for each patient. Based on our preliminary visualization designs, we proposed a design space of patient-generated data visualizations. 
Next, using these individually tailored visualization designs as elicitation artifacts, we interviewed the healthcare providers who had initiated the request for this project to reflect on the designs. Healthcare providers pointed to four use cases in which they envision these visualizations could support their practice. As a whole, the results of all our studies led to one message: the importance of designing patient-generated data visualizations by considering each patient and healthcare provider rather than designing for generalization. However, it may seem impossible to either design a unique set of visualizations for each patient or expect patients to design their own visualizations. We, as healthcare technology designers, need to provide patients and providers with a set of visualization designs as starting points. This approach would let each patient and provider choose the visualization designs that work best for them, with the capacity to customize the designs based on their lifestyle, conditions, collected data, and patient-provider relationships. Our contributions in this paper are as follows: (1) we identified four factors shaping patient-generated data; (2) we presented a design space of visualizations representing patient-generated data collections; and (3) we provided guidelines for designing future patient-generated data visualizations. In this section, first we discuss patients' perspectives and goals for collecting their health data. In the second part, we provide an overview of healthcare providers' perspectives on the benefits and the challenges of using patient-generated data in their practice. In the last part, we discuss how technological and visualization tools can support patients and healthcare providers with presenting and reviewing patient-generated data collections. The number of patients with chronic conditions is increasing every day around the world. The nature of chronic conditions requires close monitoring and self-managed care for these patients. A survey study in 2013 showed at least seven out of ten adults in the U.S. track a health indicator for themselves or for someone in their care. An increase in the availability of wearable sensors, mobile health apps, and novel portable technologies has given patients an extra boost to track more personal health data. People track their health data in various forms, including memorization, original artifacts, personal paper records, personal electronic records, and electronic patient portals. Patients track different types and amounts of data depending on their personal health goals. These goals range from preventing further complications, having more control over their health, setting personal health goals, and improving their conditions, to sharing these self-collected data with their healthcare providers. Studies have shown that sharing patient-generated data with healthcare providers can improve patient-provider communication. Sharing health data also empowers patients to take control of the conversation during a clinical visit and helps healthcare providers build a relationship with patients. Many patients share their self-collected health data with their healthcare providers during clinical visits seeking tailored medical advice. Some healthcare providers see value in patients collecting their health data and presenting it during clinical visits. They think that by reviewing patient-generated data, they will gain more insight into patient goals and will be able to provide more tailored care to patients.
Providers think, in some cases, patientgenerated data might be more reliable than clinic measurements because the data is collected at more frequent intervals, and there is less recall bias. Providers mentioned that often, a hospital's electronic medical record system have misinformation or inaccuracies. In addition, patient data measured in the clinic (such as blood pressure) may be affected by the white coat effect and stress of the clinical environment. In these situations, patient-generated data can be used to reconcile these inaccuracies as patientgenerated data may contain less false-positive data than patient health data collected in the clinic. We should note that although healthcare providers may find patient-generated data complementary to clinical measurements and history taking if tracked in a meaningful way, they do not consider this data as a replacement to clinically measured data. Patients may not be willing to record their data when they have abnormal readings due to fear of consequences and may be worried that their data will be part of their permanent clinical record. In addition, providers sometimes express frustrations when patients do not track enough data, track excessive data, or track non-meaningful data. Patients also use different mediums and organization formats that work best for them to collect and present their health data. As a , the patient-generated data collections become heavily personal and complex, making it challenging for healthcare providers to understand and analyze. It is difficult to find the time to examine unrequested data during a short clinical visit. Most clinical visits are currently short. The common clinical visits with family physicians usually last about 10 to 20 minutes, leaving a short amount of time for reviewing patient-generated data. The providers may not find as much value reviewing patient-generated data during a clinical visit. Storing this data safely and securely can be challenging for providers and can add to their workload. Thus, there is still not a fully clear understanding of how, when, and what type of patient-generated data is most useful to review and discuss during clinical visits. One way to facilitate reviewing patient-generated data would be to have standardized data collection and presentation processes. However, a standardized process is probably not a panacea, as every patient and healthcare provider may have individualized preferences and needs. There is evidence that technology can support providers and patients in improving the quality of communicating patient data. Previous work raised questions about how technology should be designed that could assist both patients and healthcare providers in smoothly reviewing patient-generated data during clinical visits. One way could be visualizing these patient-generated data. Visualizing this data can benefit both patients and providers, if carefully designed so that it seamlessly integrates both perspectives into patient care planning. However, designing a general solution that works for all patients and providers is not easily achievable. Thus, first we need to move towards designing tailored visualizations, making an individualized visualization experience for each patient and provider. We were approached by a group of healthcare providers from a local hospital who are involved in the care of chronic patients to explore if, and how, to design technology that can enhance the process of presenting and reviewing patient-generated data during a clinical visit. 
To answer this question, we took an iterative design approach with close involvement of both patients and healthcare providers. The Institutional Review Board of (anonymized) University approved this study. First, we conducted a focus group with the healthcare provider that voiced concerns for reviewing patient-generated data. Then, to complete the healthcare providers' perspectives reviewing patientgenerated data,, we interviewed eight patients actively collect health data and collected a sample of their data. We asked our patient participants about their experience collecting, analyzing, and sharing their data with healthcare providers. Next, we leveraged this understanding to propose potential visualization designs representing patient-generated data that we collected. Our goal was to design visualizations to improve the process of reviewing these data during clinical visits. Last, we interviewed healthcare providers seeking their reflection on our proposed visualization designs. We also asked the providers how they envision using these visualizations in their practice. To clarify, confirm, and gain a deeper understanding of the healthcare providers' perspectives about the patient-generated data collection review process, we conducted a formal focus group with a mixed group of healthcare providers. Our focus group included a subgroup of providers who initially approached us including a clinical endocrinologist (with 21 years of experience), one internal medicine specialist physician (with 29 years of experience), and one healthcare provider (with 9 years of experience) directly supporting patients who monitor their data. Three other healthcare researchers were present during the focus group listening to the discussions. In our focus group, we asked healthcare providers about their experiences reviewing the patient-generated data, analyzing and understanding the patient data, and giving advice to patients based on their data. One interviewer primarily posed the questions during the discussion and two other researchers from our interview team took field notes. The focus group lasted around 60 minutes. We video-recorded, transcribed the focus group discussions, and later we used the grounded theory to analysis the data. To understand patients' perspectives on tracking and presenting their self-generated health data, we interviewed eight patients who suffer from one or multiple chronic conditions. We used several methods of recruitment for this study: emails from a local Patient Care Networks directors, Patient Care Networks newsletter ads, targeted recruitment through the healthcare provider who participated in the focus group and snowball sampling. We conducted an hour long semi-structured interview with each patient. We formed our patient interview questions based on the of our discussions during the focus group with healthcare providers. We asked participants to bring a sample of their data to the interview session and walk us through their data sample in detail. We video-recorded and transcribed all the interviews. To analyze the interview , we used the grounded theory method, analyzing each interview in a separate process. Our goal was to reach a deeper understanding of each patient's story. We state proof of existence for each interview and do not try to generalize our findings across patients. Next, based on the requirements of each individual patient, we sketched various visualization alternatives representing their own patient-generated data collections. 
As a group, we discussed the visualizations and how they meet the patients' needs. Then, we selected one or several alternative designs that best matched the patient's requirements. To complete our design cycle, we took our visualization designs back to three healthcare providers, who were among the group that initiated this project, seeking their feedback. We interviewed an internal medicine physician with 29 years of experience (C1), a clinical endocrinologist with 21 years of experience (C2), and a complex chronic specialist physician with 22 years of experience (C3). Each session lasted between 40-60 minutes and was video recorded and later transcribed. In the interview session, we first gave the providers a description of the patients' conditions, their personal stories, and their data collection processes. Then, we shared the visualization designs with the providers and observed their reactions walking through and talking out loud about the designs. From our analysis of the focus group transcripts, we extracted four requirements by our healthcare provider participants, to support reviewing patient-generated data during clinical visits. R1-Integrating data context: Healthcare providers think patient sometimes collect too many data items, but data without context is not helpful for medical purposes, "you get the data in a 7 by 6 table with numbers and they are all over the place. Without food information, stress information, activity information it does look like a bunch of noise. You don't see a pattern without being able to query on those other dimensions. Like your sugar is high, are you stressed?" (C1). To overcome this challenge, providers need tools that are able to integrate context with data. R2-Summarizing for quick presentation of data: Patients sometimes come to clinical visits with a large collection of data and expect their healthcare providers to help them make sense of their data "they clearly put in a lot of work, but you don't have time and you have nowhere to begin" (C1). Healthcare providers want tools with abilities to summarize and filter patient data to see trends, patterns, and anomalies. R3-Sharing goals and motivations: Our healthcare providers told us patients usually have different goals than providers which may cause conflicts. Patients often like to discuss details of their data, but providers are more interested in an overview of the whole data, so they wanted "a platform that forces people to be explicit between stakeholders" (C2). With this in mind, providers wanted to have tools with ability to overview and focus on parts of the data to explore the patient data in both focused and detailed views accommodating their goals and patients' goals. R4-Supporting conversations: Both patients and healthcare providers need support to discuss their concerns "[patient says] I have questions about [this] and the doctor says ok, great, that is what is going on there. But I am more concerned about this" (C1). Healthcare providers told us they need support opening up communications with patients which may have not happened otherwise; tools that can represent patient data in different views letting patients and providers discuss various aspects of patient data. The findings from the focus group helped us form our patient interview questions. Our healthcare providers found patient-generated data useful when patients collect meaningful data with context. 
Thus, in our patient interviews, we asked our patient participants to talk about the data items and the context data they collect. Our providers expressed their concerns about patients committing an excessive amount of time on data collection ing in large datasets. Thus, to get patient perspectives in this manner, we asked our patient participants to tell us about their time commitment to data collection. Our healthcare providers talked about the impact of patient goals and motivation on their data collection and data sharing. Thus, in our patient interviews, we asked patients to tell us about their goals and motivation for collecting data and if they were advised to track data by their providers. Our healthcare providers saw value in having a patient's presence during clinical conversations. Thus, we asked the patients whether they shared their data with their healthcare providers or their caregivers at home and how was their experience getting support. To design the patient-generated visualization designs, we considered the four requirements (R1-R4) identified from our focus group and followed the design guidelines established in the literature. To accommodate data context integration, R1, in the visualization, we used "Tooltip" which is an identifying tool presenting the attribute data attached to an object. To incorporate R2, we followed the basic information seeking principles. To fulfill R3, we incorporated "overview and details-on-demand" interactions in our designs. To support patients and providers view patient data from different perspectives, R4, we designed multiple visualization designs for each patient-generated data collection. We allocated pseudonyms to confer the anonymity of our patients. In each part of this section, we first present the profile of the patient; their data and context, their time commitment, their motivation, and their support circle. For each patient, we ideated and designed one or multiple visualizations. These visualization designs are simple visualizations that are carefully designed to capture providers' requirements and each individual patient needs and may not be novel designs by themselves. We explain the detail of each visualization design we sketched to display patient data and how we took the providers' and patients' requirements into considerations when exploring visualization design opportunities to represent their patient-generated data collections. We did not restrict ourself to designing a certain number of visualization representations; we sketched as many design possibilities as we could think of to present the data for the patient. In total, we generated 20 preliminary visualization designs for eight patients. We laid out these designs on a design space board (Fig. 1). In this design space, each column corresponds to one patient and the visualizations in the column are design variations for that patient. Later, as a group, we discussed all of the visualization designs and selected the designs that best represent each individual patient. In this figure, the selected designs are highlighted with an orange border. We acknowledge that these designs are not the only possible visualizations and other designers/researchers may come up with variations to these designs. Here, we present our designs and we hope this will be a starting point for other researchers and designers to contribute more patient stories to the literature and to move towards thinking about designing more for individuals. 
Maria is 67 years old, one day she experienced high blood pressure and visited the hospital emergency room. After that hospital visit, Maria constantly experienced high blood pressure. That year, she was diagnosed with hypertension. Data & context: Maria was advised to track her blood pressure and heart rate on a regular basis using a cuff machine. She uses a notebook to record her readings (Fig. 3 -a). We designed a visualization representing both Maria's blood pressure and heart rate readings (Fig. 1 -column p#1 -first row). We display blood pressure readings in the form of bars and show the patient's heart rate on demand. Each bar represents one blood pressure reading, we associate the bottom border of the bar to diastolic and the top border of the bar to systolic. The two horizontal lines in the show the normal blood pressure reading range (120 over 80). In addition, we added colour to each bar showing a normal (green), an abnormal (yellow), or a dangerous (red) blood pressure reading. Time commitment: Maria tracks her blood pressure and heart rate three to four times per day. Thus in our design each bar in the visualization shows one reading with the time of the recording. Motivation: Maria's ultimate goal for tracking her data is "to feel better... make my blood pressure go down" (P01). After her diagnosis, she changed her life style to reach her goals. She is drinking more fluids and reduced the amount of salt in her diet. She is hopeful that she can reach her goal. She also keeps a record of events or activities she thinks may be relevant to her blood pressure, so later during a medical visit, she can discuss them with her healthcare providers. Thus, in our design we have an option to add notes associated with her blood pressure records. Support circle: Maria presents her notebook to her family physician saying, "because of this [notebook], it will be easier for me to inform the doctor" (P01). She hopes her family physician can make sense of the data and make adjustments to her treatment plans based on her data. To accommodate Maria's need for sharing her data, we designed this visualization with the capacity to show an overview of blood pressure readings over months (top row in the design) to quickly check her overall status in the past months as well as detailed numbers on demands (bottom row in the design). Andrew was diagnosed with type 1 diabetes about 16 years ago at the age of 52. Due to his age, he was first misdiagnosed with type 2 diabetes. After his diagnosis, his interaction with the healthcare system changed from visiting his family physicians once a year to getting an A1C test every three months. He has been in direct interaction with a nurse educator, a foot care clinic, and an endocrinologist in a diabetic neuropathology clinic. Data & context: Andrew measures his blood glucose and basal rate as advised by his nurse educator and endocrinologist (Fig. 3 -b). He uses a glucose meter to measure the concentration of glucose in his blood and an insulin pump to calculate the amount of insulin required. We represent Andrew's blood glucose data in two different visualization designs. The (Fig. 1 -column p#2 -first row) is a detailed view of one day of Andrew's glucose level. The circle shows a 24-hour clock. Each time Andrew measures his glucose, we show his reading on that time on the clock with a bar. 
The height of the bar represents the glucose rate and the color of the bar represents the normality of the glucose rate; if the glucose reading is too low (red), low (yellow), normal (green), high (yellow), or too high (red). In the (Fig. 1 -column p#2 -second row), the top part shows all blood glucose ratings recorded in a month with circular points. The y-axis shows the glucose rate and the x-axis shows the date. We also double coded each data point with the same colour themes as the first design. Time commitment: Before each meal, Andrew measures his blood glucose using the glucose meter and enters his readings into the insulin pump. The pump automatically send Andrew's insulin intake to his nurse educator. Besides that, Andrew keeps track of his basal rates that he measures using the glucose meter, in a notebook to later share with his nurse educator. Every time Andrew visits his healthcare providers to check on his conditions, he shares the recorded data he collected over the past few months with his healthcare providers. Thus, in our visualization designs, we included a weekly or monthly overview of his glucose rates at the bottom of both designs. Motivation: Andrew lives a good life, eats healthy, gets enough sleep, and has a balanced work-life lifestyle. He recently got diabetes complications. After experiencing the complications, he is hoping to start an exercise routine. Andrew tracks his exercise on the side to understand the effect of his physical activities on his blood glucose. Thus, we added an option for the patient to add a free style note (e.g., exercise) on his data point to appear on demand when hovering over the data point in the visualizations. Support circle: Andrew has a hard time analyzing and finding trends in his data to adjust his lifestyle saying, " There's so many factors that come to play with your blood sugars and trying to get everything in the right spot" (P02). He expects his healthcare providers to make sense of his data for him and give him direct instructions on how to better manage his conditions. Thus, we included a weekly and a monthly view of the glucose recordings on the bottom of both designs to give an overview of his data. Jen is 34 years old and was diagnosed with hypertension when she was 18 years old and was medicated for a few months. Last year, she had a visit with her family physician to get treatment for an infection and her blood pressure reading was high at the clinic. But, when she checked her blood pressure at home, she noticed that her reading was closer to normal readings than in the clinic. Data & context: Jen tracks her blood pressure and heart rate (Fig. 3 -c). Since she is experiencing a steady heart rate, she mainly focuses on her blood pressure data. Thus, we only display her blood pressure data, in two different visualization alternatives. We designed two visualizations displaying Jen's blood pressure data. In the (Fig. 1 -column p#3 -first row) design, we have designed a tree based visualization with the ability to expand on demand. The top root represent the average blood pressure readings of the patient over one year. The next level shows the seasons, then months, and lastly the daily blood pressure reading. Jen uses three different colors to distinguish her readings into normal, borderline, and abnormal. With colour coding her numbers, she can quickly glance over her data. We have used the same idea in our visualization design and color coded her blood pressure readings. In the (Fig. 
1 -column p#3 -second row) design, each bar shows an average of all Jen's blood pressure readings in a day, where the colours indicate the normality of the number. Dark green indicates high blood pressure readings, green indicates a normal blood pressure readings, and light green indicates low blood pressure readings. Looking at this view, she can decide if she is having more dark or light colors in a period of time. Whenever she decides to focus on a certain period of time, she can select that section and a table view appears underneath with data displayed for each day. Time commitment: Jen has been measuring her blood pressure a few times per week for a year and believes her condition is under control with steady normal blood pressure readings: "Lately, it's been quite good for the last several months. So, kind of since January I check it maybe once a week now as opposed to every day"(P03). Thus, in our designs we only display maximum one reading per day. Motivation: Since her last clinical visit, Jen monitors her numbers to prevent any complications or developing hypertension for the second time. Last time she was taking medications for her hypertension, she experienced many side effects, and she fears that the healthcare providers may medicate her again: "I've been borderline and they've talked about medicating me for it, but I would rather not be if I can avoid it. So, I am just trying to manage it other ways before getting to that point" (P03). Support circle: Jen usually does light exercises, gardening, or short walks to stay healthy. To stay under 1500 mg sodium per day she plans her weekly meals with her husband. She expressed her concerns to her physicians that she only has high blood pressure when she is at the clinic, visiting her providers gets her anxious and stressed. To overcome this problem, she writes notes next to her readings keeping track of any triggering factors such as a clinical visit. She is hoping by showing the numbers she tracked at home to her healthcare providers, she can tell them, "No, it's usually right around 120/80. It's not always this high" (P03). Therefore, in our designs, we have an option for Jen to mark the blood pressure readings measured during her clinical visits. Lucas is 43 years old and suffers from hypertension, type 2 diabetes, and depression. Lucas was hospitalized a few times with suicidal thoughts and high blood glucose. Tracking his blood pressure and glucose level helps him get his conditions under control; however, sometimes he experiences an emotional break down when his readings are higher than the normal range advised by his providers. Data & context: Lucas collects his glucose, blood pressure, and heart rate (Fig. 3 -d) in his notebooks. We designed two visualizations displaying all three items he is tracking (Fig. 1 -column P#4). Lucas wants to look at his glucose, blood pressure, and heart rate data all at once. Thus, we display all his data in one view. Each data point is color coded in both visualizations based on the ranges defined for Lucas's conditions. Green indicates normal, yellow shows borderline, and the out of range readings are colored in red. Time commitment: Lucas was advised by his providers to record his data five times a day. However, he is dealing with a lot of pressure due to his conditions and his personal problems, so he only manages to track his data once a day. Thus, in the (Fig. 
1 -column P#4 -first row), each vertical division in the chart shows one data item: blood pressure, blood glucose, and heart rate. In the second design (Fig. 1 -column P#4 -second row), we show each day of data readings in a flower-shaped visualization, each petal representing one data item: blood pressure, blood glucose, and heart rate. Motivation: He feels frustrated and upset with himself for not having his conditions under control. Lucas hopes to get support that motivates him to track his data, but does not want to be pushed. He wants to exercise regularly, as it can help him stabilize his blood pressure and glucose level; however, his busy schedule does not allow for exercise. Instead, he tries to go for short walks to lower his blood pressure when he experiences high blood pressure. His goal is to get off the insulin by next year. Support circle: Lucas feels that he does not have enough family support and that his family lacks compassion and does not understand the seriousness of his conditions. He has difficulty making sense of his data and expects his healthcare providers to understand his data and give him advice based on them. For instance, he was hoping to find relations between his blood pressure readings and glucose level, but could not find any correlations. Thus, we visualized all three data items he collects adjacent to each other in one view. Ken is 37 years old and suffers from multiple conditions. He has had memory problems, paranoia, and learning difficulties since childhood. He was diagnosed with a behavioural disorder in 2005, mental health problems in 2009, and Asperger syndrome in 2011. In addition, Ken has digestive problems and is experiencing pain in different parts of his body (e.g., neck, back, shoulder, ankle), which have not been officially diagnosed. Data & context: Ken tracks his nutrition data and symptoms related to his stomach pain and bowel movements using the MySymptoms app. He tracks his pain to help with diagnosing the source of his pain (Fig. 3 -e). To understand the effects of his mental state on his conditions, he also tracks his mood. Ken prefers using multiple apps on his tablet to record different health data items; therefore, we also visualized his data in two separate designs. He is happy with the app he uses for tracking his nutrition, so we focused our designs on the other data items (mood and pain). We sketched a visualization displaying Ken's mood data (Fig. 1 -column P#5 -first row). Each day on the calendar shows Ken's mood of the day, which is colour coded: happy (green), normal (yellow), sad (red), and self-defined (blue). We sketched a second visualization representing Ken's pain data (Fig. 1 -column P#5 -second row). Time commitment: Ken tracks his mood once every day. To present his mood data in the calendar visualization, we also allow for one mood entry per day (Fig. 1 -column P#5 -first row). On the other hand, the pain body mock-up visualization (Fig. 1 -column P#5 -second row) does not display any time data and lets Ken record as many occurrences as he experiences. Each ring in this visualization represents pain experienced once in the marked location of the body. Motivation: Ken's goals are to eat healthier, get more physically active, and lose weight. Also, he is hoping to get more involved in his care. He records, as notes, relevant context that he thinks may trigger his mood. Thus, we added an option for him to add free-style notes to keep track of the context associated with each day in the calendar view visualization.
Support circle: Ken tracks several symptoms and trigger factors that he thinks may be helpful for improving his health, but his healthcare providers do not always find his collected data useful. He is confused about which data items are useful to collect: "I gave all my symptoms to her, all recorded on a sheet. She said, 'Oh, we're just looking at the gut issues.' I'm like, 'What about the rest?'" (P05). Sarah is 49 years old and was diagnosed with type 1 diabetes in 1984. In 2013, Sarah was hospitalized experiencing severe gastroparesis symptoms. Later, Sarah developed arthritis in her hand and gets cortisone shots, which increase her glucose level after each shot. Data & context: She uses an insulin pump to manage her diabetes (Fig. 3 -f). Thus, in our visualizations, we represent her blood glucose data (Fig. 1 -column P#6 -first row). Since Sarah has an insulin pump, the device automatically tracks her blood glucose many times in a day. Thus, to visually show all the data points measured by her insulin pump in a day, we designed a clock visualization. The clock view can show all the data readings in one view with their timestamps. Sarah's healthcare providers predefined a normal range of glucose level for her based on her conditions. In the (Fig. 1 -column P#6 -first row) visualization, the blood glucose reading is marked with an X inside each ring and colour coded green, yellow, or red based on the ranges defined for her. Time commitment: The pump automatically tracks her blood glucose level at different time intervals to program her insulin. Sarah does not regularly record her food intake, but when she feels sick, she takes notes on her phone of what she ate and her activities that may have affected her glucose: "there's really no answer, I've been dealing with this for about two or three years now" (P06). Motivation: Sarah has changed her lifestyle, especially after her diagnosis with gastroparesis. She takes an active role in managing her conditions. She says, "with gastroparesis there's no medication, there's no cure... it's a matter of just doing a lot of research and reading in different avenues" (P06). Sarah has a fear of getting sick to the extent that she needs hospitalization. Support circle: Her diabetes nurse monitors Sarah's glucose level regularly. On the occasions that Sarah feels sick or in need of help, she calls her nurse and asks her nurse to log into her pump remotely. Based on her pump readings, the nurse will give her advice on how to normalize her glucose level. To let her discuss her readings over a week with her nurse, we displayed an overview in the form of seven rings (days) (Fig. 1 -column P#6 -first row). Tim is 56 years old and was diagnosed with type 2 diabetes about 8-10 years ago. His condition has gotten worse in the past two years. Tim has also been dealing with hypertension for a long time. He also has a genetic disorder, Hereditary Hemorrhagic Telangiectasia, which causes abnormalities in blood vessel formation, but it does not affect his chronic conditions. Data & context: Tim uses a glucose meter to measure his glucose reading and records his readings in an app on his phone (Fig. 3 -g). He also uses a blood pressure cuff machine to measure his blood pressure. He then manually enters his blood pressure readings into two different apps on his phone, since he is afraid one app will wipe out his recorded data. He prefers to collect his data on his phone rather than in the booklet he was given by the nurse. We display both data items in one design (Fig. 1 -column P#7).
Tim takes notes keeping track of events and special occasions (e.g., holidays and parties). Thus, to accommodate recording these notes, we added an option in our design to track and later display the notes. Time commitment: He tracks his blood glucose once or twice a day and measures his blood pressure a few times a day at different times. Thus, we also show multiple data readings on the chart per day. Tim normally skips tracking his data during vacation times. However, not tracking his data during his last vacation caused an abnormality in his data: "I was good for a while. Then took a vacation and, whoaa!" (P07). To visually display the effect of not tracking data, we show the missing dates with dashed lines. Motivation: After Tim visited a new physician, the physician changed his hypertension medication to a more recently developed medication. Since the change of his medication, his blood pressure has been generally stable and he was motivated to start tracking it: "I kicked myself, I should have tracked it longer" (P07). He is hoping to become more active in his care. Tim has a standing order from his diabetes nurse to get an A1C test every three months. He is hoping his glucose level goes below 6.5: "six months ago, it was 8.1. Now it's 7.1" (P07). To make it easier for him to check if his numbers are normal, we colour coded (green, yellow, red) the data points. Support circle: Tim's diabetes nurse and his family physician automatically receive the results of his A1C test. However, Tim does not share any of his self-collected data with his providers. Katy is 52 years old and suffers from hypertension, asthma, arthritis, chronic pain, and depression. She was diagnosed with asthma 21 years ago, which is mostly under control with medications. In 2004, she gave birth to a premature baby and had a sudden death in her family. Later that year, she was diagnosed with severe depression and was hospitalized in the psychiatric ward. Data & context: As a result of her depression, Katy gained 150 pounds. Three years ago, she joined a weight management group and was advised by her dietitian to track her food intake (Fig. 3 -h), but Katy does not like to share her collected data with her providers. A few years ago, Katy started to experience pain in certain areas of her upper body; however, her physician did not believe her pain was real and was dismissive of her condition. After struggling with pain for a while, she decided to look for another pain specialist. She created an Excel sheet with upper body part names, and each day she would put in a number corresponding to her pain level in addition to the type of pain (stabbing, stinging, and shooting). Thus, we designed an upper body mock-up drawing visualization to help her visually track the type and location of her pain (Fig. 1 -column P#8 -second row). In the visualization, the intensity of the pain is represented by the number of rings (1 to 10) and the three types of pain that Katy tracks are distinguished with different colours. Time commitment: Every time Katy experiences pain, she records her pain data. Thus, we also allow for as many pain data entries as pain occurrences during a day in our visualization design. Motivation: Katy writes side notes alongside her pain data to investigate if there is any relationship between the time of the day, her activities, and her pain level.
She shared her pain diary with her new pain specialist to see if there is any relationship between the time of the day, her activities, and her pain level; Katy told us her specialist said: "This is great, there is no relationship to anything which just tells me it is probably a nerve or something. This is fabulous and I want to keep this!" (P08). Support circle: Katy hoped to receive more tailored care by sharing her self-collected health data with her healthcare providers. She sees value in tracking her health data and sharing it with her healthcare providers. We display an overview of her pain data by showing a week of it in the form of small body mock-ups at the bottom of our design. This view will help providers to get an overview and to find possible patterns or trigger factors. We presented the patient-generated data visualizations to three of the providers who initially requested visualizations and technological support for reviewing patient-generated data. We observed providers' reactions towards our visualization designs and asked for their feedback. The providers varied widely in why, when, and how they want to use patient-generated data visualizations in their practices. We present our results according to the two themes we identified through analyzing the interview data. The themes are 1) the visualizations' use cases in providers' practice, and 2) the platforms for implementing the visualizations. Providers envisioned different use cases for the visualizations in their practice: 1) one provider saw value in the use of these visualizations only by patients, 2) two providers wanted to use them to review patient data during clinical visits collaboratively, 3) one provider thought of using them to support their medical judgment, and 4) two providers found displaying patient data through different lenses useful to understand the data better. Encourage patient self-experimentation and goal setting: The complex chronic care specialist, C3, expected visualization views that would encourage patients to do more self-experiments. He thinks that, particularly for chronic symptom management, where there is no complete treatment to resolve the symptoms but rather it is a matter of trying to track and manage them, experimenting to find trigger factors can be helpful for patients. Self-experimenting with data can help patients find solutions that make their everyday life easier. In addition, C3 thinks visualization designs need to have the capacity to support patients in setting goals and tracking an intervention that patients may set in their minds to control their symptoms: "For example, taking three glasses of water per day may reduce headache" (C3). Although this provider was interested in encouraging patients to do self-experiments and set goals, C3 wanted patients to share the data collections with them. In these circumstances, the providers can help patients understand if there is a scientific correlation between variables and help patients understand the body mechanisms that might explain this correlation. Juxtapose data for collaborative interpretation: The internal medicine specialist, C1, was cautious about juxtaposing all patient-generated data items in a single visualization view. He was concerned that juxtaposing patient data could imply a link that may not exist and falsely medicalize the relation between the data: "the minute you put them on a shared data exhibit, it is a correlation" (C1).
Although he was not enthusiastic about presenting some data items such as blood pressure and glucose level in one view, he found coupling some data points useful. For instance, when seeing the (Fig. 1 -column p#6 -first row) visualization he was keen to view patient food intake and blood sugars displayed together to investigate their relationship. Another functionality that the providers found useful was the potential to overlay data collected across different situations or days. By overlapping patient data, providers may be able to find patterns in patients' data. For instance, C3 was interested in overlapping patients' glucose data over a few days to find out the effect of biking for 30 minutes on patient glucose levels, "the nature of the adjustments is very rarely a single day" (C3). Offer a holistic overview to the provider: The endocrinologist clinician, C2, showed interest in a holistic visualization view of all the data items a patient collects. She found the visualization designs that represented all patient data items in one view very useful for planning complex chronic patient care. For example, for displaying blood pressure and glucose level in one view she said: " as a care provider, I can show that'yeah, during these times these situations are really bad for you' " (C2). She was also keen to see patient's physical activities such as steps taken per day presented in the same view to understand the effect of exercise on the patient's other health conditions. C2 was interested in having access to the patients' notes describing the context and the situation when this data was recorded. She told us that she encourages her patients to take notes of their emotional states, their meals, or any other relevant information when recording their health data. Knowing the context associated with the data, the provider has more information to make informed medical judgments. Understand data better through a different lense: Providers were able to quickly adapt to new visualization designs and warmed up to the idea of alternative views of data, promising for adoption in their practice. For example, C3, said he appreciates the visualizations capability to display patient data differently. He said both patients and providers are used to seeing patient data in a standard tabular format. He thought showing patient data in different forms will give patients extra support in understanding their data and taking actions towards enhancing their health: " We never had this kind of things [visualizations] and so, this is where the notion of'same data, different lens' becomes useful" (C3). The providers also recognized that some of these visualization designs can be used to represent other health measurements. One of the providers who was at first skeptical of using the blood pressure tree design (Fig. 1 -column p#3 -first row), after reviewing and discussing the design, suggested using this visualization for collectively displaying 24-hour blood pressure cuff machine data. Providers use these machines to closely monitor the patient blood pressure to help with diagnosis: "this is an attractive idea, maybe this kind of visualization can be used for a 24 hour report [showing data] every 10 minutes" (C1). The choice of visualization platform can make a difference in designing the right patient-generated data visualization. The providers talked to us about their preferred patient-generated data platforms, and the rationales, benefits, and trade-offs of their choices. 
Different technologies and platforms for implementing such visualizations include data booklets, websites, phone apps, and patient portals. Booklets: To smoothly integrate visualizations into providers' practices, one challenge is to design patient-generated data visualizations that are compatible and aligned with providers' current practices. Providers usually give patients tabular template booklets to record data. C3 mentioned that he preferred reading patient data in this booklet format, since it is easier and faster for him to find trends. Printed visualizations in the form of booklets can be familiar and easy to use for providers, but they do not support interactivity. Websites: Some providers prefer to have patient data uploaded to designated websites where providers could potentially integrate patient-generated data into the patient's health records. C2 thought that, if designed well, a website would be a good platform that could support both patients and providers in interacting with patient-generated data and seeing the data in different ways. However, healthcare services usually have restrictive policies for the use of websites in clinical settings. Phone Apps: Patients may not feel comfortable sharing all their data with one healthcare provider and may only be willing to share related data with a specific provider depending on their specialty. C2 thought that using a personal phone to record data could be a solution, since patients have full authority to share the data they wish. However, small display real estate could cause limitations in designing visualizations that represent all patient-generated data at once. Also, sharing a small display between patients and providers during clinical visits can be difficult. Patient Portals: Providers normally have a PC in their clinical rooms for taking specific notes about a patient's condition and recording them in a patient's healthcare portal. C1 was keen on the idea of asking patients to link their self-collected data into their healthcare portals ahead of time. He thought that having patient-generated data collections and visualizations available on the portal could not only save time, but could also be easily accessible for discussion. However, implementing visualizations into these portals can be a long and difficult process, which also requires support from healthcare services. Effective communication of patient-generated data during clinical visits can help patients feel understood and help healthcare providers get all the necessary data they need to make proper medical decisions. Our objective was to design visualizations to support patients in presenting patient-generated data for review during clinical visits. The focus of our studies was on patients with chronic conditions and the healthcare providers who visit chronic patients. The results of our patient interview studies revealed the individualities and the complexities of patient-generated data collections. Each patient has a unique body, a highly individualized lifestyle, a different set of goals, and a personalized patient-provider relationship. All these factors need to be considered when caring for or designing for patients. How can we design only one visualization solution that can account for all these differences among patients? Providers also differed in their principal goal of using patient-generated data. This has major implications for the design of visualizations. A solution that works for one provider may not work for another.
This may affect the types of visualizations we consider for them and their patients. There are many driving forces for designing effective patient-generated data visualizations. It is still unclear which direction works best for both patients and providers. In software and technology design, research, and business, there is often the notion of designing with a generalization mindset: 'one-size-fits-all'. The idea of designing one piece of software or one visualization tool that can address everyone's problem may be appealing and cost-efficient, but it does not always work in practice. We echo the call from recent research for the necessity of designing for particulars: for individuals. Looking into the medical literature and the approaches taken in healthcare services for patient care planning, we often see one-to-one interactions between a patient and their healthcare providers in clinical visits. This one-to-one interaction model has been practiced for centuries in medicine and is tailored to the individualities of each patient and their healthcare provider. Similarly, for designing visualizations to improve patient-provider communication, we, as visualization and technology designers, should take direction from the medical literature and its practices. We should take steps towards designing individualized, tailored visualizations based on both patient and provider preferences, to be able to accommodate as many patient-provider communications as possible. Perhaps one solution can be to start designing and developing many patient-generated data visualizations tailored to both the healthcare provider's and the patient's preferences. There have been attempts in the literature to design visualizations representing patient-generated data for some chronic conditions, including visualizing bipolar patients' lived experience, collaborative interactive storyboard design with chronic pediatric patients, and photo-based visualizations to support patients with irritable bowel syndrome in communicating with providers. However, designing visualizations to support chronic patients with their self-collected data is indeterminate, or in other words a wicked problem, meaning there are no definitive solutions or limits to this design problem. Our design study in this paper is a step towards tackling this wicked problem. We followed two criteria required for a rigorous design study when addressing a wicked problem: 1) A rigorous solution to a wicked problem needs to include multiple perspectives to shape the design problem and consider a broad set of solutions. Thus, in our study, we included the perspectives of both patients and healthcare providers. We explored the healthcare providers' perspectives on reviewing patient-generated data during clinical visits and the details of eight patients' approaches to tracking and presenting their health data. Furthermore, we looked into a sample of our patient participants' data collections to understand patients' methods of recording their data and their reasoning. 2) A design study is not and should not be reproducible; rather, the solutions proposed are one of many possible solutions. However, a rigorous solution to a wicked problem needs to report the process of design and analysis in a transparent way. We designed multiple alternative visualizations for each patient. All of our visualizations together shaped a design space of varied patient-generated data representations.
We understand that depending on the patient's needs, the providers' expectations, and the patient-provider relationship dynamics, a different set of visualization designs could be suitable. Our solutions are one of many possible solutions to this wicked problem. Furthermore, we explained our design process and our reflections on the designs in a detailed and transparent way. We hope that the detailed design process we provided supports other researchers and designers in further tackling this wicked problem and designing patient-generated data visualizations. Inspired by the four factors identified from the focus group and the patient interviews, and the healthcare providers' reflections on the designs, we provide the following design guidelines. We understand that these guidelines are not exhaustive; rather, they are starting points for designing patient-generated data visualizations. Include data context in the design: Patients often track a large amount of data. To gain valuable insights from this data, patients often take notes of the events, circumstances, or emotions associated with the data points. On the other hand, healthcare providers in our study found this contextual information useful for making medical decisions. Previous studies also pointed to the importance of relevant dimensions and context of data for making medical decisions. Thus, visualization designs need to allow for smooth inclusion of contextual data, often in the form of free-format text, along with the data points. Consider patients' time commitments in the design: Chronic patients often deal with many issues in their everyday life, leaving them with less free time to track and record their data regularly. The apps available on the market do not usually consider differences in the time patients invest in collecting data. For instance, displaying empty entries can have negative psychological effects, making patients feel they are not doing enough. Thus, visualization designs should allow patients to customize the amount of information and the input fields being shown. Allow patients to freely explore their data: Our results show that patients' motivations for tracking and presenting their data to providers play an important role in the design. Some patients are eager to find correlations between their data, some are looking for the causes of their symptoms, and some want to have an overview of their numbers. Previous work also stressed the need to support patients in sense-making and problem-solving with data. Thus, these differences in patients' motivations for data collection should be considered when designing visualizations to represent patient-generated data. Support patients' needs to (partially) share their data: Patients differ in the support they receive from their family, friends, and the healthcare team. Some patients benefit from sharing all of their data with their support circle, some are interested in sharing a selection of their data, and some are hesitant to share their data. Thus, visualization designs should support sharing overviews, selective views, and protected views. Visualization designs that support sharing views also need to include annotation capability and multiple views (e.g., a patient view and a clinician view). Support providers interacting with patient data: Although providers had different perspectives on the use cases of patient-generated data visualizations in their practice, they had commonalities with regard to the necessary interactive functionalities.
All of our providers talked about how difficult it can be to cope with messy, inconsistent, and complicated data collections. This suggests that at-a-glance data comprehension is an important visualization design goal. In addition, providers needed interactions to better understand the data, including filtering the data, focusing on data details, and overlaying different parts of the data for comparison. In recent years, we have seen growing interest among patients with chronic conditions in tracking and analyzing their data. However, sharing this data with healthcare providers can be challenging due to limited time in clinical visits and the large and complex nature of patient-generated data. We responded to a call from a group of healthcare providers at a local hospital to design potential technological solutions to address the challenges of presenting, reviewing, and analyzing patient-generated data collections. We first gained healthcare providers' perspectives through a focus group. Then, we took an in-depth look at chronically ill patients' perspectives on tracking their health data. The individual differences among these patients promoted a design space approach, where we used insights from these patients to design a space of possible tailored visualizations. By exploring the possibilities of designing individually tailored visualizations representing patient-generated data, we have added one way to support patients and healthcare providers when reviewing patient-generated data during clinical visits. We hope our proposed visualizations provide patients and healthcare providers better opportunities to present, review, and gain insights from patient-generated data. We note that we included the perspectives of a small number of patients and healthcare providers; thus, other perspectives may not be reflected in our results. However, we envision this study as a stepping stone for the call to focus more on designing technologies in healthcare for individuals. We encourage the human-computer interaction, visualization, and healthcare communities to repeat these studies by including more patients and healthcare providers and to explore designing tailored visualizations for each individual. Then, as a community, we can move towards accumulating these perspectives and designs to empower individuals with accessible design variations. We hope that, in the long term, the results of this exploration contribute to supporting patients and healthcare providers in reviewing patient-generated data collections using visualizations during clinical visits.
We explored visualization designs that can support chronic patients in presenting and reviewing their health data with healthcare providers during clinical visits.
Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way. While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors, which can lead to plausible but different trajectories that diverge from the ground truth. A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution. In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions. We show that our model's long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than existing strong baselines. Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and towards better solutions on simple manipulation tasks. Figure 1: Small errors in the input and prediction can lead to significantly different object trajectories. The orange ball could either end up on the left or right side of the wedge. Learning to predict the physical motion of objects from data is an open area of research. Yet, recent (hierarchical) relation network based forward dynamics predictors seem to be a promising alternative to conventional physics engines that are key components of robot control, computer vision and reinforcement learning (RL) systems. Physics simulators, both traditional numerical solvers and learned prediction models, still suffer from insufficient accuracy in challenging scenarios. Small errors in the input and model can lead to dramatically different object trajectories. Take the orange ball that is falling on the blue wedge in Figure 1. Depending on where the orange ball starts or what bias the model has, the ball could either end up on the left or right side. Both are valid outcomes. However, deterministic physics engines will predict either one trajectory or the other. While it is important to reduce errors in each prediction, it is also important to acknowledge that uncertain situations might not have one but multiple possible outcomes. In machine learning, uncertainty-aware neural networks avoid deterministic point estimates by predicting distributions or by randomly sampling in the prediction interval. In the context of dynamics predictions, we propose to use Monte Carlo sampling based dropout on the model weights of a learned forward dynamics predictor to model uncertainty and sample multiple plausible trajectories for an initial state. To stabilize each trajectory and reduce error accumulation over long time horizons, we use a state-invariant recurrent training mechanism. By feeding back predictions as input over multiple time steps, the model becomes more robust to its own prediction errors without the need for a hidden state. Finally, we introduce a new shape loss on the model predictions that constrains the pairwise distances between objects and object parts and greatly improves shape preservation and the stability of trajectories over long time horizons.
Our final fully differentiable forward dynamics model is able to sample multiple, more accurate and more stable trajectories over long time horizons compared to existing baselines. An accurate forward dynamics predictor that is able to predict a distribution of future states can be of great importance for robotic control. In model-free reinforcement learning, accomplishing tasks through random exploration is sample inefficient and hardly generalizable. Model-based methods promise greater generalization abilities, but suffer from deterministic world models that are hard to learn and fail in stochastic environments. With our stochastic forward dynamics predictor, we can move part of the sampling process into the environment, physically grounding the random exploration of model-free agents. As the agent is able to observe multiple trajectories at a given state without actually executing multiple actions, the sample efficiency is greatly improved while the stochasticity of each state and action is implicitly learned. We show on several control experiments that a model-free agent trained in our stochastic forward dynamics environment is not only able to explore better and learn faster but often also arrives at better solutions than agents trained in deterministic environments. In summary, we propose a stochastic differentiable forward dynamics model that is able to generate multiple plausible trajectories via Monte Carlo (MC) based graph-convolutional dropout. We greatly improve the accuracy and stability of long-term predictions by proposing a new fully-connected shape loss term and training the model recurrently end-to-end in a state-invariant way. We demonstrate how our stochastic dynamics model can be used to improve the efficiency and performance of model-free reinforcement learning agents on several physical manipulation tasks. Physical dynamics prediction has long been an open research question. Recent advancements in deep learning allowed for the emergence of successful systems that aim at solving this problem by learning from data. Early work proposed a graph-based approach with object-centric and relation-centric representations, and a neural network architecture that predicts object dynamics and interactions between objects in complex 2D scenes. Follow-up work implements a relational network with a particle representation for objects, but extends this approach to 3D scenes, introducing hierarchical graph representations for computational tractability. These works rely, however, on a single-step prediction during training. We propose a recurrent training scheme on multi-step predictions and show lower long-term error in our experiments. Simulating future plausible object states under physical and user constraints is a commonly addressed challenge in computer graphics. One line of work analysed uncertainties in a multi-body simulation model and used a Markov chain Monte Carlo algorithm to predict multiple trajectories. Another approach is based on psychological findings about human errors in predicting object dynamics and simulates multiple future environment states by applying external random impulses to colliding bodies. Trajectories generated by both of the mentioned methods are visually plausible to humans, but can often diverge from the real physical behavior and require extensive expertise to choose simulation parameters that ensure convergence.
Psychological studies support the theory behind the latter work, further suggesting that human predictions of non-linear dynamic effects such as collisions are far from perfect and thus allow for less advanced perturbation methods. Other work draws a line between physical and visual plausibility, naming further factors that improve the visual plausibility of a scenario, such as the number of simultaneous collisions or the homogeneity of colliding objects. In physical systems, situations that are non-intuitive to humans can occur due to an unobservable state of the environment, e.g., an object colliding with a fast rotating wheel or an unexpected behavior of a compressed spring. This opposes the goal of computer graphics, where visual plausibility becomes a stronger requirement, and motivates the search for more sophisticated methods for sampling probable states in physics engines. Multiple techniques allow neural networks to incorporate uncertainty in model predictions. Mean-Variance Estimation is a method that circumvents point estimates in the output space by directly predicting a normal distribution. This method has been used along with a particle-based representation for splash prediction. Assuming independent velocity distributions for each splash particle produced visually pleasing results. In our initial experiments for stochastic simulations, this method led to unsatisfactory results. We observed that the Mean-Variance Estimation method is capable of indicating highly uncertain situations, e.g., collisions or force applications. Unfortunately, due to the lack of space-time consistency between particles that is present in real objects, this approach led to incorrect shape predictions during test time. Stochastic regularization as a way of capturing model uncertainty is an active field of research with scarce theoretical foundations. Nonetheless, we see a growing number of practical applications of this group of algorithms in numerous research areas. Applying dropout during both training and inference has been proposed as a Bayesian inference approximation, with the prediction variance as the measure of the epistemic uncertainty. A clear advantage of this method is the ability to visualize the result of each predicted trajectory. On the other hand, the computational cost grows linearly with the number of samples. Prior work has shown that injecting noise into neural networks is successful not only as a regularization method but also in the training of RL agents. In model-free RL, temporal credit assignment, sparse reward, and exploration-exploitation trade-offs present significant challenges. Long episodes amplify both problems of credit assignment and reward sparsity, where naive exploration causes exponentially growing sample inefficiency. Reward shaping is one countermeasure that improves credit assignment, and consequently sample efficiency, in model-free RL (Grześ, 2017). Designing shaping functions usually requires expert knowledge and hand-engineering, while also imposing constraints on how the agent solves the task. Such constraints may prevent the agent from solving the task optimally. Predicting a set of trajectories can be framed here as a reward relaxation method with the clear advantage of depending on a single parameter, the dropout rate. Parametric noise learned with gradient descent has also been introduced into the action prediction network, which led to significantly better exploration and higher rewards without creating large computational overhead.
This method shows clear potential for stochastic methods to improve training efficiency in reinforcement learning. Figure 2: Hierarchical Relation Network (HRN) architecture. Force, collision and past effects on particles are computed and then propagated through each object hierarchy. The propagated effects are used to predict the next particle positions. Gray blocks represent graph convolutional effect propagation modules. Our stochastic forward dynamics model is based on the deterministic hierarchical relation network (HRN) as proposed by and depicted in Figure 2. Hierarchical graph representation: Propagating effects through a fully-connected scene graph is computationally infeasible. HRN circumvents this problem by leveraging a tree-like graph form that defines a constrained subset of edges for more efficient effect propagation. Edges within each object graph comprise shape and material properties, describing how rigid, soft and "cloth-like" the material is as defined in our simulator. Edges across object graphs describe physical relations between objects such as contact forces. The hierarchy construction starts with the particles at the lowest level as provided by our ground truth simulator. Each next level of the hierarchy is constructed by clustering particles based on their states. The state attributes are the position, the velocity and the mass. Finally, nodes at each hierarchical level represent the state of an object, an object part, an object subpart and so forth down to single object particles. HRN takes a sequence of the past two hierarchical physics graphs G 1,2 which consist of hierarchical object graphs as input and predicts the next state of the scene graph. Force, Collision and History Modules: HRN assumes three effects that act on particles at the lowest level of the hierarchy and that influence the next particle state: external forces, interactions with particles of other objects in close proximity and particle's state history. Inputs to these modules are current particle states, external forces (Force Module) and states of particles with which the particle interacts (Collision Module). Each module computes an embedding vector that represents the effects of the external forces, the collisions and the past states on the particle using pairwise graph convolutions. These effects are combined by summation and enter the Hierarchical Effect Propagation Module. Hierarchical Effect Propagation Module: HRN predicts the next graph state by estimating the influence that particles within each object and across objects exert upon each other. The compounded effects from the Force, Collision and History Modules are propagated not only up and down the hierarchy, but also between particles at the same level by hierarchical graph convolutions that are implemented as fully-connected networks with weight sharing. Each forward pass takes the states of two connected particles and outputs an embedding vector that describes the influence of the first particle on the latter. These effect embedding vectors are collected for each particle in the scene graph and are used to estimate the future particle velocity. This module is realized as a feed-forward fully-connected neural network and predicts per-particle future velocity. It takes in the current particle state consisting of the position, velocity and mass, as well as the sum of effect embedding vectors belonging to the particle. 
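To make the pairwise message passing and the per-particle prediction step more concrete, the following is a minimal sketch in PyTorch. It deliberately ignores the hierarchy and the coordinate-frame details discussed next; all layer sizes, module names, and the flat (non-hierarchical) graph interface are illustrative assumptions rather than the actual HRN implementation.

```python
import torch
import torch.nn as nn

class PairwiseEffectModule(nn.Module):
    """Shared MLP applied to every directed edge (sender -> receiver) of a particle graph.
    Produces one effect embedding per edge; embeddings are later summed per receiver."""
    def __init__(self, state_dim=7, effect_dim=32):  # state = position (3) + velocity (3) + mass (1)
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(),
            nn.Linear(64, effect_dim), nn.ReLU(),
        )

    def forward(self, states, senders, receivers):
        # states: [num_particles, state_dim]; senders/receivers: [num_edges] index tensors
        pairs = torch.cat([states[senders], states[receivers]], dim=-1)
        return self.mlp(pairs)  # [num_edges, effect_dim]

class VelocityPredictor(nn.Module):
    """Predicts the next per-particle velocity from the particle state and its summed incoming effects."""
    def __init__(self, state_dim=7, effect_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + effect_dim, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, states, edge_effects, receivers):
        summed = torch.zeros(states.shape[0], edge_effects.shape[-1], device=states.device)
        summed.index_add_(0, receivers, edge_effects)  # sum incoming effects per receiving particle
        return self.mlp(torch.cat([states, summed], dim=-1))  # [num_particles, 3]
```

In the full model, separate modules of this form handle forces, collisions, and history, and the resulting effects are propagated up and down the object hierarchy before the per-particle velocity is predicted.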
The predicted particle velocity is expressed in the local coordinate frame, i.e., relative to the corresponding particle at the higher level in the hierarchy. The state of the particle at the highest level additionally includes gravity and is defined in the global coordinate frame. To make the HRN stochastic and sample multiple plausible trajectories for a given initial state, we introduce a Monte Carlo based dropout on the activations of the graph-convolutional collision and force modules. We greatly improve model predictions by introducing a new fully-connected shape loss term and a state-invariant recurrent training procedure. Our final model creates realistic trajectories that are suited to train a model-free agent for manipulation tasks. Stable, realistic long-term predictions are crucial for planning tasks. While HRN predictions are accurate for a large number of complex physical scenarios, we identify that objects fall apart relatively quickly along boundaries of object parts, for two reasons. First, the HRN's loss function is designed to minimize the error between predicted and ground truth states while imposing a group shape constraint. As shown in the left panel of Figure 3, this group shape loss optimizes the pairwise distances between object nodes within a group to be the same as the ground truth pairwise distances. It does not impose that the pairwise distances across groups are the same as the ground truth distances, which leads to unrealistic deformations between groups, as depicted in Figure 5. We therefore introduce a stronger fully-connected shape constraint (Figure 3, right) that imposes pairwise distances to be the same as ground truth pairwise distances across all possible node combinations, which improves shape preservation significantly. Second, the HRN's prediction errors accumulate exponentially as predictions are fed back in recurrently during inference to generate multi-step trajectory predictions. However, during training the HRN is only supervised with the next ground truth state and thus never gets its own perturbed predictions as input. To make the HRN robust against prediction errors, we therefore propose to train the model recurrently in a state-invariant way, i.e., without using a hidden state, as physical dynamics is state-free (Figure 4). The overall loss is the sum of losses from each time step. Learning recurrently on long sequences, the network optimizes its weights taking into account its own prediction errors during training. This significantly reduces error accumulation during inference time. Figure 3: Shape loss. The HRN shape loss only constrains particle distances within object particle groups (left). Our new fully-connected shape loss constrains distances between all particle pairs within each object (right). Dropout removes a certain number of randomly chosen nodes in a neural network to prevent overfitting, each node being dropped with a probability commonly referred to as the dropout rate. In each iteration, a new set of nodes is sampled and only the edge weights attached to the active nodes are updated via backpropagation. In our novel approach, we randomly sample dropout masks on graph-convolution kernels to sample physically plausible trajectories. To keep the kernel fixed independent of its position in the graph, we only sample once per prediction step. To infer a set of plausible trajectories, we randomly sample a dropout mask for each generated trajectory at test time, similar to how dropout has been used to generate multiple predictions.
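A minimal sketch of these three ingredients (fully-connected shape loss, state-invariant recurrent training, and MC-dropout trajectory sampling) is shown below. The simplified model interface (operating directly on particle positions), the helper name resample_dropout_mask, and the loss weighting are assumptions for illustration, not the exact training code.

```python
import torch
import torch.nn.functional as F

def fully_connected_shape_loss(pred_pos, true_pos):
    """Match ALL pairwise particle distances within an object to the ground-truth distances."""
    return F.mse_loss(torch.cdist(pred_pos, pred_pos), torch.cdist(true_pos, true_pos))

def recurrent_rollout_loss(model, graph_prev, graph_curr, gt_positions, shape_weight=1.0):
    """State-invariant recurrent training: feed the model's own predictions back in as input
    for several steps (no hidden state is carried) and sum the per-step losses."""
    prev, curr, total = graph_prev, graph_curr, 0.0
    for gt in gt_positions:                      # ground-truth particle positions, one entry per step
        pred = model(prev, curr)                 # predicted particle positions for the next step
        total = total + F.mse_loss(pred, gt) + shape_weight * fully_connected_shape_loss(pred, gt)
        prev, curr = curr, pred                  # the prediction becomes part of the next input
    return total

def sample_trajectories(model, graph_prev, graph_curr, horizon, num_samples):
    """MC-dropout sampling: drawing a fresh dropout mask on the force/collision modules per
    trajectory and prediction step yields multiple plausible rollouts from one initial state."""
    trajectories = []
    for _ in range(num_samples):
        prev, curr, rollout = graph_prev, graph_curr, []
        for _ in range(horizon):
            model.resample_dropout_mask()        # assumed helper that redraws the shared mask
            nxt = model(prev, curr)
            rollout.append(nxt)
            prev, curr = curr, nxt
        trajectories.append(rollout)
    return trajectories
```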
The modular architecture of the HRN allows us to apply dropout at different locations in a very interpretable way (Figure 2). Dropout on the collision module makes sampled trajectories diverge at collision points. Dropout on the force module leads to diverging trajectories during force applications. Dropout on other HRN modules leads to convergence problems and unrealistic predictions. In the following experiments, we thus only apply dropout to the HRN's force and collision modules. Applying our dropout based sampling method to our dynamics model results in physically plausible long-term predictions with consistent shapes. The ability to sample a distribution of physically plausible trajectories can be used to improve the efficiency of exploration of model-free reinforcement learning agents. We thus train a model-free policy on our stochastic physics predictor to achieve physical manipulation tasks as follows. At each episode during training, we input the agent's action and current state into our stochastic forward dynamics predictor and sample a set of 5 future states with our dropout method. For accelerated training, we introduce a reward relaxation method which consists of rewarding the agent as soon as one of the trajectories from the sampled set leads to the goal, naturally exposing the agent to rewards much quicker. If none of the trajectories hits the goal, one future state is chosen at random and the sampling process is repeated for the next future state. The level of reward relaxation is controlled by the dropout rate. The higher the dropout rate, the wider the set of trajectories and the easier it is for the agent to be rewarded. This directs the agent much quicker towards the reward in early training stages. In scenarios requiring high levels of accuracy and repeatability, we find that a gradual reduction of the dropout rate during policy training helps convergence and leads to more efficient policies compared to a fixed dropout rate, which we show in the following experiments. In our experiments, we first show that our recurrent training and new fully-connected shape loss significantly improve the prediction quality for single long-term trajectories on complex physical scenarios over baselines. We then demonstrate how our proposed Monte Carlo sampling based dropout method generates multiple high-quality trajectories by visualizing stochastic model roll-outs. Lastly, we use our stochastic forward dynamics model's ability to generate multiple trajectories to train a model-free policy on two physical manipulation tasks more efficiently and to higher reward. We evaluate our model's forward dynamics prediction performance against the HRN baseline on two complex scenarios. The first scenario showcases the ability of our model to predict complex deformations: a deformable soft cube is first lifted off the ground by an upward impulse and then falls toward the ground while rotating and deforming on impact (Figure 5, left). In the second scenario, we evaluate our model's performance on complex collisions. Collisions can greatly magnify object position and pose errors, leading to large discrepancies between predictions and ground truth. For our collision experiment, two rigid cubes are placed at a random distance from each other and then repeatedly accelerated towards each other by impulses to generate collisions (Figure 5, right). We train our model on a multitude of examples of both scenarios and evaluate on held-out examples.
We compare the mean squared error on positions, velocities and shape loss, and show qualitative long-term predictions of our model and the HRN baseline. In Table 1, we present our quantitative results on the deformation and collision tasks. Our fully-connected shape loss and recurrent training procedure significantly lower long-term prediction errors in both scenarios. On the collision task, initial position and velocity errors increase slightly compared to the baseline but accumulate to far lower errors in the long run. Empirically, we found that our recurrent training procedure works best with sequence lengths between 4 and 6 time steps. Longer sequence lengths prevent the model from converging during training. We found that gradually increasing the sequence length during training is an effective countermeasure. The improvements of our model over the HRN baseline are obvious in visualizations of predicted trajectories (Figure 5). Whereas HRN predictions fall apart along object boundaries and sometimes penetrate objects, our method preserves shapes and resolves collisions much better and predicts positions much closer to the ground truth, leaving us with an adequate basis for generating multiple plausible trajectories with our sampling method. Figure 5: Dynamics prediction comparisons. Our method is compared to the HRN baseline and ground truth. a) A soft cube bounces off the ground. b) Two rigid cubes collide. Our method preserves the geometry of objects better over long time horizons. In this section, we demonstrate that our Monte Carlo sampling based dropout method can sample multiple physically plausible trajectories from our forward dynamics predictor under the same initial state. In two complex scenarios we study our model's uncertainty during force applications and collisions. In the first scenario, an external force lifts a soft body, which subsequently drops toward the floor while rotating slightly. By applying dropout to the force module (Figure 2), we generate multiple trajectories that arise due to our model's uncertainty during force applications. We use a dropout rate of 0.1 during training and 0.3 during testing. In the second scenario, we show that our proposed method produces realistic sets of trajectories in collision scenarios. Forces are applied to two rigid cubes, pushing them towards each other and causing a collision. Dropout is applied to both the force and collision modules with a rate of 0.05 both during training and testing. The visualizations of multi-trajectory roll-outs in Figure 6 show that the sets of predicted trajectories are physically and visually plausible. Due to the modularity of the HRN model, targeted stochasticity can be applied within each submodule via dropout, introducing uncertainty in the output of force and collision predictions. Our proposed sampling method is able to capture trajectory distributions ranging from single-mode low-variance to complex, multi-modal distributions. Dropout rates between 0.05 and 0.3 allow for fast convergence during training and a wide variety of visually plausible sample trajectories during inference. We notice that inference dropout rates that differ significantly from the training rates can cause biased predictions, leading to, e.g., objects slowly drifting away in one direction. Additional results studying the effect of the dropout rate on the width of the state distributions can be found in Figure 10 of the supplement.
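For reference, the fully-connected shape loss whose effect is reported in Table 1 can be written down compactly; the tensor layout (particle positions and integer object labels) is an assumption of this sketch, not the exact implementation.

```python
import torch

def fully_connected_shape_loss(pred_pos, true_pos, object_ids):
    """Penalize deviations of *all* pairwise particle distances within each object."""
    loss = pred_pos.new_zeros(())
    for obj in object_ids.unique():
        p = pred_pos[object_ids == obj]          # (n_obj_particles, 3) predicted positions
        t = true_pos[object_ids == obj]          # matching ground-truth positions
        loss = loss + ((torch.cdist(p, p) - torch.cdist(t, t)) ** 2).mean()
    return loss
```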
Figure 6: Example sampled multiple trajectories. We use dropout on the force and collision modules to sample multiple trajectories given the same initial input. Dark colors depict the ground truth trajectory. Light colors depict imagined sampled trajectories. The performance of our model in forward simulation is maintained at a high level despite the introduced graph-convolutional dropout. Deformations: A soft cube bounces off the ground. Collisions: Two rigid cubes collide. Our method is able to sample multiple physically plausible trajectories in each scenario. Falling on a wedge: We simulate the situation from Figure 1. A rigid object falls on a wedge. Stochastic simulation allows for a multi-modal prediction. Multiple materials: A rigid and a soft object interact with a cloth while falling on the ground. We show that stochastic physical environments are useful for intelligent systems by training reinforcement learning agents in two different scenarios involving various physical interaction types and materials. We use Proximal Policy Optimization as the model-free reinforcement learning method in all scenarios. Following prior work, we add a further baseline in which we use the deterministic environment and add Gaussian noise in the action space. Cube moving task: In this scenario, the agent learns to apply a sequence of forces to rigid cubes such that at least one of the cubes is pushed towards the goal region. The maximum length of the episode is 10. Here, the two cubes are transparent to each other and cannot collide. The stochasticity originates entirely from the uncertainty in the force application. We use a constant dropout rate of 0.1 in the force module of our physics predictor throughout the whole training. Ball hitting tower task: Tasks that require inducing collisions pose a significantly more difficult challenge to the RL agent. In this scenario, there is a stack of three rigid cubes and a ball to which the agent can apply force. The agent gets a reward for pushing the middle cube out of the tower. To achieve the goal, the agent needs to hit the tower with the ball to which it applies forces. The maximum episode length is 15 steps. Figure 7: Average episode length in the "cube moving task". We compare a deterministic environment against action space noise and 2 randomly seeded stochastic environments. The agent learns faster in stochastic environments through better initial exploration and converges to a shorter policy. Figure 8: Average episode length in the "ball-hits-tower task". We compare a deterministic environment against action space noise, a stochastic environment where the dropout rate is fixed, and 3 randomly seeded stochastic environments where the dropout rate is annealed. The agent finds shorter policies earlier in the training, indicating more efficient exploration in the stochastic environments. Figure 9: Policy comparisons. a) Cube moving task. Top: Policy learned in a deterministic environment (longer, 4 time steps). Bottom: Policy learned in a stochastic environment (shorter, 2 time steps). The first two frames are model inputs. The red cubes indicate the target position to which the green cubes have to be moved. b) Ball hitting tower task.
Agents in deterministic and stochastic environments converge to similar 4-step policies. The figure depicts one 4-step policy example. Cube moving task: In this example, introducing the action space noise leads to faster learning compared to learning in the deterministic environment. Training in stochastic physical environments outperforms both baselines, allowing for better exploration and the discovery of shorter policies. In Figure 9, we visualize the two learned policies, in stochastic and deterministic environments. Ball hitting tower task: In this scenario, the agent finds more efficient policies through the application of stronger forces during training, which results in gradually shorter policies as shown in Figure 8. The most effective learning method is learning in a stochastic environment with dropout rate annealing. We lower the dropout rate linearly from 0.1 at the start to 0 after 1200 training updates. This method allows for fast initial exploration, but does not introduce too much stochasticity when precision is needed as the agent begins applying strong forces later in the training. Without annealing, the randomness is too high and agents learn longer policies in a noisy training process, as indicated by the red curve in Figure 8. Furthermore, action space noise improves the pace at which the agent learns compared to the entirely deterministic environment. The policies learned by the presented methods do not significantly differ. An exemplary policy is visualized in Figure 9. Qualitatively, our stochastic HRN predicts plausible future trajectories; an experiment in which human subjects were asked to discriminate between ground-truth and predicted trajectories could be used to evaluate its performance quantitatively. Even though this method does not require extensive expert knowledge, a few design decisions have to be made, e.g. dropout rates for training and inference. During inference, too high of a dropout rate can lead to visually unrealistic dynamics and object interactions. Dropout rate scheduling during training should be investigated to improve convergence of the dynamics model, which may improve its performance as an environment for the reinforcement learning tasks. Possible optimizations include more complex, potentially non-linear, annealing schedules during inference, delaying the dropout rate annealing, and finding appropriate starting values. Finding a universal schedule that can be applied to any environment and task has large potential for accelerating reinforcement learning. Further improvements for the physics predictor are key for its use as a physical environment. These can include improvements for: scenarios with multiple materials in one scene, penetrations during collisions that can lead to insufficient position prediction, and generalization to new scenes. Our results show that the proposed sampling method produces physically plausible trajectories in single- and multi-object scenarios as well as across a range of materials. The quality of roll-outs, e.g. shape prediction, is not compromised by the introduced noise. Furthermore, our model-free reinforcement learning experiments indicate that agents learning in physically stochastic environments are able to explore better and learn more quickly, which confirms the quality of the sampled trajectories.
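The linear dropout-rate annealing used for the ball-hitting-tower agent amounts to a one-line schedule; the 0.1 starting value and the 1200-update horizon are the values quoted above, everything else is a straightforward sketch.

```python
def annealed_dropout_rate(update, start=0.1, end=0.0, anneal_updates=1200):
    """Linearly decay the dropout rate of the stochastic environment during policy training."""
    if update >= anneal_updates:
        return end
    return start + (end - start) * update / anneal_updates
```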
In difficult reinforcement learning scenarios, where a high level of precision is needed to get a reward, we demonstrated that dropout rate annealing is an effective method to avoid excessive randomness while not reducing the benefits of stochasticity for exploration in the early stages of training. In this regard, stochastic neural physics engines offer a clear advantage over conventional physics engines. In Figure 10, we present the effect of the dropout rate on the width of the predicted distribution of trajectories. In this example, dropout is activated in the force module. We compare distributions generated with dropout rates 0.5, 0.3 and 0.1. In Figure 11, we present the stochastic predictions in the "ball hitting tower" scenario. The dropout is active in the force and collision modules. The prediction variance corresponds to the dropout rate strength in both scenarios. In the "ball hitting tower" scenario, we observe highly complex behavior, with the cubes falling off the tower at different times and rates across the sampled trajectories.
We propose a stochastic differentiable forward dynamics predictor that is able to sample multiple physically plausible trajectories under the same initial input state and show that it can be used to train model-free policies more efficiently.
708
scitldr
There has been a large amount of interest, both in the past and particularly recently, into the relative advantage of different families of universal function approximators, for instance neural networks, polynomials, rational functions, etc. However, current research has focused almost exclusively on understanding this problem in a worst-case setting: e.g. characterizing the best L1 or L∞ approximation in a box (or sometimes, even under an adversarially constructed data distribution). In this setting many classical tools from approximation theory can be effectively used. However, in typical applications we expect data to be high dimensional, but structured -- so, it would only be important to approximate the desired function well on the relevant part of its domain, e.g. a small manifold on which real input data actually lies. Moreover, even within this domain the desired quality of approximation may not be uniform; for instance in classification problems, the approximation needs to be more accurate near the decision boundary. These issues, to the best of our knowledge, have remained unexplored until now. With this in mind, we analyze the performance of neural networks and polynomial kernels in a natural regression setting where the data enjoys sparse latent structure, and the labels depend in a simple way on the latent variables. We give an almost-tight theoretical analysis of the performance of both neural networks and polynomials for this problem, as well as verify our theory with simulations. Our results both involve new (complex-analytic) techniques, which may be of independent interest, and show substantial qualitative differences with what is known in the worst-case setting. The concept of representational power has always been of great interest in machine learning. In part the reason for this is that classes of "universal approximators" abound -- e.g. polynomials, radial bases, rational functions, etc. Some of these were known to mathematicians as early as Bernstein and Lebesgue 1 -- yet it is apparent that not all such classes perform well empirically. In recent years, the class of choice is neural networks, in tasks as simple as supervised classification and as complicated as reinforcement learning -- inspiring an immense amount of theoretical study. Research has focused on several angles of this question, e.g. comparative power to other classes of functions (; ; BID0), the role of depth and the importance of architecture (; ; BID6), and many other topics such as their generalization properties and choice of optimization procedure (BID7; BID0). Our results fall in the first category: comparing the relative power of polynomial kernels and ReLU networks -- with a significant twist, which makes our results more relevant to real-life settings. The flavor of existing results in this subject is roughly the following: every function in a class C1 can be approximately represented as a function in a different class C2, with some blowup in the size/complexity of the function (e.g. degree, number of nodes, depth). The unsatisfying aspect of such results is the "worst-case" way in which the approximation is measured: typically, one picks a domain coarsely relevant for the approximation (e.g. an interval or a box), and considers the L∞, L2, L1, ... norm of the difference between the two functions on this domain. In some of the constructions (e.g. BID6), the evaluation is even more adversarial: it's the mean-square error over a specially-designed measure.
Instead, in practically relevant settings, it's reasonable to expect that approximating a predictor function well only on some "relevant domain" would suffice, e.g. near the prediction boundary or near a lower-dimensional manifold on which the data lives, as would be the case in settings like images, videos, financial data, etc. A good image classifier need not care about "typical" data points from the L∞-ball, which mostly look like white noise. The difficulty with the above question is that it's not immediate how to formalize what the "relevant domain" is or how to model the data distribution. We tackle here a particularly simple (but natural) incarnation of this question: namely, when the data distribution has sparse latent structure, and all we ask is to predict a linear function of the latent variables based upon (noisy) observations. The assumption of sparsity is very natural in the context of realistic, high-dimensional data: sparsity under the correct choice of basis is essentially the reason that methods such as lossy image compression work well, and it is also the engine behind the entire field of compressed sensing (BID5). We will be considering a regression task where the data has a sparse latent structure. More precisely, we wish to fit pairs of (observables, labels) (X, Y) generated by a (latent-variable) process: • Sample a latent vector Z ∈ R^m from H, where H is a distribution over sparse vectors. • To produce X ∈ R^n, set X = AZ + ξ, where the noise ξ ∼ subG(σ²) is a subgaussian random vector with variance proxy σ² (e.g. N(0, σ²I)). • To produce Y ∈ R, we set Y = ⟨w, Z⟩. We hope the reader is reminded of classical setups like sparse linear regression, compressive sensing and sparse coding: indeed, this distribution on the data X is standard in these setups. In our setting, we additionally attach a regression task to this data distribution, wherein the labels Y are linearly generated 2 by a predictor w from the latent vector Z. Note our interest is slightly different than usual: in the traditional setup, we are interested in the statistical/algorithmic problem of inferring Z, given X as input (the former studying the optimal rates of "reconstruction" for Z, the latter efficient algorithms for doing so). In particular, we do not typically care about the particular form of the predictor as long as it is efficiently computable. By contrast, we want to understand how well different subsets of universal approximator families can fit the data points (X, Y). Namely, regardless of the specifics of the training procedure, the end result will be an element of some function class like a linear function of a kernel embedding of X, or a neural network. Therefore, we ask if these classes are rich enough to reconstruct Y given X accurately (i.e. compared to the Bayes-optimal estimator E[Y|X]): if the answer is negative, then we know our predictor will perform poorly, no matter the training method. We measure the performance of these estimators in the natural 3 distributional sense: the expected reconstruction error E[(Ŷ − Y)²]. Informally, what we will show is the following. Theorem (Informal). For the problem of predicting Y given X in the generative model for data described above, it holds that: (1) Small two-layer ReLU networks achieve close to the statistically optimal rate. (2) Polynomial predictors of degree lower than log m achieve a statistical rate which is substantially worse (in fact, in a certain sense, close to "trivial").
(3) Conversely, polynomial predictors of degree O((log n)²) achieve close to the statistically optimal rate. The lower bound in (2) is relevant since fitting a polynomial to data points of the form (x_i, y_i) requires searching through the space of multivariate polynomials of degree Ω(log m), which has dimension m^Ω(log m), and thus even writing down all of the variables in this optimization problem takes super-polynomial time. Practical aspects of using polynomial kernels even with much lower degree than this have been an important concern and topic of empirical research; see for example BID2 and references within. On the other hand, the upper bound in (3) shows that our analysis is essentially tight: greater than polylog(m) degree is not required to achieve good statistical performance, which is qualitatively different from the situation in worst-case analyses (see Section 4.2.2 for more details). Our mathematical analysis closely matches the observed behavior in experiments: see Section 6. For formal statements of the theorems, see Section 4. There has been a large body of work studying the ability of neural networks to approximate polynomials and various classes of well-behaved functions, such as recent work (; ; BID0). These results exclusively focus on the worst-case setting where the goal is to find a network close to some function in some norm (e.g. L∞- or L1-norm, often under an adversarially chosen measure). In contrast, there is little work on the problem of approximating ReLU networks by polynomials, mostly because it is well-known by classical results of approximation theory (; BID3) that polynomials of degree Ω(1/ε) are required to approximate even a single ReLU function within error ε in L∞-norm on [−1, 1]. On the other hand, we will show that if we do not seek to achieve ε-error everywhere for the ReLU (in particular not near the non-smooth point at 0), we can build good approximations to ReLU using polynomials of degree only O(log²(1/ε)) (see discussion in Section 4.2.2 and Theorem 5.2). Because of the trivial Ω(1/ε) lower bound for worst-case approximation of ReLU networks by polynomials, BID0 studied the related problem of approximating a neural network by rational functions. (A classical result of approximation theory shows that rational functions of degree O(log²(1/ε)) can get within ε-error of the absolute value function.) In particular, BID0 shows that rational functions of degree polylog(1/ε) can get within distance ε in L∞-norm of bounded-depth ReLU neural networks. Somewhat related is also the work of BID8, who considered neural networks with quadratic activations and related their expressivity to that of sigmoidal networks in the depth-2 case, building on earlier results for approximating sigmoids. That result is also proved using complex-analytic tools, though the details are substantially different. A related work studied the power of kernel regression methods to simulate a certain class of neural networks. More precisely, they bounded the ℓ2 norm of kernel regression models approximating neural networks with bounded depth, "nice" activation functions (not including ReLU), and small input and edge weights. By standard generalization theory, this gives a corresponding sample complexity for improper learning via kernels. In our setting, their result does not apply: first, the network of interest has ReLU activations; even ignoring this issue, their bounds would be roughly exponential in n because the ℓ2 norm of the network's input vector is large, of order Θ(σ√n).
There is a vast literature on high-dimensional regression and compressed sensing which we do not attempt to survey, since the main goal of our paper is not to develop new techniques for sparse regression but rather to analyze the representation power of kernel methods and neural networks. Some relevant references for sparse recovery can be found in the standard texts. We only emphasize that the upper bound via soft thresholding we show (Theorem 4.1) is implicit in the literature on high-dimensional statistics; we include the proofs here solely for completeness. In this section we will give formal statements of the results and give some insight into the techniques used. First, we state the assumptions on the parameters of our generative model: • Z is sparse: more precisely, |supp(Z)| ≤ k and ||Z||_1 ≤ M with high probability. • A is a µ-incoherent n × m matrix, which means that ||AᵀA − I||_∞ ≤ µ for some µ ≥ 0. • ||w||_∞ = 1 (w.l.o.g., since changing the magnitude of w rescales Y). The assumption on A is standard in the literature on sparse recovery (see reference texts). In general one needs an assumption like this (or a stronger one, such as the RIP property) in order to guarantee that standard algorithms such as LASSO actually work for sparse recovery. For the reader not familiar with this literature, this property is a proxy for the matrix being "random-like" -- e.g. a matrix with i.i.d. entries of the form ±1/√n has µ = O(1/√n), even when m ≫ n. We also note that for notational convenience, we will denote ||A||_∞ = max_{i,j} |A_{i,j}|. Before proceeding to the results, we note that the first-time reader may freely assume that µ = 0 and n = m; the results are still interesting in this setting and no important technical idea is needed for the more general case. For the upper bounds, we have included results for the more general setting (with µ ≥ 0) to show that our results are relevant even to very high-dimensional settings where m ≫ n. We have only proven the lower bound in the case µ = 0: this is the easiest setting for algorithms, which makes the lower bounds the strongest. We prove the following theorem, which shows that small 2-layer ReLU networks can achieve an almost optimal statistical rate. Let us denote the soft threshold function with threshold τ as ρ_τ(x) := sgn(x) max(0, |x| − τ) = ReLU(x − τ) − ReLU(−x − τ). Let us introduce the notation ρ_τ^{⊗m} to denote the map given by applying ρ_τ coordinate-wise to a vector in R^m. Consider the following estimator (for Y), corresponding to a 2-layer neural network: Ŷ_NN := ⟨w, ρ_τ^{⊗m}(AᵀX)⟩, for a threshold τ = Θ(σ√((1 + µ) log m) + µM). We can prove the following result for the estimator (see Appendix A of the supplement): Theorem 4.1 (2-layer ReLU). With high probability, the estimator Ŷ_NN satisfies DISPLAYFORM1. Notice that the size of the ReLU net is comparable to the input: one of the layers has the same dimension as A, the other the same dimension as w. Furthermore, to interpret this result, recall that we think of µ as quite small -- in particular µ ≪ 1. Thus the error of the estimator is essentially O(σ²k² log m), i.e. essentially |σ| error "per non-zero coordinate". It can be shown that this upper bound is nearly information-theoretically optimal (see Remark B.1), except that there is an additional factor of k. This additional factor is artificial and can be removed with added technical effort; we show how to do this in the µ = 0 case in Theorem A.1. We emphasize that the analysis of this kind of soft thresholding estimator is implicit in much of the literature on sparse linear regression.
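A small numerical sketch of the generative model and of the soft-thresholding network Ŷ_NN = ⟨w, ρ_τ(AᵀX)⟩ described above; the dimensions, noise level, signal scale and threshold constant are illustrative choices rather than values prescribed by the theorems.

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 256; k = 5; sigma = 0.06; gamma = 1.0

A = np.linalg.qr(rng.standard_normal((n, m)))[0]      # random orthogonal dictionary (mu = 0 case)
w = rng.choice([-1.0, 1.0], size=m)

def sample_batch(size):
    support = rng.random((size, m)) < k / m           # each coordinate non-zero w.p. k/m
    Z = support * gamma * rng.standard_normal((size, m))
    X = Z @ A.T + sigma * rng.standard_normal((size, n))
    return X, Z @ w

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

X, Y = sample_batch(2000)
tau = 2.0 * sigma * np.sqrt(np.log(m))                # threshold of order sigma * sqrt(log m)
Y_hat = soft_threshold(X @ A, tau) @ w                # the 2-layer ReLU estimator of Section 4.1
print("soft-threshold MSE:", np.mean((Y_hat - Y) ** 2), " trivial MSE:", np.mean(Y ** 2))
```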
For completeness, we include a complete and self-contained proof of Theorem 4.1 in Section A. We first show that polynomials of degree smaller than O(log m) essentially cannot achieve a "non-trivial" statistical rate. This holds even in the easiest case for the dictionary A: when it is the identity matrix. More precisely, we consider the situation in which A is an orthogonal matrix (i.e. µ = 0, m = n), w ∈ {±1}^m, the noise distribution is Gaussian N(0, σ²I), and the entries of Z are independently 0 with probability 1 − k/m and N(0, γ²) with probability k/m. Then we show: Theorem 4.2. Suppose k < m/2 and f is a multivariate degree-d polynomial. Then DISPLAYFORM0. To parse the result, observe that the numerator is of order γ²k, which is the error of the trivial estimator 7, and the denominator is close to 1 unless d is sufficiently large with respect to m. More precisely, assuming the signal-to-noise ratio γ/σ does not grow too quickly with respect to m, we see that the denominator is close to 1 unless DISPLAYFORM1. On a technical note, we observe that this statement is given in expectation, but a similar one can be made with high probability; see Remark B.2. The lower bound of the previous section leaves open the possibility that polynomials of degree O(polylog(m)) still do not suffice to perform sparse regression and solve our inference problem. Indeed, it is a well-known fact (see e.g. BID0) that approximating a single ReLU to ε-closeness in infinity norm on [−1, 1] requires polynomials of degree poly(1/ε); this follows from standard facts in approximation theory (BID3), since ReLU is not a smooth function. Proceeding with this "worst-case" way of thinking, our upper bound follows by substituting a polynomial approximation to ReLU into our neural network construction; since estimates for Y typically accumulate error from estimating each of the m coordinates of Z, to guarantee accurate reconstruction we would need the error on each coordinate to be small. Plugging in the best approximation to ReLU in infinity norm, we would need an Ω(√m)-degree polynomial for this to yield a multivariate polynomial with similar statistical performance to the 2-layer ReLU network which computes Ŷ_NN. Thus, naively, we might suspect that the degree of the kernel needs to be as high as √m to get a reasonable approximation. Surprisingly, we show this intuition is incorrect! In fact, we show that using only a polylog(m)-degree polynomial, our converted ReLU network has similar statistical performance. Formally, this is summarized by the following theorem, where Ŷ_{d,M} is the corresponding version of Ŷ_NN formed by replacing each ReLU by our polynomial approximation. DISPLAYFORM0 With high probability, the estimator Ŷ_{d,M} satisfies DISPLAYFORM1. Footnotes: (7) I.e. the estimator which always returns 0, without looking at the data. (8) As in Theorem 4.1, there is a spurious factor of k in this bound which can be removed with additional technical effort. In particular, in the µ = 0 case we can remove it using the same argument as Theorem A.1; details are omitted.
As previously mentioned, this kind of result is well known in the literature on sparse regression and we include a proof primarily for completeness. The intuition is simple: the estimator Ẑ_NN can make use of the non-linearity in the soft threshold to zero out the coordinates in the estimate AᵀX which are small and thus "reliably" not in the support of the true z. Thus, the estimator only makes mistakes on the non-zero coordinates. The full proofs are in Section A. The proof of Theorem 4.2 has two main ideas, which we detail below: (1) a structural lemma, which shows that the optimal predictor has a "decoupled" structure along the coordinates of the latent variable; (2) an analysis of this decoupled estimator using a bias-variance calculation in an appropriately chosen basis. The full proofs of this section are in Appendix B. As explained above, our structural lemma shows that the optimal low-degree polynomial estimator decouples along the coordinates of the latent variable. In order to understand why this should be true, first observe that the optimal estimator for Y = ⟨w, Z⟩ given X has a particularly simple structure. Concretely, the optimal estimator is the conditional expectation E[⟨w, Z⟩ | X] = Σ_i w_i E[Z_i | X], so the optimal estimator for Y simply reconstructs Z as well as possible coordinate-wise, then takes an inner product with w. With this in mind, note the coordinates of Z are independent in our setting, so optimal estimation of Z_i should not depend in any way on reconstructing Z_j for j ≠ i. This allows us to show that the optimal polynomial of degree d to estimate Y has no "mixed monomials" in an appropriate basis. This is the content of the next lemma, whose proof is in Appendix B. Lemma 5.1. Suppose X = AZ + ξ where A is an orthogonal m × m matrix, Z has independent entries and ξ ∼ N(0, σ²I). Then there exists a unique minimizer f*_d over all degree-d polynomials f_d of the square loss E[(f_d(X) − Y)²], and furthermore f*_d has no mixed monomials. In other words, we can write f*_d(x) = Σ_i f_i(x_i) for univariate polynomials f_i of degree at most d. Once we have reduced to considering estimators with decoupled structure, it becomes feasible to analyze the performance of all possible low-degree polynomials using a bias-variance calculation in a carefully chosen basis. This is the second (and more involved) step in the proof. In order to perform the calculation, we need to apply Fourier-analytic methods, so we need to switch to an orthonormal basis. Since the noise we chose for the lower bound instance is Gaussian 9, a natural choice is the Hermite polynomials. We review the definition of the Hermite polynomials in Appendix B, but for the purposes of this proof overview, the Hermite polynomials are polynomials H_n(x) indexed by multi-indices n ∈ N_0^m with the important property that they are orthogonal with respect to the standard m-variate Gaussian distribution, namely DISPLAYFORM0. From this, we can derive Plancherel's theorem in this basis: DISPLAYFORM1. We use this theorem, along with the structural Lemma 5.1, to perform a bias-variance tradeoff analysis of any predictor: namely, we show that (1) if the Fourier coefficients |f̂(n)| are large, then the estimator will be very sensitive to noise (i.e. has too high of a variance); (2) on the other hand, if |f̂(n)| is small and f is low-degree, then the estimator cannot match the correct mean well regardless of noise (i.e.
has too high of a bias). Efficient application of Plancherel's theorem is key to proving both results: in the first case, we apply it over the randomness in the noise ξ, and in the second case, we apply it over the randomness in the latent vector Z, which has Gaussian entries conditioned on its support. Note that when f is sufficiently high-degree, it can effectively take advantage of the difference in scales between the noise and the signal to achieve both low bias and low variance simultaneously: see the following upper bound section for details. As previously mentioned, it is a result from classical approximation theory that no low-degree polynomial is close to the ReLU function on all of [−1, 1]. The crux of these results is that it is hard to approximate ReLU well at 0, its point of non-smoothness. However, in our setting, precisely approximating ReLU everywhere is not important for getting a good regression rate: instead, the approximation needs to be very close to 0 when the input is negative, and only very coarsely accurate otherwise. The reason for this is the intuition we described for 2-layer ReLU networks: the property of ReLU that is useful in this setting is its "denoising" ability -- the fact that it zeroes out negative inputs. Consequently, we design a polynomial approximation to ReLU of degree O(log² n) which sacrifices accuracy near the point of non-smoothness in favor of closeness to 0 in the negative region. More precisely, we prove the following theorem, in which the parameter τ controls the trade-off between the polynomial p_d being close to 0 for x < 0 and being close to x for x > 0. Theorem 5.2. Suppose R > 0, 0 < τ < 1/2 and d ≥ 7. Then there exists a polynomial DISPLAYFORM0 and, for x ∈ [0, R], DISPLAYFORM1. The proof of this theorem proceeds in two steps: First, one takes a "soft-max" mollification of ReLU of the form g_β(x) := (1/β) log(1 + e^{βx}) with an appropriately tuned β, so that g_β is sufficiently close to ReLU. Second, if β is not too large, we prove that the poles (in the complex plane) of the function g_β stay far enough away from the real interval of interest that g_β can be approximated well by a low-degree polynomial. The full proofs are in Appendix C. Finally, we provide synthetic experiments to verify the predictions from Theorem 4.2 and Theorem 4.3. The setup is as follows: we generate a large synthetic data set (with n = m and µ = 0) in the following fashion: • A is a random orthogonal matrix and w is sampled from an n-dimensional standard Gaussian. • Z ∈ R^n is sampled by including each coordinate with probability k/n, and sampling a standard Gaussian for each included coordinate. • X and Y are sampled according to the generative model in Section 2, using Gaussian noise with standard deviation σ. For each fixed degree, we fit a polynomial using least-squares regression, and evaluate the performance on a corresponding test set 10 generated in the same fashion (reusing the same A and w). Solving the regression problem for large degrees is intractable using standard training methods; to overcome this issue, we used the structural observation in Lemma 5.1 to reduce the regression problem of estimating Y from X to that of estimating Z_i given X_i, which is a much lower-dimensional problem. The results of the experiment are shown in Figure 1, graphed on a log scale. All experiments were run with k = 5 and σ = 0.06. We see that for low degrees, i.e. before our prediction error is close to the information-theoretic limit, the log-error decays roughly linearly with respect to the polynomial degree. This matches the prediction of the lower bound in Theorem 4.2 after taking the log of the right-hand side.
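The decoupled fitting procedure used in these experiments can be sketched as follows: thanks to Lemma 5.1, each coordinate of Z is regressed on the corresponding coordinate of AᵀX with a univariate polynomial, and the fits are combined with the weights w. Function and variable names are ours, not the original experiment code.

```python
import numpy as np

def fit_decoupled_polynomial(X, Z, A, w, degree):
    """Fit Y = <w, Z> by regressing each Z_i on X'_i = (A^T X)_i with a degree-d polynomial."""
    Xp = X @ A                                        # rotate into the basis where coordinates decouple
    coefs = [np.polynomial.polynomial.polyfit(Xp[:, i], Z[:, i], degree)
             for i in range(A.shape[1])]

    def predict(X_new):
        Xn = X_new @ A
        Z_hat = np.stack([np.polynomial.polynomial.polyval(Xn[:, i], c)
                          for i, c in enumerate(coefs)], axis=1)
        return Z_hat @ w

    return predict
```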
For completeness, we also evaluate the baseline 2-layer ReLU network described in Section 4.1 in the same experimental setup. Table 1 shows the test error of the baseline 2-layer ReLU network and, for comparison, the best polynomial of degree 17 in the same experiment. Despite the high degree, the ReLU network is still slightly better. In this paper, we considered the problem of providing representation lower and upper bounds for different classes of universal approximators in a natural statistical setup that exhibits sparse latent structure. We hope this will inspire researchers to move beyond the worst-case setup when considering the representational power of different function classes. Figure 1: Degree vs. log L2 error on the test set for different values of n, the dimensionality of the problem. This plot was generated using a training set of 8000 examples from the generative model and a test set of 1000 additional examples; error is unnormalized. The techniques we develop are interesting in their own right: unlike standard approximation theory setups, we need to design polynomials which only need to be accurate in certain regions. Conceivably, in classification setups, similar wisdom may be helpful: the approximator only needs to be accurate near the decision boundary. Finally, we conclude with a tantalizing open problem: in general it is possible to obtain non-trivial sparse recovery guarantees for LASSO even when the sparsity k is nearly of the same order as n, under assumptions such as RIP. Since LASSO can be computed quickly using iterated soft thresholding (ISTA and FISTA), we see that sufficiently deep neural networks can compute a near-optimal solution in this setting as well. It would be interesting to determine whether shallower networks and polynomials of degree polylog(n) can achieve similar guarantees. We will first prove a bound on the error of the soft-thresholding estimator Ẑ_NN (Lemma A.2), which corresponds to the hidden layer of the neural network: this is essentially a standard fact in high-dimensional statistics (see reference texts). The idea is that the soft thresholding will correctly zero out most of the coordinates outside the support while adding only a small additional error to the coordinates in the support. From the recovery guarantee for Ẑ_NN, we will then deduce Theorem 4.1. Towards proving the above results, we first need an estimate on the bias of AᵀX, i.e. the error without noise: DISPLAYFORM0 Proof. We have DISPLAYFORM1, so applying the incoherence assumption we have DISPLAYFORM2. Using this we can analyze the error in thresholding. Lemma A.2. Suppose A is µ-incoherent, i.e. ||AᵀA − I||_∞ ≤ µ. Let z be an arbitrary fixed vector such that ||z||_1 ≤ M and |supp(z)| ≤ k. Suppose x = Az + ξ where ξ ∼ N(0, σ²I_{n×n}). Then for some τ = Θ(σ√((1 + µ) log m) + µM) and ẑ = ρ_τ^{⊗n}(Aᵀx), with high probability we have ||ẑ − z||_∞ ≤ 2τ and supp(ẑ) ⊂ supp(z). Proof. Observe that Aᵀx = z + (AᵀA − I)z + Aᵀξ. Note that entry i of Aᵀξ is ⟨A_i, ξ⟩ where ||A_i||_2² ≤ (1 + µ), so (Aᵀξ)_i is subgaussian with variance proxy at most σ²(1 + µ). By concentration and a union bound, with high probability all coordinates not in the true support are thresholded to 0. Similarly, we see that for each of the coordinates in the support, an error of at most 2τ is made. From the above lemma, we can easily prove the main theorem of this section: Proof of Theorem 4.1.
When the high probability above event happens, we have the following upper bound by Holder's inequality: DISPLAYFORM3 For the lower bounds we will be interested mostly in the case when µ = 0, i.e. A is orthogonal and so m = n, the coordinates of Z are independent and each is nonzero with probability at most k/n, and the noise is Gaussian. Then the error estimate we had in the previous theorem specializes to O(σ 2 k 2 log(n)), but under these assumptions we know that the information-theoretic optimal is actually σ 2 k log(n). While not very important to the flow of the paper, for completeness we can improve the analysis to eliminate the extra factor of k, without changing the algorithm: Theorem A.1. Suppose A is orthogonal (hence m = n), the coordinates of Z are independent, and ξ ∼ N (0, σ 2 I). Then DISPLAYFORM4 Proof. In this case, we have A X = Z + ξ where ξ ∼ N (0, σ 2 I). Therefore the coordinates ofẐ are independent of each other, and so we see DISPLAYFORM5. DISPLAYFORM6 where the first inequality follows as in Lemma A.2, the second inequality uses that |ρ τ (x) − x| ≤ τ, the third uses that (a + b) 2 = a 2 + 2ab + b 2 ≤ 2a 2 + 2b 2 by Young's inequality, and the last inequality follows from standard tail bounds on Gaussians. We see the last expression is O(kσ 2 log(m)) so we have proved the . In this section, prove the lower bounds for polynomial kernels. We recall the lower bound instance: the noise distribution is N (0, σ 2 Id) and the distribution for Z is s.t. every coordinate is first chosen to be non-zero with probability k/n, and if it is non-zero, it's set as an independent sample from N (0, γ 2). This construction makes Z approximately k-sparse with high probability while making its coordinates independent. We choose A as an arbitrary orthogonal matrix, so m = n. We choose w to be an arbitrary ±1 sign vector, so w 2 i = 1 for every i. As a warmup, we first show that linear predictors, and subsequently fixed low degree polynomials cannot achieve the information-theoretic rate 12 of O(σ 2 k log n) -in fact, we will show that they achieve a "trivial" rate. Furthermore, we will show that even if the degree of our polynomials is growing with n, if d = o(log n/ log log n) the state of affairs is similar. As a warmup, and to illustrate the main ideas of the proof techniques, we first consider the case of linear predictors. (i.e. kernels of degree 1.)The main idea is to use a bias-variance trade-off: namely, we show that the linear predictor we use, say f (x) = w, x either has to have too high of a variance (when w is large), or otherwise has too high of a bias. (Recall, the bias captures how well the predictor captures the expectation.)We prove: DISPLAYFORM0 Before giving the proof, let us see how the theorem should be interpreted. The trivial estimator which always returns 0 makes error of order γ 2 K and a good estimator (such as thresholding) should instead make error of order σ 2 K log n when γ >> σ √ log n. The next theorem shows that as long as the signal to noise ratio is not too high, more specifically as long as γ 2 (k/n) = o(σ 2), any linear estimator must make square loss of Ω(γ 2 k), i.e. not significantly better than the trivial 0 estimate. Note that the most interesting (and difficult) regime is when the signal is not too much larger than the noise, e.g. γ 2 = σ 2 polylog(n) in which case it is definitely true that γ 2 (k/n) << σ 2.Proof. 
Note that w, x − y = w, Az + ξ − w, z = A w − w, z + w, ξ which gives the following bias-variance decomposition for the square loss: DISPLAYFORM1 where in the second-to-last step we used that the covariance matrix of Z is γ 2 (k/n)I, and in the last step we used that A is orthogonal. Now observe that if we fix R = w 2, then by the Pythagorean theorem the minimizer of the square loss is given by the projection of Aw onto the R-dilated unit sphere, sow = R 2 /m(Aw) since Aw 2 = w 2 = √ m. In this case the square loss is then of the form DISPLAYFORM2 12 See Remark B.1 for why this is the optimal information-theoretic rate. and the risk is minimized when DISPLAYFORM3 so the minimum square loss is DISPLAYFORM4 B.2 STRUCTURE OF THE OPTIMAL ESTIMATOR: PROOF OF LEMMA 5.1Proof of Lemma 5.1. Let X = A X, so by orthogonality X = Z + ξ where ξ ∼ N (0, σ 2 Id). Observe that if we look at the optimum over all functions f, we see that DISPLAYFORM5 where where in the first step we used that the conditional expectation minimizes the squared loss, in the second step we used linearity of conditional expectation, and in the last step we used that Z i is independent of X =i.By the Pythagorean theorem, the optimal degree d polynomial f * d is just the projection of i w i E[Z i |X i] onto the space of degree d polynomials. On the other hand observe that DISPLAYFORM6 is just the projection of each of the E[Z i |X i]. Therefore f * d has no mixed monomials. Remark B.1. The previous calculation shows additionally that the problem of minimizing the squared loss for predicting Y is equivalent to that of minimizing the squared loss for the sparse regression problem of recovering Z. It is a well-known fact that the information theoretic rate for sparse regression (with our normalization convention) is Θ(σ 2 k) (see for example ), and so the information-theoretic rate for predicting Y is the same, and is matched by Theorem A.1. We recall that the lower bound for polynomials combines the observation of Lemma 5.1 with a bias-variance tradeoff calculation using Fourier analysis on orthogonal polynomials. Concretely, since the noise we chose for the lower bound instance is Gaussian, the most convenient basis will be the Hermite polynomals. We recall the probabilist's Hermite polynomial He n (x), defined by the recurrence relation DISPLAYFORM0 where He 0 (x) = 1, He 1 (x) = x. In terms of this, the normalized Hermite polynomial H n (x) is DISPLAYFORM1 It's easy to see the polynomials H n (x) form an orthogonal basis with respect to the standard m-variate Gaussian distribution. As a consequence, we get DISPLAYFORM2 which gives us Plancherel's theorem: DISPLAYFORM3 We can use Plancherel's theorem to get lower bounds on the noise sensitivity of degree d polynomials. This will be an analogue of the variance. DISPLAYFORM4 Proof. First we suppose Z (and thus Y) is fixed and consider the randomness of the noise. Let S denote the support of Z. DISPLAYFORM5 so by expanding out f Z in terms of the fourier expansion of f, we see f Z (n) = f (n) for n such that supp(n) ⊂ S. Finally the probability n ⊂ S for n = 0 is upper bounded by the probability a single element of its support is in S, which is k/n. Therefore DISPLAYFORM6 which proves the . Next we give a lower bound for the bias, showing that if f =0 2 2 is small for a low-degree polynomial, it cannot accurately predict y. Here we will assume f is of the form given by Lemma 5.1. Lemma B.2 (Low variance implies high bias). 
Suppose f is a multivariate polynomial of degree d with no mixed monomials, i.e. f (x) = i f i (x i) where f i is a univariate polynomial of degree d. Expand f in terms of Hermite polynomials as f (x) = n f (n)H n (x/σ). Then DISPLAYFORM7 Before proving the lemma, let us see how it proves the main theorem:Proof of Theorem 4.2. By Lemma 5.1, Lemma B.1, and Lemma B.2 we have that for the f which minimizes the square loss among degree d polynomials, we have a variance-type lower bound DISPLAYFORM8 where the second equality is by the law of total expectation and the last inequality is Cauchy-Schwarz. Using the recurrence relation, we can bound the sum of the absolute value of the coefficients of DISPLAYFORM9 We can also bound the moments of the absolute value of a Gaussian by DISPLAYFORM10 Therefore by Holder's inequality DISPLAYFORM11 Therefore by reverse triangle inequality DISPLAYFORM12 We make a few remarks regarding the in this section. Recall that γ 2 k is the square loss of the trivial zero-estimator. Suppose as before that γ = Θ(σ 2 polylog(n)), then we see that if d = o(log n/ log log n) then the denominator of the lower bound tends to 1, hence any such polynomial estimator has a rate no better than that of the trivial zero-estimate. It is possible to derive a similar statement to Theorem 4.2 that holds with high probability instead of in expectation for polynomials of degree o(log n/ log log n). All that is needed is to bound the contribution to the expectation from very rare tail events when the realization of the noise ξ is atypically large. Since the polynomials we consider are very low degree o(log n/ log log n), they can only grow at a rate of x d = x o(log(n)/ log log n); thus standard growth rate estimates (e.g. the Remez inequality) combined with the Gaussian tails of the noise can be used to show that a polynomial which behaves reasonably in the high-probability region (e.g. which has small w.h.p. error) cannot contribute a large amount to the expectation in the tail region. In this section, we construct polynomials achieving close to the information-theoretic optimal rate of degree only O(log 2 m). Recall this is nearly optimal due to our previous lower bound of Ω(log n).As previously mentioned, the key technical here will be Theorem 5.2, giving the construction of a new polynomial approximation to ReLU. Before proceeding to the proof of that theorem, we show how it implies the final , Theorem 4.3.Towards that, we substitute our polynomial construction for ρ τ into our ReLU neural network and derive the analogous version of Lemma A.2. First, define M τ = M + 2τ and let DISPLAYFORM0 where p is the polynomial constructed in Theorem 5.2. We then have: DISPLAYFORM1 and for |x| ≤ τ we have DISPLAYFORM2 By the guarantee of Theorem 5.2, we see that for for |x| ≤ τ that DISPLAYFORM3 Thus we see that taking d = Ω(Mτ τ log 2 ( Mτ τ)) suffices to make the latter expression at most. Similarly for |x| > τ we know that DISPLAYFORM4 and taking d = Ω(Mτ τ log 2 (Mτ τ)) with sufficiently large constant guarantees the middle term is at most τ and the last term is at most.Using this, we can show that if we use a polynomial of degree Ω((M/σ √ log n) log 2 m) we can achieve similar statistical performance to the ReLu network: DISPLAYFORM5, then with high probability we have ẑ − z 1 ≤ 6kτ.Proof. Apply Lemma C.1 with = τ /m. 
Then we see for |x| ∈ (τ, M τ) we havn DISPLAYFORM6 and for |x| ≤ τ we have DISPLAYFORM7 Note that entry i of A ξ is A i, ξ where A i 2 2 ≤ (1 + µ) so (A t ξ) i is Gaussian with variance at most σ 2 (1 + µ).By choosing τ with sufficiently large constant, then applying the sub-Gaussian tail bound and union bound, with high probability all coordinates not in the true support are thresholded to at most τ /m. Similarly we see that for each of the coordinates in the support, an error of at most 5τ is made. Therefore ẑ − z 1 ≤ 5kτ + m(τ /m) ≤ 6kτ. Finally, we return to the proof of the key Theorem 5.2:Proof of Theorem 5.2. We start with the case where R = 1/2. We build the approximation in two steps. First we approximate ReLu by the following "annealed" version of ReLu, for parameters β > π, τ > 0 to be optimized later: DISPLAYFORM8 f β,τ (x) = g β (x − τ).Observe that when we look at negative inputs, g β (−x) = For the second step,, we need to show f β can be well-approximated by low-degree polynomials. In fact, because f β is analytic in a neighborhood of the origin, it turns out that its optimal rate of approximation is determined exactly by its complex-analytic properties. More precisely, define D ρ to be the region bounded by the ellipse in C = R 2 centered at the origin with equation For our application we need only the upper bound and we need a quantitative estimate for finite n. Following the proof of the upper bound in , we get the following :Theorem C.2. Suppose f is analytic on the interior of D ρ1 and |f (z)| ≤ M on the closure of D ρ1. Then DISPLAYFORM9 The proof is fairly simple: by writing f in terms of cos(x) one gets an expansion into Chebyshev polynomials and it suffices to bound the coefficients of the corresponding Fourier series: to do this, we write them as integrals over the unit circle, and use the analyticity assumption on D ρ1 to contour shift the integral to a different circle, which immediately gives us the desired exponential decay. For details see BID3 ).We will now apply this theorem to g β. First, we claim that g β is analytic on D ρ1 where ρ 1 is the solution to this equation for the semi-axis of the ellipse: DISPLAYFORM10 which is DISPLAYFORM11 To see this, first extend log to the complex plane by taking a branch cut at (−∞, 0]. To prove g β is analytic on D ρ1, we just need to prove that 1 + e βz avoids (−∞, 0] for z ∈ D ρ1. This follows because by the definition of ρ 1, for every z ∈ D ρ1, (z) < π 2β hence (1 + e βz) ≥ 1. We also see that for z ∈ D ρ1, |g β (z)| = 1 β | log(1 + e βz)| ≤ 1 β sup w∈D βρ 1 | log(1 + e w)| ≤ 1 β (log(1 + e β) + π) < 6.Therefore by Theorem C.2 we have DISPLAYFORM12 where in the last step we used that 1 + x ≥ exp(x/2) for x < 1/2 and that β > π.
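The two-step construction behind Theorem 5.2 can also be reproduced numerically: approximate the mollifier g_β(x) = (1/β) log(1 + e^{βx}) (shifted by τ) with a least-squares fit in the Chebyshev basis of moderate degree. The specific values of β, τ and the degree below are illustrative, not the constants from the proof.

```python
import numpy as np

def relu_poly(beta=20.0, tau=0.1, degree=40, radius=1.0, n_pts=4000):
    """Low-degree polynomial that is tiny on the negatives and close to x - tau on the positives."""
    x = np.linspace(-radius, radius, n_pts)
    g = np.logaddexp(0.0, beta * (x - tau)) / beta         # numerically stable soft-max mollifier
    coefs = np.polynomial.chebyshev.chebfit(x, g, degree)  # least-squares fit in the Chebyshev basis
    return lambda t: np.polynomial.chebyshev.chebval(t, coefs)

p = relu_poly()
xs = np.linspace(-1.0, 1.0, 5)
print(np.round(p(xs), 4))                      # roughly 0 on the negatives, close to x - tau above tau
print(np.round(np.maximum(xs - 0.1, 0.0), 4))  # shifted ReLU for comparison
```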
Beyond-worst-case analysis of the representational power of ReLU nets & polynomial kernels -- in particular in the presence of sparse latent structure.
709
scitldr
Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation. To overcome these major issues, very recent works have shown the interest of studying the semantics of the latent space of generative models. In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model, along which we can move to control precisely specific properties of the generated image, like the position or scale of the object in the image. Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders. With the success of recent generative models to produce high-resolution photo-realistic images (; ;), an increasing number of applications are emerging, such as image in-painting, dataset synthesis, and deep-fakes. However, the use of generative models is often limited by the lack of control over the generated images. Indeed, more control could, for instance, be used to improve existing approaches which aim at generating new training examples by allowing the user to choose more specific properties of the generated images. First attempts in this direction showed that one can modify an attribute of a generated image by adding a learned vector to its latent code or by combining the latent codes of two images . Moreover, the study of the latent space of generative models provides insights about its structure, which is of particular interest as generative models are also powerful tools to learn unsupervised data representations. For example, it was observed on auto-encoders trained on datasets with labels for some factors of variation that their latent spaces exhibit a vector space structure where some directions encode the said factors of variation. We suppose that images result from underlying factors of variation such as the presence of objects, their relative positions or the lighting of the scene. We distinguish two categories of factors of variation. Modal factors of variation are discrete values that correspond to isolated clusters in the data distribution, such as the category of the generated object. On the other hand, the size of an object or its position are described by continuous factors of variation, expressed in a range of possible values. As humans, we naturally describe images by using factors of variation, suggesting that they are an efficient representation of natural images. For example, to describe a scene, one likely enumerates the objects seen, their relative positions and relations, and their characteristics . This way of characterizing images is also described in the literature. Thus, explaining the latent space of generative models through the lens of factors of variation is promising. However, the control over the image generation is often limited to discrete factors and requires both labels and an encoder model. Moreover, for continuous factors of variation described by a real parameter t, previous works do not provide a way to get precise control over t.
In this paper, we propose a method to find meaningful directions in the latent space of generative models that can be used to control precisely specific continuous factors of variation, while the literature has mainly tackled semantic labeled attributes like gender, emotion or object category . We test our method on image generative models for three factors of variation of an object in an image: vertical position, horizontal position and scale. Our method has the advantage of requiring neither a labeled dataset nor a model with an encoder. It could be adapted to other factors of variation such as rotations, changes of brightness, contrast, color or more sophisticated transformations like local deformations. However, we focused on position and scale as these are quantities that can be evaluated, allowing us to measure quantitatively the effectiveness of our method. We demonstrate both qualitatively and quantitatively that such directions can be used to control precisely the generative process, and show that our method can reveal interesting insights about the structure of the latent space. Our main contributions are: • We propose a method to find interpretable directions in the latent space of generative models, corresponding to parametrizable continuous factors of variation of the generated image. • We show that properties of generated images can be controlled precisely by sampling latent representations along linear directions. • We propose a novel reconstruction loss for inverting generative models with gradient descent. • We give insights into why inverting generative models with optimization can be difficult by reasoning about the geometry of the natural image manifold. • We study the impact of disentanglement on the ability to control generative models. We argue that it is easier to modify a property of an image than to obtain a label describing that property. For example, it is easier to translate an image than to determine the position of an object within said image. Hence, if we can determine the latent code of a transformed image, we can compute its difference with the latent code of the original image to find the direction in the latent space which corresponds to this specific transformation, as in prior work. Let us consider a generative model G: z ∈ Z → I, with Z its latent space of dimension d and I the space of images, and a transformation T_t: I → I characterized by a continuous parameter t. For example, if T is a rotation, then t could be the angle, and if T is a translation, then t could be a component of the vector of the translation in an arbitrary frame of reference. Let z_0 be a vector of Z and I = G(z_0) a generated image. Given a transformation T_T, we aim at finding z_T such that G(z_T) ≈ T_T(I), to then use the difference between z_0 and z_T in order to estimate the direction encoding the factor of variation described by T. Given an image I ∈ I, we want to determine its latent code. When no encoder is available, we can search for an approximate latent code ẑ that minimizes a reconstruction error L between I and Î = G(ẑ): ẑ = arg min_{z ∈ Z} L(I, G(z)) (Î can be seen as the projection of I onto the set of images that G can produce). Solving this problem by optimization can lead to solutions located in regions of low likelihood of the distribution of z used during training, since z follows a normal distribution at training time; this causes the reconstructed image Î = G(ẑ) to look unrealistic 1. 2.1.1 CHOICE OF THE RECONSTRUCTION ERROR L One of the important choices regarding this optimization problem is that of L.
In the literature, the most commonly used are the pixel-wise Mean Squared Error (MSE) and the pixel-wise cross-entropy as in and. However in practice, pixel-wise losses are known to produce blurry images. To address this issue, other works have proposed alternative reconstruction errors. However, they are based on an alternative neural network (; making them computationally expensive. The explanation usually given for the poor performance of pixel-wise mean square error is that it favors the solution which is the expected value of all the possibilities 2. We propose to go deeper into this explanation by studying the effect of the MSE on images in the frequency domain. In particular, our hypothesis is that due to its limited capacity and the low dimension of its latent space, the generator can not produce arbitrary texture patterns as the manifold of textures is very high dimensional. This uncertainty over texture configurations explains why textures are reconstructed as uniform regions when using pixel-wise errors. In Appendix A, by expressing the MSE in the Fourier domain and assuming that the phase of high frequencies cannot be encoded in the latent space, we show that the contribution of high frequencies in such a loss is proportional to their square magnitude pushing the optimization to solutions with less high frequencies, that is to say more blurry. In order to get sharper we therefore propose to reduce the weight of high frequencies into the penalization of errors with the following loss: where F is the Fourier transform, * is the convolution operator and σ is a Gaussian kernel. With a reduced importance given to the high frequencies to determineẑ when one uses this loss in equation 2, it allows to benefit from a larger range of possibilities for G(z), including images with more details (i.e with more high frequencies) and appropriate texture to get more realistic generated images. A qualitative comparison to some reconstruction errors and choices of σ can be found in Appendix C. We also report a quantitative comparison to other losses, based on the Learned Perceptual Image Patch Similarity (LPIPS), proposed by. Using equation 2, our problem of finding z T such that G(z T) ≈ T T (I), given transformation T T, can be solve through the following optimization problem: 1 We could have used a L 2 penalty on the norm of z to encode a centered Gaussian prior on the distribution of z. However the L 2 penalty requires an additional hyper-parameter β that can be difficult to choose. 2 Indeed, if we model the value of pixel by a random variable x then arg min. In fact, this problem can easily generalized at every pixel-wise loss if we assume that nearby pixels follows approximately the same distribution as arg min x E [L(x, x)] will have the same value for nearby pixels. Algorithm 1: Create a dataset of trajectories in the latent space which corresponds to a transformation T in the pixel space. The transformation is parametrized by a parameter δt which controls a degree of transformation. We typically use N = 10 with (δt n) (0≤n≤N) distributed regularly on the interval [0, T]. Note that z 0 and δt n are retained in D at each step to train the model of Section 2.2. Input: number of trajectories S, generator G, transformation function T, trajectories length N, threshold Θ. In practice, this problem is difficult and an "unlucky" initialization can lead to a very slow convergence. proposed to use an auxiliary network to estimate z T and use it as initialization. 
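The exact expression of the frequency-weighted loss is not reproduced above, so the following sketch encodes one plausible reading of it: a squared error on Fourier coefficients in which high frequencies are down-weighted by the transfer function of a Gaussian kernel of width σ (equivalently, an MSE between Gaussian-blurred images). The function names and the default σ are assumptions, not the paper's implementation.

```python
import torch

def gaussian_lowpass_weights(h, w, sigma=3.0):
    """Transfer function of a Gaussian kernel of std sigma (in pixels):
    close to 1 for low frequencies, close to 0 for high frequencies."""
    fy = torch.fft.fftfreq(h).reshape(-1, 1)
    fx = torch.fft.fftfreq(w).reshape(1, -1)
    return torch.exp(-2.0 * (torch.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def lowpass_recon_loss(img_a, img_b, sigma=3.0):
    """Squared error between Fourier coefficients, down-weighting high frequencies."""
    h, w = img_a.shape[-2:]
    weight = gaussian_lowpass_weights(h, w, sigma)
    diff = torch.fft.fft2(img_a) - torch.fft.fft2(img_b)
    return (weight * diff.abs() ** 2).mean()
```

Such a loss can be passed as loss_fn to the inversion sketch given earlier, so that the optimization is free to produce detailed textures without being penalized for not matching them pixel by pixel.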
Training a specific network to initialize this problem is nevertheless costly. One can easily observe that a linear combination of natural images is usually not a natural image itself, this fact highlights the highly curved nature of the manifold of natural images in pixel space. In practice, the trajectories corresponding to most transforms in pixel space may imply small gradients of the loss that slowdown the convergence of problem of Eq. (see Appendix D). To address this, we guide the optimization on the manifold by decomposing the transformation T T into smaller transformations [T δt0, . . ., T δt N] such that T δt0=0 = Id and δt N = T and solve sequentially: each time initializing z with the of the previous optimization. In comparison to , our approach does not require extra training and can thus be used directly without training a new model. We compare qualitatively our method to a naive optimization in Appendix C. A transformation on an image usually leads to undefined regions in the new image (for instance, for a translation to the right, the left hand side is undefined). Hence, we ignore the value of the undefined regions of the image to compute L. Another difficulty is that often the generative model cannot produce arbitrary images. For example a generative model trained on a given dataset is not expected to be able to produce images where the object shape position is outside of the distribution of object shape positions in the dataset. This is an issue when applying our method because as we generate images from a random start point, we have no guarantee that the transformed images is still on the data manifold. To reduce the impact of such outliers, we discard latent codes that give a reconstruction error above a threshold in the generated trajectories. In practice, we remove one tenth of the latent codes which leads to the worst reconstruction errors. It finally into Algorithm 1 to generate trajectories in the latent space. After generating trajectories with Algorithm 1, we need to define a model which describes how factors of variations are encoded in the latent space. We make the core hypothesis that the parameter t of a specific factor of variations can be predicted from the coordinate of the latent code along an axis u, thus we pose a model f: Z → R of the form t = f (z) = g(z, u), with g: R → R and ·, · the euclidean scalar product in R d. When g is a monotonic differentiable function, we can without loss of generality, suppose that u = 1 and that g is an increasing function. Under these conditions, the distribution of t = g(z, u) when z ∼ N (0, I) is given by ϕ: R → R +: For example, consider the dSprite dataset and the factor corresponding to the horizontal position of an object x in an image, we have x that follows a uniform distribution U([−0.5, 0.5]) in the dataset while the projection of z onto an axis u follows a normal distribution N. Thus, it is natural to adopt g: R → [−0.5, 0.5] and for x = g(z, u): However, in general, the distribution of the parameter t is not known. One can adopt a more general parametrized model g θ of the form: with g θ: R → R and (θ, u) trainable parameters of the model. We typically used piece-wise linear functions for g θ. However, this model cannot be trained directly as we do not have access to t (in the case of horizontal translation the x-coordinate for example) but only to the difference δt = t G(z δt) − t G(z0) between an image G(z 0) and its transformation G(z δt) (δx or δy in the case of translation). 
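Since the body of Algorithm 1 is only summarized above, the sketch below illustrates the trajectory-generation loop as described: decompose the transformation into small steps, re-optimize the latent code at each step starting from the previous solution, and discard points whose reconstruction error exceeds a threshold. The helpers invert (e.g. the inversion sketch given earlier, warm-started at z_prev) and transform are assumptions.

```python
import torch

def make_trajectories(G, invert, transform, d, num_traj=100, num_steps=10,
                      t_max=1.0, threshold=0.05):
    """Algorithm-1-style sketch: build a dataset of latent trajectories.

    G         : generator, latent (1, d) -> image
    invert    : callable (G, target, z_init) -> (z_hat, recon_error)
    transform : callable (image, t) -> transformed image (e.g. a shift by t pixels)
    Returns a list of (z_0, delta_t, z_delta_t) training triples.
    """
    deltas = torch.linspace(0.0, t_max, num_steps + 1)
    dataset = []
    for _ in range(num_traj):
        z0 = torch.randn(1, d)
        image = G(z0)
        z_prev = z0
        for dt in deltas[1:]:                          # small steps towards T_T
            target = transform(image, float(dt))
            z_hat, err = invert(G, target, z_prev)     # warm start from the previous step
            if err > threshold:                        # drop off-manifold / unrealistic points
                break
            dataset.append((z0, float(dt), z_hat))
            z_prev = z_hat
    return dataset
```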
We solve this issue by modeling δt instead of t: Hence, u and θ are estimated by training f (θ,u) to minimize the MSE between δ t and f (θ,u) (z δt) − f (θ,u) (z 0) with gradient descent on a dataset produced by Algorithm 1 for a given transformation. An interesting application of this method is the estimation of the distribution of the images generated by G by using Equation 6. With the knowledge of g θ we can also choose how to sample images. For instance, let say that we want to have t ∼ φ(t), with φ: R → R + an arbitrary distribution, we can simply transform z ∼ N as follows: with h φ: → R and ψ such that: These are interesting to bring control not only on a single output of a generative model but also on the distribution of its outputs. Moreover, since generative models reflect the datasets on which they have been trained, the knowledge of these distributions could be applied to the training dataset to reveal potential bias. Datasets: We performed experiments on two datasets. The first one is dSprites, composed of 737280 binary 64 × 64 images containing a white shape on a dark . Shapes can vary in position, scale and orientations making it ideal to study disentanglement. The second dataset is ILSVRC , containing 1.2M natural images from one thousand different categories. Implementation details: All our experiments have been implemented with TensorFlow 2.0 and the corresponding code is available at https://anonymised.for.review. We used a BigGAN model whose weights are taken from TensorFlow-Hub allowing easy reproduction of our . The BigGAN model takes two vectors as inputs: a latent vector z ∈ R 128 and a one-hot vector to condition the model to generate images from one category. The latent vector z is then split into six parts which are the inputs at different scale levels in the generator. The first part is injected at the bottom layer while next parts are used to modify the style of the generated image thanks to Conditional Batch Normalization layers (de). We also trained several β-VAEs to study the importance of disentanglement in the process of controlling generation. The exact β-VAE architecture used is given in Appendix B. The models were trained on dSprites with an Adam optimizer during 1e5 steps with a batch size of 128 images and a learning rate of 5e−4. Evaluating quantitatively the effectiveness of our method on complex datasets is intrinsically difficult as it is not always trivial to measure a factor of variation directly. We focused our analysis on two factors of variations: position and scale. On simple datasets such as dSprites, the position of the object can be estimated effectively by computing the barycenter of white pixels. However, for natural images sampled with the BigGAN model, we have to use first saliency detection on the generated image to produce a binary image from which we can extract the barycenter. For saliency detection, we used the model provided by which is implemented in the PyTorch framework . The scale is evaluated by the proportion of salient pixels. The evaluation procedure is: 1. Get the direction u which should describe the chosen factor of variation with our method. 2. Sample latent codes z from a standard normal distribution. 3. Generate images with latent code z − z, u u + tu with t ∈ [−T, T]. 4. Estimate the real value of the factor of variation for all the generated images. 5. Measure the standard deviation of this value with respect to t. proposed an alternative method for quantitative evaluation that relies on an object detector. 
Similarly to us, it allows an evaluation for x and y shift as well as scale but is restricted to image categories that can be recognized by a detector trained on some categories of ILSVRC. The proposed approach is thus more generic. We performed quantitative analysis on ten chosen categories of objects of ILSVRC, avoiding non actual objects such as "beach" or'cliff". Results are presented in Figure 2 (top). We observe that for the chosen categories of ILSVRC, we can control the position and scale of the object relatively precisely by moving along directions of the latent space found by our method. However, one can still wonder whether the directions found are independent of the category of interest. To answer this question, we merged all the datasets of trajectories into one and learned a common direction on the ing datasets. Results for the ten test categories are shown in Figure 2 (bottom). This figure shows that the directions which correspond to some factors of variations are indeed shared between all the categories. Qualitative are also presented in Figure 3 for illustrative purposes. We also checked which parts of the latent code are used to encode position and scale. Indeed, BigGAN uses hierarchical latent code which means that the latent code is split into six parts which are injected at different level of the generator. We wanted to see by which part of the latent code these directions are encoded. The squared norm of each part of the latent code is reported in Figure 4 for horizontal position, vertical position and scale. This figure shows that the directions corresponding to spatial factors of variations are mainly encoded in the first part of the latent code. However, for the y position, the contribution of level 5 is higher than for the x position and the scale. Quantitative on the ten categories of the ILSVRC dataset used for training (Top) and for ten other categories used for validation (Bottom) for three geometric transformations: horizontal and vertical translations and scaling. In blue, the distribution of the measured transformation parameter and in red the standard deviation of the distribution with respect to t. Note that for large scales the algorithm seems to fail. However, this phenomenon is very likely due to the poor performance of the saliency model when the object of interest covers almost the entire image (scale ≈ 1.0). (best seen with zoom) We suspect that it is due to correlations between the vertical position of the object in the image and its that we introduced by transforming the objects because the is not invariant by vertical translation because of the horizon. To test the effect of disentanglement on the performance of our method, we trained several β-VAE on dSprites, with different β values. Indeed, β-VAE are known for having more disentangled latent spaces as the regularization parameter β increases. Results can be seen in Figure 5. The figure shows that it is possible to control the position of the object on the image by moving in the latent space along the direction found with our method. As expected, the effectiveness of the method depends on the degree of disentanglement of the latent space since the are better with a larger β. Indeed we can see on Figure 5 that as β increases, the standard deviation decreases (red curve), allowing a more precise control of the position of the generated images. This observation motivates further the interest of disentangled representations for control on the generative process. 
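Before moving on to related work, here is a minimal sketch of the two remaining pieces used in the experiments above: fitting the direction u and the reparametrization g_θ of Section 2.2 by regressing δt on Algorithm-1 triples, and moving a latent code along u (step 3 of the evaluation procedure). For simplicity, g_θ is approximated by a small unconstrained MLP rather than the piecewise-linear, monotone function described in the paper; all names are assumptions.

```python
import torch
from torch import nn

class DirectionModel(nn.Module):
    """f(z) = g_theta(<z, u>) with u a learned (unit-normalized) direction."""
    def __init__(self, d, hidden=32):
        super().__init__()
        self.u = nn.Parameter(torch.randn(d) / d ** 0.5)
        self.g = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z):                              # z: (batch, d)
        u = self.u / self.u.norm()
        s = (z * u).sum(dim=1, keepdim=True)           # <z, u>
        return self.g(s).squeeze(1)

def fit_direction(dataset, d, epochs=50, lr=1e-3):
    """Regress delta_t = f(z_dt) - f(z_0) over Algorithm-1 triples (z_0, dt, z_dt)."""
    model = DirectionModel(d)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for z0, dt, zdt in dataset:
            opt.zero_grad()
            pred = model(zdt) - model(z0)
            loss = ((pred - torch.tensor([dt])) ** 2).mean()
            loss.backward()
            opt.step()
    return model

def move_along(z, model, t):
    """Step 3 of the evaluation: z - <z, u> u + t u, with u the learned direction."""
    u = (model.u / model.u.norm()).detach()
    return z - (z * u).sum(dim=1, keepdim=True) * u + t * u
```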
Our work aims at finding interpretable directions in the latent space of generative models to control their generative process. We distinguish two families of generative models: GAN-like models which do not provide an explicit way to get the latent representation of an image and auto-encoders which provide an encoder to get the latent representation of images. From an architectural point of view, conditional GANs allows the user to choose the category of a generated object or some chosen properties of the generated image but this approach requires a labeled dataset and use a model which is explicitly designed to allow this control. Similarly regarding identified that they suffer from a trade-off between reconstruction accuracy and sample plausibility and proposed to identify regions of the latent space that correspond to plausible samples to improve reconstruction accuracy. They also use conditional reconstruction to control the generative process. In comparison to these approaches, our method does not directly requires labels. shows that adding a code to the the input of the GAN generator and optimizing with an appropriate regularization term leads to disentangle the latent space and make possible to find a posteriori meaningfully directions. In contrast, we show that it is possible to find such directions in several generative models, without changing the learning process (our approach could even be applied to InfoGAN) and with an a priori knowledge of the factor of variation sought. More recently, analyze the activations of the network's neurons to determine those that in the presence of an object in the generated image, and thus allows to control such a presence. In contrast, our work focuses on the latent space and not on the intermediate activations inside the generator. One of our contribution and a part of our global method is a procedure to find the latent representation of an image when an encoder is not available. Several previous works have studied how to invert the generator of a GAN to find the latent code of an image. showed on simple datasets (MNIST and Omniglot ) that this inversion process can be achieved by optimizing the latent code to minimize the reconstruction error between the generated image and the target image. introduced tricks to improve the on a more challenging dataset (CelebA ). However we observed that these methods fail when applied on a more complex datasets (ILSVRC ). The reconstruction loss introduced in Section 2.1.1 is adapted to this particular problem and improves the quality of reconstructions significantly. We also theoretically justify the difficulties to invert a generative model, compared to other optimization problems. In the context of vector space arithmetic in a latent space, argues that replacing a linear interpolation by a spherical one allows to reduce the blurriness as well. This work also propose an algorithmic data augmentation, named "synthetic attribute", to generate image with less noticeable blur with a VAE. In contrast, we act directly on the loss. The closest works were released on ArXiv very recently indicating that finding interpretable directions in the latent space of generative models to control their output is of high interest for the community. In these papers, the authors describe a method to find interpretable directions in the latent space of the BigGAN model . If their method exhibits similarities with ours (use of transformation, linear trajectories in the latent space), it also differs on several points. 
From a technical point of view our training procedure differs in the sense that we first generate a dataset of interesting trajectories to then train our model while they train their model directly. Our evaluation procedure is also more general as we use a saliency model instead of a MobileNet-SSD v1 trained on specific categories of the ILSVRC dataset allowing us to measure performance on more categories. We provide additional insight on how auto-encoders can also be controlled with the method, the impact of disentangled representations on the control and on the structure of the latent space of BigGAN. Moreover we also propose an alternative reconstruction error to invert generators. However, the main difference we identify between the two works is the model of the latent space used. Our model allows a more precise control over the generative process and can be being adapted to more cases. Generative models are increasingly more powerful but suffer from little control over the generative process and the lack of interpretability in their latent representations. In this context, we propose a method to extract meaningful directions in the latent space of such models and use them to control precisely some properties of the generated images. We show that a linear subspace of the latent space of BigGAN can be interpreted in term of intuitive factors of variation (namely translation and scale). It is an important step toward the understanding of the representations learned by generative models. In Section 2.1, we consider a target image I ∈ I and a generated imageÎ = G(ẑ) to be determined according to a reconstruction loss L (Equation 1). Let us note F{·} the Fourier transform. If L is the usual MSE, from the Plancherel theorem, we have ||Î − I|| 2 = ||F{Î} − F{I}|| 2. Let us consider a particular frequency ω in the Fourier space and compute its contribution to the loss. The Fourier transform of I (resp. Î) having a magnitude r (resp.r) and a phase θ (resp.θ) at ω, we have: If we model the disability of the generator to model every high frequency patterns as an uncertainty on the phase of high frequency of the generated image, i.e by posingθ ∼ U([0, 2π]), the expected value of the high frequency contributions to the loss is equal to: The term r 2 is a constant w.r.t the optimization of L and can thus be ignored. The contribution to the total loss L thus directly depends onr 2. While minimizing L, the optimization process tends to favour imagesÎ = G(ẑ) with smaller magnitudes in the high frequencies, that is to say smoother images, with less high frequencies. B β-VAE ARCHITECTURE The β-VAE framework was introduced by to discover interpretable factorised latent representations for images without supervision. For our experiments, we designed a simple convolutional VAE architecture to generate images of size 64x64, the decoder network is the opposite of the encoder with transposed convolutions. 
Encoder: Convolution + ReLU (filters=32, size=4, stride=2, pad=SAME) ×4 → Dense + ReLU (units=256) → Dense + ReLU (units=256) → µ: Dense + Identity (units=10), σ: Dense + Exponential (units=10). Decoder: Dense + ReLU (units=256) → Dense + ReLU (units=256) → Reshape (shape=4x4x32) → Transposed Convolution + ReLU (filters=32, size=4, stride=2, pad=SAME) ×3 → Transposed Convolution + Sigmoid (filters=1, size=4, stride=2, pad=SAME). Figure 7 (caption): reconstructions obtained with MSE, DSSIM and our loss, with or without the constraint on ||z||; note the artifacts when using our loss without constraining z (best seen with zoom). On Fig. 6 we show qualitative reconstructions with our method (Eq. 3) for several values of σ. On this representative example, we observe quite good results with σ = 3 and σ = 5. Higher values also penalize lower frequencies, which leads to a less accurate reconstruction. We also illustrate in Fig. 7 a comparison of our approach to two others, namely the classical Mean Square Error (MSE) and the Structural dissimilarity (DSSIM) proposed by . Results are also presented with an unconstrained latent code during optimization (Eq. 1) and with the proposed constrained approach (Eq. 2). This example shows the accuracy of the reconstruction obtained with our approach, as well as the fact that the restriction of z to a ball of radius √d avoids the presence of artifacts. We also performed a quantitative evaluation of the performance of our approach. We randomly selected one image for each of the 1000 categories of the ILSVRC dataset and reconstructed it with our method with a budget of 3000 iterations. We then computed the Learned Perceptual Image Patch Similarity (LPIPS), proposed by , between the final reconstruction and the target image. We used the official implementation of the LPIPS paper with default parameters. Results are reported in Table 2. They suggest that images reconstructed using our reconstruction error are perceptually closer to the target image than those obtained with MSE or DSSIM. The curvature of the natural image manifold makes the optimisation problem of Equation 2 difficult to solve. This is especially true for factors of variation which correspond to curved walks in pixel-space (for example translation or rotation, as opposed to brightness or contrast changes, which are linear). To illustrate this fact, we show that the trajectory described by an image undergoing common transformations is curved in pixel space. We consider three types of transformations, namely translation, rotation and scaling, and get images from the dSprites dataset which correspond to the progressive transformation (interpolation) of an image. To visualize, we compute the PCA of the resulting trajectories and plot the trajectories on the two main axes of the PCA. The results of this experiment can be seen in Figure 8. In this figure, we can see that for large translations, the direction of the shortest path between two images in pixel-space is near orthogonal to the manifold. The same problem occurs for rotation and, to a smaller extent, for scale. However, this problem does not exist for brightness, for example, as its change is a linear transformation in pixel-space. This is problematic during optimization of the latent code because the gradient of the reconstruction loss with respect to the generated image is tangent to this direction.
Thus, when we are in the case of near orthogonality, the gradient of the error with respect to the latent code is small. Indeed, let us consider an ideal case where G is a bijection between Z and the manifold of natural images. Let be z ∈ Z, a basis of vectors tangent to the manifold at point G(z) is given by Thus, It shows that when the direction of descent in pixel space is near orthogonal to the manifold described by the generative model, optimization gets slowed down and can stop if the gradient of the loss with respect to the generated image is orthogonal to the manifold. For example, let assume we have an ideal GAN which generates a small white circle on a black , with a latent space of dimension 2 that encodes the position of the circle. Let consider a generated image with the circle on the left of the image and we want to move it to the right. Obviously, we thus have ∇ z ||G(z) − T T (G(z 1))|| 2 = 0 if the intersection of the two circles is empty (see Figure 8) since a small translation of the object does not change the reconstruction error. E ADDITIONAL QUALITATIVE EXAMPLES Figure 9: Qualitative for 10 categories of ILSVRC dataset for three geometric transformations (horizontal and vertical translations and scaling) and for brightness. We show qualitative examples for images generated with the BigGAN model for position, scale and brightness. The images latent codes are sampled in the following way: z − z, u u + αu with α ∈ [−3, 3] and u the learned direction. We have chosen the categories to produce interesting : for position and scale categories are objects, for brightness categories are likely to be seen in a bright or dark environment. Notice that for some of the chosen categories, we failed to control the brightness of the image. It is likely due to the absence of dark images for these categories in the training data. for position and scale, the direction is learned on the ten categories presented here while for brightness only the five top categories are used.
A model to control the generation of images with GAN and beta-VAE with regard to scale and position of the objects
710
scitldr
Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. Learnable K-matrices can also simplify hand-engineered pipelines---we replace filter bank feature computation in speech data preprocessing with a kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. K-matrices can also capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.
(iii) The parameterization should admit practically efficient algorithms for training and inference, in terms of both speed and memory. Currently, no class of structured linear maps satisfies all of these criteria. Most existing classes of structured matrices-such as the class of low-rank matrices-fail to tightly capture other important types of structure. For example, the DFT has an efficient structured representation of size O(n log n), yet cannot be well-approximated by a low-rank transform of size n 2. Sparsity is another important type of structure; lots of exciting recent work has focused on the design of sparse neural networks. For instance, sparse networks of comparable quality to their dense counterparts-yet an order of magnitude fewer parameters-may be created via pruning or by identifying "winning lottery tickets" . In parallel, recent theoretical by show that sparsity and the notion of structure in linear maps are fundamentally linked: any given matrix can be factored into a product of sparse matrices with total parameter count equal to the efficiency (i.e. minimum arithmetic circuit complexity) of the matrix. In other words, the representation of linear maps as products of sparse matrices tightly captures all forms of structure. Unfortunately, actually learning sparse representations is difficult, because it requires finding the matrices' sparsity patterns-a discrete, nondifferentiable search problem. So, current methods for training sparse neural networks are either expensive , or rely on highly handtuned heuristics for evolving the sparsity patterns throughout training . By contrast, we propose a representation of linear maps as products of sparse matrices with specific predefined sparsity patterns (Section 2), and show that it does satisfy our desiderata: it retains the expressiveness of unstructured sparsity, while being differentiably learnable and efficient like other structured representations. Concretely, our representation is based on products of a particular building block known as a butterfly matrix ; we term such products kaleidoscope matrices (K-matrices for short). 1 (i) Our main theoretical contribution (Section 2.3) concerns the expressiveness of this representation: we show that any structured linear map (i.e. one that can be applied using s n 2 arithmetic operations) can be represented as a K-matrix, with a nearly tight number of parameters and algorithmic complexity (both on the order of s up to logarithmic factors). (ii) The kaleidoscope representation is fully differentiable; thus, all the parameters of a K-matrix can be learned using standard optimization algorithms such as SGD. (iii) Because of their simple, regular structure, K-matrices are practical and easy to use. We provide memory-and runtime-efficient implementations of K-matrix multiplication on CPU and GPU for training and inference, with a simple PyTorch interface. We empirically validate that, due to their expressiveness, learnability, and efficiency, we can use K-matrices as a drop-in replacement for linear components in deep learning models. In Section 3.1, we use K-matrices to replace hand-crafted structure in two different settings. We simplify the six steps of filter bank computation in speech preprocessing into a single learnable K-matrix step, with only an 0.4% accuracy drop on the TIMIT speech recognition task. We use K-matrices to replace channel shuffles in ShuffleNet, improving ImageNet classification accuracy by up to 5%. 
In Section 3.2, we show that K-matrices can successfully recover latent structure; a K-matrix is used to learn latent permutations in a permuted image dataset (Permuted CIFAR), resulting in 9 points higher accuracy in a downstream CNN model. In Section 3.3, we show that our efficient K-matrix multiplication implementation can be applied to speed up real-world tasks: we replace linear layers with K-matrices in a DynamicConv-Transformer network to attain 36% faster end-to-end inference speed with a 1.0 drop in BLEU score on the IWSLT14 German→English translation task. We first present some results on the characterization of all structured matrices (i.e. those with subquadratic multiplication algorithms) as products of sparse factors, along with the definition of butterfly matrices. We then propose a differentiable family of kaleidoscope matrices, composed of products of butterfly matrices, and prove their expressivity: all structured matrices can be represented in this form, with almost optimal parameter count and runtime. Sparse factorization One method of constructing matrices with theoretically fast matrix-vector multiplication algorithms is as a product of sparse matrices, so that multiplication by an arbitrary vector has cost proportional to the total number of nonzeros (NNZ) of the matrices in the product. Surprisingly, the converse is also true. introduce the concept of sparse product width (SPW), which roughly corresponds to the total NNZ in a factorization of a matrix, and show that it is an asymptotically optimal descriptor of the algorithmic complexity of matrix-vector multiplication (Bürgisser et al., 2013). We use a similar argument in the proof of our main theorem (Section 2.3). However, attempting to learn such a factorization of a given matrix is difficult, as the sparsity constraint is non-continuous. Moreover, because of the possibly irregular sparsity patterns, it is difficult to realize the theoretical speedups in practice . Butterfly matrices Butterfly matrices, encoding the recursive divide-and-conquer structure of the fast Fourier transform (FFT) algorithm, have long been used in numerical linear algebra and machine learning (; ; ; ;). Here we define butterfly matrices, which we use as a building block for our hierarchy of kaleidoscope matrices. Definition 2.1. A butterfly factor of size k ≥ 2 (denoted as B_k) is a matrix of the form B_k = [D_1 D_2; D_3 D_4], where each D_i is a (k/2) × (k/2) diagonal matrix. We restrict k to be a power of 2. Definition 2.2. A butterfly factor matrix of size n with block size k (denoted as B_k^(n)) is a block diagonal matrix of n/k (possibly different) butterfly factors of size k: B_k^(n) = diag([B_k]_1, [B_k]_2, ..., [B_k]_{n/k}). Definition 2.3. A butterfly matrix of size n (denoted as B^(n)) is a matrix that can be expressed as a product of butterfly factor matrices: B^(n) = B_n^(n) B_{n/2}^(n) ⋯ B_2^(n). Equivalently, we may define B^(n) recursively as a matrix that can be expressed in the form B^(n) = B_n^(n) diag([B_1]^(n/2), [B_2]^(n/2)), where [B_1]^(n/2) and [B_2]^(n/2) are butterfly matrices of size n/2 (which may be different). Using the building block of butterfly matrices, we formally define the kaleidoscope (BB*) hierarchy and prove its expressiveness. This serves as a fully differentiable alternative to products of sparse matrices (Section 2.1), with similar expressivity. In Appendix J, we show where various common structured matrix classes are located within this hierarchy. The building block for this hierarchy is the product of a butterfly matrix and the (conjugate) transpose of another butterfly matrix (which is simply a product of butterfly factors taken in the opposite order).
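To make Definitions 2.1-2.3 concrete, the sketch below applies a butterfly matrix to a vector by applying one butterfly factor matrix per block size k = 2, 4, ..., n; each factor costs O(n), giving an O(n log n) multiplication overall. The storage of the diagonals as NumPy arrays is an illustrative choice, not the paper's implementation.

```python
import numpy as np

def apply_butterfly_factor_matrix(diags, x, k):
    """Apply a butterfly factor matrix of block size k to a length-n vector x.

    diags[b] holds the four length-(k//2) diagonals (D1, D2, D3, D4) of the
    b-th butterfly factor [[D1, D2], [D3, D4]] on the block diagonal.
    """
    n, h = x.shape[0], k // 2
    out = np.empty_like(x)
    for b in range(n // k):
        d1, d2, d3, d4 = diags[b]
        top, bot = x[b * k: b * k + h], x[b * k + h: (b + 1) * k]
        out[b * k: b * k + h] = d1 * top + d2 * bot
        out[b * k + h: (b + 1) * k] = d3 * top + d4 * bot
    return out

def apply_butterfly(factors_by_k, x):
    """Apply B^(n) = B_n^(n) ... B_2^(n) to x: the k = 2 factor acts on x first."""
    y, k = x.copy(), 2
    while k <= x.shape[0]:
        y = apply_butterfly_factor_matrix(factors_by_k[k], y, k)
        k *= 2
    return y

# Example: a random butterfly matrix of size n = 8 applied to a random vector.
n = 8
rng = np.random.default_rng(0)
factors = {k: [tuple(rng.standard_normal(k // 2) for _ in range(4))
               for _ in range(n // k)]
           for k in (2, 4, 8)}
print(apply_butterfly(factors, rng.standard_normal(n)))
```

Because every entry of the vector is touched exactly once per factor, the log2(n) factors together give the near-linear multiplication cost that the kaleidoscope representation inherits.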
Figure 1 visualizes the sparsity patterns of the butterfly factors in BB*, where the red and blue dots represent the allowed locations of nonzero entries. Definition 2.4 (Kaleidoscope hierarchy, kaleidoscope matrices). • Define B as the set of all matrices that can be expressed in the form B^(n) (for some n). • Define BB* as the set of matrices M of the form M = M_1 M_2^* for some M_1, M_2 ∈ B. • Define (BB*)^w as the set of matrices that can be expressed as a product of w matrices from BB* (w is the width of the representation). We now present our main theoretical result: the fact that general linear transformations, expressed as low-depth linear arithmetic circuits, are captured in the BB* hierarchy with low width. Arithmetic circuits are commonly used to formalize algebraic algorithmic complexity (Bürgisser et al., 2013); we include a primer on this in Appendix M. The quantities of interest are the total number of gates in the circuit, representing the total number of steps required to perform the algorithm for a serial processor, and the depth, representing the minimum number of steps required for a parallel processor. Theorem 1. Let M be an n × n matrix such that multiplication of M times an arbitrary vector v can be represented as a linear arithmetic circuit with s total gates and depth d. The representation of such a matrix M in the BB* hierarchy has O(ds log s) parameters and yields an O(ds log s) multiplication algorithm, compared to the O(s) parameters and runtime of the circuit representation. To the best of our knowledge, the most general classes of efficient matrices that have been studied have depth d on the order of log n or poly log n. In these cases, the representation with K-matrices matches the best known bounds up to polylogarithmic factors. The crux of the proof of Theorem 1 (shown in Appendix F) is an almost tight representation of any sparse matrix as a K-matrix (i.e. a product of butterfly matrices): any n × n sparse matrix with s NNZ can be represented as a K-matrix with O(s log n) parameters (Theorem 3, Appendix I). We then leverage the expressivity of products of sparse matrices to represent all arithmetic circuits (similar to the sparse product width of in Section 2.1) to complete the proof of Theorem 1. This intermediate result is also a novel characterization of sparse matrices, to the best of our knowledge. For a matrix with s NNZ, the kaleidoscope representation has O(s log n) parameters and runtime, instead of the optimal O(s) parameters and runtime. We trade off an extra logarithmic factor in space and time for full differentiability (thanks to the fixed sparsity patterns in the representation). The intuition behind this is as follows: a sparse matrix with s NNZ can be written as a sum of s/n matrices, each with at most n NNZ. Any n × n matrix with at most n NNZ, up to permuting the rows and columns, is a product of two butterfly matrices (Lemma I.1). Sorting networks imply that permutation matrices are in (BB*)^O(log n), but we tighten the result to show that they are in fact in BB* (Theorem 2, Appendix G). We thus obtain a kaleidoscope representation for each summand matrix with O(n log n) parameters. By the addition closure property of the BB* hierarchy (Lemma H.5), each sparse matrix with s NNZ then has a kaleidoscope representation with O(s log n) parameters. Tight representation for structured linear maps common in ML. Even though Theorem 1 suggests that the kaleidoscope representation can be loose by logarithmic factors, many structured linear maps common in ML can be represented in this hierarchy with an optimal number of parameters and runtime compared to best known parameterizations, up to constant factors.
Appendix J includes several examples such as discrete transforms (the DFT, discrete cosine transform (DCT), discrete sine transform (DST), Hadamard transform), convolution (i.e. circulant matrix), Toeplitz matrices , structured matrices for kernel approximation ((HD) 3 ) and compact neural network design (Fastfood , ACDC ). There have been other large classes structured matrices proposed in the machine learning literature, such as Toeplitz-like or low displacement rank (LDR) , but to the best of our knowledge, they are not able to capture these common structures as tightly as K-matrices. More detailed discussions are in Appendix A. ReLU networks with low-depth structured weight matrices In Appendix L, we prove that finding an efficient circuit for a ReLU network can be reduced to finding efficient circuits for each of its weight matrices, with at most a constant factor greater size and run-time (i.e. number of gates). We also show that ReLU networks with kaleidoscope weight matrices have VC dimension near-linear in the number of parameters, matching the bound for networks with unconstrained weight matrices and LDR . This yields a corresponding sample complexity bound. Orthogonal kaleidoscope hierarchy Orthogonal butterfly matrices are one commonly used variant due to their improved stability , where each butterfly factor is constrained to be orthogonal: C S −S C with C, S being diagonal and C 2 + S 2 = I. Similar to the BB * hierarchy, in Appendix K, we define the OBB hierarchy consisting of products of orthogonal butterfly matrices and diagonal matrices, and show that this hierarchy has the same expressiveness as the BB * hierarchy. We validate three claims that suggest that kaleidoscopes are a promising technique to learn different types of structure in modern architectures. 1. Section 3.1: for applications in speech and lightweight computer vision relying on highly hand-crafted structured transformations, we show that we can recover (and even improve) the quality of such architectures by simply replacing existing hand-structured components with K-matrices, with only a small overhead in memory and computation. 2. In Section 3.2, for a challenging task with latent structure (Permuted CIFAR-10), a K-matrixbased relaxation of permutations is able to learn the right latent permutation, yielding 9 points better accuracy in a downstream CNN compared to standard RNN and CNN baselines used on such permuted image classification tasks. 3. In Section 3.3, we show that, although not yet highly optimized, our current implementation of K-matrices can improve the inference throughput of DynamicConv Transformer, a state-ofthe-art fast machine translation model, by 36%, with only a relatively small drop in translation quality. In all of the above applications, as K-matrices are fully differentiable, we simply train them jointly with the rest of the model using standard learning algorithms (such as SGD). Full details for all of the experiments (precise architectures, hyperparameters, etc.) are in Appendix B. We validate that kaleidoscope matrices can recover or improve on the performance of hand-crafted structure in ML models. For example, a single learnable kaleidoscope layer can be used to replace the hand-engineered filter bank speech preprocessing pipeline with only 0.4% loss in accuracy on the TIMIT speech recognition task (Section 3.1.1). Replacing channel shuffles in ShuffleNet with learnable K-matrices improves classification accuracy on ImageNet by up to 5.0% (Section 3.1.2). 
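As a rough illustration of how a kaleidoscope layer can be dropped into a model, the sketch below parameterizes a single (real-valued) BB* block as the product of a learnable butterfly matrix and the transpose of another, with each butterfly stored as log2(n) sets of learnable diagonals. It is written for clarity, not speed; the practically efficient C++/CUDA kernels mentioned in the paper are not reproduced here, and all class and argument names are assumptions.

```python
import math
import torch
from torch import nn

class Butterfly(nn.Module):
    """A butterfly matrix of size n stored as log2(n) factor matrices of diagonals."""
    def __init__(self, n):
        super().__init__()
        assert n & (n - 1) == 0, "n must be a power of 2"
        self.n = n
        # For each block size k = 2, 4, ..., n: the concatenated D1..D4 diagonals
        # of all n/k butterfly factors (4 rows of n/2 values per factor matrix).
        self.diags = nn.ParameterList([
            nn.Parameter(torch.randn(4, n // 2) / math.sqrt(2))
            for _ in range(int(math.log2(n)))
        ])

    def forward(self, x, transpose=False):              # x: (batch, n)
        levels = range(len(self.diags))
        for i in (reversed(levels) if transpose else levels):
            k = 2 ** (i + 1)
            d = self.diags[i].view(4, self.n // k, k // 2)
            xb = x.view(-1, self.n // k, k)
            top, bot = xb[..., : k // 2], xb[..., k // 2:]
            if not transpose:                            # factor [[D1, D2], [D3, D4]]
                new_top = d[0] * top + d[1] * bot
                new_bot = d[2] * top + d[3] * bot
            else:                                        # factor transpose [[D1, D3], [D2, D4]]
                new_top = d[0] * top + d[2] * bot
                new_bot = d[1] * top + d[3] * bot
            x = torch.cat([new_top, new_bot], dim=-1).view(-1, self.n)
        return x

class KaleidoscopeLayer(nn.Module):
    """One BB* block: y = B1 (B2^T x), with B1, B2 learnable butterflies."""
    def __init__(self, n):
        super().__init__()
        self.b1, self.b2 = Butterfly(n), Butterfly(n)

    def forward(self, x):
        return self.b1(self.b2(x, transpose=True))
```

Since every factor is dense in its allowed positions, there is no discrete sparsity pattern to search over: the layer is trained with ordinary SGD/Adam like any other linear module.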
We show that K-matrices can remove the need for hand-tuning by significantly simplifying speech recognition data preprocessing pipelines. In particular, we can entirely replace the complex handcrafted MFSC featurization commonly used in speech recognition tasks with a fully learnable kaleidoscope layer, with only 0.4% drop in accuracy on the TIMIT speech recognition benchmark (Figure 2: comparison of the standard MFSC featurization pipeline with our "kaleidoscope" pipeline). Results are presented in Table 1. Our approach is competitive with the accuracy of standard models that use hand-crafted features, and significantly outperforms current approaches for learning from raw audio input. Modern speech recognition models currently rely on carefully hand-crafted features extracted from the audio, which are then fed into an acoustic model. By contrast, learning directly from the raw audio, i.e. end-to-end learning from the audio waveform without any manual featurization, obviates the need for this complicated and often expensive preprocessing step. There have been recent attempts to learn directly from raw audio, such as SincNet ; however, they often rely on specialized architectures designed by domain experts. Instead, we use a standard RNN speech recognition architecture, but use a learnable kaleidoscope layer to replace the featurization steps. The baseline architecture takes as input filter bank (MFSC) features, which are a popular standard featurization for speech recognition and involve several steps hand-crafted specifically for this domain. These features are extracted from the raw audio waveform, and fed as the input into a Bi-LSTM model. We significantly simplify this pipeline by replacing the featurization step with a trainable kaleidoscope layer that is trained end-to-end together with the Bi-LSTM. The original pipeline and our modified kaleidoscope version are depicted in Figure 2. The computation of MFSC features involves a series of painstakingly hand-designed steps (further described in Appendix B.1), each involving their own hyperparameters: (i) the waveform is framed (split into chunks), (ii) dithering, (iii) pre-emphasis, (iv) the Hamming window is applied, (v) the FFT is applied and the power spectrum is computed, (vi) the result is mapped to the mel scale (which involves applying a linear transformation and then taking the logarithm), (vii) cepstral mean and variance normalization is applied. We replace the last six steps (ii-vii) of this featurization with a kaleidoscope layer; specifically, after windowing, we multiply the input by a K-matrix, and then compute the logarithm of the power spectrum; the output is fed into the Bi-LSTM model. We evaluate how K-matrices can improve the quality of hand-crafted, lightweight architectures for computer vision tasks, without the need for hand-tuning. We select ShuffleNet , which is a state-of-the-art lightweight CNN architecture that uses a manually designed "channel shuffle" permutation matrix to improve performance. By replacing this fixed permutation with a learnable K-matrix, we achieve up to 5% further improvement in classification accuracy, without hand-tuned components and with a modest space penalty of up to 10%. Results are given in Table 2. Grouped convolution is often used to reduce parameter count and speed up inference compared to standard convolution, but by default, channels in different groups cannot exchange information.
To remedy this, ShuffleNet uses a permutation matrix to shuffle the channels after each grouped convolution. propose to instead use the Hadamard transform before and after each grouped convolution to mix the channels. In place of these hand-engineered solutions, we use a K-matrix before and after each grouped convolution, and learn these end-to-end together with the rest of the network. As shown in Table 2, across a range of sizes, replacing the channel shuffles with K-matrices in improved performance at comparable parameter counts. We show that K-matrices can be used in a challenging task for which existing classes of structured linear maps have not been found suitable. We investigate the problem of image classification on a permuted image dataset (Permuted CIFAR-10). This problem is challenging due to the discrete nature of learning the latent permutation of the dataset; we present a differentiable relaxation for this using a K-matrix as a key component. Results are presented in Table 3; compared to methods that do not have a permutation learning step, our approach gets 9 points higher accuracy (84.4% to 93.6%), coming within 2 points of the accuracy on the un-permuted dataset (94.9%). In this task, we use a permuted image classification dataset (Permuted CIFAR-10), wherein a fixed global permutation is applied to the pixels of every image in the original input set. Typically, only fully-connected (FC) and recurrent models are applied to such datasets , because the permutation destroys locality in the image, presenting a difficulty for CNNs. However, CNNs are much better-suited for standard image tasks. We thus expect that learning the permutation and then applying a standard CNN should outperform these baselines. As mentioned in Section 2, the kaleidoscope hierarchy provides a nearly tight parameterization of permutations; this makes them a natural fit for the permutation learning step. Experimentally, a K-matrix is used to represent a distribution over permutations, which converges to a single permutation at the end of training. The correct latent structure is learned by applying samples from this distribution to the permuted training images, and minimizing an auxiliary smoothness-based loss that encourages the reconstructed images to be more "natural" (i.e. vary smoothly pixel-to-pixel). The learned permutation is evaluated by training a ResNet18 with the K-matrix permutation layer inserted at the beginning. Full details of our approach are provided in Appendix B.3. In Table 3, we compare our approach to a ResNet18 without this extra K-matrix layer, a ResNet18 with an extra dense matrix at the beginning instead of a K-matrix, and other baselines. As generic representations such as unstructured matrices do not have the requisite properties to fit in the pipeline, these baselines fail to effectively learn the latent permutation. We emphasize that a K-matrix provides this ability to recover latent structure despite not being specialized for permutations. Figure 3 describes the pipeline and displays examples of permuted and unpermuted images. Figure 3: (a) (Left) Schematic describing permutation learning approach. The inputs are multiplied by a K-matrix and then fed into a CNN, from which the classification loss is computed. Separately, the input is permuted by a permutation matrix sampled from the distribution described by the Kmatrix, and a "smoothness" loss is computed from the , as described in Appendix B.3. (b) (Right) Left panel: original (unpermuted) images. 
Center panel: the permuted versions. Right panel: these images after then applying the permutation recovered by the K-matrix. The K-matrix is able to nearly unscramble the images into their unpermuted versions. We evaluate the inference speed benefit of using K-matrices on a real language translation model. We choose the state-of-the-art DynamicConv Transformer translation model , which has shown 20% inference speedup over the standard Transformer model, and replace dense matrices in the decoder's linear layers with K-matrices, which leads to a further 36% inference speedup (Table 4). As outlined in Section 2.3, K-matrices admit a simple and fast O(n log n) matrix-vector multiplication algorithm. We provide fast implementations of this algorithm in C++ and CUDA, with an interface to PyTorch , and use this implementation in our experiments. We use K-matrices to replace all the linear layers in the decoder of DynamicConv (since 90% of inference time is spent in the decoder). As shown in Table 4, on the IWSLT-14 German-English translation task, this yields a 25% smaller model with 36% faster inference time on CPU, at the cost of 1.0 drop in BLEU score 4 (nearly matching SOTA performance of 2 years ago ). The majority (55%) of inference time is spent in matrix-vector multiplication; our implementation of K-matrix-vector multiplication is about 2x faster than the optimized implementation of dense matrix-vector multiplication in the Intel MKL library. Direct comparisons of K-matrix multiplication with this and other highly-optimized routines such as the FFT are further detailed in Appendix C. We address the problem of having to manually choose among the numerous classes of structured linear maps by proposing the universal (expressive, efficient, and learnable) family of kaleidoscope matrices. We prove that K-matrices can represent any structured linear maps with near-optimal space and time complexity. Empirical validations suggest that K-matrices are a promising way to employ structure in modern ML; they can be used to reduce the need for hand-engineering, capture challenging latent structure, and improve efficiency in models. We are excited about future work on further hardware-optimized implementations of K-matrices, to fully realize the size and speed benefits of structured matrices on a broad array of real-world applications. Structured linear maps such as the DFT, the Hadamard transform and convolution are a workhorse of machine learning, with diverse applications ranging from data preprocessing, random projection, featurization, to model compression. For example, the DFT is a crucial step in the standard filter bank speech preprocessing pipeline . Fast random projection and kernel approximation methods rely on the fast Hadamard transform and convolution . Large learnable classes of structured matrices such as Toeplitz-like matrices and low-displacement rank (LDR) matrices have been used for model compression. However, despite their theoretical speedup, they lack efficient implementations, especially on GPUs. Therefore their use has been confined to small models (e.g. single hidden layer neural nets) and small datasets (e.g. CIFAR-10). Several classes of structured linear transforms are ubiquitous in modern deep learning architectures; particularly widespread examples include convolution and multiheaded attention. Recently, attempts to impose sparsity on the neural network weights have been gaining traction. 
State-of-the art approaches of this type typically accomplish this by pruning dense weights (either gradually during training , or post-training ) or by training a dense network and then identifying "winning lottery tickets" -sparse subnetworks which may then be retrained from scratch with appropriate initialization . Importantly, these approaches start from a dense network, and therefore training is expensive. There is also a more nascent line of work that aims to train unstructured sparse neural networks directly (; a;). These approaches maintain a constant network sparsity level throughout training, and use heuristics to evolve the sparsity pattern during training. One drawback is that the indices of the nonzero entries need to be stored in addition to the entry values themselves, which increases the memory required to store the sparse weight tensors. Another drawback is that these approaches to learn the sparsity pattern are based on intricate heuristics, which can be brittle. We note that these heuristic sparsification techniques could potentially be combined with our approach, to further sparsify the K-matrix factors. Numerous works focus on the problem of speech recognition from raw audio input, i.e. without manual featurization. SincNet is a CNN-based architecture parameterized with sinc functions such that the first convolutional layer imitates a band-pass filter. formulate a learnable version of a filter bank featurization; their filters are initialized as an approximation of MFSC features and then fine-tuned jointly with the rest of the model. proposed a powerful combined convolutional LSTM (CLDNN)-based model for learning from raw audio, using a large amount of training data. The WaveNet generative architecture (van den), based on dilated convolutions, has been adapted to speech recognition and can be trained on raw audio. Some other approaches that can learn from raw audio can be found in (; ;). To our knowledge, the 14.6% PER achieved by our kaleidoscope + LSTM model on the TIMIT test set is the lowest error rate obtained by a model trained directly on the raw audio. Permutation matrices find use in tasks such as matching and sorting. Techniques to obtain posterior distribution over permutations have been developed, such as the exponential weights algorithm and the Gumbel-Sinkhorn network . Classifying images with permuted pixels has been a standard task to benchmark the ability of RNNs to learn long range dependency. propose Permuted MNIST task where the model has to classify digit images with all the pixels permuted. Many new RNN architectures, with unitary or orthogonal weight matrices to avoid gradient explosion or vanishing, have been proposed and tested on this task (; ; ; ;). Standard gated RNN architectures such as LSTM and GRU have also been found to be competitive with these new RNN architectures Our baseline Bi-LSTM architecture is taken from the PyTorch-Kaldi repository. 5 This is a strong baseline model that, to the best of our knowledge, matches state-of-the-art performance for models that use a single type of input featurization . The original Bi-LSTM model takes as input filter bank features. 
These are computed as follows: (i) the waveform is framed (split into chunks of 25 ms each that overlap by 10 ms), (ii) the waveform is dithered (zero-mean Gaussian random noise is added), (iii) pre-emphasis is applied to amplify high frequencies, (iv) the Hamming window function is applied, (v) the FFT is applied, and the power spectrum of the resulting (complex-valued) output is computed, (vi) the power spectrum (which has dimension 512) is mapped to the "mel scale" (a scale intended to mimic human auditory perception) by multiplication with a specific banded matrix of dimension 512 × 23, and the entrywise logarithm of the output is taken (the 23 outputs are called the filters), and (vii) cepstral mean and variance normalization is applied. Numerical hyperparameters of this procedure include the dither noise scale, the pre-emphasis coefficient, the Hamming window size, the number of mel filters, and more; we kept all these the same as the Kaldi/PyTorch-Kaldi defaults. In contrast, our version of the model takes as input the raw waveform, split into chunks the same way as before but with no normalization, dithering, or other preprocessing, which is then fed into a complex-valued kaleidoscope matrix in (BB*)^2. Similarly to the nonlinear steps in computing filter bank features, the logarithm of the power spectrum of the output (which has dimension 512) is then computed. This output is fed into the Bi-LSTM; the Bi-LSTM and kaleidoscope layer are trained together in standard end-to-end fashion. The Bi-LSTM architecture is not modified aside from changing the input dimension from 23 to 512; this (along with the ≈ 75K parameters in the kaleidoscope layer itself) results in approximately a 1.1M increase in the total number of parameters compared to the model that takes in MFSC features (a modest 8% relative increase). Total training time for our kaleidoscope-based architecture is 7% greater than that required for the model that uses MFSC features, not counting the time required to precompute the MFSC features; the inference-time FLOPs are approximately 15% greater (mostly due to the larger dimension of the input to the Bi-LSTM; the kaleidoscope layer accounts for less than 0.5% of the total FLOPs). As baselines, we also compare to inserting other types of linear transformations before the Bi-LSTM: fixed linear transformations (such as the fixed FFT, or no transform at all [the identity]), trainable structured layers (low-rank, sparse, and circulant), and a trainable unstructured (dense) linear layer. The kaleidoscope layer performs the best out of all such approaches. Full results are given in Table 5. In our experiments, we grid search the initial learning rate for the "preprocessing layer" (if applicable) in {5e-5, 1e-4, 2e-4, 4e-4, 8e-4, 1.6e-3}, and fix all other hyperparameters (including the initial learning rates for the other parts of the network) to their default values in the PyTorch-Kaldi repository. The model and any preprocessing layers are trained end-to-end with the RMSProp optimizer for 24 epochs (as per the defaults in PyTorch-Kaldi). For each model, we use the validation set to select the best preprocessing learning rate, and the final error rates are reported on the separate held-out test set.
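For concreteness, the sketch below shows the shape of such a learnable preprocessing layer: frame the raw waveform, apply a trainable linear transform initialized at the DFT, and take the log power spectrum. It is a minimal PyTorch sketch under our own naming; a dense complex matrix stands in for the kaleidoscope (BB*)^2 matrix, which in the actual model keeps the parameter count and multiplication cost near O(n log n).

```python
import torch
import torch.nn as nn

class LearnableSpectrogram(nn.Module):
    """Learnable stand-in for the FFT step of filter bank featurization:
    a trainable linear map on 512-sample raw-audio frames, followed by a
    log power spectrum (the nonlinearity used for MFSC features)."""
    def __init__(self, frame_size: int = 512):
        super().__init__()
        # Initialize at the DFT so the layer starts out mimicking the FFT step.
        dft = torch.fft.fft(torch.eye(frame_size))
        self.w_real = nn.Parameter(dft.real.clone())
        self.w_imag = nn.Parameter(dft.imag.clone())

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, frame_size) raw audio chunks (25 ms / 10 ms hop)
        real = frames @ self.w_real.T
        imag = frames @ self.w_imag.T
        power = real.pow(2) + imag.pow(2)        # power spectrum
        return torch.log(power + 1e-6)           # log, as for MFSC features

# The 512-dim features then replace the 23-dim MFSC features fed to the Bi-LSTM.
feats = LearnableSpectrogram(512)(torch.randn(8, 100, 512))   # -> (8, 100, 512)
```

Storing the real and imaginary parts as separate real parameters keeps the sketch compatible with standard optimizers; the layer and the downstream Bi-LSTM would then be trained end-to-end as described above.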
For all structured matrix baselines except circulant (which always has n parameters for an n × n matrix), the number of parameters in the structured matrices is set to equal the number of parameters in the butterfly layer, while the unconstrained matrix is simply a standard dense complex-valued square matrix. For all experiments with a trainable "preprocessing layer," we initialize the preprocessing matrix to represent the FFT (or to approximate it as closely as possible, i.e. to minimize the Frobenius error to the true FFT matrix, in the case of low-rank, sparse, and circulant), which we found to outperform random initialization. As an additional experiment, we sought to investigate whether combining the hand-engineered MFSC featurization pipeline with a learnable kaleidoscope layer (instead of replacing the former with the latter) could lead to accuracy gains. Specifically, in this experiment we first used the standard filter bank featurization pipeline described above, and trained end-to-end as usual. Then, we replaced the FFT step with a K-matrix initialized to the FFT, and made the weights of the Hamming window function and the mel filter bank matrix learnable as well (similarly to prior work). We fine-tuned the resulting architecture for an additional 10 epochs. The final test PER (%) attained by this "hybrid" model is 13.9 ± 0.2; the model has 14.4M parameters, a negligible increase over the 14.3M in the original architecture. Thus, by combining the manually encoded domain knowledge in the filter bank featurization with allowing this structure to be learnable rather than fixed, we are able to nearly match the state-of-the-art 13.8% PER on TIMIT. (While this "hybrid" model certainly involves hand-engineering, the state-of-the-art results use a concatenation of three different speech audio featurizations (MFSC, MFCC, and fMLLR) as the neural network input, along with a customized RNN architecture (LiGRU) specifically designed for speech recognition, and thus require a more complicated pipeline that is arguably even more hand-crafted.) ShuffleNet uses a permutation matrix to shuffle the channels after each grouped 1x1 convolution, sending the i-th channel to the (i mod g)-th group, where g is the total number of groups. The architecture of each block is: 1x1 group conv → Batch norm, ReLU → Permutation → 3x3 depthwise conv → Batch norm → 1x1 group conv. The permutation is fixed. Prior work proposes using the Hadamard transform before and after each grouped 1x1 convolution to mix the channels. Note that the Hadamard transforms are placed before the batch norm and ReLU layer (unlike the permutation matrix in the original ShuffleNet design). In particular, the architecture of each block is: Hadamard → 1x1 group conv → Hadamard → Batch norm, ReLU → 3x3 depthwise conv → Batch norm → 1x1 group conv. The Hadamard transform is fixed. In our architecture, we use a kaleidoscope matrix in OBB (the product of an orthogonal butterfly matrix, a diagonal matrix, and the transpose of another butterfly matrix) before and after each grouped 1x1 convolution. We place the second K-matrix after batch norm and ReLU to more closely mimic the original ShuffleNet design. The structure of each block is: K-matrix → 1x1 group conv → Batch norm, ReLU → K-matrix → 3x3 depthwise conv → Batch norm → 1x1 group conv. The K-matrices are learned along with the rest of the network. We evaluate the CNN architectures on the image classification task of the standard ImageNet dataset. We use the standard data augmentation, training, and evaluation pipeline.
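The block structure described above can be sketched as a small PyTorch module. In this sketch a dense 1x1 convolution (`mix1`/`mix2`) stands in for the learnable OBB K-matrix acting across channels, and the strides, residual connections, and channel splits of the full ShuffleNet unit are omitted; it is an illustration under assumed names, not the training code.

```python
import torch
import torch.nn as nn

class KShuffleBlock(nn.Module):
    """Sketch of the modified ShuffleNet unit:
    K-matrix -> 1x1 group conv -> BN, ReLU -> K-matrix -> 3x3 depthwise conv
    -> BN -> 1x1 group conv. `mix1`/`mix2` are dense 1x1 convs standing in for
    the learnable OBB kaleidoscope channel-mixing matrices."""
    def __init__(self, c: int, groups: int = 8):
        super().__init__()
        self.mix1 = nn.Conv2d(c, c, 1, bias=False)          # stand-in for K-matrix
        self.gconv1 = nn.Conv2d(c, c, 1, groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(c)
        self.mix2 = nn.Conv2d(c, c, 1, bias=False)          # stand-in for K-matrix
        self.dwconv = nn.Conv2d(c, c, 3, padding=1, groups=c, bias=False)
        self.bn2 = nn.BatchNorm2d(c)
        self.gconv2 = nn.Conv2d(c, c, 1, groups=groups, bias=False)

    def forward(self, x):
        x = nn.functional.relu(self.bn1(self.gconv1(self.mix1(x))))
        x = self.bn2(self.dwconv(self.mix2(x)))
        return self.gconv2(x)

out = KShuffleBlock(c=240, groups=8)(torch.randn(2, 240, 28, 28))
```

Replacing the dense channel mixers with butterfly-structured ones would keep the mixing learnable (unlike a fixed permutation or Hadamard transform) while reducing its cost from O(c^2) to O(c log c) per spatial location.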
We train with SGD on 8 GPUs for 90 epochs, with a total batch size of 2048 and an initial learning rate of 0.8. For the 1.0 ShuffleNet g8 architecture, we reduce the total batch size to 1792 to fit into GPU memory, and linearly scale the initial learning rate to 0.7. Other hyperparameters (e.g. learning rate schedule, weight decay, etc.) are the same as in the ShuffleNet paper. We use the training script from Nvidia's deep learning examples repository. In Table 6, we report top-5 classification accuracy on ImageNet, to complement the top-1 accuracy in Table 2. In each setting, the total training time of our K-matrix approach is within 20% of the total training time of vanilla ShuffleNet. In Figure 4, we plot the loss and accuracy on the training set and validation set when we train 1.0 ShuffleNet g8, with either a fixed permutation (Shuffle) or a K-matrix for channel shuffling. Even though each K-matrix is a product of multiple (sparse) matrices, K-matrices take about the same number of training steps to converge as the baseline model. One reason is that K-matrices can be easily initialized or constrained to be orthogonal (Section 2.4), thus avoiding vanishing or exploding gradients. The permuted CIFAR-10 dataset is constructed by applying a fixed permutation to every input. We choose the 2-D bit-reversal permutation, i.e., the bit-reversal permutation on 32 elements is applied to the rows and to the columns. This permutation was chosen because it is locality-destroying: if two indices i, j are close, they must differ in a lower-order bit, so that the bit-reversed indices are far apart. This makes it a particularly difficult test case for architectures that rely on spatial locality, such as "vanilla" CNNs. We describe the model architectures used in Section 3.1 (those reported in Table 3). The model uses a K-matrix to parametrize a permutation P, which it learns in order to recover the true permutation, followed by a standard ResNet18 architecture. Because of the simple decomposable nature of the butterfly factors (Section 2.1), our parameterization is easily extensible with additional techniques: (i) We constrain each butterfly factor matrix in the K-matrix to be doubly-stochastic. For example, each 2 × 2 block in the butterfly factor matrix of block size 2 has the form [a, 1 − a; 1 − a, a] (rows separated by semicolons), where a ∈ [0, 1]. We treat this block as a distribution over permutations, generating the identity [1, 0; 0, 1] with probability a and the swap [0, 1; 1, 0] with probability 1 − a. Butterfly factor matrices with larger block sizes are constrained to be doubly-stochastic in a similar manner. In this way, a permutation is sampled for each butterfly factor matrix, and these permutations are composed to get the final permutation that is applied to the image. (ii) For each minibatch, the examples obtained by applying the sampled permutations to the (permuted) inputs are fed into an additional unsupervised reconstruction loss (a sum over pixel indices 0 ≤ i, j < n) measuring the total-variation smoothness of the de-noised inputs; such loss functions are often used in image denoising. A final regularization loss was placed on the entropy of P, which was annealed over time to encourage P to converge toward a sharper doubly-stochastic matrix (in other words, a permutation). The model is trained with just the reconstruction loss to convergence before a standard ResNet is trained on top. These techniques are applicable to the K-matrix as well as to specialized methods for representing permutations such as Gumbel-Sinkhorn, and are important for recovering the true permutation.
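The 2-D bit-reversal permutation is easy to construct explicitly. The sketch below (NumPy, with our own helper names) builds it for 32 x 32 CIFAR-10 images and shows how it scatters neighboring indices.

```python
import numpy as np

def bit_reversal_permutation(n: int) -> np.ndarray:
    """Indices 0..n-1 with their bits reversed (n must be a power of 2)."""
    bits = int(np.log2(n))
    return np.array([int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)])

def permute_image_2d(img: np.ndarray) -> np.ndarray:
    """Apply the 2-D bit-reversal permutation: permute rows, then columns.
    Works for (32, 32) or (32, 32, channels) arrays; channels are untouched."""
    perm = bit_reversal_permutation(img.shape[0])   # 32 for CIFAR-10
    return img[perm][:, perm]

# Example: nearby indices map far apart, destroying spatial locality.
perm = bit_reversal_permutation(32)
print(perm[:4])   # [ 0 16  8 24] -- indices 0..3 are spread across the image
```

Applying `permute_image_2d` to every image yields the permuted CIFAR-10 dataset; indices that differ only in low-order bits end up half an image apart, which is exactly the locality-destroying behavior described above.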
However, they are not applicable to a general linear layer, which shows the flexibility of K-matrices for representing generic structure despite not being a specially tailored method for this task. We also remark that other classes of structured linear maps, such as low-rank, circulant, and so on, are even less suited to this task, as they are incapable of representing all permutations. 1. Fully connected (FC): This is a 3-layer MLP, with hidden size 1024 and ReLU nonlinearity in between the fully connected layers. 2. GRU: We use a gated recurrent unit (GRU) model, with hidden size 1024. Many RNN architectures have been proposed to capture long-range dependency on permuted image datasets such as Permuted MNIST. Standard gated architectures such as LSTM and GRU have shown competitive performance on the Permuted MNIST dataset, and we choose GRU as a baseline since it has been reported to slightly outperform LSTM. 3. CNN: We use the standard ResNet18 architecture, adapted to the smaller image size of the CIFAR-10 dataset (changing the stride of the first convolutional layer from 2 to 1, and removing the max-pooling layer that follows). 4. Dense + CNN: We add an additional linear layer (i.e. a dense matrix) of size 1024 × 1024 before the ResNet18 architecture. This dense layer can in theory represent a permutation, but cannot benefit from the additional techniques described above. 5. CNN on unpermuted data: We use the standard ResNet18 architecture applied to the unpermuted CIFAR-10 dataset. All models are trained for 200 total epochs, with the Adam optimizer. We use the standard learning rate schedule and weight decay from Mostafa & Wang (2019b). We use Hyperband to tune other hyperparameters such as the initial learning rate and annealing temperature. For every layer of the decoder, we replace all four dense weight matrices in the four Linear layers with four K-matrices from the B class (i.e. butterfly matrices). The models are trained from scratch using the training script from the Fairseq repository, with the same hyperparameters (optimizer, learning rate, number of updates) used in the DynamicConv paper. We note that the DynamicConv model with K-matrices in the decoder trains slightly faster than the default DynamicConv model (both models are trained for 50,000 updates, which requires approximately 7% less time for the K-matrix model than for the default model). To evaluate inference speed, we run the decoding script on the IWSLT-14 De-En test set in single-threaded mode on a server Intel Xeon CPU E5-2690 v4 at 2.60GHz, and measure wall-clock time. The test set contains 6750 sentences, with 149241 tokens. As in prior work, we set the batch size to 1 and the beam size to 1. We additionally compare the speed-quality tradeoff of K-matrices with other classes of structured matrices, when used to replace the fully-connected layers of DynamicConv's decoder. We consider the following classes of structured matrices, in addition to K-matrices: low-rank, circulant, Toeplitz-like, ACDC, Fastfood, and sparse. For classes with a variable number of parameters (e.g. low-rank, sparse), we set the number of parameters to match that of K-matrices. For sparse matrices, besides the results for an ensemble of 10 models (the default setting in the Fairseq repository), we also report the results for a single model, as that could have faster inference time (ensembling/averaging sparse matrices produces a less sparse matrix).
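As an illustration of what "replacing a dense Linear layer with a butterfly matrix" looks like, the sketch below defines a hypothetical `ButterflyLinear` module with O(n log n) parameters and shows, in a comment, how one might swap it into a decoder. It is our own simplified parameterization, not the optimized implementation behind the reported speedups.

```python
import math
import torch
import torch.nn as nn

class ButterflyLinear(nn.Module):
    """Hypothetical drop-in replacement for a square nn.Linear layer using a
    butterfly-structured weight (class B): log2(n) stages of 2x2 mixing,
    giving O(n log n) parameters instead of n^2. Illustrative sketch only."""
    def __init__(self, n: int):
        super().__init__()
        assert n & (n - 1) == 0, "sketch assumes n is a power of 2"
        self.n, self.stages = n, int(math.log2(n))
        # One 2x2 mixing matrix per pair per stage, initialized near identity.
        init = torch.eye(2).expand(self.stages, n // 2, 2, 2).contiguous()
        self.twiddles = nn.Parameter(init + 0.01 * torch.randn_like(init))

    def forward(self, x):                               # x: (..., n)
        y, lead = x, x.shape[:-1]
        for j in range(self.stages):                    # block size k = 2 * half
            half = 2 ** j
            blocks = self.n // (2 * half)
            y = y.reshape(*lead, blocks, 2, half)
            t = self.twiddles[j].reshape(blocks, half, 2, 2)
            y = torch.einsum('bpij,...bjp->...bip', t, y)
        return y.reshape(*lead, self.n)

# Swapping the decoder's square dense Linear layers for butterfly layers (sketch):
# for name, module in decoder.named_modules():
#     if isinstance(module, nn.Linear) and module.in_features == module.out_features:
#         ...replace module with ButterflyLinear(module.in_features)...
```

The near-identity initialization keeps the swapped-in layers well conditioned at the start of training, which is one reason (as noted above for the orthogonal case) that such factored layers can be trained with the same schedules as their dense counterparts.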
In Figure 5, we plot the tradeoff between translation quality (measured by BLEU score) and inference speed (sentences per second). Most classes of structured matrices produce similar translation quality (between 34.1 and 34.4 BLEU score). K-matrices have the second fastest inference time, only 7% slower than low-rank matrices. We note that low-rank matrices benefit from very well-tuned BLAS routines (matrix-matrix multiplication). Even though our implementation of K-matrix multiplication is not yet highly optimized, it is already quite close to the speed of low-rank matrices. Each K-matrix (for fixed width and expansion) has an O(n log n) matrix-vector multiplication algorithm, obtained by sequentially multiplying the input vector by each of the sparse factors. Our implementation of this simple algorithm is surprisingly competitive with optimized subroutines both on GPU (e.g. for training) and on CPU (e.g. for inference). In Figure 6, we compare the speed of multiplying by a K-matrix in class B (i.e. a butterfly matrix) against specialized implementations of the FFT. We normalize the speed by the speed of dense matrix-matrix multiply (on GPU) or dense matrix-vector multiply (on CPU). On GPU, with input size n = 1024 and batch size 256, the training time (forward and backward) of a K-matrix is 23% faster than dense matrix multiply (GEMM from cuBLAS). For inference on CPU, the kaleidoscope fast multiplication can be one or two orders of magnitude faster than GEMV. Over a range of matrix sizes, our implementation is within a factor of 4x of specialized implementations of the FFT, a highly optimized kernel. Our implementation is also memory efficient: in the forward pass through the O(log n) sparse factors, we do not store the intermediate results, but recompute them during the backward pass. Therefore the activation memory required is O(bn) for input batch size b. We directly validate Theorem 1 on well-known types of structured matrices used in machine learning. Given a structured matrix M, we attempt to represent M as closely as possible using K-matrices as well as the standard classes of structured matrices: sparse and low-rank. In Table 7, we quantify the expressivity of each of the three methods, as measured by their ability to approximate a range of different structures. Results for the "global minimum" of kaleidoscope matrices are obtained from the theoretical expressiveness results in Section I and Section J. Low-rank and sparse approximation have closed-form solutions: truncating the SVD and keeping the largest-magnitude entries, respectively. We also report the results of using SGD for kaleidoscope matrices, to validate that a good approximation with K-matrices can be obtained even with standard first-order optimization algorithms. Even with imperfect optimization, kaleidoscope matrices can still capture out-of-class target matrices better than low-rank and sparse matrices. Table 7: Expressiveness of different classes of structured matrices: Frobenius error of representing common structured matrices (columns) of dimension 256 using three structured representations with adjustable numbers of parameters. (Left group: target matrices in the same class as the methods. Middle group: target matrices with a fixed number of parameters. Right: a random matrix to show the typical scale of error.) Each method is allotted the same number of parameters, equal to a log n factor more than that of the target matrix.
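A minimal version of the SGD-based fitting experiment can be sketched as follows: parameterize a width-1 BB* product as masked dense factors with the butterfly sparsity pattern and minimize the Frobenius error to a target matrix. The function names, the width-1 restriction, and the use of Adam are our own simplifications; the paper's experiments use wider products and tune the learning rate with Hyperband.

```python
import torch

def butterfly_masks(n: int):
    """0/1 masks of the log2(n) butterfly factor matrices of size n: the factor
    with block size k is nonzero on the diagonal and on the diagonals +-k/2
    within each k x k diagonal block."""
    idx = torch.arange(n)
    masks, k = [], 2
    while k <= n:
        same_block = (idx[:, None] // k) == (idx[None, :] // k)
        offset = (idx[:, None] - idx[None, :]).abs()
        masks.append((same_block & ((offset == 0) | (offset == k // 2))).float())
        k *= 2
    return masks

def fit_bb_star(target: torch.Tensor, steps: int = 2000, lr: float = 0.05):
    """Fit a width-1 BB* product (butterfly times transposed butterfly) to
    `target` by minimizing the squared Frobenius error with Adam."""
    n = target.shape[0]
    masks = butterfly_masks(n)
    factors = [(torch.eye(n) + 0.01 * torch.randn(n, n)).requires_grad_()
               for _ in range(2 * len(masks))]
    opt = torch.optim.Adam(factors, lr=lr)
    for _ in range(steps):
        prod = torch.eye(n)
        for f, m in zip(factors[:len(masks)], reversed(masks)):   # B part
            prod = prod @ (f * m)
        for f, m in zip(factors[len(masks):], masks):             # B* part
            prod = prod @ (f * m).T
        loss = ((prod - target) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

err = fit_bb_star(torch.randn(64, 64) / 8)    # squared Frobenius error after fitting
```

Running this on targets drawn from different structured classes (circulant, low-rank, sparse, and so on) is, in spirit, how the errors in the kaleidoscope columns of Table 7 are obtained, up to the width and tuning differences noted above.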
Low-rank and sparse matrices are unable to capture any structure outside their own class, while the minima for kaleidoscope matrices found via optimization capture the actual structure of out-of-class targets better than the baselines. The target matrices are kaleidoscope, low-rank, sparse, convolution (i.e. circulant matrices), Fastfood, and an entrywise random iid Gaussian matrix (to show the typical magnitude of the error). All target matrices M were randomly initialized. To find a kaleidoscope approximation with SGD, we use Hyperband to tune its learning rate (from 0.001 to 0.5). E PROPERTIES OF THE BB* HIERARCHY Here, we justify why the definitions in Section 2.2 give rise to a hierarchy. We first make some basic observations about the parameterization. Observation E.1. An n × n matrix M ∈ BB* has 4n log n parameters. Proof. M can be expressed as a product of 2 log n butterfly factor matrices of size n × n. Each of these factor matrices has 2 parameters per row, for a total of 2n parameters each. Hence, the total number of parameters is 4n log n. Observation E.2. Let M be an n × n matrix in (BB*)^w_e. Then, given an arbitrary vector v of length n, we can compute Mv with O(wne log(ne)) field operations. Proof. Since M ∈ (BB*)^w_e, we can decompose it as S E_1 E_2 ... E_w S^T, where S is as given in Definition 2.4, and each E_i is an en × en matrix in BB*. Therefore, to compute Mv, we can use associativity of matrix multiplication to multiply the vector by one of these matrices at a time. Since all of these factors are sparse, we use the naive sparse matrix-vector multiplication algorithm (begin with a 0-vector and perform the corresponding multiplication and addition for each nonzero matrix entry). S (and thus S^T) has n NNZ. Therefore, matrix-vector multiplication by S or S^T requires O(n) operations, which is dominated by the butterfly matrix-vector multiplication. Each E_i can be further decomposed into 2 log(ne) matrices with at most 2ne non-zero entries each (by Observation E.1). Therefore, matrix-vector multiplication by each E_i requires O(ne log(ne)) operations. Since there are w such E_i, we require a total of O(wne log(ne)) operations. Now, we are ready to show that our definition of the classes (BB*)^w_e forms a natural hierarchy. First, we must argue that all matrices are contained within the hierarchy. Lemma E.3. Let M be an arbitrary n × n matrix. Then M ∈ (BB*)^(2n−2). Proof. Corollary E.3 in Appendix K shows that any n × n matrix can be written in a form in which the M_i and M_i' are orthogonal butterfly matrices and D is a diagonal matrix. We can combine D with M_n to form another (possibly not orthogonal) butterfly matrix. This yields a decomposition of M as a product of (possibly not orthogonal) butterfly matrices and their (conjugate) transposes, completing the proof. Next, we argue that, up to a certain point, this hierarchy is strict. Lemma E.4. For every fixed c ≥ 1, there is an n × n matrix M_n (with n sufficiently large) such that M_n ∈ (BB*)^(c+1) but M_n ∉ (BB*)^c. Proof. Given c, fix n to be a power of 2 such that c < n / (4 log_2 n). For the sake of contradiction, assume that every n × n matrix in (BB*)^(c+1) is also in (BB*)^c. Let A be an arbitrary n × n matrix. From Lemma E.3, A ∈ (BB*)^(2n−2). From our assumption, we can replace the first c + 1 BB* factors of A with c (potentially different) BB* factors and still recover A. We can repeat this process until we are left with c BB* factors, implying that A ∈ (BB*)^c.
From Observation E.1, we require 4cn log n < n 2 (by our choice of n) parameters to completely describe A. This is a contradiction since A is an arbitrary n × n matrix, and therefore has n 2 arbitrary parameters. Hence, there must be some n × n matrix in (BB *) c+1 that is not in (BB *) c. In this appendix, we prove our main theoretical , namely, our ability to capture general transformations, expressed as low-depth linear arithmetic circuits, in the BB * hierarchy. This is recorded in Theorem 1. Theorem 1. Let M be an n×n matrix such that matrix-vector multiplication of M times an arbitrary vector v can be represented as a be a linear arithmetic circuit C comprised of s gates (including inputs) and having depth d. Then, M ∈ (BB *) To prove Theorem 1, we make use of the following two theorems. Theorem 2. Let P be an n × n permutation matrix (with n a power of 2). Then P ∈ BB *. Theorem 3. Let S be an n × n matrix of s NNZ. Then S ∈ (BB *) Theorem 2 is proven in Appendix G, and Theorem 3 is proven in Appendix I. Proof of Theorem 1. We will represent C as a product of d matrices, each of size s × s, where s is the smallest power of 2 that is greater than or equal to s. To introduce some notation, define w 1,... w d such that w k represents the number of gates in the k'th layer of C (note that s = n + d k=1 w k). Also, define z 1,... z d such that z 1 = n and z k = w k−1 + z k−1 (z k is the number of gates that have already been used by the time we get to layer k). Let g i denote the i'th gate (and its output) of C (0 ≤ i < s), defined such that: where i 1, i 2 are indices of gates in earlier layers. For the k'th layer of C, we define the s × s matrix M k such that it performs the computations of the gates in that layer. Define the i'th row of M k to be: We'd like to argue that v d contains the outputs of all gates in C (i.e, the n values that make up Mv). To do this we argue, by induction on k, that v k is the vector whose first z k+1 entries are g 0, g 1,..., g (z k −1), and whose remaining entries are 0. The base case, k = 0 is trivial. Assuming this holds for the case k − 1, and consider multiplying v k−1 by M k. The first z k rows of M k duplicate the first z k entries of v k−1 The next w k rows perform the computation of gates g z k,..., g (z k+1 −1). Finally, the remaining rows pad the output vector with zeros. Therefore, v k is exactly as desired. The final matrix product will contain all n elements of the output. By left multiplying by some permutation matrix P, we can reorder this vector such that the first n entries are exactly Mv. Hence, we are left to argue the position of PM d... M 2 M 1 within the BB * hierarchy. Each M k is a matrix with total 2w k + z k < 2s NNZ. From Theorem 3, we can, therefore, represent M k as a product of O matrices (of size 2s) in BB *. From Theorem 2, P ∈ BB *. Note that s ≤ s < 2s, so s = Θ(s). * factors, and requires an expansion from size n to size 2s, or an expansion factor of O(, as desired. Remark F.1. By applying Observation E.2, we see that Theorem 1 gives an O(sd log s) matrix vector multiplication algorithm for M. In this appendix, we prove Theorem 2. To do this, we decompose permutation matrix P into P = LR, with L ∈ B and R ∈ B *. Throughout the proof, we make use of the following definition. Definition G.1. Let L be an n × n permutation matrix (n a power of 2). 
We say that L meets the 2 j balance condition if L can be divided into chunks of 2 j (with each chunk having all columns i such that i 2 j has the same value) such that for every 0 ≤ m < 2 j, each chunk has exactly one L[:, k] = e π k with π k ≡ m (mod 2 j). We say that L is modular-balanced if it meets the 2 j balance condition for each 2 ≤ 2 j ≤ n. First step of decomposition of modular-balanced matrix L. Here, the red entries must be permuted into the main diagonal blocks. Proof. We proceed by induction on n. The base case n = 2 is trivial. As our inductive hypothesis, we assume that all modular-balanced matrices of size n 2 × n 2 are butterfly matrices of size n 2. From Definition 2.3, it is sufficient to show that L can be decomposed as: where B n is a butterfly factor of size n and each L j is an n 2 × n 2 modular-balanced matrix. Define L 1 and L 2 such that: Note that since L is a permutation matrix (and thus has exactly one non-zero entry per column), at most one term of each of these sums can be non-zero. For sake of contradiction, assume L 1 is not modular-balanced. Then, for some 2 j ≤ n 2, there are two columns c 1, c 2 such that c1 2 j = c2 2 j and such that indices of the non-zero entries of L 1 in columns c 1 and c 2 are the same modulo 2 j. However, from the definition of L 1, this implies that the indices of the non-zero entries of L in columns c 1 and c 2 are also the same modulo 2 j, contradicting L being modular-balanced. Hence, L 1 is modular-balanced. An analogous argument (that instead considers columns c 1 + n 2, c 2 + n 2 of L) shows that L 2 is also modular-balanced. To complete the proof, we must argue that B n is a butterfly factor of size n. Since each L i is modular-balanced, it is a permutation matrix. Therefore, L has exactly 1 non-zero entry in each of the first n 2 rows and columns from L 1 and exactly 1 non-zero entry in each of the second n 2 rows and columns from L 2. Hence, L is a permutation matrix. Since both L and L are permutation matrices, B = L (L) −1 must also be a permutation matrix. Therefore, we can view B as performing a permutation of the rows of L to get L. Consider the i'th row of L, with 0 ≤ i < In both cases, the non-zero entries of B fall into the correct diagonal bands (the main diagonal, and the bands n 2 away). Hence, B is a butterfly factor of size n. Now, we consider the process of transforming P into a modular-balanced matrix. We make use of the following lemma. If M met the k 2 balance condition, then each node would additionally have in-degree exactly 1 and out-degree exactly 1. By reversing edges of G such that each (undirected) cycle becomes a directed cycle, we can achieve this. However, reversing edges corresponds to swapping columns of M that are k 2 apart. Let B k be the permutation matrix that performs all such swaps. B k has non-zero entries only along the main diagonal and the diagonal bands k 2 away, and thus is a butterfly factor of size k. We are ready to present the decomposition of P. Lemma G.3. Let P be an n × n permutation matrix. Then we can decompose P into P = LR, where L is modular-balanced and R ∈ B *. Proof. We repeatedly apply Lemma G.2. First, we conclude that there is a butterfly factor B n such that PB n = P, where P meets the n 2 balance condition. Now, we consider the first and last n 2 columns of P independently. We can again apply Lemma G.2 (twice) to conclude that there are butterfly factors B n 2 1, B n 2 2 such that where P meets the n 2 and n 4 balance conditions. 
We continue this process until we obtain a matrix that meets all of the balance conditions. Our final equation is of the form: where B is a butterfly matrix and L is a modular-balanced matrix. Let R = B −1 = B * (since B is a permutation matrix, and thus is orthogonal) and hence R ∈ B *. Then P = LR, as desired. Theorem 2 follows immediately from Lemmas G.3 and G.1. Here, we present some basic facts of the BB * hierarchy that will be useful for later constructions. For simplicity, we assume (WLOG via 0-padding) that all matrices are square matrices with size that is a power of 2. Lemma H.1. If M ∈ B (or M ∈ B *), then DM, MD ∈ B (B * resp.) for any diagonal matrix D. Proof. Left multiplication by a diagonal matrix scales the rows of M by the corresponding diagonal entries. The same can be achieved by scaling all entries the leftmost butterfly factor matrix. Similarly, right multiplication by a diagonal matrix scales the columns of M, which can be achieved by scaling all entries in the columns of the rightmost butterfly factor matrix. w2 by Lemma H.1. Hence, AB ∈ (BB *) w1+w2 e by Definition 2.4. where P is a permutation that that moves the first k rows of each E Ai (in order) into the top mk rows. From Theorem 2, P ∈ BB *, (and so is P T, also a permutation). Within the RHS block matrix, the decompositions of each E Ai can be done in parallel, requiring total width w. Hence, w+2 e, as desired. Remark H.4. If e = 1 in Lemma H.3, then P is unnecessary. Hence, Proof. For each 1 ≤ i ≤ m, let E Ai ∈ F ek×ek be defined such that A i = SE Ai S T (with S as in Definition 2.4). Note that E Ai ∈ (BB *) w. Consider matrices of the form: Here, L and R compute the sum of the 2ek × 2ek matrices on the diagonal of SP 1, where P 1 is a permutation swapping E Ai to the 4 th ek-block column. Note that S is the diagonalization of four matrices in (BB *) w, so S ∈ (BB *) w by Remark H.4. In addition, since each block in S is a butterfly matrix of size ek, S only uses butterfly factors up to size ek, so the outer factor matrices of sizes 4ek and 2ek in S are unused. Also note that L and R are butterfly factor matrices of size 4ek (or B (4ek) 4ek ), and P 1 is a butterfly factor matrix of size 2ek (or B (4ek) 2ek ). This allows us to fold the surrounding matrices L, Through repeated application (m times) of the identity From Lemma H.2, M ∈ (BB *) mw. Finally, note that where P 2 is a permutation that moves the first k columns of the second block-column of M to the left. P 2 can be folded into the final summation factor M m as follows: Lemma H.6. Let M be an invertible n × n matrix such that M ∈ B. Then M −1 ∈ B *. Proof. We prove this in a series of steps.. By the form of B, non-zero entries within a row or column are always exactly k 2 positions apart. Therefore, the only row operations needed for this Gaussian elimination are: • Scaling a row by a constant factor c = 0 • Addition of a row to another row exactly k 2 rows apart Performing these operations on I k will only allow non-zeros on the main diagonal and k 2 diagonals away from the main diagonal. Hence, B −1 k is also a butterfly factor of size k. k be an invertible butterfly factor matrix of size n and block size k. Its inverse is the block diagonal matrix formed by the inverses of each of its constituent butterfly factors. From above, is also a butterfly factor matrix of size n and block size k. Finally, consider M ∈ B. Finally, we include a closure for the Kronecker product, another common matrix composition operation. 
Although Lemma H.7 is not directly used in the subsequent proofs, it allows for examples the for the DFT to be lifted to higher-dimensional Fourier transforms. We also note that the closure bound in Lemma H.7 can be tightened in such cases (cfṘemark H.4). Proof. Note that for some permutation P. In this appendix, we prove Theorem 3. First, we consider matrices with at most n NNZ. Lemma I.1. let S be an n × n matrix with at most n NNZ. Then, S ∈ (BB *) 5. We use this lemma and the addition closure lemma to prove Theorem 3. Proof of Theorem 3. We note that any s sparse matrix is the sum of s n matrices of at most n NNZ, and we appeal to Lemma H.5. In the rest of the section we will prove Lemma I.1. We begin by defining two classes of matrices that will be used in our decomposition. Definition I.1. An n × n matrix H is a horizontal step matrix if for every 0 ≤ i, i < n and An n × n matrix V is a vertical step matrix if V * is a horizontal step matrix. With this definition, the horizontal step matrix obeys a "Lipschitz-like" condition. Each column of a horizontal step matrix can have at most one non-zero entry, and given two non-zero columns k apart, the non-zero entry in the right column must be between 0 and k rows below the non-zero entry in the left column. Note that to show that a matrix is a horizontal step matrix, it is sufficient to argue that this condition holds for each pair of neighboring non-zero columns. Similarly, each row of a vertical step matrix can have at most one non-zero entry, and given two non-zero rows k apart, the non-zero entry in the lower row must be between 0 and k columns to the right of the non-zero entry in the upper row. Lemma I.2. Let H be an n × n horizontal step matrix. Then H ∈ B. Proof. We proceed by induction on n. The base case n = 2 is trivial. As our inductive hypothesis, we assume that all horizontal step matrices of size n 2 × n 2 are butterfly matrices of size n 2. From Definition 2.3, it is sufficient to show that H can be decomposed as: where H 1, H 2 are n 2 × n 2 horizontal step matrices and each D k is a n 2 × n 2 diagonal matrix. Denote the four, n 2 × n 2 corner submatrices of H by: Then, define H 1 and H 2 by: For sake of contradiction, assume that H 1 is not a horizontal step matrix. Then, there are 0 From our definition of H 1, the non-zero entries in columns j and j of H are either (i − i) mod n 2 or n 2 + (i − i) mod n 2, both of which are greater than j − j, rows apart. This contradicts H being a horizontal step matrix. Hence, H 1 must be a horizontal step matrix, as must H 2 from an analogous argument. To finish the proof, we argue the correctness of the decomposition by equating arbitrary entries of each of the 4 corner submatrices. We begin with the upper left submatrix. Here, we consider two cases: Since H is a horizontal step matrix (and hence may have at most one non-zero entry per column), it follows that H 11 [i, j] = 0. In this case, the indicator function evaluates to 0, so Otherwise, for sake of contradiction, suppose that H 21 [i, :] = 0. Then, two of the first n 2 columns of H would have non-zero entries n 2 rows apart, contradicting H being a horizontal step matrix. Hence, In all cases,, so our decomposition correctly recovers the upper left corner of H. Analogous arguments show that the other three corners are also correctly recovered. Hence, our decomposition is correct, and by induction, H ∈ B. Corollary I.1. Let V be a vertical step matrix. Then V ∈ B *. Now, we use step matrices to prove Lemma I.1. 
Proof of Lemma I.1. Given S, we decompose it as S = P 1 HP 2 VP 3, where each P is a permutation matrix, H is a horizontal step matrix, and V is a vertical step matrix. For an example of this, see Figure 9. We first decompose S as S = P 1 S P 3, where P 1 is the permutation that moves all 0 rows of S to the bottom and P 3 is the permutation that moves all 0 columns of S to the right. Next, we further decompose S into S = HV as follows. Since S has s ≤ n NNZ, we can parameterize with the non-zero entries indexed in row-major order. Define matrix H by: Define matrix V by: To show that S = HV, we consider an arbitrary entry: by definition of matrix multiplication by definition of H and V Here, we note that (i, j) can equal (i k, j k) for at most one value of k since the locations in θ are unique. Hence, HV [i, j] = c k only if (i, j) = (i k, j k) for some k, which is exactly the definition of S. Hence, S = HV. We argue that H is a horizontal step matrix through a series of assertions. First, note that H has exactly one non-zero entry in each of its first s columns. Also, note that since θ is in row-major order, these non-zero entries are sorted (any column to the right cannot have a non-zero entry in a higher row). Hence, to show that H is a horizontal step matrix, it is sufficient to argue that adjacent columns of H have non-zero entries at most one row apart. This is equivalent to S having no zero rows between two non-zero rows, which is guaranteed by P 1. Hence, H is a horizontal step matrix. Since V has at most one non-zero entry per row, we may permute the rows of V to obtain a matrix V, where the non-zero entries of V are sorted (any lower row below cannot have a non-zero entry in an earlier column). Hence, for some permutation matrix (P 2) −1, V = (P 2) −1 V, which implies that V = P 2 V. It has exactly one non-zero entry in each of its first s columns. From the action of P 2, these non-zero entries are sorted. Therefore, by the same argument as for H above, V T is a horizontal step matrix. Hence, V is a vertical step matrix. In all, we have found a decomposition S = P 1 HP 2 VP 3, where each P is a permutation matrix (∈ BB * by Theorem 2), H is a horizontal step matrix (∈ BB * by Lemma I.2), and V is a vertical step matrix (∈ BB * by Corollary I.1). By Lemma H.2, S ∈ (BB *) 5. Corollary I.2. Let R be an n × n matrix of rank r. Then R ∈ (BB *) 10r 4. Proof. We can decompose R as R = GH * where G, H are n × r matrices. With appropriate zero-padding, both of these can be made into n × n matrices with at most rn NNZ. The proof follows immediately from Theorem 3 and Lemma H.2. In this appendix, we draw comparisons between the BB * hierarchy and the BP hierarchy introduced by. Lemma J.1. Let F n be the Discrete Fourier Transform of size n. Then F n ∈ (BB *) 2. Proof. , we can express F n as F n = B P, where B ∈ B and P is a permutation (the bit reversal permutation). From Theorem 2, P ∈ BB *. Hence, by Lemma H.2, F n ∈ (BB *) 2. Lemma J.2. Let H n be the Hadamard Transform of size n. Then H n ∈ BB *. Proof. H n ∈ B, so trivially H n ∈ BB *. Lemma J.3. Let S n be the Discrete Sine Transform of size n. Then S n ∈ (BB *) 2. Proof. As described in , S n can be performed as a scaled permutation (separating the even and odd indices of the input, and reversing and negating the odd indices) composed with F n. Therefore, we may decompose S n as S n = B P 2 D P 1, where P 1, P 2 are permutations, B ∈ B, and D is a diagonal matrix. 
P 2 D P 1 is simply a permutation matrix with scaled entries, which can be equivalently expressed as D P for some diagonal matrix D and permutation P. By Lemma H.1, B D ∈ BB *. By Theorem 2, P ∈ BB *. Hence, by Lemma H.2, S n ∈ (BB *) 2. Remark J. 4. An analogous argument shows that the Discrete Cosine Transform is also in (BB *) 2. Lemma J.5. Let C n be an n × n circulant (convolution) matrix. Then C n ∈ BB *. Proof. Using Theorem 2.6.4 of , we can express C n as C n = (F n) −1 DF n where F n is the Discrete Fourier Transform and D is a diagonal matrix. (F n) −1 = B P (with B ∈ B, P a permutation), which implies that F n = (P) −1 (B) −1. Therefore The middle three factors have the effect of performing a permutation, scaling each element, and undoing the permutation, which is equivalent to simply scaling by some diagonal matrix D. Hence, we are left with By Lemma H.1, B D ∈ B. By Lemma H.6, (B) −1 ∈ B *. Hence, C n ∈ BB *. Remark J.6. We can expand any n × n Toeplitz matrix T n into a 2n × 2n circulant matrix (with upper left n × n submatrix equal to T n). Hence, T n ∈ (BB *) 1 2 by Lemma J.5. The Fastfood matrix class can be tightly captured in the BB * hierarchy: Lemma J.7. The product SHDPHB where S, D, B are diagonal matrices, H is the Hadamard transform, and P is a permutation matrix, is in (BB *) 3. Proof. We have shown in Lemma J.2 that H ∈ BB *, and in Theorem 2 that P ∈ BB *. Since BB * is closed under diagonal multiplication (Lemma H.1), we conclude that SHDPHB ∈ (BB *) 3. The two classes of matrices introduced in , called AFDF and ACDC, are also tightly captured in the BB * hierarchy: Lemma J.8. Let AF −1 DF be a product of a diagonal matrix A, the inverse Fourier transform F −1, another diagonal matrix D, and the Fourier transform F. Then AF −1 DF ∈ BB *. Let AC −1 DC be a product of a diagonal matrix A, the inverse cosine transform C −1, another diagonal matrix D, and the cosine transform C. Then AC −1 DC ∈ (BB *) 4. Proof. We have argued in Lemma J.5 that F −1 DF ∈ BB *. Since BB * is closed under diagonal multiplication (Lemma H.1), we conclude that AF −1 DF ∈ BB *. We have shown that C ∈ (BB *) 2, so C −1 ∈ (BBS) 2 as well. Since BB * is closed under diagonal multiplication (Lemma H.1), we conclude that AC −1 DC ∈ (BB *) 4. Remark J.9. Within each butterfly factor matrix of the DFT (excluding the bit reversal permutation) and the Hadamard transform, the columns are pairwise orthogonal and have norm 2. Hence, we can divide all factors by √ 2 to make orthogonal factor matrices. To counteract this scaling, we can add a diagonal matrix with √ 2 log 2 (n) = √ n in all entries to the factorization. By doing this we can place all of the above transforms in the OBB hierarchy (defined in Appendix K) with the same width and expansion factor. Here, we show that, using larger matrices, we are able to similarly capture multi-dimensional versions of the above transforms. Lemma J.10. Let F 2 n be the 2-dimensional Discrete Fourier Transform (represented as an Proof. The separation property of the 2-D DFT allows us to express its action on an n × n matrix as the composition of a 1-D DFT on each of its rows and a 1-D DFT on each of its columns. If we view the 2-D DFT as an n 2 × n 2 matrix, its input and outputs will both be column vectors of size n 2 . As our convention, we list the entries of the input vector in the row-major order corresponding to the n × n input matrix. 
Then, we consider the 2-D DFT in four steps, where the first two steps perform the 1-D DFT row-wise, and the second two steps perform the 1-D DFT column-wise: Step 1: Permute the columns: We permute the columns (with a bit reversal permutation), which performs a bit reversal permutation on each row. Viewing the input as a vector, this step corresponds to left multiplication by a permutation matrix P c that permutes the entries of each chunk of size n of the input vector. Step 2: Multiply each row by a butterfly matrix Since the entries of the input were listed in row major order, this step is achieved through multiplication by a block diagonal matrix of n butterfly matrices of size n, which can be viewed as a product of butterfly factor matrices B Step 3: Permute the rows: We permute the rows (with a bit reversal permutation), which performs a bit reversal permutation on each column. This corresponds to left multiplication by a permutation matrix P r. Since we are permuting the rows, P r permutes the entries at the granularity of each n-chunk. Since Steps 1 and 2 each performed an identical computation to each n-chunk we can move this row permutation before Step 2, combining P c and P r into a single permutation P. Step 4: Multiply each column by a butterfly matrix Consider multiplication by the first factor matrix. In each row, this matrix is taking linear combinations of adjacent column entries. In our length-n 2 vector, these entries will be exactly n indices apart. Therefore this multiplication can be handled by a butterfly factor matrix B (n 2) 2n. Similarly, we find that this butterfly multiplication can be expressed as multiplication by a product of butterfly factor matrices B (n 2) 2n. Combined with the factor matrices from Step 2, these form a butterfly matrix B of size n 2. In all, we see that the 2-D DFT may be realized as multiplication by a permutation matrix P followed by multiplication by a butterfly matrix B. The same argument as Lemma J.1 shows that F 2 n ∈ (BB *) 2. Remark J.11. An analogous argument (using the separation property of the respective transforms) can be used to argue that 2-D Discrete Sine and Discrete Cosine transforms are in (BB *) 2, and that 2-D Hadamard Transforms are in BB *. Lemma J.12. Let C 2 n be a 2-dimensional convolution matrix. Then C 2 n ∈ BB *. Proof. We can express a 2-D convolution matrix as C −1 ) as the product of a butterfly matrix and a permutation matrix. The rest of the argument is analogous to the proof of Lemma J.5. Remark J.13. Using an inductive argument, we can show that all k-dimensional (k ∈ Z) variants of the above transforms, expressed as n k × n k matrices are contained in BB * or (BB *) 2. To do this, we use the separation property of the transforms to break them into a k − 1-dimensional transform (the inductive hypothesis) followed by a 1-dimensional transform. Through practical application of the butterfly matrices, it has been found useful to constrain them in orthogonality. In Section K.1 we will modify the existing kaleidoscope hierarchy to create the orthogonal kaleidoscope hierarchy OBB. Then, in Section K.2, we will argue that all orthogonal matrices, and as a all matrices, can also be expressed in this hierarchy in O(n) width. Lastly, in Section K.3, we will argue that permutation matrices and sparse matrices also exist in this hierarchy in O width, which in turn implies a corresponding for matrices with low-depth arithmetic circuits. 
The definition of the orthogonal butterfly is identical to the original butterfly, with the constraint that all butterfly factors are orthogonal. We specify this definition below: Definition K.1 (Analog of Definition 2.1). An orthogonal butterfly factor of size k ≥ 2 (denoted as B k) is a butterfly factor that is also orthogonal. Definition K.2 (Analog of Definition 2.3). An orthogonal butterfly matrix of size n (denoted as B (n) ) is a butterfly matrix with all butterfly factor matrices being orthogonal. It is easily checked that We choose, which are orthogonal by construction (via). Hence, L ∈ B (n) where • denotes the Hadamard product. From Denoting the first half of this vector by w 0 ∈ C n/2, we have where w 0 2 = u 2 = 1. The follows inductively. As an immediate corollary, we can use Singular Value Decomposition to obtain a factorization for an arbitrary n × n matrix. Corollary K.1. Let M be an arbitrary n × n matrix. Then, M ∈ (OBB) 2n−1, where all but one matrix in the decomposition is orthogonal (unitary). Proof. By employing Singular Value Decomposition, we can decompose M as M = UΣV *, where U, V * are orthogonal and Σ is diagonal. By Lemma K.2, U, V * ∈ (OBB) n−1, and trivially Σ ∈ OBB. Hence, M ∈ (OBB) 2n−1. Note that Σ is the only matrix in the decomposition that is not orthogonal (unitary). We show that we can construct s-sparse matrices in the OBB hierarchy with the same width as the BB * hierarchy. The proof follows a structure to that of Theorem 3. We begin by arguing about permutation and step matrices, then using the same factorization to argue that matrices with at most n NNZ are contained in (BB *) 5. Then, we will appeal to a modified sum closure lemma to extend the argument to matrices of general s NNZ. Similar to Appendix F, we can use these to place all matrices with low-depth circuits for matrix vector multiplication in the OBB hierarchy. We begin by presenting the argument that permutations are included in OBB as a corollary to Theorem 2. Corollary K.2. Let P be a permutation matrix. Then P ∈ B B *. Proof. We appeal to the decomposition from Theorem 2, noting that all butterfly factor matrices constructed in the proofs of Lemmas G.3 and G.1 are permutation matrices, and thus are orthogonal. Hence, P ∈ OBB where the inner diagonal matrix is I. To prove the containment of sparse matrices within the OBB hierarchy, we make use of the following lemma. Lemma K.3. Let P be a permutation matrix and D a diagonal matrix. Then there exist diagonal matrices D and D such that: Proof. Let σ be the permutation such that An analogous argument to above shows that DP = PD. In the BB * hierarchy (Lemma I.2), we were able to show that horizontal step matrices are butterfly matrices. Here, we present a similar for the OBB hierarchy. Lemma K.4. Let H be an n × n horizontal step matrix. Then we can decompose H = DO, where D is a diagonal matrix and O ∈ B. Proof. Throughout the proof, we make reference to the original horizontal step matrix construction given in Lemma I.2 and its proof. To begin, we show that an arbitrary 2 k × 2 k butterfly factor H 2 k in the decomposition of H can be expressed as the product of a diagonal matrix and an orthogonal butterfly factor. Since a butterfly factor is direct sum of 2 × 2 matrices, there is a permutation matrix P 2 k such that conjugation of H 2 k by P 2 k gives a block diagonal matrix H 2 k of n 2 2 × 2 matrices, i.e. Figure 10 for an illustration.) 
Specifically, P 2 k is the permutation where: We argue that each of these 2×2 blocks can be decomposed into a diagonal matrix times an orthogonal matrix. Note that the butterfly factor matrices constructed in the proof of Lemma I.2 each have at most one non-zero entry per column. Hence, there are 4 cases to consider. Note that matrices with at most one non-zero entry are exhausted by Cases 1 and 2. Case 1: In the last two cases, O is a 2 × 2 rotation matrix, which is commonly known to be orthogonal. Assume that we perform the above decomposition on all of the blocks of H 2 k in parallel, therefore expressing H 2 k = D O. We now have is the product of three orthogonal matrices, and thus orthogonal. Additionally, the construction of P 2 k ensures that P * 2 k O P 2 k is butterfly factor. 10 Hence, H 2 k can be expressed as the product of a diagonal matrix and an orthogonal butterfly factor, as desired. Now, we show that this decomposition of butterfly factors implies Lemma K.4. By performing this decomposition in parallel on each butterfly factor, we conclude that any butterfly factor matrix H We complete the argument by induction on n. The base case n = 2 holds by the observation about butterfly factor matrices above. Assume that any horizontal step matrix of size n 2 × n 2 can be expressed as a diagonal matrix times an orthogonal butterfly matrix. Now, consider the n × n horizontal step matrix H. From Lemma I.2, H can be expressed as where H 1, H 2 are n 2 × n 2 horizontal step matrices. By our inductive hypothesis, n D 1 is a butterfly factor, and therefore can be expressed as with O ∈ B, as desired. 10 Conjugation by P 2 k is an isomorphism from 2 k × 2 k butterfly factors onto block diagonal matrices with 2 k−1, 2 × 2 blocks. Therefore, conjugation by P −1 2 k = P * 2 k maps a block diagonal matrix to a butterfly factor. 11 Note that a block diagonal matrix composed of orthogonal matrices is, itself, orthogonal. Just as with the BB * hierarchy, the decomposition of vertical step matrices falls out as an immediate corollary to the horizontal step matrix proof. Corollary K.3. Let V be a vertical step matrix. Then we can decompose V = O * D, where D is a diagonal matrix and O * ∈ B *. Now that we have argued about the decomposition of permutation and step matrices in the OBB hierarchy, we can leverage the construction from Lemma I.1 to argue about matrices with at most n NNZ. Corollary K.4. Let S be an n × n matrix with at most n NNZ. Then, S ∈ (OBB) 5. Proof. We use the construction from Lemma I.1, along with Lemma K.4 and Corollary K.3, to express S as: with each O i ∈ B, each O j ∈ B *, and each D k diagonal. Noting that O 1 and O 5 are permutations, we make use of Lemma K.3 to re-express S as: Note that each M ∈ OBB. Hence, S ∈ (OBB) 5, as desired. Just as in Appendix I, we would like to extend this orthogonal-based construction to capture matrices of general sparsity. To accomplish this, we introduce an addition closure lemma analogous to Lemma K.5 for the OBB hierarchy. With Lemma K.5, we arrive at the following Corollary on general orthogonal sparsity. Corollary K.5. Let S be an n × n matrix with s NNZ. Then, S ∈ (OBB) Proof. Just as in the proof of Theorem 3, we accomplish this using a sum of s n matrices of at most n NNZ. For handling the sum of matrices, we need to appeal to Lemma K.5. To conclude the argument, we give the proof of Lemma K.5. Proof of Lemma K.5. For each 1 ≤ i ≤ m, let E Ai ∈ F ek×ek be defined such that A i = SE Ai S * (with S as in Definition 2.4). 
Note that E Ai ∈ (OBB) w. Consider matrices of the form: Note that K, a block diagonal matrix composed of matrices in (OBB) w, is itself in (OBB) w since was not yet used in L w ) to conclude that OL w ∈ B. Similarly, since no btterfly factor from B (4ek) 2ek has been used in R 1, we may fold P into R 1 to conclude that R 1 P ∈ B *. Finally, we address the scalar multiple of √ 2 by multiplying all entries of any diagonal matrix in the decomposition of K by √ 2. Hence, we may conclude that M i ∈ (OBB) w. Through repeated application (m times) of the identity we see that Therefore, M ∈ (OBB) mw. Next, we note that We would like to show that we can fold Q into the rightmost OBB factor of M. The rightmost matrix in the decomposition of M is P. Note that. Just as earlier, the factor of √ 2 can be multiplied through any diagonal matrix. Also, these two orthogonal butterfly factor matrices can be folded into the the rightmost R matrix (the decomposition of K above does not use these two, rightmost butterfly factors). Hence, Just as in Theorem 1, we can use the sparsity in Lemma K.5 to place matrices with low-depth (linear) arithmetic circuits for matrix vector multiplication in the OBB hierarchy. Corollary K.6. Let M be an n × n matrix such that matrix-vector multiplication of M times an arbitrary vector v can be represented as a be a linear arithmetic circuit C comprised of s gates (including inputs) and having depth d. Then, M ∈ (OBB) Proof. We use the construction given in the proof of Theorem 1. Corollaries K.4 and K.2 allow us to recover the same width and expansion factor with the OBB hierarchy. We show that for any neural network with ReLU nonlinearities and whose weight matrices have arithmetic circuits with few gates, its linear network counterpart (obtained by removing all the ReLU's) also has an arithmetic circuit with not too many more gates. This implies that in trying to find the smallest arithmetic circuit augmented with ReLU gates to represent a ReLU network, one might as well try to find the smallest arithmetic circuits that represent the matrix-vector multiplication of each weight matrix. Proposition 2. Consider a neural network architecture consisting of L layers with weight matrices W 1,..., W L ∈ F n×n and ReLU nonlinearity in between. Suppose that matrix-vector multiplication of W i times an arbitrary vector v can be represented as a linear arithmetic circuit with s i gates (including inputs Proof of Proposition 2. To compute the output of the network ReLU(W L (. . . ReLU(W 1 v))), we first compute the matrix-vector product W 1 v with an arithmetic circuit of s 1 gates by assumption, and use n other ReLU gates to compute the pointwise ReLU. Then we repeat the process for layer 2, 3,..., L, using the arithmetic circuits of W 1,..., W L and Ln additional gates for ReLU. In total we obtain an arithmetic circuit augmented with ReLU gates with L i=1 s i + Ln total gates. Conversely, to build an arithmetic circuit augmented with ReLU gates to compute W 1 v,..., W L... W 1 v, we pass v and then −v through the circuit that computes ReLU(W 1 x) for an arbitrary x to get ReLU(W 1 v) and ReLU(−W 1 v). Noting that x = ReLU(x) − ReLU(−x), we can use n additional gates to compute W 1 v from ReLU(W 1 v) and ReLU(−W 1 v). Repeat the process for layer 2, 3,..., L (for example, pass W 1 v and −W 1 v to the circuit that computes W 2 x for an arbitrary x on layer 2). Overall we need to double the circuits that computes all the activations of the network ReLU (W 1 v),..., ReLU(W L . . . 
ReLU(W 1 v)), requiring 2s gates. We also need n additional gates per layer to compute the negation of the input to that layer (e.g. computing −v from v), and n additional gates per layer to subtract the output of the ReLU circuit (e.g. computing W 1 v from ReLU(W 1 v) and ReLU(−W 1 v).) Therefore we can construct an arithmetic circuit augmented with ReLU gates with 2s + 2L total gates that computes the activations of the network without ReLU W 1 v,..., W L... W 1 v. We now prove an asymptotic bound on the VC dimension of a ReLU network whose weight matrices are kaleidoscope matrices with bounded width and expansion. Proposition 3. Let F be the class of ReLU neural networks consisting of L layers, where each layer is a K-matrix with width and expansion bounded by some constant C. Suppose that the network has W total parameters. Let sign F denote the corresponding classification functions: {x → sign f (x): f ∈ F}. Then this class has VC dimension: VCdim(sign F) = O(LW log W). We leverage the from for the case where the entries of the weight matrices interact multiplicatively, but with polynomially bounded degrees. This proof is similar to the VC bound for ReLU networks whose weight matrices are butterfly matrices . Proof. To use Theorem 3 of , we simply need to check that the entries of the linear layer, as polynomials of the parameters, has degree at most c 1 m c2 l for some universal constant c 1, c 2 > 0, where m l is the size of output of the l-th layer. If the network weight matrices are K-matrices with bounded width and expansion, each weight matrix is a product of at most c 3 log m l sparse factors, for some universal constant c 3 > 0. This means that the degree is polynomially bounded, which satisfies the condition of the theorem. Therefore the VC dimension is bounded to be almost linear in the number of parameters: VCdim(sign F) = O(LW log W). We give a quick overview of arithmetic circuits. This is a model of computation that has been studied for numerous computational problems (and is the basic model for algebraic complexity theory). For our purposes, we will exclusively focus on arithmetic circuits for the matrix-vector multiplication problem. For a more detailed exposition, the reader is referred to the standard book on this topic (Bürgisser et al., 2013). Definition M.1 (Arithmetic Circuits). An arithmetic circuit that computes y = Ax (for A ∈ F m×n) has n input gates (corresponding to x,..., x[n − 1]) and m output gates (corresponding to y,..., y[m − 1]). All the internal gates correspond to addition, subtraction, multiplication and division 12 over the underlying field F. The circuit is also allowed to use constants from F for'free.' The definition of the internal gates can depend on A (as well as x of course). In other words, one can'bake' the knowledge about A into the circuit. The size s of a circuit is n plus the number of addition, multiplication, subtraction and division gates used in the circuit. The depth d of a circuit is the minimum number of layers such that all gates in a given layer take as its input gates from previous layers. One drawback of arithmetic circuits (especially for infinite fields e.g. F = R, which is our preferred choice in this work) is that they assume operations over F can be performed exactly. In particular, it ignores precision issues involved with real arithmetic. Nonetheless, this model turns out to be a very useful model in reasoning about the complexity of doing matrix-vector multplication for any family of matrices. 
Perhaps the strongest argument in support of arithmetic circuits is that a large (if not an overwhelming) majority of matrix-vector multplication algorithm also imply an arithmetic circuit of size comparable to the runtime of the algorithm (and the depth of the circuit roughly correponds to the time taken to compute it by a parallel algorithm). For example consider the obvious algorithm to compute Ax 14 One thing to note about the arithmetic circuit above is that all the multplications involve at least one input that is a constant from F (recall that we can assume that the entries of A are constants that can be used to build the circuit). This leads to the following important sub-class of arithmetic circuits: Definition M.2 (Linear Arithmetic Circuits). An arithmetic circuit is called a linear arithmetic circuit if it only uses addition, subtraction and multiplication. Further, every multiplcation has a fixed constant from F as at least one of its two inputs. In other words, all gates in the circuit are linear functions of their inputs (i.e. of the form ax + by for fixed constants a, b ∈ F). Intuitively for the matrix-vector multiplication, it makes sense to consider linear arithmetic circuits since the final function we want to compute Ax is indeed a linear function of its inputs. For inifinite fields (e.g. F = R or F = C), it turns out that this is essentially without loss of generality: Theorem 4 ((Bürgisser et al., 2013)). Let F be an infinite field. Any (general) arithmetic circuit to compute Ax over F of size s and depth d can be converted into a linear arithmetic circuit of size O(s) and depth O(d). 12 Here we assume all the gates have two inputs. 13 The input layer corresponding to the input gates does not contriubte to the depth. 14 The claim on the depth follow from the fact that each of the sums The above implies that for asymptotic considerations, linear arithmetic circuits for matrix-vector multiplication are equivalent to general arithmetic circuits. One important property of linear arithmetic circuits of depth d, which we will use in our arguments, is that such a circuit can be equivalently represented as product of d sparse matrices (see the proof of Theorem 1 for the precise derivation 16). As mentioned earlier, a vast majority of efficient matrix vector multiplication algorithms are equivalent to small (both in size and depth) linear arithmetic circuit. For example the FFT can be thought of as an efficient arithmetic circuit to compute the Discrete Fourier Transform (indeed when one converts the linear arithmetic circuit for FFT into a matrix decomposition, 17 then each matrix in the decomposition is a butterfly factor, with each block matrix in each factor being the same). For an illustration of this consider the DFT with n = 4 as illustrated in Figure 11. Finally, Figure 13 is representation of the arithmetic circuit of Figure 12 as a product of a butterfly matrix and (the bit-reversal) permutation. We note that our generic arithmetic circuit to decomposition into BB * is not as tight as in Figure 13. One reason for the vast majority of existing efficient matrix vector algorithms leading to (linear) arithmetic circuits is that they generally are divide and conquer algorithms that use polynomial operations such as polynomial multiplication or evaluation (both of which themselves are divide and conquer algorithms that use FFT as a blackbox) or polynomial addition. 
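The n = 4 case mentioned above can be checked directly: the size-4 DFT matrix factors into an outer butterfly factor, a block-diagonal factor containing two size-2 DFTs, and the bit-reversal permutation. A small sketch (Python/NumPy; this is the textbook radix-2 factorization written out for illustration, not a transcription of the paper's figures):

import numpy as np

P = np.eye(4)[[0, 2, 1, 3]]                      # bit-reversal: (x0, x1, x2, x3) -> (x0, x2, x1, x3)
F2 = np.array([[1, 1], [1, -1]], dtype=complex)  # DFT of size 2
inner = np.kron(np.eye(2), F2)                   # block-diagonal butterfly factor (two DFT2 blocks)
D = np.diag([1.0, -1.0j])                        # twiddle factors for n = 4
outer = np.block([[np.eye(2), D], [np.eye(2), -D]])  # outermost butterfly factor

F4 = outer @ inner @ P
assert np.allclose(F4, np.fft.fft(np.eye(4)))    # matches the DFT matrix of size 4

Each factor has only two nonzeros per row, so applying this pattern recursively gives the familiar O(n log n) cost of the FFT.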
Each of these pieces is well known to have small (depth and size) linear arithmetic circuits, since the FFT has these properties. Finally, the divide-and-conquer structure of the algorithms leads to circuits of low depth. See the book of Pan for a more elaborate description of this connection. In fact, the recent work of De Sa et al. makes this fact explicit and presents the most general known structure on matrices that implies near-linear-size linear arithmetic circuits for the corresponding matrix-vector multiplication. Their work combines two separate classes of structured matrices, orthogonal polynomial transforms (Szegö, 1967) and matrices with low displacement rank, and presents a class of linear arithmetic circuits that solves the matrix-vector multiplication problem for both. We note that structured matrices with low displacement rank have also been used to replace fully connected layers in some neural network architectures.
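As a concrete instance of this connection, a circulant matrix (a special case of the Toeplitz-like, low-displacement-rank class mentioned above) admits an O(n log n) matrix-vector multiplication through the FFT. A short sanity check (Python/NumPy; the specific matrix and vector are random and purely illustrative):

import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.standard_normal(n)        # first column of the circulant matrix
x = rng.standard_normal(n)
C = np.stack([np.roll(c, k) for k in range(n)], axis=1)   # dense circulant: column k is c shifted by k

# circular convolution theorem: C x = ifft(fft(c) * fft(x)), i.e. O(n log n) work
y_fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
assert np.allclose(C @ x, y_fast)

Since fft(c) is a fixed vector determined by the matrix, unrolling the two transforms and the pointwise product gives exactly the kind of small, low-depth linear arithmetic circuit discussed in this section.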
We propose a differentiable family of "kaleidoscope matrices," prove that all structured matrices can be represented in this form, and use them to replace hand-crafted linear maps in deep learning models.
711
scitldr
We propose a method, called Label Embedding Network, which can learn label representation (label embedding) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original one-hot represented loss function is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process. As a , the trained model can achieve substantially higher accuracy and with faster convergence speed. Experimental based on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embedding is reasonable and interpretable. The proposed method achieves comparable or even better than the state-of-the-art systems. Most of the existing methods of neural networks use one-hot vector representations for labels. The one-hot vector has two main restrictions. The first restriction is the "discrete distribution", where each label is distributed at a completely different dimension from the others. The second restriction is the "extreme value" based representation, where the value at each dimension is either 1 or 0, and there is no "soft value" allowed. Those deficiencies may cause the following two potential problems. First, it is not easy to measure the correlation among the labels due to the "discrete distribution". Not being able to measure the label correlation is potentially harmful to the learned models, e.g., causing the data sparseness problem. Given an image recognition task, the image of the shark is often similar to the image of the dolphin. Naturally, we expect the two labels to be "similar". Suppose that we have a lot of training examples for shark, and very few training examples for dolphin. If the label shark and the label dolphin have similar representations, the prediction for the label dolphin will suffer less from the data sparsity problem. Second, the 0/1 value encoding is easy to cause the overfitting problem. Suppose A and B are labels of two similar types of fishes. One-hot label representation prefers the ultimate separation of those two labels. For example, if currently the system output probability for A is 0.8 and the probability for B is 0.2, it is good enough to make a correct prediction of A. However, with the one-hot label representation, it suggests that further modification to the parameters is still required, until the probability of A becomes 1 and the probability of B becomes 0. Because the fish A and the fish B are very similar in appearance, it is probably more reasonable to have the probability 0.8 for A and 0.2 for B, rather than completely 1 for A and 0 for B, which could lead to the overfitting problem. We aim to address those problems. We propose a method that can automatically learn label representation for deep neural networks. As the training proceeds, the label embedding is iteratively learned and optimized based on the proposed label embedding network through back propagation. The original one-hot represented loss function is softly converted to a new loss function with soft distributions, such that those originally unrelated labels have continuous interactions with each other during the training process. As a , the trained model can achieve substantially higher accuracy, faster convergence speed, and more stable performance. 
The related prior studies include the traditional label representation methods BID7 BID10 BID1, the "soft label" methods BID22, and the model distillation methods BID9 ).Our method is substantially different from those existing work, and the detailed comparisons are summarized in Appendix E. The contributions of this work are as follows:• Learning label embedding and compressed embedding: We propose the Label Embedding Network that can learn label representation for soft training of deep networks. Furthermore, some large-scale tasks have a massive number of labels, and a naive version of label embedding network will suffer from intractable memory cost problem. We propose a solution to automatically learn compressed label embedding, such that the memory cost is substantially reduced.• Interpretable and reusable: The learned label embeddings are reasonable and interpretable, such that we can find meaningful similarities among the labels. The proposed method can learn interpretable label embeddings on both image processing tasks and natural language processing tasks. In addition, the learned label embeddings can be directly adapted for training a new model with improved accuracy and convergence speed.• General-purpose solution and competitive : The proposed method can be widely applied to various models, including CNN, ResNet, and Seq-to-Seq models. We conducted experiments on computer vision tasks including CIFAR-100, CIFAR-10, and MNIST, and on natural language processing tasks including LCSTS text summarization task and IWSLT2015 machine translation task. Results suggest that the proposed method achieves significantly better accuracy than the existing methods (CNN, ResNet, and Seq-to-Seq). We achieve comparable or even better than the state-of-the-art systems on those tasks. A neural network typically consists of several hidden layers and an output layer. The hidden layers map the input to the hidden representations. Let's denote the part of the neural network that produces the last hidden representation as h = f (x) where x is the input of the neural network, h is the hidden representation, and f defines the mapping from the input to the hidden representation, including but not limited to CNN, ResNet, Seq-to-Seq, and so on. The output layer maps the hidden representation to the output, from which the predicted category is directly given by an argmax operation. The output layer typically consists of a linear transformation that maps the hidden representation h to the output z: DISPLAYFORM0 where o represents the linear transformation. It is followed by a softmax operation that normalizes the output as z, so that the sum of the elements in z is 1, which is then interpreted as a probability distribution of the labels: z = softmax(z) The neural network is typically trained by minimizing the cross entropy loss between the true label sdistribution y and the output distribution as the following: DISPLAYFORM1 where m is the number of the labels. In the following, we will use y to denote the true label category, y to denote the one-hot distribution of y, x to denote softmax(x), and H(p, q) to denote the cross entropy between p and q, where p is the distribution that the model needs to approximate, e.g., y in, and q is the distribution generated by the model, e.g., z in. The label embedding is supposed to represent the semantics, i.e. similarity between labels, which makes the length of each label embedding to be the number of the labels m. 
The embedding is denoted by E ∈ R m×m where m is the number of the labels. Each element in a label embedding vector represents the similarity between two labels. For example, in label y's embedding vector e = E y, the i-th value represents the similarity of label y to label i. To learn the embeddings, a reasonable approach would be to make the label embedding e = E y close to the output z in of the neural network, whose predicted label is y, as the output distribution of the model contains generalization information learned by the neural network. In turn, the label embedding can be used as a more refined supervisory signal for the learning of the model. However, the aforementioned approach affects the learning of the model, in terms of the discriminative power. In essence, the model is supposed to distinguish the inputs, while the label embedding is supposed to capture the commonness of the labels based on the inputs, and the two goals are in conflict. To avoid the conflict, we propose to separate the output representation. One output layer, denoted by o 1, is used to differentiate the hidden representation as normal, which is used for predicting, and the other output layer, denoted by o 2, focuses more on learning the similarity of the hidden representation, from which the label embedding is learned: DISPLAYFORM0 The two output layers share the same hidden representation, but have independent parameters. They both learn from the one-hot distribution of the true label: DISPLAYFORM1 DISPLAYFORM2 In back propagation, the gradient from z 2 is kept from propagating to h, so the learning of the o 2 does not affect the hidden representation. By doing this, the discriminative power of o 1 is maintained and even enhanced by the using of label embedding. In the meanwhile, the label embedding obtains a more stable learning target. The label embedding is then learned by minimizing the cross entropy loss between the normalized embedding e = softmax(e) and the normalized output z 2 = softmax(z 2): DISPLAYFORM3 However, the above approach does not scale properly during the training, as the output z 2 becomes too close to the one-hot distribution y, and the label embedding fails to capture the similarity between labels. To solve this, we apply the softmax with temperature τ to soften the distribution of the normalized z 2, which is computed by DISPLAYFORM4 and the loss becomes DISPLAYFORM5 In the following we will use z 2 to denote the softmax with temperature. By applying a higher temperature, the label embedding gains more details of the output distribution, and the elements in an embedding vector other than the label-based one, i.e. the elements off the diagonal, are better learned. However, the annealed distribution also makes the difference between the incorrect labels closer. To solve the problem, we further propose to regularize the normalized output, so that the highest value of the distribution does not get too high, and the difference between labels is kept: DISPLAYFORM6 If p equals to 1 or 2, the loss is a hinge L1 or L2 regularization. The learned embedding is in turn used in the training of the network by making the output close to the learned embedding. This is done by minimizing the cross entropy loss between the normalized output and the normalized label embedding: DISPLAYFORM7 As a fine-grained distribution of the true label is learned by the model, a faster convergence is achieved, and risk of overfitting is also reduced. 
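Putting the pieces of this section together, the following is a minimal sketch of the loss terms (PyTorch-style Python; the head names o1/o2, the temperature tau, and the hinge threshold alpha follow the notation above, the remaining names are illustrative, and the relative weighting of the terms is omitted here):

import torch
import torch.nn.functional as F

def soft_cross_entropy(target_probs, logits):
    # H(p, q) with a soft target distribution p and model logits q
    return -(target_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

def label_embedding_losses(h, y, o1, o2, E, tau=2.0, alpha=0.9):
    z1 = o1(h)             # discriminative head; gradients flow back into h
    z2 = o2(h.detach())    # similarity head; cut off from the hidden representation
    num_labels = z1.size(-1)
    one_hot = F.one_hot(y, num_labels).float()

    loss_o1 = soft_cross_entropy(one_hot, z1)    # H(y, z1)
    loss_o2 = soft_cross_entropy(one_hot, z2)    # H(y, z2)

    # the embedding row E[y] learns from the tempered output distribution of o2
    loss_emb = soft_cross_entropy(F.softmax(z2.detach() / tau, dim=-1), E[y])

    # hinge (L1) regularizer keeping the largest output probability below alpha
    loss_reg = F.relu(F.softmax(z2, dim=-1).max(dim=-1).values - alpha).mean()

    # z1 additionally learns from the (normalized, fixed) label embedding
    loss_soft = soft_cross_entropy(F.softmax(E[y].detach(), dim=-1), z1)

    return loss_o1 + loss_o2 + loss_emb + loss_reg + loss_soft

This sketch only illustrates how the pieces fit together; the detach calls reflect the statement above that the gradient from z2 is kept from propagating to h and that the embedding serves as a fixed target when supervising z1.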
In summary, the final objective of the proposed method is as follows: FIG1 shows the overall architecture of the proposed method. Various kinds of neural networks are compatible to generate the hidden representation. In our experiments, we used CNN, ResNet, and Seq-to-Seq. However, the choice may not be limited to those architectures. Moreover, although the output architecture is significantly re-designed, the computational cost does not increase much, as the added operations are relatively cheap in computation. DISPLAYFORM8 When there is a massive number of labels (e.g., over 20,000 labels for neural machine translation), the embedding E takes too much memory. Suppose we have a neural machine translation task with 50,000 labels, then the label embedding is a 50,000 × 50,000 matrix. The embedding matrix alone will take up approximately 9,536MB, which is not suitable for GPU. To alleviate this issue, we propose to re-parameterize the embedding matrix to a product of two smaller matrices, A and B: DISPLAYFORM0 where m is the number of the labels, and h is the size of the "compressed" label embedding. The label embedding for label y is computed as the following: DISPLAYFORM1 where A y means taking out the y-th row from the matrix A. The ing vector e is an mdimensional vector, and can be used as label embedding to substitute the corresponding part in the final loss of a normal label embedding network. The matrix A can be seen as the "compressed" label embeddings, where each row represents a compressed label embedding, and the matrix B can be seen as the projection that reconstructs the label embeddings from the "compressed" forms. This technique can reduce the space needed to store the label embeddings by a factor of m 2h. Considering the previous example, if h = 100, the space needed is reduced by 250x, from 9,536MB to about 38.15MB. We conduct experiments using different models (CNN, ResNet, and Seq-to-Seq) on diverse tasks (computer vision tasks and natural language processing tasks) to show that the proposed method is general-purpose and works for different types of deep learning models. The CIFAR-100 BID13 ) dataset consists of 60,000 32×32 color images in 100 classes containing 600 images each. The dataset is split into 50,000 training images and 10,000 test images. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs). The CIFAR-10 dataset BID13 ) has the same data size as CIFAR-100, that is, 60,000 32×32 color images, split into 50,000 training images and 10,000 test images, except that it has 10 classes with 6,000 images per class. The MNIST handwritten digit dataset BID14 consists of 60,000 28×28 pixel gray-scale training images and additional 10,000 test examples. Each image contains a single numerical digit. We select the first 5,000 images of the training images as the development set and the rest as the training set. consists of more than 2,400,000 social media text-summary pairs. It is split into 2,400,591 pairs for training, 10,666 pairs for development data, and 1,106 pairs for testing. Following, the evaluation metric is ROUGE-1, ROUGE-2 and ROUGE-L BID16. The dataset is from the International Workshop on Spoken Language Translation 2015. The dataset consists of about 136,000 English-Vietnam parallel sentences, constructed from the TED captions. It is split into training set, development set and test set, with 133,317, 1,268 and 1,553 sentence pairs respectively. 
The evaluation metric is BLEU score BID23. For CIFAR-100 and CIFAR-10, we test our method based on ResNet with 18 layers and 8 layers, respectively, following the settings in BID8. For MNIST, the CNN model consists of two convolutional layers, one fully-connected layer, and another fully-connected layer as the output layer. The filter size is 5 × 5 in the convolutional layers. The first convolutional layer contains 32 filters, and the second contains 64 filters. Each convolutional layer is followed by a max-pooling layer. Following common practice, we use ReLU BID6 as the activation function of the hidden layers. For LCSTS and IWSLT2015, we test our approach based on the sequence-to-sequence model. Both the encoder and decoder are based on the LSTM unit, with one layer for LCSTS and two layer for IWSLT2015. Each character or word is represented by a random initialized embedding. For LCSTS, the embedding size is 400, and the hidden state size of the LSTM unit is 500. For IWSLT2015, the embedding size is 512, and the hidden state size of the LSTM unit is 512. We use beam search for IWSLT2015, and the beam size is 10. Due to the very large label sets, we use the compressed label embedding network (see Section 2.2) for both tasks. Although there are several hyper-parameters introduced in the proposed method, we use a very simple setting for all tasks, because the proposed method is robust in our experiments, and simply works well without fine-tuning. We use temperature τ = 2 for all the tasks. For simplicity, we use the L1 form of the hinge loss of o 2, and α is set to 0.9 for all the tasks. We use the Adam optimizer for all the tasks, using the default hyper-parameters. For CIFAR-100, we divide the learning rate by 5 at epoch 40 and epoch 80. As shown in the previous work BID8, dividing the learning rate at certain iterations proves to be beneficial for SGD. We find that the technique also applies to Adam. We do not apply this technique for CIFAR-10 and MNIST, because the are similar with or without the technique. The experiments are conducted using INTEL Xeon 3.0GHz CPU and NVIDIA GTX 1080 GPU. We run each configuration 20 times with different random seeds for the CV tasks. For the tasks without development sets, we report the at the final epoch. For the ones with development sets, we report the test at the epoch that achieves the best score on development set. First, we show on CIFAR-100 and CIFAR-10, which are summarized in TAB1. As we can see, the proposed method achieves much better . On CIFAR-100, the proposed method achieves 12.4% error reduction ratio from the baseline (ResNet-18). On CIFAR-10, the proposed method achieves 19.5% error reduction ratio from the baseline (ResNet-8). The training time per epoch is similar to the baselines. The of MNIST are summarized in TAB1. As we can see, the proposed method achieves the error rate reduction of over 32%.The detailed error rate curves are shown in FIG2. The 20 repeated runs are shown in lighter color, and the averaged values are shown in deeper color. As we can see from FIG2, the proposed method achieves better convergence speed than the ResNet and CNN baselines. This is because the label embedding achieves soft training of the model, where the conflict of the features of similar labels are alleviated by the learned label embeddings. 
The learned label embeddings enables the model to share common features when classifying the similar labels, because the supervisory signal contains the information about similarity, thus making the learning easier. Besides, the model is not required to distinguish the labels completely, which avoids unnecessary subtle update of the parameters. In addition, we can see that by using label embedding the proposed method has much more stable training curves than the baselines. The fluctuation of the proposed method is much smaller than the baselines. As the one-hot distribution forces the label to be completely different from others, the original objective seeks unique indicators for the labels, which are hard to find and prone to 17.7 8.5 15.8 Seq2seq (C) 21.5 8.9 18.6 Seq2seq-Attention (W) 26.8 16.1 24.1 Seq2seq-Attention (C) 29.9 17.4 27.2 Seq2seq-Attention (C) (our implementation) 30.1 17.9 27.2 Seq2seq-Attention-LabelEmb (C) (our proposal) 31.7 (+1.6) 19.1 (+1.2) 29.1 (+1.9) overfitting, thus often leading the training astray. The proposed method avoids that by softening the target distribution, so that the features used are not required to be unique, and more common but essential features can be selected, which stabilizes the learning compared to the original objective. DISPLAYFORM0 The proposed method achieves comparable or even better than the state-of-the-art systems. More detailed comparisons to the high performance systems are in Appendix D. It would be interesting to check the learned label embeddings from those datasets. FIG3 shows the learned label embeddings from the CIFAR-100, CIFAR-10, and MNIST tasks, respectively. For the CIFAR-100 task, as we can see, the learned label embeddings are very interesting. Since we don't have enough space to show the heatmap of all of the 100 labels, we randomly selected three groups of labels, with 15 labels in total. For example, the most similar label for the label "bottle" is "can". For the label "bowl", the two most similar labels are "cup" and "plate". For the label "man", the most similar label is "woman", and the second most similar one is "boy".For the CIFAR-10 task, as we can see, the learned label embeddings are also meaningful. For example, the most similar label for the label "automobile" is "truck". For the label "cat", the most similar label is "dog". For the label "deer", the most similar label is "horse". For the MINST task, there are also interesting patterns on the learned label embeddings. Those heatmaps of the learned labels demonstrate that our label embedding learning is reasonable and can indeed reveal rational similarities among diversified labels. The learned embedding can also be used to directly trained a new model on the same task, with improved accuracy and faster convergence, which we will show in Appendix C. First, we show experimental on the LCSTS text summarization task. The are summarized in TAB3. The performance is measured by ROUGE-1, ROUGE-2, and ROUGE-L. As we can see, the proposed method performs much better compared to the baselines, with ROUGE-1 score of 31.7, ROUGE-2 score of 19.1, and ROUGE-L score of 29.1, improving by 1.6, 1.2, and 1.9, respectively. In addition, the of the baseline implemented by ourselves are competitive BLEU Stanford NMT BID18 23.3 NMT (greedy) BID19 25.5 NMT (beam=10) BID19 26.1 Seq2seq-Attention FORMULA6 25.7 Seq2seq-Attention-LabelEmb (beam=10) 26.8 (+1.1) with previous work. 
In fact, in terms of all of the three metrics, our implementation consistently beats the previous work, and the proposed method could further improve the . Then, we show experimental on the IWSLT2015 machine translation task. The are summarized in TAB4. We measure the quality of the translation by BLEU, following common practice. The proposed method achieves better BLEU score than the baseline, with an improvement of 1.1 points. To our knowledge, 26.8 is the highest BLEU achieved on the task, surpassing the previous best 26.1 BID19. From the experimental , it is clear that the compressed label embedding can improve the of the Seq-to-Seq model as well, and works for the tasks, where there is a massive number of labels. The label embedding learned in compressed fashion also carries semantic similarities. We report the sampled similarities in TAB5. As shown in TAB5, the learned label embeddings capture the semantics of the label reasonably well. For example, the word "đỏ" (red) is most similar to the colors, i.e. "màu" (color), "red" (red), "xanh" (blue), "đen" (black), and "vàng" (yellow). The word "mưa (rain)" is most similar to "bão" (storm), "trời" (sky), "gió" (wind), "cơn" (storm), "nước" (water), which are all semantically related to the natural phenomenon "rain". The of the label embeddings learned in a compressed fashion demonstrate that the re-parameterization technique is effective in saving the space without degrading the quality of the learned label embeddings. They also prove that the proposed label embedding also works for NLP tasks. We propose a method that can learn label representation during the training process of deep neural networks. Furthermore, we propose a solution to automatically learn compressed label embedding, such that the memory cost is substantially reduced. The proposed method can be widely applied to different models. We conducted experiments on CV tasks including CIFAR-100, CIFAR-10, and MNIST, and also on natural language processing tasks including LCSTS and IWSLT2015. Results suggest that the proposed method achieves significant better accuracies than the existing methods (CNN, ResNet, and Seq-to-Seq). Moreover, the learned label embeddings are reasonable and interpretable, which provides meaningful semantics of the labels. We achieve comparable or even better with the state-of-the-art systems on those tasks. To achieve good performance, there are some additional considerations for the proposed method. First, when learning the label embedding, if the current output from the model is wrong, which often happens when the training just begins, the true label's embedding should not learn from the output from the model. This is because the information is incorrect to the learning of the label embedding, and should be neglected. This consideration can be particularly useful to improve the performance under the circumstances where the model's prediction is often wrong during the start of the training, e.g. the CIFAR-100 task and the neural machine translation task. Second, we suggest using the diagonal matrix as the initialization of the label embedding matrix. By using the diagonal matrix, we provide a prior to the label embedding that one label's embedding should be the most similar to the label itself, which could be useful at the start of the training and beneficial for the learning. We also conducted experiments on MNIST, using the MLP model. The MLP model consists of two 500-dimensional hidden layers and one output layer. 
The other settings are the same as the CNN model. The experimental are summarized in TAB6. As we can see, the proposed label embedding method achieves better performance than the baseline, with an error rate reduction over 24%. All the are the averaged error rates over 20 repeated experiments, and the standard deviation are also shown. FIG4 shows the detailed error rate curve of the MLP model. The 20 repeated runs are shown in light color, and the averaged values are shown in deeper color. As shown, the proposed method also works for MLP, and the are consistently better than the baselines. As the same with the CNN model, the proposed method converges faster than the baseline. In the following section, we will show that the learned label embedding is not only reasonable, but also useful for applications. For example, the learned label embedding can be directly used as finegrained true label distribution to train a new model on the same dataset. For this purpose, the new model's objective function contains two parts, i.e., the original one-hot label based cross entropy objective, together with a label embedding based cross entropy objective. We call this model PreTrained Label Embedding Network. The Label Embedding Network means that the network uses label embedding to improve the training of the network, and the difference of the pre-trained label embedding network from the one presented in Section 2.1 is that in the pre-trained label embedding network, the label embedding is pre-trained and fixed, thus eliminating the need for learning the embedding, while in the label embedding network, the label embedding is learned during the training. In implementation, there are two main differences. First, the label embedding E is fixed and requires no learning. Second, the sub-network o 2, which learns the label embedding, is removed -because there is no need to learn the label embedding again. Thus, the pre-trained label embedding network has the loss function as follows:Loss(x, y; θ) = H(y, z 1) + H(e, z 1)The pre-trained label embedding network is illustrated in Figure 5. Figure 6 shows the of the pre-trained label embedding network, whose label embedding is learned by a normal label embedding network. As we can see, pre-trained label embedding network can achieve much better than the baseline, with faster convergence. It shows that the learned label embedding is effective in improving the performance of the same model and the label embedding indeed contains generalization information, which provides a more refined supervised signal to the model. In this way, the learned label embeddings can be saved and be reused to improve the training of different models on the task, and there is no need to learn the label embedding again. For the CIFAR-100 task, the error rate is typically from 38% to 25% BID4 BID17 BID26 BID27 BID8 BID24 BID3 BID5 BID21. BID4 For CIFAR-10 task, the error rate is typically from 15% to 7% BID28 BID17 BID26 BID24 BID15 BID27 BID8 BID3 BID4. Further improvement can be achieved by finetuning the model and the optimization method BID30 For MNIST task, plain convolutional networks typically achieve error rates ranging widely from more than 1.1% to around 0.4% BID4 BID25 BID27. Data augmentation and other more complicated models can further improve the performance of the models BID28 BID2 BID5, which we believe also work for our method. Srivastava et al. FORMULA0 achieves 0.57% error rate by using Highway Network. 
BID21 achieves 0.48% error rate by using LSUV initialization for FitNets. Our CNN model achieves the averaged error rate of 0.55%. If considering a good run, our model achieves 0.42%. The prior studies on label representation in deep learning are limited. Existing label representation methods are mostly on traditional methods out of deep learning frameworks. Those label representation methods also adopt the name of label embedding. However, the meaning is different from that in the sense of deep learning. Those label representation methods intend to obtain a representation function for labels. The label representation vector can be data independent or learned from existing information, including training data BID29, auxiliary annotations BID0, class hierarchies BID1, or textual descriptions BID20. For example, in BID10, the label embedding is fixed and is set independently from the data by random projections, and several regressors are used to learn to predict each of the elements of the true label's embedding, which is then reconstructed to the regular one-hot label representation for classification. Another example is the Canonical Correlation Analysis (CCA), which seeks vector a and vector b for random variables X and Y, such that the correlation of the variables a X and b Y is maximized, and then b Y can be regarded as label embeddings BID7 ).There are several major differences between those methods and our proposed method. First, most of those methods are not easy to adapt to deep learning architectures. As previously introduced, those methods come with a totally different architecture and their own learning methods, which are not easy to extend to general-purpose models like neural networks. Instead, in the proposed method, label embedding is automatically learned from the data by back propagation. Second, the label representation in those methods is not adapting during the training. In BID10, the label embedding is fixed and randomly initialized, thus revealing none of the semantics between the labels. The CCA method is also not adaptively learned from the training data. In all, their learned label representation lacks interaction with other model parameters, while label embeddings obtained from our proposed method both reveal the semantics of the labels and interact actively with the other parts of the model by back propagation. There have also been prior studies on so-called "soft labels". The soft label methods are typically for binary classification BID22, where the human annotators not only assign a label for an example, but also give information on how confident they are regarding the annotation. The side information can be used in the learning procedure to alleviate the noise from the data and produce better . The main difference from our method is that the soft label methods require additional annotation information (e.g., the confidence information of the annotated labels) of the training data, while our method does not need additional annotation information, and the "soft" probability is learned during the training in a simple but effective manner. Moreover, the proposed method is not restricted to binary classification. There have also been prior studies on model distillation in deep learning that uses label representation to better compress a big model into a smaller one. 
In deep learning, it's common sense that due to the non-convex property of the neural network functions, different initialization, different data order, and different optimization methods would cause varied of the same model. Model distillation BID9 ) is a novel method to combine the different instances of the same model into a single one. In the training of the single model, its target distribution is a combination of the output distributions of the previously trained models. Our method is substantially different compared with the model distillation method. The motivations and designed architectures are both very different. The model distillation method adopts a pipeline system, which needs to first train a large model or many different instances of models, and then use the label representation of the baseline models to provide better supervisory signals to re-train a smaller model. This pipeline setting is very different from our single-pass process setting. Our method also enables the ability to learn compressed label embedding for an extremely large number of labels. Moreover, for a given label, the label representation in their method is different from one example to another. That is, they do not provide a universal label representation for a label, which is very different compared with our setting.
Learning Label Representation for Deep Networks
712
scitldr
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrary high compression ratios on general non-convex functions, and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices, connected by a peer-to-peer network and (ii) in a datacenter. Distributed machine learning-i.e. the training of machine learning models using distributed optimization algorithms-has enabled many recent successful applications in research and industry. Such methods offer two of the key success factors: 1) computational scalability by leveraging the simultaneous computational power of many devices, and 2) data-locality, the ability to perform joint training while keeping each part of the training data local to each participating device. Recent theoretical indicate that decentralized schemes can be as efficient as the centralized approaches, at least when considering convergence of training loss vs. iterations (; ; ; ;). Gradient compression techniques have been proposed for the standard distributed training case (; ; b; ;), to reduce the amount of data that has to be sent over each communication link in the network. For decentralized training of deep neural networks, introduce two algorithms (DCD, ECD) which allow for communication compression. However, both these algorithms are restrictive with respect to the used compression operators, only allowing for unbiased compressors and-more significantlyso far not supporting arbitrarily high compression ratios. We here study CHOCO-SGD-recently introduced for convex problems only -which overcomes these constraints. For the evaluation of our algorithm we in particular focus on the generalization performance (on the test-set) on standard machine learning benchmarks, hereby departing from previous work such as e.g. (; ; b;) that mostly considered training performance (on the train-set). We study two different scenarios: firstly, (i) training on a challenging peer-to-peer setting, where the training data is distributed over the training devices (and not allowed to move), similar to the federated learning setting . We are again able to show speed-ups for CHOCO-SGD over the decentralized baseline with much less communication overhead. Secondly, (ii) training in a datacenter setting, where decentralized communication patterns allow better scalability than centralized approaches. For this setting we show that communication efficient CHOCO-SGD can improve time-to-accuracy on large tasks, such as e.g. ImageNet training. However, when investigating the scaling of decentralized algorithms to larger number of nodes we observe that (all) decentralized schemes encounter difficulties and often do not reach the same (test and train) performance as centralized schemes. As these findings do point out some deficiencies of current decentralized training schemes (and are not particular to our scheme) we think that reporting these is a helpful contribution to the community to spur further research on decentralized training schemes that scale to large number of peers. 
• On the theory side, we are the first to show that CHOCO-SGD converges at rate O 1 / √ nT + n /(ρ 4 δ 2 T) on non-convex smooth functions, where n denotes the number of nodes, T the number of iterations, ρ the spectral gap of the mixing matrix and δ the compression ratio. The main term, O 1 / √ nT, matches with the centralized baselines with exact communication and shows a linear speedup in the number of workers n. Both ρ and δ only affect the asymptotically smaller second term. • On the practical side, we present a version of CHOCO-SGD with momentum and analyze its practical performance on two relevant scenarios: • for on-device training over a realistic peer-to-peer social network, where lowering the bandwidth requirements of joint training is especially impactful • in a datacenter setting for computational scalability of training deep learning models for resource efficiency and improved time-to-accuracy • Lastly, we systematically investigate performance of the decentralized schemes when scaling to larger number of nodes and we point out some (shared) difficulties encountered by current decentralized learning approaches. For the training in communication restricted settings a variety of methods have been proposed. For instance, decentralized schemes (; Nedić et al., 2018;), gradient compression (; ; ; ; b; ; ; b; ; ;, asynchronous methods or performing multiple local SGD steps before averaging; a). This especially covers learning over decentralized data, as extensively studied in the federated Learning literature for the centralized algorithms . In this paper we advocate for combining decentralized SGD schemes with gradient compression. Decentralized SGD. We in particular focus on approaches based on gossip averaging (; ;) whose convergence rate typically depends on the spectral gap ρ ≥ 0 of the mixing matrix . combine SGD with gossip averaging and show convergence at the rate O 1 / √ nT + n /(ρ 2 T). The leading term in the rate, O 1 / √ nT, is consistent with the convergence of the centralized mini-batch SGD and the spectral gap only affects the asymptotically smaller terms. Similar have been observed very recently for related schemes (; ;). Quantization. Communication compression with quantization has been popularized in the deep learning community by the reported successes in . Theoretical guarantees were first established for schemes with unbiased compression (; ;) but soon extended to biased compression as well. Schemes with error correction work often best in practice and give the best theoretical gurantees (b; ; ; . Recently, also proximal updates and variance reduction have been studied in combination with quantized updates Horváth et al., 2019). Decentralized Optimization with Quantization. It has been observed that gossip averaging can diverge (or not converge to the correct solution) in the presence of quantization noise (; ; Nedić et al., 2008; ; b;). propose an algorithm that can still converge, though at a slower rate than the exact scheme. Another line of work proposed adaptive schemes (with increasing compression accuracy) that converge at the expense of higher communication cost (a; ;). For deep learning applications, proposed the DCD and ECD algorithms that converge at the same rate as the centralized baseline though only for constant compression ratio. The CHOCO-SGD algorithm that we consider in this work can deal with arbitrary high compression, and has been introduced in but only been analyzed for convex functions. 
For non-convex functions we show a rate of, where δ > 0 measures the compression quality. Simultaneous work of Tang et al. (2019a) introduced DeepSqueeze, an alternative method which also converges with arbitrary compression ratio. In our experiments, under the same amount of tuning, CHOCO-SGD achieves higher test accuracy. Algorithm 1 CHOCO-SGD input:, E) and mixing matrix W, initializex 1: for t in 0... T − 1 do {in parallel for all workers i ∈ [n]} 2: for neighbors j: {i, j} ∈ E (including {i} ∈ E) do 5: end for 8: 9: In this section we formally introduce the decentralized optimization problem, compression operators, and the gossip-based stochastic optimization algorithm CHOCO-SGD from . Distributed Setup. We consider optimization problems distributed across n nodes of the form where D 1,... D n are local distributions for sampling data which can be different on every node, Communication. Every device is only allowed to communicate with its local neighbours defined by the network topology, given as a weighted graph G = ([n], E), with edges E representing the communication links along which messages (e.g. model updates) can be exchanged. We assign a positive weight w ij to every edge (w ij = 0 for disconnected nodes {i, j} / ∈ E). Assumption 1 (Mixing matrix). We assume that In our experiments we set the weights based on the local node degrees: w ij = max{deg(i), deg(j)} −1 for {i, j} ∈ E. This will not only guarantee ρ > 0 but these weights can easily be computed in a local fashion on each node . Compression. We aim to only transmit compressed (e.g. quantized or sparsified) messages. We formalized this through the notion of compression operators that was e.g. also used in . for a parameter δ > 0. Here E Q denotes the expectation over the internal randomness of operator Q. In contrast to the quantization operators used in e.g. (; Horváth et al., 2019), compression operators defined as in are not required to be unbiased and therefore supports a larger class of compression operators. Some examples can be found in and we further discuss specific compression schemes in Section 5. Algorithm. CHOCO-SGD is summarized in Algorithm 1. Every worker i stores its own private variable x i ∈ R d that is updated by a stochastic gradient step in part 2 and a modified gossip averaging step on line 2. This step is a key element of the algorithm as it preserves the averages of the iterates even in presence of quantization noise (the compression errors are not discarded, but aggregated in the local variables x i, see also ). The nodes communicate with their neighbors in part 1 and update the variablesx j ∈ R d for all their neighbors {i, j} ∈ E only using compressed updates. Thesex i are available to all the neighbours of the node i and represent the'publicly available' copies of the private x i, in general x i =x i, due to the communication restrictions. From an implementation aspect, it is worth highlighting that the communication part 1 and the gradient computation part 2 can both be executed in parallel because they are independent. Moreover, each node only needs to store 3 vectors at most, independent of the number of neighbors (this might not be obvious from the notation used here for additinal clarity, for further details c.f. ). We further propose a momentum-version of CHOCO-SGD in Algorithm 2 (see also Section D for further details). As the first main contribution, we here extend the analysis of CHOCO-SGD to non-convex problems. 
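As a concrete illustration of the communication setup above, the following sketch builds the degree-based mixing matrix w_ij = 1/max{deg(i), deg(j)} for a small ring graph and checks that it is symmetric and doubly stochastic with positive spectral gap, as required by Assumption 1 (Python/NumPy; the 5-node ring is an arbitrary illustrative choice, deg counts only the true neighbours, and "spectral gap" below uses one common definition since the full statement of Assumption 1 is abridged here):

import numpy as np

n = 5                                              # small odd ring, chosen only for illustration
edges = [(i, (i + 1) % n) for i in range(n)]
deg = [2] * n                                      # each node on a ring has two neighbours

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / max(deg[i], deg[j])  # the degree-based weights described above
np.fill_diagonal(W, 1.0 - W.sum(axis=1))           # self-weight so that every row sums to one

assert np.allclose(W, W.T)                         # symmetric
assert np.allclose(W.sum(axis=1), 1.0)             # doubly stochastic
lam = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
rho = 1.0 - lam[1]                                  # spectral gap entering the rates above
assert rho > 0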
For this we make the following technical assumptions: and the variance of the stochastic gradients is bounded on each worker: where 2, the averaged iterates i of Algorithm 1 satisfy: where c:= ρ 2 δ 82 denotes the convergence rate of the underlying consensus averaging scheme of . This shows that CHOCO-SGD converges asymptotically as The first term shows a linear speed-up compared to SGD on a single node, while compression and graph topology affect only the higher order second term. For slightly more general statements than Theorem 4.1 (with improved constants) as well as for the proofs and convergence of the individual iterates x i we refer to Appendix A. In this section we experimentally compare CHOCO-SGD to the relevant baselines for a selection of commonly used compression operators. For the experiments we further leverage momentum in all implemented algorithms. The newly developed momentum version of CHOCO-SGD is given as Algorithm 2. Algorithm 2 CHOCO-SGD with Momentum input: Same as for Algorithm 1, additionally: weight decay factor λ, momentum factor β, local momentum memory v i:= 0 ∀i ∈ [n] Lines 1-8 in Algorithm 1 are left unmodified Line 9 in Algorithm 1 is replaced with the following two lines local momentum with weight decay 10: Setup. In order to match the setting in for our first set of experiments, we use a ring topology with n = 8 nodes and train the ResNet20 architecture on the Cifar10 dataset (50K/10K training/test samples) . We randomly split the training data between workers and shuffle it after every epoch, following standard procedure as e.g. in . We implement DCD and ECD with momentum , DeepSqueeze with momentum (a), CHOCO-SGD with momentum (Algorithm 2) and standard (all-reduce) mini-batch SGD with momentum and without compression . The momentum factor is set to 0.9 without dampening. For all algorithms we fine-tune the initial learning rate and gradually warm it up from a relative small value (0.1) for the first 5 epochs. The learning rate is decayed by 10 twice, at 150 and 225 epochs, and stop training at 300 epochs. For CHOCO-SGD and DeepSqueeze the consensus learning rate γ is also tuned. The detailed hyper-parameter tuning procedure refers to Appendix F. Every compression scheme is applied to every layer of ResNet20 separately. We evaluate the top-1 test accuracy on every node separately over the whole dataset and report the average performance over all nodes. Compression Schemes. We implement two unbiased compression schemes: (i) gsgd b quantization that randomly rounds the weights to b-bit representations , and (ii) random a sparsification, which preserves a randomly chosen a fraction of the weights and sets the other ones to zero . Further two biased compression schemes: (iii) top a, which selects the a fraction of weights with the largest magnitude and sets the other ones to zero , and (iv) sign compression, which compresses each weight to its sign scaled by the norm of the full vector (; . We refer to Appendix C for exact definitions of the schemes. DCD and ECD have been analyzed only for unbiased quantization schemes, thus the combination with the two biased schemes is not supported by theory. In converse, CHOCO-SGD and DeepSqueeze has been studied only for biased schemes according to Definition 2. However, both unbiased compression schemes can be scaled down in order to meet the specification (cf. discussions in ) and we adopt this for the experiments. Results. The are summarized in Table 1. 
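For reference, minimal versions of the four compression operators used in these experiments (Python/NumPy sketches written from the verbal descriptions above; the exact definitions, including the rescaling of the unbiased schemes to meet Definition 2, are in the paper's Appendix C):

import numpy as np

def sign_compress(x):
    # sign of each entry, scaled by a norm of the full vector
    # (the l1-norm divided by the dimension is one common choice; the exact scaling is in Appendix C)
    return np.sign(x) * np.linalg.norm(x, 1) / x.size

def top_a(x, a):
    # keep the a-fraction of entries with largest magnitude, zero out the rest
    k = max(1, int(a * x.size))
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def random_a(x, a, rng):
    # keep a random a-fraction of the entries (without the 1/a rescaling of the unbiased variant)
    k = max(1, int(a * x.size))
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx]
    return out

def gsgd_b(x, b, rng):
    # stochastic b-bit quantization of |x|/||x|| onto uniform levels, keeping sign and norm
    # (the level count 2**(b-1) is an assumption made for this sketch)
    norm = np.linalg.norm(x)
    if norm == 0:
        return x
    levels = 2 ** (b - 1)
    scaled = np.abs(x) / norm * levels
    lower = np.floor(scaled)
    quantized = lower + (rng.random(x.shape) < (scaled - lower))
    return np.sign(x) * norm * quantized / levels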
For unbiased compression schemes, ECD and DCD only achieve good performance when the compression ratio is small, and sometimes even diverge when the compression ratio is high. This is consistent 1 with the theoretical and experimental in . We further observe that the performance of DCD with the biased top a sparsification is much better than with the unbiased random a counterpart, though this operator is not yet supported by theory. CHOCO-SGD can generalize reasonably well in all scenarios (at most 1.65% accuracy drop) for fixed training budget. The sign compression achieves state-of-the-art accuracy and requires approximately 32× less bits per weight than the full precision baseline. We now shift our focus to challenging real-world scenarios which are intrinsically decentralized, i.e. each part of the training data remains local to each device, and thus centralized methods either fail or are inefficient to implement. Typical scenarios comprise e.g. sensor networks, or mobile devices or hospitals which jointly train a machine learning model. Common to these applications is that i) each device has only access to locally stored or acquired data, ii) communication bandwidth is limited (either physically, or artificially for e.g. metered connections), iii) the global network topology is typically unknown to a single device, and iv) the number of connected devices is typically large. Additionally, this fully decentralized setting is also strongly motivated by privacy aspects, enabling to keep the training data private on each device at all times. Modeling. To simulate this scenario, we permanently split the training data between the nodes, i.e. the data is never shuffled between workers during training, and every node has distinct part of the dataset. To the best of our knowledge, no prior works studied this scenario for decentralized Figure 1: Scaling of CHOCO-SGD with sign compression to large number of devices on Cifar10 dataset. Left: best testing accuracy of the algorithms reached after 300 epochs. Right: best testing accuracy reached after communicating 1000 MB. deep learning. For the centralized approach, gathering methods such as all-reduce are not efficiently implementable in this setting, hence we compare to the centralized baseline where all nodes route their updates to a central coordinator for aggregation. For the comparison we consider CHOCO-SGD with sign compression (this combination achieved the compromise between accuracy and compression level in Table 1)), decentralized SGD without compression, and centralized SGD without compression. Scaling to Large Number of Nodes. To study the scaling properties of CHOCO-SGD, we train on 4, 16, 36 and 64 number of nodes. We compare decentralized algorithms on two different topologies: ring as the worst possible topology, and on the torus with much larger spectral gap. Their parameters are listed in the table 2. . For the simplicity, we keep the learning rate constant and separately tune it for all methods. We tune consensus learning rate for CHOCO-SGD. The are summarized in Figure 1. First we compare the testing accuracy reached after 300 epochs (Fig. 1, Left). CentralizedSGD has a good performance for all the considered number of nodes. CHOCO-SGD slows down due to the influence of graph topology (Decentralized curve), which is consistent with the spectral gaps order (see Tab. 2), and also influenced by the communication compression (CHOCO curve), which slows down training uniformly for both topologies. 
We observed that the train performance is similar to the test on Fig. 1, therefore the performance degradation is explained by the slower convergence (Theorem 4.1) and is not a generalization issue. Increasing the number of epochs improves the performance of the decentralized schemes. However, even using 10 times more epochs, we were not able to perfectly close the gap between centralized and decentralized algorithms for both train and test performance. In the real decentralized scenario, the interest is not to minimize the epochs number, but the amount of communication to reduce the cost of the user's mobile data. We therefore fix the number of transmitted bits to 1000 MB and compare the best testing accuracy reached (Fig. 1, Right). CHOCO-SGD performs the best while having slight degradation due to increasing number of nodes. It is beneficial to use torus topology when the number of nodes is large because it has good mixing properties, for small networks there is not much difference between these two topologies-the benefit of large spectral gap is canceled by the increased communication due larger node degree for torus topology. Both Decentralized and Centralized SGD requires significantly larger number of bits to reach reasonable accuracy. Experiments on a Real Social Network Graph. We simulate training models on user devices (e.g. mobile phones), connected by a real social network. We chosen Davis Southern women social network with 32 nodes. We train ResNet20 (0.27 million parameters) model on the Cifar10 dataset (50K/10K training/test samples) for image classification and a three-layer LSTM architecture (28.95 million parameters) for a language modeling task on WikiText-2 (600 training and 60 validation articles with a total of 2 088 628 and 217 646 tokens respectively) . We use exponentially decaying learning rate schedule. For more detailed experimental setup we refer to Appendix F. The are summarized in Figures 2-3 and in Table 3. For the image classification task, when comparing the training accuracy reached after the same number of epochs, we observe that the decentralized algorithm performs best, follows by the centralized and lastly the quantized decentralized. However, the test accuracy is highest for the centralized scheme. When comparing the test accuracy reached for the same transmitted data 2, CHOCO-SGD significantly outperforms the exact decentralized scheme, with the centralized performing worst. We note a slight accuracy drop, i.e. after the same number of epochs (but much less transmitted data), CHOCO-SGD does not reach the same level of test accuracy than the baselines. For the language modeling task, both decentralized schemes suffer a drop in the training loss when the evaluation reaching the epoch budget; while our CHOCO-SGD outperforms the centralized SGD in test perplexity. When considering perplexity for a fixed data volume (middle and right subfigure of Figure 3), CHOCO-SGD performs best, followed by the exact decentralized and centralized algorithms. Figure 4: Large-scale training: Resnet-50 on ImageNet-1k in the datacenter setting. The topology has 8 nodes (each accesses 4 GPUs). We use "Sign+Norm" as the quantization scheme of CHOCO-SGD. The benefits of CHOCO-SGD can be further pronounced when scaling to more nodes. Decentralized optimization methods offer a way to address scaling issues even for well connected devices, such as e.g. in datacenter with fast InfiniBand (100Gbps) or Ethernet (10Gbps) connections. 
Previous works describe scenarios in which decentralized schemes can outperform centralized ones, and recently impressive speedups were presented for training on 256 GPUs, in the setting where all nodes can access all training data. The main differences between their algorithm and CHOCO-SGD are the asynchronous gossip updates, the time-varying communication topology and, most importantly, the exact (uncompressed) communication, making their setup not directly comparable to ours. We note that these properties of asynchronous communication and changing topology for faster mixing are orthogonal to our contribution, and the two could promisingly be combined. Setup. We train ResNet-50 on ImageNet-1k (1.28M/50K training/validation samples). We perform our experiments on 8 machines (n1-standard-32 from Google Cloud), where each machine has 4 Tesla P100 GPUs. Within one machine, communication is fast and we perform all-reduce with the full model. Between different machines we use decentralized compressed communication (sign-CHOCO-SGD) on a ring topology. The mini-batch size on each GPU is 128, and we follow the general SGD training scheme in and directly use all their hyperparameters for CHOCO-SGD. Due to limited computational resources, we did not heavily tune the consensus stepsize for CHOCO-SGD. Results. We depict the training loss and top-1 test accuracy in terms of epochs and time in Figure 4. CHOCO-SGD benefits from its decentralized and parallel structure and takes less time than all-reduce to perform the same number of epochs, while incurring only a slight 1.5% accuracy loss. (All-reduce with full-precision gradients achieved a test accuracy of 76.37%, vs. 75.15% for CHOCO-SGD.) In terms of time per epoch, our speedup does not match that of , as the hardware used is very different. Their scheme is orthogonal to our approach and could be integrated for better training efficiency. Nevertheless, we still demonstrate a 20% time-wise gain over the common all-reduce baseline on our commodity hardware cluster. We propose the use of CHOCO-SGD (and its momentum version) for enabling decentralized deep learning training in bandwidth-constrained environments. We provide theoretical convergence guarantees for the non-convex setting and show that the algorithm enjoys a linear speedup in the number of nodes. We empirically study the performance of the algorithm in a variety of settings on image classification (ImageNet-1k, Cifar10) and on a language modeling task (WikiText-2). Whilst previous work successfully demonstrated that decentralized methods can be a competitive alternative to centralized training schemes when no communication constraints are present, our main contribution is to enable training in strongly communication-restricted environments while respecting the challenging constraint of locality of the training data. We theoretically and practically demonstrate the performance of decentralized schemes under arbitrarily high communication compression and under data locality, and thus significantly expand the reach of potential applications of fully decentralized deep learning. In this section we present the proof of Theorem 4.1. For this, we will first derive a slightly more general statement: in Theorem A.2 we analyze CHOCO-SGD for arbitrary stepsizes η, and then derive Theorem 4.1 as a special case. The structure of the proof is as follows.
That is, we first show that Algorithm 1 is a special case of a more general class of algorithms (given in Algorithm 3): Observe that Algorithm 1 consists of two main components: 2 the stochastic gradient update, performed locally on each node, and 1 the (quantized) averaging among the nodes. We can show convergence of all algorithms of this type-i.e. stochastic gradient updates 2 followed by an arbitrary averaging step 1 -as long as the averaging scheme exhibits linear convergence. For the specific averaging used in CHOCO-SGD, linear convergence has been shown in and we will use their estimate of the convergence rate of the averaging scheme. For convenience, we use the following matrix notation in this subsection. Decentralized SGD with arbitrary averaging is given in Algorithm 3. Algorithm 3 DECENTRALIZED SGD WITH ARBITRARY AVERAGING SCHEME input: blackbox averaging/gossip 4: end for 2 {1 { Assumption 3. For an averaging scheme h : Assume that h preserves the average of iterates: and that it converges with linear rate for a parameter 0 < c ≤ 1 and Laypunov function Ψ(X,, where E h denotes the expectation over internal randomness of averaging scheme h. Example: Exact Averaging. Setting X + = XW and Y + = X + gives an exact consensus averaging algorithm with mixing matrix W . It converges at the rate c = ρ, where ρ is an eigengap of mixing matrix W, defined in Assumption 1. Substituting it into the Algorithm 3 we recover D-PSGD algorithm, analyzed in. CHOCO-SGD. To recover CHOCO-SGD, we need to choose CHOCO-GOSSIP as consensus averaging scheme, which is defined as 82 in the more general below. It is important to note that for Algorithm 1 given in the main text, the order of the communication part 1 and the gradient computation part 2 is exchanged. We did this to better illustrate that both these parts are independent and that they can be executed in parallel. The effect of this change can be captured by changing the initial values but does not affect the convergence rate. A.2 PROOF OF THEOREM 4.1 Lemma A.1. Under Assumptions 1-2 the iterates of the Algorithm 3 with constant stepsize η satisfy Proof of Lemma A.1. These are exactly the same calculations as the first 9 lines in the proof of Lemma 21 from. We got a recursion Verifying that r t ≤ η 2 4A c 2 satisfy recursion completes the proof as E X Theorem A.2. Under Assumptions 1-3 with constant stepsize η = n T +1 for T ≥ 64nL 2, the averaged iterates i of Algorithm 3 satisfy: where c denotes convergence rate of underlying averaging scheme. The first term shows a linear speed up compared to SGD on one node, whereas the underlying averaging scheme affects only the second-order term. Substituting the convergence rate for exact averaging with W gives the rate O(1 / √ nT + n /(T ρ 2)), which recovers the rate of D-PSGD . CHOCO-SGD with the underlying CHOCO-GOSSIP averaging scheme converges at the rate 2 )). The dependence on ρ (eigengap of the mixing matrix W) is worse than in the exact case. This might either just be an artifact of our proof technique or a consequence of supporting arbitrary high compression. Proof of Theorem A.2. By L-smoothness To estimate the second term, we add and subtract ∇f (For the last term, we add and subtract ∇f (x (t) ) and the sum of ∇f j (x Combining this together and using L-smoothness to estimate f ( Using Lemma A.1 to bound the third term and using that η ≤ Rearranging terms and averaging over t Substituting η = n T +1 and using that T ≥ 64nL 2 we get the statement of the theorem. 
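To make the structure of Algorithm 3 above concrete, here is a schematic of decentralized SGD with a blackbox averaging scheme. The function names and shapes are ours, and the gradient and averaging routines are placeholders:

```python
import numpy as np

def decentralized_sgd(grad, x0, averaging_step, T, eta, n):
    # Sketch of "decentralized SGD with an arbitrary averaging scheme":
    # every node takes a local stochastic gradient step (part 2 in the text),
    # then a blackbox averaging routine mixes the iterates (part 1).
    # grad(i, x) returns a stochastic gradient of f_i at x;
    # averaging_step(X, Y) returns updated (X, Y) and must preserve the
    # average of the rows of X.
    d = x0.size
    X = np.tile(x0, (n, 1))          # local iterates x_i, one row per node
    Y = np.zeros((n, d))             # auxiliary variables of the averaging scheme
    for t in range(T):
        for i in range(n):           # local SGD step on each node
            X[i] -= eta * grad(i, X[i])
        X, Y = averaging_step(X, Y)  # blackbox gossip/averaging
    return X.mean(axis=0)
```

For example, exact averaging with a symmetric mixing matrix W, `averaging_step = lambda X, Y: (W @ X, W @ X)`, recovers D-PSGD, while plugging in the CHOCO-GOSSIP update recovers CHOCO-SGD, as stated above.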
The theorem gives guarantees for the averaged vector of parameters x, however in a decentralized setting it is very expensive and sometimes impossible to average all the parameters distributed across several machines, especially when the number of machines and the model size is large. We can get similar guarantees on the individual iterates x i as e.g. in . We summarize these briefly below. Corollary A.3 (Convergence of local weights). Under the same setting as in Theorem 4.1, Proof of Corollary A.3. where we used L-smoothness of f. Using Theorem 4.1 and Lemma A.1 The previous holds only for T larger than 64nL 2. This is not necessary and can be relaxed. Theorem A.4. Under Assumptions 2, 1 with constant stepsize η and the consensus stepsize γ:= ρ 2 δ 16ρ+ρ 2 +4β 2 +2ρβ 2 −8ρδ where β = I − W 2 ∈ Algorithm 3 converges at the speed where i and c is convergence rate of underlying averaging scheme. In contrast to Theorem A.2, this rate holds for any T, however the first term is worse than in Theorem A.2 because σ 2 is usually much smaller than G 2. Proof of Theorem A.4. By L-smoothness Using Lemma A.1 we can bound the last term Rearranging terms, taking α = 1 and averaging over t we are getting statement of the theorem Corollary A.5 (Convergence of local weights x (t) i ). Under Assumtions 2, 1, algorithm 1 with η = n T +1 converges at the speed c 2 (T + 1). Proof. where we used L-smoothness of f. This holds for ∀α > 0. Using Theorem A.4 and Lemma A.1 and setting α = 1 where we set η = n T +1. Lemma B.1. For arbitrary set of n vectors Lemma B.3. For given two vectors a, This inequality also holds for the sum of two matrices A, B ∈ R n×d in Frobenius norm. Algorithm 4 CHOCO-SGD as Error Feedback, E) and mixing matrix W, initializex 1: for t in 0... T − 1 do {in parallel for all workers i ∈ [n]} 2: for neighbors j: {i, j} ∈ E (including {i} ∈ E) do end for 10: 11: stochastic gradient update 12: end for D CHOCO-SGD WITH MOMENTUM Algorithm 2 demonstrates how to combine CHOCO-SGD with weight decay and momentum. Nesterov momentum can be analogously adapted for our decentralized setting. To better understand how does CHOCO-SGD work, we can interpret it as an error feedback algorithm (; . We can equivalently rewrite CHOCO-SGD (Algorithm 1) as Algorithm 4. The common feature of error feedback algorithms is that quantization errors are saved into the internal memory, which is added to the compressed value at the next iteration. In CHOCO-SGD the value we want to transmit is the difference x (t) i − x (t−1) i, which represents the evolution of local variable x i at step t. Before compressing this value on line 4, the internal memory is added on line 3 to correct for the errors. Then, on line 5 internal memory is updated. Note that m i in the old notation. We precise the procedure of model training as well as the hyper-parameter tuning in this section. Social Network Setup. For the comparison we consider CHOCO-SGD with sign compression (this combination achieved the compromise between accuracy and compression level in Table 1)), decentralized SGD without compression, and centralized SGD without compression. We train two models, firstly ResNet20 (0.27 million parameters) for image classification on the Cifar10 dataset (50K/10K training/test samples) and secondly, a three-layer LSTM architecture (28.95 million parameters) for a language modeling task on WikiText-2 (600 training and 60 validation articles with a total of 2 088 628 and 217 646 tokens respectively) . 
For the language modeling task, we borrowed and adapted the general experimental setup of , where we use a three-layer LSTM with a hidden dimension of 650. The loss is averaged over all examples and timesteps. The BPTT length is set to 30. We fine-tune the value of gradient clipping (0.4), and dropout (0.4) is only applied to the output of the LSTM. We train both ResNet20 and the LSTM for 300 epochs, unless specified otherwise. The per-node mini-batch size is 32 for both datasets. The learning rate of CHOCO-SGD follows a linear scaling rule, which is proportional to the node degree. Momentum (with factor 0.9) is only applied for the ResNet20 training. Social Network and Datacenter Details. For all algorithms, we gradually warm up the learning rate from a relatively small value (0.1) to the fine-tuned initial learning rate over the first 5 training epochs. During the training procedure, the tuned initial learning rate is decayed by a factor of 10 upon reaching 50% and 75% of the total training epochs. The learning rate is tuned by finding the optimal learning rate per sample η̄, where the learning rate (used locally) is determined by a linear scaling rule (i.e., node degree × η̄ × per-node mini-batch size). The optimal η̄ is searched over a pre-defined grid, and we ensure that the best performance is attained in the interior of the grid. For example, if the best performance was ever at one of the extremes of the grid, we would try new grid points. The same search logic applies to the consensus stepsize. Table 4 lists the fine-tuned hyperparameters of CHOCO-SGD for training ResNet-20 on Cifar10, while Table 6 reports the fine-tuned hyperparameters of our baselines. Table 5 lists the fine-tuned hyperparameters of CHOCO-SGD for training ResNet-20/LSTM on the social network topology.
Table 5: Tuned hyperparameters of CHOCO-SGD, corresponding to the social network topology with 32 nodes in Table 3. We randomly split the training data between the nodes and keep this partition fixed during the entire training (no shuffling). The per-node mini-batch size is 32 and the maximum degree of a node is 14.
We additionally plot the learning curves for the social network topology in Figure 6 and the following figure. The topology has 32 nodes and we assume each node can only access a disjoint subset of the whole dataset. The local mini-batch size is 32. We additionally provide plots of the training top-1 and top-5 accuracy and the test top-5 accuracy for the datacenter experiment in Figure 8. In Figure 9 we additionally depict the test accuracy of the averaged model (left) and the average distance of the local models from the averaged model (right). Towards the end of the optimization the local models reach consensus (Figure 9, right), and their individual test performance is the same as that of the averaged model. Interestingly, before the stepsize decrease at epoch 225, the local models in general diverge from the averaged model; the distance decreases only when the stepsize decreases. The same behavior was also reported in .
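The learning-rate rule described above (linear scaling by node degree, 5-epoch warmup from 0.1, and decay by 10× at 50% and 75% of training) can be sketched as follows. A linear warmup is assumed and all names are ours:

```python
def local_learning_rate(eta_per_sample, node_degree, batch_size,
                        epoch, total_epochs, warmup_epochs=5, warmup_start=0.1):
    # Base rate from the linear scaling rule: degree * eta_per_sample * batch size.
    base = node_degree * eta_per_sample * batch_size
    if epoch < warmup_epochs:                       # linear warmup (assumed form)
        return warmup_start + (base - warmup_start) * (epoch + 1) / warmup_epochs
    lr = base
    if epoch >= 0.5 * total_epochs:                 # first decay at 50% of training
        lr /= 10.0
    if epoch >= 0.75 * total_epochs:                # second decay at 75% of training
        lr /= 10.0
    return lr
```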
We propose Choco-SGD---decentralized SGD with compressed communication---for non-convex objectives and show its strong performance in various deep learning applications (on-device learning, datacenter case).
We show that Entropy-SGD , when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance. Optimization is central to much of machine learning, but generalization is the ultimate goal. Despite this, the generalization properties of many optimization-based learning algorithms are poorly understood. The standard example is stochastic gradient descent (SGD), one of the workhorses of deep learning, which has good generalization performance in many settings, even under overparametrization BID36, but rapidly overfits in others BID44. Can we develop high performance learning algorithms with provably strong generalization guarantees? Or is their a limit?In this work, we study an optimization algorithm called Entropy-SGD BID10, which was designed to outperform SGD in terms of generalization error when optimizing an empirical risk. Entropy-SGD minimizes an objective f: R p → R indirectly by performing (approximate) stochastic gradient ascent on the so-called local entropy DISPLAYFORM0 where C is a constant and N denotes a zero-mean isotropic multivariate normal distribution on R p.Our first contribution is connecting Entropy-SGD to in statistical learning theory, showing that maximizing the local entropy corresponds to minimizing a PAC-Bayes bound BID31 on the risk of the so-called Gibbs posterior. The distribution of w + ξ is the PAC-Bayesian "prior", and so optimizing the local entropy optimizes the bound's prior. This connection between local entropy and PAC-Bayes follows from a due to Catoni (2007, Lem. 1.1.3) in the case of bounded risk. (See Theorem 4.1.) In the special case where f is the empirical cross entropy, the local entropy is literally a Bayesian log marginal density. The connection between minimizing PACBayes bounds under log loss and maximizing log marginal densities is the subject of recent work by BID19. Similar connections have been made by BID45; Zhang (2006b); BID20; BID21.Despite the connection to PAC-Bayes, as well as theoretical by Chaudhari et al. suggesting that Entropy-SGD may be more stable than SGD, we demonstrate that Entropy-SGD (and its corresponding Gibbs posterior) can rapidly overfit, just like SGD. We identify two changes, motivated by theoretical analysis, that suffice to control generalization error, and thus prevent overfitting. The first change relates to the stability of optimizing the prior mean. 
The PAC-Bayes theorem requires that the prior be independent of the data, and so by optimizing the prior mean, Entropy-SGD invalidates the bound. Indeed, the bound does not hold empirically. While a PAC-Bayes prior may not be chosen based on the data, it can depend on the data distribution. This suggests that if the prior depends only weakly on the data, it may be possible to derive a valid bound. We formalize this intuition using differential privacy BID13 BID17. By modifying the cross entropy loss to be bounded and replacing SGD with stochastic gradient Langevin dynamics (SGLD; BID43, the data-dependent prior mean can be shown to be (ε, δ)-differentially private BID42 BID34. We refer to the SGLD variant as Entropy-SGLD. Using connecting statistical validity and differential privacy (b, Thm. 11), we show that an ε-differentially private prior mean yields a valid, though looser, generalization bound using the PAC-Bayes theorem. (See Theorem 5.4.)A gap remains between pure and approximate differential privacy. Under some technical conditions, in the limit as the number of iterations diverges, the distribution of SGLD's output is known to converge weakly to the corresponding stationary distribution, which is the well-known exponential mechanism in differential privacy (, Thm. 7). Weak convergence, however, falls short of implying that SGLD achieves pure ε-differential privacy. We proceed under the approximation that SGLD enjoys the same privacy as the exponential release mechanism, and apply our ε-differentially private PAC-Bayes bound. We find that the corresponding 95% confidence intervals are reasonably tight but still conservative in our experiments. While the validity of our bounds are subject to our approximation, the bounds give us a view as to the limitations of combining differential privacy with PAC-Bayes bounds: when the privacy of Entropy-SGLD is tuned to contribute no more than 2ε 2 × 100 ≈ 0.2% to the generalization error, the test error of the learned network is 3-8%, which is approximately 5-10 times higher than the state of the art, which for MNIST is between 0.2-1%, although the community has almost certainly overfit its networks/learning rates/loss functions/optimizers to MNIST. We return to these points in the discussion. The second change pertains to the stability of the stochastic gradient estimate made on each iteration of Entropy-SGD. This estimate is made using SGLD. (Hence Entropy-SGD is SGLD within SGD.) Chaudhari et al. make a subtle but critical modification to the noise term in SGLD update: the noise is divided by a factor that ranges from 10 3 to 10 4. (This factor was ostensibly tuned to produce good empirical .) Our analysis shows that, as a of this modification, the Lipschitz constant of the objective function is approximately 10 6 -10 8 times larger, and the that the Entropy-SGD objective is smoother than the original risk surface no longer stands. This change to the noise also negatively impacts the differential privacy of the prior mean. Working backwards from the desire to obtain tight generalization bounds, we are led to divide the SGLD noise by a factor of only 4 √ m, where m is the number of data points. (For MNIST, 4 √ m ≈ 16.) The ing bounds are nonvacuous and tighter than those recently published by BID18, although it must be emphasized that the bound presented here hold subject to the approximation concerning privacy of the prior mean, which is certainly violated but to an unknown degree. 
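As a concrete illustration of the inner SGLD loop and of the role played by the noise scale discussed above, here is a minimal sketch. The constants, the direction of the running average, and the exact parametrization of the local Gibbs distribution are ours; `noise_scale` stands in for the thermal-noise factor (Entropy-SGD uses values as small as 10^-3 to 10^-4, whereas the privacy-motivated choice divides the noise only by roughly the fourth root of m, about 1/16 for MNIST):

```python
import numpy as np

def sgld_local_mean(w, grad_risk, gamma, tau, noise_scale, L=20, eta=1e-4,
                    rng=np.random.default_rng(0)):
    # Run L steps of SGLD targeting the local Gibbs distribution centred at w,
    # whose log-density is proportional to -tau*R_S(w') - tau*gamma/2*||w'-w||^2,
    # and return a running average mu of the iterates; the local-entropy
    # gradient is proportional to (mu - w).  grad_risk(w') stands for a
    # minibatch gradient of the empirical risk.
    w_prime, mu = w.copy(), w.copy()
    for _ in range(L):
        drift = tau * grad_risk(w_prime) + tau * gamma * (w_prime - w)
        noise = noise_scale * np.sqrt(2.0 * eta) * rng.standard_normal(w.shape)
        w_prime = w_prime - eta * drift + noise
        mu = 0.75 * mu + 0.25 * w_prime  # exponential running average (weighting assumed)
    return mu
```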
We begin with a review of some related work, before introducing sufficient so that we can make a formal connection between local entropy and PAC-Bayes bounds. We then introduce a differentially private PAC-Bayes bound. In Section 6, we present experiments on MNIST which provide evidence for our theoretical analysis. (Empirical validation is required in order to address the aforementioned gap between pure and approximate differential privacy.) We close with a short discussion. This work was inspired in part by BID44, who highlight empirical properties of SGD that were not widely appreciated within the theory community, and propose a simple linear model to explain the phenomenon. They observe that, without regularization, SGD can achieve zero training error on MNIST and CIFAR, even if the labels are chosen uniformly at random. At the same time, SGD obtains weights with very small generalization error with the original labels. The first observation is strong evidence that the set of classifier accessible to SGD within a reasonable number of iterations is extremely rich. Indeed, with probability almost indistinguishable from one, fitting random labels on a large data set implies that the Rademacher complexity of this effective hypothesis class is essentially the maximum possible (, Thm. 11).The second observation suggests that SGD is performing some sort of capacity control. Zhang et al. show that SGD obtains the minimum norm solution for a linear model, and thus performs implicit regularization. They suggest a similar phenomenon may occur when using SGD to training neural networks. Indeed, earlier work by observed similar phenomena and argued for the same point: implicit regularization underlies the ability of SGD to generalize, even under massive overparametrization. Subsequent work by introduced "path" norms as a better measure of the complexity of ReLU networks. Despite progress, these new norms have not yet lead to nonvacuous generalization bounds (, App. D).There has been recent progress: describe PAC-Bayes bounds, built by perturbing the weights learned by SGD. (The authors were motivated in part by Entropy-SGD and empirical findings relating to "flat minima".) Their bounds are controlled by 1) the "flatness" of empirical risk surface near the SGD solution and 2) the L2 distance between the learned weights and the random initialization. The bounds are also found to be numerically nonvacuous. (We return to this aspect below.) Similar bounds are studied in further depth by Neyshabur et al. (2017b). Recent advances have also identified new spectral norm bounds that correlate closely with generalization error and distinguish between true and random labels BID4 BID38.Our work and Entropy-SGD both connect to early work by BID22 and BID23, which introduced regularization schemes based on information-theoretic principles. These ideas, now referred to as "flat minima", were related to minimizing PAC-Bayes bounds by BID18, although these bounds are minimized with respect to the posterior, not the prior, as is done by Entropy-SGD. BID1 provide an informationtheoretic argument for a generalization of the objective of Hinton and Camp. Their objective takes the form of regularized empirical cross entropŷ DISPLAYFORM0 where Q and P are the prior and posterior on the weights, respectively. For an appropriate range of β, linear PAC-Bayes bounds are exactly of this form. they empirically observe that varying β correlates with a degree of overfitting on a random label dataset. 
BID1 also highlight the connections with variational inference BID25.Our work also relates to renewed interest in nonvacuous generalization bounds BID26 BID27, i.e., bounds on the numerical difference between the unknown classification error and the training error that are (much) tighter than the tautological upper bound of one. demonstrated nonvacuous generalization bounds for random perturbations of SGD solutions using PAC-Bayes bounds for networks with millions of weights.(The algorithm can be viewed as variational dropout BID25, with a proper data-dependent prior but without local reparametrization.) Their work builds on the core insight demonstrated nearly 15 years ago by BID27, who computed nonvacuous bounds for neural networks five orders of magnitude smaller. A key aspect of our analysis relies on the stability of a data-dependent prior. Stability has long been understood to relate to generalization BID8. Our analysis of Entropy-SGLD rests on in differential privacy (see for a survey) and its connection to generalization BID17 BID16 BID7 BID40, which can be viewed as a particularly stringent notion of stability. Entropy-SGLD is an instance of differentially private empirical risk minimization, which is well studied, both in the abstract BID11 BID24 BID6 and in the particular setting of private training via SGD BID6 BID0. Our analysis also relates to the differential privacy of Bayesian and Gibbs posteriors, and approximate sampling algorithms BID35 BID6 BID12 BID42 BID34.In effect, our differentially private PAC-Bayes bound uses a data-distribution-dependent prior, which are permitted in the PAC-Bayesian framework. (Priors must be independent of the data sample, however. Differential privacy allows us to extract information about the distribution from a sample while maintaining statistical validity BID17 .)There is a growing body of work in the PAC-Bayes literature on data-distribution-dependent priors. Write S for a data sample and Q(S) for a data-dependent PAC-Bayesian posterior (i.e., Q : Z m → M 1 (R p) is a fixed learning algorithm for a randomized classifier). BID9 makes an extensive study of data-distribution-dependent priors of the form P * = P * (Q) DISPLAYFORM1. While such priors were known to minimize the KL term in expectation, Catoni was the first to derive PAC-Bayes excess risk bounds using these priors: focusing on Gibbs posteriors Q(S) = Q P (S) def = P exp(−τR S) for some fixed measure P, Catoni derives bounds on the complexity term KL(Q P (S)||P * (Q P)) that hold uniformly over all possible data distributions D. Catoni calls such priors and bounds "local". BID30 extend this approach to generalization bounds and consider both data-independent and data-dependent choices for P. In the later case, P = P(S) and the generalization bound uses the local prior P * (Q P) = E S∼D m [Q P(S) (S)]. In our work, we make a data-dependent but private choice of the prior P = P(S), and then use our differentially private PAC-Bayes generalization bound to control the generalization error of the associated Gibbs posterior Q P (S) in terms of KL(Q P (S)||P). We also evaluated differentially private versions of local bounds, where the complexity term is a uniform bound on KL(Q P (S)||P * (Q P)). 
The bounds were virtually indistinguishable, and so we do not report them here.3 PRELIMINARIES: SUPERVISED LEARNING, ENTROPY-SGD, AND PAC-BAYES Let Z be a measurable space, let D be an unknown distribution on Z, and consider the batch supervised learning setting under a loss function bounded below: having observed S ∼ D m, i.e., m independent and identically distributed samples from D, we aim to choose a predictor, parameterized by weight vector w ∈ R p, with minimal risk DISPLAYFORM2 where: R p × Z → R is measurable and bounded below. (We ignore the possibility of constraints on the weight vector for simplicity.) We will also consider randomized predictors, represented by probability measures Q ∈ M 1 (R p) on R p, whose risks are defined via averaging, DISPLAYFORM3 where the second equality follows from Fubini's theorem and the fact that is bounded below. Let S = (z 1, . . ., z m) and letD DISPLAYFORM4 δ z i be the empirical distribution. Given a weight distribution Q, such as that chosen by a learning algorithm on the basis of data S, its empirical risk DISPLAYFORM5 will be studied as a stand-in for its risk, which we cannot compute. WhileR S (Q) is easily seen to be an unbiased estimate of R D (Q) when Q is independent of S, our goal is to characterize the (one-sided) generalization error R D (Q) −R S (Q) when Q is random and dependent on S.One of our focuses will be on classification, where Z = X × K, with K a finite set of classes/labels. A product measurable (in practice, continuous) function f: DISPLAYFORM6 In this setting, 0-1 loss corresponds to g(y, y) = 1 if and only if y = y. In binary classification, we take K = {0, 1}.We will also consider parametric families of probability-distribution-valued classifiers f: DISPLAYFORM7 For every input x ∈ X, the output f (w, x) specifies a probability distribution on K. In this setting, (w, (x, y)) = g(f (w, x), y) for some g: DISPLAYFORM8 The standard loss is then the cross entropy, given by g((p 1, . . ., p K), y) = − log p y. (Under cross entropy loss, the empirical risk is, up to a multiplicative constant, a negative log likelihood.) In the special case of binary classification, the output can be represented simply by an element of, i.e., the probability the label is one. The binary cross entropy, BCE, is given by g(p, y) = −y log(p) − (1 − y) log(1 − p). Note that cross entropy loss is merely bounded below. We will consider bounded modifications in Appendix B.2.We will sometimes refer to elements of R p and M 1 (R p) as classifiers and randomized classifiers, respectively. Likewise, we will often refer to the (empirical) risk as the (empirical) error. Entropy-SGD is a gradient-based learning algorithm proposed by BID10 as an alternative to stochastic gradient descent on the empirical risk surfaceR S. The authors argue that Entropy-SGD has better generalization performance and provide some empirical evidence. Part of that argument is a theoretical analysis of the smoothness of the local entropy surface that Entropy-SGD optimizes in place of the empirical risk surface, as well as a uniform stability argument that they admit rests on assumptions that are violated, but to a small degree empirically. As we have mentioned in the introduction, Entropy-SGD's modifications to the noise term in SGLD in much worse smoothness. We will modify Entropy-SGD in order to stabilize its learning and, up to some approximations, provably control overfitting. 
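Returning to the definitions above, the empirical risk of a randomized classifier Q is an expectation over its weight distribution and is estimated by Monte Carlo in practice. A minimal sketch, with the sampler and forward pass as placeholders:

```python
import numpy as np

def gibbs_empirical_error(sample_weights, predict, X, y, k=100):
    # Estimate R_S(Q) = E_{w~Q}[R_S(w)] under 0-1 loss by drawing k weight
    # vectors from Q, evaluating each deterministic classifier, and averaging.
    # sample_weights() draws one weight vector from Q; predict(w, X) returns
    # the predicted labels of the network with weights w.
    errs = []
    for _ in range(k):
        w = sample_weights()
        errs.append(np.mean(predict(w, X) != y))
    return float(np.mean(errs))
```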
Entropy-SGD is stochastic gradient ascent applied to the optimization problem:arg max DISPLAYFORM0 The objective F γ,τ (·; S) is known as the local entropy, and can be viewed as the log partition function of the unnormalized probability density function DISPLAYFORM1 (We will denote the corresponding distribution by G w,S γ,τ .) Assuming that one can exchange differentiation and integration, it is straightforward to verify that DISPLAYFORM2 and then the local entropy F γ,τ (·; S) is even differentiable, even if the empirical riskR S is not. Indeed, Chaudhari et al. show that the local entropy and its derivative are Lipschitz. Chaudhari et al. argue informally that maximizing the local entropy leads to "flat minima" in the empirical risk surface, which several authors BID22 BID23 BID2 BID3 have argued is tied to good generalization performance (though none of these papers gives generalization bounds, vacuous or otherwise). 1 Chaudhari et al. propose a Monte Carlo estimate of the gradient, DISPLAYFORM3 1 The local entropy should not be confused with the smoothed risk surface obtained by convolution with a Gaussian kernel: in that case, every point on this surface represents the average risk of a network obtained by perturbing the network parameters according to a Gaussian distribution. The local entropy also relates to a perturbation, but the perturbation is either accepted or rejected based upon its relative performance (as measured by the exponentiated loss) compared with typical perturbations. Thus the local entropy perturbation concentrates on regions of weight space with low empirical risk, provided they have sufficient probability mass under the distribution of the random perturbation. Section 4 yields further insight into the local entropy function. Input: DISPLAYFORM0 w, µ ← w 3:for i ∈ {1, ..., L} do Run SGLD for L iterations.4: DISPLAYFORM1 Entropy-SGLD onlyStep along stochastic local entropy ∇ BID43, which generates an exact sample in the limit of infinite computation and requires that the empirical risk be differentiable. 2 The final output of Entropy-SGD is the deterministic predictor corresponding to the final weights w * achieved by several epochs of optimization. Algorithm 1 gives a complete description of the stochastic gradient step performed by Entropy-SGD. If we rescale the learning rate, η ← 1 2 η τ, lines 6 and 7 are equivalent to DISPLAYFORM2 Notice that the noise term is multiplied by a factor of 2/τ. This follows from the definition of the local entropy. A multiplicative factor ε-called the "thermal noise", but playing exactly the same role as 2/τ here-appears in the original description of the Entropy-SGD algorithm given by Chaudhari et al. However, ε does not appear in the definition of local entropy used in their stability analysis. Our derivations highlights that the scaling the noise term in SGLD update has a profound effect: the thermal noise exponentiates the density that defines the local entropy. The smoothness analysis of Entropy-SGD does not take into consideration the role of ε, which is critical because Chaudhari et al. take ε to be as small as 10 −3 and 10 −4. Indeed, the that the local entropy surface is smoother no longer holds. We will see that τ controls the differential privacy and thus the generalization error of Entropy-SGD. Let Q, P be probability measures defined on R p, assume Q is absolutely continuous with respect to P, and write dQ dP: R p → R + ∪ {∞} for some Radon-Nikodym derivative of Q with respect to P. 
Then the Kullback-Liebler divergence (or relative entropy) of P from Q is defined to be DISPLAYFORM0 For p, q ∈, we will abuse notation and define DISPLAYFORM1 where B(p) denotes the Bernoulli distribution on {0, 1} with mean p. We now present a PAC-Bayes theorem, first established by BID31. We focus on the setting of bounding the generalization error of a (randomized) classifier on a finite discrete set of labels K. The following variation is due to BID28 for 0-1 loss (see also BID26 and BID9 .) Theorem 3.1 (PAC-Bayes BID31 BID28). Under 0-1 loss, for every δ > 0, m ∈ N, distribution D on R k × K, and distribution P on R p, DISPLAYFORM2 We will also use the following variation of a PAC-Bayes bound, where we consider any bounded loss function. Theorem 3.2 (Linear PAC-Bayes Bound (; BID9). Fix λ > 1/2 and assume the loss takes values in an interval of length L max. For every δ > 0, m ∈ N, distribution D on R k × K, and distribution P on R p, DISPLAYFORM3 We introduce several additional generalization bounds when we introduce differential entropy. We now present our first contribution, a connection between the local entropy and PAC-Bayes bounds. We begin with some notation for Gibbs distributions. For a measure P on R p and function g: R p → R, let P[g] denote the expectation g(h)P(dh) and, provided P[g] < ∞, let P g denote the probability measure on R p, absolutely continuous with respect to P, with Radon-Nikodym derivative DISPLAYFORM0. A distribution of the form P exp(−τg) is generally referred to as a Gibbs distribution. In the special case where P is a probability measure, we call P exp(−τR S) a "Gibbs posterior". for some λ > 1/2, and let P be a multivariate normal distribution with mean w and covariance matrix (τγ) −1 I p. Then maximizing the local entropy F γ,τ (w; S) with respect to w is equivalent to minimizing a linear PAC-Bayes bound (Theorem 3.2) on the risk R D (G w,S γ,τ) of the Gibbs posterior G w,S γ,τ = P exp(−τR S), where the bound is optimized with respect to the mean w of P.Proof. Let m, δ, D, and P be as in Theorem 3.1 and let S ∼ D m. The linear PAC-Bayes bound (Theorem 3.2) ensures that for any fixed λ > 1/2 and bounded loss function, with probability at least 1 − δ over the choice of S, the bound DISPLAYFORM1 holds for all Q ∈ M 1 (R p). Minimizing the upper bound on the risk R D (Q) of the randomized classifier Q is equivalent to the program DISPLAYFORM2 with r(h) = m λ L maxR S (h). By (, Lem. 1.1.3), for all Q ∈ M 1 (R p) with KL(Q||P) < ∞, DISPLAYFORM3 Using Eq., we may reexpress Eq. as DISPLAYFORM4 By the nonnegativity of the Kullback-Liebler divergence, the infimum is achieved when the KL term is zero, i.e., when Q = P exp(−r). Then DISPLAYFORM5 Finally, it is plain to see that F γ,τ (w; DISPLAYFORM6, and P = N (w, (τγ) −1 I p ) is a multivariate normal with mean w and covariance matrix (τγ) −1 I.The analysis falls short when the loss function is unbounded, because the PAC-Bayes bound we have used applies only to bounded loss functions. BID19 described PAC-Bayes generalization bounds for unbounded loss functions. (See BID21 for related work on excess risk bounds and further references). 
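The key step in the proof of Theorem 4.1 above is the variational identity from Catoni's Lemma 1.1.3: the Gibbs distribution P_exp(-r) attains the infimum of E_Q[r] + KL(Q||P), and the value of the infimum is -log P[exp(-r)]. A quick numerical check of this identity on a finite space (a toy example, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random(5); P /= P.sum()           # an arbitrary prior on 5 points
r = rng.random(5)                          # an arbitrary nonnegative "risk" r(h)

Q = P * np.exp(-r); Q /= Q.sum()           # the Gibbs distribution P_exp(-r)
objective = (Q * r).sum() + (Q * np.log(Q / P)).sum()   # E_Q[r] + KL(Q||P)
print(objective, -np.log((P * np.exp(-r)).sum()))        # the two numbers agree
```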
For their bounds to be evaluated on the negative log likelihood loss, one needs some knowledge of the data distribution in order to approximate certain statistics of the deviation of the empirical riskR S (w) from true risk R D (w).5 DATA-DEPENDENT PAC-BAYES PRIORS VIA DIFFERENTIAL PRIVACY Theorem 4.1 reveals that Entropy-SGD is optimizing a PAC-Bayes bound with respect to the prior. As a , the prior P depends on the sample S, and the hypotheses of the PAC-Bayes theorem (Theorem 3.1) are not met. Naively, it would seem that this interpretation of Entropy-SGD cannot explain its ability to generalize. Using tools from differential privacy BID13, we show that if the prior term is optimized in a differentially private way, then a PAC-Bayes theorem still holds, at the cost of a slightly looser bound. We will assume basic familiarity with differential privacy, but give basic definitions and in Appendix A. We use the notation A: Z T for a (randomized) algorithm that takes as input an element in Z and produces an output in T.The key we will employ is due to Dwork et al. (2015b, Thm. 11). Theorem 5.1. Let m ∈ N, let A: Z m T, let D be a distribution over Z, let β ∈, and, for each t ∈ T, fix a set R(t) ⊆ Z m such that P S∼D m (S ∈ R(t)) ≤ β. If A is ε-differentially private for ε ≤ ln(1/β) /(2m), then P S∼D m (S ∈ R(A (S))) ≤ 3 β.Using Theorem 5.1, one can compute tail bounds on the generalization error of fixed classifiers, and then, provided that a classifier is learned from data in a differentially private way, the tail bound holds on the classifier, with less confidence. The following two tail bounds are examples of this idea. The first is a simple variant of (b, Thm. 9) due to Oneto, Ridella, and Anguita (2017, Lem. 2). Theorem 5.2. Let m ∈ N and let A: Z m R p be ε-differentially private. , Lem. 3)). Let m ∈ N and let A: Z m R p be ε-differentially private. Then DISPLAYFORM7 DISPLAYFORM8 The PAC-Bayes theorem allows one to choose the prior based on the data-generating distribution D, but not on the data S ∼ D m. Using differential privacy, we can consider a data-dependent prior P(S).Theorem 5.4. Under 0-1 loss, for every δ > 0, m ∈ N, distribution D on R k ×K, and ε-differentially private data-dependent prior P: DISPLAYFORM0 It follows from the PAC-Bayes theorem (Theorem 3.1) that P S∼D m (S ∈ R(P)) ≤ β. Theorem 5.1 implies that the bound holds with P replaced by P(S), provided that we inflate the probability of failure. In particular, let δ = 3 β. Then ln(1/β) = 2 ln(3/δ). By Theorem 5.1, provided 2mε 2 ≤ ln(1/β), then P S∼D m (S ∈ R(P(S))) ≤ δ. It follows that, with probability no more than δ over S ∼ D m, there exists a distribution Q on R p such that DISPLAYFORM1 The bound stated in Eq. follows immediately. Note that the bound holds for any posterior Q, including one obtained by optimizing a different PAC-Bayes bound. We have chosen to present a differentially private version of Theorem 3.1 rather than Theorem 3.2, because the former tends to be tighter numerically. Giving a differentially private version of Theorem 3.2, or any other PAC-Bayes bound, should be straightforward: one merely needs to decide how to incorporate the constraint between ε, β, and m in Theorem 5.1. We have chosen to deal with the constraint via a max operation affecting the width of the confidence interval. Note that, in realistic scenarios, δ is large enough relative to ε that an ε-differentially private prior P(S) contributes 2ε 2 to the generalization error. 
Therefore, ε must be much less than one to not contribute a nontrivial amount to the generalization error. In order to match the m −1 rate by which the KL term decays, one must have ε ∈ O(m −1/2). Our empirical studies use this rate. We have already explained that the weights learned by Entropy-SGD can be viewed as the mean of a data-dependent prior P(S). By Theorem 5.4 and the fact that post-processing does not decrease privacy, it would suffice to establish that the mean is ε-differentially private in order to obtain a risk bound on the corresponding Gibbs posterior classifier. Entropy-SGD can be viewed as stochastic gradient ascent on the negative local entropy, but with biased gradient estimates. The bias comes from the use of SGLD to compute the expectation in Eq.. Putting aside this issue, existing privacy analyses of SGD worsen after every iteration. For the number of iterations necessary to obtain reasonable weights, known upper bounds on the differential privacy of SGD yield vacuous generalization bounds. The standard (if idealized) approach for optimizing a data-dependent objective in a private way is to use the exponential mechanism BID33. In the context of maximizing the local entropy, the exponential mechanism correspond to sampling exactly from the "local entropy (Gibbs) distribution" DISPLAYFORM0 where β > 0 and P is some measure on R p. (It is natural to take P to be Lebesgue measure, or a multivariate normal distribution, which would correspond to L2 regularization of the local entropy.)The following establishes the privacy of a sample from the local entropy distribution: Theorem 5.5. Let γ, τ > 0, and assume the range of the loss is contained in an interval of length L max. One sample from the local entropy distribution P exp(β F γ,τ (·;S)), is 2β L max τ m -differentially private. Proof. The follows immediately from the following two lemmas. Lemma 5.6 ((, Thm. 6)). Let q: Z m × R p → R be measurable, let P be a measure on R p, let β > 0, and assume P[exp(−β q(S, ·))] < ∞ for all S ∈ Z m. Let ∆q def = sup S,S sup w∈R p |q(S, w) − q(S, w)|, where the first supremum ranges over pairs S, S ∈ Z m that disagree on no more than one coordinate. Let A: Z m R p, on input S ∈ Z m, output a sample from the Gibbs distribution P exp(−β q(S,·)). Then A is 2β ∆q-differentially private. Lemma 5.7. Let F γ,τ (w; S) be defined as Eq., assume the range of the loss is contained in an interval of length L max, and define q(S, w) = −F γ,τ (w; S). Then ∆q DISPLAYFORM1 Proof. The proof essentially mirrors that of (, Thm. 6).There are two obvious obstructions to using the exponential mechanism to pick a prior mean: first, cross-entropy loss can change in an unbounded way when swapping a single data point; second, sampling from the local entropy distribution exactly is hard in general. To sidestep the first obstruction, we modify the underlying cross-entropy loss to be bounded by rescaling the probabilities output by the classifier to be bounded away from zero and one, allowing us to invoke Lemma 5.7. (Details of our modification of the cross entropy are described in Appendix B.2.1.)There is no simple way to sidestep the second obstruction. Instead, we once again use SGLD to generate an approximate sample from the local entropy distribution. In summary, to optimize the local entropy F γ,τ (·; S) in a private way to obtain the prior mean w, we repeatedly perform the SGLD update DISPLAYFORM2 where at each roundĝ(w) is an estimate of the gradient ∇ w F γ,τ (w; S). (Recall the identity Eq..) 
As in Entropy-SGD, we construct biased gradient estimates via an inner loop of SGLD. In summary, the only change to Entropy-SGD is the addition of noise in the outer loop. We call the ing algorithm Entropy-SGLD. (See Algorithm 1. Note that we take β = 1 in our experiments.)There have been a number of privacy analyses of SGLD BID35 BID6 BID12 BID42 BID34. Most of these analyses deliver (ε, δ)-differential privacy, but none of them take advantage of the fact that SGLD mixes in the limit as it converges weakly to the Gibbs distributions, under certain technical conditions (, Thm. 7). In our analysis and bound calculations, we therefore make the approximation that SGLD has the same privacy as its limiting invariant measure, the exponential mechanism. Building a less conservative model of the privacy of SGLD is an open problem. However, by making this approximation, we may see the potential/limits of combining differentially private optimization and PAC-Bayesian bounds. We return to the issues again in light of our empirical findings (Section 6) and in our discussion (Section 7). The generalization bounds that we have devised are data-dependent and so the question of their utility is an empirical one that requires data. In this section, we perform an empirical study of SGD, SGLD, Entropy-SGD, and Entropy-SGLD on the MNIST data set, on both convolutional and fully connected architectures, and compare our generalization bounds to estimates based on held-out data. Under our privacy approximation, SGLD and Entropy-SGLD are ε-differentially private and we take advantage of this fact to apply differentially private versions of two tail bounds and our PACBayes bound. The degree ε of privacy is determined by the τ parameter of the local entropy (C.f. thermal noise 2/τ), and then, in turn, ε contributes to our bounds on the generalization error. As theory predicts, τ affects the degree of overfitting empirically, and no bound we compute is violated too frequently. Of course, the validity of our generalization bounds rests on the degree to which our privacy approximation is violated. 3 We reflect on our approximation in light of our empirical , and then return to this point in the discussion. The weights learned by SGD, SGLD, and Entropy-SGD are treated differently from those learned by Entropy-SGLD. In the former case, the weights parametrize a neural network as usual, and the training and test error are computed using these weights. In the latter case, the weights are taken to be the mean of a multivariate normal prior, and we evaluate the training and test error of the The gap is an estimate of the generalization error. On true labels, SGLD finds classifiers with relatively small generalization error. At low thermal noise settings, SGLD (and its zero limit, SGD), achieve small empirical risk. As we increase the thermal noise, the empirical 0-1 error increases, but the generalization error decreases. At 0.1 thermal noise, risk is close to 50%. (top-right) On random labels, SGLD has high generalization error for thermal noise values 0.01 and below. (True error is 50%). (middle-left) On true labels, Entropy-SGD, like SGD and SGLD, has small generalization error. For the same settings of thermal noise, empirical risk is lower. (middle-right) On random labels, Entropy-SGD overfits for thermal noise values 0.005 and below. Thermal noise 0.01 produces good performance on both true and random labels. 
(bottom row) Entropy-SGLD is configured to be ε-differentially private with ε ≈ 0.0327 by setting τ = √ m, where m is the number of training samples. (bottom-left) On true labels, the generalization error for networks learned by Entropy-SGLD is close to zero. Generalization bounds are relatively tight. (bottom-right) On random label, Entropy-SGLD does not overfit. See Fig. 3 for SGLD bounds at same privacy setting. associated Gibbs posterior (i.e., a randomized classifier). We also report the performance of the (deterministic) network parametrized by these weights (called the "mean" classifier) in order to give a coarse statistic summarizing the local empirical risk surface. Following BID44, we study these algorithms on MNIST with its original ("true") labels, as well as on random labels. Parameter τ that performs very well in one setting often does not perform well in the other. Random labels mimic data where the Bayes error rate is high, and where overfitting can have severe consequences. We use a two-class variant of MNIST . 4 (See FIG7 and Appendix C for our experiments on the standard multiclass MNIST dataset. They yield similar insight.) Some experiments involve random labels, i.e., labels drawn independently and uniformly at random at the start of training. We study three network architectures, abbreviated FC600, FC1200, and CONV. Both FC600 and FC1200 are 3-layer fully connected networks, with 600 and 1200 units per hidden layer, respectively. CONV is a convolutional architecture. All three network architectures are taken from the MNIST experiments by BID10, but adapted to our two-class version of MNIST. 5 Let S and S tst denote the training and test sets, respectively. For all learning algorithms we track (i)R S (w) andR S tst (w), i.e., the training and test error for w. We also track DISPLAYFORM0.e., the mean training and test error of the local Gibbs distribution, viewed as a randomized classifier ("Gibbs") and, using the differential privacy bounds in Theorem 5.5, compute (iii) a PAC-Bayes bound on R D (G w,S γ,τ) using Theorem 5.4 ("PAC-bound"); (iv) the mean of a Hoeffding-style bound on R D (w), where w ∼ P exp(F γ,τ (·;S)),, using Theorem 5.2 ("H-bound");(v) an upper bound on the mean of a Chernoff-style bound on R D (w), where w ∼ P exp(F γ,τ (·;S)),, using Theorem 5.3 ("C-bound").We also compute H-and C-bounds for SGLD, viewed as a sampler for w ∼ P exp(−τR S), where P here is Lebesgue measure. In order for SGLD and Entropy-SGLD to be private, we modify the cross entropy loss function to be bounded. We achieve this by an affine transformation of the neural networks output that prevents extreme probability (se Appendix B.2.1). With the choice of τ = √ m, and the loss function taking values in an interval of length L max = 4, Entropy-SGLD is ε-differentially private, with ε ≈ 0.0327. See Appendix B.2 for additional details. Note that, in the calculation of (iii), we do not account for Monte Carlo error in our estimate ofR S (w). The effect is small, given the large number of iterations of SGLD performed for each point in the plot. Recall that DISPLAYFORM1 and so we may interpret the bounds in terms of the performance of a randomized classifier or the mean performance of a randomly chosen classifier. Key for the convolutional architecture (CONV) appear in FIG2. Results for FC600 and FC1200 appear in Fig. 2 of Appendix B. (Training the CONV network produces the lowest training/test errors and tightest generalization bounds. 
Results and bounds for FC600 are nearly identical to those for FC1200, despite FC1200 having three times as many parameters.)The top row of FIG2 presents the performance of SGLD for various levels of thermal noise 2/τ under both true and random labels. (Under our privacy approximation, we may also use SGLD to directly perform a private optimization of the empirical risk surface. The level of thermal noise determines the differential privacy of SGLD and so we expect to see a tradeoff between empirical risk and generalization error. Note that SGD is the same as SGLD with zero thermal noise.) SGD achieves the smallest training and test error on true labels, but overfits the worst on random labels. In comparison, SGLD's generalization performance improves with higher thermal noise, while its risk performance worsens. At 0.05 thermal noise, SGLD achieves reasonable but relatively large risk but almost zero generalization error on both true and random labels. Other thermal noise settings have either much worse risk or generalization performance. The middle row of FIG2 presents the performance of Entropy-SGD for various levels of thermal noise 2/τ under both true and random labels. As with SGD, Entropy-SGD's generalization performance improves with higher thermal noise, while its risk performance worsens. At the same levels of thermal noise, Entropy-SGD outperforms the risk and generalization error of SGD. At 0.01 thermal noise, Entropy-SGD achieves good risk and low generalization error on both true and random labels. However, the test-set performance of Entropy-SGD at 0.01 thermal noise is still worse than that of SGD. Whether this difference is due to SGD overfitting to the MNIST test set is unclear and deserves further study. The bottom row of FIG2 presents the performance of Entropy-SGLD with τ = √ m on true and random labels. (This corresponds to approximately 0.09 thermal noise.) On true lables, both the mean and Gibbs classifier learned by Entropy-SGLD have approximately 2% test error and essentially zero generalization error, which is less than predicted by our bounds. Our PAC-Bayes risk bounds are roughly 3%. As expected by the theory, Entropy-SGLD does not overfit on random labels, even after thousands of epochs. We find that our PAC-Bayes bounds are generally tighter than the H-and C-bounds. All bounds are nonvacuous, though still loose. The error bounds reported here are tighter than those reported by BID18. However, the validity of all three privacy-based bounds that we report rests on the privacy approximation regarding SGLD, and so interpreting these bounds requires some subtlety. We achieve much tighter generalization bounds than previously reported, and better test error, but we are still far from the performance of SGD. This is despite making a strong approximation, and so we might view these as telling us the limits of combining differential privacy and PAC-Bayes bounds. Weaker notions of stability/privacy may be necessary to achieve further improvement in generalization error and test error. Despite the coarse privacy approximation, no bound is ever violated: possible explanations include the bounds simply being loose and/or the data being far from worst case. Note that, given the number of experiments, we might even expect a violation for tight bounds. Indeed, our performance on random labels supports the hypothesis that the privacy of (Entropy-)SGLD does not degrade over time, at least not in a way that can be detected by our experiments. 
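As a quick numerical sanity check of the privacy level used above: by Theorem 5.5, one sample from the local-entropy Gibbs distribution is (2*beta*L_max*tau/m)-differentially private, and with beta = 1, L_max = 4 and tau = sqrt(m) this equals 8/sqrt(m). Assuming m = 60,000 training points (an assumption here, consistent with the quoted numbers):

```python
import numpy as np

m = 60_000                      # assumed MNIST training-set size
beta, L_max = 1.0, 4.0
tau = np.sqrt(m)
eps = 2 * beta * L_max * tau / m
print(eps)                      # ~0.0327, matching the value quoted above
print(2 * eps ** 2)             # ~0.002, i.e. the ~0.2% contribution to the bound
```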
Our work reveals that Entropy-SGD can be understood as optimizing a PAC-Bayes generalization bound in terms of the bound's prior. Because the prior must be independent of the data, the bound is invalid, and, indeed, we observe overfitting in our experiments with Entropy-SGD when the thermal noise 2/τ is set to 0.0001 as suggested by Chaudhari et al. for MNIST. PAC-Bayes priors can, however, depend on the data distribution. This flexibility seems wasted, since the data sample is typically viewed as one's only view onto the data distribution. However, using differential privacy, we can span this gap. By performing a private computation on the data, we can extract information about the underlying distribution, without undermining the statistical validity of a subsequent PAC-Bayes bound. Our PAC-Bayes bound based on a differentially private prior is made looser by the use of a private data-dependent prior, but the gains in choosing a datadistribution-dependent prior more than make up for the expansion of the bound due to the privacy. (The gains come from the KL term being much smaller on the account of the prior being better matched to the posterior.) Understanding how our approach compares to local PAC-Bayes priors BID9 is an important open problem. The most elegant way to make Entropy-SGD private is to replace SGD with a sample from the Gibbs distribution (known as the exponential mechanism in the differential privacy literature). However, generating an exact sample is intractable, and so practicioners use SGLD to generate an approximate sample, relying on the fact that SGLD converges weakly to the exponential mechanism under certain technical conditions. Our privacy approximation allows us to proceed with a theoretical analysis by assuming that SGLD achieves the same privacy as the exponential mechanism. On the one hand, we do not find overt evidence that our approximation is grossly violated. On the other, we likely do not require such strong privacy in order to control generalization error. We might view our privacy-based bounds as being optimistic and representing the bounds we might be able to achieve rigorously should there be a major advance in private optimization. (No analysis of the privacy of SGLD takes advantage of the fact that it mixes weakly.) On the account of using private data-dependent priors, our bounds are significantly tighter than those reported by BID18. However, despite our bounds potentially being optimistic, the test set error we are able to achieve is still 5-10 times that of SGD. Differential privacy may be too conservative for our purposes, leading us to underfit. Indeed, we think it is unlikely that Entropy-SGD has strong differential privacy, yet we are able to achieve good generalization on both true and random labels under 0.01 thermal noise. Identifying the appropriate notion of privacy/stability to combine with PAC-Bayes bounds is an important problem. Despite our progress on building learning algorithms with strong generalization performance, and identifying a path to much tighter PAC-Bayes bounds, Entropy-SGLD learns much more slowly than Entropy-SGD, the risk of Entropy-SGLD is far from state of the art, and our PAC-Bayes bounds are loose. It seems likely that there is a fundamental tradeoff between the speed of learning, the excess risk, and the ability to produce a certificate of one's generalization error via a rigorous bound. Characterizing the relationship between these quantities is an important open problem. 
A: DIFFERENTIAL PRIVACY Here we formally define some of the differential privacy related terms used in the main text. (See BID13 BID15 for more details.) Let U, U1, U2, ... be independent uniform random variables, independent also of any random variables introduced by P and E, and let π: DISPLAYFORM0 Definition A.1. A randomized algorithm A from R to T, denoted A: R T, is a measurable map A: [0, 1] × R → T. Associated to A is a (measurable) collection of random variables {A_r : r ∈ R} that satisfy A_r = A(U, r). When there is no risk of confusion, we will write A(r) for A_r. Definition A.2. A randomized algorithm A: Z^m T is (ε, δ)-differentially private if, for all pairs S, S′ ∈ Z^m that differ at only one coordinate, and all measurable subsets B ⊆ T, we have P(A(S) ∈ B) ≤ e^ε P(A(S′) ∈ B) + δ. We will write ε-differentially private to mean (ε, 0)-differentially private. Definition A.3. Let A: R T and A′: DISPLAYFORM1 Lemma A.4 (post-processing). Let A: Z^m T be (ε, δ)-differentially private and let F: T T be arbitrary. Then F ∘ A is (ε, δ)-differentially private. We studied three architectures: CONV, FC600, and FC1200. CONV was a convolutional neural network, whose architecture was the same as that used by BID10 for multiclass MNIST classification, except modified to produce a single probability output for our two-class variant of MNIST. In particular, CONV has two convolutional layers, a fully connected ReLU layer, and a sigmoidal output layer, yielding 126,711 parameters in total. FC600 and FC1200 are fully connected 3-layer neural networks, with 600 and 1200 hidden units, respectively, yielding 834,601 and 2,385,185 parameters in total, respectively. We used ReLU activations for all but the last layer, which was sigmoidal to produce an output in [0, 1]. In their MNIST experiments, BID10 use dropout and batch normalization. We did not use dropout. The bounds we achieved with and without batch norm were very similar. Without batch norm, however, it was necessary to tune the learning rates. Understanding the combination of SGLD and batch norm, and the limiting invariant distribution, if any, is an important open problem. B.2.1 OBJECTIVE All networks are trained to minimize a bounded variant of the empirical cross entropy loss. The change involves replacing g(p, y) = − log p with g(p, y) = − log ψ(p), where DISPLAYFORM0 is an affine transformation that maps to DISPLAYFORM1, removing extreme probability values. As a result, the binary cross entropy loss BCE is contained in an interval of length L_max. In particular, DISPLAYFORM2 We take L_max = 4 in our experiments. (Displaced figure caption: On true labels, SGLD learns a network with approximately 3% higher training and test error than the mean and Gibbs networks learned by Entropy-SGLD. SGLD does not overfit on random labels, as predicted by theory. The C-bound on the true error of this network is around 8%, which is worse than the roughly 4% C-bound on the mean classifier.) Ordinarily, an epoch implies one pass through the entire data set. For SGD, each stochastic gradient step processes a minibatch of size K = 128. Therefore, an epoch is m/K = 468 steps of SGD. An epoch for Entropy-SGD and Entropy-SGLD is defined as follows: each iteration of the inner SGLD loop processes a minibatch of size K = 128, and the inner loop runs for L = 20 steps. Therefore, an epoch is m/(LK) steps of the outer loop. In concrete terms, there are 20 steps of SGD per every one step of Entropy-SG(L)D. Concretely, the x-axis of our plots measures epochs divided by L.
This choice, used also by BID10, ensures that the wall-clock time of Entropy-SG(L)D and SGD align. The step sizes for SGLD must be square summable but not summable. The step sizes for the outer SGLD loop are of the form η t = ηt −0.6, with η = 0.006 γτ. The step sizes for the inner SGLD loop are of the form η t = ηt −1, with η = 1 γτ. The estimate produced by the inner SGLD loop is computed using a weighted average (line 8) with α = 0.75. We use SGLD again when computing the PAC-Bayes generalization bound (Appendix B.3.2). In this case, SGLD is used to sample from the local Gibbs distribution when estimating the Gibbs risk and the KL term. We run SGLD for 1000 epochs to obtain our estimate. Again, we use weighted averages, but with α = 0.005, in order to average over a larger number of samples and better control the variance. We set γ = 1 and τ = √ m and keep the values fixed during optimization. By Theorem 5.5, the value of τ, L max, and β determine the differential privacy of Entropy-SGLD. In turn, the differential privacy parameter ε and confidence parameter δ contribute When the empirical error is close to zero, the KL version of the PAC-Bayes bound Theorem 3.1 is considerably tighter than the Hoeffding-style bound first described by BID31. However, using this relative entropy bound requires one to be able to compute the largest value p such that KL(q||p) ≤ c. There does not appear to be a simple formula for this value. In practice, however, the value can be efficiently numerically approximated using, e.g., Newton's method. See (, §2.2 and App. B). Let (w) = τR S (w). By (, Lem Both terms have obvious Monte Carlo estimates: DISPLAYFORM0 where w 1, . . ., w k are taken from a Markov chain targeting P exp(−), such as SGLD run for k 1 steps (which is how we computed our bounds), and log P[exp(−)] = log exp{− (w)} P(dw) DISPLAYFORM1 where h 1,..., h k are i.i.d. P (which is a multivariate Gaussian in this case). In the latter case, due to the concavity of log, the estimate is a lower bound with high probability, yielding a high probability upper bound on the KL term. We evaluate the same generalization bounds on the standard MNIST classification task as in the MNIST binary labelling case. All the details of the network architectures and parameters are as stated in Appendix B.2, with two exception: following BID10, we use a fully connected network with 1024 hidden units per layer, denoted FC1024. The neural network produces a probability vector (p 1, . . ., p K) via a soft-max operation. Ordinarily, we then apply the cross entropy loss corresponding to g FIG2.., p K ), y) = − log p y. When training privately, we use a bounded variant of the cross entropy loss, where the function g above is replaced by g((p 1, . . ., p K), y) = − log ψ(p y), and ψ is defined as in Eq.. FC1024 network trained on true labels. The train and test error suggest that the generalization gap is close to zero, while all three bounds exceed the test error by slightly more than 3%. (bottomleft) CONV network trained on true labels. Both the train and the test errors are lower than those achieved by the FC1024 network. We still do not observe overfitting. The C-bound and PAC-Bayes bounds exceed the test error by ≈ 3%. (top-right) FC1024 network trained on random labels. After approximately 1000 epochs, we notice overfitting by ≈ 2%. Running Entropy-SGLD further does not cause an additional overfitting. Theory suggests that our choice of τ prevents overfitting via differential privacy. 
(bottom-right) CONV network trained on random labels. We observe almost no overfitting (less than 1%). Both training and test error coincide and remain close to the guessing rate (90%).
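The C- and PAC-Bayes bounds quoted above require inverting the binary kl divergence: given the empirical Gibbs risk q and a right-hand side c, one needs the largest p with kl(q||p) <= c. The paper mentions Newton's method; the following is a simple bisection sketch (the helper names are ours) that converges to the same value.

```python
import numpy as np

def kl_bernoulli(q, p):
    """KL(Bern(q) || Bern(p)), clipping away the endpoints to avoid log(0)."""
    eps = 1e-12
    q, p = np.clip(q, eps, 1 - eps), np.clip(p, eps, 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def kl_inverse_upper(q, c, tol=1e-9):
    """Largest p in [q, 1) with kl(q||p) <= c, found by bisection.

    kl(q||p) is increasing in p for p >= q, so bisection is monotone;
    Newton's method (as suggested in the text) converges to the same value.
    """
    lo, hi = q, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(q, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

# Example: empirical Gibbs risk of 2% and a right-hand side of 0.001.
print(kl_inverse_upper(0.02, 0.001))
```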
We show that Entropy-SGD optimizes the prior of a PAC-Bayes bound, violating the requirement that the prior be independent of data; we use differential privacy to resolve this and improve generalization.
714
scitldr
In this paper, we investigate learning deep neural networks for automated optical inspection in industrial manufacturing. Our preliminary results have shown a stunning performance improvement by transfer learning from the completely dissimilar source domain: ImageNet. Further study to demystify this improvement shows that transfer learning produces a highly compressible network, which was not the case for the network learned from scratch. The experimental results show that there is a negligible accuracy drop in the network learned by transfer learning until it is compressed to a 1/128 reduction of the number of convolution filters. This is in contrast to compression without transfer learning, which loses more than 5% accuracy at the same compression rate. The network trained from scratch can also achieve 99.78% accuracy with extensively augmented data and a long period of training. Surprisingly, however, although the performance of both networks is similar, the network trained from scratch learns much denser features of the input data than the network trained by transfer learning. We experimentally show that, using a standard teacher-student model compression technique, the network trained by transfer learning can be compressed to a 1/128 reduction of the number of convolution filters with a negligible accuracy drop (a generic sketch of such a distillation objective is given after the summary line below). In contrast, the network trained from scratch loses more than 5% accuracy at the same compression rate. Previous works on network compression have focused on the methods of compression, but our work is, to the best of our knowledge, the first report that transfer learning is related to network compression. The rest of the paper is organized as follows. In Section 2, we explain the method of experiments and show the compression results in Section 2.3. Finally, we conclude with a brief discussion in Section 3. For a given input image, we can see that the TL teacher network learns much sparser features of the input data than the Scratch teacher network (Figure 2). The sparsity of activation in the TL teacher
We experimentally show that transfer learning makes sparse features in the network and thereby produces a more compressible network.
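Since the entry above only names the compression approach, here is a generic Hinton-style teacher-student (knowledge-distillation) objective as a sketch; the paper may well use a different variant, and all names and hyperparameters below are ours.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """One common knowledge-distillation objective.

    The student's softened distribution is matched to the teacher's with a KL
    term (scaled by T^2, as is customary), mixed with the hard-label cross entropy.
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random logits for a batch of 8 examples and 2 classes.
s = torch.randn(8, 2, requires_grad=True)
t = torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
loss = distillation_loss(s, t, y)
loss.backward()
```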
715
scitldr
The key challenge in semi-supervised learning is how to effectively leverage unlabeled data to improve learning performance. The classical label propagation method, despite its popularity, has limited modeling capability in that it only exploits graph information for making predictions. In this paper, we consider label propagation from a graph signal processing perspective and decompose it into three components: signal, filter, and classifier. By extending the three components, we propose a simple generalized label propagation (GLP) framework for semi-supervised learning. GLP naturally integrates graph and data feature information, and offers the flexibility of selecting appropriate filters and domain-specific classifiers for different applications. Interestingly, GLP also provides new insight into the popular graph convolutional network and elucidates its working mechanisms. Extensive experiments on three citation networks, one knowledge graph, and one image dataset demonstrate the efficiency and effectiveness of GLP. The success of deep learning and neural networks comes at the cost of large amount of training data and long training time. Semi-supervised learning BID37 BID8 ) is interesting and important as it can leverage ample available unlabeled data to aid supervised learning, thus greatly saving the cost, trouble, and time for human labeling. Many researches have shown that when used properly, unlabeled data can significantly improve learning performance BID38 BID16. The key challenge for semi-supervised learning is how to effectively leverage the information of unlabeled data, such as graph structures and data features. Label propagation BID39 BID36 BID2 is arguably the most popular method for graph-based semi-supervised learning. As a simple and effective tool, it has been widely used in many scientific research fields and has found numerous industrial applications. Given a non-oriented graph G = (V, W, X) with n = |V| vertices, a nonnegative symmetric affinity matrix W ∈ R n×n + encoding edge weights, and a feature matrix X ∈ R n×m which contains an mdimensional feature vector of each vertex. For semi-supervised classification, only a small subset of vertices are labeled, and the goal is to predict the labels of other vertices. Denote by Y ∈ {0, 1} n×l the labeling matrix 1 with l being the number of classes. The objective of of label propagation (LP) is to find a prediction (embedding) matrix Z ∈ R n×l which agrees with Y while being smooth on the graph such that nearby vertices have similar embeddings: DISPLAYFORM0 where α is a balancing parameter, L = D − W is the graph Laplacian 2 and D is the degree matrix. The term enforcing smoothness is called graph Laplacian regularization or Tikhonov regularization. Solving the quadratic regularization framework gives the prediction of LP.As LP makes predictions only based on graph information (W), its performance depends on whether the underlying graph structure can well represent the class information of data -vertices in the same 1 If the label of vertex vi is known, then Y (i, :) is a one-hot embedding of vi with yij = 1 if vi belongs to the j-th class and yij = 0 otherwise. If the label of vertex vi is not given, then Y (i, :) is a vector of all zeros.2 Other variants such as the normalized Laplacian matrices are also applicable.cluster tend to have same labels. For some applications such as social network analysis, data exhibits a natural graph structure. 
For some other applications such as image or text classification, data may come in a vector form, and a graph is usually constructed using data features. Nevertheless, in many cases, graphs only partially encode data information. Take document classification in a citation network as an example, the citation links between documents form a graph which represents their citation relation, and each document is represented as a bag-of-words feature vector which describes its content. To correctly classify a document, both the citation relations (W) and the content information (X) need to be taken into account, as they contain different aspects of document information. However, in this case, LP can only exploit the graph information to make predictions without using any of the feature information, thus ing in poor performance. To go beyond the limit of LP and jointly model graph and feature information, a common approach is to train a supervised learner to classify data features while regularizing the classifier using graph information. Manifold regularization BID1 trains a support vector machine with a graph Laplacian regularizer. Deep semi-supervised embedding BID32 and Planetoid BID34 ) train a neural network with an embedding-based regularizer. The recently proposed graph convolutional neural networks BID16 ) adopts a different approach by integrating graph and feature information in each of its convolutional layer, which is coupled with a projection layer for classification. In this paper, we extends the modeling capability of LP in the context of graph signal processing. Casted in the spectral domain, LP can be interpreted as low-pass graph filtering BID10 BID11. In light of this, we decompose LP into three components: graph signal, graph filter, and classifier. By naturally extending the three components, we propose a generalized label propagation (GLP) framework for semi-supervised learning. In GLP, a low-pass graph filter is applied on vertex features to produce smooth features, which are then fed to a supervised learner for classification. After filtering, the data features within each class are more similar and representative, making it possible to train a good classifier with few labeled examples. GLP not only extends LP to incorporate vertex features in a simple way, but also offers the flexibility of designing appropriate graph filters and adopting domain-specific classifiers for different semisupervised applications. The popular graph convolutional networks (GCN) BID16 is closely related to GLP. In fact, GCN without internal ReLUs is a special case of GLP with a certain graph filter and a multilayer perceptron classifier. When revisited under the GLP framework, it makes clear the working mechanisms of GCN including its design of convolutional filter and model parameter setting. Extensive experiments on citation networks, knowledge graphs, and image datasets show substantial improvement of GLP over GCN and other baselines for semi-supervised classification, confirming the effectiveness of this simple and flexible framework. The rest of the paper is organized as follows. Section 2 interprets LP in the context of graph signal processing. Section 3 presents the proposed GLP framework. Section 4 revisits GCN under GLP. Section 5 discusses the design of graph filters for GLP. Section 6 presents experimental . Section 7 discusses related works. Finally, section 8 concludes the paper. In this section, we provide a spectral view of LP in the context of graph signal processing. 
In graph signal processing BID24, the eigenvectors and eigenvalues of the graph Laplacian play the role of Fourier basis and frequencies in parallel with classical harmonic analysis. The graph Laplacian matrix can be eigen-decomposed as: L = ΦΛΦ −1, where Λ = diag(λ 1, · · ·, λ n) are the eigenvalues in an increasing order, i.e., 0 = λ 1 ≤ · · · ≤ λ n, and Φ = (φ 1, · · ·, φ n) are the associated orthogonal eigenvectors. Note that the row normalized graph Laplacian L r = D −1 L and the symmetrically normalized graph Laplacian L s = D A graph signal is a real-valued function f: V → R defined on the vertex set of a graph. Denote by f = (f (v 1), · · ·, f (v n)) a graph signal in a vector form. Consider (φ i) 1≤i≤n as basis functions. Any graph signal f can be decomposed into a linear combination of the basis functions: DISPLAYFORM0 where c = (c 1, · · ·, c n) and c i is the coefficient of φ i. The magnitude of the coefficient |c i | represents the strength of the basis function φ i presented in the signal f.A graph filter is defined as a matrix G ∈ R n×n. G is linear shift-invariant BID22, if and only if there exists an function p(·): R → R, satisfying G = Φp(Λ)Φ −1, where DISPLAYFORM1 It is well known that the basis functions associated with lower frequencies (smaller eigenvalues) are smoother BID38, as the smoothness of φ i can be measured by λ i: DISPLAYFORM2 This indicates that a smooth signal f should contain more low-frequency components than highfrequency components. To produce a smooth signal, the graph filter G should be able to preserve the low-frequency components in f while filtering out the high-frequency components. By Eq., we havef DISPLAYFORM3 In the filtered signalf, the coefficient c i of the basis function φ i is scaled by p(λ i). To preserve the low-frequency components and remove the high-frequency components, p(λ i) should amplify c i when λ i is small and suppress c i when λ i is large. Simply put, p(·) should behave like a low-pass filter in classical harmonic analysis. The prediction (embedding) matrix of LP can be obtained by taking the derivative of the unconstrained quadratic optimization problem in Eq. and setting it to zero: DISPLAYFORM0 With the prediction matrix Z, each unlabeled vertex v i is usually classified by simply comparing the elements in Z(i, :). In some methods, a normalization scheme may be applied on the columns of Z first before the comparison BID39.Casted in the context of graph signal processing, LP can be decomposed into three components: signal, filter, and classifier. By Eq., the input signal matrix of LP is the labeling matrix Y, where it has l channels and each column Y (:, i) can be considered as a graph signal. In Y (:, i), only the labeled vertices in class i have value 1 and others 0.The graph filter used in LP is DISPLAYFORM1 with frequency response function DISPLAYFORM2 Note that this also holds for the normalized graph Laplacians. As shown in FIG2, the frequency response function of LP is low-pass. For any α > 0, p(λ i) is near 1 when λ i is close to 0 and p(λ i) decreases and approaches 0 as λ i increases. Applying the filter on signal Y (:, i), it will produce a smooth signal Z(:, i) in which vertices of the same class have similar values and vertices in class i have larger values than others under the cluster assumption. The balancing parameter α controls the degree of the graph Laplacian regularization. When α increases, the filter becomes more low-pass (FIG2) and will produce smoother embeddings. 
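As a concrete illustration of this signal/filter view, here is a dense NumPy sketch of the closed-form LP embedding, assuming the unnormalized Laplacian L = D - W and the solution Z = (I + alpha*L)^{-1} Y implied by the quadratic objective; the paper's chosen normalization may differ.

```python
import numpy as np

def label_propagation(W, Y, alpha=1.0):
    """Closed-form label propagation: Z = (I + alpha * L)^{-1} Y with L = D - W.

    Spectrally, this applies the low-pass response p(lambda) = 1 / (1 + alpha*lambda)
    to each column of the labeling matrix Y.
    """
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(W.shape[0]) + alpha * L, Y)

# Toy path graph with one labeled vertex per class (rows of Y are one-hot or zero).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
Y = np.zeros((4, 2)); Y[0, 0] = 1.0; Y[3, 1] = 1.0
Z = label_propagation(W, Y, alpha=2.0)
labels = Z.argmax(axis=1)   # unlabeled vertices inherit the nearer seed's class
```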
Finally, LP applies a nonparametric classifier on the embeddings to classify the unlabeled vertices, i.e., the label of an unlabeled vertex v_i is given by y_i = arg max_j Z(i, j). We propose a generalized label propagation (GLP) framework by naturally generalizing the three components of LP for semi-supervised classification: • Signal: Use the feature matrix X instead of the labeling matrix Y as the input signal. • Filter: The filter G can be any low-pass, linear, shift-invariant filter. • Classifier: The classifier can be any classifier trained on the embeddings of labeled vertices. GLP consists of two steps. First, a low-pass, linear, shift-invariant graph filter G is applied on the feature matrix X to obtain a smooth feature matrix X̃ ∈ R^{n×m}: DISPLAYFORM0 The next step is to train a supervised classifier (e.g., multilayer perceptron, convolutional neural networks, support vector machines, etc.) with the filtered features of labeled data, and then apply the classifier on the filtered features of unlabeled data to predict their labels. GLP naturally combines graph and feature information in Eq. FORMULA9, and allows taking advantage of a powerful supervised classifier. The rationale behind GLP is to learn representative feature vectors of each class, easing the downstream classification task. After being filtered by G, vertices in the same class are expected to have more similar and representative features, which makes it much easier to train a good classifier with very few samples. Consider an extreme case in which each class is a connected component of the graph. In this case, we can learn perfect features by an extremely low-pass filter G, whose spectrum p(·) is the unit impulse function, i.e., p(0) = 1 and p(λ) = 0 for λ ≠ 0. We can compute G = Φp(Λ)Φ^{−1} in the spatial domain. In particular, G_{ij} = 1/l_k if v_i and v_j are of the same class, and G_{ij} = 0 otherwise, where l_k is the number of labeled samples in class k. After being filtered by G, vertices in the same class will have an identical feature vector, which is the class mean. Then any classifier that can correctly classify the labeled data will achieve 100% accuracy on the unlabeled data, and only one labeled example per class is needed to train the classifier. In this section, we show that graph convolutional networks (GCN) for semi-supervised classification can be interpreted under the GLP framework, which explains their implicit design features, including the number of layers, the choice of the normalized graph Laplacian, and the renormalization trick on the convolutional filter. Graph Convolutional Networks. The GCN model contains three steps. First, a renormalization trick is applied on the adjacency matrix W by adding a self-loop to each vertex, which results in a new adjacency matrix W̃ = W + I with the degree matrix D̃ = D + I. After that, symmetrically normalize W̃ and get W̃_s = D̃^{−1/2} W̃ D̃^{−1/2}. Second, each layer computes DISPLAYFORM0 where H^(t) is the matrix of activations in the t-th layer and DISPLAYFORM1 is the trainable weight matrix in layer t, and σ is the activation function, e.g., ReLU(·) = max(0, ·). The graph convolution is defined by multiplying the input of each layer with the renormalized adjacency matrix W̃_s from the left, i.e., W̃_s H^(t). The convolved features are then fed into a projection matrix Θ^(t). Third, stack two layers up and apply a softmax function on the output features to produce a prediction matrix: DISPLAYFORM2 and train the model using the cross-entropy loss over the labeled instances.
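A small NumPy sketch of the renormalization trick just described, together with the ReLU-free reading of k stacked graph convolutions as a single smoothing filter (the interpretation the next paragraph makes precise); all helper names are ours.

```python
import numpy as np

def renormalized_adjacency(W):
    """GCN renormalization trick: add self-loops, then normalize symmetrically,
    i.e. W_s = D_tilde^{-1/2} (W + I) D_tilde^{-1/2}."""
    W_tilde = W + np.eye(W.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(W_tilde.sum(axis=1))
    return W_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def smoothed_features(W, X, k=2):
    """GLP view of a k-layer GCN with internal ReLUs removed: filter X with W_s^k,
    then hand the result to any supervised classifier."""
    W_s = renormalized_adjacency(W)
    X_s = X.copy()
    for _ in range(k):
        X_s = W_s @ X_s
    return X_s

# Example: two connected vertices; their features are pulled toward each other.
W = np.array([[0., 1.], [1., 0.]])
X = np.array([[1., 0.], [0., 1.]])
print(smoothed_features(W, X, k=2))
```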
The graph convolution in each layer of the GCN model actually performs feature smoothing with a low-pass filterW s = I −L s, whereL s is the symmetrically normalized graph Laplacian of the graph with extra self-loops. Suppose thatL s can be eigen-decomposed asL s = ΦΛΦ −1, then we have I −L s = Φ(I −Λ)Φ −1. The frequency response function of the filter is DISPLAYFORM0 Clearly, as shown in FIG2, this function is linear and low-pass on the interval, but not on, as it amplifies the eigenvalues near 2.Interestingly, by removing the activation function ReLU in Eq., we can see that GCN is a special case of GLP, where the input signal is X, the filter isW 2 s, and the classifier is a two-layer multi-layer perceptron (MLP).Why the Normalized Graph Laplacian. Note that the eigenvalues of the normalized Laplacians L s and L r all fall into interval , while the unnormalized Laplacian L has eigenvalues in [0, +∞]. If using the unnormalized graph Laplacian, the response function in Eq. will amplify eigenvalues in [2, +∞], which will introduce noise and undermine performance. Why Two Convolutional Layers. In Eq., the GCN model stacks two convolutional layers. Without the activation function, the feature matrix is actually filtered by I −L s twice, which is equivalent to be filtered by (I −L s) 2 with response function (1 − λ) 2. As we can see from FIG2, DISPLAYFORM1 2 is more low-pass than (1 − λ) by suppressing the eigenvalues in the mid-range of harder, which explains why GCNs with two convolutional layers perform better than those with only one. Why the Renormalization Trick. The effect of the renormalization trick is illustrated in FIG4, where the response functions on the eigenvalues of L s andL s on the Cora citation network are plotted. We can see that by adding a self-loop to each vertex, the range of eigenvalues shrink from to [0, 1.5], thus avoiding amplifying eigenvalues near 2 and reducing noise. This explains why the renormalization trick works. In this section, we discuss the design and computation of low-pass graph filters for GLP.Auto-Regressive. The Auto-Regressive (AR) filter is the one used in LP: DISPLAYFORM0 Actually p ar is an auto-regressive filter of order one BID29. We have shown p ar is low-pass in section 2.2. However, the computation of p ar involves matrix inversion, which is also computationally expensive with complexity O(n 3). Fortunately, we can circumvent this problem by approximating p ar using its polynomial expansion: DISPLAYFORM1 We can then computeX = p ar (L)X iteratively with DISPLAYFORM2 and letX = 1 1+α X (k). Empirically, we find that k = 4α is enough to get a good approximation. Hence, the computational complexity is reduced to O(nmα + N mα) (note that X is of size n × m), where N is the number of nonzero entries in L, and N n 2 when the graph is sparse. Renormalization. The renormalization (RNM) filter is an exponential function of the renormalized adjacency filter used in GCN: DISPLAYFORM3 We have shown in section 4.1 that although the response function p rnm (λ) = (1 − λ) k is not lowpass, the renormalization trick shrinks the range of eigenvalues ofL and makes p rnm resemble a low-pass filter. The exponent parameter k controls the low-pass effect of p rnm. When k = 0, p rnm is all-pass. When k increases, p rnm becomes more low-pass. Note that for a sparse graph, (I −L) is a sparse matrix. Hence, the fastest way to computeX = p rnm (L)X is to left multiply X by (I −L) repeatedly for k times, which has the computational complexity O(N mk).Random Walk. 
We also propose to design a random walk (RW) filter: DISPLAYFORM4 We call p rw the random walk filter because DISPLAYFORM5 is a stochastic matrix of a lazy random walk which at each step returns to the current state with probability 1 2, and DISPLAYFORM6 is the k-step transition probability matrix. Similarly, we can derive the response function of p rw as DISPLAYFORM7 Note that L r has the same eigenvalues with L s, with range. Unlike the RNM, p rw is a typical low-pass filter on, as shown in FIG2. We can also see in FIG2 that the curves of (1−λ) 2 and (1 − 1 2 λ) 4 are very close, implying that to have the same level of low-pass effect, k in p rw should be set twice as large as in p rnm. This may be explained by the fact that the two functions (1 − λ) DISPLAYFORM8 2k have the same derivative k at λ = 0. On the computation side, RW has the same complexity O(N mk) as RNM.An important issue of filter design for GLP is how to control the strength of filters by setting parameters such as α and k. Intuitively, when labeled data is scarce, it would be desirable for the filtered features of each instance to be closer to its class mean and be more representative of its own class. Hence, in this case, α and k should be set large to produce smoother features. However, oversmoothing usually in inaccurate class boundaries. Therefore, when the amount of labeled data is reasonably large, α and k should be set relatively small to preserve feature diversity in order to learn more accurate class boundaries. Datasets In this section, we test GLP on three citation networks -Cora, CiteSeer and PubMed BID23, one knowledge graph -NELL BID5, and one handwritten digit image dataset -MNIST BID18. Dataset discriptions are provided in Appendix A. Baselines On citation networks and NELL, we compare GLP against GCN BID16, LP BID33, multi-layer perceptron (MLP), Planetoid BID34, DeepWalk BID21, manifold regularization (ManiReg) BID1, semi-supervised embedding (SemiEmb) BID32, and iterative classification algorithm (ICA) BID23. On MNIST, we compare GLP against GCN, LP, MLP, and convolutional neural networks (CNN).Experimental Setup We test GLP with RNM, RW and AR filters (section 5) on all the datasets. We use MLP as the classifier for GLP on citation networks and NELL, and use CNN as the classifier on MNIST. Guided by our analysis in section 5, the filter parameters k and α should be set large with small label rate and set small with large label rate. We use fixed parameters k = 10 for RNM, k = 20 for RW, and α = 20 for AR when label rate is less than or equal to 2%, and set them to 5, 10, 10 respectively otherwise. We follow BID16 to set the parameters of MLP, including learning rate, dropout, and weight decay. To make sure GLP works in practice and for more fair comparison with baselines, we do not use a validation set for classifier model selection as in BID16, instead we select the classifier with the highest training accuracy in 200 steps. Results of GLP and GCN on all the datasets are reported without using a validation set, except that on NELL, we also report the with validation (on the right of "/"). More implementation details are provided in Appendix B due to space limitations. Performance of GLP The are summarized in TAB0, where the top 3 classification accuracies are highlighted in bold. Overall, GLP performs the best on all the datasets. On citation networks, with 20 labels per class, GLP performs comparably with GCN and outperforms other baselines by a considerable margin. 
With 4 labels per class, GLP significantly outperforms all baselines including GCN. On NELL, GLP with RW and RNM filters consistently outperforms the best baseline Planetoid for each setting, and outperforms other baselines including GCN by a large margin. Note that GLP achieves this performance without using any additional validation set. The performance of GLP (and GCN) will be further boosted with validation, as shown on the right of "/". On MNIST, GLP consistently outperforms all baselines for every setting. The running times of GLP and some other baselines are also reported in TAB0. GLP runs much faster than GCN on most datasets, except for NELL, on which the running times of GLP with two of the filters are similar to those of GCN. More discussion of running times is included in Appendix E. Results Analysis Compared with LP and DeepWalk, which only use graph information, the large performance gains of GLP clearly come from leveraging both graph and feature information. Compared with purely supervised MLP and CNN, which are trained on raw features, the performance gains of GLP come from the unsupervised feature filtering. FIG5 visualizes the raw and filtered features (by the RNM filter) of Cora projected by t-SNE (van der Maaten & Hinton, BID30). The filtered features exhibit a much more compact cluster structure, thus making classification much easier. In Appendix C, we show that feature filtering improves the accuracy of various classifiers significantly. Compared with GCN and other baselines that use both graph and feature information, the performance gains of GLP are twofold. First, GLP allows using stronger filters to extract higher-level data representations to improve performance when the label rate is low, which can be easily achieved by increasing the filter parameters k and α, as shown in FIG5. But this cannot be easily achieved in GCN. [Displaced table fragment: "CNN) 94.1 (5.1s) 95.3 (6.7s) 95.6 (8.9s) GLP (AR, CNN) 94.1 (7.2s) 95.5 (8.8s) 95.8 (11.1s)".] As each convolutional layer of GCN is coupled with a projection layer, to increase smoothness one needs to stack many layers, and a deep GCN is difficult to train. Second, GLP allows adopting domain-specific classifiers such as CNN to deal with vision tasks. As shown in TAB2, the performance of a CNN trained on raw features of the labeled data is very competitive and grows fast. Due to space limitations, we include the stability analysis of GLP in Appendix D. Many graph-based semi-supervised learning methods adopt the common assumption that nearby vertices are likely to have the same labels. One idea is to learn smooth low-dimensional embeddings of data points by using Markov random walks BID25, Laplacian eigenmaps BID0, spectral kernels BID7 BID35, and context-based methods BID21. Another idea hinges on graph partition, where the cuts should agree with the labeled data and be placed in low-density regions BID3 BID39 BID13 BID4. Perhaps the most popular idea is to formulate a quadratic regularization framework to explicitly enforce consistency with the labeled data and the cluster assumption, which is known as label propagation BID36 BID6 BID2 BID17. To leverage more data information to improve predictions, a variety of methods have been proposed to jointly model data feature and graph information. BID36 proposed to combine label propagation with external classifiers by attaching a "dongle" vertex to each unlabeled vertex. The iterative classification algorithm BID23 iteratively classifies an unlabeled vertex using its neighbors' labels and features.
Manifold regularization BID1, deep semi-supervised embedding BID32, and Planetoid BID34 regularize a supervised classifier with a Laplacian regularizer or an embedding-based regularizer. Graph convolutional networks BID16 combine graph and feature information in convolutional layers, which is actually doing Laplacian smoothing on data features. Follow-up works include graph attention networks BID31, attention-based graph neural network BID28, and graph partition neural networks BID20.The idea of feature smoothing has been widely used in computer graphics community for fairing 3D surface BID27 a; BID9. BID12 proposed manifold denoising which uses feature smoothing as a preprocessing step for running a label propagation algorithm, i.e, the denoised features are used to construct a better graph for LP. This method is still "onedimensional", as it cannot use the preexisting graph information in data such as citation networks. In contrast, the proposed GLP and the GCN frameworks are "two-dimensional". In this paper, we have proposed a simple, flexible, and efficient framework GLP for semi-supervised learning, and demonstrated its effectiveness theoretically and empirically. GLP offers new insights into existing methods and opens up possible avenues for new methods. An important direction for future research is the design and selection of graph filters for GLP in different application scenarios. Other directions include making GLP readily applicable to inductive problems, developing faster algorithms for GLP, and applying GLP to solve large-scale real-world problems. We include dataset descriptions, experimental details, supplementary experiments, stability analysis, and running time analysis here. Citation networks BID23 are networks that record documents' citation relationship. In citation networks, vertices are documents and edges are citation links. A pair of vertices are connected by an undirected edge if and only if one cites another. Each vertex is associated with a feature vector, which encodes the document content. In the three citation networks we tested on, CiteSeer, Cora and PubMed, feature vectors are 0/1 vectors that have the same length as the dictionary size and indicate whether a word appears in a document. The statistics of datasets are summarized in TAB3.Never Ending Language Learning (NELL) BID5 ) is a knowledge graph introduced by Carlson et al.. Yang et al. extracted an entity classification dataset from NELL, and converted the knowledge graph into a single relation graph. For each relation type r, they created two new vertices r 1 and r 2 in the graph. For each triplet (e 1, r, e 2), they created two edges (e 1, r 1) and (e 2, r 2). We follow BID16 to extend the features by assigning a unique one-hot representation for every relation vertex, ing in a 61,278-dimensional sparse feature vector for each vertex. Dataset statistics are also provided in TAB3.MNIST contains 70,000 images of handwritten digits from 0 to 9 of size 28 × 28. Each image is represented by a dense 784-dimensional vector where each dimension is a gray intensity pixel value. A 5-NN graph is constructed based on the Euclidean distance between images. If the i-th image is within the j-th image's 5 nearest neighbors or vice versa, then w ij = w ji = 1, otherwise w ij = w ji = 0. We provide more experimental details here for the sake of reproduction. Parameters We set k = 10 for RNM, k = 20 for RW, and α = 20 for AR, if label rate is less or equal than 2%; otherwise, we set them to 5, 10, 10 respectively. 
Networks On citation networks, we follow BID16 to use a two-layer MLP with 16 hidden units for citation networks, 0.01 learning rate, 0.5 dropout rate, and 5 × 10 −4 L2 regularization. On NELL, we also follow BID16 to use 64 hidden units, 10 −5 L2 regularization, 0.1 dropout rate and two layer-structure. On MNIST, we use 256 hidden units, 0.01 learning rate, 0.5 dropout rate, and 5 × 10 −4 L2 regularization. The CNN we use consists of six layers, whose structure is specified in TAB4. For CNN, we use 0.003 learning rate and 0.5 dropout. All of MNIST are averaged over 10 runs. We train all networks using Adam BID14.Baselines Results of some baselines including ManiReg, SemiEmb, DeepWalk, ICA, Planetoid are taken from BID16, except for the 4-labels-per-class setting, for which we run BID34. All other are reported by us. To demonstrate the benefit of GLP, we compare training various supervised classifiers with raw and filtered features. The classifiers include support vector machine (SVM), decision tree (DT), logistic regression (LR), and multilayer perceptron (MLP). The are summarized in TAB5. We can see that for all classifiers and on all datasets, there is a huge improvement in classification accuracy with the smooth features produced by the three filters we proposed. This clearly demonstrates the advantage of filtered features over raw features. In this experiment, we use 0.01 learning rate and 5 × 10 −4 L 2 regularization for LR. For SVM, we use the RBF kernel with γ = 1/n and 1.0 L 2 regularization. For DT, we use Gini impurity as quality measure. We use the same parameters for MLP as described in Appendix B. We test how the filter parameters k and α influence the performance of GLP. Figs. 4 to 6 plot the classification accuracies of GLP with different k and α on three citation networks, with 4 labels per class. Consistent with our analysis in section 5, the classification accuracy of GLP first increases and then decreases as k and α increases. The shows that GLP consistently outperforms GCN for a wide range of k and α.
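To tie this back to the filters of Section 5, here is a dense NumPy sketch of the three filters (RNM, RW, and AR) applied to a feature matrix X; in practice sparse matrices and the iterative or polynomial approximations described earlier would be used, and the exact AR scaling in the paper may differ.

```python
import numpy as np

def rnm_filter(W, X, k):
    """Renormalization filter: X <- (I - L_tilde)^k X, where I - L_tilde is the
    renormalized adjacency D_tilde^{-1/2} (W + I) D_tilde^{-1/2}."""
    W_t = W + np.eye(W.shape[0])
    d = np.sqrt(W_t.sum(axis=1))
    S = W_t / d[:, None] / d[None, :]
    for _ in range(k):
        X = S @ X
    return X

def rw_filter(W, X, k):
    """Random-walk filter: X <- (I - 0.5 * L_r)^k X, a k-step lazy random walk."""
    d = W.sum(axis=1)
    P = 0.5 * (np.eye(W.shape[0]) + W / d[:, None])  # equals I - 0.5*L_r with L_r = I - D^{-1} W
    for _ in range(k):
        X = P @ X
    return X

def ar_filter(W, X, alpha):
    """Auto-regressive (LP) filter: X <- (I + alpha*L)^{-1} X, solved exactly here;
    the paper approximates it with a truncated polynomial expansion for large graphs."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(W.shape[0]) + alpha * L, X)

# The smoothed features from any of these filters are then fed to a supervised
# classifier (MLP, CNN, SVM, ...) trained on the labeled vertices only.
```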
We extend the classical label propagation method to jointly model graph and feature information from a graph filtering perspective, and show connections to graph convolutional networks.
716
scitldr
Because the choice and tuning of the optimizer affects the speed, and ultimately the performance of deep learning, there is significant past and recent research in this area. Yet, perhaps surprisingly, there is no generally agreed-upon protocol for the quantitative and reproducible evaluation of optimization strategies for deep learning. We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization. As the primary contribution, we present DeepOBS, a Python package of deep learning optimization benchmarks. The package addresses key challenges in the quantitative assessment of stochastic optimizers, and automates most steps of benchmarking. The library includes a wide and extensible set of ready-to-use realistic optimization problems, such as training Residual Networks for image classification on ImageNet or character-level language prediction models, as well as popular classics like MNIST and CIFAR-10. The package also provides realistic baseline for the most popular optimizers on these test problems, ensuring a fair comparison to the competition when benchmarking new optimizers, and without having to run costly experiments. It comes with output back-ends that directly produce LaTeX code for inclusion in academic publications. It supports TensorFlow and is available open source. As deep learning has become mainstream, research on aspects like architectures BID15 BID16 BID48 BID50 BID41 and hardware BID33 BID9 ) has exploded, and helped professionalize the field. In comparison, the optimization routines used to train deep nets have arguable changed only little. Comparably simple first-order methods like SGD BID38, its momentum variants (MOMENTUM) BID34 BID31 and ADAM BID20 remain standards BID14 BID19. The low practical relevance of more advanced optimization methods is not for lack of research, though. There is a host of papers proposing new ideas for acceleration of first-order methods BID13 BID49 BID54 BID12 BID3 BID24 BID37, incorporation of second-order information BID27 BID28 BID5 BID8, and automating optimization BID43 BID25 BID39, to name just a few. One problem is that these methods are algorithmically involved and difficult to reproduce by practitioners. If they are not provided in packages for popular frameworks like TENSORFLOW, PYTORCH etc., they get little traction. Another problem, which we hope to address here, is that new optimization routines are often not convincingly compared to simpler alternatives in research papers, so practitioners are left wondering which of the many new choices is the best (and which ones even really work in the first place).Designing an empirical protocol for deep learning optimizers is not straightforward, and the corresponding experiments can be time-consuming. This is partly due to the idiosyncrasies of the domain:• Generalization: While the optimization algorithm (should) only ever see the training-set, the practitioner cares about performance of the trained model on the test set. Worse, in some important application domains, the optimizer's loss function is not the objective we ultimately care about. For instance in image classification, the real interest may be in the percentage of correctly labeled images, the accuracy. Since this 0-1 loss is infeasible in practice BID26, a surrogate loss function is used instead. So which score should actually be presented in a comparison of optimizers? 
Train loss, because that is what the optimizer actually works on; test loss, because an over-fitting optimizer is useless, or test accuracy, because that's what the human user cares about?• Stochasticity: Sub-sampling (batching) the data-set to compute estimates of the loss function and its gradient introduces stochasticity. Thus, when an optimizer is run only once on a given problem, its performance may be misleading due to random fluctuations. The same stochasticity also causes many optimization algorithms to have one or several tuning parameters (learning rates, etc.). How should an optimizer with two free parameter be compared in a fair way with one that has only one, or even no free parameters?• Realistic Settings, Fair Competition: There is a widely-held belief that popular standards like MNIST and CIFAR-10 are too simplistic to serve as a realistic place-holder for a contemporary combination of large-scale data set and architecture. While this worry is not unfounded, researchers, ourselves included, have sometimes found it hard to satisfy the demands of reviewers for ever new data sets and architectures. Finding and preparing such data sets and building a reasonable architecture for them is time-consuming for researchers who want to focus on their novel algorithm. Even when this is done, one then has to not just run one's own algorithm, but also various competing baselines, like SGD, MOMENTUM, ADAM, etc. This step does not just cost time, it also poses a risk of bias, as the competition invariably receives less care than one's own method. Reviewers and readers can never be quite sure that an author has not tried a bit too much to make their own method look good, either by choosing a convenient training problem, or by neglecting to tune the competition. To address these problems, we propose an extensible, open-source benchmark specifically for optimization methods on deep learning architectures. We make the following three contributions:• A protocol for benchmarking stochastic optimizers. Section 2 discusses and recommends best practices for the evaluation of deep learning optimizers. We define three key performance indicators: final performance, speed, and tunability, and suggest means of measuring all three in practice. We provide evidence that it is necessary to show the of multiple runs in order to get a realistic assessment. Finally, we strongly recommend reporting both loss and accuracy, for both training and test set, when demonstrating a new optimizer as there is no obvious way those four learning curves are connected in general.• DEEPOBS 1, a deep learning optimizer benchmark suite. We have distilled the above ideas into an open-source python package, written in TENSORFLOW BID0, which automates most of the steps presented in section 2. The package currently provides over twenty off-the-shelf test problems across four application domains, including image classification and natural language processing, and this collection can be extended and adapted as the field makes progress. The test problems range in complexity from stochastic two dimensional functions to contemporary deep neural networks capable of delivering near state-of-the-art on data sets such as IMAGENET. The package is easy to install in python, using the pip toolchain. It automatically downloads data sets, sets up models, and provides a back-end to automatically produce L A T E X code that can directly be included in academic publications. 
This automation does not just save time, it also helps researchers to create reproducible, comparable, and interpretable .• Benchmark of popular optimizers From the collection of test problems, two sets, of four simple ("small") and four more demanding ("large") problems, respectively, are selected as a core set of benchmarks. Researchers can design their algorithm in rapid iterations on the simpler set, then test on the more demanding set. We argue that this protocol saves time, while also reducing the risk of over-fitting in the algorithm design loop. The package also provides realistic baselines for the most popular optimizers on those test problems. In Section 4 we report on the performance of SGD, SGD with momentum (MOMENTUM) and ADAM on the small and large benchmarks (this also demonstrates the output of the benchmark). For each optimizer we perform an exhaustive but realistic hyperparameter search. The best performing are provided with DEEPOBS and can be used as a fair performance metric for new optimizers without the need to compute these baselines again. We invite the authors of other algorithms to add their own method to the benchmark (via a git pull-request). We hope that the benchmark will offer a common platform, allowing researchers to publicise their algorithms, giving practitioners a clear view on the state of the art, and helping the field to more rapidly make progress. To our knowledge, there is currently no commonly accepted benchmark for optimization algorithms that is well adapted to the deep learning setting. This impression is corroborated by a more or less random sample of recent research papers on deep learning optimization BID13 BID54 BID20 BID28 BID12 BID3 BID24 BID37, whose empirical sections follow no joint standard (beyond a popularity of the MNIST data set). There are a number of existing benchmarks for deep learning as such. However, they do not focus on the optimizer. Instead, they are either framework or hardwarespecific, or cover deep learning as a holistic process, wrapping together architecture, hardware and training procedure, The following are among most popular ones: DAWNBench The task in this challenge is to train a model for IMAGENET, CIFAR-10 or SQUAD BID35 as quickly as possible to a specified validation accuracy, tuning the entire tool-chain from architecture to hardware and optimizer BID10.MLPerf is another holistic benchmark similar to DAWNBench. It has two different rule sets; only the'open' set allows a choice of optimization algorithm BID30. DLBS is a benchmark focused on the performance of deep learning models on various hardware systems with various software .DeepBench tests the speed of hardware for the low-level operations of deep learning, like matrix products and convolutions .Fathom is another hardware-centric benchmark, which among other things assesses how computational resources are spent.TBD focuses on the performance of three deep learning frameworks BID56.None of these benchmarks are good test beds for optimization research. BID42 defined unit tests for stochastic optimization. In contrast to the present work, they focus on small-scale problems like quadratic bowls and cliffs. In the context of deep learning, these problems provide unit tests, but do not give a realistic impression of an algorithm's performance in practice. This section expands the discussion from section 1 of design desiderata for a good benchmark protocol, and proposes ways to nevertheless arrive at an informative, fair, and reproducible benchmark. 
The optimizer's performance in a concrete training run is noisy, due to the random sampling of mini-batches and initial parameters. There is an easy remedy, which nevertheless is not universally adhered to: Optimizers should be run on the same problem repeatedly with different random seeds, and all relevant quantities should be reported as mean and standard deviation of these samples. This allows judging the statistical significance of small performance differences between optimizers, and exposes the "variability" of performance of an optimizer on any given problem. The obvious reason why researchers are reluctant to follow this standard is that it requires substantial computational effort. DEEPOBS alleviates this issue in two ways: It provides functionality to conveniently run multiple runs of the same setting with different seeds. More importantly, it provides stored baselines of popular optimizers, freeing computational resources to collect statistics rather than baselines. Training a machine learning system is more than a pure optimization problem. The optimizers' immediate objective is training loss, but the users' interest is in generalization performance, as estimated on a held-out test set. It has been observed repeatedly that in deep learning, different optimizers of similar training-set performance can have surprisingly different generalization (e.g.). Moreover, the loss function is regularly just a surrogate for the metric the user is ultimately interested in. In classification problems, for example, we are interested in classification accuracy, but this is infeasible to optimize directly. Thus, there are up to four relevant metrics to consider: training loss, test loss, training accuracy and test accuracy. We strongly recommend reporting all four of these to give a comprehensive assessment of a deep learning optimizer. For hyperparameter tuning, we use test accuracy or, if that is not available, test loss, as the criteria. We also use them as the performance metrics in TAB2.For empirical plots, many authors compute train loss (or accuracy) only on mini-batches of data, since these are computed during training anyway. But these mini-batch quantities are subject to significant noise. To get a decent estimate of the training-set performance, whenever we evaluate on the test set, we also evaluate on a larger chunk of training data, which we call a train eval set. In addition to providing a more accurate estimate, this allows us to "switch" the architecture to evaluation mode (e.g. dropout is not used during evaluation). Relevant in practice is not only the quality of a solution, but also the time required to reach it. A fast optimizer that finds a decent albeit imperfect solution using a fraction of other methods' resources can be very relevant in practice. Unfortunately, since learning curves have no parametric form, there is no uniquely correct way to define "time to convergence". In DEEPOBS, we take a pragmatic approach and measure the time it takes to reach an "acceptable" convergence performance, which is individually defined for each test problem from the baselines SGD, MOMENTUM and ADAM each with their best hyperparameter setting. Arguably the most relevant measure of speed would be the wall-clock time to reach this convergence performance. However, wall-clock runtime has well-known drawbacks, such as dependency on hardware or weak reproducibility. So many authors report performance against gradient evaluations, since these often dominate the total computational costs. 
However, thiscan hide large per-iteration overhead. We recommend first measuring wall-clock time of both the new competitor and SGD on one of the small test problems for a few iterations, and computing their ratio. This computation, which can be done automatically using DEEPOBS, can be done sequentially on the same hardware. One can then report performance against the products of iterations and per-iteration cost relative to SGD.For many first-order optimization methods, such as SGD, MOMENTUM or ADAM, the choice of hyperparameters does not affect the runtime of the algorithm. However, more evolved optimization methods, e.g. ones that dynamically estimate the Hessian, the hyperparameters can influence the runtime significantly. In those cases, it is suggested to repeat the runtime estimate for different hyperparameters. Almost all deep learning optimizers expose tunable hyperparameters, e.g., step sizes or averaging constants. The ease of tuning these hyperparameters is a relevant characteristic of an optimization method. How does one "fairly" compare optimizers with tunable hyperparameters?A full analysis of the effects of an optimizer's hyperparameters on its performance and speed is tedious, especially since they often interact. Even a simpler sensitivity analysis requires a large number of optimization runs, which are infeasible for most users. Such analyses also do not take into account if hyperparameters have default values that work for almost all optimization problems and therefore require no tuning in general. Instead we recommend that authors find and report the bestperforming hyperparameters for each test problem. Since DEEPOBS covers multiple test problems, the spread of these best choices gives a good impression of the required tuning. Additionally, we suggest reporting the relative performance of the hyperparameter settings used during this tuning process FIG5 shows an example). Doing so yields a characterization of tunability without additional computations. For the baselines presented in this paper, we chose a simple log-grid search to tune the learning rate. While this is certainly not an optimal tuning method, and more sophisticated methods exists (e.g. BID4, BID45), it is nevertheless used often in practice and reveals interesting properties about the optimizers and their tunability. Other tuning methods can be used with DEEPOBS however, this would require recomputing the baselines as well. DEEPOBS supports authors in adhering to good scientific practice by removing various moral hazards. The baseline for popular optimizers (whose hyperparameters have been tuned by us or, in the future, the very authors of the competing methods) avoid "starving" the competition of attention. When using different hyperparameter tuning methods, it is necessary to allocate the same computational budget for all methods in particular when comparing optimization methods of varying number of hyperparameters. The fixed set of test problems provided by the benchmark makes it impossible to (knowingly or subconsciously) cherry-pick problems tuned to a new method. And finally, the fact that the benchmark spreads over multiple such problem sets constitutes a mild but natural barrier to "overfit" the optimizer method to established data sets and architectures (like MNIST). Performances of the most popular optimizers..tex files of learning curves for new optimizer and the baselines. DEEPOBS provides the full stack required for rapid, reliable, and reproducible benchmarking of deep learning optimizers. 
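Returning to the runtime measurement recommended above, here is a generic timing sketch of that overhead estimate; `run_new_optimizer_steps` and `run_sgd_steps` are hypothetical callables that each execute the same fixed number of training steps on the same small test problem (DeepOBS ships its own estimator, whose interface may differ).

```python
import time

def runtime_overhead(run_new_optimizer_steps, run_sgd_steps, num_repeats=5):
    """Per-iteration wall-clock overhead of a new optimizer relative to SGD."""
    def best_time(fn):
        times = []
        for _ in range(num_repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)  # the minimum is fairly robust to background load

    return best_time(run_new_optimizer_steps) / best_time(run_sgd_steps)

# Toy illustration with dummy workloads standing in for the two training loops.
ratio = runtime_overhead(lambda: time.sleep(0.02), lambda: time.sleep(0.01))
```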
At the lowest level, a data loading (§3.1) module automatically loads and preprocesses data sets downloaded from the net. These are combined with a list of models (§3.2) to define test problems. At the core of the library, runners (§3.3) take care of the actual training and log a multitude of statistics, e.g., training loss or test accuracy. Baselines (§3.4) are provided for a collection of competitors. They currently include the popular choices SGD (raw, and with MOMENTUM) and ADAM, but we invite authors of other methods to contribute their own. The visualization (§3.6) script maps the results to LaTeX output. Future releases of DEEPOBS will include a version number that follows the pattern MAJOR.MINOR.PATCH, where MAJOR versions will differ in the selection of the benchmark sets, MINOR versions signify changes that could affect the results, and PATCHES will not affect the benchmark results. All results obtained with the same MAJOR.MINOR version of DEEPOBS will be directly comparable; all results with the same MAJOR version will compare on the same problems. We now give a brief overview of the functionality; the full documentation can be found online. Excluding IMAGENET, the downloaded data sets require less than one GB of disk space. The DEEPOBS data loading module then performs all necessary processing of the data sets to return inputs and outputs for the deep learning model (e.g. images and labels for image classification). This processing includes splitting, shuffling, batching and data augmentation. The data loading module can also be used to build new deep learning models that are not (yet) part of DEEPOBS. Together, data set and model define a loss function and thus an optimization problem. TAB1 provides an overview of the data sets and models included in DEEPOBS. We selected problems for diversity of task as well as for the difficulty of the optimization problem itself. The list includes popular image classification models on data sets like MNIST, CIFAR-10 or IMAGENET, but also models for natural language processing and generative models. Additionally, three two-dimensional problems and an ill-conditioned quadratic problem are included. These simple tests can be used as illustrative toy problems to highlight properties of an algorithm and to perform sanity checks. Over time, we plan to expand this list as hardware and research progress render small problems out of date and introduce new research directions and more challenging problems. The runners of the DEEPOBS package handle training and the logging of statistics measuring the optimizer's performance. For optimizers following the standard TensorFlow optimizer API, it is enough to provide the runners with a list of the optimizer's hyperparameters. We provide a template for this, as well as an example of including a more sophisticated optimizer that cannot be described as a subclass of the TensorFlow optimizer API. DEEPOBS also provides realistic baselines for, currently, the three most popular optimizers in deep learning: SGD, MOMENTUM, and ADAM. These allow comparing a newly developed algorithm to the competition without computational overhead, and without risk of conscious or unconscious bias against the competition. Section 4 describes how these baselines were constructed and discusses their performance. Baselines for further optimizers will be added when authors provide the optimizer's code, assuming the method performs competitively.
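To make the role of a runner concrete, the following schematic shows how an optimizer, given only its class and hyperparameters, is wired to a test problem, trained, and logged. This is an illustrative sketch, not the actual DEEPOBS interface; `test_problem` is a hypothetical object exposing `train_batches()`, `loss()`, `variables`, and `evaluate()`.

```python
import tensorflow as tf

def run_test_problem(optimizer_class, hyperparams, test_problem, num_epochs=10):
    # optimizer_class: e.g. a tf.keras.optimizers.Optimizer subclass;
    # hyperparams: its keyword arguments (learning rate, momentum, ...).
    optimizer = optimizer_class(**hyperparams)
    log = {"train_loss": [], "test_loss": [], "train_acc": [], "test_acc": []}
    for _ in range(num_epochs):
        for batch in test_problem.train_batches():
            with tf.GradientTape() as tape:
                loss = test_problem.loss(batch)
            grads = tape.gradient(loss, test_problem.variables)
            optimizer.apply_gradients(zip(grads, test_problem.variables))
        # Once per epoch: evaluate on the test set and on a larger "train eval" set.
        for key, value in test_problem.evaluate().items():
            log[key].append(value)
    return log
```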
Currently, baselines are available for all test problems in the small and large benchmark set; we plan to provide baselines for the full set of models in the near future.3.5 ESTIMATE RUNTIME DEEPOBS provides an option to quickly estimate the runtime overhead of a new optimization method compared to SGD. It measures the ratio of wall-clock time between the new optimizer and SGD. By default this ratio is measured on five runs each, for three epochs, on a fully connected network on MNIST. However, this can be adapted to a setting which fairly evaluates the new optimizer, as some optimizers might have a high initial cost that amortizes over many epochs. The DEEPOBS visualization module reduces the overhead for the preparation of , and simultaneously standardizes the presentation, making it possible to include a comparably large amount of information in limited space. The module produces.tex files with pgfplots-code for all learning curves for the proposed optimizer as well as the most relevant baselines (section 4 includes an example of this output). For the baseline provided with DEEPOBS, we evaluate three popular deep learning optimizers (SGD, MOMENTUM and ADAM) on the eight test problems that are part of the small (problems P1 to P4) and large (problems P5 to P8) benchmark set (cf. TAB1). The learning rate α was tuned for each optimizer and test problem individually, by evaluating on a logarithmic grid from α min = 10 −5 to α max = 10 2 with 36 samples. Once the best learning rate has been determined, we run those settings ten times with different random seeds. While we are using a log grid search, researchers are free to use any other hyperparameter tuning method, however this would require re-running the baselines as well. FIG4 shows the learning curves of the eight problems in the small and large benchmark set. TAB2 summarizes the from both benchmark sets. We focus on three main observations, which corroborate widely-held beliefs and support the case for an extensive and standardized benchmark. There is no optimal optimizer for all test problems. While ADAM compares favorably on most test problems, in some cases the other optimizers are considerably better. This is most notable on CIFAR-100, where MOMENTUM is significantly better then the other two. The connection between the four learning metrics is non-trivial. Looking at P6 and P7 we note that the optimizers rank differently on train vs. test loss. However, there is no optimizerthat universally generalizes better than the others; the generalization performance is evidently problem-dependent. The same holds for the generalization from loss to accuracy (e.g. P3 or P6).ADAM is somewhat easier to tune. Between the eight test problems, the optimal learning rate for each optimizer varies significantly. FIG5 shows the final performance against learning rate for each of the eight test problems. There is no significant difference between the three optimizers in terms of their learning rate sensitivity. However, in most cases, the order of magnitude of the optimal learning rate for ADAM is in the order of 10 (with the exception of P1), while for SGD and MOMENTUM this spread is slightly larger. Deep learning continues to pose a challenging domain for optimization algorithms. Aspects like stochasticity and generalization make it challenging to benchmark optimization algorithms against each other. 
We have discussed best practices for experimental protocols and presented the DEEPOBS package, which provides an open-source implementation of these standards. We hope that DEEPOBS can help researchers working on optimization for deep learning to build better algorithms, by simultaneously making the empirical evaluation simpler, yet also more reproducible and fair. By providing a common ground on which methods can be compared, we aim to speed up the development of deep-learning optimizers and to aid practitioners in choosing an algorithm.
We provide a software package that drastically simplifies, automates, and improves the evaluation of deep learning optimizers.
717
scitldr
Pre-trained word embeddings are the primary method for transfer learning in several Natural Language Processing (NLP) tasks. Recent works have focused on using unsupervised techniques such as language modeling to obtain these embeddings. In contrast, this work focuses on extracting representations from multiple pre-trained supervised models, which enriches word embeddings with task and domain specific knowledge. Experiments performed in cross-task, cross-domain and crosslingual settings indicate that such supervised embeddings are helpful, especially in the lowresource setting, but the extent of gains is dependent on the nature of the task and domain. Named entity recognition, semantic role labelling, relation extraction etc. can be thought of as primary tasks necessary for solving high level tasks like question answering, summarization etc. However, labelling large amounts of data at this granularity is not only prohibitively expensive, but also unscalable. Given that high performance models for these tasks already exist, it is desirable to leverage them for other language understanding tasks. Next, consider the domain adaptation setting where some domains have a lot of data, while others do not. A model for a low-resource domain would benefit from information in expert models trained on other data rich domains. Finally, consider the setting of cross-lingual adaptation, a common problem for personal assistants expanding to more languages. As the number of languages increases, it becomes unfeasible to obtain human annotated data. Again, the need to adapt to low-resource languages can be met by leveraging models that already exist for high-resource languages. Motivated by the above scenarios, we propose a simple method to transfer supervised knowledge, from multiple sources, in an easy to implement manner. In our approach, this knowledge is extracted from source models in the form of contextual word embeddings. We treat preexisting models as embedding extractors, which are used to extract token level representations for an input sentence. These representations are then combined via a task specific convex combination. Unsupervised transfer learning methods such as ELMo have shown great success for a variety of tasks BID15. While they have the advantage of being trained on very large corpora, the training objectives are unsupervised. We show that in low-resource settings especially, leveraging representations from multiple pre-trained supervised models in related tasks, domains or languages can prove to be beneficial. The common way of supervised transfer learning via fine-tuning can transfer information only from a single source task BID11. One way to incorporate information from multiple external sources is via multi-task learning BID5 BID17. The limitations of multitask learning are the need for labelled data for the source models, longer training times and complex design decisions (weighing the losses for each task, sampling strategies, and choice of architecture). In contrast, our plug-and-play approach is simple and does not assume availability of source model data at training time. Finally, our approach also provides some interpretability (through the parameters of the convex combination) into which source tasks or domains are important for which other tasks and domains. Our work aligns most with the following three directions of research. Unsupervised transfer learning Embeddings such as GloVe and FastText have become an integral part of the modern NLP pipeline BID13; BID0. 
Over the last year, language model based deep contextualized embedding methods such as ELMo have shown substantial improvements over their shallow counterparts, heralding a new era of word representations BID15. In terms of modelling approach, our work is similar to BID7, where the authors use multiple existing models for domain adaptation for spoken language understanding. In comparison, our work focuses not just on the domain adaptation, but also the cross-task and cross-lingual settings. In another work, BID1 create metaembeddings from multiple embeddings like GloVe, Fasttext etc. Most deep learning models can be thought of as having an encoder E and decoder D. For example in a Deep-SRL model BID6, stacked bidirectional LSTM constitutes E, while D is the softmax layer. Assume K existing supervised models either for different tasks or different domains M 1,..., M K and corresponding encoders E 1,..., E K. Given a sentence of N tokens (t 1, t 2, ..., t N), we feed these tokens to the K different encoders and get K different representations for each token. We denote the encoder output of the kth model for the nth token by h k n. Each encoder generates representations specialized for the task, domain, or language it was trained for. Since our approach assumes no explicit information about the encoders of the model, they can be of varying dimensions and use different underlying architectures. Evidently, they would also be in different vector spaces and therefore we first use a projection layer to bring all of them in the same vector space. The parameters of these projection layers W 1,... W K are learned along with the target model parameters. DISPLAYFORM0 For inclusion in a downstream model, we aggregate the projection layer output of all the different source models into one vector. Several aggregation schemes can be employed: pooling, convex combination, attention etc. We choose the simple yet interpretable convex combination approach, as described below. This technique is similar to one used by ELMo BID15. We use a softmax normalized weight s k corresponding to each of the different representations of the word, add them up and use a scalar parameter γ that scales up the whole vector. The embedding O n for the nth word comes out to be: DISPLAYFORM0 This approach adds K + 1 trainable parameters to the model. An advantage of combining the representations in this manner is that the size of the embedding is fixed irrespective of the number of source models used. Once we get a combined representation, it can be used in the target model just like any other embedding. In our experiments, we concatenate these embeddings with traditional GloVe or ELMo embeddings. We use the proposed supervised contextual embeddings along with GloVe and ELMo embeddings in three knowledge transfer settings. Cross-task transfer In this setting, we transfer knowledge to a target task from models trained on multiple source tasks. We transfer into Semantic Role Labeling (SRL) task using Constituency Parsing (CP), Dependency Parsing (DP) and Named Entity Recognition (NER) as source tasks. The choice of SRL as a target task, with source embeddings from CP, DP and NER models, is inspired by the popular use of explicit syntactic parsing features for SRL. We use OntoNotes 5.0 BID16 dataset to train the SRL target tasks. We use the stacked alternating LSTM architechture for SRL as per BID6. On the source side, the DP model is based on BID4 and CP on BID18. 
For most of the source models, we use off-the-shelf, pre-trained models provided by AllenNLP 1. We refer readers to BID15 for further description of model architectures for the various tasks. Cross-domain transfer Here, we study the applicability of our method in the cross-domain setting. The target task is same as the source tasks, but instead, the domains of the source and target models are different. For this set of experiments, our task is NER and we use the OntoNotes 5.0 dataset which comes with annotations for multiple domains. Though NER is an easier task, we chose it as the target task for the cross-domain setting as even state of the art NER models may perform poorly for a data-scarce domain. We choose the target domain as web blogs and the source domains are newswire, broadcast conversation, telephone conversation, magazines and broadcast news. Note that the samples in the validation and test sets are also limited to the web blogs domain only. We use an LSTM-CRF architechture with 1 LSTM layer for NER as per BID14.Cross-lingual transfer From the CoNLL shared tasks, we obtain NER datasets for English, Spanish, German and Dutch Tjong BID20. We consider two scenarios with German and Spanish as the target languages and the remaining 3 as source languages. To facilitate the input of sentences into models from other languages with different scripts, we rely on crosslingual embeddings provided by MUSE Conneau et al. (2017b). The NER model architecture is the same as the one used for the cross-domain experiments. To study the effectiveness of our approach in the low resource setting, in addition to the full datasets, we also run experiments on smaller training subsets. Similar to BID12, we create random subsets of 1,000 and 5,000 samples to simulate a low resource setting. In all the aforementoiend settings, the source task models are trained on their complete datasets. Hyperparameters We use the Adam optimizer (lr=0.001) for all our experiments. We run our target models for 50 epochs in SRL tasks and 75 epochs for NER tasks. Batch size is kept at 8 for the 1k data setting and 16 for 5k data setting. The dimensions of the GloVe and ELMo embeddings are 100 and 1024 respectively. The output dimension of the projection layer in all settings for supervised embeddings is 300. Cross-task SRL (with GloVe and ELMo in 1k, 5k and full data settings) have been tabulated in TAB1 has the for cross-domain NER and TAB2 shows the for crosslingual transfer on NER. All the reported numbers are F1 scores. Cross-task SRL With GloVe embeddings, adding the supervised embeddings gives us significant improvements in F1 scores ∼ 5% for 1k and ∼ 7% for 5k examples. When we use the entire dataset, adding supervised embeddings provides no performance gains. Examining the learned source task weights in the 1k setting, we find that weights for CP, DP and NER have values 0.41, 0.41 and 0.18 respectively which shows that SRL benefits greatly from syntactic tasks like CP and DP. This is in agreement with SRL state-of-the-art models BID19 and BID8 which rely on syntactic features. When we replace GloVe with ELMo representations, we see that the baseline model improves by over ∼ 13%, showing that ELMo representations are indeed very strong. But adding supervised embeddings in the 1k setting further improves upon the ELMo baseline by over ∼ 5%. A similar improvement of ∼ 5% can be seen in the 5k setting as well. Our model shows comparable performance as the baseline when we use the entire dataset. 
These suggest that the proposed supervised contextual embeddings further bring about improvements over already strong language model features in a low-resource setting. This reinforces the learning that when sufficient data is available, supervised signals do not provide information that the model cannot learn by itself from the data alone. Cross-domain NER Supervised embeddings provide an impressive 4% improvement over the GloVe baseline with both 1,000 and 5,000 samples. Even when we replace GloVe with ELMo, we see an improvement of 3%, indicating that the benefits of using knowledge from other domains is orthogonal to what ELMo can offer. However, the gains vanish when the full dataset is used, suggesting that knowledge from other domains is particularly useful in the very low-resource setting. However, if sufficient data is available, the model has enough resources to build upon generic word embeddings. It is also interesting to note that for this dataset, GloVe based models outperform their ELMo counterparts. This is probably due to the mismatch in the data used to train ELMo (formal language from the 1 billion word corpus) as opposed to the NER dataset which consists of informal language used in web blogs. Cross-lingual NER We observe substantial gains by exploiting information present in other languages. For both German and Spanish the performance gains are highest when number of samples is 1,000, thus validating the suitability of the proposed method for transfer to very low-resource settings. Even when full dataset is used, we see gains over 1% for both languages. We propose supervised contextual embeddings, an easy way to incorporate supervised knowledge from multiple pre-existing models. We perform experiments in the cross-task, cross-domain and cross-lingual setups and find that the proposed embeddings are particularly useful in the lowresource setting. Our work points to the potential of such embeddings in various downstream tasks in different transfer learning settings. Future work includes incorporating more tasks, domains and languages, and understanding the relationships among them. These explorations would build towards our larger vision of building a more complete taxonomy of transfer learning dependencies among NLP tasks, domains and languages.
Extract contextual embeddings from off-the-shelf supervised models to help downstream NLP models in low-resource settings
718
scitldr
We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms in order to provide within-task performance guarantees. Our approach improves upon recent analyses of parameter-transfer by enabling the task-similarity to be learned adaptively and by improving transfer-risk bounds in the setting of statistical learning-to-learn. It also leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. Meta-learning, or learning-to-learn (LTL) BID26, has recently re-emerged as an important direction for developing algorithms capable of performing well in multitask learning, changing environments, and federated settings. By using the data of numerous training tasks, meta-learning algorithms seek to perform well on new, potentially related test tasks without using many samples from them. Successful modern approaches have also focused on exploiting the capacity of deep neural networks, whether by learning multi-task data representations passed to simple classifiers BID25 or by neural control of the optimization algorithms themselves BID23.Because of its simplicity and flexibility, a common approach is that of parameter-transfer, in which all tasks use the same class of Θ-parameterized functions f θ: X → Y; usually a shared global model φ ∈ Θ is learned that can then be used to train task-specific parameters. In gradient-based meta-learning (GBML) BID11, φ is a metainitialization such that a few stochastic gradient steps on a Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute. few samples from a new task suffice to learn a good taskspecific model. GBML is now used in a variety of LTL domains such as vision BID18 BID21 BID17, federated learning BID7, and robotics BID0. However, its simplicity also raises many practical and theoretical questions concerning what task-relationships it is able to exploit and in which settings it may be expected to succeed. While theoretical LTL has a long history BID4 BID19 BID22, there has recently been an effort to understand GBML in particular. This has naturally lead to online convex optimization (OCO) , either directly BID12 BID16 or via online-to-batch conversion to statistical LTL BID16 BID9. These efforts all consider learning a shared initialization of a descent method; BID12 then prove learnability of a metalearning algorithm while BID16 and BID9 give meta-test-time performance guarantees. However, this line of work has so far considered at most a very restricted, if natural, notion of task-similarity -closeness to a single fixed point in the parameter space. We introduce a new theoretical framework, Averaged-Regret Upper-Bound Analysis (ARUBA), that enables the derivation of meta-learning algorithms that can provably take advantage of much more sophisticated task-structure. Expanding significantly upon the work of BID16, ARUBA treats meta-learning as the online learning of a sequence of losses that each upper bound the regret on a single task. 
These bounds frequently have convenient functional forms that are (a) nice enough for us to easily draw on the existing OCO literature and (b) strongly dependent on both the task-data and the meta-initialization, thus encoding task-similarity in a mathematically accessible way. Using ARUBA we provide new or dramatically improved meta-learning algorithms in the following settings:• Adaptive Meta-Learning: A major drawback of previous work is the reliance on knowing the task-similarity beforehand to set the learning rate BID12 or regularization BID9, or the use of a suboptimal guess-and-tune approach based on the doubling trick BID16. ARUBA yields a simple and efficient gradient-based algorithm that eliminates the need to guess the task-similarity by learning it on-the-fly.• Statistical LTL: ARUBA allows us to leverage powerful in online-to-batch conversion BID27 BID15 to derive new upper-bounds on the transfer risk when using GBML for statistical LTL BID4, including fast rates in the number of tasks when the task-similarity is known and fully highprobability guarantees for a class of losses that includes linear regression. These improve directly upon the guarantees of BID16 and BID9 for similar or identical GBML algorithms.• LTL in Dynamic Environments: Many practical applications of GBML include settings where the optimal initialization may change over time due to a changing taskenvironment BID0. However, current theoretical work on GBML has only considered learning a fixed initialization BID12 BID9. ARUBA reduces the problem of meta-learning in changing environments to a dynamic regret-minimization problem, for which there exists a vast array of online algorithms with provable guarantees.• Meta-Learning the Task Geometry: A recurring theme in parameter-transfer LTL is the idea that certain model weights, such as those encoding a shared representation, are common to all tasks, whereas others, such as those performing a task-specific classification, need to be updated on each one. However, by simply using a fixed initialization we are forced to re-learn this structure on every task. Using ARUBA we provide an algorithm that can learn and take advantage of such structure by adaptively determining which directions in parameter-space need to be updated. We further provide a fully adaptive, per-coordinate variant that may be viewed as an analog for Reptile BID21 of the Meta-SGD modification of MAML BID11 BID18, which learns a per-coordinate learning rate; in addition to its provable guarantees, our version is more efficient and can be applied to a variety of GBML methods. In the current paper we provide in Section 2 an introduction to ARUBA and use it to show guarantees for adaptive and statistical LTL. We defer our theory for meta-learning in dynamic environments and of different task-geometries, as well as our empirical , to the full version of the paper. Theoretical Learning-to-Learn: The statistical analysis of LTL as learning over a task-distribution was formalized by BID4 and expanded upon by BID19. Recently, several works have built upon this theory to understand modern LTL, either from a PAC-Bayesian perspective BID2 or in the ridge regression setting with a learned kernel BID8. However, due to the nature of the data, tasks, and algorithms involved, much effort has been devoted to the online setting, often through the framework of lifelong learning BID22 BID3 BID1. 
The latter work considers a many-task notion of regret similar to our own in order to learn a shared data representations, although our algorithms are significantly more practical. Very recently, BID6 also developed a more efficient online approach to learning a linear embedding of the data. However, such work is related to popular shared-representation methods such as ProtoNets BID25, whereas we consider the parameter-transfer setting of GBML.Gradient-Based Meta-Learning: GBML developed from the model-agnostic meta-learning (MAML) algorithm of BID11 and has been widely used in practice BID18 BID0 BID21 BID14 ). An expressivity was shown for MAML by BID10, proving that the metalearner could approximate any permutation-invariant learning algorithm given enough data and a specific neural network architecture. Under strong-convexity and smoothness assumptions and using a fixed learning rate, BID12 show that the MAML meta-initialization is learnable, albeit via a somewhat impractical Follow-the-Leader (FTL) method. In contrast to these efforts, BID16 and BID9 focus on providing finite-sample meta-test-time performance guarantees in the convex setting, the former for the SGD-based Reptile algorithm of BID21 and the latter for a more strongly-regularized variant. Our work improves upon these analyses by considering the case when the learning rate, a proxy for the task-similarity, is not known beforehand as in BID12 and BID9 but must be learned online; BID16 do consider an unknown task-similarity but use a rough doubling-trick-based approach that considers the absolute deviation of the task-parameters from the meta-initialization and is thus average-case suboptimal and sensitive to outliers. Furthermore, ARUBA can handle more sophisticated and dynamic notions of task-similarity and in certain settings can provide better statistical guarantees than those of BID16 and BID9. Following the setup of BID1, we consider a sequence of tasks t = 1,..., T; each task has rounds i = 1,..., m, on each of which we see a loss function DISPLAYFORM0 In the online setting, our goal will be to design algorithms taking actions θ t,i ∈ Θ that in small task-averaged regret (TAR) BID16, which averages the within-task regret over t ∈ [T]: DISPLAYFORM1 This quantity measures within-task performance by dynamically comparing to the best action on individual tasks. A common approach in this setting is to run an online algorithm, such as online gradient descent (OGD) with learning rate η t > 0 and initialization φ t ∈ Θ, on each task t: DISPLAYFORM2 The meta-learning problem is then reduced to determining which learning rate and initialization to use on each task t. Specific cases of this setup include the Reptile method of BID21 and the algorithms in several recent theoretical analyses BID1 BID16 BID9. The observation that enables the in the current paper is the fact that the online algorithms of interest in few-shot learning and meta-learning often have existing regret guarantees that depend strongly on both the parameters and the data; for example, the withintask regret of OGD for G-Lipschitz convex losses is DISPLAYFORM3 for θ * t the optimal parameter in hindsight. Whereas more sophisticated adaptive methods for online learning attempt to reduce this dependence on initialization, in our setting each task does not have enough data to do so. 
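For concreteness, the task-averaged regret, the within-task OGD update, and the regret bound referred to above can be written as follows; this is a standard restatement consistent with the surrounding text (specialized to the Euclidean case), not a verbatim reproduction of the paper's displayed equations.

```latex
% Task-averaged regret (TAR) over T tasks with m rounds each:
\mathrm{TAR} \;=\; \frac{1}{T}\sum_{t=1}^{T}\left[\sum_{i=1}^{m}\ell_{t,i}(\theta_{t,i})
      \;-\;\min_{\theta\in\Theta}\sum_{i=1}^{m}\ell_{t,i}(\theta)\right].

% Within-task online gradient descent, initialized at \phi_t with step size \eta_t:
\theta_{t,1} = \phi_t, \qquad
\theta_{t,i+1} = \theta_{t,i} - \eta_t \nabla \ell_{t,i}(\theta_{t,i}).

% Standard OGD regret bound for G-Lipschitz convex losses; it is jointly convex in
% (\eta_t, \phi_t) and serves as the per-task upper bound \hat{R}_t:
R_t \;\le\; \hat{R}_t(\phi_t,\eta_t)
   \;=\; \frac{\|\theta_t^\ast - \phi_t\|^2}{2\eta_t} + \frac{\eta_t G^2 m}{2}
   \;=\; \mathcal{O}\!\left(\|\theta_t^\ast - \phi_t\|\, G\sqrt{m}\right).
```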
Instead we can observe that if the upper boundR t (φ t, η t) ≥ R t on the task-t regret is low on average over t ∈ [T] then the TAR of the actions θ t,i due to running OGD initialized at φ t with learning rate η t at each task t will also be low, i.e. DISPLAYFORM4 Often this upper-boundR t will have a nice functional form; for example, the OGD bound above is jointly convex in the learning rate η t and the initialization φ t. Then standard OCO can be applied directly. While this approach was taken implicitly by BID16, and indeed is related to earlier work on adaptive bound optimization for online learning BID20, in this work we make explicit this framework, which we call Averaged-Regret Upper-Bound Analysis (ARUBA), and showcase its usefulness in deriving a variety of new in both online and batch LTL. Specifically, our approach will reduce LTL to the online learning of a sequence of regret upper-boundsR 1 (x),...,R T (x), where x parameterizes the within-task algorithms. The ing guarantees will then have the generic form DISPLAYFORM5 Thus as T → ∞ the algorithm competes with the best parameterization x, which encodes the task-relatedness through the task-data-dependence ofR t.Algorithm 1: General form of meta-learning algorithm we study. TASK η,φ corresponds to online mirror descent (OMD) or follow-the-regularized-leader (FTRL) with initialization φ ∈ Θ, learning rate η > 0, and regularization R: Θ → R. META is follow-the-leader (FTL). META is some OCO algorithm. Set meta-initialization φ 1 ∈ Θ and learning rate η 1 > 0. DISPLAYFORM6 Our first is an adaptive algorithm for a simple notion of task-similarity that serves also to demonstrate how our framework may be applied. We consider tasks t = 1,..., T whose optimal actions θ * t are close to some unknown global φ * ∈ Θ according to some metric. For 2 -distance this assumption was made, explicitly or implicitly, by BID12 and BID9; BID16 also consider the case of a Bregman divergence B R (θ * t ||φ *) for 1-strongly-convex R: Θ → R BID5, with R(·) = B R (θ * t ||φ *) of the task-parameters; for OCO methods V is proportional to the learning rate or the inverse of the regularization coefficient, which were fixed by BID12 and BID9. BID16 instead used the doubling trick to learn the maximum deviation max t B R (θ * t ||φ *) ≥ V, which is suboptimal and extremely sensitive to outliers. We first formalize the setting we consider, extensions of which will also be used for later :Setting 2.1. Each task t ∈ [T] has m convex loss functions t,i Θ → R that are G-Lipschitz on average. Let θ * t ∈ arg min θ∈Θ mt i=1 t,i (θ) be the minimum-norm optimal fixed action for task t. We will consider variants of Algorithm 1, in which a parameterized OCO method TASK η,φ is run within-task and two OCO methods, META and META, are run in the outer loop to determine the learning rate η > 0 and initialization φ ∈ Θ. We provide the following guarantee: Theorem 2.1. In Setting 2.1 Algorithm 1 achieves TAR DISPLAYFORM0 where D 2 = max t B R (θ * t ||φ t) and R T is the regret of META on a sequence f 1,..., f T of functions of form DISPLAYFORM1 Proof Sketch. The proof follows from the well-known re- x + x. This is nontrivial, as while the functions are convex they are non-Lipschitz near 0. 
However, using strongly-convex coupling once more one can show that using the actions of FTL on the modified loss functionsf t (x) = by proving the exp-concavity off t and using the Exponentially-Weighted Online Optimization (EWOO) algorithm of BID13, which can be implemented efficiently in this single-dimensional case, instead of FTL. We thus have the following corollary: DISPLAYFORM2 DISPLAYFORM3 e. the mean and squared average deviation of the optimal task parameters, we have an asymptotic per-task regret of V G √ m, which is much better than the minimax-optimal single-task guarantee DG √ m when V D, i.e. when the tasks are on-average close in parameter space. As in BID16 and assuming a quadratic growth condition on each task, in the full version we extend this to the case when θ * t is not known and either the last or average within-task iterate is used to perform the meta-updates. An important motivation for studying LTL via online learning has been to provide batch-setting bounds on the transfer risk BID1 BID9. While BID16 provide an in-expectation bound on the expected transfer risk of any low-TAR algorithm, their cannot exploit the many stronger in the online-tobatch conversion literature. Following the classical distribution over task-distributions setup of BID4, ARUBA yields strong bounds on the expected transfer risk in the general case of convexR, as well as fast rates in the stronglyconvex case using BID15 and high probability bounds for linear regression using BID27. Theorem 2.2. Let convex losses t,i: Θ → be sampled i.i.d. P t ∼ Q, {t,i} i ∼ P m t for some distribution Q over task distributions P t. If the losses are given to an algorithm with averaged regret upper-boundR T that on each task runs an algorithm with regret upper-boundR t (s t) a convex, nonnegative, and B √ m-bounded function of the state s t of the algorithm at the beginning of time t then we have the following bound on the expected transfer risk: 2, 6αB √ m} αmT log 8 log T δ If the losses satisfy a certain self-bounding property then we have a high probability bound on the transfer risk itself: DISPLAYFORM0 DISPLAYFORM1 T m log 2 δ + 3ρ + 2 m log 2 δ w.p. 1 − δ for some ρ > 0.In the case of a known task-similarity, when we know the expected task-parameter deviation V and can fix the learning rate in Algorithm 1 accordingly, the above yields DISPLAYFORM2 This can be compared to of BID9, where the last term only decreases as T. 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 Adaptive Gradient-Based Meta-Learning Methods Zinkevich, M. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on.
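A highly simplified sketch of the outer loop of Algorithm 1 is given below: the within-task learner is plain OGD, the meta-initialization is updated by follow-the-leader (the running mean of task optima), and the learning-rate update shown here is only a crude proxy for the analysis-driven choice in the paper; the `task` interface and constants are illustrative assumptions.

```python
import numpy as np

def meta_train(tasks, dim, G=1.0):
    phi = np.zeros(dim)                     # meta-initialization
    optima, per_task_regret = [], []
    for task in tasks:
        # Learning rate from the average deviation observed so far (proxy for V).
        V = np.mean([np.linalg.norm(o - phi) for o in optima]) if optima else 1.0
        eta = max(V, 1e-3) / (G * np.sqrt(task.num_rounds))

        theta, losses = phi.copy(), []      # within-task OGD from the meta-initialization
        for x, y in task.rounds():
            loss, grad = task.loss_and_grad(theta, x, y)
            losses.append(loss)
            theta = theta - eta * grad

        theta_star = task.best_in_hindsight()
        optima.append(theta_star)
        per_task_regret.append(sum(losses) - task.loss_in_hindsight())
        phi = np.mean(optima, axis=0)       # FTL meta-update of the initialization
    return phi, float(np.mean(per_task_regret))   # second value approximates the TAR
```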
Practical adaptive algorithms for gradient-based meta-learning with provable guarantees.
719
scitldr
In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge. Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional words operations. We aim to take advantage of this linguist diversity and learn to represent sentences by contrasting these diverse views. Formally, multiple views of the same sentence are mapped to close representations. On the contrary, views from other sentences are mapped further. By contrasting different linguistic views, we aim at building embeddings which better capture semantic and which are less sensitive to the sentence outward form. We propose to learn sentence embeddings by contrasting multiple linguistic representations. The motivation is to benefit from linguistic structures diversity to discard noises inherent to each representation. We aim at encoding high-level representations by aligning the underlying shared information from multiple views. As illustrated in Figure 1, we train our model with a contrastive framework which aims at mapping close input sentences to close representations while separating unrelated sentences. In Natural Language Processing (NLP), this framework has been widely used to learn word representations (a; b) for example. This model relies on the distributional hypothesis which conjectures that words within similar context share similar meaning. Such framework has also been extended to sentences with the similar hypothesis that the meaning can be inferred from the context sentences (; . We propose to extend this framework by assuming that different views of the same sentence should lead to close representation. We considered the dependency trees, a linguistic framework that describes the compositional structure of a sentence. As illustrated in Figure 1, in this framework, the sentence is mathematically described as an oriented acyclic graph where the nodes are words and edges describe the relations between words. Such structure has benefited from an important attention in the NLP community and efficient parser tools for various languages are available, which makes it possible to obtain such information almost freely in the sense it does not require additional hand annotated data. Tree representations are then mapped in a shared embedding space using appropriate Tree LSTM networks introduced in . Model parameters are learned using a discriminating objective as proposed in . Representation learning has gained significant attention from the NLP community . As illustrated in Figure 3 it is structured in two-steps: (A) a representation is learned using a proxy objective on a usually very large corpora (B) the representation is then used to solve a variety of downstream tasks. Literature comes with a variety of proxy tasks to learn representations. Proxy objectives usually fall in two categories: (i) predicting a contextual information or reconstructing Figure 1: Contrastive multi-views framework. The model is trained to distinguish between different views of context sentences and negative examples. The two views are obtained using a standard LSTM and a Tree LSTM networks on top of a dependency tree structure. A discriminative objective is used to contrast between samples. an altered input or (ii) discriminating multiple representations of the same data among negative examples. Proxy (i) has been declined into multiple variations in the literature. Words embeddings methods Mikolov et al. 
(2013b) proposes that words with close distributions should lead to close representations. For images, proposes to learn representations by solving jigsaw puzzles. With a close taste to NLP tasks, aims at learning representations by trying to fill the missing part of an image. van den; methods aim at predicting the future to represent video or audio. The second proxy objective (ii) is motivated by the observation that end-to-end frameworks tend to learn semantic similarities among classes. For example reports that the hidden layers of a supervised neural networks assigns close representations to leopard and jaguar images. However an image from lifeboat and leopard will be apart from each other. This motivates for contrastive learning methods where the network aims at distinguishing individual samples to insure a global consistency in the dataset. Many contrastive learnings have been proved effective in a variety of domains: image processing , audio input (van den), sentence representation or word embedding (a; b;). Besides the nature of data, formalisms might differ on two main aspects: (i) How to measure the proximity in both the original and the representation space and (ii) What training objective should be used to achieve well distinction between samples. One possible answer on how to select samples that should lead to close representation is to consider different views of the same sample as proposed in; who propose to combine multi-views of images with contrastive learning.; van den propose contrastive frameworks for sentences but without multi-views setting. Multiple metrics to measure the proximity in the representation space are proposed in the literature: enumerates multiple so called critic functions. The most straight forward also used in beeing the scalar product: The objective function for contrastive learning has also been declined in multiple work: Many relate to Mutual Information (MI); or methods based on triplet loss or max-margin . Contrastive learning is a self-supervised learning method which aims at learning a semantic mapping. Data is embedded with compact representations such that close samples are mapped to nearby points while unrelated ones are affected to apart points. In practice different methods exist to learn such mapping. A straight forward method is to treat each data sample as a distinct class and train a classifier to distinguish between individual instance classes. Such approach is used for word embedding where the goal is to predict a word given his context. However this is computationally difficult and the outputs are not a finite set in our case since an infinite number of correct sentences might be expressed. A method to approximate a full instance discrimination is to use Negative sampling and Noise Contrastive estimation methods. For every data point x from a dataset D we have access to one paired sample x + which should be close. We then draw K negative samples x K which should be apart. We learn a scoring function h θ which assigns large scores to positive paired samples (x, x +) while low values for negative pairs (x, x − k). In practice the function is trained using a classifier which aims at identifying the correct pair. 
In our setup we use a softmax classifier as follow: The classifier is trained using the negative log likelihood loss: As in the used scoring function is the inner product h θ (u, v) = u T v in order to avoid the situation where the model learns poor representations but compensates with an excellent classifier on the proxy task. However some authors report excellent with other critic functions: for example uses a combination of absolute distance and angle measures. Given the scoring vectors for each pair representation we use a simple softmax classifier with a log negative likelihood loss. In our setup, the positive pair (x, x +) is built from different views of the input data. From the original sentences, we construct multiple representations from different views of the object. For every sentence we consider two encoders. x s which is the representation obtained with a bidirectional LSTM who assumes the underlying structure of the sentence to be a sequence, while allowing for long term dependencies. The final representation of the sentence is the concatenation of both direction last state. x d is obtained with the dependency tree representation combined with a ChildSum Tree LSTM . The final representation is the state from the root node of the sentence graph. Negative examples are obtained using the dependency Tree LSTM. The dataset D is therefore augmented with the positive examples and multi-views examples Similarly to , the model is trained to identify the sentence appearing in the context from the target sentence. The key difference and our contribution is to build different views of the data. The target sequence is encoded using the sequential Tree LSTM, while the positive and negative samples are encoded using the ChildSum Tree LSTM. As described in the previous section, the objective is to maximize the log probability of the correct sentence to appear in the context. 4.1 DATA Models are trained on the BookCorpus dataset. Since the dataset is no longer distributed, a similar dataset is generated using smashword open book data 1. The obtained dataset contains 78M sentences from 17.000 books. We define a 20k words vocabulary, containing the corpus most frequent tokens. Sentences are tokenized using Microsoft BlingFire tool 2. Models are trained on a single epoch on the entire corpus without any train-test split. The use of Tree LSTM models supposes to parse sentences in dependency which is operated using the publicly available Spacy parser. The contrastive learning method supposes to draw negative examples for each sentence pair. We follow the training procedure proposed in and build mini-batches respecting books sentences order. Therefore samples from the mini-batch are used as negative examples. Models are trained using the Adam optimizer with a 5e-3 learning rate. Gradient is clipped with a value of 5.0. The hidden layer for both Tree LSTM is fixed to 1000 and the embedding dimension to 300. The batch size is 100 and the model is trained on a 1080Ti Nvidia GPU. All model weights are initialized with a xavier distribution and biases to 0. We observe biases initialization to have a significant impact on the training. Indeed leaf nodes are initialized with zero vectors for state and hidden and initial biases with too significant values tend to slow the training convergence. A point is computed every hour during the training phase. Despite the batch computation implementation describes in Section A, the Contrastive Tree LSTM is much slower to train. 
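A minimal sketch of the training objective described above is given below, under assumed encoder interfaces: each target sentence is encoded with the sequential bidirectional LSTM view, its context sentence with the ChildSum Tree LSTM over the dependency parse, the critic is the inner product, and the remaining sentences of the mini-batch serve as negatives; the softmax classifier with negative log-likelihood reduces to a cross-entropy over the score matrix.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(target_reprs, context_reprs):
    # target_reprs:  (batch, dim) sequential-LSTM encodings of the target sentences.
    # context_reprs: (batch, dim) Tree-LSTM encodings of the corresponding context
    #                sentences; row i is the positive for row i of target_reprs,
    #                all other rows act as negative examples.
    scores = target_reprs @ context_reprs.t()        # inner-product critic h(u, v) = u^T v
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)           # softmax classifier + NLL
```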
The training was stoped after 33 hours and training on 5.8% of the available training data. The evaluation setup for the SICK tasks is describe in Section 5. The Tree LSTM network was implemented using a batch procedure described in Section A. However the training was significantly slower than vanilla LSTM. The training phase was stopped after 33 hours of training. The training phase was completed on only 4.6M sentences among the 78M available. We monitored the performance on downstream tasks during training as illustrated in Figure 2. As described in Figure 3, the process requires specific evaluation processes at each learning step. Although methods are developed to control the properties of the intermediate representation such as probing tasks for NLP, the intermediate representation is usually evaluated on his ability to solve the final task: downstream evaluation. To facilitate the comparison across multiple representations and limit the impact of the downstream algorithm, the linear evaluation protocol is usually used. It proposes to solve the downstream task with a minimal and simple model: a logistic regression. At test time, the concatenation of two encoders is used. Figure 3: (left) End-to-end framework where the algorithm learns to map directly the inputs to the target. The hidden sates of the neural network are only used for this task. (right) In contrast to end-to-end approaches the algorithm proceeds in two phases. First a representation is learned with a self-supervised proxy task. The same representation might then be used for different tasks. Sentence representations are evaluated on downstream tasks that require to capture the underlying sentence semantic. 7 classification tasks from the SentEval benchmark are applied: movie review sentiment (MR) , product reviews (CR) , subjectivity classification (SUBJ) , opinion polarity (MPQA) , question type classification (TREC) , semantic relatedness and entailment (SICK-R, SICK-E) and paraphrase identification (MRPC) . The MR, CR, SUBJ, MPQA tasks are binary classification tasks with no pre-defined train-test split. A 10-fold cross validation is used in reporting test performance for these tasks. For other tasks we use the proposed train/dev/test splits. The dev set is used for choosing the regularization parameter and are reported on the test set. We follow the linear evaluation protocol of where a logistic regression classifier is trained on top of sentence representations with the cross-validation procedure described earlier. This protocol facilitates the comparison across studies and avoids the case of a good classifier which compensates for bad sentence representations. The script from is used for downstream evaluation. The scores are not as good as the state of the art reported in Table 1. However the model was not trained on the entire available training set used in. The model shows encouraging properties and trends in Figure 2 suggest it might improve with a larger exposition to the training data. Table 1: Comparison of sentence representations on downstream tasks. For the SICK-R task, the pearson coefficient is indicated. For the MRPC task, this is the F1 score. The baseline is reported from and is obtained using a bag-of-words representation. LSTM, BidirLSTM and Tree LSTM are reported from. Fastsent is reported from , Dissent and Infersent from , Skip-thoughts from and Quick-thoughts from . The table is divided into different sections. 
The bold-face numbers indicate the best performance values among models in the current and all previous sections. Best overall values in each column are underlined. argues that downstream tasks require complex form of inference, which makes it difficult to assess the fine grained quality of a representation. Probing tasks are supposed to overcome this limitation and separately evaluate individual linguistic properties of representations. The probing benchmark includes surface information tasks: predicting the length of the sentence (SentLen) and the word content (WC). Syntactic tasks include predicting the depth of the sentence tree (TreeDepth), the root top constituents (TopConst) and the word order (BShift). Finally semantic tasks aim at predicting the verb tense (Tense), if the subject and complement are singular or plural (SubjNum, ObjNum) and semantic or coordination inversion (SOMO, CoordInv). The probing are presented in Table 2 for both Contrastive LSTM (CL) and Contrastive Tree LSTM (CTL). Table 2: Probing task scores for the Contrastive Tree LSTM (CTL) and the Contrastive LSTM (CL). Bold-face numbers indicate the best performance values among the two models. The proposed traintest split is used for each task. Parameters are fixed on the dev set and are presented on the test set. Results for the Contrastive LSTM (CL) are obtained using a model we trained on our version of the bookcorpus as described in Section 4. We observe the interaction between the standard Tree LSTM and the Tree LSTM is sensible to the sentence length which was also observed by . All other scores are bellow the obtained by the application of the sole LSTM, pointing that the introduction of linguistic knowledge does not compensate enough for the smaller training set. Our training setup and share a similar framework but with different models to embed sentences. We retrieved the nearest neighbor from several query examples with the two methodologies. The sentences are extracted from the SICK test set. The closest neighbors are determined using the cosine similarity and presented in Table 3. Examples are selected to illustrate the ability of models to capture desired linguistic properties such as passive form, concepts, counting faculties or gender identification. Table 3: Nearest neighbors retrieved by the Contrastive Tree LSTM (CTL) and the Contrastive LSTM (CL). For each sentence, the 5 closest neighbors from the test split of the SICK dataset are retrieved using the cosine similarity from their representations. Results are presented in decreasing order of cosine distance. Both models might be fooled with surface information and retrieve examples with similar words but different meanings. Sole LSTM encoder presents remarkable properties to identify passive form. The Tree LSTM presents interesting properties such as retrieving sentences with similar principal proposition but complement which do not alter the sense of the sentence. Tree LSTM also captures number properties and identifies concepts such as 4 people might be referred as a group. We exploit the diversity of linguistic structures to build sentence representations. Out method shows promising and does not require hand annotated data. More scalable implementations might be considered to explore more experimentation setups. Although are below state of the art performances, our model is trained on only a small proportion of the bookcorpus sentences as stated in Figure 2. 
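The linear evaluation protocol used for the downstream scores above can be sketched as follows; `encode_seq` and `encode_tree` are hypothetical wrappers around the two trained encoders, whose outputs are concatenated and kept frozen before fitting a logistic regression, with 10-fold cross-validation for tasks without a predefined split.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def evaluate_task(sentences, labels):
    # Frozen sentence features: concatenation of the two encoders' representations.
    features = np.hstack([encode_seq(sentences), encode_tree(sentences)])
    classifier = LogisticRegression(max_iter=1000)
    # 10-fold cross-validation, as used for MR, CR, SUBJ and MPQA.
    return cross_val_score(classifier, features, labels, cv=10).mean()
```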
A larger exposition to the training data and an extended training time might benefit the downstream and probing scores. Other linguistic structures might also be tested, such as constituency trees associated with N-ary Tree LSTMs, or Tree LSTMs improved with an attention mechanism (see the sketch after this paragraph). A COMPUTING METHOD FOR TREE LSTM We implemented a batching procedure to speed up Tree LSTM computations. Groups of nodes are computed sequentially to ensure that all of a node's children have already been computed. Nodes are grouped by their distance to the root node. First, the leaf nodes with the highest depth are computed, and inner nodes are then computed progressively. The Tree LSTM cell implementation is specifically designed to treat all nodes in each step simultaneously. Figure 4: The batching procedure used to optimize the graph computation. For each batch, the computation is decomposed into steps which ensure that every node's dependencies have already been computed. At each step, nodes with the same depth to the root are computed in a single operation and the output is fed to the next computational step.
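A hedged sketch of this depth-based batching is given below: nodes are grouped by their depth (distance to the root), the deepest group is processed first so that all children precede their parents, and each group is evaluated in a single batched cell call; `tree_lstm_cell` and the node/tree interfaces are assumptions, not the authors' implementation.

```python
from collections import defaultdict

def batched_tree_lstm(trees, tree_lstm_cell):
    groups = defaultdict(list)                      # depth -> nodes across the whole batch
    for tree in trees:
        for node in tree.nodes():                   # nodes expose .depth and .children
            groups[node.depth].append(node)

    states = {}                                     # node -> (hidden, cell) state
    for depth in sorted(groups, reverse=True):      # deepest nodes first
        nodes = groups[depth]
        # Leaves have no children; the cell is assumed to use zero-initialized
        # child states in that case, as described in the training details above.
        child_states = [[states[c] for c in n.children] for n in nodes]
        hidden, cell = tree_lstm_cell(nodes, child_states)   # one batched call per step
        for n, h, c in zip(nodes, hidden, cell):
            states[n] = (h, c)
    return [states[tree.root][0] for tree in trees]          # root state = sentence embedding
```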
We aim to exploit the diversity of linguistic structures to build sentence representations.
720
scitldr
The peripheral nervous system represents the input/output system for the brain. Cuff electrodes implanted on the peripheral nervous system allow observation and control over this system, however, the data produced by these electrodes have a low signal-to-noise ratio and a complex signal content. In this paper, we consider the analysis of neural data recorded from the vagus nerve in animal models, and develop an unsupervised learner based on convolutional neural networks that is able to simultaneously de-noise and cluster regions of the data by signal content. Recent advances have made chronic observation of, and limited control over the peripheral nervous system possible. To characterise the dynamics of the signals passing to and from the brain, we wish to categorise patterns of activity within the peripheral nervous system. However, consistent detection of single neuron activity remains a challenge with current cuff electrode technology suitable for in vivo neural data acquisition. The relative position of an extracellular recording electrode and neuronal axons close enough to sit above the noise floor affects the polarity of presented signal components, and their summation at the electrode occludes the presence of individual action potentials during periods of neuronal activity. Instead, local field potentials (LFPs), the combination of many neuronal responses arriving concurrently at the electrode are observed. These population level responses are potentially informationally richer, but preclude the use of conventional spike-sorting methodologies on such data. Instead, we develop a method based on convolutional neural networks (CNN) that simultaneously de-noises the data and categorises the observed signals. We train this model on approximately one hour of data taken from a single subject approximately twelve hours post surgical implantation. We further show that it is applicable without further training to data from a second subject thirty days post surgical implantation, demonstrating cross-time, cross-subject applicability of the trained models. Neural data are collected from two nerve cuffs implanted on the vagus nerve, each recording LFP data at 30000Hz using a chronically implanted ITX PNS (peripheral nervous system) implant (BIOS, * To whom correspondence should be addressed. Figure 1: The architecture of the Coordinate-VAE. The input signal is encoded via a series of convolutional/pooling/leaky-ReLu/dropout blocks to a categorical latent vector representing the core process observed in the input signal. To allow the decoder to account for phase shifts, time warping, et cetara, a set of time coordinates for which the signal is closest to zero are sampled from each channel of the input signal. These pass through a 'coordinate encoder', before being concatenated with the latent vector. The decoder then upsamples with convolution to reconstruct the original signal. Cambridge, UK). We begin by standardising the mean and standard deviation of the data coming from each cuff, and applying a fifth-order Butterworth bandpass (50-1000Hz) filter, before rescaling such that the training data lie in the range (-1, 1). We then sample small time windows of equal size w from the data as input to the Coordinate-VAE. In the shown here, w is fixed at 256 samples, that is, at 256 30000 seconds. The basic model architecture is shown in Figure 1. For each window, the goal is to reduce the observed data to a one-hot latent vector of size L. 
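A sketch of this preprocessing pipeline is given below, with assumed array shapes and zero-phase filtering as one reasonable implementation choice: per-channel standardisation, a fifth-order Butterworth band-pass between 50 and 1000 Hz at the 30 kHz sampling rate, rescaling into (-1, 1) using training-data extrema, and slicing into windows of w = 256 samples.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS, W = 30_000, 256

def preprocess(raw, train_max_abs=None):
    # raw: (num_channels, num_samples) LFP recording from the two nerve cuffs.
    x = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, keepdims=True)
    b, a = butter(N=5, Wn=[50, 1000], btype="bandpass", fs=FS)
    x = filtfilt(b, a, x, axis=1)
    if train_max_abs is None:                       # fit the rescaling on training data only
        train_max_abs = np.abs(x).max()
    x = x / train_max_abs                           # training data lies in (-1, 1)
    n_windows = x.shape[1] // W
    windows = x[:, : n_windows * W].reshape(x.shape[0], n_windows, W).transpose(1, 0, 2)
    return windows, train_max_abs                   # (n_windows, num_channels, W)
```

Each resulting window is then reduced to the one-hot latent vector as described next.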
We achieve this by training a variational auto-encoder (VAE) with a Gumbel-Softmax activation on the latent space. Encoding to the latent space is done through a series of convolutional blocks, where parameters for each block in the encoder are kept constant except for the number of filters in each convolutional layer, which doubles with each block. Pooling takes place in each block where this would not reduce the dimension of the data to less than the size of the convolutional layer. Decoding similarly follows a standard upsampling/convolutional scheme, with a hyperbolic tangent activation following the final convolutional layer. The temperature of the Gumbel layer is slowly annealed throughout training such that the temperature at epoch E is 2e^(-0.0003E). During inference, the temperature is fixed at 0.1. We define the loss as a weighted sum of the mean squared error on the reconstruction and the negative of the Kullback-Leibler divergence. Models were trained in TensorFlow (v1.12.2) on a single Nvidia Tesla K80. Hyperparameter tuning was carried out over one thousand evaluations for each model using a Tree-structured Parzen Estimator as implemented in the hyperopt package. For the primary data set, the data were divided at random into training/validation/test sets comprising 70%, 20% and 10% of the data respectively. With a small (L = 20) one-hot latent space, a standard VAE is unable to reconstruct any signal (Fig. 2(a)). Given a sufficiently large (L = 50) latent space, there is sufficient information to reconstruct the signal, but at the cost of retaining much of the noise, signal artefacts, and increasing the complexity of the latent space (Fig. 2(b)). We solve this by allowing the leakage of some information directly from the original signal to the decoder, bypassing the latent space. For each channel, we find the set of n time-coordinates at which the observed signal is closest to zero. To prevent memorisation of the signal based on these coordinates, we randomly sample a subset n′ of these coordinates. Since these data can be ordered over time, we then apply a 1-d convolutional network to this input in a similar fashion to the encoder, giving an encoding of the signal as defined by the sampling from the coordinates. This 'coordinate encoding' is concatenated to the upsampled layers in each step of the decoder. This allows a small (L = 20) one-hot latent space to identify the signal present in the data while removing the noise (Fig. 2(c)). For this analysis n = 5 and n′ = 1, that is, a single value taken from the time-axis for each data channel and passed to the encoder via a CNN is sufficient to allow reconstruction of the latent space. We are able to reduce the latent space further (L = 10 and L = 5) while maintaining identification of the signal (data not shown). The input data from the first (blue) and second (orange) cuffs is reduced to a single value in the latent space which evolves over time, and is used to reconstruct the original signal. Without the coordinate encoding, no reconstruction is possible using a small latent space (a). With a large latent space (b), reconstruction is possible but with a complex latent space and the reconstruction of noise in addition to signal. With a coordinate encoder (c) the latent space is relatively simple and the reconstruction is effectively de-noised. Figure 2 demonstrates the ability of a Coordinate-VAE model to effectively de-noise peripheral neural data and cluster the observed signals within a relatively simple latent space.
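As a rough illustration of the categorical latent space described above, the following numpy sketch shows Gumbel-Softmax sampling with the stated annealing schedule; the convolutional encoder/decoder and the VAE training loop are omitted, and the logits are random placeholders rather than real encoder outputs.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    """Draw one relaxed one-hot sample over the L latent categories.

    logits: unnormalised log-probabilities (placeholder for the encoder output).
    temperature: Gumbel-Softmax temperature (low values give nearly one-hot samples).
    """
    # Gumbel(0, 1) noise via the inverse-CDF trick.
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    y = (logits + gumbel_noise) / temperature
    y = y - y.max()                      # numerical stability
    return np.exp(y) / np.exp(y).sum()   # softmax

def temperature_at_epoch(epoch):
    """Annealing schedule from the text: 2 * exp(-0.0003 * E)."""
    return 2.0 * np.exp(-0.0003 * epoch)

rng = np.random.default_rng(0)
logits = rng.normal(size=20)             # L = 20 latent categories (encoder stub)
for epoch in [0, 1000, 5000]:
    tau = temperature_at_epoch(epoch)
    sample = gumbel_softmax_sample(logits, tau, rng)
    print(f"epoch {epoch}: tau={tau:.3f}, max prob={sample.max():.3f}")
# At inference the text fixes the temperature at 0.1, giving near one-hot samples.
```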
Furthermore, we can apply the model trained on data from a single subject to other subjects. Figure 3 shows the latent space and reconstructed signal from vagus nerve recordings from a second subject taken sixty days post surgical implantation. Despite the increased noise levels in this data set, the trained model is able to de-noise the signal and characterise the signals within the data. Data from this subject were human-labelled as containing regions of neural activity corresponding to respiration modulation. There is a clear correlation between respiration modulation events and the amplitude of the reconstructed signal, suggesting that the latent space is able to capture meaningful physiological signals from neural data. Furthermore, the latent space shows strong differences between the latent values prevalent within regions of respiration modulation and those without, with latent values 0, 7, 10, 13, 16 and 19 being significantly (χ²-test, moderated for dependence of neighbouring values) over-represented within the respiration modulation events. This suggests that, in the absence of labels, the latent space representation may still give useful information with which to identify physiological events. We explore the de-noising ability of this technique further through simulation studies (Figure 4). We simulate noise in each channel by independently sampling from Morlet wavelets, whose parameters are further sampled from independent normal distributions, and whose locations on the time series are uniformly distributed. We combine this 'noise' with 'signal', also sampled from Morlet wavelets, but now located within short 'impulse' time periods and correlated between the two signal channels. We then reconstruct the combined waveform and estimate the ratio of the power of the reconstruction within the impulse regions to the power of the reconstruction outside the impulse regions. By varying the amplitude of the 'noise' signal, we acquire different values for the true signal-to-noise ratio (SNR) and compare this to the SNR post-reconstruction. Particularly for low true SNR, the post-reconstruction data show a considerably improved SNR. The recent development of chronic neural interfacing implant systems that are able to record neural signals over periods of months or years will create large sets of primarily unlabelled data, with numerous signals occurring over a range of time-scales. These data are currently un-characterisable with standard methods (e.g. spike-sorting). Previous work in this field has relied on mixing categorical and real-valued latent vectors. Westhuizen et al. used an adversarial auto-encoder to project neural data to labels, incorporating an approximately one-hot encoding in the latent space but also including an approximately Gaussian vector to allow reconstruction. Since both vectors are trained simultaneously, the Gaussian component of the latent space may contain the relevant labelling information for one or more true classes. InfoGAN, a GAN implementation in which the discriminator identifies components of the latent space, is similarly capable of one-hot latent representation of the data, but without constraints on the information carried within the one-hot encoding. The Coordinate-VAE approach, in restricting the information available to the encoder creating the non-categorical portion of the latent space, allows unsupervised characterisation of the signals in time-series data, while simultaneously de-noising the signal.
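A sketch of how the simulation study above could be set up, assuming a hand-rolled Morlet-like wavelet (cosine carrier under a Gaussian envelope); the wavelet parameters, the single impulse region, and the SNR definition here are illustrative choices, and the Coordinate-VAE reconstruction step is left as a comment.

```python
import numpy as np

rng = np.random.default_rng(1)

def morlet_like(length, freq, width):
    """Real part of a Morlet-style wavelet: cosine carrier under a Gaussian envelope."""
    t = np.linspace(-1.0, 1.0, length)
    return np.cos(2 * np.pi * freq * t) * np.exp(-(t ** 2) / (2 * width ** 2))

def place_wavelets(n_samples, n_events, amplitude, allowed_start=None):
    """Scatter randomly parameterised wavelets onto a zero trace."""
    trace = np.zeros(n_samples)
    for _ in range(n_events):
        length = int(rng.integers(50, 200))
        w = morlet_like(length, freq=abs(rng.normal(3, 1)), width=abs(rng.normal(0.3, 0.1)))
        if allowed_start is None:
            start = rng.integers(0, n_samples - length)
        else:
            start = rng.integers(allowed_start[0], allowed_start[1] - length)
        trace[start:start + length] += amplitude * w
    return trace

n = 30000
impulse = (10000, 12000)                           # 'impulse' region holding the signal
signal = place_wavelets(n, 20, 1.0, allowed_start=impulse)
noise = place_wavelets(n, 200, 0.5)                # uniformly located noise wavelets
combined = signal + noise

mask = np.zeros(n, dtype=bool)
mask[impulse[0]:impulse[1]] = True

def snr(x):
    """Power inside the impulse region divided by power outside it."""
    return (x[mask] ** 2).mean() / (x[~mask] ** 2).mean()

print("true SNR:", snr(combined))
# A trained Coordinate-VAE would reconstruct `combined`; snr(reconstruction) is then
# compared against snr(combined), as in Figure 4.
```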
Models are transferable between individuals, suggesting that we may gain the ability to pre-train large models for the reduction to latent space representations. As shown in Figure 3, there is some evidence to suggest that these latent space representations are also informative for physiological features. We might then rapidly train a final classifier or agent for monitoring or control of individual patients, as in Pandarinath et al., in which an auto-encoder is used as a dimension reduction technique on collections of neural spiking data acquired from macaque motor and pre-motor cortices, following which a GLM is used to map the complex latent space to spiking activity.
Unsupervised analysis of data recorded from the peripheral nervous system denoises and categorises signals.
721
scitldr
Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms. Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples. However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images. While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism by which this occurs is unclear. In this paper, we study the distribution of softmax induced by stochastic transformations. We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction. Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images. With these observations, we propose a method to improve existing transformation-based defenses. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images. Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images. Our method is generic and can be integrated with existing transformation-based defenses. There has been widespread use of convolutional neural networks (CNN) in many critical real-life applications such as facial recognition and self-driving cars. However, it has been found that CNNs could misclassify the input image when the image has been corrupted by an imperceptible change. In other words, CNNs are not robust to small, carefully-crafted image perturbations. Such images are called adversarial examples and there have been active research efforts in designing attacks that show the susceptibility of CNNs. Correspondingly, many defense methods that aim to increase robustness to attacks have been proposed. Stochastic transformation-based defenses have shown considerable success in recovering from adversarial attacks. Under these defenses, the input image is transformed in a certain way before feeding into the CNN, such that the transformed adversarial image would no longer be adversarial. As the transformation is random, by feeding in samples of the transformed image through the CNN, we accumulate a set of CNN softmax outputs and predictions. As such, existing transformation-based defenses take a majority vote of the CNN predictions from the randomly transformed image. Transformation-based defenses are desirable as there is no need to retrain the CNN model. However, they suffer from deterioration of performance on clean images. With an increasing number of pixel deflections, there is improvement on the performance on adversarial images, but this comes with a rapid deterioration of performance on clean images. Figure 1: In transformation-based defenses, the image is transformed stochastically where each sample t_x is drawn from the distribution T(x) and then fed to the CNN (blue box). In our defense method, for each input image x, we build the marginal distribution of softmax probabilities from the transformed samples t_x^(1), · · ·, t_x^(N).
The distributions are fed to a separate distribution classifier which performs the final classification. Note that our distribution classifier is trained only on distributions obtained from clean images while tested on both clean and adversarial images. The exact mechanism of the deterioration in performance on clean images is unclear. We believe that the softmax distribution induced by the random transformation contains rich information which is not captured by majority vote that simply counts the final class predictions from the transformed samples. Now, an interesting question is whether the features in the distribution of softmax could be better utilized. In this paper, to elucidate how the deterioration in accuracy on clean images occurs, we study the effects of the random image transformations on the distribution of the softmax outputs and make some key observations. After the image transform, some clean images show distributions of softmax with modes at an incorrect class, reflecting the deterioration in voting accuracy as observed before. While the shifting of the distribution mode to the incorrect class is detrimental to the voting prediction, the resulting distribution of softmax contains features that are useful for correcting the prediction. In addition, we observe that the adversarial counterparts show similar shifts in the distributions of softmax as the clean images. We also look into the distribution shapes for the transformed clean and adversarial images and find that they are similar. With these observations, we propose a simple method to improve existing transformation-based defenses, as illustrated in Figure 1. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed clean images and predict the class label. Without retraining the original CNN, our distribution classifier improves the performance of transformation-based defenses on both clean and adversarial images. On the MNIST dataset, the improvements in accuracy over majority voting are 1.7% and 5.9% on the clean and adversarial images respectively. On CIFAR10, the improvements are 6.4% and 3.6% respectively. Note that the distributions obtained from the adversarial images are not included in the training of the distribution classifier. In real-world settings, the type of attack is not known beforehand. Training the distribution classifier on a specific attack may cause the classifier to overfit to that attack. Hence, it is an advantage that our defense method is attack-agnostic. Our experimental findings show that the features of the distribution in the softmax are useful and can be used to improve existing transformation-based defenses. Our contributions are as follows: 1. We analyze the effects of image transformation in existing defenses on the softmax outputs for clean and adversarial images, with a key finding that the distributions of softmax obtained from clean and adversarial images share similar features. 2. We propose a method that trains a distribution classifier on the distributions of the softmax outputs of transformed clean images only, but show improvements in both clean and adversarial images. This method is agnostic to the attack method, does not require retraining of the CNN and can be integrated with existing transformation-based methods.
Given an image dataset {(x_1, y_1), · · ·, (x_M, y_M)} and a classifier F_θ that has been trained with this dataset, with parameters θ, the aim of the attack is to produce an adversarial image x_i^adv such that F_θ(x_i^adv) ≠ y_i, and ||x_i^adv − x_i|| is small. We focus on four gradient-based untargeted attacks. The Fast Gradient Sign Method (FGSM) is a single-step attack that uses the sign of the gradient of the classification loss to perturb the image. The Iterative Gradient Sign Method (IGSM) is an iterative version of FGSM. In DeepFool, at each iteration, the attack approximates the classifier with a linear decision boundary and generates the minimal perturbation to cross the boundary. Finally, the Carlini & Wagner (C&W) L2 attack jointly minimizes the perturbation L2 norm and a differentiable loss function based on the classifier's logit outputs. Besides gradient-based attacks, there are also black-box attacks where the CNN model is not known and only the softmax output or final prediction is given. Defense methods have been proposed to make the classifiers more robust. In adversarial training, the CNN model is trained on adversarial examples generated from itself or from an ensemble of models (Tramèr et al., 2017). Other methods involve training auxiliary neural networks on a mixture of clean and adversarial images, for instance, by denoising the inputs with a neural network before feeding into the CNN or by training a neural network on the CNN logits. In the next section, we introduce another class of defense: transformation-based defenses. Transformation-based defenses aim to recover from adversarial perturbations, that is, for input transformation T, we want F_θ(T(x_i^adv)) = y_i. At the same time, the accuracy on the clean images has to be maintained, i.e. F_θ(T(x_i)) = y_i. Note that transformation-based defenses are implemented at test time and this is different from training-time data augmentation. Here we introduce two transformation-based defenses that we experiment on. Pixel deflection (PD): Pixel deflection corrupts an image by locally redistributing pixels. At each step, it selects a random pixel and replaces it with another randomly selected pixel in a local neighborhood. The probability of a pixel being selected is inversely proportional to the class activation map. Lastly, there is a denoising step based on wavelet transform. In our experiments, we did not use robust activation maps for our datasets as we found that this omission did not cause significant difference in performance (see Appendix D.3). Random resize and padding (RRP): Each image is first resized to a random size and then padded with zeroes to a fixed size in a random manner. In many transformation-based methods, the transformation is stochastic. Hence there can be different samples of the transformation of an image: t_x ∼ T(x), where t_x represents a transformed sample. Existing transformation defenses benefit from improved performance by taking the majority vote across samples of random transformations. The advantage of transformation-based methods is that there is no retraining of the CNN classifier. However, a weakness, as identified in prior work, is that the transformation increases the accuracy on adversarial images at the expense of the accuracy on clean images. The exact mechanism of the deterioration in performance on clean images is unclear.
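A minimal sketch of a pixel-deflection-style transform and the majority-vote defense described above; the CNN is a random placeholder, pixels are selected uniformly (no class activation maps), and the window size and deflection count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_deflect(image, num_deflections=100, window=10):
    """Replace randomly chosen pixels with a random pixel from a local window."""
    out = image.copy()
    h, w = out.shape[:2]
    for _ in range(num_deflections):
        r, c = rng.integers(0, h), rng.integers(0, w)
        dr = rng.integers(-window, window + 1)
        dc = rng.integers(-window, window + 1)
        r2 = np.clip(r + dr, 0, h - 1)
        c2 = np.clip(c + dc, 0, w - 1)
        out[r, c] = image[r2, c2]
    return out

def cnn_softmax(image):
    """Placeholder for the trained CNN's softmax output over 10 classes."""
    logits = rng.normal(size=10)
    return np.exp(logits) / np.exp(logits).sum()

def majority_vote(image, n_samples=100):
    """Existing transformation-based defense: vote over N transformed samples."""
    preds = [int(np.argmax(cnn_softmax(pixel_deflect(image)))) for _ in range(n_samples)]
    return np.bincount(preds, minlength=10).argmax()

x = rng.random((28, 28))
print("voted class:", majority_vote(x))
```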
In this paper, we elucidate how the deterioration in accuracy on clean images occurs by studying the effects of the random image transformations on the distribution of the softmax outputs. Due to the randomness of the transforms, samples of the transformed image will have different softmax outputs. With each image, we obtain a distribution over the softmax outputs accumulated from multiple samples of the transformation. These are the steps to obtain the distribution of softmax: 1. For each input image x, obtain N transformed samples t_x^(1), · · ·, t_x^(N) ∼ T(x). 2. The transformed samples of the image are fed into the CNN individually to obtain their softmax probabilities. Let σ_{x,j}, for j = 1, · · ·, C, be the j-th component of the softmax vector. C denotes the number of classes for the classification task. With each input image and a transformation method, there exists an underlying joint distribution of the CNN softmax probabilities, which we estimate with the N samples. 3. The underlying joint distribution of the softmax has a dimension equal to the number of classes (e.g. 10-D for MNIST). Performing accurate density estimation in high dimensions is challenging due to the curse of dimensionality. Here we make an approximation by computing the marginal distributions over each class. When we use the term 'distribution of softmax', we are referring to the marginalized distributions. We use kernel density estimation with a Gaussian kernel. Let h_{x,j} be the distribution accumulated from the samples of σ_{x,j}: h_{x,j}(s) = (1/N) Σ_{n=1}^{N} exp(−(s − σ_{x,j}^(n))² / (2δ²)) / (δ√(2π)), where δ is the kernel width and s ∈ [0, 1] is the support of the softmax output. The distribution is then discretized into bins. In this section, we study the effect of image transformation on the distribution of the softmax and make several interesting observations. In the following analyses, we study a LeNet5 CNN trained with MNIST. The adversarial images are generated using FGSM and for the transformation defense, we use pixel deflection, with N = 100 transformation samples per image. The image transformation magnitude is controlled by the number of pixel deflections, d. In the analysis here and in the experimental results in Section 5, when reporting the accuracies, on clean images, we consider images that have been correctly predicted by the CNN, hence without any transformation defense, the test accuracy is 100%. This follows the setup of prior work, where images misclassified by the CNN are excluded as it is not meaningful to evaluate any attack (and subsequent defense) methods on these images. For adversarial images, we consider the images that have been successfully attacked, so the test accuracy reflects the recovery rate and without any defense the accuracy is 0%. In Figure 2a, we show how the image transformation affects the voting predictions on two MNIST classes. For each MNIST class, we take all the clean and adversarial test images, perform the transformation and then feed through the CNN to obtain the final voting prediction. We observe that for class label 8, there is some recovery from the attack as some adversarial images are voted to the correct class after the transformation. However, some clean images get misclassified to other classes (e.g. 2 and 3). Although this means there is a deterioration of the accuracy on clean images, it is interesting that the misclassifications have the same voting classes as the transformed adversarial images. A similar pattern is observed for class label 6 where the clean images are misclassified to classes 4 and 5, which overlap with the vote predictions of some adversarial images at d = 300.
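The three steps above can be sketched as follows, using a simple Gaussian kernel density estimate discretized into bins; the transformed-sample softmax outputs are random placeholders, and the bin count and kernel width are illustrative.

```python
import numpy as np

def marginal_softmax_distributions(softmax_samples, kernel_width=0.05, n_bins=100):
    """softmax_samples: array of shape (N, C), one softmax vector per transformed sample.

    Returns an array of shape (C, n_bins): one binned marginal distribution per class,
    built with a Gaussian KDE on the support [0, 1] and normalised to sum to one.
    """
    support = np.linspace(0.0, 1.0, n_bins)
    n, c = softmax_samples.shape
    dists = np.zeros((c, n_bins))
    for j in range(c):
        diffs = support[None, :] - softmax_samples[:, j][:, None]      # (N, n_bins)
        kde = np.exp(-0.5 * (diffs / kernel_width) ** 2).mean(axis=0)  # Gaussian kernel
        dists[j] = kde / kde.sum()
    return dists

# Placeholder: N=100 softmax outputs of transformed samples for one image, C=10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 10)) + np.array([0, 0, 0, 0, 0, 3, 1, 0, 0, 0])
samples = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
h_x = marginal_softmax_distributions(samples)
print(h_x.shape)   # (10, 100): the per-class marginals fed to the distribution classifier
```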
With the above analysis, we characterize the relationship between the clean and adversarial images in terms of the JS divergence of the distributions of the softmax at increasing number of pixel deflections. For each MNIST digit class, we quantify (1) the distance of the distributions among the clean images (clean-clean, same class), (2) the distance of the distributions among the adversarial images (adversarial-adversarial, same class), (3) the distance of the distributions between clean and adversarial images (clean-adversarial, same class) and (4) the distance of the distributions between clean images of this class and all other classes (clean-clean, different class). Here we give details on the calculation of the 4 distance measures. First, the distance between the distributions of softmax output for two input images, x_1 and x_2, is given by d(x_1, x_2) = (1/C) Σ_{j=1}^{C} D_JS(h_{x_1,j} || h_{x_2,j}), where D_JS is the Jensen-Shannon divergence. Distance measures (1) and (2) are computed by taking the average distance of each image distribution to the centroid distribution, which is computed as the mean of the individual distributions. Measure (3) is computed by the distance between the centroids of the clean and adversarial distributions. Finally, measure (4) is computed by the distance of the centroid distribution of the clean images of the particular class with the centroid distribution of another class, averaged over the other 9 classes. In Figure 2b, we show the results for two MNIST classes, but similar trends are observed across all classes (see Figure 8 in Appendix A). The clean-clean (same-class) distance starts off low initially as all clean samples will give high scores at the correct class. With increasing number of deflections, there is increased variability in the softmax outputs and the resulting distributions. Next, the adversarial images of the same class are initially predicted as different incorrect classes without any transformation, and hence the adversarial-adversarial (same-class) distance starts off high and decreases with more transformation. The clean-adversarial (same-class) distance decreases with increasing image transformation which shows that the distributions of softmax from the clean and adversarial images are becoming more similar. Finally, the clean-clean (different class) distance decreases as well, which is expected because we already know that with more transformation, the clean image voting accuracy deteriorates. However, we observe that the clean-clean (different class) distance decreases less rapidly and remains higher than the clean-clean (same-class) distance at d=300. This means the transformation still retains information about the differences between the classes. At d=800, all 4 distance measures converge, which suggests the number of deflections is too large and the differences between the classes are no longer retained. Next, we visualize the morphing of the distributions with increasing number of pixel deflections for an example image in Figure 3. For the purpose of visualization, instead of the per-class marginal distributions of the softmax, we perform kernel density estimation (KDE) on the softmax values for the marginals on class 5 and 6. The softmax values of the other 8 classes are not shown. We have not excluded the areas where performing KDE results in sum probability exceeding one, and our visualization still conveys our ideas and the distribution shapes well. Without any image transformation, as expected, the softmax outputs of the clean and adversarial images are very different. As the number of pixel deflections increases, each point evolves to a distribution due to the randomness of the transformation.
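A sketch of the distance computation, averaging the Jensen-Shannon divergence over the per-class marginal distributions; the averaging over classes and the centroid construction follow the description above, but the exact definitions are assumptions, and the distributions below are random placeholders.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two binned distributions."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def image_distance(h1, h2):
    """h1, h2: (C, n_bins) marginal softmax distributions of two images.
    Distance = JS divergence averaged over the C per-class marginals."""
    return np.mean([js_divergence(h1[j], h2[j]) for j in range(h1.shape[0])])

def centroid(distributions):
    """Centroid of a set of (C, n_bins) distributions: per-bin mean, renormalised."""
    c = np.mean(distributions, axis=0)
    return c / c.sum(axis=1, keepdims=True)

# Example: average distance of each clean image's distribution to the clean centroid
# (measure (1) above); measures (2)-(4) follow the same pattern with other centroids.
rng = np.random.default_rng(0)
clean = rng.dirichlet(np.ones(100), size=(50, 10))   # 50 images, 10 classes, 100 bins
clean_centroid = centroid(clean)
clean_clean_same = np.mean([image_distance(h, clean_centroid) for h in clean])
print(clean_clean_same)
```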
The voting mechanism is straightforward; an image is classified to the class where the distribution mass is largest. In this example, the distribution shapes for the clean and adversarial image become more similar, and result in the same incorrect voting prediction at d=300. This shows the similarity of distributions obtained from clean and adversarial images after image transformation, which was illustrated in Figure 2b. In Figure 4, we show more examples of the distributions obtained from clean images (A-H) and their adversarial counterparts (Ã-H̃) at d=300. For clean images A-D, voting predicts correctly but on the adversarial counterparts Ã-D̃, voting predicts wrongly. For clean images E-H and the adversarial counterparts Ẽ-H̃, voting predicts wrongly. For completeness, we also show in Figure 9 in Appendix B examples of adversarial images where the transformation defense, coupled with voting, has successfully recovered the correct class. With the random image transformation, there are similarities in the distribution shapes between the clean and adversarial images, as shown by the groupings and arrows (e.g. between E and Ã, Ẽ, F). This further supports our earlier observations. After the image transformation, the voting accuracy on the clean images deteriorates, but the resulting distributions have similar features as the distributions from the adversarial counterparts. This gives us an idea to enhance existing transformation-based defenses: to train a distribution classifier on the distributions obtained from clean images only, while improving the performance on both clean and adversarial images. Instead of voting, to reduce the drop in performance on clean images, we train a separate compact distribution classifier to recognize patterns in the distributions of softmax probabilities of clean images, as illustrated in Figure 1. For each clean image, the marginal distributions obtained are inputs to the distribution classifier, which learns to associate this distribution with the correct class label. If the individual transformed images were initially misclassified by the CNN, our distribution classifier should learn to recover the correct class. During the test phase, for any input image, clean or adversarial, we build the distribution of softmax from N transformed samples and feed them into our trained distribution classifier to obtain the final prediction. Note that our defense method does not require retraining of the original CNN, is agnostic to the attack method and can be integrated with most existing stochastic transformation-based methods. Distribution classifiers: We investigated three distribution classification methods. First, we adapt a state-of-the-art distribution-to-distribution regression method, called distribution regression network (DRN). Details of the adaptation of DRN are included in Appendix C. We also experimented on random forest (RF), which alleviates overfitting by averaging the outputs from multiple decision trees. Finally, we experimented on multilayer perceptrons (MLP), which are fully connected neural networks, with a softmax layer for the classification task. For this distribution classification task, we concatenate the distribution bins from the softmax classes into a single input vector for RF and MLP. For DRN and MLP, we use the cross entropy loss and the network architectures and optimization hyperparameters are chosen by cross-validation.
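A minimal sketch of the RF and MLP variants of the distribution classifier, trained on concatenated distribution bins from clean images (the DRN variant is omitted); the data here are random placeholders and the scikit-learn hyperparameters are illustrative, not the tuned values reported in the appendix.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Placeholder training set: for each clean image, the (C, n_bins) marginal
# distributions of softmax are flattened into one feature vector.
rng = np.random.default_rng(0)
n_train, n_classes, n_bins = 1000, 10, 100
X = rng.dirichlet(np.ones(n_bins), size=(n_train, n_classes)).reshape(n_train, -1)
y = rng.integers(0, n_classes, size=n_train)     # true labels of the clean images

rf = RandomForestClassifier(n_estimators=200, max_depth=50).fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300).fit(X, y)

# At test time, the same distribution is built for a (clean or adversarial) input
# image and fed to the trained classifier instead of taking a majority vote.
x_test = rng.dirichlet(np.ones(n_bins), size=(1, n_classes)).reshape(1, -1)
print(rf.predict(x_test), mlp.predict(x_test))
```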
For random forest, the Gini impurity is used as the splitting criterion and the number of trees and maximum depth of the trees are tuned by cross-validation. The hyperparameter values are included in Appendix D.4. In the following section, we describe our experimental setup to evaluate the performance on clean and adversarial images with our distribution classifier method. We use the MNIST, CIFAR10 and CIFAR100 datasets. For MNIST, we use a LeNet5 CNN (LeCun et al., 1998) that has 98.7% test accuracy. For CIFAR10 and CIFAR100, we use wide ResNet with test accuracies of 95.7% and 78.9% respectively. Attack methods: As introduced in Section 2, we use four adversarial attacks in the untargeted setting. In Appendix D.1, we have included the distortion metrics, the success rates and the hyperparameters. The attacks are implemented using the CleverHans library. Transformation-based defenses: As a baseline, we use a random pixel noise (RPN) as a defense method, where each pixel noise is sampled with a uniform distribution with L∞ measure. In addition, we use two existing transformation-based methods: pixel deflection (PD) and image random resize and pad (RRP). Although these two methods have not been tested for MNIST, CIFAR10 and CIFAR100, we find that they work considerably well and present the results here. The hyperparameter tuning for each defense is conducted on the validation sets. We select hyperparameters that give the best recovery from adversarial attack, regardless of the deterioration in accuracy on clean images. The hyperparameters are included in Appendix D.2. To test the effectiveness of the transformation-based defenses before integrating with our defense method, we perform majority voting on the transformed image samples. This sets the baseline for our distribution classifier defense method. When reporting the test accuracies, on clean images, we consider images that have been correctly predicted by the CNN, hence without any defense method, the test accuracy is 100%. For adversarial images, we consider the images that have been successfully attacked, so the test accuracy reflects the recovery rate and without any defense the accuracy is 0%. For the MNIST dataset, N = 100 transformation samples were used for voting and for constructing the distribution of softmax. We found that the distribution classifiers required only 1000 training data, which is a small fraction out of the original 50,000 data. Figure 5 (left) shows the test accuracies of the three transformation-based defenses with majority voting and with the three distribution classifiers. Table 11 in Appendix D.5 shows the numerical figures of the results in Figure 5. First, we observe that the recovery on adversarial images with majority voting for the iterative methods IGSM, DeepFool and C&W is much better compared to the single-step FGSM. This is in line with observations in prior work, where such defenses were found to be more effective for iterative attacks. The distribution classifiers are trained to reduce the deterioration of accuracy on clean images. The distribution classifiers have improved accuracy over voting on the clean images for most cases, except when the voting accuracy was already high (e.g. 100% voting accuracy for PD on DeepFool). The mean improvement of the accuracy on the clean images is 1.7% for DRN. Hence, our distribution classifier method is stronger than voting. Voting simply takes the mode of the softmax probabilities of the transformed image, disregarding properties such as variance across the classes.
In contrast, the distribution classifier learns from the distinctive features of the distribution of softmax. Without training on the distributions obtained from adversarial images, our method has managed to improve the recovery rate. The mean improvement of the accuracy on the adversarial images is 5.9% for DRN. The three distribution classifier methods are comparable, except for some cases where DRN outperforms other classifiers (e.g. PD adv., IGSM) and where MLP and RF have lower accuracy than voting (e.g. RPN adv., DeepFool and C&W). In the earlier section in Figure 4, we show that after image transformation, the distributions of softmax between the clean and adversarial images show some similarities and distinctive features. In fact, all of the clean (A-H) and adversarial (Ã-H̃) images (class 6) are classified correctly by the distribution classifier. Even though the distribution classifier was only trained on distributions from the clean images (A-H), the distribution classifier can recover the correct class for the adversarial images where voting has failed (Ã-H̃). The distribution classifier does so by learning the distinctive shapes of the distributions associated with the digit class from the clean images, and is able to apply this to the adversarial images with similar distribution shapes. Furthermore, our distribution classifier is able to pick up subtle differences in the distribution features. Figure 6a shows examples of clean images with class label 5 that are correctly classified by our distribution classifier. It is interesting that although the distribution shapes for adversarial images C̃ and G̃ shown in Figure 4 look similar, our distribution classifier is able to distinguish between the shapes for class 5 and 6. We used N = 100 transformed samples in our experiments. Hence, the evaluation time will be 100 times longer than the time taken by a single sample. Here we study the effect of the number of samples. Figures 6b and 6c show the classification accuracies for voting and DRN as the number of transformed samples increases. On the clean images, both voting and DRN accuracies improve with a larger number of samples, with the performance of voting saturating while DRN's performance continues to increase with a widening gap. The widening gap shows that a sufficient number of samples is required to capture the features of the distribution of softmax. On the adversarial images, the accuracies stay more or less the same. Although having more transformed samples is beneficial for the performance on clean images, our distribution classifier improves the voting performance regardless of the number of samples. For the CIFAR10 and CIFAR100 datasets, N = 50 image transformation samples and 10,000 training data were used. Figure 5 (middle) shows the results for CIFAR10. All three distribution classifiers gave comparable improvements over voting, except for MLP which performs worse than voting for adversarial images with RPN on DeepFool. For CIFAR100 (Figure 5, right), the distribution classifiers mostly show improved performance over voting. There are exceptions where DRN (e.g. PD adv., FGSM) and MLP (e.g. RPN adv., DeepFool) have lower accuracy than voting. This suggests that for datasets with more classes, random forest may perform better than other classifiers. As explained in Section 3, in the results in Figure 5, we have excluded clean images which are misclassified by the CNN and the images where the attack has failed.
To check that our method works on these images, we evaluated these images for CIFAR100 with the FGSM attack, random resize and padding, and the random forest classifier. Our results in Table 14 in the Appendix show that our distribution classifier method still outperforms majority voting. In this section, we evaluate end-to-end attacks on our distribution classifier method (with DRN) on the MNIST and CIFAR10 datasets. We use the Boundary Attack, which is a black-box decision-based attack. We performed the attack on the base CNN classifier (CNN), CNN with pixel deflection and voting (Vote), and CNN with pixel deflection and distribution classifier trained on clean images (DRN). In addition, we trained the distribution classifier on a mixture of distributions obtained from both clean and adversarial images obtained with IGSM on the base CNN, which can be seen as a lightweight adversarial training scheme (DRN LAT) except that the CNN is kept fixed. Finally, we tested the attack on an adversarially-trained CNN (Adv trained CNN) with allowed perturbations of L∞ ≤ 0.3. Since the Boundary Attack uses the L2 measure, the adversarially-trained CNN, which uses the L∞ metric, is not expected to perform well for the L2 metric. For details of our implementation of the Boundary Attack, please refer to Appendix E. Figure 7 shows the mean L2 norm of the perturbations over 100 test images, where all models are attacked with a maximum of 5000 iterations. CNN and the adversarially-trained CNN have very low perturbations. The stochastic models, Vote, DRN and DRN LAT, have much higher perturbations with lower quality adversarial images, and the difficulty of the attack increases in that order. This shows that the distribution classifier and the lightweight adversarial training extension are more difficult to attack by the Boundary Attack method compared to voting. Prior work has shown that under the white-box setting, where the attacker has full knowledge of the CNN model and the defense, random transformation defenses are susceptible to further attack by estimating the gradients using multiple transformation samples, in a method called Expectation over Transformation (EOT). To employ a white-box attack on our distribution classifier method, there are a few potential challenges. First, our method uses 50 to 100 transformation samples per image to accumulate the distribution of softmax before feeding into the distribution classifier. Attacking our method with EOT will be very time-consuming as it requires taking multiple batches of transformations, each with 50-100 samples. Next, we have shown our method works with different distribution classifier models, including the non-differentiable random forest classifier. While there have been attacks proposed for random forests, it is unclear how feasible it is to combine these attacks with EOT. We leave the evaluation of white-box attacks on our distribution classifier method for future work. Adversarial attacks on convolutional neural networks have gained significant research attention and stochastic input transformation defenses have been proposed. However, with transformation-based defenses, the performance on clean images deteriorates and the exact mechanism by which this happens is unclear. In this paper, we conduct in-depth analysis on the effects of stochastic transformation-based defenses on the softmax outputs of clean and adversarial images. We observe that after image transformation, the distributions of softmax obtained from clean and adversarial images share similar distinct features.
Exploiting this property, we propose a method that trains a distribution classifier on the distributions of the softmax outputs of transformed clean images only, but show improvements in both clean and adversarial images over majority voting. In our current work, we have considered untargeted attacks on the CNN and it is interesting to test our distribution classifier method with targeted attacks. In Section 3, we studied the 4 distance metrics for the distribution of softmax. Figure 8 shows the distance metrics for all ten MNIST classes with increasing number of pixel deflections. In Figure 9, we show examples where pixel deflection with voting recovers from the adversarial attack. For one of the distribution classifier methods, we adapt a state-of-the-art distribution-to-distribution regression method, called distribution regression network (DRN). DRN encodes an entire distribution in each network node and this compact representation allows it to achieve higher prediction accuracies for the distribution regression task compared to conventional neural networks. Since DRN shows superior regression performance, we adapt DRN for distribution classification in this work. Our adaptation of the distribution classifier is shown on the right of Figure 10. The network consists of fully-connected layers, where each node encodes a distribution. The number of hidden layers and nodes per hidden layer are chosen by cross-validation. The number of discretization bins for each distribution in the input layer and hidden layers is also tuned as a hyperparameter. To adapt DRN for our distribution classification task, for the final layer, we have C nodes representing each class and we use 2 bins for each distribution to represent the logit output for the corresponding class. The cost function for the distribution classifier is the cross entropy loss on the logits. The distribution classifier is optimized by backpropagation using the Adam optimizer. The weight initialization follows prior work, where the weights are sampled from a uniform random distribution. Tables 1 to 3 show the hyperparameter settings used for the adversarial attacks. The attacks are implemented using the CleverHans library. For DeepFool and C&W, the other hyperparameters used are the default values set in CleverHans. For the L2 norm, we use the root-mean-square distortion normalized by the total number of pixels, following previous works. Tables 4 to 6 show the image transformation parameters used for MNIST and CIFAR10 respectively. The hyperparameter tuning for each defense method is conducted on the validation set for each dataset. We select hyperparameters that give the best recovery from adversarial attack, regardless of the deterioration in accuracy on clean images. The pixel deflection defense uses class activation maps (CAMs) to randomly select pixels to undergo the deflection step. In our experiments, we did not use class activation maps and instead randomly select pixels with equal probabilities. First, for the MNIST dataset, CAMs are unsuitable because the LeNet architecture does not have global average pooling layers which are required for CAMs. For the CIFAR10 dataset, the wide ResNet architecture uses a final layer of global average pooling and so we tested CAMs on it. Table 7 compares the performance on clean and adversarial images using the FGSM and IGSM attacks, with and without CAMs, which shows that using CAMs does not cause significant difference in performance. This may be because CAMs are more effective on larger images such as those in ImageNet where there are many more pixels.
Table 4: Details of the image transformation parameters for MNIST. The three transformation-based methods tested are random pixel noise (RPN), pixel deflection (PD) and random resize and padding (RRP). For RPN, the noise magnitude is unnormalized (out of 255). For PD, d is the number of deflections, w is the window size and σ is the denoising parameter. PD (per attack): w=20, σ=0; d=100, w=20, σ=0; d=10, w=20, σ=0.08; d=100, w=25, σ=0.08.
Our defense method uses a distribution classifier to train on distributions of softmax probabilities obtained from transformed samples of the clean images. For each image, we build the marginal distributions of the softmax for each class using kernel density estimation with a Gaussian kernel. The kernel width is optimized to be 0.05. For DRN and MLP, the network architecture of the distribution classifier and optimization hyperparameters are chosen by cross-validation. For random forest, the number of trees and maximum depth of the trees are tuned by cross-validation. The hyperparameters used are shown in Tables 8 to 10 (per dataset and defense, with one entry per attack):
Network architecture (hidden layers x nodes per layer), first set: MNIST: RPN 1x10, 2x10, 1x10, 1x10; PD 1x10, 1x10, 1x20, 1x20; RRP 1x10, 1x10, 1x20, 1x20. CIFAR10: RPN 1x10, 1x20, 1x20, 1x20; PD 1x20, 1x10, 1x20, 1x10; RRP 1x20, 1x10, 1x20, 1x10. CIFAR100: RPN 1x100 for every attack; PD 1x100 for every attack; RRP 1x100 for every attack.
Network architecture (hidden layers x nodes per layer), second set: MNIST: RPN 1x20, 1x20, 1x50, 1x20; PD 1x20, 1x50, 1x50, 1x50; RRP 1x50, 1x20, 1x50, 1x50. CIFAR10: RPN 1x50, 1x50, 1x50, 1x20; PD 1x50, 1x50, 1x50, 1x50; RRP 1x50, 1x50, 1x20, 1x20. CIFAR100: RPN 1x2000, 1x1000, 1x500, 1x500; PD 1x500, 1x1000, 2x100, 1x1000; RRP 1x200, 1x1000, 2x100, 1x1000.
Random forest (n = number of trees, d = maximum depth): MNIST: RPN n=100,d=20; n=100,d=20; n=200,d=50; n=200,d=50. PD n=100,d=20; n=100,d=20; n=200,d=50; n=200,d=50. RRP n=200,d=20; n=100,d=20; n=200,d=50; n=200,d=50. CIFAR10: RPN, PD and RRP all use n=200,d=50 for every attack. CIFAR100: RPN n=200,d=200 for every attack; PD n=200,d=200 for every attack; RRP n=200,d=200; n=200,d=200; n=200,d=100; n=200,d=100.
Here we include the detailed numerical figures for the accuracies of majority voting and the distribution classifier methods. Tables 11 to 13 show the clean and adversarial test accuracies for the 4 attack methods and the 3 defense methods.
Table 13: CIFAR100: For each attack, we compare the clean and adversarial (adv.) test accuracies with majority voting (Vote) and the three distribution classifier methods: distribution regression network (DRN), random forest (RF) and multilayer perceptron (MLP). The three transformation-based defenses are random pixel noise (RPN), pixel deflection (PD) and random resize and padding (RRP). With no defense, the clean accuracy is 100% and the adversarial accuracy is 0%.
For Vote, DRN and DRN LAT, the model outputs are random because of the random image transformation. At each step of the Boundary Attack, we allow the attack to query the model once, and this involves taking 50-100 transformed samples for the image to perform voting or to feed to the distribution classifier to obtain a prediction. To avoid overfitting to a fixed transformation pattern, the transformation is random at each step. Our criterion for an image being adversarial is that out of 5 queries, the image is misclassified at least once.
Because of the randomness of the model, the image returned by Boundary Attack may be classified to the correct class, and we increase the perturbation by increasing amounts until the image is misclassified. Note that to overcome the randomness, we could have performed multiple queries at each attack step, but because our models already use 50-100 transformed samples per query, this will be computationally infeasible.
We enhance existing transformation-based defenses by using a distribution classifier on the distribution of softmax obtained from transformed images.
722
scitldr
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems Reinforcement learning has recently made exciting progress in many domains, including Atari games, the ancient Chinese board game, Go, and complex continuous control tasks involving locomotion BID17 BID26 BID12. While most reinforcement learning paradigms focus on single agents acting in a static environment (or against themselves in the case of Go), real-world agents often compete or cooperate with other agents in a dynamically shifting environment. In order to learn effectively in multi-agent environments, agents must not only learn the dynamics of their environment, but also those of the other learning agents present. To this end, several approaches for multi-agent reinforcement learning have been developed. The simplest approach is to train each agent independently to maximize their individual reward, while treating other agents as part of the environment. However, this approach violates the basic assumption underlying reinforcement learning, that the environment should be stationary and Markovian. Any single agent's environment is dynamic and nonstationary due to other agents' changing policies. As such, standard algorithms developed for stationary Markov decision processes fail. At the other end of the spectrum, all agents can be collectively modeled as a single-agent whose action space is the joint action space of all agents BID2. While allowing coordinated behaviors across agents, this approach is not scalable due to the action space size increasing exponentially with the number of agents. It also demands a high degree of communication during execution, as the central policy must collect observations from and distribute actions to the individual agents. In real-world settings, this demand can be problematic. Recent work BID20 attempts to combine the strengths of these two approaches. In particular, a critic (or a number of critics) is centrally learned with information from all agents. The actors, however, receive information only from their corresponding agents. Thus, during testing, executing the policies does not require the knowledge of other agents' actions. This paradigm circumvents the challenge of non-Markovian and non-stationary environments during learning. Despite those progresses, however, algorithms for multi-agent reinforcement learning are still far from being scalable (to a larger number of agents) and being generically applicable to environments and tasks that are co-operative (sharing a global reward), competitive, or mixed. Our approach extends these prior works in several directions. The main idea is to centrally learn a critic with an attention mechanism. 
The intuition behind our idea is that in many real-world environ-ments, it is beneficial for agents to know what other agents it should pay attention to. For example, a soccer defender needs to pay attention to attackers in their vicinity as well as the player with the ball, while she/he rarely needs to pay attention to the opposing team's goalie. The specific attackers that the defender is paying attention to can change at different parts of the game, depending on the formation and strategy of the opponent. A typical centralized approach to multi-agent reinforcement learning does not take these dynamics into account, instead simply considering all agents at all timepoints. Our attention mechanism is able to dynamically select which agents to attend to at each time point, improving performance in multi-agent domains with complex interactions. The proposed approach has an input space linearly increasing with respect to the number of agents, as opposed to the quadratic increase in a previous approach BID20. It also works well in co-operative, competitive, and mixed environments, exceeding the capability of some prior work that focuses only on co-operative environments.We have validated our approach on two simulated environments and tasks. We plan to release the code for both the model and the environments after the reviewing period ends. The rest of the paper is organized as follows. In section 2, we discuss related work, followed by a detailed description of our approach in section 3. We report experimental studies in section 4 and conclude in section 5. Multi-Agent Reinforcement Learning (MARL) is a long studied problem BID2. Topics within MARL are diverse, ranging from learning communication between cooperative agents (; BID4 to algorithms for optimal play in competitive settings BID19, though, until recently, they have been focused on simple gridworld environments with tabular learning methods. As deep learning based approaches to reinforcement learning have grown more popular, they have, naturally, been applied to the MARL setting BID32 BID9, allowing multi-agent learning in high-dimensional/continuous state spaces; however, naive applications of Deep RL methods to MARL naturally encounter some limitations, such as nonstationarity of the environment from the perspective of individual agents BID6 BID20, lack of coordination/communication in cooperative settings BID29 BID23 BID20 BID5, credit assignment in cooperative settings with global rewards BID25 BID30, and the failure to take opponent strategies into account when learning agent policies BID11 .Most relevant to this work are recent, non-attention approaches that propose an actor-critic framework consisting of centralized training with decentralized execution BID20, as well as some approaches that utilize attention in a fully centralized multi-agent setting BID3 BID14 . BID20 investigate the challenges of multiagent learning in mixed reward environments BID2 . They propose an actor-critic method that uses separate centralized critics for each agent which take in all other agents' actions and observations as input, while training policies that are conditioned only on local information. This practice reduces the non-stationarity of multi-agent environments, as considering the actions of other agents to be part of the environment makes the state transition dynamics stable from the perspective of one agent. In practice, these ideas greatly stabilize learning, due to reduced variance in the value function estimates. 
Similarly, prior work introduces a centralized critic for cooperative settings with shared rewards. Their method incorporates a "counterfactual baseline" for calculating the advantage function which is able to marginalize a single agent's actions while keeping others fixed. This method allows for complex multi-agent credit assignment, as the advantage function only encourages actions that directly influence an agent's rewards. Attention models have recently emerged as a successful approach to intelligently selecting contextual information, with applications in computer vision BID0 BID21, natural language processing BID1 BID18, and reinforcement learning BID24. In a similar vein, BID14 proposed an attention-based actor-critic algorithm for MARL. This work follows the alternative paradigm of centralizing policies while keeping the critics decentralized. Their focus is on learning an attention model for sharing information between the policies. As such, this approach is complementary to ours, and a combination of both approaches could yield further performance benefits in cases where centralized policies are desirable. Our proposed approach is more flexible than the aforementioned approaches for MARL. Our algorithm is able to train policies in environments with any reward setup, different action spaces for each agent, a variance-reducing baseline that only marginalizes the relevant agent's actions, and with a set of centralized critics that dynamically attend to the relevant information for each agent at each time point. As such, our approach is more scalable to the number of agents, and is more broadly applicable to different types of environments. We start by introducing the necessary notation and basic building blocks for our approach. We then describe our ideas in detail. We consider the framework of Markov Games BID19, which is a multi-agent extension of Markov Decision Processes. They are defined by a set of states, S, action sets for each of N agents, A_1, ..., A_N, a state transition function, T : S × A_1 × ... × A_N → P(S), which defines the probability distribution over possible next states, given the current state and actions for each agent, and a reward function for each agent that also depends on the global state and actions of all agents, R_i : S × A_1 × ... × A_N → ℝ. We will specifically be considering a partially observable variant in which an agent, i, receives an observation, o_i, which contains partial information from the global state, s ∈ S. Each agent learns a policy, π_i : O_i → P(A_i), which maps each agent's observation to a distribution over its set of actions. The agents aim to learn a policy that maximizes their expected discounted returns, J_i(π_i) = E[Σ_{t=0}^∞ γ^t r_{i,t}(s_t, a_{1,t}, ..., a_{N,t})], where γ ∈ [0, 1] is the discount factor that determines how much the policy favors immediate reward over long-term gain.
Policy Gradients. Policy gradient techniques BID31 aim to estimate the gradient of an agent's expected returns with respect to the parameters of its policy. This gradient estimate takes the following form: ∇_θ J(π_θ) = E_π[∇_θ log(π_θ(a_t|s_t)) Σ_{t'=t}^∞ γ^{t'-t} r_{t'}(s_{t'}, a_{t'})].
Actor-Critic and Soft Actor-Critic. The term Σ_{t'=t}^∞ γ^{t'-t} r_{t'}(s_{t'}, a_{t'}) in the policy gradient estimator leads to high variance, as these returns can vary drastically between episodes. Actor-critic methods BID16 aim to ameliorate this issue by using a function approximation of the expected returns, and replacing the original return term in the policy gradient estimator with this function.
One specific instance of actor-critic methods learns a function to estimate expected discounted returns, given a state and action, Q ψ (s t, a t) = E[Σ t'=t..∞ γ t'−t r t' (s t', a t')], learned through temporal-difference learning by minimizing the regression loss: L Q (ψ) = E (s,a,r,s')∼D [(Q ψ (s, a) − y)²], with target y = r(s, a) + γ E a'∼π(s') [Qψ̄(s', a')], where Qψ̄ is the target Q-value function. To encourage exploration and avoid converging to non-optimal deterministic policies, recent approaches of maximum entropy reinforcement learning learn a soft value function by modifying the policy gradient to incorporate an entropy term BID10: ∇ θ J(π θ) = E s∼D, a∼π [∇ θ log π θ (a|s) (−α log π θ (a|s) + Q ψ (s, a) − b(s))], where b(s) is a state-dependent baseline (for the Q-value function). The loss function for temporal-difference learning of the value function is also revised accordingly with a new target: y = r(s, a) + γ E a'∼π(s') [Qψ̄(s', a') − α log πθ̄(a'|s')]. While an estimate of the value function V φ (s) can be used as a baseline, we provide an alternative that further reduces variance and addresses credit assignment in the multi-agent setting in section 3.2. The main idea behind our multi-agent learning approach is to learn the critic for each agent by selectively paying attention to other agents' actions. This is the same paradigm of training critics centrally (to overcome the challenge of non-stationary non-Markovian environments) and executing learned policies distributedly. Figure 1 illustrates the main components of our approach. Attention The attention mechanism functions in a manner similar to a differentiable key-value memory model BID24. Intuitively, each agent queries the other agents for information about their observations and actions and incorporates that information into the estimate of its value function. This paradigm was chosen, in contrast to other attention-based approaches, as it doesn't make any assumptions about the temporal or spatial locality of the inputs, as opposed to approaches taken in the natural language processing and computer vision fields. The Q-value for agent i is computed as Q ψ i (o, a) = f i (g i (o i, a i), x i), where f i is a two-layer multi-layer perceptron (MLP), while g i is a one-layer MLP embedding function. The contribution from other agents, x i, is a weighted sum of each agent's value: x i = Σ j≠i α j v j = Σ j≠i α j h(V g j (o j, a j)), where the value, v j, is a function of agent j's embedding, encoded with an embedding function and then linearly transformed by a shared matrix V. h is an element-wise nonlinearity (we have used leaky ReLU). The attention weight α j compares the embedding e j with e i = g i (o i, a i), using a bilinear mapping (i.e., the query-key system) and passes the similarity value between these two embeddings into a softmax: α j ∝ exp(e j ᵀ W k ᵀ W q e i), where W q transforms e i into a "query" and W k transforms e j into a "key". The matching is then scaled by the dimensionality of these two matrices to prevent vanishing gradients. In our experiments, we have used multiple attention heads. In this case, each head, using a separate set of parameters (W k, W q, V), gives rise to an aggregated contribution from all other agents to the agent i and we simply concatenate the contributions from all heads as a single vector. Crucially, each head can focus on a different weighted mixture of agents. Note that the weights for extracting selectors, keys, and values are shared across all agents, which encourages a common embedding space. The sharing of critic parameters between agents is possible, even in adversarial settings, because multi-agent value-function approximation is, essentially, a multi-task regression problem.
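To make the query-key-value computation above concrete, here is a minimal PyTorch-style sketch of a single attention head inside a shared centralized critic. It is an illustration of the mechanism described in the text, not the authors' released implementation: for brevity the embedding, query/key/value transforms, and output MLP are fully shared across agents (the paper keeps f_i and g_i per-agent and concatenates several heads), and the sizes and module names are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadAttentionCritic(nn.Module):
    """One attention head of a centralized critic (illustrative sketch)."""
    def __init__(self, obs_dim, act_dim, embed_dim=128):
        super().__init__()
        # g: embeds an agent's own observation and action (per-agent in the paper)
        self.embed = nn.Linear(obs_dim + act_dim, embed_dim)
        # shared query / key / value transforms, as described in the text
        self.W_q = nn.Linear(embed_dim, embed_dim, bias=False)
        self.W_k = nn.Linear(embed_dim, embed_dim, bias=False)
        self.V = nn.Linear(embed_dim, embed_dim, bias=False)
        # f: two-layer MLP producing the Q-value from (e_i, x_i)
        self.q_net = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.LeakyReLU(),
            nn.Linear(embed_dim, 1))

    def forward(self, obs, acts):
        # obs, acts: lists with one (batch, dim) tensor per agent
        e = [self.embed(torch.cat([o, a], dim=-1)) for o, a in zip(obs, acts)]
        q_values = []
        for i in range(len(e)):
            query = self.W_q(e[i])                                        # (batch, d)
            others = [j for j in range(len(e)) if j != i]
            keys = torch.stack([self.W_k(e[j]) for j in others], dim=1)   # (batch, N-1, d)
            vals = torch.stack([F.leaky_relu(self.V(e[j])) for j in others], dim=1)
            # scaled bilinear matching between agent i's query and the others' keys
            logits = torch.einsum('bd,bnd->bn', query, keys) / keys.shape[-1] ** 0.5
            alpha = F.softmax(logits, dim=-1)                             # attention weights
            x_i = (alpha.unsqueeze(-1) * vals).sum(dim=1)                 # weighted sum of values
            q_values.append(self.q_net(torch.cat([e[i], x_i], dim=-1)))
        return q_values
```

In the multi-head variant each head keeps its own (W_q, W_k, V) and the per-head x_i vectors are concatenated before the final MLP.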
This method can easily be extended to include additional information, beyond local observations and actions, at training time, including the global state if it is available, simply by adding additional encoders. (We do not consider this case in our experiments, however, as our approach is effective in combining local observations to predict expected returns in environments where the global state may not be available.) Learning with Attentive Critics All critics are updated together to minimize a joint regression loss function, due to the parameter sharing: L Q (ψ) = Σ i=1..N E (o,a,r,o')∼D [(Q ψ i (o, a) − y i)²], where y i = r i + γ E a'∼πθ̄(o') [Qψ̄ i (o', a') − α log πθ̄ i (a' i |o' i)], where ψ̄ and θ̄ are the parameters of the target critics and target policies respectively. Note that Q ψ i, the action-value estimate for agent i, receives observations and actions for all agents. α is the temperature parameter determining the balance between maximizing entropy and rewards. The individual policies are updated with the following gradient: ∇ θ i J(π θ) = E o∼D, a∼π [∇ θ i log π θ i (a i |o i) (−α log π θ i (a i |o i) + Q ψ i (o, a) − b(o, a \i))], where b(o, a \i) is the multi-agent baseline used to calculate the advantage function described in the following section. Note that we are sampling all actions, a, from all agents' current policies in order to calculate the gradient estimate for agent i, unlike in the MADDPG algorithm BID20, where the other agents' actions are sampled from the replay buffer, potentially causing overgeneralization where agents fail to coordinate based on their current policies. Full training details and hyperparameters can be found in Appendix 6.1. As shown in prior work, an advantage function using a baseline that only marginalizes out the actions of the given agent from Q ψ i (o, a) can help solve the multi-agent credit assignment problem. In other words, by comparing the value of a specific action to the value of the average action for the agent, with all other agents fixed, we can learn whether said action will cause an increase in expected return or whether any increase in reward is attributed to the actions of other agents. The form of this advantage function is shown below: A i (o, a) = Q ψ i (o, a) − b(o, a \i), where b(o, a \i) = E a i ∼π i (o i) [Q ψ i (o, (a i, a \i))]. Using our attention mechanism, we can implement a more general and flexible form of a multi-agent baseline that, unlike the advantage function proposed in prior work, doesn't assume the same action space for each agent, doesn't require a global reward, and attends dynamically to other agents, as in our Q-function. This is made simple by the natural decomposition of an agent's encoding, e i, and the weighted sum of encodings of other agents, x i, in our attention model. Concretely, in the case of discrete policies, we can calculate our baseline in a single forward pass by outputting the expected return Q i (o, (a i, a \i)) for every possible action, a i ∈ A i, that agent i can take. We can then calculate the expectation exactly: E a i ∼π i (o i) [Q i (o, (a i, a \i))] = Σ a' i ∈A i π i (a' i |o i) Q i (o, (a' i, a \i)). In order to do so, we must remove a i from the input of Q i, and output a value for every action. We add an observation-encoder, e i = g o i (o i), for each agent, using these encodings in place of the e i = g i (o i, a i) described above, and modify f i such that it outputs a value for each possible action, rather than the single input action. In the case of continuous policies, we do not need to add any parameters, as we can simply estimate the expectation in Equation 9 by sampling actions from our policy and averaging their Q-values, though this comes at the cost of multiple expensive passes through the network. 4.1 SETUP Figure 2: (a) Cooperative Treasure Collection.
The small grey agents are "hunters" who collect the colored treasure, and deposit them with the correctly colored large "bank" agents.(b) Rover-Tower. Each grey "Tower" is paired with a "Rover" and a destination (color of rover corresponds to its destination). Their goal is to communicate with the "Rover" such that it moves toward the destination. We construct two environments that test various capabilities of our approach (MAAC) and baselines. We investigate in two main directions. First, we study the scalability of different methods as the number of agents grows. We hypothesize that the current approach of concatenating all agents' observations (often used as a global state to be shared among agents) and actions in order to centralize critics does not scale well. To this end, we implement a cooperative environment, Cooperative Treasure Collection, with shared rewards where we can vary the total number of agents. The experimental in sec 4.3 validate our claim. Secondly, we want to evaluate each method's ability to attend to information relevant to rewards. Moreover, the relevance (to rewards) can dynamically change during an episode. This is analogous to real-life tasks such as the soccer example presented earlier. To this end, we implement a Rover-Tower task environment where randomly paired agents communicate information and coordinate. The two environments are implemented in the multi-agent particle environment framework 1 introduced by BID23, and extended by BID20. We found this framework useful for creating environments involving complex interaction between agents, while keeping the control and perception problems simple, as we are primarily interested in addressing agent interaction. To further simplify the control problem, we use discrete action spaces, allowing agents to move up, down, left, right, or stay; however, the agents may not immediately move exactly in the specified direction, as the task framework incorporates a basic physics engine where agents' momentums are taken into account. Fig. 2 illustrates the two environments. Cooperative Treasure Collection The cooperative environment in Figure 2a ) involves 8 total agents, 6 of which are "treasure hunters" and 2 of which are "treasure banks", which each correspond to a different color of treasure. The role of the hunters is to collect the treasure of any color, which re-spawn randomly upon being collected (with a total of 6), and then "deposit" the treasure into the correctly colored "bank". The role of each bank is to simply gather as much treasure as possible from the hunters. All agents are able to see each others' positions with respect to their own. Hunters receive a global reward for the successful collection of treasure and all agents receive a global reward for the depositing of treasure. Hunters are additionally penalized for colliding with each other. As such, the task contains a mixture of shared and individual rewards and requires different "modes of attention" which depend on the agent's state and other agents' potential for affecting its rewards. Rover-Tower The environment in Figure 2b involves 8 total agents, 4 of which are "rovers" and another 4 which are "towers". At each episode, rovers and towers are randomly paired. The pair is negatively rewarded by the distance of the rover to its goal. The task can be thought of as a navigation task on an alien planet with limited infrastructure and low visibility. 
The rovers are unable to see in their surroundings and must rely on communication from the towers, which are able to locate the rovers as well as their destinations and can send one of five discrete communication messages to their paired rover. Note that communication is highly restricted and different from centralized policy approaches BID14, which allow for free transfer of continuous information among policies. In our setup, the communication is integrated into the environment (in the tower's action BID17 space and the rover's observation space), rather than being explicitly part of the model, and is limited to a few discrete signals. We compare to two recently proposed approaches for centralized training of decentralized policies: MADDPG BID20 and COMA, as well as a single-agent RL approach, DDPG, trained separately for each agent. In order to enable learning in discrete action spaces for both MADDPG and DDPG, where deterministic policies are not possible, we use the Gumbel-Softmax reparametrization trick BID13. We will refer to these modified versions as MADDPG (Discrete) and DDPG (Discrete). For a detailed description of this reparametrization, see the appendix 6.2. We use soft actor critic to optimize. Thus, in order to have fair comparisons, we additionally implement MADDPG and COMA with Soft Actor-Critic, named as MADDPG+SAC and COMA+SAC.We also consider an ablated version of our model as a variant of our approach. In this model, we use uniform attention by fixing the attention weight α j (Eq. 6) to be 1/(N − 1). This restriction prevents the model from focusing its attention on specific agents. All methods are implemented such that their approximate total number of parameters (across agents) are equal to our method, and each model is trained with 6 random seeds each. Hyperparameters for each underlying algorithm are tuned based on performance and kept constant across all variants of critic architectures for that algorithm. A thorough comparison of all baselines is summarized in TAB0. FIG1 illustrates averaged rewards per episode by various methods. The proposed approach (MAAC) is competitive with other approaches being compared. In what follows, we provide detailed analysis. Uniform attention is competitive with our approach in the Cooperative Treasure Collection (CTC) environment, but not in Rover-Tower. On the other hand, both MADDPG (Discrete) and MADDPG+SAC perform well on Rover-Tower, though they do not on CTC. Both variants of COMA do not fare well in our environments. DDPG, arguably a weaker baseline, performs surprisingly well in CTC, but does poorly in Rover-Tower. In CTC, the rewards are shared across agents thus an agent's critic does not need to focus on information from specific agents in order to calculate its expected rewards. Moreover, each agent's local observation provides enough information to make a decent prediction of its expected rewards. This might explain why MAAC (uniform) which attends to other agents equally, and DDPG (being very unattentive to other agents) perform well. On the other hand, rewards in the Rover-Tower environment for a specific agent are tied to another single agent's observations. This environment exemplifies a class of scenarios where dynamic attention can be beneficial: when subgroups of agents are interacting and performing coordinated tasks with separate rewards, but the groups do not remain static. 
This explains why MAAC (uniform) performs poorly and DDPG completely breaks down, as knowing information from another specific agent is crucial in predicting expected rewards. COMA uses a single centralized network for predicting Q-values for all agents with separate forward passes. Thus, this approach may perform best in environments with global rewards and agents with similar action spaces. However, our environments have agents with differing roles (and non-global rewards in the case of Rover-Tower). Thus both variants of COMA do not fare well. MADDPG (and its variant) is a very strong method. However, we suspect its low performance in CTC is due to this environment's relatively large observation spaces for all agents, as the MADDPG critic concatenates observations for all agents into a single input vector for each agent's critic. Our next experiments confirm this hypothesis. Scalability We compare the average rewards attained by both approaches (normalized by the range of rewards attained in the environment, as varying the number of agents changes the nature of rewards in each environment), and show that the improvement of our approach MAAC over MADDPG+SAC grows with respect to the number of agents. As suspected, MADDPG-like critics use all information non-selectively, while our approach can learn which agents to pay more attention to through the attention mechanism. Thus our approach scales better when the number of agents increases. In future research we will continue to improve the scalability when the number of agents further increases by sharing policies among agents, and performing attention on sub-groups (of agents). While the Rover-Tower task has a lot of agents, each agent only gets information about its paired agent; in other words, the task itself has an intrinsically smaller number of "other agents" (conditioned on each agent) than the CTC environment. As a future direction, we are creating more complicated environments where each agent needs to cope with a large group of agents where selective attention is needed. This naturally models real-life scenarios in which multiple agents are organized in clusters/sub-societies (school, work, family, etc.) and each agent needs to interact with a small number of agents from many groups. We anticipate that in such complicated scenarios, our approach, combined with some advantages exhibited by other approaches, would do well. We propose an algorithm for training decentralized policies in multi-agent settings. The key idea is to utilize attention in order to select relevant information for estimating critics. We analyze the performance of the proposed approach with respect to the number of agents, different configurations of rewards, and the span of relevant observational information. Empirical results are promising and we intend to extend to highly complicated and dynamic environments. (Appendix algorithm listing, summarized: for j = 1 ... num critic updates, update the critics by minimizing the joint regression loss, update the policies with the gradient of section 3.1, and then softly update the target critic and target policy parameters.) We train using Soft Actor-Critic BID10, an off-policy, actor-critic method for maximum entropy reinforcement learning. Our training procedure consists of performing 12 parallel rollouts, and adding a tuple of (o t, a t, r t, o t+1) 1...N to a replay buffer (with maximum length 1e6) for each timepoint. We reset each environment after every 100 steps (an episode). After 100 steps (across all rollouts), we perform 4 updates for the attention critic and for all policies.
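Before listing the remaining hyperparameters, here is a compact, self-contained sketch of the two update rules referred to above: the soft (maximum-entropy) TD target used in the joint critic loss, and the Polyak update of the target networks. This is a generic implementation of these standard operations, assuming PyTorch; it is not the authors' code.

```python
import torch

def soft_update(target_net, source_net, tau):
    """Polyak averaging of target parameters: psi_bar <- (1 - tau) * psi_bar + tau * psi."""
    with torch.no_grad():
        for t_param, s_param in zip(target_net.parameters(), source_net.parameters()):
            t_param.mul_(1.0 - tau).add_(tau * s_param)

def soft_td_target(reward, done, next_q, next_log_pi, gamma=0.99, alpha=0.2):
    """Soft TD target y = r + gamma * (Q_target(s', a') - alpha * log pi(a'|s'))."""
    return reward + gamma * (1.0 - done) * (next_q - alpha * next_log_pi)

def joint_critic_loss(q_preds, targets):
    """Sum of per-agent regression losses; all critics are updated together due to parameter sharing."""
    return sum(((q - y.detach()) ** 2).mean() for q, y in zip(q_preds, targets))
```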
For each update we sample minibatches of 1024 timepoints from the replay buffer and then perform gradient descent on the Q-function loss objective, as well as the policy objective, using Adam BID15 as the optimizer for both with a learning rate of 0.001. These updates can be computed efficiently in parallel (across agents) using a GPU. After the updates are complete, we update the parameters ψ̄ of our target critic Qψ̄ to move toward our learned critic's parameters, ψ, as in Lillicrap et al.; BID10: ψ̄ ← (1 − τ)ψ̄ + τψ, where τ is the update rate (set to 0.002 for attention parameters and 0.04 for all other parameters). Using a target critic has been shown to stabilize the use of experience replay for off-policy reinforcement learning with neural network function approximators BID17. We update the parameters of the target policies, θ̄, in the same manner. We use a discount factor, γ, of 0.99. All networks (separate policies and those contained within the centralized critics) use a hidden dimension of 128 and Leaky Rectified Linear Units as the nonlinearity. We use 0.2 as our temperature setting for Soft Actor-Critic. Additionally, we typically use 4 attention heads in our attention critics unless otherwise specified. In order to compare to DDPG and MADDPG in our environments with discrete action spaces, we must make a slight modification to the basic algorithm. This modification was first suggested by BID20 in order to enable policies that output discrete communication messages. Consider the original DDPG policy gradient, which takes advantage of the fact that you can easily calculate the gradient of the output of a deterministic policy with respect to its parameters: ∇ θ J = E s∼D [∇ a Q(s, a) ∇ θ a], where a = μ θ (s) is the deterministic policy output. Rather than policies that deterministically output an action from within a continuous action space, we use policies that produce differentiable samples through a Gumbel-Softmax distribution BID13. Using differentiable samples allows us to use the gradient of expected returns to train policies without using the log derivative trick, just as in DDPG: ∇ θ J = E s∼D, a∼π(s) [∇ a Q(s, a) ∇ θ a]. In order to understand how the use of attention evolves over the course of training, we examine the "entropy" of the attention weights for each agent for each of the four attention heads that we use in both tasks (Figures 4 and 5). The black bars indicate the maximum possible entropy (i.e. uniform attention across all agents). Lower entropy indicates that the head is focusing on specific agents, with an entropy of 0 indicating attention focusing on one agent. In Rover-Tower, we plot the attention entropy for each rover. Interestingly, each agent appears to use a different combination of the four heads, but their use is not mutually exclusive, indicating that the inclusion of separate attention heads for each agent is not necessary. This differential use of attention heads is sensible due to the nature of rewards in this environment (i.e. individualized rewards). In the case of Treasure Collection, we find that all agents use the attention heads similarly, which is unsurprising considering that rewards are shared in that environment. In order to inspect how the attention mechanism is working on a more fine-grained level, we visualize the attention weights for one of the rovers in Rover-Tower (Figure 6), from the head that the agent appears to use the most (determined by looking at Figure 4), while changing the tower that said rover is paired to.
In these plots, we ignore the weights over other rovers for simplicity since these are always near zero. We find that the rover learns to strongly attend to the tower that it is paired with, without any explicit supervision signal to do so. The model implicitly learns which agent is most relevant to estimating the rover's expected future returns, and said agent can change dynamically without affecting the performance of the algorithm. (Figure 5 caption: Attention "entropy" for each head over the course of training for two collectors in the Treasure Collection Environment. Figure 6 caption: Attention weights when subjected to different Tower pairings for Rover 1 in the Rover-Tower environment.) In order to test our model's ability to handle continuous action spaces, we add a network for each agent to learn a state-value function V i (o, a \i), which uses the same weighted attention embedding over other agents as Q i (o, a). The loss functions to learn both networks are provided by BID10. We test on an environment introduced in BID20 called Cooperative Navigation and compare to MADDPG. Our results are presented in TAB3. This task does not require attention, as all agents are relevant to each other's rewards at each time step. As such, it is unsurprising that our approach matches but does not surpass the performance of MADDPG. It is notable, however, both that attention does not harm performance in simple cases and that our approach handles continuous action spaces as well.
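Returning to the discrete-action reparametrization used for the DDPG and MADDPG baselines above, a minimal sketch of Gumbel-Softmax sampling is shown below. This is a generic implementation of the trick, assuming PyTorch (recent versions also ship it as torch.nn.functional.gumbel_softmax); it is not the authors' code.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=1.0, hard=True):
    """Draw a differentiable, approximately one-hot sample from a categorical distribution.

    Adding Gumbel noise to the logits and applying a softmax yields a sample whose
    gradient flows back into the policy network, so the DDPG-style gradient
    E[grad_a Q(s, a) * grad_theta a] can be used even with discrete actions.
    """
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel_noise) / temperature, dim=-1)
    if hard:
        # straight-through estimator: forward pass uses the one-hot argmax,
        # backward pass uses the smooth y_soft
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
        return (y_hard - y_soft).detach() + y_soft
    return y_soft
```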
We propose an approach to learn decentralized policies in multi-agent settings using attention-based critics and demonstrate promising results in environments with complex interactions.
723
scitldr
The recent expansion of machine learning applications to molecular biology proved to have a significant contribution to our understanding of biological systems, and genome functioning in particular. Technological advances enabled the collection of large epigenetic datasets, including information about various DNA binding factors (ChIP-Seq) and DNA spatial structure (Hi-C). Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains (TADs) in DNA structure. However, the information about physical proximity represented by genomic coordinate was not yet used for the improvement of the prediction models. In this research, we focus on Machine Learning methods for prediction of folding patterns of DNA in a classical model organism Drosophila melanogaster. The paper considers linear models with four types of regularization, Gradient Boosting and Recurrent Neural Networks for the prediction of chromatin folding patterns from epigenetic marks. The bidirectional LSTM RNN model outperformed all the models and gained the best prediction scores. This demonstrates the utilization of complex models and the importance of memory of sequential DNA states for the chromatin folding. We identify informative epigenetic features that lead to the further of their biological significance. Machine Learning algorithms are used nowadays in multiple disciplines. In particular, the utilization of these methods in molecular biology has a significant impact on our understanding of cell processes . Investigating the large-scale DNA structure, i.e. the spatial organization of the genome, or chromatin, is one of the challenging tasks in the field. The relevance of this research is supported by multiple observations of interconnections between gene regulation, inheritance, disease and chromatin structure (Lupiáñez et al., 2016). Although the chromatin structure is folded 10 4 − 10 5 times, it maintains fundamental and vital processes of the cell. Various regulation mechanisms were shown to act through the three-dimensional structure formation. High-throughput experiments capturing contacting fragments of the genome, such as Hi-C, have unravelled many principles of chromosomal folding . Although Hi-C-like techniques were developed ten years ago, the experiments of high quality started to be published mainly during the last several years, and the protocol is still elaborate and expensive. Hi-C has also revealed that chromosomes are subdivided into a set of self-interacting domains called Topologically Associating Domains (TADs) that can be seen in Figure 1. TADs were shown to correlate with units of replication timing regulation in mammals , as well as with either active or repressed epigenetic domains in Drosophila . Various factors were shown to contribute to structure formation. ChIP-Seq is one of the highthroughput experiments dedicated to the detection of factors binding on the DNA in vivo. The rapid growth of its data enables exploring the chromatin structure with more sophisticated and complex methods such as Machine Learning. The datasets for various factors such as ChIP-Seq experiments for histone modifications become increasingly available in public databases . The relationship between TADs and epigenetics marks has been investigated recently . However, the mechanisms that underlie partitioning of the genome into TADs remain poorly understood. Moreover, there is no comprehensive work investigating all the factors that are publicly available yet. 
Figure 1: Typical representation of Hi-C interaction map as a genome-wide contact matrix, or a heatmap. Bright triangles can be visible across the diagonal. These structures are called TADs (topologically associating domains) and interpreted as compact globules of interacting chromatin. Drosophila melanogaster S2-DRSC cells This study focuses on bringing insights into the 3D chromatin structure using Machine Learning. The goal is to explore the principles of TAD folding and the role of epigenetics in this process. To that end, the analysis of Drosophila melanogaster chromatin was performed using Linear Regression models and Recurrent Neural Networks. Quality metrics were calculated, and informative features were investigated to identify which of chromatin marks are most significant in predicting information about TADs. In addition, the same techniques might be used to explore the 3D chromatin structure of mammals and humans in particular. Such reconstruction of the information about Hi-C map might be useful not only for understanding the chromatin structure formation but can also have various practical medical applications. For example, gliomagenesis and limb malformations in humans were demonstrated to be caused by chromosomal topology disruption . Over the last decade, the volume of produced data has significantly increased and brought the opportunity of applying complex and efficient methods. Several other studies were focused on predicting the 3D chromatin architecture using Machine Learning methods. One approach to this problem is to use the Hi-C map as input of the model, for example, Cristescu et al. presented the REcurrent Autoencoders for CHromatin 3D structure prediction (REACH-3D). REACH-3D reconstructs the chromatin structure, recovers several biological properties and have high correlation with microscopy measurements. However, another approach is to predict the information about the chromatin structure from other types of biological characteristics. In particular, Schreiber et al. considered nucleotide sequence as input for a deep Convolutional Neural Network. The objective of this architecture was to estimate the Hi-C contacts. This Neural Network demonstrated that the predicted outcomes are related to histone modification, selected functional elements and replication timing which correlates with theoretical knowledge. Moreover, another work that inspired this research was made by Ulianov et al. . They suggested that active chromatin and transcription play a key role in chromosome partitioning into TADs. It was shown that numerous transient interactions between nucleosomes of inactive chromatin lead to the formation of TADs that are potentially highly dynamic self-organized structures. On the other hand, nucleosomes, that tend to interact less often, influence the formation of inter-TADs and TAD boundaries. Ulianov et al. showed that active chromatin marks were preferably present at TAD borders, and repressive histone modifications that reflect nucleosomes occupancy were depleted within inter-TADs, which reveals the correlation between TADs and chromatin marks. Fortin et al. in succeeded in extracting knowledge from ChIP-Seq data of histone modification to analyze the chromatin structure. They constructed a predictive model of the Hi-C that unrevealed the correlation with replication timing, which proves the hypothesis of the possibility of extracting information about the Hi-C contacts from Nucleotide Sequence and DNaseI assay signal of Homo sapiens cell lines. 
A principal difference from all described works is that our model explores the 3D chromatin characteristics of Drosophila melanogaster using a set of ChIP-Seq data as input. To the best of our knowledge, no other published work was conducted to predict Topologically Associated Domain characteristics from epigenetic marks. 3 DATA 3.1 INPUT DATA Hi-C datasets for Drosophila melanogaster S2-DRSC cells were collected from Ulyanov et al. The Drosophila dm3 genome assembly was subdivided into 5950 sequential genomic regions called bins, where each bin corresponds to 20,000 (20-Kb) DNA base pairs. Each bin can be described by a number of epigenetic features, estimated by ChIP-Seq. We downloaded all epigenetic datasets available at the moment from the modENCODE database and processed them identically. Based on the current model of chromatin formation in Drosophila, we distinguish two ChIP-Seq sets. The first set has five biologically significant features: Chriz, CTCF, Su(Hw), H3K27me3, H3K27ac. The second set contains Chriz, CTCF, Su(Hw), H3K27me3, H3K27ac, BEAF-32, CP190, Smc3, GAF, H3K36me1, H3K36me3, H3K4me1, H3K9ac, H3K9me1, H3K9me2, H3K9me3, H4K16ac. For normalization of the input data, each feature was centred by mean and scaled by variance (a minimal preprocessing sketch is given after the problem definition below). The example of eight original ChIP-Seq features and their transformation is seen in the Appendix. Topologically Associating Domains (TADs) can be represented as the segmentation of the genome into discrete regions. However, this segmentation is dependent on one or several parameters, corresponding to the characteristic size of TADs. We sought to avoid the problem of parameter selection in our approach. Thus we adopted the approach of prior work and calculated a local characteristic of TAD formation of the genome, namely, the transitional gamma. The procedure of calculation is briefly described below. Armatus software is used to annotate Topologically Associating Domains (TADs) with a scaling parameter gamma that determines the average size and the number of TADs. When gamma is fixed, each genomic bin is annotated as part of a TAD, inter-TAD or TAD boundary, as part of the segmentation. We characterized each bin by the scaling parameter gamma at which this bin switches from being a part of a TAD to being a part of an inter-TAD or a TAD boundary. The higher the gamma value, the smaller the TADs are in the Armatus annotation. See the illustration in Figure 2. Whole-genome Hi-C maps of chromatin folding in a set of S2-DRSC Drosophila cells were taken and processed as in the works cited above. To avoid ambiguity, let us clearly define our Machine Learning problem. -The objects are "bins": DNA sections of Drosophila melanogaster of length 20,000 nucleotides, with no intersection (see Introduction and Section 3.1 for more details). -The features are ChIP-Seq epigenetic data on chromatin markers (Section 3.1). -The target value is the transitional gamma: the parameter of transition from TAD to inter-TAD or TAD boundary (Section 3.2). Higher gamma values correspond to smaller TADs. The transitional gamma is the value of gamma at which a genomic bin switches from being a part of a TAD to being a part of an inter-TAD or a TAD boundary. The histogram of the target value (transitional gamma) is presented in the right part of Figure 2. -The task is to predict the characteristic of the 3D structure of chromatin, the transitional gamma.
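To illustrate the setup just defined (bins as objects, ChIP-Seq signals as features, transitional gamma as the target), here is a minimal scikit-learn-style preprocessing sketch. The file name, column names, and exact split procedure are hypothetical placeholders, not part of the original pipeline.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per 20-Kb genomic bin, one column per ChIP-Seq mark,
# plus a column with the bin's transitional gamma value.
bins = pd.read_csv("dm3_bins_chipseq.csv")
feature_cols = ["Chriz", "CTCF", "Su(Hw)", "H3K27me3", "H3K27ac"]   # the 5-feature set
X = bins[feature_cols].values
y = bins["transitional_gamma"].values

# Each feature is centred by its mean and scaled by its variance, as described above.
X = StandardScaler().fit_transform(X)

# 70% train / 20% test / 10% validation split, as used in the experiments.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=1/3, random_state=0)
```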
The aim is to identify which chromatin marks (ChIP-Seq data) are most significant in predicting information about the Topologically Associating Domains (TADs). As described in Section 3.2, the target, the transitional gamma, is a continuous value from 0 to 10, which leads to solving a regression problem. The classical optimisation function for this type of problem is Mean Square Error (MSE). However, the distribution of the target is significantly unbalanced (Figure 2). The target value of most of the objects is in the range between 0 and 3. Nevertheless, the contribution of the error on objects with a high true value of the target will also be high in the total score when using Mean Square Error. Moreover, the biological nature of objects with a high value of the transitional gamma is different from other objects. For DNA bins with a transitional gamma value equal to 10, the gamma value at which the bin passed from the TAD state to the inter-TAD or TAD boundary was not found. To build a model that accurately predicts the values of the transitional gamma for most objects, we have introduced our own custom loss function called modified weighted Mean Square Error (wMSE). It can be formulated as MSE multiplied by a weight (penalty) of the error that depends on the true value of the target variable: wMSE = (1/n) Σ i=1..n (y predi − y truei)² · (α − y truei), where n is the number of data points, y truei is the true value for data point number i, y predi is the predicted value for data point number i, and α is equal to the maximum of the y true values increased by 1 to avoid multiplying the error by 0. As a result, the model is penalized less for errors on objects with a high value of the transitional gamma. The maximum value of the target in the transitional gamma dataset is 10, thus α is equal to 11. To explore the relationships between the 3D chromatin structure and epigenetics data, we built Linear Regression (LR) models, Gradient Boosting (GB) Regressors and Recurrent Neural Networks (RNN). The LR models were applied with no regularization, with L1 or L2 regularization, or with both. All the models were trained using the wMSE loss function. The Linear models were chosen to create a benchmark for this problem, as no other ML pipeline is publicly available for this dataset. They also allow intuitive feature importance interpretation. It is worth mentioning that our input bins are sequentially ordered in the genome. Due to DNA connectivity and local properties of clustering, the target variable values might be vastly correlated. Thus, in order to increase the chance of learning this property of the biological data, we selected RNN models. DNA is a long structured molecule formed out of nucleotides arranged in a linear sequence. DNA is double-stranded, which means each nucleotide has a complementary pair, together called a base pair. A DNA molecule might be several million base pairs (Mb) long and serves as the storage and the means of utilization of genetic information. The information content of DNA is equivalent if read in the forward and reverse direction, thus all local properties of its sequence should be independent of the selected direction. To use this property of the DNA molecule, we selected a bidirectional LSTM RNN architecture. The index of the middle bin is calculated as the floor division of the length of the input by 2. The variable parameters that we investigated in our LSTM model are: -A sequence length of input RNN objects, i.e. a set of consecutive DNA bins with a fixed input length called window size, from 1 to 10.
-Number of LSTM Units: 1, 4, 8, 16, 32, 64, 128, 256. -Number of training Epochs: 1, 4, 8, 16, 32, 64, 128, 256, 512. Early Stopping to automatically identify the optimal number of training epochs was used for the final models. -Loss function: weighted Mean Square Error (wMSE), our custom evaluation function defined in Section 4. -Optimizer: Adam. The data was always randomly separated into three groups: a train dataset with 70% of the data, 20% for the test dataset and 10% for validation. For each type of model, we have performed training several times to get more consistent results. The results are presented for ten experiments. The weighted Mean Square Error (wMSE) that is defined in Section 4.2 was calculated for each experiment. The best weighted Mean Square Error score using Linear Regression with L1 and L2 regularization (Elastic Net model) was achieved with parameter alpha equal to 0.2, selected by grid search. The wMSE of these experiments on the train and test datasets was computed and is presented in Table 1. The values of MSE, MAE and R 2 can be found in Table 2 and Figure 10, where LR denotes Linear Regression models and GB-X denotes Gradient Boosting models with X estimators. Feature importance can be analyzed by exploring the weight coefficients of the Linear models. The prediction is formed by multiplying each weight by the corresponding feature. Thus, a larger absolute value of a feature's weight results in a stronger influence of this particular feature on the prediction of the model. Thus we were able to extract a prioritization in terms of the influence of the features. After performing experiments on the first dataset with five ChIP-Seq characteristics, the resulting weights happen to be significantly stable, as shown in the table of feature coefficients of Linear Regression (Figure 4). As a result, we obtain that the most valuable feature in terms of the absolute value of the weight is Chriz, followed by CTCF, H3K27ac and H3K27me3, while the weight of Su(Hw) is the smallest. We applied the same approach to the second dataset of ChIP-Seq characteristics. In comparison to the dataset of five features, the coefficient ordering by absolute weight values is less stable (a table with a sorting of the indexes of features by their weights can be seen in the Supplementary Section). The numbers of occurrences of each of the feature indexes in the list of most influencing features were calculated. We have sorted the features based on this frequency number. Chriz proved to be the most robustly reproduced influential factor. CTCF and CP190 were identified as the second most significant factors. Another result worth mentioning is the selection of only one important feature, Chriz, in both datasets when using Linear Regression with L1 regularization (a visualization can be found in the supplementary materials). We also implemented Gradient Boosting (GB) for regression. The GB additive model outperformed the linear regression models. However, there was a strong tendency to over-fitting for a wide range of variable parameters such as the number of estimators, learning rate, maximum depth of the individual regression estimators, and minimum number of samples required to split an internal node. The best results were observed when setting'n estimators': 100,'max depth': 3, and'n estimators': 250,'max depth': 4,'learning rate': 0.01, and they are presented below in Table 1. The main Neural Network that we were exploring is the Bidirectional LSTM.
As described above, the sequential relationship of the input objects in terms of the physical distance in the DNA justifies the usage of Recurrent Neural Networks. For each variation of parameters, experiments were conducted and evaluation metrics were calculated (tables with results can be found in Section 9). To explore the dependency of the weighted Mean Square Error on the sequence length, Bidirectional RNN models were trained with different input window sizes and numbers of LSTM Units. The result is shown in Figure 5, where an optimal configuration of input window size 6 and 64 LSTM Units was revealed. This has a clear biological interpretation, as the typical size of TADs is around 120 Kb, which corresponds to the 6 bins of 20,000 base pairs that turned out to have the strongest prediction scores. As a result, the Bidirectional LSTM Recurrent Neural Network with 64 LSTM Units and a sequence of 6 bins taken as input was trained and achieved better evaluation scores than a constant prediction, the Linear models and the Gradient Boosting models (Table 1). The constant prediction was made using the mean value of the training dataset. To explore the importance of each feature X from the input space, we replaced the values of the corresponding column of the feature matrix with zeros. Further, we calculate the evaluation metrics and check how significantly different they are from those obtained while using the complete set of data (Figure 8). The wMSE results on the test set do not differ dramatically from those obtained using the full feature set. When we drop out each of the five features, we get the same score of around 0.9, which is almost equal to using all of them together. This means that our RNN is able to achieve the same score with four out of the five features. The results of applying the same technique, omitting each feature one by one in the second dataset of ChIP-Seq features, allowed the evaluation of the biological impact of the features. These wMSE scores are presented in Figure 6, as well as the results of training the model on all features together. The difference between the wMSE using all the features and omitting each one separately is presented in Figure 6. This provides the opportunity of identifying how valuable a particular biological characteristic is for the RNN. The usability of ChIP-Seq data for chromatin folding pattern prediction was confirmed by training ML models that achieved good evaluation scores. Moreover, the results were interpretable and biologically relevant. Linear Regression models, Gradient Boosting Trees and Recurrent Neural Networks were for the first time applied to our new dataset of chromatin characteristics. All models performed better than a constant prediction with the mean value of the training dataset. The utilization of memory of previous states linearly ordered along the DNA molecule improves the prediction significantly, as the best results were obtained by the bidirectional LSTM RNN model. The optimal input window size was also equal to six, which has a biological meaning as it strongly aligns with the average TAD length. Feature importance analysis of the input ChIP-Seq data was conducted. The Linear models' weights provided a biologically meaningful prioritization of the ChIP-Seq features. Moreover, Linear Regression with L1 regularization detected one ChIP-Seq feature, Chriz, as the most influential on both datasets. The results of applying Neural Network models allowed the evaluation of the biological impact of the features.
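A minimal Keras-style sketch of the best-performing configuration described above (a bidirectional LSTM with 64 units over windows of 6 consecutive bins, trained with the custom wMSE loss) is given below. The exact layer layout, output activation, and the normalization of wMSE are our assumptions, not the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

ALPHA = 11.0               # max transitional gamma (10) + 1, so the error weight never becomes 0
WINDOW, N_FEATURES = 6, 5  # 6 consecutive 20-Kb bins, 5 ChIP-Seq marks

def weighted_mse(y_true, y_pred):
    """MSE weighted by (ALPHA - y_true): smaller penalty for bins with a large transitional gamma."""
    return tf.reduce_mean(tf.square(y_pred - y_true) * (ALPHA - y_true))

model = models.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    layers.Bidirectional(layers.LSTM(64)),          # read the window in both directions
    layers.Dense(1, activation="relu"),             # transitional gamma of the middle bin (non-negative)
])
model.compile(optimizer="adam", loss=weighted_mse)
# model.fit(X_train_windows, y_train_middle, epochs=...,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
```

The target here is the transitional gamma of the middle bin of each window (index WINDOW // 2), matching the floor-division rule mentioned in the text.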
Exploration of the transferability of the models between different cell types and species might be an interesting development of this work. More input features of a different biological nature, such as the DNA sequence itself, are another direction of research. The code is open sourced and the implemented pipeline can be easily adapted to any similar biological dataset of chromatin features. A APPENDIX Figure 10: MSE, MAE, R 2 and weighted MSE metrics for various ML model experiments. Here "LR" stands for Linear Regression models, "GB-X" for Gradient Boosting models with X estimators, and "* best" means that the presented scores are for the best model of type *.
We apply RNN to solve the biological problem of chromatin folding patterns prediction from epigenetic marks and demonstrate for the first time that utilization of memory of sequential states on DNA molecule is significant for the best performance.
724
scitldr
Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources. The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input $x\in \mathbb{R}^d$, including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy. Neural networks today are the most popular and effective instrument of machine learning with numerous applications in different domains. used a model with 60M parameters to win the ImageNet competition in 2012, network architectures have been growing wider and deeper. The vast overparametrization of neural networks offers better convergence and better generalization . The downside of the overparametrization is its high memory and computational costs, which prevent the use of these networks in small devices, e.g., smartphones. Fortunately, it was observed that a trained network could be reduced to smaller sizes without much accuracy loss. Following this observation, many approaches to compress existing models have been proposed (see for a recent review on network sparsification, and ; ; ; for neural pruning). Although a variety of model compression heuristics have been successfully applied to different neural network models, such as;; , these approaches generally lack strong provable guarantees on the trade-off between the compression rate and the approximation error. The absence of worst-case performance analysis can potentially be a glaring problem depending on the application. Moreover, data-dependent methods for model compression (e.g., ; ; ; ;) rely on the statistics presented in a data set. Hence, these methods are vulnerable to adversarial attacks , which design inputs that do not follow these statistics. Ideally, a network compression framework should 1) provide provable guarantees on the tradeoff between the compression rate and the approximation error, 2) be data independent, 3) provide high compression rate, and 4) be computationally efficient. To address these goals, we propose an efficient framework with provable guarantees for neural pruning, which is based on the existing theory of coresets such as . Coresets decrease massive inputs to smaller instances while maintaining a good provable approximation of the original set with respect to a given function. Our main idea is to treat neurons of a neural network as inputs in a coreset framework. 
Specifically, we reduce the number of neurons in layer i by constructing a coreset of neurons in this layer that provably approximates the output of neurons in layer i + 1 and discarding the rest. The coreset algorithm provides us with the choice of neurons in layer i and with the new weights connecting these neurons to layer i + 1. The coreset algorithm is applied layer-wise from the bottom to the top of the network. The size of the coreset, and consequently the number of remaining neurons in layer i, is provably related to the approximation error of the output for every neuron in layer i + 1. Thus, we can theoretically derive the trade-off between the compression rate and the approximation error of any layer in the neural network. The coreset approximation of neurons provably holds for any input; thus our compression is data-independent. Similar to our approach, used coresets for model compression. However, their coresets are data-dependent; therefore, they cannot guarantee robustness over inputs. Moreover, they construct coresets of weights, while our approach constructs coresets of neurons. Neural pruning reduces the size of the weight tensors, while keeping the network dense. Hence the implementation of the pruned network requires no additional effort. Implementing networks with sparse weights (which is the of weight pruning) is harder and in many cases does not in actual computational savings. Our empirical on LeNet-300-100 for MNIST and VGG-16 for CIFAR-10 demonstrate that our framework based on coresets of neurons outperforms sampling-based coresets by improving compression without sacrificing the accuracy. Finally, our construction is very fast; it took about 56 sec. to compress each dense layer in the VGG-16 network using the platform specified in the experimental section. Our Contributions: We propose an efficient, data-independent neural pruning algorithm with a provable trade-off between the compression rate and the output approximation error. This is the first framework to perform neural pruning via coresets. We provide theoretical compression rates for some of the most popular neural activation functions summarized in Table 1. 2 RELATED WORK 2.1 CORESETS Our compression algorithm is based on a data summarization approach known as coresets. Over the past decade, coreset constructions have been recognized for high achievements in data reduction in a variety of applications, including k-means, SVD, regression, low-rank approximation, PageRank, convex hull, and SVM; see details in. Many of the non-deterministic coreset based methods rely on the sensitivity framework, in which elements of the input are sampled according to their sensitivity (; ;), which is used as a measure of their importance. The sampled elements are usually reweighted afterwards. State-of-the-art neural networks are often overparameterized, which causes a significant redundancy of weights. To reduce both computation time and memory requirements of trained networks, many approaches aim at removing this redundancy by model compression. Weight Pruning: Weight pruning was considered as far back as 1990 , but has recently seen more study . One of the most popular approaches is pruning via sparsity. Sparsity can be enforced by L 1 regularization to push weights towards zero during training . However, it was observed that after fine-tuning of the pruned network, L 2 regularized network outperformed L 1, as there is no benefit to pushing values towards zero compared to pruning unimportant (small weight) connections. 
The approach in exploits the linearity of the neural network by finding a lowrank approximation of the weights and keeping the accuracy within 1% of the uncompressed model. performs quantization of the neural network's weights and suggests a new training procedure to preserve the model accuracy after the quantization. These methods showed high compression rates, e.g., the compression rate of AlexNet can reach 35x with the combination of pruning, quantization, and Huffman coding . Nevertheless, strong provable worst-case analysis is noticeably absent for most weight pruning methods. Neural pruning: Weight pruning leads to an irregular network structure, which needs a special treatment to deal with sparse representations, making it hard to achieve actual computational savings. On the other hand, neural pruning and filter pruning in CNNs (e.g, ; ; simply reduce the size of the tensors. The method in first identifies weak neurons by analyzing their activiations on a large validation dataset. Then those weak neurons are pruned and the network is retrained. The processes are repeated several times. introduces channel pruning based on the contribution to the discriminative power. These methods are data-dependent; thus they cannot provide guarantees of approximation error for any future input. measures the importance of channels by calculating the sum of absolute values of weights. Other channel pruning methods either impose channel-wise sparsity in training, followed by pruning channels with small scaling factors, and fine-tuning (e.g,) or perform channel pruning by minimizing the reconstruction error of feature maps between the pruned and pre-trained model (e.g., .) These methods lack provable guarantees on the trade-offs between their accuracy and compression. Coreset-Based Model Compression Similar to our work, the approach in uses corests for model compression. However, they construct coresets of weights, while we construct coresets of neurons. Their approach computes the importance of each weight, which is termed sensitivity, using a subset from the validation set. The coreset is chosen for the specific distribution (of data) so consequently, the compressed model is data-dependent. In our construction, the input of the neural network is assumed to be an arbitrary vector in R d and the sensitivity of a neuron is computed for every input in R d. This means that we create a data-independent coreset; its size is independent of the properties of the specific data at hand, and the compression provably approximates any future test sample. builds upon k-means coresets by adding a sparsity constraint. The weighting of the filters in the coreset is obtained based on their activation magnitudes over the training set. The compression pipeline also includes a pre-processing step that follows a simple heuristic that eliminates filters based on the mean of their activation norms over the training set. This construction is obviously data-dependent and it uses corsets as an alternative mechanism for low-rank approximation of filters. We propose an algorithm for compressing layer i and we apply it to all layers from the bottom to the top of the network. We first give an intuitive description of the algorithm. We then formalize it and provide a theoretical analysis of the proposed construction..., |P |} is a small subset, and we want this approximation to be bounded by a multiplicative factor that holds for any x ∈ R d. Unfortunately, our in Theorem 6 shows that this idealized goal is impossible. 
However, we show in Theorem 7 and Corollary 8 that we can construct a small coreset C, such that |z −z| ≤ ε for any input x ∈ R d. Algorithm 1 summarizes the coreset construction for a single neuron with an activation function φ, (our for common neural activation functions are summarized in Table 1). Algorithm 2 and Corollary 9 show the construction of a single coreset with possibly different weights for all neurons in layer i + 1 (see Figure 1, bottom). Definition 1 (weighted set). Let P ⊂ R d be a finite set, and w be a function that maps every p ∈ P to a weight w(p) > 0. The pair (P, w) is called a weighted set. Algorithm 1: CORESET(P, w, m, φ, β) A weighted set (P, w), A weighted set (C, u); see Theorem 7 and Corollary 8. Sample a point q from P such that p ∈ P is chosen with probability pr(p). A coreset in this paper is applied on a query space which consists of an input weighted set, an objective function, and a class of models (queries) as follows. Definition 2 (Query space). Let P = (P, w) be a weighted set called the input set. Let X ⊆ R d be a set, and f: P × X → [0, ∞) be a loss function. The tuple (P, w, X, f) is called a query space. Given a set of points P and a set of queries X, a coreset of P is a weighted set of points that provides a good approximation to P for any query x ∈ X. We state the definition of coresets with multiplicative guarantees below, though we shall also reference coresets with additive guarantees. Definition 3 (ε-coreset, multiplicative guarantee). Let (P, w, X, f) be a query space, and ε ∈ be an error parameter. An ε-coreset of (P, w, X, f) is a weighted set (Q, u) such that for every The size of our coresets depends on two parameters: the complexity of the activation function which is defined below, and the sum of a supremum that is defined later. We now recall the well-known definition of VC dimension using the variant from . Definition 4 (VC-dimension ). Let (P, w, X, f) be a query space. For every x ∈ R d, and r ≥ 0 we define range P,f (x, r):= {p ∈ P | f (p, x) ≤ r} and ranges(P, X, f):= C ∩ range P,f (x, r) | C ⊆ P, x ∈ X, r ≥ 0. For a set ranges of subsets of The VC-dimension of the query space (P, X, f) is the VC-dimension of (P, ranges(P, X, f)). The VC-dimension of all the query spaces that correspond to the activation functions in Table 1 is O(d), as most of the other common activation functions . The following theorem bounds the size of the coreset for a given query space and explains how to construct it. Unlike previous papers such as , we consider additive error and not multiplicative error. Theorem 5 . Let d be the VC-dimension of a query space (P, w, X, f)., and ε, δ ∈. Let c ≥ 1 be a sufficiently large constant that can be determined from the proof, and let C be a sample (multi-set) of m ≥ ct ε 2 d log t + log 1 δ i.i.d. points from P, where for every p ∈ P and q ∈ C we have pr(p = q) = s(p)/t. Then, with probability at least 1 − δ, Algorithm 2: CORESET PER LAYER(P, w 1, · · ·, w k, m, φ, β) Input: A weighted set (C, u); see Theorem 7. Sample a point q from P such that p ∈ P is chosen with probability pr(p). Most of the coresets provide a (1 + ε)-multiplicative factor approximation for every query that is applied on the input set. The bound on the coreset size is independent or at least sub-linear in the original number n of points, for any given input set. Unfortunately, the following theorem proves that it is impossible to compute small coresets for many common activation functions such as ReLU. 
This holds even if there are constraints on both the length of the input set and the test set of samples. Theorem 6 (No coreset for multiplicative error). Let φ: R → [0, ∞) be such that φ(b) > 0 if and only if b > 0. Let α, β > 0, ε ∈ (0, 1), and let n ≥ 1 be an integer. Then there is a set P ⊆ B α of n points such that any weighted set (C, u) with C ⊆ P that satisfies the multiplicative guarantee for every x ∈ B β cannot be small. The proof of Theorem 6 is provided in Appendix A.1. The following theorem motivates the usage of additive ε-error instead of multiplicative (1 + ε) error. Fortunately, in this case there is a bound on the coreset's size for appropriate sampling distributions. Theorem 7. Let α, β > 0 and (P, w, B β, f ) be a query space of VC-dimension d such that P ⊆ B α, the weights w are non-negative, f (p, x) = φ(p T x) and φ: R → [0, ∞) is a non-decreasing function. Let ε, δ ∈ (0, 1), and let c be a sufficiently large constant that can be determined from the proof. Let (C, u) be the output of a call to CORESET(P, w, m, φ, β); see Algorithm 1. Then, |C| ≤ m and, with probability at least 1 − δ, the additive approximation guarantee holds for every x ∈ B β. The proof is provided in Appendix A.2. As weights of a neural network can take positive and negative values, and the activation functions φ: R → R may return negative values, we generalize our results to include negative weights and any monotonic (non-decreasing or non-increasing) bounded activation function in the following corollary. Corollary 8. The sensitivity s(p) is defined for every p ∈ P; let c ≥ 1 be a sufficiently large constant that can be determined from the proof, t = Σ_{p∈P} s(p), and let (C, u) be the output of a call to CORESET(P, w, m, φ, β); see Algorithm 1. Then, |C| ≤ m and, with probability at least 1 − δ, the additive approximation guarantee holds ∀x ∈ B β. The proof of Corollary 8 is provided in Appendix A.3. Applying Algorithm 1 to each neuron in a layer i + 1 could result in the situation that a neuron in layer i is selected for the coresets of some neurons in layer i + 1, but not for others. In this situation, it cannot be removed. To perform neuron pruning, every neuron in layer i + 1 should select the same neurons for its coreset, maybe with different weights. Thus, we wish to compute a single coreset for multiple weighted sets that differ only by their weight function. Each such set represents a neuron in layer i + 1, which includes k neurons. Algorithm 2 and Corollary 9 show how to compute a single coreset for multiple weighted sets. Figure 1 provides an illustration of the layer pruning on a toy example. Corollary 9 (Coreset per Layer). The sensitivity s(p) is defined for every p ∈ P; let c ≥ 1 be a sufficiently large constant that can be determined from the proof, and let (C, u 1, · · ·, u k) be the output of a call to CORESET(P, w 1, · · ·, w k, m, φ, β); see Algorithm 2. Then, |C| ≤ m and, with probability at least 1 − δ, the corresponding guarantee holds for every neuron of layer i + 1. The proof follows directly from the observation in Theorem 5 regarding soft-clipping. We first test our neural pruning with coresets on two popular models: LeNet-300-100 on MNIST , and VGG-16 on CIFAR-10 . We then compare the compression rate of our coreset (Neuron Coreset) to the compression methods based on the following sampling schemes: Baselines: uniform sampling, percentile (which deterministically retains the inputs with the highest norms), and Singular Value Decomposition (SVD); Schemes for matrix sparsification: based on L1 and L2 norms and their combination (; ;); Sensitivity sampling: CoreNet and CoreNet++ . In all experiments we used ReLU networks and we computed the average error of the tested algorithms after performing each test ten times. For every layer, after applying neural pruning the remaining weights were fine-tuned until convergence.
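To make the sampling scheme above concrete, the following NumPy sketch prunes the neurons of one hidden layer in the spirit of Algorithms 1–2: it samples neurons i.i.d. proportionally to a data-independent importance score and reweights the surviving outgoing weights by the usual importance-sampling correction. The score used here (|w_j(p)| · β · ‖p‖, maximised over the k neurons of the next layer, which upper-bounds a ReLU neuron's contribution for inputs in the ball of radius β) is a simplified stand-in for the exact sensitivities analysed in the paper, and all function names are ours, not the authors' reference implementation.

```python
import numpy as np

def layer_neuron_coreset(W_in, W_out, m, beta=1.0, rng=None):
    """Sketch of a per-layer neuron coreset (in the spirit of Algorithms 1-2).

    W_in  : (n, d)  incoming weights of the n neurons of layer i (rows p).
    W_out : (k, n)  weights from layer i to the k neurons of layer i+1 (w_j(p)).
    m     : coreset size (number of neurons kept in layer i).
    Returns the kept indices, their incoming weights, and reweighted outgoing weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = W_in.shape[0]
    # Simplified, data-independent importance proxy: for ||x|| <= beta,
    # relu(p^T x) <= beta * ||p||, so neuron p's contribution to output j is at
    # most |w_j(p)| * beta * ||p||.  Taking the max over outputs yields a single
    # shared score, so one coreset serves every neuron of layer i+1.
    per_output = np.abs(W_out) * (beta * np.linalg.norm(W_in, axis=1))[None, :]
    s = per_output.max(axis=0) + 1e-12
    prob = s / s.sum()
    # i.i.d. sampling of m neurons proportional to the score (with repetition).
    picks = rng.choice(n, size=m, p=prob)
    keep, counts = np.unique(picks, return_counts=True)
    # Importance-sampling reweighting: each output neuron keeps its own weights.
    W_out_new = np.zeros((W_out.shape[0], keep.size))
    for col, (q, c) in enumerate(zip(keep, counts)):
        W_out_new[:, col] = c * W_out[:, q] / (m * prob[q])
    return keep, W_in[keep], W_out_new

# Toy usage: prune a 512-neuron hidden layer down to 64 neurons.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(512, 784)), rng.normal(size=(10, 512))
    keep, W1_c, W2_c = layer_neuron_coreset(W1, W2, m=64, rng=rng)
    x = rng.normal(size=784); x /= np.linalg.norm(x)
    full = W2 @ np.maximum(W1 @ x, 0)
    compressed = W2_c @ np.maximum(W1_c @ x, 0)
    print(np.abs(full - compressed).max())
```

Because the whole layer shares one sample, every neuron of layer i + 1 keeps the same incoming neurons (only with different reweighted coefficients), so the dropped neurons can actually be removed from the network.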
The experiments were implemented in PyTorch on a Linux machine using an Intel Xeon 32-core CPU with 3.2 GHz, 256 GB of RAM, and Nvidia TitanX and Quadro M4000 GPUs. The LeNet-300-100 network comprises two fully connected hidden layers with 300 and 100 neurons respectively, trained on the MNIST data set. Our coresets were able to prune roughly 90% of the parameters and our compression did not have any associated accuracy cost – in fact, it slightly improved the classification accuracy. VGG-16 includes 5 blocks comprising convolutional and pooling layers, followed by 3 dense layers – the first two with 4096 neurons and the last with 1000 neurons. The model was trained and tested on CIFAR-10. We applied our algorithm for neural pruning to the dense layers, which have the largest number of parameters. Our experiment showed a slight improvement in classification accuracy while the number of parameters decreased by roughly 75%. We summarize our findings in Table 2. We analyzed the empirical trade-off between the approximation error and the size of the coreset, constructed by Algorithm 1 and Corollary 8, in comparison to uniform sampling, which also implements Algorithm 1, but sets the probability of a point to 1/n (n is the size of the full set), and to percentile, which deterministically retains the inputs with the highest norms (note that in percentile the points are not weighted). We ran three tests, varying the distribution of weights. In the first and second tests (Figure 2, (a) and (b)) the weights were drawn from the Gaussian and Uniform distributions respectively. The total number of neurons was set to 1000. We selected subsets of neurons of increasing sizes from 50 to 1000 with a step of 50. In the third test (Figure 2, (c)) we used the trained weights from the first layer of LeNet-300-100 including 300 neurons. We varied the coreset size from 50 to 300 with a step of 50. To evaluate the approximation error, we used images from the MNIST test set as queries. Each point in the plot was computed by 1) running the full network and the compressed network (with the corresponding compression level) on each image x in the test set, 2) computing the additive approximation error, and 3) averaging the resulting error over the test set. In all three tests, our coresets outperformed the tested methods across all coreset sizes. We compare the average approximation error vs. compression rates of our neural pruning coreset with several other well-known algorithms (listed above). We run these tests on the LeNet-200-105 architecture, trained and tested on MNIST, and we measure the corresponding average approximation error as defined in , where φ θ̂ (x) and φ θ (x) are the outputs of the approximated and the original networks respectively. The results are summarized in Figure 3. As expected, all algorithms perform better with lower compression, but our algorithm outperforms the other methods, especially for high compression rates. The proposed compression framework includes, for every layer, a selection of neurons using Algorithm 2, followed by fine-tuning. We performed the following ablation analysis to evaluate the contribution of different parts of our framework on LeNet-300-100 trained on MNIST. First, we removed the fine-tuning to test the improvement due to Algorithm 2 over uniform sampling. Figure 4, (a) shows the classification accuracy without fine-tuning as a function of the compression rate.
Figure 4, (b) shows that fine-tuning improves both methods, but the advantage of the coreset is still apparent across almost all compression rates and it increases at the higher compression rates. Note that the model selected by the coreset can be fine-tuned to 98% classification accuracy for any compression rate, while the model chosen uniformly cannot maintain the same accuracy for high compression rates. These results demonstrate that our coreset algorithm provides a better selection of neurons compared to uniform sampling. Moreover, it requires significantly less fine-tuning: fine-tuning until convergence of the uniform sampling took close to 2 epochs, while fine-tuning of our method required about half of that time. We proposed the first neural pruning algorithm with provable trade-offs between the compression rate and the approximation error for any future test sample. We base our compression algorithm on the coreset framework and construct coresets for most common activation functions. Our tests on ReLU networks show high compression rates with no accuracy loss, and our theory guarantees the worst case accuracy vs. compression trade-off for any future test sample, even an adversarial one. In this paper we focused on pruning neurons. In future work, we plan to extend the proposed framework to pruning filters in CNNs, to compositions of layers, and to other architectures. Putting it all together. By applying Theorem 1 with X = B β, we obtain that, with probability at least 1 − δ, ∀x ∈ B β: Assume that the last equality indeed holds. Hence, ∀x ∈ B β: A.3 PROOF OF COROLLARY 8. We assume that φ is a non-decreasing function. Otherwise, we apply the proof below for the non-decreasing function φ * = −φ and corresponding weight w * (p) = −w(p) for every p ∈ P. The correctness follows since w(p)φ(p T x) = w * (p)φ * (p T x) for every p ∈ P. Indeed, put x ∈ B β, and φ non-decreasing. Hence, Equation 6 is obtained by separating each sum into points with positive and negative weights and applying the Cauchy-Schwarz inequality. Next, we bound points with positive and negative weights separately using Theorem 7.
We propose an efficient, provable and data independent method for network compression via neural pruning using coresets of neurons -- a novel construction proposed in this paper.
725
scitldr
Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory. But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan. To mitigate these challenges we propose the Memory Augmented Control Network (MACN). The network splits planning into a hierarchical process. At a lower level, it learns to plan in a locally observed space. At a higher level, it uses a collection of policies computed on locally observed spaces to learn an optimal plan in the global environment it is operating in. The performance of the network is evaluated on path planning tasks in environments in the presence of simple and complex obstacles and in addition, is tested for its ability to generalize to new environments not seen in the training set. A planning task in a partially observable environment involves two steps: inferring the environment structure from local observation and acting based on the current environment estimate. In the past, such perception-action loops have been learned using supervised learning with deep networks as well as deep reinforcement learning BID3, BID1,. Popular approaches in this spirit are often end-to-end (i.e. mapping sensor readings directly to motion commands) and manage to solve problems in which the underlying dynamics of the environment or the agent are too complex to model. Approaches to learn end-to-end perception-action loops have been extended to complex reinforcement learning tasks such as learning how to play Atari games (a), as well as to imitation learning tasks like controlling a robot arm BID12.Purely convolutional architectures (CNNs) perform poorly when applied to planning problems due to the reactive nature of the policies learned by them BID21, BID4. The complexity of this problem is compounded when the environment is only partially observable as is the case with most real world tasks. In planning problems, when using a function approximator such as a convolutional neural network, the optimal actions are dependent on an internal state. If one wishes to use a state-less network (such as a CNN) to obtain the optimal action, the input for the network should be the whole history of observations and actions. Since this does not scale well, we need a network that has an internal state such as a recurrent neural network or a memory network. BID20 showed that when learning how to plan in partially observable environments, it becomes necessary to use memory to retain information about states visited in the past. Using recurrent networks to store past information and learn optimal control has been explored before in BID11. While BID14 have shown that recurrent networks are Turing complete and are hence capable of generating any arbitrary sequence in theory, this does not always translate into practice. Recent advances in memory augmented networks have shown that it is beneficial to use external memory with read and write operators that can be learned by a neural network over recurrent neural networks BID5, BID6. Specifically, we are interested in the Differentiable Neural Computer (DNC) BID6 which uses an external memory and a network controller to learn how to read, write and access locations in the external memory. The DNC is structured such that computation and memory operations are separated from each other. 
Such a memory network can in principle be plugged into the convolutional architectures described above, and be trained end to end since the read and write operations are differentiable. However, as we show in our work, directly using such a memory scheme with CNNs performs poorly for partially observable planning problems and also does not generalize well to new environments. To address the aforementioned challenges we propose the Memory Augmented Control Network (MACN), a novel architecture specifically designed to learn how to plan in partially observable environments under sparse rewards.1 Environments with sparse rewards are harder to navigate since there is no immediate feedback. The intuition behind this architecture is that planning problem can be split into two levels of hierarchy. At a lower level, a planning module computes optimal policies using a feature rich representation of the locally observed environment. This local policy along with a sparse feature representation of the partially observed environment is part of the optimal solution in the global environment. Thus, the key to our approach is using a planning module to output a local policy which is used to augment the neural memory to produce an optimal policy for the global environment. Our work builds on the idea of introducing options for planning and knowledge representation while learning control policies in MDPs BID16. The ability of the proposed model is evaluated by its ability to learn policies (continuous and discrete) when trained in environments with the presence of simple and complex obstacles. Further, the model is evaluated on its ability to generalize to environments and situations not seen in the training set. The key contributions of this paper are:1. A new network architecture that uses a differentiable memory scheme to maintain an estimate of the environment geometry and a hierarchical planning scheme to learn how to plan paths to the goal. 2. Experimentation to analyze the ability of the architecture to learn how to plan and generalize in environments with high dimensional state and action spaces.2 METHODOLOGY Section 2.1 outlines notation and formally states the problem considered in this paper. Section 2.2 and 2.3 briefly cover the theory behind value iteration networks and memory augmented networks. Finally, in section 2.4 the intuition and the computation graph is explained for the practical implementation of the model. Consider an agent with state s t ∈ S at discrete time t. Let the states S be a discrete set [s 1, s 2, . . ., s n].For a given action a t ∈ A, the agent evolves according to known deterministic dynamics: s t+1 = f (s t, a t). The agent operates in an unknown environment and must remain safe by avoiding collisions. Let m ∈ {−1, 0} n be a hidden labeling of the states into free and occupied (−1). The agent has access to a sensor that reveals the labeling of nearby states through an observations z t = H(s t)m ∈ {−1, 0} n, where H(s) ∈ R n×n captures the local field of view of the agent at state s. The local observation consists of ones for observable states and zeros for unobservable states. The observation z t contains zeros for unobservable states. Note that m and z t are n × 1 vectors and can be indexed by the state s t. The agent's task is to reach a goal region S goal ⊂ S, which is assumed obstacle-free, i.e., m[s] = 0 for all s ∈ S goal. 
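Before the problem statement below, a small self-contained sketch may help make the masked observation model z_t = H(s_t)m concrete. The 7 × 7 field of view mirrors the sensor patch used later in the appendix; the grid representation and the function name are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def local_observation(m_grid, state, fov=3):
    """Sketch of z_t = H(s_t) m for a grid world.

    m_grid : (H, W) hidden labeling, -1 = occupied, 0 = free.
    state  : (row, col) agent position.
    fov    : half-width of the square field of view (fov=3 gives a 7x7 patch).
    Returns an (H, W) observation that is zero for unobservable states and
    copies the hidden labeling inside the field of view.
    """
    z = np.zeros_like(m_grid)
    r, c = state
    r0, r1 = max(0, r - fov), min(m_grid.shape[0], r + fov + 1)
    c0, c1 = max(0, c - fov), min(m_grid.shape[1], c + fov + 1)
    z[r0:r1, c0:c1] = m_grid[r0:r1, c0:c1]
    return z
```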
The information available to the agent at time t to compute its action a t is h t:= s 0:t, z 0:t, a 0:t−1, S goal ∈ H, where H is the set of possible sequences of observations, states, and actions. Our problem can then be stated as follows: Problem 1. Given an initial state s 0 ∈ S with m[s 0] = 0 (obstacle-free) and a goal region S goal, find a function µ: S → A such that applying the actions a t:= µ(s t) in a sequence of states s 0, s 1,..., s T satisfying s T ∈ S goal and m[s t] = 0 for all t = 0,..., T.Instead of trying to estimate the hidden labeling m using a mapping approach, our goal is to learn a policy µ that maps the sequence of sensor observations z 0, z 1,... z T directly to actions for the agent. The partial observability requires an explicit consideration of memory in order to learn µ successfully. A partially observable problem can be represented via a Markov Decision Process (MDP) over the history space H. More precisely, we consider a finite-horizon discounted MDP defined by M(H, A, T, r, γ), where γ ∈ is a discount factor, T: H × A → H is a deterministic transition function, and r: H → R is the reward function, defined as follows: DISPLAYFORM0 The reward function definition stipulates that the reward of a state s can be measured only after its occupancy state has been observed. Given observations z 0:t, we can obtain an estimatem = max{τ z τ, −1} of the map of the environment and use it to formulate a locally valid, fully-observable problem as the MDP M t (S, A, f, r, γ) with transition function given by the agent dynamics f and reward r(s t):=m[s t] given by the map estimatem. The typical algorithm to solve an MDP is Value Iteration (VI) BID15. The value of a state (i.e. the expected reward over the time horizon if an optimal policy is followed) is computed iteratively by calculating an action value function Q(s, a) for each state. The value for state s can then be calculated by V (s):= max a Q(s, a). By iterating multiple times over all states and all actions possible in each state, we can get a policy π = arg max a Q(s, a). Given a transition function T r (s |s, a), the update rule for value iteration is given by DISPLAYFORM0 A key aspect of our network is the inclusion of this network component that can approximate this Value Iteration algorithm. To this end we use the VI module in Value Iteration Networks (VIN) BID17. Their insight is that value iteration can be approximated by a convolutional network with max pooling. The standard form for windowed convolution is BID17 show that the summation in is analogous to s T (s |s, a)V k (s) in. When is stacked with reward, max pooled and repeated K times, the convolutional architecture can be used to represent an approximation of the value iteration algorithm over K iterations. DISPLAYFORM1 Recent works on deep learning employ neural networks with external memory BID5, BID6, BID9, . Contrary to earlier works that explored the idea of the network learning how to read and access externally fixed memories, these recent works focus on learning to read and write to external memories, and thus forgo the task of designing what to store in the external memory. We are specifically interested in the DNC BID6 architecture. This is similar to the work introduced by and BID2. The external memory uses differentiable attention mechanisms to determine the degree to which each location in the external memory M is involved in a read or write operation. 
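The convolution-plus-max view of value iteration described above can be sketched in a few lines of PyTorch. The kernel size, channel counts and module name below are placeholder choices (the exact hyperparameters used in this work are listed later in the computation-graph description), so this is a minimal illustration of the VI module rather than the reference implementation.

```python
import torch
import torch.nn as nn

class VIModule(nn.Module):
    """Sketch of the VIN-style value-iteration module: value iteration is
    approximated by alternating a convolution over the stacked [reward, value]
    channels (one output channel per action, i.e. Q) with a max over actions (V)."""

    def __init__(self, n_actions=4, k=10):
        super().__init__()
        self.k = k
        self.q_conv = nn.Conv2d(2, n_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward):                       # reward: (B, 1, H, W)
        v = torch.zeros_like(reward)                 # initial value estimate
        q = self.q_conv(torch.cat([reward, v], dim=1))
        for _ in range(self.k):                      # k rounds of value iteration
            v, _ = q.max(dim=1, keepdim=True)        # V(s) = max_a Q(s, a)
            q = self.q_conv(torch.cat([reward, v], dim=1))
        return v, q
```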
The DNC makes use of a controller (a recurrent neural network such as an LSTM) to learn to read and write to the memory matrix. A brief overview of the read and write operations follows. The read vector is computed as re_t = M_t^T w_t^r, where w_t^r are the read weightings, re_t is the read vector, and M_t is the state of the memory at time t. These read vectors are appended to the controller input at the next time step, which provides it access to the memory. The write operation consists of a write weighting w_t^w, an erase vector e_t and a write vector v_t. The write vector and the erase vector are emitted by the controller. These three components modify the memory at time t as: DISPLAYFORM0 Memory addressing is defined separately for writing and reading. A combination of content-based addressing and dynamic memory allocation determines memory write locations, while a combination of content-based addressing and temporal memory linkage is used to determine read locations. Consider the 2D grid world in FIG2. The agent is spawned randomly in this world and is represented by the blue square. The goal of the agent is to learn how to navigate to the red goal region. Let this environment in FIG2 be represented by an MDP M. The key intuition behind designing this architecture is that planning in M can be decomposed into two levels. At a lower level, planning is done in a local space within the boundaries of our locally observed environment space. Let this locally observed space be z; FIG2 shows this locally observed space. As stated before in Section 2.1, this observation can be formulated as a fully observable problem M t (S, A, f, r, γ). It is possible to plan in M t and calculate the optimal policy for this local space, π*_l, independent of previous observations (FIG2). It is then possible to use any planning algorithm to calculate the optimal value function V*_l from the optimal policy π*_l. This policy learned by the convolutional network is purely reactive as it is computed for the new observation z independently of the previous observations. Such an approach fails when there are local minima in the environment. In a 2D/3D world, these local minima could be long narrow tunnels culminating in dead ends (see Fig 2). In the scenario where the environment is populated with tunnels (Fig 2), the environment is only partially observable and the agent has no prior knowledge about the structure of this tunnel, forcing it to explore the tunnel all the way to the end. Further, when entering and exiting such a structure, the agent's observations are the same, i.e., z_1 = z_2, but the optimal actions under the policies π_l^1 and π_l^2 (computed by the convolutional network) at these time steps are not the same, i.e., a_{π_1} ≠ a_{π_2}. To backtrack successfully from these tunnels/nodes, information about previously visited states is required, necessitating memory. To solve this problem, we propose using a differentiable memory to estimate the map of the environment m̂. The controller in the memory network learns to selectively read and write information to the memory bank. (Figure 2: Environment with local minima. The agent's observation when entering the tunnel to explore it and when backtracking after seeing the dead end are the same. Using a reactive policy for such environments leads to the agent getting stuck near the dead end.) When such a differentiable memory scheme is trained, it is seen that it keeps track of important events/landmarks (in the case of the tunnel, this is the observation that the dead end has been reached) in its memory state and discards redundant information.
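For reference, the erase/write arithmetic just described (Section 2.3) can be written down compactly. The sketch below covers only that element-wise update and the read product, not the content-based addressing, dynamic allocation or temporal linkage mechanisms that determine the weightings, and the function names are ours.

```python
import torch

def dnc_write(memory, w_write, erase, write_vec):
    """Sketch of the DNC write rule: M_t = M_{t-1} * (1 - w e^T) + w v^T.

    memory    : (N, W) memory matrix M_{t-1}
    w_write   : (N,)   write weighting over memory rows
    erase     : (W,)   erase vector e_t with entries in [0, 1]
    write_vec : (W,)   write vector v_t
    """
    return memory * (1 - torch.outer(w_write, erase)) + torch.outer(w_write, write_vec)

def dnc_read(memory, w_reads):
    """Read vectors re_t^i = M_t^T w_t^{r,i}, one per read head."""
    return w_reads @ memory          # (R, N) @ (N, W) -> (R, W)
```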
In theory one can use a CNN to extract features from the observation z and pass these features to the differentiable memory. Instead, we propose the use of a VI module BID17 that approximates the value iteration algorithm within the framework of a neural network to learn value maps from the local information. We hypothesize that using these value maps in the differential memory scheme provides us with better planning as compared to when only using features extracted from a CNN. This architecture is shown in Figure 3.The VI module is setup to learn how to plan on the local observations z. The local value maps (which can be used to calculate local policies) are concatenated with a low level feature representation of the environment and sent to a controller network. The controller network interfaces with the memory through an access module (another network layer) and emits read heads, write heads and access heads. In addition, the controller network also performs its own computation for planning. The output from the controller network and the access module are concatenated and sent through a linear layer to produce an action. This entire architecture is then trained end to end. Thus, to summarize, the planning problem is solved by decomposing it into a two level problem. At a lower level a feature rich representation of the environment (obtained from the current observation) is used to generate local policies. At the next level, a representation of the histories that is learned and stored in the memory, and a sparse feature representation of the currently observed environment is used to generate a policy optimal in the global environment. Computation Graph: To explain the computation graph, consider the case of a 2D grid world with randomly placed obstacles, a start region and a goal region as shown in FIG2 The actions for this grid world are considered to be discrete. The 2D grid world is presented in the form of an image I of size m × n to the network. Let the goal region be [m goal, n goal] and the start position be [m start, n start]. At any given instant, only a small part of I is observed by the network and the rest of the image I is blacked out. This corresponds to the agent only observing what is visible within the range of its sensor. In addition to this the image is stacked with a reward map R m as explained in BID17. The reward map consists of an array of size m × n where all elements of the array except the one corresponding to index [m goal, n goal] are zero. Array element corresponding to [m goal, n goal] is set to a high value(in our experiments it is set to 1) denoting reward. The input image of dimension [m × n × 2] is first convolved with a kernel of size (3 × 3), 150 channels and stride of 1 everywhere. This is then convolved again with a kernel of size, 4 channels and stride of 1. Let this be the reward layer R. R is convolved with another filter of size with 4 channels. This is the initial estimate of the action value function or Q(s, a). The initial value of the state V (s) is also calculated by taking max over Q(s, a). The operations up to this point are summarized by the "Conv" block in Figure 3. Once these initial values have been computed, the model executes a for loop k times (the value of k ranges based on the task). Inside the for loop at every iteration, the R and V are first concatenated. This is then convolved with another filter of size and 4 channels to get the updated action value of the state, Q(s, a). 
We find the value of the state V(s) by taking the max of the action value function. (Figure 3: MACN Architecture. The architecture proposed uses convolutional layers to extract features from the environment. The value maps are generated with these features. The controller network uses the value maps and low level features to emit read and write heads in addition to doing its own planning computation.) The values of the kernel sizes are constant across all three experiments. The updated value maps are then fed into a DNC controller. The DNC controller is an LSTM (hidden units vary according to the task) that has access to an external memory. The external memory has 32 slots with word size 8 and we use 4 read heads and 1 write head. This varies from task to task since some of the more complex environments need more memory. The output from the DNC controller and the memory is concatenated through a linear layer to get a prediction for the action that the agent should execute. The optimizer used is RMSProp and we use a learning rate of 0.0001 for our experiments. This formulation is easy enough to be extended to environments where the state space is larger than two dimensions and the action space is larger. We demonstrate this in our experiments. To investigate the performance of MACN, we design our experiments to answer three key questions: • Can it learn how to plan in partially observable environments with sparse rewards? • How well does it generalize to new unknown environments? • Can it be extended to other domains? We first demonstrate that MACN can learn how to plan in a 2D grid world environment. Without loss of generality, we set the probability of all actions equal. The action space is discrete, A := {down, right, up, left}. This can be easily extended to continuous domains since our network's output is a probability distribution over actions. We show this in experiment 3.4. We then demonstrate that our network can learn how to plan even when the states are not constrained to a two dimensional space and the action space is larger than four actions. We first evaluate the ability of our network to successfully navigate a 2D grid world populated with obstacles at random positions. We make the task harder by having random start and goal positions. The full map shown in FIG4 is the top down view of the entire environment. The input to the network is the sensor map, where the area that lies outside the agent's sensing abilities is grayed out as explained before. VIN: With just the VI module and no memory in place, we test the performance of the value iteration network on this 2D partially observable environment.
We present our in Table 1. These are obtained from testing on a held out test-set consisting of maps with random start, goal and obstacle positions. Our show that MACN can learn how to navigate partially observable 2D unknown environments. Note that the VIN does not work by itself since it has no memory to help it remember past actions. We would also like to point out that while the CNN + Memory architecture is similar to , its performance in our experiments is very poor due to the sparse rewards structure. MACN significantly outperforms all other architectures. Furthermore, MACN's drop in testing accuracy as the grid world scales up is not as large compared to the other architectures. While these seem promising, in the next section we extend the experiment to determine whether MACN actually learns how to plan or it is overfitting. The previous experiment shows that MACN can learn to plan in 2D partially observable environments. While the claim that the network can plan on environments it has not seen before stands, this is weak evidence in support of the generalizability of the network. In our previous experiment the test environments have the same dimensions as in the training set, the number of combinations of random obstacles especially in the smaller environments is not very high and during testing some of the wrong actions can still carry the agent to the goal. Thus, our network could be overfitting and may not generalize to new environments. In the following experiment we test our proposed network's capacity to generalize. The sensor input is the information available to the agent. Right: The full map that we test our agent on after being trained on smaller maps. The dimensions of the map as well as the tunnel are larger. The environment is setup with tunnels. The agent starts off at random positions inside the tunnel. While the orientation of the tunnel is fixed, its position is not. To comment on the the ability of our network to generalize to new environments with the same task, we look to answer the following question: When trained to reach the goal on tunnels of a fixed length, can the network generalize to longer tunnels in bigger maps not seen in the training set?The network is set up the same way as before. The task here highlights the significance of using memory in a planning network. The agent's observations when exploring the tunnel and exiting the tunnel are the same but the actions mapped to these observations are different. The memory in our network remembers past information and previously executed policies in those states, to output the right action. We report our in Table 2. To show that traditional deep reinforcement learning performs poorly on this task, we implement the DQN architecture as introduced in (b). We observe that even after one million iterations, the DQN does not converge to the optimal policy on the training set. This can be attributed to the sparse reward structure of the environment. We report similar findings when tested with A3C as introduced in . We also observe that the CNN + memory scheme learns to turn around at a fixed length and does not explore the longer tunnels in the test set all the way to the end. Table 2: Performance on grid world with local minima: All models are trained on tunnels of length 20 units. The success percentages represent the number of times the robot reaches the goal position in the test set after exploring the tunnel all the way to the end. 
Maximum generalization length is the length of the longest tunnel that the robot is able to successfully navigate after being trained on tunnels of length 20 units. These results offer insight into the ability of MACN to generalize to new environments. Our network is found capable of planning in environments it has not seen in the training set at all. On visualizing the memory (see supplemental material), we observe that there is a big shift in the memory states only when the agent sees the end of the wall and when the agent exits the tunnel. A t-SNE visualization over the action spaces (see FIG7) clearly shows that the output of our network is separable. We can conclude from this that the network has learned the spatial structure of the tunnel, and it is now able to generalize to tunnels of longer length in larger maps. Thus, we can claim that our proposed model is generalizable to new environments that are structurally similar to the environments seen in the training set but have not been trained on. In addition to this, in all our experiments the state and action spaces have been constrained to a small number of dimensions. In our next experiment we show that MACN can learn how to plan even when the state space and action space are scaled up. (Figure 7: 9-node graph search. Blue is the start and red is the goal.) In our earlier experiments, the state space was constrained to two dimensions, and only four actions were available. It is nearly impossible to constrain every real world task to a two dimensional space with only four actions. However, it is easier to formulate a lot of partially observable planning problems as a graph. We define our environment as an undirected graph G = (V, E) where the connections between the nodes are generated randomly (see Fig. 7). In Fig 7 the blue node is the start state and the red node is the goal state. Each node represents a possible state the agent could be in. The agent can only observe all edges connected to the node it currently is in, thus making the environment partially observable. The action space for this state is then any of the possible nodes that the agent can visit next. As before, the agent only gets a reward when it reaches the goal. We also add in random start and goal positions. In addition, we add a transition probability of 0.8. (For training details and generation of the graph see the Appendix.) We present our results in Table 3. On graphs with a small number of nodes, reinforcement learning with DQN and A3C sometimes converges to the optimal goal due to the small state size and random actions leading to the goal node in some of the cases. However, as before, MACN outperforms all other models. On map sizes larger than 36 nodes, the performance of our network starts to degrade. Further, we observe that even though the agent at times outputs a wrong action, it still manages to get to the goal in a reasonably small number of attempts. From these results, we can conclude that MACN can learn to plan in more general problems where the state space is not limited to two dimensions and the action space is not limited to four actions. Learning how to navigate in unknown environments, where only some part of the environment is observable, is a problem highly relevant in robotics. Traditional robotics solves this problem by creating and storing a representation of the entire environment. However, this can quickly get memory intensive. In this experiment we extend MACN to an SE2 robot. The SE2 notation implies that the robot is capable of translating in the x-y plane and has an orientation.
The robot has a differential drive controller that outputs continuous control values. The robot is spawned in the environment shown in FIG8. As before, the robot only sees a small part of the environment at any given time. In this case the robot has a laser scanner that is used to perceive the environment. It is easy to convert this environment to a 2D framework that the MACN needs. We fix the size of the environment to an m × n grid. This translates to an m × n matrix that is fed into the MACN. The parts of the map that lie within the range of the laser scanner are converted to obstacle free and obstacle occupied regions and added to the matrix. Lastly, an additional reward map denoting a high value for the goal location and zero elsewhere, as explained before, is appended to the matrix and fed into the MACN. The network output is used to generate waypoints that are sent to the underlying controller. The training set is generated by randomizing the spawn and goal locations and using a suitable heuristic. The performance is tested on a held out test set of start and goal locations. More experimental details are outlined in the appendix. Table 4: Performance on robot world. We observe in Table 4 that the proposed architecture is able to find its way to the goal a large number of times and its trajectory is close to the ground truth. This task is more complex than the grid world navigation due to the addition of orientation. The lack of explicit planning in the CNN + Memory architecture hampers its ability to get to the goal in this task. In addition to this, as observed before, deep reinforcement learning is unable to converge to the goal. We also report some additional results in FIG9, which show that MACN converges faster to the goal than other baselines. In addition to the rate of convergence, one of the biggest advantages of MACN over other architectures, for a fixed memory size, is its ability to scale up when the size of the environment increases. We show that MACN is able to beat other baselines when scaling up the environment. In this scenario, scaling up refers to placing the goal further away from the start position. While the success percentage gradually drops to a low value, it is observed that when the memory is increased accordingly, the success percentage increases again. Lastly, in FIG2 we observe that in the robot world, the performance of MACN scales up to goal positions further away by adjusting the size of the external memory in the differentiable block accordingly. (Figure: Performance on simulated environment. a) We report a plot of the number of steps left to the goal as the agent executes the learned policy in the environment (lower is better). In this plot, the agent always starts at a position 40 steps away from the goal. b) The biggest advantage of MACN over other architectures is its ability to scale. We observe that as the distance to the goal increases, MACN still beats other baselines at computing a trajectory to the goal (higher success % is better).) (FIG2: Effect of memory in robot world. MACN scales well to larger environments in the robot world when memory is increased suitably.) In this section, we analyze the performance of the proposed network against traditional motion planning baselines. As stated before, for the grid world environments and the tunnel task, we obtain expert trajectories by running A * on the environment.
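For the grid worlds, the expert path length mentioned above can be computed with a plain breadth-first search, which coincides with A * under unit step costs; the sketch below also shows the path-length ratio reported in the comparison that follows. The function names and the grid encoding (0 = free, −1 = obstacle, matching the labeling m) are our own choices.

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search on a 4-connected grid (equivalent to A* with unit
    step costs) -- used here as the expert path length."""
    H, W = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and grid[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                frontier.append((nr, nc))
    return float("inf")      # goal unreachable

def path_length_ratio(executed_path_len, grid, start, goal):
    """Ratio of the network's executed path length to the expert path length."""
    opt = shortest_path_length(grid, start, goal)
    if opt == 0 or opt == float("inf"):
        return float("inf")
    return executed_path_len / opt
```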
In the case of the continuous control domain, we use the Human Friendly Navigation (HFN) paradigm BID8 which uses a variant of A * along with a constraint for safe distances from obstacles to plan paths from the start location to the goal location. For the grid worlds (both with simple obstacles and local minima), we compute the ratio of the path length predicted by the network architecture to the path length computed by A *. Our results are presented in Table 5. The VIN alone is unable to reach the goal in a fixed number of steps. This behavior is consistent across all grid worlds. In the case of the tunnels, the VIN gets stuck inside the local minima and is unable to navigate to the goal. Thus, the ratio of the path length produced by VIN to the path length produced by A * is infinite. In the case of the CNN+Memory, the network is able to navigate to the goal only when the grid world is small enough. In the case of the tunnels, the CNN+Memory learns to turn around at a fixed distance instead of exploring the tunnel all the way to the end. For example, when trained on tunnels of length 20 units and tested on tunnels of length 32 and 64 units, the CNN+Memory turns around after it has traversed 20 units in the tunnel. For this task, to demonstrate the ineffectiveness of the CNN+Memory model, we placed the goal just inside the tunnel at the dead end. Thus, the ratio of the path length produced by CNN+Memory to A * is ∞ since the agent never explored the tunnel all the way to the end. For the case of the MACN, we observe performance close to A * for the small worlds. The performance gets worse when the size of the grid world is increased. However, the drop-off for MACN with the DNC is smaller than that of MACN with the LSTM. For the tunnel world environment, both network architectures are successfully able to emulate the performance of A * and explore the tunnel all the way to the end. It is important to note here that A * is a model based approach and requires complete knowledge of the cost and other parameters such as the dynamics of the agent (transition probabilities). In addition, planners like A * require the user to explicitly construct a map as input, while MACN learns to construct a map to plan on, which leads to more compact representations that only include vital parts of the map (like the end of the tunnel in the grid world case). Our proposed method is a model free approach that learns to plan directly from observations. This model free paradigm also allows us to move to different environments with a previously trained policy and be able to perform well by fine-tuning it to learn new features. Table 5: Comparison to A *. G corresponds to grid world with simple obstacles with the size of the world specified inside the parenthesis. L corresponds to grid worlds with local minima/tunnels with the length of the tunnel specified inside the parenthesis. All ratios are computed during testing. For the worlds with tunnels, the network is trained on tunnels of length 20 units. (Table rows: DISPLAYFORM0 1.2, 1.4, 1.62, 1.0, 1.0; MACN 1.07, 1.11, 1.47, 1.0, 1.0.) Using value iteration networks augmented with memory has been explored before in BID7. In their work a planning module together with a map representation of a robot's free space is used for navigation in a partially observable environment using image scans. The image scans are projected into a 2D grid world by approximating all possible robot poses. This projection is also learned by the model.
This is in contrast to our work here in which we design a general memory based network that can be used for any partially observed planning problem. An additional difference between our work and that of BID7 ) is that we do not attempt to build a 2D map of the environment as this hampers the ability of the network to be applied to environments that cannot be projected into such a 2D environment. We instead focusing on learning a belief over the environment and storing this belief in the differentiable memory. Another similar work is that of where a network is designed to play Minecraft. The game environment is projected into a 2D grid world and the agent is trained by RL to navigate to the goal. That network architecture uses a CNN to extract high level features followed by a differentiable memory scheme. This is in contrast to our paper where we approach this planning by splitting the problem into local and global planning. Using differential network schemes with CNNs for feature extraction has also been explored in BID2. Lastly, a recently released paper Neural SLAM BID19 uses the soft attention based addressing in DNC to mimic subroutines of simultaneous localization and mapping. This approach helps in exploring the environment robustly when compared to other traditional methods. A possible extension of our work presented here, is to use this modified memory scheme with the differentiable planner to learn optimal paths in addition to efficient exploration. We leave this for future work. Planning in environments that are partially observable and have sparse rewards with deep learning has not received a lot of attention. Also, the ability of policies learned with deep RL to generalize to new environments is often not investigated. In this work we take a step toward designing architectures that compute optimal policies even when the rewards are sparse, and thoroughly investigate the generalization power of the learned policy. In addition we show our network is able to scale well to large dimensional spaces. The grid world experiments offer conclusive evidence about the ability of our network to learn how to plan in such environments. We address the concern of oversimplifying our environment to a 2D grid world by experimenting with planning in a graph with no constraint on the state space or the action space. We also show our model is capable of learning how to plan under continuous control. In the future, we intend to extend our policies trained in simulation to a real world platform such as a robot learning to plan in partially observable environments. Additionally, in our work we use simple perfect sensors and do not take into account sensor effects such as occlusion, noise which could aversely affect performance of the agent. This need for perfect labeling is currently a limitation of our work and as such cannot be applied directly to a scenario where a sensor cannot provide direct information about nearby states such as a RGB camera. We intend to explore this problem space in the future, where one might have to learn sensor models in addition to learning how to plan. For the grid world we define our sensor to be a 7 × 7 patch with the agent at the center of this patch. Our input image I to the VI module is [m × n × 2] image where m,n are the height and width of the image. I[:, :, 0] is the sensor input. Since we set our rewards to be sparse, I[:, :, 1] is the reward map and is zero everywhere except at the goal position (m goal, n goal). 
I is first convolved to obtain a reward image R of dimension [n × m × u] where u is the number of hidden units (vary between 100-150). This reward image is sent to the VI module. The value maps from the VI module after K iterations are fed into the memory network controller. The output from the network controller (here a LSTM with 256 hidden units) and the access module is concatenated and sent through a linear layer followed by a soft max to get a probability distribution over A.During training and testing we roll out our sequence state by state based on the ground truth or the networks output respectively. Further, the set of transitions from start to goal to are considered to be an episode. During training at the end of each episode the internal state of the controller and the memory is cleared. The size of the external memory is 32 × 8 the grid world task. An additional hyperparameter is the number of read heads and write heads. This parameter controls the frequency of reads vs frequency of writes. For the the grid world task, we fix the number of read heads to 4 and the number of write heads to 1. For the grid world with simple obstacles, we observe that the MACN performs better when trained with curriculum BID0 ). This is expected since both the original VIN paper and the DNC paper show that better are achieved when trained with curriculum. For establishing baselines, the VIN and the CNN+Memory models are also trained with curriculum learning. In the grid world environment it is easy to define tasks that are harder than other tasks to aid with curriculum training. For a grid world with size (m, n) we increase the difficulty of the task by increasing the number of obstacles and the maximum size of the obstacles. Thus, for a 32 × 32 grid, we start with a maximum of 2 obstacles and the maximum size being 2 × 2. Both parameters are then increased gradually. The optimal action in the grid world experiments is generated by A star BID13. We use the Manhattan distance between our current position and the goal as a heuristic. Our error curves on the test set for the MACN with the LSTM and the addressable memory scheme are shown in FIG2. We setup the network the same way as we did for the grid world experiments with blob shaped obstacles. Due to the relatively simple structure of the environment, we observe that we do not really need to train our networks with curriculum. Additionally, the read and write heads are both set to 1 for this experiment. We observe that for the tunnel shaped obstacle when just the VI module is fed the partial map (stitched together version of states explored) as opposed to the sensor input, it performs extremely well and is able to generalize to new maps with longer tunnels without needing any memory. This is interesting because it proves our intuition about the planning task needing memory. Ideally we would like the network to learn this partial map on its own instead of providing it with a hand engineered version of it. The partial map represents an account of all states visited in the past. We argue that not all information from the past is necessary and the non redundant information that is required for planning in the global environment can be learned by the network. This can be seen in the memory ablation. As stated in the main paper, we observe that the DQN performs very poorly since the rewards are very sparse. The network is setup exactly as described in (a). 
We observe that even after 1 million iterations, the agent never reaches the goal and instead converges to a bad policy. This can be seen in FIG2. It is clear that under the random initial policy the agent is unable to reach the goal and converged to a bad policy. Similar are observed for A3C. Further, it is observed that even when the partial map instead of the sensor input is fed in to DQN, the agent does not converge. We observe that when testing the network, the memory registers a change in its states only when important events are observed. In FIG2, the left hand image represents the activations from the memory when the agent is going into the tunnel. We observe that the activations from the memory remain constant until the agent observes the end of the tunnel. The memory states change when the agent observes the end of the tunnel, when it exits the tunnel and when it turns towards its goal (FIG2). Another key observation for this task is that the MACN is prone to over fitting for this task. This is expected because ideally, only three states need to be stored in the memory; entered the tunnel, observe end of tunnel and exit tunnel. To avoid overfitting we add L2 regularization to our memory access weights. For the graph experiment, we generate a random connected undirected graph with N nodes. We will call this graph G = (V, E), with nodes V = {V 1, V 2, . . ., V N} and edges, E. The agent, at any point in the simulation, is located at a specific node V i and travels between nodes via the edges. The agent can take actions from a set U = {u 1, u 2, . . ., u N} where choosing action u i will attempt to move to node V i. We have transition probabilities DISPLAYFORM0 At each node, the agent has access to the unique node number (all nodes are labeled with a unique ID), as well as the (A) i, the i th row of the adjacency matrix A. It also has access to the unique node number of the goal (but no additional information about the goal).(a) A random connected graph (b) Adjacency matrix for graph FIG2: a) A random connected undirected graph with a the goal given by the star-shaped node, and the current state given by the blue node b) Shows the corresponding adjacency matrix where white indicates a connection and black indicates no connection. The goal is given by the row shaded in green, and the current state is given by the row shaded in blue. To train the network to navigate this graph, we used supervised training with an expert demonstrating an intended behavior (breadth first search). Training samples were generated by running breadth first search (and connecting nodes that are explored by traveling previously explored nodes of the graph). Thus, for each state of the node and goal, we obtain a desired action. To fit this into the framework of our network and 2D convolutions, we reshaped the row vector of the matrix into a matrix that could use the same convolution operation. The reward prior is also a row vector with a 1 at the index of the goal node and zero everywhere else. This row vector is reshaped and stacked with the observation. We train the graph by giving example paths between pairs of nodes. We then test on pairs of nodes not shown during training. The training network is setup as before in the grid world navigation task. Due to the increased action space and state space, this task is significantly more complex than the grid world navigation task. We train MACN and the baselines with curriculum training. 
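A compact sketch of this graph set-up (random connected graph, adjacency-row observation, and an expert action label derived from a search from the goal) is given below. The spanning-tree-plus-extra-edges generator and the shortest-path labelling are our own illustrative simplifications of the breadth-first-search expert described above, and the function names are not from the authors' code.

```python
import random
from collections import deque

def random_connected_graph(n, extra_edges, seed=0):
    """Random connected undirected graph as an adjacency matrix: a random
    spanning tree keeps it connected, then extra random edges are added."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    nodes = list(range(n)); rng.shuffle(nodes)
    for i in range(1, n):
        u, v = nodes[i], rng.choice(nodes[:i])
        A[u][v] = A[v][u] = 1
    for _ in range(extra_edges):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            A[u][v] = A[v][u] = 1
    return A

def observation(A, current, goal):
    """The agent sees its node ID, the corresponding adjacency row, and the goal ID."""
    return current, A[current], goal

def expert_action(A, current, goal):
    """Label: the neighbour of `current` lying on a shortest path to `goal`
    (computed by breadth-first search from the goal)."""
    n = len(A)
    dist = [None] * n; dist[goal] = 0
    frontier = deque([goal])
    while frontier:
        u = frontier.popleft()
        for v in range(n):
            if A[u][v] and dist[v] is None:
                dist[v] = dist[u] + 1
                frontier.append(v)
    neighbours = [v for v in range(n) if A[current][v]]
    return min(neighbours, key=lambda v: dist[v] if dist[v] is not None else n + 1)
```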
In the graph task it is easy to define a measure of increasing complexity by changing the number of hops between the start state and the goal state. Additionally, for the graph task the number of read heads and write heads are set to 1 and 4 respectively. Navigating an unknown environment is a highly relevant problem in robotics. The traditional methodology localizes the position of the robot in the unknown world and tries to estimate a map. This approach is called Simultaneous Localization and Mapping (SLAM) and has been explored in depth in robotics BID18. For the continuous control experiment, we use a differential drive robot (FIG2). The robot is equipped with a head mounted LIDAR and also has a ASUS Xtion Pro that can provide the depth as well as the image from the front facing camera. In this work, we only use the information from the LIDAR and leave the idea of using data from the camera for future work. The ground truth maps are generated by using Human Friendly Navigation (HFN) BID8 which generates reactive collision avoidance paths to the goal. Given a start and goal position, the HFN algorithm generates way points that are sent to the controller. For our experiment, we generate a tuple of (x, y, θ) associated with every observation. To train the network, a m × n matrix (environment matrix) corresponding to the m × n environment is initialized. A corresponding reward array (reward matrix) also of size m × n with a 1 at the goal position and zero elsewhere is concatenated with the environment matrix. The observations corresponding to the laser scan are converted to a j × k matrix (observation matrix) where j < m and k < n. The values at the indices in the environment array corresponding to the local observation are updated with the values from the observation matrix. At every iteration, the environment matrix is reset to zero to ensure that the MACN only has access to the partially observable environment. For the continuous world we define our observation to be a 10 × 10 matrix with the agent at the bottom of this patch. We change our formulation in the previous cases where our agent was at the center since the LIDAR only has a 270 degree field of view and the environment behind the robot is not observed. Our input image I to the VI module is [m × n × 2] image where m = 200,n = 200 are the height and width of the environment. I[:, :, 0] is the sensor input. I is first convolved to obtain a reward image R of dimension [n × m × u] where u is the number of hidden units (200 in this case). The K (parameter corresponding to number of iterations of value iteration) here is 40. The network controller is a LSTM with 512 hidden units and the external memory has 1024 rows and a word size of 512. We use 16 write heads and 4 read heads in the access module. The output from the access module is concatenated with the output from the LSTM controller and sent through a linear layer followed by a soft max to get probability distributions for (x, y, θ). We sample from these distributions to get the next waypoint. These way points are then sent to the controller. The waypoints are clipped to ensure that the robot takes incremental steps. For this task, we find that the performance increases when trained by curriculum training. MACN in addition to the baselines is first trained on maps where the goal is close and later trained on maps where the goal is further away. An additional point here, is that due to the complexity of the task, we train and test on the same map. 
Maps in the train set and test set differ by having random start and goal regions.
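The construction of the network input for the continuous-world experiment can be summarized in a short sketch. The code below is an illustrative reconstruction of the description above (a 200 x 200 environment matrix reset at every iteration, a 10 x 10 observation patch with the robot at its bottom edge, and a reward prior with a 1 at the goal); the exact patch placement and the helper name build_input are assumptions, not the authors' implementation.

import numpy as np

m, n = 200, 200        # full environment size used above
j, k = 10, 10          # local observation patch, with the robot at the bottom of the patch

def build_input(obs_patch, robot_rc, goal_rc):
    # Channel 0: the partial observation pasted into a zeroed environment matrix
    # (reset every iteration so the network only ever sees the partially observed world).
    env = np.zeros((m, n), dtype=np.float32)
    r, c = robot_rc
    r0, c0 = r - j + 1, c - k // 2              # robot sits at the bottom row of the patch
    env[r0:r0 + j, c0:c0 + k] = obs_patch       # boundary clipping omitted for brevity
    # Channel 1: reward prior with a 1 at the goal position and zeros elsewhere.
    reward = np.zeros((m, n), dtype=np.float32)
    reward[goal_rc] = 1.0
    return np.stack([env, reward], axis=-1)     # I, with I[:, :, 0] the sensor input

I = build_input(np.ones((j, k), dtype=np.float32),
                robot_rc=(120, 100), goal_rc=(30, 170))    # -> shape (200, 200, 2)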
Memory Augmented Network to plan in partially observable environments.
726
scitldr
Contextualized word representations such as ELMo and BERT have become the de facto starting point for incorporating pretrained representations for downstream NLP tasks. In these settings, contextual representations have largely made obsolete their static embedding predecessors such as Word2Vec and GloVe. However, static embeddings do have their advantages in that they are straightforward to understand and faster to use. Additionally, embedding analysis methods for static embeddings are far more diverse and mature than those available for their dynamic counterparts. In this work, we introduce simple methods for generating static lookup table embeddings from existing pretrained contextual representations and demonstrate they outperform Word2Vec and GloVe embeddings on a variety of word similarity and word relatedness tasks. In doing so, our also reveal insights that may be useful for subsequent downstream tasks using our embeddings or the original contextual models. Further, we demonstrate the increased potential for analysis by applying existing approaches for estimating social bias in word embeddings. Our analysis constitutes the most comprehensive study of social bias in contextual word representations (via the proxy of our distilled embeddings) and reveals a number of inconsistencies in current techniques for quantifying social bias in word embeddings. We publicly release our code and distilled word embeddings to support reproducible research and the broader NLP community. Word embeddings (; ;) have been a hallmark of modern natural language processing (NLP) for several years. Pretrained embeddings in particular have seen widespread use and have experienced parallel and complementary innovations alongside neural networks for NLP. Advances in embedding quality in part have come from integrating additional information such as syntax (b;), morphology (Cotterell & Schütze, 2015), subwords , subcharacters and, most recently, context . As a consequence of their representational potential, pretrained word representations have seen widespread adoption across almost every task in NLP and reflect one of the greatest successes of both representation learning and transfer learning for NLP (b). The space of pretrained word representations can be partitioned into static vs. dynamic embeddings methods. Static methods such as Word2Vec , GloVe , and FastText yield representations that are fixed after training and generally associate a single vector with a given word in the style of a lookup table. While subsequent work addressed the fact that words may have multiple senses and should have different representations for different senses (; ; ; ;), fundamentally these methods cannot easily adapt to the inference time context in which they are applied. This contrasts with contextual, or dynamic, methods such as CoVe , ELMo , and BERT , which produce vector representations for a word conditional on the inference time context in which it appears. Given that dynamic representations are arguably more linguistically valid, more expressive (static embeddings are a special-case of dynamic embeddings that are optimally ineffective at being dynamic), and have yielded significant empirical improvements (b; a; a), it would seem that static embeddings are outdated. Static embeddings, however, have significant advantages over dynamic embeddings with regard to speed, computational resources, and ease of use. 
These benefits have important implications for time-sensitive systems, resource-constrained settings or environmental concerns , and broader accessibility of NLP technologies 1. As a consequence of this dichotomy between static and dynamic representations and their disparate benefits, we propose in this work a simple yet effective mechanism for converting from dynamic representations to static representations. We begin by demonstrating that our method when applied to pretrained contextual models (BERT, GPT-2, RoBERTa, XLNet, DistilBERT) yields higher quality static embeddings than Word2Vec and GloVe when evaluated intrinsically on four word similarity and word relatedness datasets. Further, since our procedure does not rely on specific properties of the pretrained contextual model, it can be applied as needed to generate ever-improving static embeddings that will track advances in pretrained contextual word representations. Our approach offers the hope that high-quality embeddings can be maintained in both settings given their unique advantages and appropriateness in different settings. At the same time, we show that by distilling static embeddings from their dynamic counterparts, we can then employ the more comprehensive arsenal of embedding analysis tools that have been developed in the static embedding setting to better understand the original contextual embeddings. As an example, we employ methods for identifying gender, racial, and religious bias (; ;) to our distilled representations and find that these experiments not only shed light on the properties of our distilled embeddings for downstream use but can also serve as a proxy for understanding existing biases in the original pretrained contextual representations. Our large-scale and exhaustive evaluation of bias further reveals dramatic inconsistencies in existing measures of social bias and highlights sizeable discrepancies in the bias estimates obtained for distilled embeddings drawn from different pretrained models and individual model layers. In this work, we study pretrained word embeddings, primarily of the static variety. As such, we focus on comparing our embeddings against existing pretrained static embeddings that have seen widespread adoption. We identify Word2Vec and GloVe as being the most prominent static embeddings currently in use and posit that these embeddings have been frequently chosen not only because of their high quality representations but also because lookup tables pretrained on large corpora are publicly accessible and easy to use. Similarly, in considering contextual models to distill from, we begin with BERT as it has been the most prominent in downstream use among the growing number of alternatives (e.g. ELMo , GPT , BERT , Transformer-XL, GPT-2 , XLNet, RoBERTa, and DistilBERT ) though we provide similar analyses for several of the other models (GPT-2, XLNet, RoBERTa, DistilBERT) and more comprehensively address them in the appendices. We primarily report for the bert-base-uncased model and include complete for the bert-large-uncased model in the appendices as well. In order to use a contextual model like BERT to compute a single context-agnostic representation for a given word w, we define two operations. The first is subword pooling: the application of a pooling mechanism over the subword representations generated for w in context c to compute a single representation for w in c, i.e. {w 1 c, . . ., w k c} → w c. 
Beyond this, we define context combination to be the mapping from representations w c1,..., w cn of w in different contexts c 1,..., c n to a single static embedding w that is agnostic of context. The tokenization procedure for BERT can be decomposed into two steps: performing a simple word-level tokenization and then potentially deconstructing a word into multiple subwords, yielding w 1,..., w k such that cat(w 1, . . ., w k) = w where cat(·) indicates concatenation. In English, the subword tokenization algorithm is WordPiece . As a consequence, the decomposition of a word into subwords is the same across contexts and the subwords can be unambiguously associated with their source word. Therefore, any given layer of the model outputs vectors w 1 c,..., w k c. We consider four potential pooling mechanisms to compute w c given these vectors: min(·) and max(·) are element-wise min and max pooling, mean(·) indicates mean pooling, i.e. |X | and last(·) indicates selecting the last vector, w k c. In order to convert contextual representations into static ones, we describe two methods of specifying contexts c 1,..., c n and then combining the ing representations w c1,..., w cn. Decontextualized -For a word w, we use a single context where c 1 = w. That is, we feed the single word w by itself into the pretrained contextual model and consider the ing vector to be the representation (applying subword pooling if the word is split into multiple subwords). Aggregated -Observing that the Decontextualized strategy may be presenting an unnatural input to the pretrained encoder which may have never encountered w by itself without a surrounding phrase or sentence, we instead consider ways of combining the representations for w in multiple contexts. In particular, we sample n sentences from a large corpus D, each of which contains the word w, and compute the vectors w c1,..., w cn. Then, we apply a pooling strategy to yield a single representation that aggregates the representations across the n contexts as is shown in Equation 2. To assess the representational quality of our static embeddings, we evaluate on several word similarity and word relatedness datasets (see §A.2 for additional commentary). We consider 4 such datasets: RG65 , WS353 , SIMLEX999 and SIMVERB3500 . Taken together, these datasets contain 4917 examples and contain a vocabulary V of 2005 unique words. Each example is a pair of words (w 1, w 2) with a gold-standard annotation (provided by one or more humans depending on the dataset) of how semantically similar or how semantically related w 1 and w 2 are. A word embedding is evaluated by the relative correctness of its ranking of the similarity/relatedness of all examples in a dataset with respect to the gold-standard ranking using the Spearman ρ coefficient. Embedding predictions are computed using cosine similarity as in Equation 3: We begin by studying how the choices of f and g 2 impact the performance of embeddings distilled from bert-base-uncased. In Figure 1, we show the performance on all four datasets of the ing static embeddings where embeddings computed using the Aggregated strategy are pooled over N = 100000 sentences. Here, N is the number of total contexts for all words (see §A.4). Across all four datasets, we see that g = mean is the best performing pooling mechanism within the Aggregated strategy and also outperforms the Decontexualized strategy by a substantial margin. Fixing g = mean, we further observe that mean pooling at the subword level also performs best. 
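A minimal sketch of the Aggregated distillation strategy with f = mean and g = mean is given below. It assumes a current HuggingFace transformers interface (the original experiments used the older pytorch-transformers package), a fixed layer index, and a naive search for the word's WordPiece span in each context; these details are simplifications of the procedure described above rather than the authors' implementation.

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def static_embedding(word, contexts, layer=1):
    # Aggregated strategy: f = mean over subwords, g = mean over contexts.
    per_context = []
    pieces = tok.tokenize(word)                                 # WordPiece decomposition of w
    for sent in contexts:
        enc = tok(sent, return_tensors="pt")
        tokens = tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())
        with torch.no_grad():
            hidden = model(**enc).hidden_states[layer][0]       # (seq_len, dim) at the chosen layer
        for i in range(len(tokens) - len(pieces) + 1):          # locate w's subword span (first match)
            if tokens[i:i + len(pieces)] == pieces:
                per_context.append(hidden[i:i + len(pieces)].mean(dim=0))   # f = mean over subwords
                break
    return torch.stack(per_context).mean(dim=0)                 # g = mean over contexts

w_bank = static_embedding("bank", ["She sat on the bank of the river.",
                                   "The bank approved the loan on Friday."])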
We further find that this trend that f = mean, g = mean is optimal among the 16 possible pairs consistently holds for almost all pretrained contextual models we considered. If we further consider the impacts of N as shown in Table 1, we see that performance for both bert-base-uncased and bert-large-uncased tends to steadily increase for all datasets with increasing N (and this trend holds for the 7 other pretrained models). In particular, in the largest setting with N = 1000000, the bert-large-uncased embeddings distilled from the best performing layer for each dataset dramatically outperform both Word2Vec and GloVe. However, this can be seen as an unfair comparison given that we are selecting the layer for specific datasets. As the middle band of table shows, we can fix a layer and still outperform both Word2Vec and Glove. Beyond the benefits of using a larger N, Table 1 reveals an interesting relationship between N and the best-performing layer. In Figure 1, there is a clear preference towards the first quarter of the model's layers (layers 0-3) with a sharp drop-off in performance immediately thereafter (we see a similar preference for the first quarter in models with a different number of layers, e.g. Figure 3, Figure 10). Given that our intrinsic evaluation is centered on lexical semantic understanding, this appears to be largely consistent with the findings of Liu et al. (2019a);. However, as we pool over a larger number of contexts, we see that the best-performing layer monotonically (with a single exception) shifts to be later and later within the pretrained model. What this indicates is that since the later layers did not perform better for smaller values of N, these layers demonstrate greater variance with respect to the layer-wise distributional mean and reducing this variance helps in our evaluation 3. This may have implications for downstream use, given that later layers of the model are generally preferred by downstream practitioners and it is precisely these layers where we see the greatest variance. Accordingly, combining our stable static embeddings from layer with the contextual example-specific embeddings also from layer of the pretrained model as was suggested in may be a potent strategy in downstream settings. In general, we find these suggest there may be merits towards further work studying the unification of static and dynamic methods. Along with a trend towards later layers for larger values of N, we see a similar preference towards later layers as we consider each column of from left to right. In particular, while the datasets are ordered chronologically 4, each dataset was explicitly introduced as an improvement over its predecessors (perhaps transitively, see §A.3). While it is unclear from our evaluation as to what differences in the examples in each dataset may cause this behavior, we find this correlation with dataset difficulty and layer-wise optimality to be intriguing. In particular, we see that SIMVERB3500 which contains verbs primarily (as opposed to nouns or adjectives which dominate the other datasets) tends to yield the best performance for embeddings distilled from the intermediary layers of the model (most clear for bert-large-uncased). Remarkably, we find that most tendencies we observe generalize well to all other pretrained models we study (specifically the optimality of f = mean, g = mean, the improved performance for larger N, and the layer-wise tendencies with respect to N and dataset). 
In Table 2, we summarize the of all models employing the Aggregated strategy with f = mean, g = mean and N = 100000 contexts. Surprisingly, despite the fact that many of these models perform approximately equally on many downstream evaluations, we observe that their corresponding distilled embeddings perform radically differently even when the same distillation procedure is applied. These can be interpreted as suggesting that some models learn better lexical semantic representations whereas others learn other behaviors such as context representation and semantic composition more accurately. More generally, we argue that these warrant reconsideration of analyses performed on only one pretrained model as they may not generalize to other pretrained models even when the models considered have (nearly) identical Transformer architectures. A noteworthy in Table 2 is that of DistilBert-6 which outperforms BERT-12 on three out of the four datasets despite being distilled using knowledge distillation from BERT-12. Analogously, RoBERTa, which was introduced as a direct improvement over BERT, does not reliably outperform the corresponding BERT models when comparing the derived static embeddings. Table 2: Performance of static embeddings from different pretrained models on word similarity and word relatedness tasks. f and g are set to mean for all models, N = 100000, and (#) indicates the layer the embeddings are distilled from. Bold indicates best performing embeddings for a given dataset of those depicted. Bias is a complex and highly relevant topic in developing representations and models in machine learning and natural language processing. In this context, we study the social bias encoded within static word representations. As Kate Crawford argued for in her NIPS 2017 keynote, while studying individual models is important given that specific models may propagate, accentuate, or diminish biases in different ways, studying the representations that serve as the starting point and that are shared across models (which are used for possibly different tasks) allows for more generalizable understanding of bias . In this work, we simultaneously consider multiple axes of social bias (i.e. gender, race, and religion) and multiple proposed methods for computationally quantifying these biases. We do so precisely because we find that existing NLP literature has primarily prioritized gender (which may be a technically easier setting) and because we find that different computational specifications of bias that evaluate the same social phenomena yield different . As a direct consequence, we strongly caution that the should be taken with respect to the definitions of bias being applied. Further, we note that an embedding which receives low bias scores cannot be assumed to be (nearly) unbiased, rather that under existing definitions the embedding exhibits low bias and perhaps additional more nuanced definitions are needed. introduced a definition for computing gender bias which assumes access to a set P = {(m 1, f 1),..., (m n, f n)} of (male, female) word pairs where m i and f i only differ in gender (e.g. 'men' and 'women'). They compute a gender direction g: where E(·) is the embedding function, ";" indicates horizontal concatenation/stacking and indicates taking the first principal component. Then, given a set N of target words that we are interested in evaluating the bias with respect to, specifies the bias as: This definition is only inherently applicable to binary bias settings, i.e. 
where there are exactly two protected classes, but still is difficult to apply to binary settings beyond gender as constructing a set P can be challenging. Similarly, multi-class generalizations of this bias definition are also difficult to propose due to the issue of constructing k-tuples that only differ in the underlying social attribute. This definition also assumes the first principal component is capable of explaining a large fraction of the variance. introduced a different definition for computing binary bias that is not restricted to gender, which assumes access to sets A 1 = {m 1, · · ·, m n} and A 2 = {f 1, · · ·, f n} of representative words for each of the two protected classes. For each class, µ i = mean w∈Ai E(w) is computed. computes the bias in the following ways: Compared to the definition of , these definitions may be more general as constructing P is strictly more difficult than constructing A 1, A 2 (as P can always be split into two such sets but the reverse is not generally true) and's definition does not rely on the first principal component explaining a large fraction of the variance. However, unlike the first definition, computes the bias in favor of/against a specific class (meaning if N = {'programmer', 'homemaker'} and'programmer' was equally male-biased as'homemaker' was female-biased, then under the definition of , there would be no bias in aggregate). For the purposes of comparison, we adjust their definition by taking the absolute value of each term in the mean over N. introduced a definition for quantifying multi-class bias which assumes access to sets A 1,..., A k of representative words as in. They quantify the bias as 5: Similar to the adjustment made for the definition, we again take the absolute value of each term in the mean over N. Figure 2: Layer-wise bias of distilled BERT-12 embeddings for f = mean, g = mean, N = 100000 Left: Gender, Center: Race, Right: Religion Table 3: Social bias within static embeddings from different pretrained models with respect to a set of professions N prof. Parameters are set as f = mean, g = mean, N = 100000 and the layer of the pretrained model used in distillation is X 4. Lowest bias in a particular column is denoted in bold. Inspired by the of , in this work we transparently report social bias in existing static embeddings as well as the embeddings we compute. In particular, we exhaustively report the bias for all 3542 valid (pretrained model, layer, social attribute, bias definition) 4-tuples which describe all combinations of static embeddings and bias measures referenced in this work. We specifically report for binary gender (male, female), two-class religion (Christianity, Islam) and three-class race (white, Hispanic, and Asian), directly following. These are by no means intended to be comprehensive with regards to the breadth of bias socially and only address a restricted class of social biases which notably does not include the important class of intersectional biases. The types of biases being evaluated for are taken with respect to specific word lists (which are sometimes subjective albeit being peer-reviewed) that serve as exemplars and with respect to definitions of bias grounded in the norms of the United States. Beginning with bert-base-uncased, we report the layer-wise bias across all (attribute, definition) pairs in Figure 2. What we immediately observe is that for any given social attribute, there is a great deal of variation across the layers in the quantified amount of bias. 
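Since the exact formulas are elided above, the sketch below reconstructs the two-class measures in the relative-distance style we believe is intended (what the text elsewhere calls bias_GARG-EUC and bias_GARG-COS): class means are computed from the attribute sets, and the bias over a target set N is the mean absolute difference between each target word's distance to the two means. The embedding lookup, the toy word lists, and the precise form of the formulas should all be treated as assumptions for illustration.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def two_class_bias(E, A1, A2, N, metric="cos"):
    # Class means from the attribute word sets, as described above.
    mu1 = np.mean([E[w] for w in A1], axis=0)
    mu2 = np.mean([E[w] for w in A2], axis=0)
    terms = []
    for w in N:
        if metric == "cos":     # cosine variant (bias_GARG-COS)
            terms.append(abs(cosine(E[w], mu1) - cosine(E[w], mu2)))
        else:                   # Euclidean variant (bias_GARG-EUC); sensitive to embedding norms
            terms.append(abs(np.linalg.norm(E[w] - mu1) - np.linalg.norm(E[w] - mu2)))
    return float(np.mean(terms))

# Toy usage with a hypothetical embedding lookup E: word -> vector.
rng = np.random.default_rng(0)
E = {w: rng.normal(size=50) for w in
     ["he", "him", "she", "her", "doctor", "nurse", "engineer"]}
print(two_class_bias(E, A1=["he", "him"], A2=["she", "her"],
                     N=["doctor", "nurse", "engineer"]))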
Further, while we are unsurprised that different bias measures for the same social attribute assign different absolute scores, we observe that they also do not agree in relative judgments. For gender, we observe that the bias estimated by the definition of steadily increases before peaking at the penultimate layer and slightly decreasing thereafter. In contrast, under bias GARG-EUC we see a distribution with two peaks corresponding to layers at the start or end of the pretrained contextual model with lower bias observed in the intermediary layers. For estimating the same quantity, bias GARG-COS is mostly uniform across the layers (though the scale of the axes visually lessens the variation displayed). Similarly, in looking at the religious bias, we see similar inconsistencies with the bias increasing monotonically from layers 2 through 8 under bias MANZINI, decreasing monotonically under bias GARG-EUC, and remaining roughly constant under bias GARG-COS. In general, while the choice of N (and the choice of A i in the gender bias case) does affect the absolute bias estimates under any given definition, we find that the general trends in the bias across layers are approximately invariant under these choices for a specific definition. Taken together, our analysis suggests a concerning state of affairs regarding bias quantification measures for (static) word embeddings. In particular, while estimates are seemingly stable to some types of choices regarding word lists, bias scores for a particular word embedding are tightly related to the definition being used and existing bias measures are markedly inconsistent with each other. We find this has important consequences beyond understanding the social biases in our representations. Concretely, we argue that without certainty regarding the extent to which embeddings are biased, it is impossible to properly interpret the meaningfulness of debiasing procedures (; a; b;) as we cannot reliably estimate the bias in the embeddings both before and after the procedure. This is further compounded with the existing evidence that current intrinsic measures of social bias may not handle geometric behavior such as clustering . In light of the above, next we compare bias estimates across different pretrained models in Table 3. Given the conflicting scores assigned by different definitions, we retain all definitions along with all social attributes in this comparison. However, we only consider target words given by N prof for visual clarity as well as due to the aforementioned stability to the choice of N, with the for adjectives provided in Table 8. We begin by noting that since we do not perform preprocessing to normalize embeddings, the scores using bias GARG-EUC are not comparable (and may not have been proper to compare in the layer-wise case either) as they are sensitive to the absolute norms of the embeddings which cannot be expected to be similar across models 6. Further, we note that bias BOLUKBASI may not be a reliable indicator as similar to Zhao et al. (2019a), we find that the first principal component explains less than 35% of the variance in the majority of the static embeddings distilled from contextual models. Of the two bias definitions not mentioned thus far, we find that all distilled static embeddings have substantially higher scores under bias MANZINI but generally lower scores under bias GARG-COS when compared to Word2Vec and GloVe. 
Interestingly, we see that under bias MANZINI both GPT-2 and RoBERTa embedding consistently get high scores across social attributes when compared to other distilled embeddings but under bias GARG-COS they receive the lowest scores among distilled embeddings. Ultimately, given the aforementioned issues regarding the reliability of bias measures, it is difficult to arrive at a clear consensus of the comparative bias between our distilled embeddings and prior static embeddings. What our analysis does resolutely reveal is a pronounced and likely problematic effect of existing bias definitions on the ing bias scores. Distilled Static Representations. introduced an approach similar to our Aggregated strategy where representations are gradually aggregated across instances in a dataset during training to model global information. Between epochs, the memory of past instances is reset and during testing, inference-time instances are added into the memory. In that work, the computed static embeddings are an additional feature that is used to achieve the state-of-the-art on several NER datasets. Based on our , we believe their approach could be further improved by different decisions in pretrained model and layer choice. Their may be explained by the (desirable) variance reduction we observe in pooling over many contexts. Additionally, since they only pool over instances in an online fashion within an epoch, the number of contexts is relatively small in their approach as compared to ours which may help to explain why they find that min or max pooling perform slightly better than mean pooling as the choice for g. proposes a different approach to convert representations from sentence encoders into static embeddings as a means for applying the WEAT implicit bias tests to a sentence encoder. In their method, a single semantically-bleached sentence is synthetically constructed from a template and then fed into the encoder to compute a static embedding for the word of interest. We argue that this approach may inherently not be appropriate for quantifying bias in sentence encoders 7 in the general case as sentence encoders are trained on semantically-meaningful sentences and semantically-bleached constructions are not representative of this distribution. Moreover, the types of templated constructions presented heavily rely on deictic expressions and therefore are difficult to adapt for certain syntactic categories such as verbs (as would be required for the SimVerb3500 dataset especially) without providing arguments for the verb. These concerns are further exacerbated by our findings given the poor representational behavior seen in our Decontextualized embeddings which have similar deficiencies with their static embeddings and the poor representational behavior when we pool over relatively few semantically-meaningful contexts using the Aggregated strategy (e.g. our for N = 10000 which is still 50 instances per word on average and is much more than the single instance they consider). We believe our quantification of bias as a can be taken as a more faithful estimator of bias in sentence encoders. considers a similar approach towards diachronic sense modelling. In particular, given a word, they find its senses and example sentences of each sense in the Oxford English Dictionary and use these to compute static embeddings using the Aggregated strategy with the last layer of bert-base-uncased and n i upper-bounded at 10. 
Given our , their performance could likely be improved by pooling over more sentences, using bert-large-uncased, and considering layer choice as their task heavily relies on lexical understanding which seems to be better captured in earlier layers of the model than the last one. Since they require sense annotations for their setting (and the number of example sentences in a dictionary for a sense is inherently constrained), our findings also suggest that additional sense-annotated or weakly sense-annotated sentences would be beneficial. Lightweight Pretrained Representations. Taken differently, our approach can be seen as a method for integrating pretraining in a more lightweight fashion. Model compression and knowledge distillation are well-studied techniques in machine learning that have been recently applied for similar purposes. In particular, several concurrent approaches have been proposed to yield lighter pretrained sentence encoders and contextual word representations (; ; ;). Our approach along with these recent approaches yield representations that are more appropriate for resource-constrained settings such as on-device models for mobile phones, for real-time settings where we require low-latency and short inference times, and for users that may not have access to GPU or TPU computational resources . Additionally, this line of work is particularly timely given the emergent concerns of the environmental impact/harm of training and using increasingly large models in NLP , machine learning , and the broader AI community . Bias. Social bias in NLP has been primarily evaluated in three ways: (a) using geometric similarity between embeddings (; ;), (b) adapting psychological association tests , and (c) considering down-stream behavior 2018a; 2019a; ) 8. In relation to this body of work, our bias evaluation is in the style of (a) as we are interested in intrinsic bias in embeddings and considers (potentially) multi-class social bias in the lens of gender, race, and religion whereas prior work has primarily focused on gender. Additionally, while most of the work on bias in embeddings has considered the static embedding setting, recent work has considered sentence encoders and contextual models. Zhao et al. (2019a) considers gender bias in ELMo when applied to extends these by considering not only NER but also bias using WEAT by leveraging the masked language modeling objective of BERT. considers intrinsic gender bias using ELMo by studying gender-swapped sentences. When compared to these approaches, we study a broader class of biases under more than one bias definition and consider more than one model. Further, while these approaches generally neglect reporting bias values for different layers of the model, we show this is crucial as bias is not uniformly distributed throughout model layers and downstream practitioners often do not use the last layer of deep Transformer models (a; ; b) 9. Pretrained contextual word representations have quickly gained traction in the NLP community, largely because of the flurry of empirical successes that have followed since their introduction. For downstream practitioners, our work suggests several simple (e.g. subword pooling mechanism choice) and more sophisticated (e.g. layer choice, benefits of variance reduction by using multiple contexts) strategies that may yield better downstream performance. 
Additionally, some recent models have combined static and dynamic embeddings (; ;) and our representations may support drop-in improvements in these settings. Beyond furthering efforts in representation learning, this work introduces a new approach towards the understanding of contextual word representations via proxy analysis. In particular, while in this work we choose to study social bias, similar analyses toward other forms of interpretability and understanding would be valuable. Additionally, post-processing approaches that go beyond analysis such as dimensionality reduction may be particularly intriguing given that this is often challenging to do within large multi-layered networks like BERT but has been successfully done for static embeddings (; ;). Future work may also consider the choice of the corpus D from which contexts are drawn. In particular, we believe choosing D to be drawn from the target domain for some downstream task may serve as an extremely lightweight domain adaptation strategy. Additionally, in this work we choose to provide contexts of sentence length in order to facilitate regularity in the comparison across models. But for some models, such as Transformer-XL or XLNet which are trained with memories to handle larger contexts, better performance may be achieved by using larger contexts. In this work, we propose simple but effective procedures for converting contextual word representations into static word embeddings. When applied to pretrained models like BERT, we find the ing embeddings outperform Word2Vec and GloVe substantially under intrinsic evaluation and provide insights into the pretrained model. We further demonstrate the ing embeddings are more amenable to (existing) embedding analysis methods and report the extent of various social biases (gender, race, religion) across a number of measures. Our large-scale analysis furnishes several findings with respect to social bias encoded in popular pretrained contextual representations via the proxy of our embeddings and has implications towards the reliability of existing protocols for quantifying bias in word embeddings. All data, code, visualizations (and code to produce to them), and distilled word embeddings will be publicly released. Additional reproducibility details are provided in Appendix A. URL http://papers.nips.cc/paper/ 6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddi pdf. Rishi Bommasani, Arzoo Katiyar, and Claire Cardie. SPARSE: Structured prediction using argument-relative structured encoding. In this work, we chose to conduct intrinsic evaluation experiments that focused on word similarity and word relatedness. We did not consider the related evaluation of lexical understanding via word analogies as they have been shown to decompose into word similarity subtasks (a) and there are significant concerns about the validity of these analogies tests . We acknowledge that word similarity and word relatedness tasks have also been heavily scrutinized . A primary concern is that are highly sensitive to (hyper)parameter selection . In our setting, where the parameters of the embeddings are largely fixed based on which pretrained models are publicly released and where we exhaustively report the impact of most remaining parameters, we find these concerns to still be valid but less relevant. To this end, prior work has considered various preprocessing operations on static embeddings such as clipping embeddings on an elementwise basis when performing intrinsic evaluation. 
We chose not to study these preprocessing choices as they create discrepancies between the embeddings used in intrinsic evaluation and those used in downstream tasks (where this form of preprocessing is generally not considered) and would have added additional parameters implicitly. Instead, we directly used the computed embeddings from the pretrained model with no changes throughout this work. A.3 introduced a set of 65 noun-pairs and demonstrated strong correlation (exceeding 95%) between the scores in their dataset and additional human validation. introduced a larger collection of pairs which they argued was an improvement over RG65 as it more faithfully addressed semantic similarity. followed this work by introducing a even more pairs that included those of as a subset and again demonstrated correlations with human scores exceeding 95%. argued that SIMLEX999 was an improvement in coverage over RG65 and more correctly quantified semantic similarity as opposed to semantic relatedness or association when compared to WS353. Beyond this, SIMVERB3500 was introduced by to further increase coverage over all predecessors. Specifically, it shifted the focus towards verbs which had been heavily neglected in the prior datasets which centered on nouns and adjectives. We used PyTorch throughout this work with the pretrained contextual word representations taken from the HuggingFace pytorch-transformers repository 13. Tokenization for each model was conducted using its corresponding tokenizer, i.e. for GPT2 use the GPT2Tokenizer in pytorch-transformers. For simplicity, throughout this work, we introduce N as the total number of contexts used in distilling with the Aggregated strategy. Concretely, N = wi∈V n i where V is the vocabulary used (generally the 2005 words in the four datasets considered). As a , in finding contexts, we filter for sentences in D that contain at least one word in V. We choose to do this as this requires a number of candidate sentences upper bounded with respect to the most frequent word in V as opposed to filtering for a specific value for n which requires a number of sentences scaling in the frequency of the least frequent word in V. The N samples from D for the Aggregated strategy were sampled uniformly at random. Accordingly, as the aforementioned discussion suggests, for word w i, the number of examples n i which contain w i scales in the frequency of w i in the vocabulary being used. As a consequence, for small values of N, it is possible that rare words would have no examples and computing a representation w using the Aggregated strategy would be impossible. In this case, we back-offed to using the Decontextualized representation for w i. Given this concern, in the bias evaluation, we fix n i = 20 for every w i. In initial experiments, we found the bias to be fairly stable when choosing values n i ∈ {20, 50, 100}. The choice of n i would correspond to N = 40100 (as the vocabulary size was 2005) in the representation quality section in some sense (however this assumes a uniform distribution of word frequency as opposed to a Zipf distribution). The embeddings in the bias evaluation are drawn from layer X 4 using f = mean, g = mean as we found these to be the best performing embeddings generally across pretrained models and datasets in the representational quality evaluation. The set of gender-paired tuples P were taken from. In the gender bias section, P for definitions involving sets A i indicates that P was split into equal-sized sets of male and female work. 
For the remaining gender results, the sets described in §G.3 were used. The various attribute sets A_i and target sets N_j were taken from prior work, and can be further sourced to a number of earlier studies of social bias. We remove any multi-word terms from these lists.
B BERT-LARGE
Table 8: Social bias within static embeddings from different pretrained models with respect to a set of adjectives, N_adj. Parameters are set as f = mean, g = mean, N = 100000, and the layer of the pretrained model used in distillation is X_4.
A procedure for distilling contextual models into static embeddings; we apply our method to 9 popular models and demonstrate clear gains in representation quality wrt Word2Vec/GloVe and improved analysis potential by thoroughly studying social bias.
727
scitldr
The brain performs unsupervised learning and (perhaps) simultaneous supervised learning. This raises the question as to whether a hybrid of supervised and unsupervised methods will produce better learning. Inspired by the rich space of Hebbian learning rules, we set out to directly learn the unsupervised learning rule on local information that best augments a supervised signal. We present the Hebbian-augmented training algorithm (HAT) for combining gradient-based learning with an unsupervised rule on pre-synpatic activity, post-synaptic activities, and current weights. We test HAT's effect on a simple problem (Fashion-MNIST) and find consistently higher performance than supervised learning alone. This finding provides empirical evidence that unsupervised learning on synaptic activities provides a strong signal that can be used to augment gradient-based methods. We further find that the meta-learned update rule is a time-varying function; thus, it is difficult to pinpoint an interpretable Hebbian update rule that aids in training. We do find that the meta-learner eventually degenerates into a non-Hebbian rule that preserves important weights so as not to disturb the learner's convergence. Backpropagation achieves great performance in neural net optimization, but might not be biologically plausible because most problems are not explicitly phrased as classification with true labels, because neurons only know local signals (e.g. synaptic density, ACh levels, current), and because backpropagation uses the computational graph, a separate data structure with no known biological basis. Although some supervised training schemes are more biologically plausible (e.g. contrastive Hebbian learning and equilibrium propagation ), it's currently unknown whether the behavior of all neurons is accurately encapsulated by these models. We speculate that some local, unsupervised learning occurs in the brain and demonstrate that the addition of local, unsupervised rules to standard backpropagation actually improves the speed and robustness of learning. We begin by defining a local learning rule. Consider two adjacent neurons i, j with weight w ij: given an impulse traversing i, j with activations v i, v j, a local learning rule computes updates ∆w ij using local data v i, w ij, v j. Note that by this definition, a local learning rule is unsupervised at face value. Many neuroscientists have hypothesized specific functions that describe the brain's true (unsupervised) local learning rule. Most such rules involve using the correlation of activations as part of the update rule. Examples include Hebb's Rule, Oja's Rule, the Generalized Hebbian Algorithm, and nonlineear Hebbian rules. It is not obvious which of these rules (if any) describe the true behavior of neurons. We employ meta-learning (learning how to learn) as an investigative tool. Optimization functions are algorithms too; it stands to reason that we can learn the best optimization function. In the meta-learning framework, one model A learns a task (e.g. Fashion-MNIST) while another model B learns how to optimize A. Meta-learning has achieved great in finding robust optimization schemes. Andrychowicz et. al. used meta-learning to find the best gradient-based optimization function (B learns to update A using A's gradients), and Chen et. al. used meta-learning to find the best gradient-free optimization function (B learns to update A using only the sequence of A's losses). Finally, Metz et al. 
demonstrated a fully differentiable architecture for learning to learn unsupervised local rules, and demonstrate better-than-random performance on a few-shot basis. If B consistently converges to some stable rule, we take it as strong evidence that this rule may occur in biological brains as well. We therefore wish to extend Metz's approach to learning semi-supervised local rules, not only to improve performance but also to investigate the functional form of the meta-learned update rule. The Hebbian-Augmented Training (HAT) algorithm trains the neural net L twice per sample: using local, unsupervised rules on the forward pass and using backpropagation-based gradient descent on the backward pass. Formally, we create two multilayer perceptrons: a learner L(· | φ_L) with parameters φ_L and a meta-learner M(v_i, w_ij, v_j | φ_M) with parameters φ_M, which takes inputs v_i, w_ij, v_j and returns ∆w_ij. For a single sample (x, y), we train L without supervision using M and x; we simultaneously train L and M with supervision using the optimizer A and y. On the forward pass, we compute activations for each layer. For a given layer, we now have the inputs, outputs, and current weights -- all of the inputs of a local learning rule. We can then apply the outputs of the meta-learner M to update the weights of that layer, and recompute the layer's activations using the new weights. This process is done efficiently by convolution (for details, see Appendix A). We compute the activations of the first layer, update its weights, compute the activations of the second layer, update its weights, and so on until we compute the prediction ŷ and update the final layer. On the backward pass, we backpropagate. Since we recomputed the activations of each layer using weights updated by M, the weights of M are upstream of the weights of L in the computational graph; thus, a single iteration of the backpropagation algorithm will compute gradients for both M and L. Given a gradient ∇_p for each parameter p ∈ φ_L ∪ φ_M, we then perform a supervised update p ← p + A(p, ∇_p). The key insight is that the convolution of the meta-learner over the weights of the learner forms a single fully differentiable computation from the input through L and M.
Algorithm 1 Hebbian-Augmented Training
  for each layer weight W in L do                          (Forward pass)
      compute the layer's output activations from its inputs and W; the output serves as input to M
      W ← W + M(inputs, W, outputs)                        (Update weights using local rule M)
      recompute the layer's output activations using the updated W
  Backpropagate loss H(v_|L|, y)
  for each layer weight W in L and M do                    (Backward pass)
      W ← W + A(W, ∇_W)                                    (Apply gradient update using optimizer A)
  return L, M                                              (Return updated learner and updated meta-learner)
We hypothesize that the HAT algorithm will have three positive effects. • HAT will train the learner L faster, since there are twice as many updates. In ordinary backpropagation the metadata generated from the forward pass is computed and wasted; in HAT, that metadata is used to generate a (potentially) useful update. • HAT will improve the convergence of L. The second update should introduce some stochasticity in the loss landscape, since it is not directly tied to gradient descent, which may lead L into better local optima. • HAT will improve the performance of L when some examples are not labeled. Backpropagation has no ability to learn from just the input x, while HAT is able to perform the unsupervised update. We generate two learning curves to test these hypotheses: one with respect to time and one with respect to the proportion of labeled examples. The charts below represent the aggregated learning curves of 100 pairs (L_i, M_i).
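Before turning to the results, a minimal sketch of a single HAT step may help make the two-update scheme concrete. It uses our own simplifying assumptions (toy layer sizes, a 3 → 100 → 1 meta-learner applied by broadcasting rather than by the convolution trick of Appendix A, and Adam standing in for the optimizer A); the essential point it demonstrates is that the meta-updated weights are used to recompute activations, so gradients of the supervised loss flow into the meta-learner M.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Toy learner: 64 -> 32 -> 10. Weights are plain tensors so that meta-updated
# copies remain inside the autograd graph.
W1 = (0.1 * torch.randn(32, 64)).requires_grad_()
W2 = (0.1 * torch.randn(10, 32)).requires_grad_()
meta = nn.Sequential(nn.Linear(3, 100), nn.ReLU(), nn.Linear(100, 1))   # M(v_i, w_ij, v_j)
opt = torch.optim.Adam([W1, W2, *meta.parameters()], lr=1e-3)

def hat_step(x, y):
    opt.zero_grad()
    h, updated = x, []
    for W in (W1, W2):
        pre = h
        post = F.relu(pre @ W.t())
        # (v_i, w_ij, v_j) triples for every (input neuron, output neuron) pair in the batch
        triples = torch.stack([
            pre[:, None, :].expand(-1, W.shape[0], -1),      # v_i
            W[None, :, :].expand(pre.shape[0], -1, -1),      # w_ij
            post[:, :, None].expand(-1, -1, W.shape[1]),     # v_j
        ], dim=-1)                                           # (B, n_out, n_in, 3)
        W_new = W + meta(triples).squeeze(-1).mean(dim=0)    # unsupervised update from M
        h = F.relu(pre @ W_new.t())                          # recompute activations with new weights
        updated.append(W_new)
    loss = F.cross_entropy(h, y)                             # supervised loss on the prediction
    loss.backward()                                          # gradients reach both L and M
    with torch.no_grad():                                    # make the unsupervised update persistent
        W1.copy_(updated[0])
        W2.copy_(updated[1])
    opt.step()                                               # optimizer A applies the gradient update
    return loss.item()

print(hat_step(torch.randn(8, 64), torch.randint(0, 10, (8,))))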
We find that the effects of HAT on training are clearly positive. The median accuracy of the neural nets trained by HAT is clearly increased along the learning curve, and the HAT-group neural nets reach a higher asymptotic value than the control group. We do note that the two learning curves seem to inflect around the same point -HAT does not seem to cause a faster convergence, just a better one. We attribute this to the meta-learner's convergence; it may take the meta-learner up to 0.5 epochs to start to have positive effects. One potential concern with adding unsupervised meta-learner updates is that after the convergence of the base learner L, the meta-learner's continued output of non-zero updates might "bounce" the base learner out of an optimum. Remarkably, we see in the above plot that the performance of the HAT-trained neural nets is quite stable for the entire 18 epochs of post-convergence duration. To our surprise, we find that HAT is more effective when there are more labels, even though the self-supervised component of the algorithm is designed to take advantage of scarce labels. We attribute this to slow convergence of the meta-learner M -when labels are scarce, the meta-learner may actually converge slower than the learner and thus provide bad update suggestions. We would like insight into why HAT improves the training of neural nets over vanilla gradient descent. Thus, we will analyze the functional form of the learned update rule M after it has fully converged. Recall the setting from experiments 1 and 2: we generate 100 pairs of learners and meta-learners: (L i, M i) for i ∈ {1, ..., 100}. We then investigate the pointwise mean M of these meta-learners. We first visualize the dependence of the function M on its inputs (v i, v j, w ij). We find that a remarkably linear dependence on v j explains almost all of the variance in the outputs of the meta-learned update rule. This indicates that the rule is a "rich-get-richer" scheme: neurons that already fired with high magnitude will experience larger incoming weights and thus be encouraged to fire with high activation in the future. This linear dependence is surprising since all of the hypothesized rules in neuroscience have a dependence on v i ·v j. As a sanity check, we attempted to directly apply this update rule (∆w ij ≈ 2·v j) without meta-learning to see if we can replicate HAT's performance improvement. However, the were decisively negative -HAT improves performance, but the a priori application of HAT's update rule decreases it. We present three hypotheses: • Perhaps M learns a good update rule while L is training, then learns a degenerate rule once L has converged. The sole purpose of this degenerate rule would be to not un-learn the important weights that have already converged (thus explaining the rich-gets-richer behavior of the rule f (·) = 2v j ). Thus, analyzing the black-box function at epoch 20 is merely the wrong time -perhaps observing the meta-learned rule at epoch 1 would be more insightful and useful. • Perhaps M learns a good update rule in each run, and these update rules are all complex functions with no good low-order polynomial approximations; however, their pointwise mean (which is itself not a good local update rule) happens to be linear. Thus, M is the wrong object to analyze and presents behaviors that are not indicative of the of experiments 1 and 2. • Perhaps the learning of M is extremely transient. 
For any given point in time, there is a different optimal learning rule, and our exercise in finding a fixed local, unsupervised update rule that is universal across training is futile. The HAT algorithm demonstrates that local, unsupervised signals can provide performance-improving weight updates. Neural nets under HAT converge to better asymptotic losses as long as there is sufficient time (> 0.5 epochs) and a sufficient number of labels (> 20% of the data is labeled). The latter finding is surprising since the addition of an unsupervised learning algorithm depends on the presence of labels in order to deliver marginal benefits over gradient descent. The underlying form of the learned rule that makes HAT successful is still a mystery; we find that while the meta-learner may learn a useful update rule during training, the meta-learner does not converge to this useful rule in the long run and instead devolves into a linear function ConvergedRule. This converged function preserves fully-converged weights by reinforcing incoming weights for neurons with high activations. The discovery that HAT does not stably converge to a function makes analysis quite difficult. However, there is potential for future work to do more subtle analyses. Imagine a time t during training in which the meta-learner M has converged to a useful function, but the learner L has not yet finished training. A follow-up to this thesis might be to discover whether there such a time t exists, what the structure of M at time t is, and how M changes the weights of L at time t. One potential methodology might be to observe the function f not as a 3-dimensional function in (v i, w ij, v j) but rather as a 4-dimensional function in (v i, w ij, v j, t). Observing the function along the t-axis and checking for phase changes would shed light on whether a single useful update rule is learned during training or whether HAT's learning is truly transient and continuous. If this follow-up were to succeed, then we could have an a priori rule to apply without having to metalearn update rules. Extracting the local rules from multiple domains could either find that HAT learns a universal rule or that functional distance between two rules describes the "difference" between their originating domains. • Suppose we always metalearn the same rule, regardless of problem domain. Optimal-Hebb is then a universal learning rule. • Suppose Optimal-Hebb is not universal for all problems. For local rules R A, R B on problems A, B, integrating gives an explicit measure for how similar A and B are. This provides a systematic way to identify pairs of learning problems that are good candidates for transfer learning. One implementation detail is notably not covered in the HAT pseudocode; this implementation detail patches an inadequacy in modern deep learning frameworks. Given two neural net layers i and i+1 and minibatches of size B, we have B instances of | i | × i+1 neuron pairs, each of which has 3 salient properties (v i, w ij, v j). Therefore, we would like to apply the function M over the zeroth dimension of a tensor of size 3 × B × | i | × | i+1 | in order to compute the unsupervised weight updates. However, as of this writing date, it is not possible to apply an arbitrary function M to slices of a tensor in parallel in any modern deep learning framework (e.g. Tensorflow, PyTorch, Keras); the reason is that this plays poorly with optimization of the computational graph. 
We thus implement the application of M's updates to the weights by convoluting M over a state tensor. This is best clarified with an example. Suppose we have a neural net with consecutive layers 1, 2 of size 784 and 183, respectively. Suppose further that we have batches of size 50. Finally, suppose that we require a meta-learner that is a neural net of architecture 3 × 100 × 1. We then copy the tensors along the boxed dimensions to stack them. We instantiate M as a sequence of 3 composed functions: 1. a convolutional layer of kernel size 1 × 1 with 3 in-channels and 100 out-channels, 2. a ReLU activation, and 3. a convolutional layer of kernel size 1 × 1 with 100 in-channels and 1 out-channels. Applying this series of functions to a 1× image with 3 channels is equivalent to passing the 3 channels into a neural net with architecture 3 × 100 × 1. PyTorch (the framework used for this research) does not support the vectorization of arbitrary functions along torch tensors. However, it does support (and heavily optimize for) convolutions. Thus, we implement our neural net function M as a series of convolutions, and we convolve the function over the input tensor of size 3 × 50 × 183 × 784. The output of M is of size 50 × 183 × 784; we average over the zeroth dimension to finally get a weight update of dimension 183 × 784, which is the same size as the original weight tensor.
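A minimal PyTorch sketch of this 1 x 1-convolution trick is given below, with smaller toy sizes than the 3 x 50 x 183 x 784 example above so that the intermediate activations stay small; the dimension ordering is adapted to PyTorch's channels-first convention, and the variable names are ours.

import torch
import torch.nn as nn

B, n_in, n_out = 8, 64, 32       # the example above uses B = 50, n_in = 784, n_out = 183
meta = nn.Sequential(            # behaves like a 3 -> 100 -> 1 MLP applied independently to every pair
    nn.Conv2d(3, 100, kernel_size=1),
    nn.ReLU(),
    nn.Conv2d(100, 1, kernel_size=1),
)

v_pre = torch.randn(B, n_in)                 # layer inputs
W = 0.1 * torch.randn(n_out, n_in)           # current weights
v_post = torch.relu(v_pre @ W.t())           # layer outputs

# Stack the three "channels" (v_i, w_ij, v_j), each broadcast to shape (B, n_out, n_in);
# channels-first so that Conv2d sees a (B, 3, n_out, n_in) "image".
state = torch.stack([
    v_pre[:, None, :].expand(B, n_out, n_in),
    W[None, :, :].expand(B, n_out, n_in),
    v_post[:, :, None].expand(B, n_out, n_in),
], dim=1)

delta = meta(state).squeeze(1).mean(dim=0)   # average the per-sample updates: (n_out, n_in)
W_updated = W + delta                        # same size as the original weight tensor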
Metalearning unsupervised update rules for neural networks improves performance and potentially demonstrates how neurons in the brain learn without access to global labels.
728
scitldr
Deep convolutional networks often append additive constant ("bias") terms to their convolution operations, enabling a richer repertoire of functional mappings. Biases are also used to facilitate training, by subtracting mean response over batches of training images (a component of "batch normalization"). Recent state-of-the-art blind denoising methods seem to require these terms for their success. Here, however, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not including in the training data. In particular, bias-free CNNs (BF-CNNs) are locally linear, and hence amenable to direct analysis with linear-algebraic tools. These analyses provide interpretations of network functionality in terms of projection onto a union of low-dimensional subspaces, connecting the learning-based method to more traditional denoising methodology. Additionally, BF-CNNs generalize robustly, achieving near-state-of-the-art performance at noise levels well beyond the range over which they have been trained. Denoising -recovering a signal from measurements corrupted by noise -is a canonical application of statistical estimation that has been studied since the 1950's. Achieving high-quality denoising requires (at least implicitly) quantifying and exploiting the differences between signals and noise. In the case of natural photographic images, the denoising problem is both an important application, as well as a useful test-bed for our understanding of natural images. The classical solution to the denoising problem is the Wiener filter, which assumes a translation-invariant Gaussian signal model. Under this prior, the Wiener filter is the optimal estimator (in terms of mean squared error). It operates by mapping the noisy image to the frequency domain, shrinking the amplitude of all components, and mapping back to the signal domain. In the case of natural images, the high-frequency components are shrunk more aggressively than the lower-frequency components because they tend to contain less energy in natural images. This is equivalent to convolution with a lowpass filter, implying that each pixel is replaced with a weighted average over a local neighborhood. In the 1990's, more powerful solutions were developed based on multi-scale ("wavelet") transforms. These transforms map natural images to a domain where they have sparser representations. This makes it possible to perform denoising by applying nonlinear thresholding operations in order to reduce or discard components that are small relative to the noise level (4; 12; 1). From a linear-algebraic perspective, these algorithms operate by projecting the noisy input onto a lower-dimensional subspace that contains plausible signal content. The projection eliminates the orthogonal complement of the subspace, which mostly contains noise. This general methodology laid the foundations for the state-of-the-art models in the 2000's (e.g.), some of which added a data-driven perspective, learning sparsifying transforms, or more general nonlinear shrinkage functions directly from natural images (6; 10). In the past decade, purely data-driven models based on convolutional neural networks have come to dominate all previous methods in terms of performance. 
These models consist of cascades of convolutional filters, and rectifying nonlinearities, which are capable of representing a diverse and powerful set of functions. Training such architectures to minimize mean square error over large databases of noisy natural-image patches achieves current state-of-the-art results (see also for a related approach). Neural networks have achieved particularly impressive results on the blind denoising problem, in which the noise amplitude is unknown (14; 15; 9). Despite their success, we lack intuition about the denoising mechanisms these solutions implement. Network architecture and functional units are often borrowed from the image-recognition literature, and it is unclear which of these aspects contribute positively to, or limit, the denoising performance. Many authors claim critical importance of specific aspects of architecture (e.g., skip connections, batch normalization, recurrence), but the benefits of these attributes are difficult to isolate and evaluate in the context of the many other elements of the system. In this work, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not included in the training data. In particular, bias-free CNNs (BF-CNNs) are locally linear, and hence amenable to direct analysis with linear-algebraic tools. And BF-CNNs generalize robustly, achieving near-state-of-the-art performance at noise levels well beyond the range over which they have been trained. We assume a measurement model in which images are corrupted by additive noise: y = x + n, where x ∈ R N is the original image, containing N pixels, n is an image of i.i.d. samples of Gaussian noise with variance σ 2, and y ∈ R N is the observed noisy image. The denoising problem consists of finding a function f: R N → R N that provides a good estimate of the original image, x. Commonly, one minimizes the mean squared error: f (y) = arg min g E||x − g(y)|| 2, where the expectation is taken over some distribution over images, x, as well as over the distribution of noise realizations. Finally, if the noise standard deviation, σ, is unknown, the expectation should also be taken over a distribution of this variable. This problem is often called blind denoising in the literature. Feedforward neural networks with rectified linear units (ReLUs) are piecewise affine: for a given input signal, the effect of the network on the input is a cascade of linear transformations (convolutional or fully connected layers, each represented by a matrix W), additive constants (b), and pointwise multiplication by a binary mask representing the sign of the affine responses (R). Since each stage is affine, the entire cascade implements a single affine transformation. The function computed by a denoising neural network with L layers may be written f (y) = A y y + b y, where A y ∈ R N ×N is the Jacobian of f (·) evaluated at input y, and b y ∈ R N represents the net additive bias. The subscripts on A y and b y serve as a reminder that the corresponding matrix and vector, respectively, depend on the ReLU activation patterns, which in turn depend on the input vector y. If we remove all the additive ("bias") terms from every stage of a CNN, the resulting bias-free CNN (BF-CNN) is strictly linear, and its net action may be expressed as f BF (y) = A y y, where A y is again the Jacobian of f BF (·) evaluated at y.
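To make the local-linearity claim above concrete, here is a small, hypothetical PyTorch sketch (ours, not the authors' code): a tiny CNN with every additive bias removed, whose output at an input y is checked against multiplication by its Jacobian A_y, i.e. f(y) = A_y y. The architecture, sizes, and tolerance are illustrative assumptions.

```python
# Hypothetical sketch: a tiny bias-free CNN and a numerical check that its
# action on an input equals multiplication by its Jacobian, f(y) = A_y y.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyBFCNN(nn.Module):
    """Small denoiser-style CNN with every additive bias removed."""
    def __init__(self, channels=8, depth=4):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1, bias=False), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1, bias=False)]
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)

model = TinyBFCNN().eval()
y = torch.rand(1, 1, 16, 16)            # stand-in for a noisy image, N = 256

# Jacobian A_y of the denoising map, evaluated at y (flattened to N x N).
f_flat = lambda v: model(v.view(1, 1, 16, 16)).view(-1)
A_y = torch.autograd.functional.jacobian(f_flat, y.view(-1))

# Bias-free => the local affine map has no constant term: f(y) == A_y @ y.
out = model(y).view(-1)
print(torch.allclose(out, A_y @ y.view(-1), atol=1e-4))   # expect True (up to float error)
```

Running the same check on a network with biases fails, and the residual f(y) − A_y y recovers the net bias b_y for that input.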
We analyze this local representation to reveal and visualize the noise-removal mechanisms implemented by BF-CNNs. We illustrate our analysis using a BF-CNN based on the architecture of the Denoising CNN (DnCNN,), although our observations also hold for other architectures (7; 11; 15). The linear representation of the denoising map given by equation 2 implies that the ith pixel of the output image is computed as an inner product between the ith row of A y and the input image. The rows of A y can be interpreted as adaptive filters that produce an estimate of the denoised pixel via a weighted average of noisy pixels. Examination of these filters reveals their diversity, and their relationship to the underlying image content: they are adapted to the local features of the noisy image, averaging over homogeneous regions of the image without blurring across edges (Figure 2; its caption: the three rightmost images show the weighting functions used to compute each of the indicated pixels (red squares); all weighting functions sum to one, and thus compute a local average (note that some weights are negative, indicated in red); their shapes vary substantially, and are adapted to the underlying image content). We observe that the equivalent filters of all architectures adapt to image structure. The local linear structure of a BF-CNN allows analysis of its functional capabilities via the singular value decomposition (SVD). For a given input y, we compute the SVD of the Jacobian matrix: A y = U SV T. The output is a linear combination of the left singular vectors, each weighted by the projection of the input onto the corresponding right singular vector, and scaled by the corresponding singular value. Analyzing the SVD of a BF-CNN on natural images reveals that most singular values are close to zero (Figure 1a). The network is thus discarding all but a very low-dimensional portion of the input image. We can measure an "effective dimensionality", d, of this preserved subspace by computing the total noise variance remaining in the denoised image, f BF (y), which corresponds to the sum of the squares of the singular values. We also observe that the left and right singular vectors corresponding to the singular values with non-negligible amplitudes are approximately the same (Figure 1c). This means that the Jacobian is (approximately) symmetric, and we can interpret the action of the network as projecting the noisy signal onto a low-dimensional subspace, as is done in wavelet thresholding schemes. For inputs of the form y:= x + n, the subspace spanned by the singular vectors corresponding to the non-negligible singular values contains x almost entirely, in the sense that projecting x onto the subspace preserves most of its norm. The low-dimensional subspace encoded by the Jacobian is therefore tailored to the input image. This is confirmed by visualizing the singular vectors as images. The singular vectors corresponding to non-negligible singular values capture features of the input image; the ones corresponding to near-zero singular values are unstructured (Figure 3). BF-CNN therefore implements an approximate projection onto an adaptive signal subspace that preserves image structure, while suppressing much of the noise. The signal subspace depends on the noise level. We find that for a given clean image corrupted by noise, the effective dimensionality of the signal subspace decreases as the noise level increases (Figure 1b).
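Before continuing, here is a sketch of the SVD analysis just described, run on a small untrained bias-free network purely to illustrate the computation (a trained denoiser is needed to actually observe the low-dimensional, approximately symmetric structure); all names and sizes are our own choices.

```python
# Hypothetical sketch of the SVD analysis of the Jacobian of a bias-free CNN.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1, bias=False),
).eval()

y = torch.rand(1, 1, 16, 16)                               # noisy input, N = 256 pixels
f = lambda v: net(v.view(1, 1, 16, 16)).view(-1)
A_y = torch.autograd.functional.jacobian(f, y.view(-1))    # N x N Jacobian

U, S, Vh = torch.linalg.svd(A_y)

# Noise variance surviving the map is sigma^2 * sum_i s_i^2, so the sum of
# squared singular values serves as the "effective dimensionality" d.
d_eff = (S ** 2).sum()
print("effective dimensionality:", float(d_eff), "out of", A_y.shape[0])

# Rows of A_y are the adaptive averaging filters; a trained BF-CNN's Jacobian
# is approximately symmetric (left/right singular vectors nearly aligned).
sym_err = (A_y - A_y.T).norm() / A_y.norm()
print("relative asymmetry:", float(sym_err))
```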
At lower noise levels the network detects a richer set of image features that lie in a larger signal subspace. In addition, these signal subspaces are nested: subspaces corresponding to lower noise levels contain at least 95% of the subspaces corresponding to higher noise levels. The empirical result that the dimensionality is equal to α/σ, combined with the observation that the signal subspace contains the clean image, explains the observed denoising performance across different noise levels (Figure 4). Specifically, if we assume A y x ≈ x, the mean squared error is proportional to σ. The scaling of MSE with the square root of the noise variance implies that the PSNR of the denoised image should be a linear function of the input PSNR, with a slope of 1/2. This provides an empirical target for generalization beyond the training range. We investigate generalization across noise levels, comparing networks with and without net bias. We implement BF-CNNs based on several Denoising CNNs (14; 7; 11; 15). These architectures include popular features of existing neural-network techniques in image processing: recurrence, multiscale filters, and skip connections. To construct BF-CNNs, we remove all sources of additive bias, including the mean parameter of the batch-normalization in every layer (note however that the rescaling parameters are preserved). We train the networks, following the training scheme described in, using images corrupted by i.i.d. Gaussian noise with a range of standard deviations. This range is the training range of the network. We then evaluate the networks for noise levels that are both within and beyond the training range. Figure 4 compares DnCNN from and its equivalent BF-CNN for different noise levels, inside and outside of the training range. In all cases, DnCNN generalizes very poorly to noise levels outside the training range. In contrast, BF-CNN generalizes robustly, as predicted with a slope of 1/2, even when trained only on modest levels of noise (σ =). Figure 5 shows an example that demonstrates visually the striking difference in generalization performance. We found that the same result holds for the other architectures. The CNN performs poorly at high noise levels (σ = 90, far beyond the training range), whereas BF-CNN performs at state-of-the-art levels. The CNN used for this example is DnCNN; using alternative architectures yields similar results.
We show that removing constant terms from CNN architectures provides interpretability of the denoising method via linear-algebra techniques and also boosts generalization performance across noise levels.
729
scitldr
The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance. Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes. In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth. Our demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry. Through their myriad successful applications across a wide range of disciplines, it is now well established that deep neural networks possess an unprecedented ability to model complex real-world datasets, and in many cases they can do so with minimal overfitting. Indeed, the list of practical achievements of deep learning has grown at an astonishing rate, and includes models capable of human-level performance in tasks such as image recognition , speech recognition, and machine translation . Yet to each of these deep learning triumphs corresponds a large engineering effort to produce such a high-performing model. Part of the practical difficulty in designing good models stems from a proliferation of hyperparameters and a poor understanding of the general guidelines for their selection. Given a candidate network architecture, some of the most impactful hyperparameters are those governing the choice of the model's initial weights. Although considerable study has been devoted to the selection of initial weights, relatively little has been proved about how these choices affect important quantities such as rate of convergence of gradient descent. In this work, we examine the effect of initialization on the rate of convergence of gradient descent in deep linear networks. We provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. In particular, we show that for deep networks, the width needed for efficient convergence for orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence of Gaussian networks scales linearly in the depth. Orthogonal weight initializations have been the subject of a significant amount of prior theoretical and empirical investigation. For example, in a line of work focusing on dynamical isometry, it was found that orthogonal weights can speed up convergence for deep linear networks and for deep non-linear networks;;;;; ) when they operate in the linear regime. In the context of recurrent neural networks, orthogonality can help improve the system's stability. A main limitation of prior work is that it has focused almost exclusively on model's properties at initialization. 
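As a concrete illustration of why initialization matters here, the following small numpy sketch (ours, not from any of the cited papers) compares the singular value spectrum of a product of L iid Gaussian matrices with that of a product of L orthogonal matrices: the orthogonal product remains a perfect isometry at any depth, while the Gaussian product's extreme singular values drift apart exponentially with depth.

```python
# Illustrative sketch: singular spectra of deep matrix products under
# Gaussian versus orthogonal sampling of the factors.
import numpy as np

rng = np.random.default_rng(0)
m, L = 64, 32

def gaussian_layer():
    # variance 1/m keeps the product's overall scale roughly constant
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, m))

def orthogonal_layer():
    q, _ = np.linalg.qr(rng.normal(size=(m, m)))
    return q

for name, layer in [("gaussian", gaussian_layer), ("orthogonal", orthogonal_layer)]:
    prod = np.eye(m)
    for _ in range(L):
        prod = layer() @ prod
    s = np.linalg.svd(prod, compute_uv=False)
    print(f"{name:10s} depth={L}  sigma_max={s.max():.3e}  sigma_min={s.min():.3e}")
```

The orthogonal product prints singular values identically equal to one, while the Gaussian product's condition number blows up, which is the spectrum-concentration picture behind dynamical isometry.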
In contrast, our analysis focuses on the benefit of orthogonal initialization on the entire training process, thereby establishing a provable benefit for optimization. The paper is organized as follows. After reviewing related work in Section 2 and establishing some preliminaries in Section 3, we present our main positive on efficient convergence from orthogonal initialization in Section 4. In Section 5, we show that Gaussian initialization leads to exponentially long convergence time if the width is too small compared with the depth. In Section 6, we perform experiments to support our theoretical . Deep linear networks. Despite the simplicity of their input-output maps, deep linear networks define high-dimensional non-convex optimization landscapes whose properties closely reflect those of their non-linear counterparts. For this reason, deep linear networks have been the subject of extensive theoretical analysis. A line of work (; ; ; ; ; Laurent & von) studied the landscape properties of deep linear networks. Although it was established that all local minima are global under certain assumptions, these properties alone are still not sufficient to guarantee global convergence or to provide a concrete rate of convergence for gradient-based optimization algorithms. Another line of work directly analyzed the trajectory taken by gradient descent and established conditions that guarantee convergence to global minimum (; ;). Most relevant to our work is the of , which shows that if the width of hidden layers is larger than the depth, gradient descent with Gaussian initialization can efficiently converge to a global minimum. Our establishes that for Gaussian initialization, this linear dependence between width and depth is necessary, while for orthogonal initialization, the width can be independent of depth. Our negative for Gaussian initialization also significantly generalizes the of , who proved a similar negative for 1-dimensional linear networks. Orthogonal weight initializations. Orthogonal weight initializations have also found significant success in non-linear networks. In the context of feedforward models, the spectral properties of a network's input-output Jacobian have been empirically linked to convergence speed (; ; 2018;). It was found that when this spectrum concentrates around 1 at initialization, a property dubbed dynamical isometry, convergence times improved by orders of magnitude. The conditions for attaining dynamical isometry in the infinitewidth limit were established by Pennington et al. (2017; 2018) and basically require that input-output map to be approximately linear and for the weight matrices to be orthogonal. Therefore the training time benefits of dynamical isometry are likely rooted in the benefits of orthogonality for deep linear networks, which we establish in this work. Orthogonal matrices are also frequently used in the context of recurrent neural networks, for which the stability of the state-to-state transition operator is determined by the spectrum of its Jacobian (; Laurent & von). Orthogonal matrices can improve the conditioning, leading to an ability to learn over long time horizons (; ; ;). While the benefits of orthogonality can be quite large at initialization, little is known about whether or in what contexts these benefits persist during training, a scenario that has lead to the development of efficient methods of constraining the optimization to the orthogonal group (; ;). 
Although we do not study the recurrent setting in this work, an extension of our analysis might help determine when orthogonality is beneficial in that setting. Denote by · the 2 norm of a vector or the spectral norm of a matrix. Denote by · F the Frobenius norm of a matrix. For a symmetric matrix A, let λ max (A) and λ min (A) be its maximum and minimum eigenvalues, and let λ i (A) be its i-th largest eigenvalue. For a matrix B ∈ R m×n, let σ i (B) be its i-th largest singular value (i = 1, 2, . . ., min{m, n}), and let σ max (B) = σ 1 (B), σ min (B) = σ min{m,n} (B). Denote by vec (A) be the vectorization of a matrix A in column-first order. The Kronecker product between two matrices A ∈ R m1×n1 and B ∈ R m2×n2 is defined as where a i,j is the element in the (i, j)-th entry of A. We use the standard O(·), Ω(·) and Θ(·) notation to hide universal constant factors. We also use C to represent a sufficiently large universal constant whose specific value can differ from line to line. Suppose that there are n training examples dx×n the input data matrix and by Y = (y 1, . . ., y n) ∈ R dy×n the target matrix. Consider an L-layer linear neural network with weight matrices W 1,..., W L, which given an input x ∈ R dx computes where and α is a normalization constant which will be specified later according to the initialization scheme. We study the problem of training the deep linear network by minimizing the 2 loss over training data: The algorithm we consider to minimize the objective is gradient descent with random initialization, which first randomly samples the initial weight matrices from a certain distribution, and then updates the weights using gradient descent: for time t = 0, 1, 2,..., where η > 0 is the learning rate. For convenience, we denote The time index t is used on any variable that depends on W 1,..., W L to represent its value at time t, e.g., W j: In this section we present our main positive for orthogonal initialization. We show that orthogonal initialization enables efficient convergence of gradient descent to a global minimum provided that the hidden width is not too small. In order to properly define orthogonal weights, we let the widths of all hidden layers be equal:.., W L−1 are m × m square matrices, and We sample each initial weight matrix W i independently from a uniform distribution over scaled orthogonal matrices satisfying The same scaling factor was adopted in , which preserves the expectation of the squared 2 norm of any input. F. Then * is the minimum value for the objective. Denote r = rank(X), κ = λmax(X X) λr(X X) 2 Our main theorem in this section is the following: for some δ ∈ and a sufficiently large universal constant C > 0. Set the learning rate η ≤ dy 2L X 2. Then with probability at least 1 − δ over the random initialization, we have where (t) is the objective value at iteration t. Notably, in Theorem 4.1, the width m need not depend on the depth L. This is in sharp contrast with the of for Gaussian initialization, which requires m ≥Ω(Lrκ 3 d y). It turns out that a near-linear dependence between m and L is necessary for Gaussian initialization to have efficient convergence, as we will show in Section 5. Therefore the requirement in is nearly tight in terms of the dependence on L. These together rigorously establish the benefit of orthogonal initialization in optimizing very deep linear networks. 
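The following is a minimal, illustrative training sketch of the setting above (full-batch gradient descent on a deep linear network with either orthogonal or iid Gaussian initialization). The data sizes, scalings, learning rate, and step count are our own assumptions and are not the exact constants of Theorem 4.1; with a narrow, very deep network the orthogonal run typically drives the loss down much faster than the Gaussian one.

```python
# Hypothetical sketch: gradient descent on a deep linear network under two
# initialization schemes. Hyperparameters are illustrative, not the paper's.
import torch

torch.manual_seed(0)
d_x, d_y, m, L, n = 16, 4, 16, 48, 64
X = torch.randn(d_x, n) / d_x ** 0.5
Y = torch.randn(d_y, d_x) @ X                      # realizable targets

def init_weights(kind):
    dims = [d_x] + [m] * (L - 1) + [d_y]
    Ws = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        W = torch.empty(d_out, d_in)
        if kind == "orthogonal":
            torch.nn.init.orthogonal_(W)
        else:
            W.normal_(0.0, 1.0 / m ** 0.5)          # iid Gaussian entries
        Ws.append(W.requires_grad_(True))
    return Ws

def loss(Ws):
    out = X
    for W in Ws:
        out = W @ out
    return 0.5 * (out - Y).pow(2).sum()

for kind in ["orthogonal", "gaussian"]:
    Ws, lr = init_weights(kind), 1e-3
    start = loss(Ws).item()
    for step in range(2000):
        grads = torch.autograd.grad(loss(Ws), Ws)
        with torch.no_grad():
            for W, g in zip(Ws, grads):
                W -= lr * g
    print(f"{kind:10s} loss: {start:.2e} -> {loss(Ws).item():.2e}")
```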
If we set the learning rate optimally according to Theorem 4.1 to η = Θ(dy L X 2), we obtain that (t) − * decreases by a ratio of 1 − Θ(κ −1) after every iteration. This matches the convergence rate of gradient descent on the (1-layer) linear regression problem min The proof uses the high-level framework from , which tracks the evolution of the network's output during optimization. This evolution is closely related to a time-varying positive semidefinite (PSD) matrix (defined in), and the proof relies on carefully upper and lower bounding the eigenvalues of this matrix throughout training, which in turn implies the desired convergence . First, we can make the following simplifying assumption without loss of generality. See Appendix B in for justification. Assumption 4.1. (Without loss of generality) X ∈ R dx×r, rank(X) = r, Y = W * X, and * = 0.'s framework. The key idea is to look at the network's output, defined as We also write U (t) = αW L:1 (t)X as the output at time t. Note that F. According to the gradient descent update rule, we write. 2r is known as the stable rank of X, which is always no more than the rank. where E(t) contains all the high-order terms (i.e., those with η 2 or higher). With this definition, the evolution of U (t) can be written as the following equation: where Notice that P (t) is always PSD since it is the sum of L PSD matrices. Therefore, in order to establish convergence, we only need to (i) show that the higher-order term E(t) is small and (ii) prove upper and lower bounds on P (t)'s eigenvalues. For the second task, it suffices to control the singular values of 3 Under orthogonal initialization, these matrices are perfectly isometric at initialization, and we will show that they stay close to isometry during training, thus enabling efficient convergence. The following lemma summarizes some properties at initialization. Lemma 4.2. At initialization, we have Furthermore, with probability at least 1 − δ, the loss at initialization satisfies Proof sketch. The spectral property follows directly from. To prove, we essentially need to upper bound the magnitude of the network's initial output. This turns out to be equivalent to studying the magnitude of the projection of a vector onto a random lowdimensional subspace, which we can bound using standard concentration inequalities. The details are given in Appendix A.1. Now we proceed to prove Theorem 4.1. We define F which is the upper bound on from. Conditioned on being satisfied, we will use induction on t to prove the following three properties A(t), B(t) and C(t) for all t = 0, 1,...: • B(t): • C(t): A and B are true according to Lemma 4.2, and C is trivially true. In order to prove A(t), B(t) and C(t) for all t, we will prove the following claims for all t ≥ 0: Claim 4.4. C(t) =⇒ B(t). The proofs of these claims are given in Appendix A. Notice that we finish the proof of Theorem 4.1 once we prove A(t) for all t ≥ 0. In this section, we show that gradient descent with Gaussian random initialization necessarily suffers from a running time that scales exponentially with the depth of the network, unless the width becomes nearly linear in the depth. Since we mostly focus on the dependence of width and running time on depth, we will assume the depth L to be very large. Recall that we want to minimize the objective F by gradient descent. We assume Y = W * X for some W * ∈ R dy×dx, so that the optimal objective value is 0. For convenience, we assume X F = Θ and Y F = Θ., and all weights in the network are independent. 
We set the scaling factor α such that the initial output of the network does not blow up exponentially (in expectation): Note that E f (x; We also assume that the magnitude of initialization at each layer cannot vanish with depth: Note that the assumptions and are just sanity checks to rule out the obvious pathological cases -they are easily satisfied by all the commonly used initialization schemes in practice. Now we formally state our main theorem in this section. Theorem 5.1. for some universal constant 0 < γ ≤ 1. Then there exists a universal constant c > 0 such that, if gradient descent is run with learning rate η ≤ e cL γ, then with probability at least 0.9 over the random initialization, for the first e Theorem 5.1 establishes that efficient convergence from Gaussian initialization is impossible for large depth unless the width becomes nearly linear in depth. This nearly linear dependence is the best we can hope for, since proved a positive when the width is larger than linear in depth. Therefore, a phase transition from untrainable to trainable happens at the point when the width and depth has a nearly linear relation. Furthermore, Theorem 5.1 generalizes the of , which only treats the special case of For convenience, we define a scaled version of We first give a simple upper bound on A j: Lemma 5.2. With probability at least 1 − δ, we have A j: The proof of Lemma 5.2 is given in Appendix B.1. It simply uses Markov inequality and union bound. Furthermore, a key property at initialization is that if j − i is large enough, A j:i will become exponentially small. Lemma 5.3. With probability at least Proof. We first consider a fixed pair (i, j) such that j − i ≥ L 10. In order to bound A j:i, we first take an arbitrary unit vector v ∈ R di−1 and bound A j:iv. We can write A j:.., j). Recall the expression for the moments of chi-squared random variables: (∀λ > 0). Taking λ = 1 2 and using the bound Choose a sufficiently small constant c > 0. By Markov inequality we have. Therefore we have shown that for any fixed unit vector v ∈ R di−1, with probability at least 1 − e −Ω(L γ) we have Next, we use this to bound A j:i via an -net argument. We partition the index set Now, for any u ∈ R di−1, we write it as u = q l=1 a l u l where a l is a scalar and u l is a unit vector supported on S l. By the definition of The above inequality is valid for any u ∈ R di−1. Thus we can take the unit vector u that maximizes A j:iu. This gives us A j: Finally, we take a union bound over all possible (i, j). The failure probaility is at most The following lemma shows that the properties in Lemmas 5.2 and 5.3 are still to some extent preserved after applying small perturbations on all the weight matrices. Lemma 5.4. Suppose that the initial weights satisfy A j:, where c 1 > 0 is a universal constant. Then for another set of matrices, we must have Proof. It suffices to show that the difference Expanding this product, except for the one term corresponding to A j:i, every other term has the form A j: By assumption, each ∆ k has spectral norm e −0.6c1L γ, and each A j:i has spectral norm O(L 3), so we have Therefore we have The proof of the second part of the lemma is postponed to Appendix B.2. As a consequence of Lemma 5.4, we can control the objective value and the gradient at any point sufficiently close to the random initialization. Lemma 5.5. For a set of weight matrices, the objective and the gradient satisfy The proof of Lemma 5.5 is given in Appendix B.3. 
Finally, we can finish the proof of Theorem 5.1 using the above lemmas. Proof of Theorem 5.1. From Lemmas 5.2 and 5.3, we know that with probability at least 0.9, we have (i) A j: Here c 1 > 0 is a universal constant. From now on we are conditioned on these properties being satisfied. We suppose that the learning rate η is at most e 0.2c1L We say that a set of weight matrices W 1,..., W L are in the "initial neighborhood" if. From Lemmas 5.4 and 5.5 we know that in the "initial neighborhood" the objective value is always between 0.4 Y 2 F and 0.6 Y 2 F. Therefore we have to escape the "initial neighborhood" in order to get the objective value out of this interval. Now we calculate how many iterations are necessary to escape the "initial neighborhood." According to Lemma 5.5, inside the "initial neighborhood" each W i can move at most η(in one iteration by definition of the gradient descent algorithm. In order to leave the "initial neighborhood," some W i must satisfy In order to move this amount, the number of iterations has to be at least This finishes the proof. In this section, we provide empirical evidence to support the in Sections 4 and 5. To study how depth and width affect convergence speed of gradient descent under orthogonal and Gaussian initialization schemes, we train a family of linear networks with their widths ranging from 10 to at t = 1258 and t = 10000, for different depth-width configurations and different initialization schemes. Darker color means smaller loss. 1000 and depths from 1 to 700, on a fixed synthetic dataset (X, Y). 4 Each network is trained using gradient descent staring from both Gaussian and orthogonal initializations. In Figure 1, We lay out the logarithm of the relative training loss (t), using heap-maps, at steps t = 1258 and t = 10000. In each heat-map, each point represents the relative training loss of one experiment; the darker the color, the smaller the loss. Figure 1 clearly demonstrates a sharp transition from untrainable to trainable (i.e., from red to black) when we increase the width of the network: • for Gaussian initialization, this transition occurs across a contour characterized by a linear relation between width and depth; • for orthogonal initialization, the transition occurs at a width that is approximately independent of the depth. These observations excellently verify our theory developed in Sections 4 and 5. To have a closer look into the training dynamics, we also plot "relative loss v.s. training time" for a variety of depth-width configurations. See Figure 2. There again we can clearly see that orthogonal initialization enables fast training at small width (independent of depth), and that the required width for Gaussian initialization depends on depth. In this work, we studied the effect of the initialization parameter values of deep linear neural networks on the convergence time of gradient descent. We found that when the initial weights are iid Gaussian, the convergence time grows exponentially in the depth unless the width is at least as large 4 We choose X ∈ R 1024×16 and W * ∈ R 10×1024, and set Y = W * X. Entries in X and W * are drawn i.i.d. from N. as the depth. In contrast, when the initial weight matrices are drawn from the orthogonal group, the width needed to guarantee efficient convergence is in fact independent of the depth. These establish for the first time a concrete proof that orthogonal initialization is superior to Gaussian initialization in terms of convergence time. A.1 PROOF OF LEMMA 4.2 Proof of Lemma 4.2. 
We only need to prove. We first upper bound the magnitude of the network's initial output on any given input / z 2 has the same distribution as. Note that m > C · log(r/δ). We know that with probability at least 1 − δ r we have which implies Finally, taking a union bound, we know that with probability at least 1 − δ, the inequality holds for every x ∈ {x 1, . . ., x r}, which implies A.2 PROOF OF CLAIM 4.3 Proof of Claim 4.3.. Thus we can bound the gradient norm as follows for all 0 ≤ s ≤ t and all i ∈ [L]: where we have used B(s). Then for all i ∈ [L] we have:. A.3 PROOF OF CLAIM 4.4 Proof of Claim 4.4. Expanding this product, each term except W j:i has the form: where i ≤ k 1 < · · · < k s ≤ j are locations where terms like ∆ k l are taken out. Note that every factor in of the form according to. Thus, we can bound the sum of all terms of the form as Here the last step uses m > C(LR) 2 which is implied by. Combined with, this proves B(t). Proof of Claim 4.5. Recall that we have the dynamics for U (t). In order to establish convergence from we need to prove upper and lower bounds on the eigenvalues of P (t), as well as show that the high-order term E(t) is small. We will prove these using B(t). Using the definition and property B(t), we have In the lower bound above, we make use of the following relation on dimensions: m ≥ d x ≥ r, which enables the inequality λ min (W i−1:1 (t)X) (Next, we will prove the following bound on the high-order term E(t): Recall that E(t) is the sum of all high-order terms in the product Same as, we have It suffices to show that the above bound is at most 1 6 ηλ min (P t) U (t) − Y F = 1 6 ηλ min (P t) 2 (t). Since λ min (P t) ≥ Using, and noting that either L − i − 1 or i − 1 is greater than L 4, we have
We provide for the first time a rigorous proof that orthogonal initialization speeds up convergence relative to Gaussian initialization, for deep linear networks.
730
scitldr
Survival function estimation is used in many disciplines, but it is most common in medical analytics in the form of the Kaplan-Meier estimator. Sensitive data (patient records) is used in the estimation without any explicit control on the information leakage, which is a significant privacy concern. We propose a first differentially private estimator of the survival function and show that it can be easily extended to provide differentially private confidence intervals and test statistics without spending any extra privacy budget. We further provide extensions for differentially private estimation of the competing risk cumulative incidence function. Using nine real-life clinical datasets, we provide empirical evidence that our proposed method provides good utility while simultaneously providing strong privacy guarantees. A patient progresses from HIV infection to AIDS after 4.5 years. A study using the patient's data publishes the survival function estimates (a standard practice in clinical research). An adversary, with only access to the published estimates (even in the form of survival function plots), can reconstruct user-level data . Effectively leading to the disclosure of sensitive information. This is just one scenario. The survival function is used for modeling any time to an event, taking into account that some subjects will not experience the event at the time of data collection. The survival function is used in many domains, some examples are the duration of unemployment (in economics); time until the failure of a machine part (in engineering); time to disease recurrence, time to infection, time to death (in healthcare); etc. Our personal healthcare information is the most sensitive private attribute, protected by law, violations of which carry severe penalties. And as the initial example suggests, of all application areas, information leakage in the healthcare domain is the most serious issue and is our focus in this study. For estimation of the survival function, we focus on the Kaplan-Meier's (KM) non-parametric method. KM's method is ubiquitous in clinical research. A quick search of the term on PubMed 1 yields 109,421 . It is not an overstatement to say that almost every clinical study uses KM's method to report summary statistics on their cohort's survival. Statistical agencies around the world use this method to report on the survival of the general population or specific disease-related survival estimates. To best of our knowledge, there does not exist any model that can provide formal privacy guarantees for estimation of survival function using the KM method. The only related work is by Nguyên & , which uses the output and objective perturbation for regression modeling of discrete time to event data. The approach is limited to "multivariate" regression models and cannot be directly used to estimate survival function in a differentially private fashion. One can argue that generative models such as the differentially private generative adversarial networks (; ; ; ;) can be trained to generate differentially private synthetic data. Which can then be used to estimate the survival function. But, GANs do not generalize well to the datasets typically encountered for our use-case (very small sample size (can be less than a hundred), highly constrained dimensionality (d ∈ ), a mixture of categorical and continuous variables, no data pre-processing allowed, etc.). We propose the first differentially private method for estimating the survival function based on the KM method. 
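For readers unfamiliar with the estimator, here is a minimal sketch (ours, not the paper's released R package) of the standard Kaplan-Meier computation that the following sections make differentially private: from (time, event) pairs we form the distinct failure times with their at-risk and event counts and take the product-limit estimate.

```python
# Minimal Kaplan-Meier estimator on toy data (illustrative only).
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    t_j = np.unique(times[events == 1])              # distinct failure times
    out, S = [], 1.0
    for t in t_j:
        r = np.sum(times >= t)                       # number at risk just before t
        d = np.sum((times == t) & (events == 1))     # failures at t
        S *= 1.0 - d / r
        out.append((t, r, d, S))
    return out

# toy data: 10 subjects, some censored
times  = [2, 3, 3, 5, 7, 8, 8, 10, 12, 15]
events = [1, 1, 0, 1, 0, 1, 1,  0,  1,  0]
for t, r, d, S in kaplan_meier(times, events):
    print(f"t={t:>4}  at risk={r:>2}  events={d}  S(t)={S:.3f}")
```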
Grounded by the core principles of differential privacy, our method guarantees the differentially private estimation of the survival function. Also, we show that our method easily extends to provide differentially private confidence intervals and differentially private test statistics (for comparison of the survival function between multiple groups) without any extra privacy cost. We further extend our method for differentially private estimation of the competing risk cumulative incidence function (another popular estimate in clinical research). Using nine real-life clinical datasets, we provide empirical evidence that our proposed method provides good utility while simultaneously providing strong privacy guarantees. Lastly, we release our method as an R package for rapid accessibility and adoption. We use this section to introduce the concepts central to the understanding of our method. The survival function is used to model time to event data, where the event may not have yet occurred (but the probability of occurrence is non-zero). For example, for HIV infection to AIDS timeline data, at the end of the follow-up period, some patients would have progressed (our event of interest), while others would not have yet progressed (censored observations). Accounting for censored observations (patients that never experience the event during our follow-up) is the central component in the estimation of the survival function. Formally, this gives the probability of not having an event just before time t, or more generally, the probability that the event of interest has not occurred by time t. In practice, the survival function can be estimated using more than one approach. Several parametric methods (that make assumptions on the distribution of survival times) such as the ones based on the exponential, Weibull, Gompertz, and log-normal distributions are available. Or one can opt for the most famous and most often used non-parametric method (Kaplan-Meier's method ), which does not assume how the probability of an event changes over time. Our focus in this paper is the latter, which has become synonymous with survival models in clinical literature. The KM estimator of the survival function is defined asŜ(t) = ∏ tj ≤t (1 − d j /r j), where t j, (j ∈ 1, · · ·, k) is the set of k distinct failure times (not censored), d j is the number of failures at t j, and r j are the number of individuals "at risk" before the j-th failure time. We can see that the functionŜ(t) only changes at each failure time, not for censored observations, resulting in a "step" function (the characteristic feature of the KM estimate). Differential privacy provides a provable privacy notion, with the intuition that a randomized algorithm behaves similarly on similar input datasets. Formally, Definition 1. (Differential privacy ) A randomized algorithm M with domain N |X | preserves (ε, δ)-differential privacy if for all S ⊆ Range(M) and for all x, y ∈ N |X | such that ||x − y|| 1 ≤ 1, we have Pr[M(x) ∈ S] ≤ exp(ε) Pr[M(y) ∈ S] + δ, where the two datasets (x, y) only differ in any one row (neighboring datasets) and the probability space is over the coin flips of the mechanism M. If δ = 0, we have "pure ε-differential privacy". Smaller (ε, δ) provide stronger privacy guarantees. Differential privacy has some interesting properties. We briefly introduce the main property that is crucial to our proposal of differentially private survival function estimation, that is, post-processing; formally, Theorem 1. (Post processing ) Let M: N |X | → R be a randomized algorithm that is (ε, δ)-differentially private.
Let f: R → R be an arbitrary randomized mapping. Then f ◦ M is (ε, δ)-differentially private. Theorem 1 states that differential privacy is immune to post-processing. That is, an adversary acting only on the output of M cannot increase the privacy loss. This notion is central to our approach and we will revisit it in the following sections. Now we introduce our method for differentially private estimation of the survival function using the Kaplan-Meier method. We follow the basic principles of differential privacy to ensure that our estimate of the survival function is differentially private. We subsequently show that following our simple approach, it is possible to estimate a wide variety of accompanying statistics (such as the confidence intervals, comparison test statistics, etc.) in a differentially private way without spending any extra privacy budget. Before we begin, we recap some of the notations introduced in Section 2.1. We have a vector of time points (t j, j ∈ {1, · · ·, k}), and for each time point, we have a corresponding number of subjects at risk r j (number of subjects not experiencing a progression up to that time point), and we have the number of subjects experiencing the event at that time point (number of progressions), which we denote as d j. We first create a dataset (a matrix) where each row has the data on the number of events (d j) and the number at risk (r j) for each unique time point (t j). Let's denote this matrix by M. Then using the L 1 sensitivity (S) of M, we draw a noise matrix Z from the Laplace distribution (Lap(S/ε)), where ε is the privacy parameter and Z is of the same size as M. We then create a differentially private version of M by adding Z, that is, M ′ = M + Z. All subsequent calculations use M ′. We succinctly present our method as Algorithm 1. 1: procedure DP(Ŝ(t)) 2: Create a matrix M; [r j, d j] ∈ M; for every t j 3: Draw a noise matrix Z ∼ Lap(2/ε) of the same size as M 4: M ′ ← M + Z 5: returnŜ (t) computed from M ′ 6: end procedure We use this paragraph to briefly discuss Algorithm 1. We begin with the noticeable simplicity of the procedure, that is, the minimal changes required to the original estimation procedure to make it differentially private. This further boosts the accessibility of our differentially private version (it can be implemented using any readily available software package). Also, the required changes for differential privacy come with no computational overhead compared to the original estimation (our method is computationally cheap). Below we provide the formal privacy guarantees and further details on how this method can be easily extended for differentially private estimation of "other" associated statistics. Now we are ready to formally state the differential privacy guarantees of our proposed method. Before we state our main theorem, we start with a supporting Lemma for establishing the global L 1 sensitivity (S) of our method. Lemma 1. The L 1 sensitivity (S) of M is two. Proof. As M only contains count variables for the number of events and the number at risk for each unique time point, adding or removing any single individual can change the counts by at most two (that is, being in the at-risk group and having an event). Theorem 2. Algorithm 1 is ε-differentially private. Proof. Sketch: We have established the L 1 sensitivity of M. Using it to add Laplace noise (M ′ = M + Lap(2/ε)) makes sure M ′ is differentially private and so are its individual components (that is, the noisy r j, d j). Using these noisy (r j, d j) to calculate the survival function (Eqn. 2) ensures that the estimated function is differentially private by the post-processing theorem .
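A small sketch of Algorithm 1 as we read it (not the authors' released implementation): Laplace noise with scale 2/ε is added to the per-time-point counts (r_j, d_j), and the survival function is recomputed from the noisy counts; clamping the noisy counts to valid ranges is our own assumption, added only to keep the product well defined.

```python
# Hypothetical sketch of Algorithm 1: differentially private Kaplan-Meier.
import numpy as np

def dp_kaplan_meier(times, events, eps, rng=np.random.default_rng(0)):
    times, events = np.asarray(times, float), np.asarray(events, int)
    t_j = np.unique(times[events == 1])
    r = np.array([np.sum(times >= t) for t in t_j], float)
    d = np.array([np.sum((times == t) & (events == 1)) for t in t_j], float)

    M = np.column_stack([r, d])                      # the matrix M of Algorithm 1
    Z = rng.laplace(scale=2.0 / eps, size=M.shape)   # L1 sensitivity S = 2
    r_dp, d_dp = (M + Z).T                           # noisy counts

    # keep the noisy counts usable: at least one at risk, 0 <= d <= r (our assumption)
    r_dp = np.maximum(r_dp, 1.0)
    d_dp = np.clip(d_dp, 0.0, r_dp)

    S = np.cumprod(1.0 - d_dp / r_dp)                # post-processing: DP by Theorem 1
    return list(zip(t_j, S))

times  = [2, 3, 3, 5, 7, 8, 8, 10, 12, 15]
events = [1, 1, 0, 1, 0, 1, 1,  0,  1,  0]
for t, s in dp_kaplan_meier(times, events, eps=2.0):
    print(f"t={t:>4}  S_dp(t)={s:.3f}")
```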
Complete formal proof is provided in the Appendix. As mentioned in the introduction, one of the advantages of our approach is its easy extension to other essential statistics often required and reported along with the estimates of the survival function, such as the confidence intervals, test statistics for comparing the survival function distributions, etc. Here we formally define the extensions with their privacy guarantees. When reporting survival function estimates, it is often required to include the related confidence intervals, reported to reflect the uncertainty of the estimate. And for group comparison, such as comparing the infection rates between two treatment arms of a clinical trial, hypothesis testing is used with the help of a test statistic. So, it is of paramount interest to provide the differentially private counterparts of both (confidence intervals and test statistics). We start with the confidence intervals. Confidence intervals for survival function estimates are calculated in a "point-wise" fashion, that is, they are calculated at discrete time-points whenever an event is observed (for the same time points at which the survival function changes its value). We start with proving that the calculations required for obtaining confidence intervals are differentially private following the changes made to the data in Algorithm 1. Theorem 3. Confidence intervals forŜ (t) are ε-differentially private. Proof. There is more than one type of confidence interval available for the survival function. Here we focus on the most often used Greenwood's linear point-wise confidence intervals. Greenwood's formula for the confidence intervals is given asŜ(t) ± z α/2 √V [Ŝ(t)], where V [Ŝ(t)] =Ŝ(t) 2 Σ tj ≤t d j /(r j (r j − d j)). Replacing r j and d j by their respective differentially private counterparts from Algorithm 1, the estimate for V [Ŝ(t)] is now differentially private, and using it in conjunction withŜ (t) makes the confidence intervals differentially private by the post-processing theorem . As we don't need any additional access to the sensitive data for calculating confidence intervals, calculating and providing differentially private confidence intervals with the differentially private survival function estimates does not incur any additional privacy cost. In other words, we get the differentially private confidence intervals for free. Hypothesis tests are often used to compare the distribution of survival function estimates between groups, for example, to compare infection rates between two treatment arms of a clinical trial. The most often used statistical test in such scenarios is the Logrank test . Below we show that using our method (Algorithm 1), the hypothesis testing using the Logrank test is differentially private. Theorem 4. The hypothesis test forŜ (t) is ε-differentially private. Proof. The Logrank test statistic (Z) is given as Z = Σ j (O 1j − E 1j) / √(Σ j V j), where O 1j are the observed number of failures (events) (d 1j) and E 1j are the expected number of failures at time j in group 1; we have E 1j = r 1j d j /r j, and V j is the variance, given as V j = d j (r 1j /r j)(1 − r 1j /r j)(r j − d j)/(r j − 1). Replacing the corresponding quantities by their differentially private counterparts using Algorithm 1, we get a differentially private V j, as no other sensitive information is required for its estimation. Using it in conjunction with O 1j and E 1j, which can be made differentially private following the same argument, makes the test statistic Z differentially private by the post-processing theorem . The calculation, again being a case of standard post-processing on differentially private data, does not add to our overall privacy budget.
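To illustrate the post-processing argument, here is a sketch of Greenwood's point-wise confidence intervals computed from per-time-point counts; feeding in the noisy counts produced by Algorithm 1 instead of the raw ones yields the differentially private intervals at no extra privacy cost. The toy counts and the 95% multiplier (1.96) are our own choices.

```python
# Sketch: Kaplan-Meier estimate with Greenwood confidence intervals from counts.
import numpy as np

def km_with_greenwood(r, d, z=1.96):
    r, d = np.asarray(r, float), np.asarray(d, float)
    S = np.cumprod(1.0 - d / r)
    # Greenwood: Var[S(t)] = S(t)^2 * sum_{t_j <= t} d_j / (r_j (r_j - d_j))
    var = S ** 2 * np.cumsum(d / (r * (r - d)))
    half = z * np.sqrt(var)
    return S, np.clip(S - half, 0, 1), np.clip(S + half, 0, 1)

r = [10, 8, 6, 4, 2]          # at risk at each distinct failure time
d = [1, 1, 1, 1, 1]           # failures at each time
S, lo, hi = km_with_greenwood(r, d)
for j, (s, l, h) in enumerate(zip(S, lo, hi)):
    print(f"t_{j+1}: S={s:.3f}  95% CI [{l:.3f}, {h:.3f}]")
```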
Hence, after using Algorithm 1, we can output the related confidence intervals and the test statistic without spending any additional privacy budget. In certain scenarios, we can have more than one type of event. Using our prior example of HIV infection, we might have a scenario where patients die before progression to AIDS, making the observation of progression impossible. Such events (death) that preclude any possibility of our event of interest (progression) are known as competing events. Competing events are a frequent occurrence in clinical data and require specialized estimates that take this phenomenon into account, without which our estimates will be biased. One such estimate is the competing risk cumulative incidence, which is also the most widely used and reported estimate in the literature, akin to the KM estimate, but for competing events. Here we show that using Algorithm 1, we can easily extend differential privacy to competing risk scenarios. Theorem 5. Competing risk cumulative incidence using our method is ε-differentially private. Proof. The cumulative incidence extends the Kaplan-Meier estimator and is given byÎ j (t) = Σ ti ≤t Ŝ(t i−1) d ij /r i, where d ij is the number of events of type j at time t (i) andŜ(t i) is the standard Kaplan-Meier estimator at time t (i). Replacing the associated quantities with their differentially private counterparts (using the same reasoning as Algorithm 1), it is not hard to see that the resultingÎ j (t) is differentially private by the post-processing theorem. Further statistics associated with the cumulative incidence, such as the confidence intervals and hypothesis tests, hazard function and hazard ratios, etc., that directly depend on the quantities made differentially private using Algorithm 1 can be similarly argued to be differentially private. Another popular extension that we easily get from our method is the differentially private version of the Nelson-Aalen estimate of the cumulative hazard (; 1969;), which is simplyĤ (t) = Σ tj ≤t d j /r j, or can be estimated directly from its relationship with the survival function (Ŝ (t) = exp(−Ĥ (t))). Here we present the empirical evaluation of our method on nine real-life clinical datasets of varying properties. We start with the dataset description. Nine real-life clinical datasets with time to event information are used to evaluate our proposed method. Dataset details are provided in Table 1. For space constraints, we provide further details (dataset properties, pre-processing, group comparison details for hypothesis tests, etc.) in the Appendix. As there is no current method for producing differentially private estimates of the survival function, we compare our approach to the original "non-private" version. This provides us with a comparison to the upper bound (we cannot get better than the non-noisy version). Good utility in comparison with the original non-perturbed version will add credibility to our claim of high utility and will encourage practitioners to adopt our method for practical use. Now we present the outcome of our evaluation on nine real-life datasets. We start with the estimation of the differentially private survival function and then move on to the evaluation of the extensions (confidence intervals, test statistic, etc.). For the differentially private estimation of the survival function (our primary goal), Figure 1 shows the results. We can see that our privacy-preserving estimation (green line) faithfully estimates the survival function (black line), with little utility loss.
As expected, estimation deteriorates as the privacy budget decreases (orange line). Follow-up time is on the X-axis and the probability of survival is on the Y-axis. The black line is the original function estimate, the green line is the differentially private estimate with ε = 2, and the orange line is the differentially private estimate with ε = 1. We observe that our method provides good utility while protecting an individual's privacy. Small sample-sized datasets fare worse compared to larger datasets. An observation worth making is that as the dataset size gets smaller (a small number of events; as in Ovarian, Leukemia, Gehan), the utility of our differentially private estimation gets worse, which is intuitive from the differential privacy point of view, because to protect an individual's privacy in a small dataset, we will need to add large noise (large perturbation). Whereas for moderate to medium-sized datasets, our differentially private estimation provides good results, even for the high privacy regime. An important estimate often reported with the survival function is the median survival time and its associated confidence intervals. Median survival time is defined as the time point when the survival function attains the value of 0.5; confidence intervals for the survival function at that time point serve as the confidence intervals of the median survival. Table 2 shows the results. For "Median Survival (95% CI)", we see that our method estimates the median with high precision, even for the high privacy regime. For some cases, due to high survival (as is the case with the Myeloid and Ovarian datasets), it is not possible to estimate the upper bounds on the confidence intervals, which is why they are marked as "NA". We see a similar trend as we saw in Figure 1: our precision increases with increasing dataset size, an acceptable trade-off for individual-level privacy protection. For the test statistics in Table 2, we observe that our differentially private estimation performs at par with the original "non-noisy" estimation, even for the high privacy regime. The test statistic (Z) follows the χ 2 distribution with one degree of freedom. Using it to derive the p-values, we observe that none of the differentially private estimates change the statistical significance decision (at the 0.05 level). That is, none of the differentially private estimates make the statistically significant "non-noisy" results non-significant, or vice-versa. For cumulative incidence, we use two new datasets with competing risk information. Results are similar for the estimation of the competing risk cumulative incidence, that is, our proposed method provides good utility while protecting an individual's privacy. Our method provides faithful estimation even in the high privacy regime. For space constraints, detailed results are presented in the Appendix. Much work has been done in the intersection of statistical modeling and differential privacy, including many works proposing different differentially private methods for regression modeling (; ; ; ;). Using the same principles, Nguyên & further developed a differentially private regression model for survival analysis. This approach is limited to the "multivariate" regression models and cannot be used for direct differentially private estimation of the survival function. Differentially private generative models such as the differentially private generative adversarial networks (; ; ; ;) have been recently proposed. But, as discussed in the introduction, they are not suitable for generating data for survival function estimation.
We have presented the first method for differentially private estimation of the survival function and we have shown that our proposed method can be easily extended to differentially private estimation of "other" often used statistics such as the associated confidence intervals, test statistics, and the competing risk cumulative incidence. With extensive empirical evaluation on nine real-life datasets, we have shown that our proposed method provides good privacy-utility trade-off. Here we provide details on the datasets used for evaluation. 1. Cancer: It pertains to the data on survival in patients with advanced lung cancer from the. Survival time in days is converted into months. Groups compared are males and females. 2. Gehan: This is the dataset from a trial of 42 leukemia patients. Groups compared are the control and treatment groups. 3. Kidney: This dataset is on the recurrence times to infection, at the point of insertion of the catheter, for kidney patients using portable dialysis equipment. Time is converted into months and groups compared are males and females.. Time is converted into months and groups compared are whether maintenance chemotherapy was given or not. This dataset is about natural history of subjects with monoclonal gammopathy of undetermined significance (MGUS). Time is converted into months and groups compared are males and females. 6. Myeloid: Dataset is based on a trial in acute myeloid leukemia. Time is converted into months and groups compared are the two treatment arms. 7. Ovarian: This dataset pertains to survival in a randomized trial comparing two treatments for ovarian cancer. Time is converted into months and groups compared are the different treatment groups. 8. Stanford: This dataset contains the Stanford Heart Transplant data. Time is converted into months and groups compared are the age groups (above and below median). 9. Veteran: This dataset has information from randomized trial of two treatment regimens for lung cancer. Time is converted into months and groups compared are the treatment arms. For empirical evaluation in a competing risk scenario, we use two datasets that have more than one type of event. First is from a clinical trial for primary biliary cirrhosis (PBC) of the liver . With the event variable being receipt of a liver transplant, censor, or death; our event of interest is the transplant, and death here is a competing event. The second dataset has the data on the subjects on a liver transplant waiting list from 1990-1999, and their disposition: received a transplant (event of interest), died while waiting (competing risk), or censored . Figure 2 shows the (cumulative incidence is the opposite of survival function, so the plots go upward). We observe that our differentially private extension does an excellent job of differentially private estimation of the competing risk cumulative incidence function while providing strong privacy guarantees. A.3 PROOFS Theorem 6. Algorithm 1 is -differentially private. Proof. Let M ∈ R d and M * ∈ R d, such that the L 1 sensitivity, S, is ||M − M * || 1 ≤ 1, and let f denote some function, f: R d → R k. Let p M denote the probability density function of Z(M, f,), and let p M * denote the probability density function of Z(M *, f,), we compare both at some arbitrary point q ∈ R k. Figure 2: Extending differentially private estimation to competing risk cumulative incidence (cumulative incidence is the opposite of survival function, so the plots go upward). Black is the original, unperturbed estimate. 
Green is the differentially private estimate with ε = 2 and orange with ε = 1. We can see that our method does a good job of estimating the competing risk cumulative incidence while providing strong privacy guarantees. The last inequality follows from the definition of the sensitivity S. As our function estimation uses everything from the differentially private version of M and nothing else from the sensitive data, our survival function estimation is differentially private by the post-processing theorem of differential privacy.
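To make the perturb-then-post-process idea concrete, the sketch below illustrates one way such an estimator could look. It assumes the private quantity is the vector of per-time-point event and at-risk counts, each perturbed with Laplace noise scaled by the sensitivity and the budget ε, after which a Kaplan-Meier-style curve is computed purely from the noisy counts (so privacy is preserved by post-processing). The function name, the per-count noise allocation, and the clipping are illustrative assumptions and not the paper's exact Algorithm 1; in particular, budget accounting across time points is glossed over.

```python
import numpy as np

def dp_survival_estimate(event_times, censored, epsilon, sensitivity=1.0, seed=0):
    """Illustrative sketch: perturb aggregate counts with Laplace noise, then
    compute a Kaplan-Meier-style curve from the noisy counts only."""
    rng = np.random.default_rng(seed)
    event_times = np.asarray(event_times)
    censored = np.asarray(censored, dtype=bool)
    surv, s = [], 1.0
    for t in np.sort(np.unique(event_times)):
        d_t = np.sum((event_times == t) & ~censored)   # events observed at time t
        n_t = np.sum(event_times >= t)                 # subjects still at risk at time t
        noise_scale = sensitivity / epsilon            # Laplace mechanism scale
        d_noisy = d_t + rng.laplace(scale=noise_scale)
        n_noisy = max(n_t + rng.laplace(scale=noise_scale), 1e-6)
        s *= float(np.clip(1.0 - d_noisy / n_noisy, 0.0, 1.0))
        surv.append((float(t), s))
    return surv
```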
A first differentially private estimate of the survival function
731
scitldr
Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning. Although some label structure can implicitly be obtained when training on huge amounts of data, in a few-shot learning context where little data is available, making explicit use of the label structure can inform the model to reshape the representation space to reflect a global sense of class dependencies. We propose a meta-learning framework, Conditional class-Aware Meta-Learning (CAML), that conditionally transforms feature representations based on a metric space that is trained to capture inter-class dependencies. This enables a conditional modulation of the feature representations of the base-learner to impose regularities informed by the label space. Experiments show that the conditional transformation in CAML leads to more disentangled representations and achieves competitive on the miniImageNet benchmark. In machine learning, the objective of classification is to train a model to categorize inputs into various classes. We usually assume a categorical distribution over the label space, and thus effectively ignore dependencies among them. However, class structure does exist in real world and is also present in most datasets. Although class structure can be implicitly obtained as a by-product during learning, it is not commonly exploited in an explicit manner to develop better learning systems. The use of label structure might not be of prime importance when having access to huge amounts of data, such the full ImageNet dataset. However, in the case of few-shot learning where little data is available, meta-information such as dependencies in the label space can be crucial. In recent years, few-shot learning-learning from few examples across many tasks-has received considerable attention BID23 BID28 BID6 BID30. In particular, the concept of meta-learning has been shown to provide effective tools for few-shot learning tasks. In contrast to common transfer learning methods that aim to fine-tune a pre-trained model, meta-learning systems are trained by being exposed to a large number of tasks and evaluated in their ability to learn new tasks effectively. In meta-training, learning happens at two levels: a meta-learner that learns across many tasks, and a base-learner that optimizes for each task. Model-Agnostic Meta-Learning (MAML) is a gradient-based meta-learning algorithm that provides a mechanism for rapid adaptation by optimizing only for the initial parameters of the base-learner BID6.Our motivation stems from a core challenge in gradient-based meta-learning, wherein the quality of gradient information is key to fast generalization: it is known that gradient-based optimization fails to converge adequately when trained from only a few examples BID23, hampering the effectiveness of gradient-based meta-learning techniques. We hypothesize that under such circumstances, introducing a metric space trained to encode regularities of the label structure can impose global class dependencies on the model. This class structure can then provide a high-level view of the input examples, in turn leading to learning more disentangled representations. We propose a meta-learning framework taking advantage of this class structure information, which is available in a number of applications. 
The Conditional class-Aware Meta-Learning (CAML) model is tasked with producing activations in a manner similar to a standard neural network, but with the additional flexibility to shift and scale those activations conditioned on some auxiliary meta-information. While there are no restrictions on the nature of the conditioning factor, in this work we model class dependencies by means of a metric space. We aim to learn a function mapping inputs to a metric space where semantic distances between instances follow an Euclidean geometry-classes that are semantically close lie in close proximity in an p sense. The goal of the conditional class-aware transformation is to make explicit use of the label structure to inform the model to reshape the representation landscape in a manner that incorporates a global sense of class structure. The contributions of this work are threefold: (i) We provide a meta-learning framework that makes use of structured class information in the form of a metric space to modulate representations in few-shot learning tasks; (ii) We introduce class-aware grouping to improve the statistical strength of few-shot learning tasks; (iii) We show experimentally that our proposed algorithm learns more disentangled representation and achieves competitive on the miniImageNet benchmark. We start by describing the meta-learning formulation proposed by BID30 and BID23, and review MAML BID6, of which CAML is an instance. The goal of meta-learning is to learn from a distribution of tasks. The learning happens on two levels: (i) a meta-level model, or meta-learner, that learns across many tasks, and (ii) a base-level model, or base-learner, that operates within each specific task. Meta-learning happens in task space, where each task can be treated as one meta-example. In the meta-learning formulation, we define a collection of regular tasks as meta-sets D, and each task D ∈ D has its own D train and D test split. D train is often denoted as the "support set" and D test the "query set". The ing meta-learner objective is to choose parameters θ that minimize the expected loss L(·;θ) across all tasks in D, DISPLAYFORM0 At the meta-level, the meta-sets D can be further split into disjoint meta-training set D meta−train, meta-validation set D meta−valid and meta-test set D meta−test. The meta-learner is trained on D meta−train, validated on D meta−valid and finally evaluated on D meta−test. Model-Agnostic Meta-Learning BID6 ) is a meta-learning algorithm that aims to learn representations that encourage fast adaptation across different tasks. The meta-learner and base-learner share the same network structure, and the parameters learned by the meta-learner are used to initialize the base-learner on a new task. To optimize the meta-learner, we first sample a set of tasks {D 1,D 2,...,D S} from the meta-training set D meta−train. For a meta-learner parameterized by θ, we compute its adapted parameters θ i for each sampled task D i. The adapted parameters θ i are task-specific and tell us the effectiveness of θ as to whether it can achieve generalization through one or a few additional gradient steps. The objective of the meta-learner is to optimize the representation θ such that it leads to good task-specific adaptations θ i with only a few gradient steps. The meta-learner performs slow learning at the meta-level across many tasks to support fast learning on new tasks. 
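The two-level optimization just described can be summarized in a short sketch. The code below is an illustrative second-order MAML step, not the authors' implementation: `tasks` is assumed to yield support/query splits, and `functional_call` (available in recent PyTorch under `torch.func`) evaluates the model with the task-adapted parameters while keeping the computation graph back to the shared initialization.

```python
import torch
from torch.func import functional_call

def maml_meta_step(model, loss_fn, tasks, inner_lr, meta_opt):
    """Illustrative second-order MAML step: adapt per task with one inner
    gradient step, then update the shared initialization on query-set losses."""
    names, params = zip(*model.named_parameters())
    meta_loss = 0.0
    for x_support, y_support, x_query, y_query in tasks:
        support_loss = loss_fn(model(x_support), y_support)
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        # Task-specific parameters: adapted = theta - inner_lr * grad
        adapted = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
        query_pred = functional_call(model, adapted, (x_query,))
        meta_loss = meta_loss + loss_fn(query_pred, y_query)
    meta_opt.zero_grad()
    meta_loss.backward()   # gradients flow back to theta through the inner step
    meta_opt.step()
    return meta_loss.detach()
```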
At meta-test time, we initialize the base-learner with the meta-learned representation θ * followed by gradient-based fine-tuning. 3.1 CONDITIONAL CLASS-AWARE META-LEARNING As shown in Figure 1, the proposed Conditional class-Aware Meta-Learning (CAML) is composed of four components: an embedding function f φ that maps inputs to a metric space, a base-learner f θ that learns each individual task, an adaptation function f c that conditionally modulates the representations of Figure 1: Overview of Conditional class-Aware Meta-Learning. Inputs to the model are mapped onto an embedding space using f φ which are then used to modulate the base-learner f θ through a conditional transformation f c. We use MAML (not shown) to meta-learn f c, f θ, and a metric loss to pre-train f φ the base-learner, and a meta-learner that learns across different tasks. Figure 1 depicts a toy illustration of the task inference procedure where examples from three classes are mapped onto a metric space using f φ, which are further used to modulate the base-learner f θ through a conditional transformation function f c.The main contribution of this paper is to incorporate metric-based conditional transformations (f c) into the meta-learning framework at the instance level. A notable feature of the proposed method is that the model has a global sense of the label space through the embedding function f φ by mapping examples onto the semantically meaningful metric space. The embeddings on the metric space inform the base-learner f θ about the label structure which in turn helps disentangle representations from different classes. This structured information can also provide a global view of the input examples to improve gradient-based meta-learning. In a simplistic form, our proposed model makes predictions usinĝ DISPLAYFORM0 where the base-learner f θ is conditioned on the embedding space f φ (x) through the conditional transformation f c. This is in contrast with a regular base-learner whereŷ =f θ (x). In our framework, we use MAML to meta-learn f c and f θ. The metric space is pre-trained using distance-based loss function. We encode information of the label structure through f φ in the form of an M-dimensional metric space, where each input example is reduced to a point in the metric space. The goal of the metric learning step is to optimize parameter φ such that distances between examples in the metric space are semantically meaningful. Given the parameters of the metric space φ, which is represented by a convolutional network, we calculate a centroid c t for each class t, DISPLAYFORM0 where K denotes the number of examples for class t, 1 {yi=t} denotes an indicator function of y i which takes value 1 when y i =t and 0 otherwise. The centroid c t is the sample mean among all instances from the same class which is treated as a prototype representation of the class t. The mapping function f φ is optimized to minimize the negative log-probability defined in Eq. by minimizing the Euclidean distance d between an example and its corresponding class centroid c t while maximizing its Euclidean distance to other class centroids c t: DISPLAYFORM1 In relation to prototypical networks BID28, we use the same loss function for metric learning. However, these frameworks differ in the test mode: we are not interested in example-centroid distances for label assignment, but rather in the projection f φ (x i) from the input space to the metric space that encapsulates inferred class regularities given the input example x i. 
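A compact sketch of the metric pre-training step described above is given below: class centroids are the per-class means of the embeddings, and each embedded example is scored by a softmax over negative squared Euclidean distances to the centroids, giving the negative log-probability loss. The assumption that every class is present in the batch, and the use of squared distances, are illustrative choices.

```python
import torch
import torch.nn.functional as F

def metric_space_loss(embeddings, labels, num_classes):
    """Prototype-style metric loss sketch: pull each embedding toward its
    class centroid and push it away from the other centroids."""
    centroids = torch.stack(
        [embeddings[labels == t].mean(dim=0) for t in range(num_classes)])
    # Squared Euclidean distance from every example to every class centroid
    dists = torch.cdist(embeddings, centroids, p=2) ** 2
    # Softmax over negative distances -> class probabilities; then NLL
    log_probs = F.log_softmax(-dists, dim=1)
    return F.nll_loss(log_probs, labels)
```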
In relation to other pre-training methods that use the meta-train classes to train a 64-way classifier, our use of the metric space imposes distance-based constraints to learn embeddings that follow semantically meaningful distance measures. We empirically find it difficult to optimize both the metric space and base-learner end-to-end. The metric space is pre-trained on the meta-train data and it is not updated during meta-learning. This also ensures the metric space is trained on a large number of classes to capture the global class dependencies. We now turn to describing the conditionally transformed convolutional block, shown in FIG0, which uses the metric space described in Section 3.2 to inform the base-learner about the label structure of a task. The conditional transformation f c receives embeddings from the metric space and produces transformation operations to modulate convolutional representations of the base-learner f θ.Our conditional transformation has close relation to Batch Normalization (BN) BID11 that normalizes the input to every layer of a neural network. In order to conditionally modulate feature representations, we use Conditional Batch Normalization (CBN) BID22 to predict scale and shift operators from conditional input s i: DISPLAYFORM0 where f c,γ and f c,β can be any differentiable function. This gives our model the flexibility to shift or scale the intermediate representations based on some source information in s i. Since examples belonging to the same class are conceptually close, we exploit this inherent relationship in the metric space to modulate the feature maps at the example level in a way that encodes the label structure. Once we obtained the embedding function f φ, we use two auxiliary networks, learned end-to-end together with the meta-learner, to predict the shift and scale factors of the convolutional feature map: DISPLAYFORM1 Having computedγ i,c andβ i,c, Conditional Batch Normalization (CBN) is applied as follows: DISPLAYFORM2 where R i,c refers to the c th feature map from the i th example, is a small constant, β c and γ c are learnable parameters shared within a task. E[R c] and Var[R c] are batch mean and variance of R c.It is worthwhile to note the effect of conditional transformation. The conditional bias transformation witĥ β i,c is analogous to concatenation-based conditioning where the conditional information is concatenated to the feature maps BID5. The conditional scaling factor provides multiplicative interactions between the metric space and the feature maps to aggregate information. Furthermore, the goal of the conditionally transformed convolutional block is to simultaneously capture the two views of a classification task: a global view that is aware of the relationships among all classes, and a local views of the current N-way K-shot classification task. The metric space, or the global view, is pre-trained in a way that is independent of the current N-way K-shot task; while the base-learner, or the local view, attempts to develop representations for the current classification task. Although the metric space is never trained on the meta-test classes, we expect the learned metric space to generalize from the meta-train tasks to the meta-test tasks. We further describe parameter sharing for CBN learning in Section 3.3.1, and class-aware grouping in Section 3.3.2 which provides more statistical strength for more effective few-shot learning. 
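The conditionally transformed block can be sketched as follows. A small shared MLP, mirroring the description of f_c, maps each example's metric-space embedding to per-channel scale and shift deltas, which then modulate the batch-normalized feature map. The exact parameterization (predicting deltas around 1, and how they combine with the shared γ_c and β_c) is an assumption made for illustration and may differ from the paper's formulation.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Sketch of example-level conditional batch normalization driven by a
    metric-space embedding (illustrative parameterization)."""
    def __init__(self, num_channels, embed_dim, hidden=30):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=True)   # shared gamma_c, beta_c
        self.predictor = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * num_channels),               # -> [delta_gamma, delta_beta]
        )

    def forward(self, feature_map, embedding):
        d_gamma, d_beta = self.predictor(embedding).chunk(2, dim=1)
        normalized = self.bn(feature_map)
        # Broadcast the per-example modulation over the spatial dimensions
        return (1.0 + d_gamma)[..., None, None] * normalized + d_beta[..., None, None]
```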
Although one can predictγ c andβ c using two separate functions, we find it beneficial to use shared parameters as shown in Figure 3. The shared representations are more efficient at producing conditional transformations which also provide a strong inductive bias to help learning BID2. We propose class-aware grouping, as shown in Figure 4 (b), to further exploit properties of metric space. The motivation stems from a lack of statistical strength when learning from only a few examples. As an example, in N-way 1-shot learning, the model is required to find the most meaningful way to distinguish different classes. However, gradient-based optimization may lead to the discovery of irrelevant features which coincide with the class labels, such as colors. We address this problem by class-aware grouping that is guided by our metric space. This is related to "transduction", which is a standard technique in MAML-based methods. Transduction as discussed in BID20, makes use of the channel mean E[R c] and variance Var[R c], defined in Eq., of query examples when evaluating a base-learner. In contrast to standard transduction methods that calculate mean and variance over all examples of the current batch, we introduce class-aware grouping that clusters examples into different groups and use group-based mean and variance to normalize different channels. The grouping is determined by distance measures in the metric space where examples are grouped together based on their nearest centroid c t defined in Section 3.2. Class-aware grouping is integrated into CBN as: DISPLAYFORM0 where 1 {xi∈t} indicates if an example x i belongs to cluster t, and E[R i,c ·1 {xi∈t} ] represents the average of channel R c among examples clustered at c t. This is depicted in Figure 4 where the channel mean and variance are calculated for every group. This approach informs the base-learner about what to expect from the query examples at the class level through channel mean and variance, which provides more explicit guidance to the meta-learning procedure. The base-learner (f θ) is composed of 4 layers of 3×3 convolutions with a 4×4 skip connections from the input to the final convolutional layer. The use of skip connections is to improve the gradient flow as MAML unfolds the inner loop into one computational graph. The use of skip connections is empirically important to the proposed model. Each convolutional layer has 30 channels and is followed by CBN, ReLU and 2×2 max-pooling operations. The output of the final convolution is flattened and fed to a 1-layer dense classifier. For learning the metric space (f φ), we use the same residual network (ResNet-12) as BID21. The metric space is pre-trained on the same meta-training dataset for 30,000 episodes and not updated while learning the base-learner. The meta-learner is trained for 50,000 episodes. We empirically observe that training the metric space and meta-learner end-to-end is overly complex and prone to over-fitting. For CBN functions (f c), we use 3 dense layers with 30 hidden units each. Every layer is followed by a ReLU except for the last layer where no activation is used. For the meta-learner, we use MAML with 1 gradient step for 1-shot learning and 5 gradient steps for 5-shot learning. We use the Adam optimizer and clip the L2 norm of gradients with an upper bound of 5. Similar to MAML, we use transduction where the statistics of the current batch is used for E and Var in Eq. 
FORMULA6 for both training and testing.4 RELATED WORK 4.1 META-LEARNING Meta-learning or "learning-to-learn" BID27 BID1 BID17 BID29 has been studied as a means to acquire meta-knowledge across many tasks. In recent years, meta-learning has become an important approach for few-shot learning. A number of approaches aim to learn universal learning procedure approximators by supplying training examples to the meta-learner that outputs predictions on testing examples BID10 BID30 BID26 BID15. Other approaches learn to generate model parameters conditioned on training examples BID8 BID18 BID9 BID7, or learning optimization algorithms across different tasks BID23 BID0 BID14. Our work is more inline with gradient-based meta-learning that aims to learn representations that encourage fast adaptation on new tasks. These methods are based on model-agnostic meta-learning (MAML) introduced by BID6. While the original MAML requires second-order gradients in meta-optimization, REPTILE BID20 only uses first-order gradient information. Furthermore, Latent Embedding Optimization (LEO) BID25 ) is proposed to perform gradient-based optimization on a lowdimensional latent space instead of the original high-dimensional parameter space. We emphasize that all those methods do not make explicit use of structured label information, which is a main novelty in this paper. Our work also relates closely to metric-based meta-learning that learns a metric space across different tasks. Siamese networks BID13 learn a similarity measure between inputs using a shared network architecture that outputs high probability when paired examples are from the same class. Matching networks BID30 use full context embeddings to encode examples to the metric space and use attention as a similarity measure for predictions. Prototypical networks BID28 ) compute a centroid, or prototype, for every class that are later used for distance-based queries of new examples. Task dependent adaptive metric (TADAM) BID21 uses metric scaling based on tasks representations to learn a task-dependent metric space. A notable difference between the metric-based methods and our approach is that, the metric space in our model is not aimed for distance-based classification. Rather, we use the metric space to represent class structure which facilitates the gradient-based meta learning towards better generalization. Another difference between our method and TADAM is that, TADAM scales the metric space at the task level where all examples within a task are scaled in the same manner. In contrast, our method provides instance-based conditioning that makes use of the precise representation of each example. Put another way, TADAM modulates the inference on a metric space from a task perspective, while CAML uses example-level representation to modulate the representation at the content level. In style transfer, conditional instance normalization is proposed by BID22 The notion that is common to all these methods is the use of an additional input source, e.g., style or language, to conditionally transform intermediate representations of a network. In few-shot learning, BID31 suggested that it is easier to operate in the concept space in the form of a lower dimensional representation. This is compatible with our proposed approach that uses the metric space as concept-level representation to modulate intermediate features of the base-learner. We use miniImageNet to evaluate the proposed Conditional class-Aware Meta-Learning algorithm. 
miniImageNet BID30 ) is composed of 84×84 colored images from 100 classes, with 600 examples in each class. We adopt the class split by BID23 that uses 64 classes for training, 16 for validation, and 20 for test. For N-way K-shot training, we randomly sample N classes from the meta-train classes each containing K examples for training and 20 examples for testing. At meta-testing time, we randomly sample 600 N-way K-shot tasks from the test classes. The presented in TAB0 show that our proposed algorithm has comparable performance on the state-of-the-art miniImageNet 5-way 1-shot classification task, and competitive on the 5-way 5-shot task. Unlike LEO that applies meta-learning on pre-trained representations, our meta-learner is able to effectively operate on the high-dimensional parameter space. Our method also does not require co-training compared with TADAM BID21. Figure 5 shows the t-SNE plot of the learned metric space for both meta-train and meta-validation classes. As seen in Figure 4b, examples from the meta-validation set form clusters consistent with their class membership, even though the metric space is not trained on these classes. For example, "mierkat", "tundrarum" and "podenco" are all animals and they are clustered close together. The first main baseline we report is MAML. CAML improves upon MAML by about 10% on both 1-shot and 5-shot tasks. This means incorporating class dependencies in the form of a metric space can greatly facilitate gradient-based meta-learning. We also compare with MAML using our base-learner architecture equipped with skip connections from the input to the last convolutional layer. MAML trained with our base-learner's architecture yields similar performance as the original MAML, suggesting the improvement is ed from the proposed CAML framework, rather than changes in the base-learner's architecture. BID23 43.44% ± 0.77% 60.60% ± 0.71% Matching Networks BID30 46.6% 60.0% Prototypical Network with Soft k-Means BID24 50.41% ± 0.31% 69.88% ± 0.20% MetaNet BID18 49.21% ± 0.96% − TCML BID16 55.71% ± 0.99% 68.88% ± 0.92% adaResNet BID19 56.88% ± 0.62% 71.94 ± 0.57% Cosine Classifier BID7 56.20% ± 0.86% 73.00% ± 0.64% TADAM BID21 58.5% 76.7% LEO BID25 61.76% ± 0.08% 77.59% ± 0.12%MAML BID6 48.7% ± 1.84% 63.11% ± 0.92% MAML on our architecture 48.26% ± 1.04% 64.25% ± 0.78% Prototypical Network BID28 49.42% ± 0.78% 68.2% ± 0.66% Prototypical Network on our metric space 55.96% ± 0.91% 71.64% ± 0.70% CAML (with multitask learning alone) 52.56% ± 0.83% 71.35% ± 1.13% CAML (with class-aware grouping alone) 55.28% ± 0.90% 71.14% ± 0.81% CAML (full model) 59.23% ± 0.99% 72.35% ± 0.71%The confidence intervals are constructed by sampling 600 evaluation tasks from the meta-test classes. The second baseline we use is prototypical network. We measure the classification ability of our metric space using prototypical network as a classifier, shown in TAB0 (Prototypical Network in our metric space). These suggest that making predictions on the metric space alone is inferior to CAML.This can be explained by CAML's ability to fast-adapt representations even when the metric space does not provide good separations. We also find that CAML has larger improvements in 1-shot tasks than 5-shot ones. This is because, in 1-shot learning, metric-based methods estimate class representations from a single example, making it difficult to provide a robust class estimation. 
We compare activations before and after the conditional transformation to better understand how conditional transformation modulates the feature representations. FIG3 shows the PCA projections of the last convolutional layer in the base-learner. We observe in Figure 5a that, before conditional transformation, examples from three classes ("parallel bars", "tile roof" and "reel") are mixed together. In Figure 5b, after the conditional transformation is applied, one of the previously cluttered classes ("tile roof") become separated from the rest classes. This confirms that metric space can alleviate the difficulty in few-shot learning by means of conditional transformations. We undertake ablation studies to show the impact of multitask learning and class-aware grouping. Empirical in TAB0 suggest that, while 1-shot learning is sensitive to multitask learning and class-aware grouping, 5-shot learning is not affected by those techniques. This is owing to a lack of statistical strength in 1-shot learning, which requires more explicit guidance in the training procedure. This means exploiting metric-based channel mean and variance can provide valuable information to improve meta-learning. More detailed ablation studies are included in Appendix A. In this work, we propose Conditional class-Aware Meta-Learning (CAML) that incorporates class information by means of an embedding space to conditionally modulate representations of the base-learner. By conditionally transforming the intermediate representations of the base-learner, our goal is to reshape the representation with a global sense of class structure. Experiments reveal that the proposed conditional transformation can modulate the convolutional feature maps towards a more disentangled representation. We also introduce class-aware grouping to address a lack of statistical strength in few-shot learning. The proposed approach obtains competitive with the current state-of-the-art performance on 5-way 1-shot and 5-shot miniImageNet benchmark. TAB1 suggest that, while 1-shot learning is sensitive to multitask learning and class-aware grouping, 5-shot learning is less sensitive those techniques. This is owing to a lack of sufficient training examples in 1-shot learning tasks, which requires more explicit guidance in the training procedure. We further note that, in 1-shot learning, using class-aware grouping alone can improve CBN's performance by 3%. This means exploiting metric-based channel mean and variance can provide valuable information for gradient-based meta-learning. For CBN parameters, we observe that more than half of the predictedβ c are negative. This is inline with findings from BID22 that CBN selectively suppresses activations of a feature map when followed by a ReLU. To further examine the impact of the scale and shift operators, we train CBN with each operator alone. TAB2 shows CBN works the best when bothγ c andβ c are used, andγ c contributes more thanβ c, owing to its multiplicative interactions between the metric space and convolutional feature representations.
CAML is an instance of MAML with conditional class dependencies.
732
scitldr
We study the problem of multiset prediction. The goal of multiset prediction is to train a predictor that maps an input to a multiset consisting of multiple items. Unlike existing problems in supervised learning, such as classification, ranking and sequence generation, there is no known order among items in a target multiset, and each item in the multiset may appear more than once, making this problem extremely challenging. In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making. The proposed multiset loss function is empirically evaluated on two families of datasets, one synthetic and the other real, with varying levels of difficulty, against various baseline loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions. The experiments reveal the effectiveness of the proposed loss function over the others. A relatively less studied problem in machine learning, particularly supervised learning, is the problem of multiset prediction. The goal of this problem is to learn a mapping from an arbitrary input to a multiset 1 of items. This problem appears in a variety of contexts. For instance, in the context of high-energy physics, one of the important problems in a particle physics data analysis is to count how many physics objects, such as electrons, muons, photons, taus, and jets, are in a collision event BID4. In computer vision, automatic alt-text, such as the one available on Facebook, 2 is a representative example of multiset prediction BID16 BID9. 3 In multiset prediction, a learner is presented with an arbitrary input and the associated multiset of items. It is assumed that there is no predefined order among the items, and that there are no further annotations containing information about the relationship between the input and each of the items in the multiset. These properties make the problem of multiset prediction unique from other wellstudied problems. It is different from sequence prediction, because there is no known order among the items. It is not a ranking problem, since each item may appear more than once. It cannot be transformed into classification, because the number of possible multisets grows exponentially with respect to the maximum multiset size. In this paper, we view multiset prediction as a sequential decision making process. Under this view, the problem reduces to finding a policy that sequentially predicts one item at a time, while the outcome is still evaluated based on the aggregate multiset of the predicted items. We first propose an oracle policy that assigns non-zero probabilities only to prediction sequences that exactly in the target, ground-truth multiset given an input. This oracle is optimal in the sense that its prediction never decreases the precision and recall regardless of previous predictions. That is, its decision is optimal in any state (i.e., prediction prefix). We then propose a novel multiset loss which minimizes the KL divergence between the oracle policy and a parametrized policy at every point in a decision trajectory of the parametrized policy. 1 A set that allows multiple instances, e.g. {x, y, x}. See Appendix A for a detailed definition. https://newsroom.fb.com/news/2016/04/using-artificial-intelligenceto-help-blind-people-see-facebook/ 3 We however note that such a multiset prediction problem in computer vision can also be solved as segmentation, if fine-grained annotation is available. 
See, e.g., BID6.We compare the proposed multiset loss against an extensive set of baselines. They include a sequential loss with an arbitrary rank function, sequential loss with an input-dependent rank function, and an aggregated distribution matching loss and its one-step variant. We also test policy gradient, as was done by BID16 recently for multiset prediction. Our evaluation is conducted on two sets of datasets with varying difficulties and properties. According to the experiments, we find that the proposed multiset loss outperforms all the other loss functions. The paper is structured as follows. We first define multiset prediction at the beginning of Section 2, and compare it to existing problems in supervised learning in 2.1. Then we propose the multiset loss in Section 2.2, followed by alternative baseline losses in Section 3. The multiset loss and baselines are then empirically evaluated in Section 4. A multiset prediction problem is a generalization of classification, where a target is not a single class but a multiset of classes. The goal is to find a mapping from an input x to a multiset Y = y 1,..., y |Y|, where y k ∈ C. Some of the core properties of multiset prediction are 1. the input x is an arbitrary vector.2. there is no predefined order among the items y i in the target multiset Y.3. the size of Y may vary depending on the input x.4. each item in the class set C may appear more than once in Y.Refer to Appendix A for definitions related to multiset prediction. As is typical in supervised learning, in multiset prediction a model DISPLAYFORM0 and computing evaluation metrics m(·) that compare the predicted and target multisets, DISPLAYFORM1 Here, F1 score and exact match (defined in Appendix A), are used as evaluation metrics. Variants of this multiset prediction problem have been extensively studied. However, they differ from our definition of the problem. Here, we go over each variant and discuss how it differs from our definition of multiset prediction. Power Multiset Classification Perhaps the most naive approach to multiset prediction is to transform the class set C into a set M (C) of all possible multisets. This transformation, or the size of M (C), is not well defined unless some constraints are put in place. If the maximum size of a target multiset is set to K, the number of all possible multisets is DISPLAYFORM0 With some constant |C|, we notice that this grows exponentially in the maximum size of the target multiset. Once the class set C is transformed, we can train a multi-class classifier π that maps an input x to one of the elements in M (C). However, this is infeasible in practice and generally intractable. For instance, for the COCO Medium dataset used later in the experiments (see section 4.1), M (C) has roughly 20 thousand elements while the dataset only contains roughly 40 thousand training examples. For the full MS COCO dataset, |M (C)| is on the order of 10 49, making it infeasible to learn a classifier using this method. Ranking A ranking problem can be considered as learning a mapping from a pair of input x and one of the items c ∈ C to its score s(x, c). All the items in the class set are then sorted according to the score, and this sorted order determines the rank of each item. By taking the top-K items from this sorted list, we can turn this problem of ranking into set prediction. Similarly to multiset prediction, the input x is arbitrary, and the target is a set without any prespecific order. 
However, ranking differs from multiset prediction in that it is unable to handle multiple occurrences of a single item in the target set. Aggregated Distribution Matching Instead of considering the target multiset as an actual multiset, one can convert it into a distribution by computing the frequency of each item from the class set in the target multiset. That is, DISPLAYFORM1 where I · is an indicator function. Then, we can simply minimize a divergence between this distribution and the predictive distribution from a model. This loss function works only when the conditional distribution p(y|x) substantially differs from the marginal distribution p(y), since the model would resort to a trivial solution of predicting the marginal distribution regardless of the input x. We describe this approach in more detail in Sec. 3.1, and test it against our proposal in the experiments. Sequence prediction A sequence prediction problem is characterized as finding a mapping from an input x to a sequence of classes Y = y 1,..., y |Y|. Representative examples of sequence prediction include machine translation, automatic speech recognition and other tagging problems, such as part-of-speech tagging, in natural language processing. Similarly to multiset prediction, the input x is arbitrary, and an item in the class set C may appear more than once in the target sequence. It is, however, different from multiset prediction in that there is a clear, predetermined order of items in the target sequence. We detail this sequence prediction approach later in Sec. 3.2. In this paper, we propose a novel loss function, called multiset loss, for the problem of multiset prediction. This loss function is best motivated by treating the multiset prediction problem as a sequential decision making process with a model being considered a policy π. This policy takes as input the input x and all the previously predicted classesŷ <t at time t, and outputs the distribution over the next class to be predicted. That is, π θ (y t |ŷ <t, x). This policy is parametrized with a set θ of parameters. We first define a free label multiset at time t as Definition 1 (Free Label Multiset).Y t ← Y t−1 \ {ŷ t−1} y t−1 is the prediction made by the policy at time t − 1.This free label multiset Y t contains all the items that remain to be predicted after t − 1 predictions by the policy. We then construct an oracle policy π *. This oracle policy takes as input a sequence of predicted labelsŷ <t, the input x, and the free label multiset with respect to its predictions, Y t = Y\ {ŷ <t}. It outputs a distribution whose entire probability is evenly distributed over all the items in the free label multiset Y t. In other words, Definition 2 (Oracle). DISPLAYFORM0 An interesting and important property of this oracle is that it is optimal given any prefixŷ <t with respect to both precision and recall. This is intuitively clear by noticing that the oracle policy allows only a correct item to be selected. We call this property the optimality of the oracle. Remark 1. Given an arbitrary prefixŷ <t, DISPLAYFORM1 The proof is given in Appendix B. See Appendix A for definitions of precision and recall for multisets. From the remark above, it follows that the oracle policy is an optimal solution to the problem of multiset prediction in terms of precision and recall. Remark 2. DISPLAYFORM2 The proof can be found in Appendix C.It is trivial to show that sampling from such an oracle policy would never in an incorrect prediction. 
That is, this oracle policy assigns zero probability to any sequence of predictions that is not a permutation of the target multiset. Remark 3. DISPLAYFORM3 where multiset equality refers to exact match, defined in Appendix A. In short, this oracle policy tells us at each time step t which of all the items in the class set C must be selected. This optimality allows us to consider a step-wise loss between a parametrized policy π θ and the oracle policy π *, because the oracle policy provides us with an optimal decision regardless of the quality of the prefix generated so far. We thus propose to minimize the KL divergence from the oracle policy to the parametrized policy at each step separately. This divergence is defined as DISPLAYFORM4 where Y t is formed using predictionsŷ <t from π θ, and H(π t *) is the entropy of the oracle policy at time step t. This entropy term can be safely ignored when learning π θ, since it is constant with respect to θ. We define DISPLAYFORM5 and call it a per-step loss function. We note that it is indeed possible to use another divergence in the place of the KL divergence. It is intractable to minimize the per-step loss from Eq. for every possible state (ŷ <t, x), since the size of the state space grows exponentially with respect to the size of a target multiset. We thus propose here to minimize the per-step loss only for the state, defined as a pair of the input x and the prefixŷ <t, visited by the parametrized policy π θ. That is, we generate an entire trajectory (ŷ 1, . . .,ŷ T) by executing the parametrized policy until either all the items in the target multiset have been predicted or the predefined maximum number of steps have passed. Then, we compute the loss function at each time t based on (x,ŷ <t), for all t = 1,..., T. The final loss function is then the sum of all these per-step loss functions. DISPLAYFORM6 where T is the smaller of the smallest t for which Y t = ∅ and the predefined maximum number of steps allowed. Note that as a consequence of Remarks 2 and 3, minimizing the multiset loss function in maximizing F1 and exact match. As was shown by BID12, the use of the parametrized policy π θ instead of the oracle policy π * allows the upper bound on the learned policy's error to be linear with respect to the size of the target multiset. If the oracle policy had been used, the upper bound would have grown quadratically with respect to the size of the target multiset. To confirm this empirically, we test the following three alternative strategies for executing the parametrized policy π θ in the experiments:1. Greedy search:ŷ t = arg max y log π θ (y|ŷ <t, x) 2. Stochastic sampling: DISPLAYFORM7 Once the proposed multiset loss is minimized, we evaluate the learned policy by greedily selecting each item from the policy. We have defined the proposed loss function for multiset prediction while assuming that the size of the target multiset was known. However, this is a major limitation, and we introduce two different methods for relaxing this constraint. Termination Policy The termination policy π s outputs a stop distribution given the predicted sequence of itemsŷ <t and the input x. Because the size of the target multiset is known during training, we simply train this termination policy in a supervised way using a binary cross-entropy loss. 
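The training procedure for the proposed loss can be summarized with a short sketch. At each step the oracle distributes probability uniformly over the remaining free label multiset, so the per-step KL term reduces (up to the oracle's entropy, which is constant in θ) to an average negative log-probability of the remaining labels, and the trajectory is rolled out with the learned policy itself. The `policy(x, prefix)` interface and the greedy/stochastic switch are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from collections import Counter

def multiset_loss(policy, x, target_multiset, max_steps, greedy=True):
    """Illustrative sketch of the multiset loss: per-step cross-entropy against
    an oracle that is uniform over the free label multiset."""
    free = Counter(target_multiset)               # free label multiset Y_t
    prefix, total = [], 0.0
    for _ in range(max_steps):
        if sum(free.values()) == 0:
            break
        logits = policy(x, prefix)                # scores over the class set C
        log_probs = F.log_softmax(logits, dim=-1)
        remaining = torch.tensor(list(free.elements()), dtype=torch.long)
        total = total - log_probs[remaining].mean()   # -(1/|Y_t|) sum_y log pi(y | prefix, x)
        if greedy:
            y_hat = int(logits.argmax())
        else:
            y_hat = int(torch.multinomial(log_probs.exp(), 1))
        if free[y_hat] > 0:                       # Y_{t+1} = Y_t \ {y_hat}
            free[y_hat] -= 1
        prefix.append(y_hat)
    return total
```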
At evaluation time, we simply threshold the predicted stop probability at a predefined threshold (0.5).Special Class An alternative strategy is to introduce a special item to the class set, called END, and add it to the final free label multiset Y |Y|+1 = {END}. Thus, the parametrized policy is trained to predict this special item END once all the items in the target multiset have been predicted. This is analogous to NLP sequence models which predict an end of sentence token BID14 BID0, and was used in BID16 to predict variable-sized multisets. In addition to the proposed multiset loss function, we propose three more loss functions for multiset prediction. They serve as baselines in our experiments later. In the case of distribution matching, we consider the target multiset Y as a set of samples from a single, underlying distribution q * over the class set C. This underlying distribution can be empirically estimated by counting the number of occurrences of each item c ∈ C in Y. That is, DISPLAYFORM0 where I is the indicator function as before. Similarly, we can construct an aggregated distribution computed by the parametrized policy π θ. As with the proposed multiset loss in Def. 3, we first execute π θ to predict a multisetŶ. This is converted into an aggregated distribution q θ in the same way as we turned the target multiset into the oracle aggregate distribution. Learning is equivalent to minimizing the divergence between these two distributions. In this paper, we test two types of divergences. The first one is from a family of L p distances defined as DISPLAYFORM1 where q * and q are the vectors representing the corresponding categorical distributions. The other is a usual KL divergence defined earlier in Eq.: DISPLAYFORM2 One major issue with this approach is that minimizing the divergence between the aggregated distributions does not necessarily in the optimal policy (see the oracle policy in Def. 2.) That is, a policy that minimizes this loss function may assign non-zero probability to an incorrect sequence of predictions, unlike the oracle policy. This is due to the invariance of the aggregated distribution to the order of predictions. Later when analyzing this loss function, we empirically notice that a learned policy often has a different behaviour from the oracle policy, for instance, reflected by the increasing entropy of the action distribution over time. We can train an one-step predictor with this aggregate distribution matching criterion, instead of learning a policy π θ. That is, a predictor outputs both a point q θ (·|x) in a |C|-dimensional simplex and the sizel θ (x) of the target multiset. Then, for each unique item c ∈ C, the number of its occurrences in the predicted multisetŶ is DISPLAYFORM0 where λ > 0 is a coefficient for balancing the contributions from the two terms. A major weakness of this one-step variant, compared to the approaches based on sequential decision making, is the lack of modelling dependencies among the items in the predicted multiset. We test this approach in the experiments later and observe this lack of output dependency modelling in substantially worse prediction accuracy. All the loss functions defined so far have not relied on the availability of an existing order of items in a target multiset. However, by turning the problem of multiset prediction into sequential decision making, minimizing such a loss function is equivalent to capturing an order of items in the target multiset implicitly. 
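For comparison, the aggregate distribution matching baseline can be sketched as follows: both multisets are turned into frequency distributions over the class set and compared with either an L1 distance or a KL divergence. The sketch operates on hard predicted labels for clarity; a differentiable variant used during training would instead aggregate the policy's per-step probability vectors.

```python
import torch

def aggregate_distribution_loss(pred_multiset, target_multiset, num_classes, mode="l1"):
    """Illustrative sketch of aggregate distribution matching between a
    predicted and a target multiset."""
    def to_distribution(multiset):
        counts = torch.zeros(num_classes)
        for y in multiset:
            counts[y] += 1.0
        return counts / counts.sum().clamp_min(1.0)

    q_pred, q_target = to_distribution(pred_multiset), to_distribution(target_multiset)
    if mode == "l1":
        return (q_pred - q_target).abs().sum()
    # KL(q_target || q_pred); small constants avoid log(0)
    return (q_target * (q_target.clamp_min(1e-8).log()
                        - q_pred.clamp_min(1e-8).log())).sum()
```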
Here, we instead describe an approach based on explicitly defining an order in advance. This will serve as a baseline later in the experiments. We first define a rank function r that maps from one of the unique items in the class set c ∈ C to a unique integer. That is, r: C → Z. This function assigns the rank of each item and is used to order items y i in a target multiset Y. This in a sequence S = (s 1, . . ., s |Y|), where r(s i) ≥ r(s j) for all j > i, and s i ∈ Y. With this target sequence S created from Y using the rank function r, we define a sequence loss function as DISPLAYFORM0 Minimizing this loss function is equivalent to maximizing the conditional log-probability of the sequence S given x. This sequence loss function has two clear disadvantages. First, it does not take into account the actual behaviour of the policy π θ (see, e.g., BID1 BID2 BID12 . This makes a learned policy potentially vulnerable to cascading error at test time. Second and more importantly, this loss function requires a pre-specified rank function r. Because multiset prediction does not come with such a rank function by definition, we must design an arbitrary rank function, and the final performance varies significantly based on the choice. We demonstrate this variation in section 4.3.Input-Dependent Rank Function When the input x has a well-known structure, and an object within the input for each item in the target multiset is annotated, it is possible to devise a rank function per input. A representative example is an image input with bounding box annotations. Here, we present two input-dependent rank functions in such a case. First, a spatial rank function r spatial assigns an integer rank to each item in a given target multiset Y such that where x i and x j are the objects corresponding to the items y i and y j .Second, an area rank function r area decides the rank of each label in a target multiset according to the size of the corresponding object inside the input image: DISPLAYFORM1 The area may be determined based on the size of a bounding box or the number of pixels, depending on the level of annotation. We test these two image-specific input-dependent rank functions against a random rank function in the experiments. In BID16, an approach based on reinforcement learning was proposed for multiset prediction. Instead of assuming the existence of an oracle policy, this approach solely relies on a reward function r designed specifically for multiset prediction. The reward function is defined as DISPLAYFORM0 The goal is then to maximize the sum of rewards over a trajectory of predictions from a parametrized policy π θ . The final loss function is DISPLAYFORM1 where the second term inside the expectation is the negative entropy multiplied with a regularization coefficient λ. The second term encourages the exploration during training. As in BID16, we use REINFORCE to stochastically minimize the loss function above with respect to π θ.This loss function is optimal in that the return, i.e., the sum of the step-wise rewards, is maximized when both the precision and recall are maximal (= 1). In other words, the oracle policy, defined in Def. 2, maximizes the expected return. However, this approach of reinforcement learning is known to be difficult, with a high variance BID11. This is especially true here, as the size of the state space grows exponentially with respect to the size of the target multiset, and the action space of each step is as large as the number of unique items in the class set. 
In this section, we extensively evaluate the proposed multiset loss function against various baseline loss functions presented throughout this paper. More specifically, we focus on its applicability and performance on image-based multiset prediction. MNIST Multi MNIST Multi is a class of synthetic datasets. Each dataset consists of multiple 100x100 images, each of which contains a varying number of digits from the original MNIST . We vary the size of each digit and also add clutters. In the experiments, we consider the following variants of MNIST Multi:• MNIST Multi: |Y| = 4, 20-50 pixel digits • MNIST Multi: |Y| ∈ {1, . . ., 4}, 20-50 pixel digits • MNIST Multi: |Y| = 10, 20 pixel digits Each dataset has a training set with 70,000 examples and a test set with 10,000 examples. We randomly sample 7,000 examples from the training set to use as a validation set, and train with the remaining 63,000 examples. MS COCO As a real-world dataset, we use Microsoft COCO BID10 which includes natural images with multiple objects. Compared to MNIST Multi, each image in MS COCO has objects of more varying sizes and shapes, and there is a large variation in the number of object instances per image which spans from 1 to 91. The problem is made even more challenging with many overlapping and occluded objects. To control the difficulty in order to better study the loss functions, we create the following two variants:• COCO Easy: |Y| = 2, 10,230 training examples, 24 classes• COCO Medium: |Y| ∈ {1, . . ., 4}, 44,121 training examples, 23 classesIn both of the variants, we only include images whose |Y| objects are large and of common classes. An object is defined to be large if the object's area is above the 40-th percentile across the train set of MS COCO. After reducing the dataset to have |Y| large objects per image, we remove images containing only objects of rare classes. A class is considered rare if its frequency is less than 1 |C|, where C is the class set. These two stages ensure that only images with a proper number of large objects are kept. We do not use fine-grained annotation (pixel-level segmentation and bounding boxes) except for creating input-dependent rank functions from Sec. 3.2.For each variant, we hold out a randomly sampled 15% of the training examples as a validation set. We form separate test sets by applying the same filters to the COCO validation set. The test set sizes are 5,107 for COCO Easy and 21,944 for COCO Medium. MNIST Multi We use three convolutional layers of channel sizes 10, 10 and 32, followed by a convolutional long short-term memory (LSTM) layer BID18. At each step, the feature map from the convolutional LSTM layer is average-pooled spatially and fed to a softmax classifier. In the case of the one-step variant of aggregate distribution matching, the LSTM layer is skipped. MS COCO We use a ResNet-34 BID5 ) pretrained on ImageNet BID3 ) as a feature extractor. The final feature map from this ResNet-34 is fed to a convolutional LSTM layer, as described for MNIST Multi above. We do not finetune the ResNet-34 based feature extractor. In all experiments, for predicting variable-sized multisets we use the termination policy approach since it is easily applicable to all of the baselines, thus ensuring a fair comparison. Conversely, it is unclear how to extend the special class approach to the distribution matching baselines. When evaluating a trained policy, we use greedy decoding and the termination policy for determining the size of a predicted multiset. 
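A sketch of this reinforcement learning baseline is shown below. The per-step reward of +1 when the sampled item is still in the free label multiset and -1 otherwise is an assumption for illustration (the original reward definition is not reproduced here), and the entropy bonus plays the role of the regularizer λ; REINFORCE then maximizes the expected return by weighting log-probabilities with the returns-to-go.

```python
import torch
from collections import Counter

def reinforce_multiset_loss(policy, x, target_multiset, max_steps, entropy_coef=0.01):
    """Illustrative sketch of the RL baseline trained with REINFORCE."""
    free = Counter(target_multiset)
    prefix, log_probs, entropies, rewards = [], [], [], []
    for _ in range(max_steps):
        logits = policy(x, prefix)
        dist = torch.distributions.Categorical(logits=logits)
        y_hat = dist.sample()
        log_probs.append(dist.log_prob(y_hat))
        entropies.append(dist.entropy())
        if free[int(y_hat)] > 0:              # assumed reward: +1 if still needed, else -1
            rewards.append(1.0)
            free[int(y_hat)] -= 1
        else:
            rewards.append(-1.0)
        prefix.append(int(y_hat))
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)   # returns-to-go
    loss = -(torch.stack(log_probs) * returns).sum() \
           - entropy_coef * torch.stack(entropies).sum()
    return loss
```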
Each predicted multiset is compared against the ground-truth target multiset, and we report both the accuracy based on the exact match (EM) and F-1 score (F1), as defined in Appendix A.More details about the model architectures and training are in Appendix D. We test three alternatives: a random rank function 4 r and two input-dependent rank functions r spatial and r area. We compare these rank functions on MNIST Multi and COCO Easy validation sets. We present the in TAB0. It is clear from the that the performance of the sequence prediction loss function is dependent on the choice of a rank function. In the case of MNIST Multi, the area-based rank function was far worse than the other choices. However, this was not true on COCO Easy, where the spatial rank function was worst among the three. In both cases, we have observed that the random rank function performed best, and from here on, we use the random rank function in the remaining experiments. This set of experiments firmly suggests the need of an order-invariant multiset loss function, such as the multiset loss function proposed in this paper. In this set of experiments, we compare the three execution strategies for the proposed multiset loss function, illustrated in Sec. 3. They are greedy decoding, stochastic sampling and oracle sampling. We test them on MNIST Multi and COCO Easy. As shown in TAB2, greedy decoding and stochastic sampling, both of which consider states that are likely to be visited by the parametrized policy, outperform the oracle sampling. This is consistent with the theory by BID12. Although the first two strategies perform comparably to each other, across both of the datasets and the two evaluation metrics, greedy decoding tends to outperform stochastic sampling. We conjecture this is due to better matching between training and testing in the case of greedy decoding. Thus, from here on, we use greedy decoding when training a model with the proposed multiset loss function. We now compare the proposed multiset loss function against the five baseline loss functions: reinforcement learning L RL, aggregate distribution matching-L 1 dm and L KL dm -, its one-step variant L 1-step, and sequence prediction L seq.MNIST Multi We present the on the MNIST Multi variants in TAB1. On all three variants and according to both metrics, the proposed multiset loss function outperforms all the others. The reinforcement learning based approach closely follows behind. Its performance, however, drops as the number of items in a target multiset increases. This is understandable, as the variance of policy gradient grows as the length of an episode grows. A similar behaviour was observed with sequence prediction as well as aggregate distribution matching. We were not able to train any decent models with the one-step variant of aggregate distribution matching. This was true especially in terms of exact match (EM), which we attribute to the one-step variant not being capable of modelling dependencies among the predicted items. TAB3. On COCO Easy, with only two objects to predict per example, both aggregated distribution matching (with KL divergence) and the sequence loss functions are as competitive as the proposed multiset loss. The other loss functions significantly underperform these three loss functions, as they did on MNIST Multi. The performance gap between the proposed loss and the others, however, grows substantially on the more challenging COCO Medium, which has more objects per example. 
The proposed multiset loss outperforms the aggregated distribution matching with KL divergence by 3.7 percentage points on exact match and 4.8 on F1. This is analogous to the experiments on the MNIST Multi variants, where the performance gap increased when moving from four to ten digits. One property of the oracle policy defined in Sec. 2.2 is that the entropy of the predictive distribution strictly decreases over time, i.e., H π * (y|ŷ <t, x) > H π * (y|ŷ ≤t, x). This is a natural consequence from the fact that there is no pre-specified rank function, because the oracle policy cannot prefer any item from the others in a free label multiset. Hence, we examine here how the policy learned based on each loss function compares to the oracle policy in terms of per-step entropy. We consider the policies trained on MNIST Multi, where the differences among them were most clear. As shown in FIG1, the policy trained on MNIST Multi using the proposed multiset loss closely follows the oracle policy. The entropy decreases as the predictions are made. The decreases can be interpreted as concentrating probability mass on progressively smaller free labels sets. The variance is quite small, indicating that this strategy is uniformly applied for any input. The policy trained with reinforcement learning retains a relatively low entropy across steps, with a decreasing trend in the second half. We carefully suspect the low entropy in the earlier steps is due to the greedy nature of policy gradient. The policy receives a high reward more easily by choosing one of many possible choices in an earlier step than in a later step. This effectively discourages the policy from exploring all possible trajectories during training. On the other hand, the policy found by aggregated distribution matching (L KL dm) has the opposite behaviour. The entropy in general grows as more predictions are made. To see why this is suboptimal, consider the final (10th) step. Assuming the first nine predictions {ŷ 1, ...,ŷ 9} were correct (i.e. they form a subset of Y), there is only one correct class left for the final predictionŷ 10. The high entropy, however, indicates that the model is placing a significant amount of probability on incorrect sequences. We believe such a policy is found by minimizing the aggregated distribution matching loss function because it cannot properly distinguish between policies with increasing and decreasing entropies. The increasing entropy also indicates that the policy has learned a rank function implicitly and is fully relying on it. Given some unknown free label multiset, inferred from the input, this policy uses the implicitly learned rank function to choose one item from this set. We conjecture this reliance on an inferred rank function, which is by definition sub-optimal, 5 ed in lower performance of aggregate distribution matching. We have extensively investigated the problem of multiset prediction in this paper. We rigorously defined the problem, and proposed to approach it from the perspective of sequential decision making. In doing so, an oracle policy was defined and shown to be optimal, and a new loss function, called multiset loss, was introduced as a means to train a parametrized policy for multiset prediction. The experiments on two families of datasets, MNIST Multi variants and MS COCO variants, have revealed the effectiveness of the proposed loss function over other loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions. 
The success of the proposed multiset loss brings new opportunities for applying machine learning to various new domains, including high-energy physics. Precision Precision gives the ratio of correctly predicted elements to the number of predicted elements. Specifically, let Ŷ = (C, µ Ŷ), Y = (C, µ Y) be multisets. Then DISPLAYFORM0 The summation and membership are done by enumerating the multiset. For example, the multisets Ŷ = {a, a, b} and Y = {a, b} are enumerated as Ŷ = {a DISPLAYFORM1 Formally, precision can be defined as DISPLAYFORM2 where the summation is now over the ground set C. Intuitively, precision decreases by 1/|Ŷ| each time an extra class label is predicted. Recall Recall gives the ratio of correctly predicted elements to the number of ground-truth elements. Recall is defined analogously to precision. Similarly, we start with the definition of the recall: DISPLAYFORM3 Rec(ŷ <t, Y) = (Σ y∈ŷ<t I y∈Y) / |Y|. The output of the network is turned into a conditional distribution over the next item after an affine transformation followed by a softmax function. When the one-step variant of aggregated distribution matching is used, we skip the convolutional LSTM layers, i.e., c = DISPLAYFORM4 See Fig. 2 for the graphical illustration of the entire network. See TAB4 for the details of the network for each dataset. Preprocessing For MNIST Multi, we do not preprocess the input at all. In the case of MS COCO, input images are of different sizes. Each image is first resized so that its larger dimension has 600 pixels, then zero-padded along its other dimension to 600 pixels and centered, resulting in a 600x600 image. Training The model is trained end-to-end, except ResNet-34 which remains fixed after being pretrained on ImageNet. For all the experiments, we train a neural network using Adam (BID7) with a fixed learning rate of 0.001, β of (0.9, 0.999) and ε of 1e-8. The learning rate was selected based on the validation performance during the preliminary experiments, and the other parameters are the default values. For MNIST Multi, the batch size was 64, and for COCO it was 32. Feedforward Alternative While we use a recurrent model in the experiments, the multiset loss can be used with a feedforward model as follows. A key use of the recurrent hidden state is to retain the previously predicted labels, i.e. to remember the full conditioning set ŷ 1, ...,ŷ t−1 in p(y t |ŷ 1, ...,ŷ t−1). Therefore, the proposed loss can be used in a feedforward model by encoding ŷ 1, ...,ŷ t−1 in the input x t, and running the feedforward model for |Ŷ| steps, where |Ŷ| is determined with a method from section 2.3. Note that compared to the recurrent model, this approach involves additional feature engineering.
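Given the enumeration-based definitions above, the multiset metrics reduce to counting the multiset intersection between prediction and target. A small sketch using Python's Counter (the helper name is ours, not the paper's evaluation code):

```python
from collections import Counter

def multiset_metrics(pred, target):
    """Exact match, precision, recall and F1 between two multisets given as
    lists of class labels; the multiset intersection supplies the count of
    correctly predicted elements."""
    p, t = Counter(pred), Counter(target)
    overlap = sum((p & t).values())
    precision = overlap / max(sum(p.values()), 1)
    recall = overlap / max(sum(t.values()), 1)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return float(p == t), precision, recall, f1

print(multiset_metrics([7, 7, 3], [7, 3]))   # extra 7 costs 1/|Y_hat| of precision
print(multiset_metrics([7, 3], [7, 3]))      # exact match: (1.0, 1.0, 1.0, 1.0)
```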
We study the problem of multiset prediction and propose a novel multiset loss function, providing analysis and empirical evidence that demonstrates its effectiveness.
733
scitldr
Understanding theoretical properties of deep and locally connected nonlinear network, such as deep convolutional neural network (DCNN), is still a hard problem despite its empirical success. In this paper, we propose a novel theoretical framework for such networks with ReLU nonlinearity. The framework bridges data distribution with gradient descent rules, favors disentangled representations and is compatible with common regularization techniques such as Batch Norm, after a novel discovery of its projection nature. The framework is built upon teacher-student setting, by projecting the student's forward/backward pass onto the teacher's computational graph. We do not impose unrealistic assumptions (e.g., Gaussian inputs, independence of activation, etc). Our framework could help facilitate theoretical analysis of many practical issues, e.g. disentangled representations in deep networks. Deep Convolutional Neural Network (DCNN) has achieved a huge empirical success in multiple disciplines (e.g., computer vision BID0 BID10), Computer Go BID8 BID12 BID13, and so on). On the other hand, its theoretical properties remain an open problem and an active research topic. Learning deep models are often treated as non-convex optimization in a high-dimensional space. From this perspective, many properties in deep models have been analyzed: landscapes of loss functions (b; BID1 BID3, saddle points , relationships between local minima and global minimum (; ; BID5, trajectories of gradient descent , path between local minima BID15, etc. However, such a modeling misses two components: neither specific network structures nor input data distribution is considered. Both are critical in practice. Empirically, deep models work particular well for certain forms of data (e.g., images); theoretically, for certain data distribution, popular methods like gradient descent is shown to fail to recover network parameters .Along this direction, previous theoretical works assume specific data distributions like spherical Gaussian and focus on shallow nonlinear networks BID12; ). These assumptions yield nice gradient forms and enable analysis of many properties such as global convergence. However, it is also nontrivial to extend such approaches to deep nonlinear neural networks that yield strong empirical performance. In this paper, we propose a novel theoretical framework for deep and locally connected ReLU network that is applicable to general data distributions. Specifically, we embrace a teacher-student setting. The teacher computes classification labels via a computational graph that has local structures (e.g., CNN): intermediate variables in the graph, (called summarization variables), are computed from a subset of the input dimensions. The student network, with similar local structures, updates the weights to fit teacher's labels with gradient descent, without knowing the summarization variables. One ultimate goal is to show that after training, each node in the student network is highly selective with respect to the summarization variable in the teacher. Achieving this goal will shed light to how the training of practically effective methods like CNN works, which remains a grand challenge. As a first step, we reformulate the forward/backward pass in gradient descent by marginalizing out the input data conditioned on the graph variables of the teacher at each layer. 
The reformulation has nice properties: it relates data distribution with gradient update rules, it is compatible with existing state-of-the-art regularization techniques such as Batch Normalization, and it favors disentangled representations when data distributions have factorizable structures. To our best knowledge, our work is the first theoretical framework to achieve these properties for deep and locally connected nonlinear networks. FIG0: (a) Receptive fields form a hierarchy. The entire input is denoted as x (or x ω). A local region of an input x is denoted as x α. (b) For each region α, we have a latent multinomial discrete variable z α which is computed from its immediate children {z β} β∈ch(α). Given the input x, z α = z α (x α) is a function of the image content x α at α. Finally, z ω at the top level is the class label. (c) A locally connected neural network is trained with pairs (x, z ω (x)), where z ω (x) is the class label generated from the teacher. (d) For each node j, f j (x) is the activation while g j (x) is the back-propagated gradient, both as functions of the input x (and the weights at different layers). Previous works have also proposed frameworks to explain deep networks, e.g., renormalization group for restricted Boltzmann machines BID2, spin-glass models (; a), transient chaos models BID4, differential equations BID11 BID6, information bottleneck (; BID14 BID7), etc. In comparison, our framework imposes mild assumptions rather than unrealistic ones (e.g., independence of activations), explicitly deals with back-propagation, which is the dominant approach used for training in practice, relates it with the data distribution, and considers spatial locality of neurons, an important component in practical deep models. We consider multi-layer (deep) and locally connected networks with ReLU nonlinearity. We consider a supervised setting, in which we have a dataset {(x, y)}, where x is the input image and y is its label computed from x deterministically. It is hard to analyze y which does not have a structure (e.g., random labels). Here our analysis assumes the generation of y from x has a specific hierarchical structure. We use a teacher-student setting to study this property: a student network learns the teacher's label y via gradient descent, without knowing the teacher's internal representations. An interesting characteristic of locally connected networks is that each neuron only covers a fraction of the input dimensions. Furthermore, for a deep and locally connected network, neurons in the lower layers cover a small region while neurons in the upper layers cover a large region. We use Greek letters {α, β, . . ., ω} to represent receptive fields. For a receptive field α, x α is the content in that region. We use ω to represent the entire image (FIG0). Receptive fields form a hierarchy: α is a parent of β, denoted as α ∈ pa(β) or β ∈ ch(α), if α ⊇ β and there exists no other receptive field γ ∉ {α, β} so that α ⊇ γ ⊇ β. Note that siblings can have substantial overlaps (e.g., β 1 and β 2 in FIG0). With this partial ordering, we can attach a layer number l to each receptive field: α ∈ pa(β) implies l(β) = l(α) + 1. For the top-most layer (closest to the classification label), l = 0, and for the bottom-most layer, l = L. For a locally connected network, a neuron (or node) j ∈ α means its receptive field is α. Denote n α as the number of nodes covering the same region (e.g., multi-channel case, Fig. 2(a)). The image content is x α(j), abbreviated as x j if no ambiguity. The parent j's receptive field covers its children's.
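To make the receptive-field hierarchy concrete, the following toy sketch builds a 1-D pyramid of regions in which every region at layer l is the union of a window of regions at layer l + 1, with single pixels at the bottom and the whole input x ω at the top. The window size and stride are illustrative parameters; choosing a stride smaller than the window would produce the overlapping siblings mentioned above.

```python
def build_hierarchy(input_size, window, stride, n_layers):
    """Toy 1-D receptive-field hierarchy: layer L holds single pixels, and each
    region one layer up is the union of a window of child regions."""
    layers = [[(i, i + 1) for i in range(input_size)]]        # bottom layer (l = L)
    while len(layers) < n_layers:
        prev = layers[-1]
        nxt = [(prev[s][0], prev[s + window - 1][1])
               for s in range(0, len(prev) - window + 1, stride)]
        layers.append(nxt)
    return list(reversed(layers))                             # l = 0 (top) first

for l, regions in enumerate(build_hierarchy(input_size=8, window=2, stride=2, n_layers=4)):
    print("layer", l, regions)   # layer 0 is the whole input x_omega: [(0, 8)]
```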
We assume the label y of the input x is computed by a teacher in a bottom-up manner: for each region α, we compute a summarization variable z α from the summarization variables of its children: DISPLAYFORM0 Figure 2: (a) Multiple nodes (neurons) share the same receptive field α. Note that n α is the number of nodes sharing the receptive field α. (b) Grouping nodes with the same receptive fields together. By abuse of notation, α also represents the collection of all nodes with the same receptive field. DISPLAYFORM1 ). This procedure is repeated until the top-level summarization z ω is computed, which is the class label y. We denote φ = {φ α} as the collection of all summarization functions. For convenience, we assume z α be discrete variables that takes m α possible values. Intuitively, m α is exponential w.r.t the area of the receptive field sz(α), for binary input, m α ≤ 2 sz(α). We call a particular assignment of z α, z α = a, an event. For the bottom-most layers, z is just the (discretized) value in each dimension. At each stage, the upward function is deterministic but lossy: z α does not contain all the information in {z β} for β ∈ ch(α). Indeed, it keeps relevant information in the input region x α with respect to the class label, and discards the irrelevant part. During training, all summarization variables Z = {z α} are unknown to the student, except for the label y. Example of teacher networks. Locally connected network itself is a good example of teacher network, in which nodes of different channels located at one specific spatial location form some encoding of the variable z α. Note that the relationship between a particular input x and the corresponding values of the summarization variable z at each layer is purely deterministic. The reason why probabilistic quantities (e.g., P(z α) and P(z α |z β)) appear in our formulation, is due to marginalization over z (or x). This marginalization implicitly establishes a relationship between the conditional probabilities P(z α |z β) and the input data distribution P(x). If we have specified P(z α |z β) at each layer, then we implicitly specify a certain kind of data distribution P(x). Conversely, given a certain kind of P(x) and summarization function φ, we can compute P(z α |z β) by sampling x, compute summarization variable z α, and accumulate frequency statistics of P(z α |z β). If there is an overlap between sibling receptive fields, then it is likely that some relationship among P(z α |z β) might exist, which we leave for future work. Although such an indirect specification may not be as intuitive and mathematically easy to deal with as common assumptions used in previous works (e.g., assuming Gaussian input BID12 ;) ), it gives much more flexibility of the distribution x and is more likely to be true empirically. Comparison with top-down generative model. An alternative (and more traditional) way to specify data distribution is to use a top-down generative model: first sample the label y, then sample the latent variables z α at each layer in a top-down manner, until the input layer. Marginalizing over all the latent variables z α yields a class-conditioned data distribution P(x|y).The main difficulty of this top-down modeling is that when the receptive fields α and α of sibling latent variables overlap, the underlying graphical model becomes loopy. This makes the population loss function, which involves an integral over the input data x, very difficult to deal with. 
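The passage above notes that a data distribution P(x) together with the summarization functions φ implicitly determines the conditional probabilities P(z α |z β) via frequency counts: sample x, compute the summarization variables, and accumulate statistics. The sketch below follows that recipe on a toy 4-pixel teacher; the particular sampler and summarization functions are made up for illustration.

```python
import numpy as np
from collections import defaultdict

def estimate_conditional(sample_x, phi_child, phi_parent, n_samples=20000, seed=0):
    """Estimate P(z_alpha | z_beta) by frequency counts: sample x from P(x),
    compute the child and parent summarization variables, and normalize the
    joint counts over the parent value for each child value."""
    rng = np.random.default_rng(seed)
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_samples):
        x = sample_x(rng)
        counts[phi_child(x)][phi_parent(x)] += 1
    return {zb: {za: c / sum(row.values()) for za, c in row.items()}
            for zb, row in counts.items()}

# Toy teacher: x is a 4-pixel binary "image", z_beta summarizes the left half,
# z_alpha (the label) summarizes the whole image.
sample_x = lambda rng: rng.integers(0, 2, size=4)
phi_child = lambda x: int(x[:2].sum())       # 0, 1 or 2 active pixels on the left
phi_parent = lambda x: int(x.sum() >= 2)     # label: at least half the pixels on
table = estimate_conditional(sample_x, phi_child, phi_parent)
for zb in sorted(table):
    print("z_beta =", zb, "->", table[zb])
```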
As a , it is nontrivial to find a concise relationship between the parameters in the top-down modeling (e.g., conditional probability) and the optimization techniques applied to neural network (e.g., gradient descent). In contrast, as we will see in Sec. 3, our modeling naturally gives relationship between gradient descent rules and conditional probability between nearby summarization variables. We consider a neuron (or node) j. Denote f j as its activation after nonlinearity and g j as the (input) gradient it receives after filtered by ReLU's gating (FIG0). Note that both f j and g j are deterministic functions of the input x and label y, and are abbreviated as f j (x) and g j (x).1.The activation f j and gradient g k can be written as (note that f j is the binary gating function): DISPLAYFORM0 And the weight update for gradient descent is DISPLAYFORM1 Here is the expectation is with respect to a training dataset (or a batch), depending on whether GD or SGD has been used. We also use f raw j and g raw j as the counterpart of f j and g j before nonlinearity. For locally connected network, the activation f j of node j is only dependent on the region x j, rather than the entire image x. This means that DISPLAYFORM2 However, the gradient g j is determined by the entire image x, and its label y, i.e., g j = g j (x, y).Note that since the label y is a deterministic (but unknown) function of x, for gradient we just write DISPLAYFORM3 Marginalized Gradient. For locally connected network, the gradient g j has some nice structures. From Eqn. 17 we knows that DISPLAYFORM4. Define x −k = x\x k as the input image x except for x k. Then we can define the marginalized gradient: DISPLAYFORM5 as the marginalization (average) of x −k, while keep x k fixed. With this notation, we can write DISPLAYFORM6 On the other hand, the gradient which back-propagates to a node k can be written as DISPLAYFORM7 where f k is the derivative of activation function of node k (for ReLU it is just a gating function). If we take expectation with respect to x −k |x k on both side, we get DISPLAYFORM8 Note that all marginalized gradients g j (x k) are independently computed by marginalizing with respect to all regions that are outside the receptive field x k. Interestingly, there is a relationship between these gradients that respects the locality structure: Theorem 1 (Recursive Property of marginalized gradient). DISPLAYFORM9 This shows that there is a recursive structure in marginal gradient: we can first compute g j (x j) for top node j, then by marginalizing over the region within x j but outside x k, we get its projection g j (x k) on child k, then by Eqn. 20 we collect all projections from all the parents of node k, to get g k (x k). This procedure can be repeated until we arrive at the leaf nodes. Let's first consider the following quantity. For each neural node j, we want to compute the expected gradient given a particular factor z α, where α = rf(j) (the reception field of node j): DISPLAYFORM0 Note that P(x j |z α) is the frequency count of x j for z α. If z α captures all information of x j, then P(x j |z α) is a delta function. Throughout the paper, we use frequentist interpretation of probabilities. Goal. Intuitively, if we have g j (z α = a) > 0 and g j (z α = a) < 0, then the node j learns about the hidden event z α = a. 
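The expected gradient g j (z α = a) defined just above is a conditional expectation of the per-sample gradient given the value of the hidden factor, so with a frequentist reading it can be estimated by conditional averaging over samples. The sketch below does this on synthetic per-sample gradients and applies the informal selectivity criterion (positive for one event, negative for the others); the data generator and helper names are ours, purely for illustration.

```python
import numpy as np

def expected_gradient_per_event(g_j, z_alpha):
    """Estimate g_j(z_alpha = a) = E[g_j(x) | z_alpha = a] by averaging the
    per-sample gradients of node j over the samples where the factor equals a."""
    return {int(a): float(g_j[z_alpha == a].mean()) for a in np.unique(z_alpha)}

def is_selective(cond_grad):
    """Node j 'learns about' an event if its conditional gradient is positive
    for exactly one event and negative for all the others."""
    vals = np.array(list(cond_grad.values()))
    return bool((vals > 0).sum() == 1 and (vals < 0).sum() == len(vals) - 1)

# Toy data: 1000 samples of a binary hidden factor z_alpha and synthetic
# per-sample gradients g_j(x) that happen to track the factor.
rng = np.random.default_rng(1)
z_alpha = rng.integers(0, 2, size=1000)
g_j = np.where(z_alpha == 1, 0.5, -0.5) + 0.1 * rng.normal(size=1000)

cond = expected_gradient_per_event(g_j, z_alpha)
print(cond)                 # e.g. {0: ~-0.5, 1: ~+0.5}
print(is_selective(cond))   # True: node j is selective for z_alpha = 1
```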
For multi-class classification, the top level nodes (just below the softmax layer) already embrace such correlations (here j is the class label): g j (y = j) > 0 and g j (y = j) < 0, where we know z ω = y is the top level factor. A natural question now arises:Does gradient descent automatically push g j (z α) to be correlated with the factor z α? DISPLAYFORM1 n β -by-n α Weight matrix that links group α and β P αβ m α -by-m β Prob P(z β |z α) of events at group α and β If this is true, then gradient descent on deep models is essentially a weak-supervised approach that automatically learns the intermediate events at different levels. Giving a complete answer of this question is very difficult and is beyond the scope of this paper. As a first step, we build a theoretical framework that enables such analysis. We start with the relationship between neighboring layers: Theorem 2 (Reformulation). For node j and k and their receptive field α and β. If the following two conditions holds: DISPLAYFORM2 Then the following iterative equations hold: DISPLAYFORM3 The reformulation becomes exact if z α contains all information of the region. Theorem 3. If P(x j |z α) is a delta function for all α, then all conditions in Thm. 2 hold. While Thm. 3 holds in the ideal (and maybe trivial) case, both assumptions are still practically reasonable. For assumption, the main idea is that the image content x α is most related to the summarization variable z α located at the same receptive field α, and less related to others. On the other hand, assumptions holds approximately if the summarization variable is fine-grained. Intuitively, P(x j |z α) is a distribution encoding how much information gets lost if we only know the factor z α. Climbing up the ladder, more and more information is lost while keeping the critical part for the classification. This is consistent with empirical observations , in which the low-level features in DCNN are generic, and high-level features are more class-specific. One key property of this formulation is that, it relates conditional probabilities P(z α, z β), and thus input data distribution P(x) into the gradient descent rules. This is important since running backpropagation on different dataset is now formulated into the same framework with different probability, i.e., frequency counts of events. By studying which family of distribution leads to the desired property, we could understand backpropagation better. Furthermore, the property of stochastic gradient descent (SGD) can be modeled as using an imperfect estimateP(z α, z β) of the true probability P(z α, z β) when running backpropagation. This is because each batch is a rough sample of the data distribution so the ing P(z α, z β) will also be different. This could also unite GD and SGD analysis. For boundary conditions, in the lowest level L, we could treat each input pixel (or a group of pixels) as a single event: DISPLAYFORM4 For top level, each node j corresponds to a class label j while the summarization variable z α also take class labels: DISPLAYFORM5 If we group the nodes with the same reception field at the same level together (Fig. 2), we have the matrix form of Eqn. 7 (• is element-wise multiplication): Theorem 4 (Matrix Representation of Reformulation). DISPLAYFORM6 See Tbl. 3 for the notation. For this dynamics, we want F * ω = I nω, i.e., the top n ω neurons faithfully represents the classification labels. Therefore, the top level gradient is G ω = I nω − F ω. 
On the other side, for each region β at the bottom layer, we have F β = I n β, i.e., the input contains all the preliminary factors. For all regions α in the top-most and bottom-most layers, we have n α = m α. Our reformulation naturally incorporates empirical regularization technique like Batch Normalization (BN) . We start with a novel finding of Batch Norm: the back-propagated gradient through Batch Norm layer at a node j is a projection onto the orthogonal complementary subspace spanned by all one vectors and the current activations of node j. Denote pre-batchnorm activations as DISPLAYFORM0 where N is the batchsize. In Batch Norm, f is whitened to bef, then linearly transformed to yield the output f bn (note that we omit node subscript j for clarity):f DISPLAYFORM1 where µ = Theorem 5 (Backpropagation of Batch Norm). For a top-down pre-BN gradient g bn (a vector of size N -by-1,N is the batchsize), the gradient after passing BN layer is the following: DISPLAYFORM2 Here P ⊥ f,1 is the orthogonal complementary projection onto subspace {f, 1} and DISPLAYFORM3 Intuitively, the back-propagated gradient g is zero-mean and perpendicular to the input activation f of BN layer, as illustrated in FIG1. Unlike that analyzes BN in an approximate manner, in Thm. 5 we do not impose any assumptions. In our reformulation, we take the expectation of input x so there is no explicit notation of batch. However, we could regard each sample in the batch as i.i.d. samples from the data distribution P(x). Then the analysis of Batch Norm in Sec. 4.1 could be applied in the reformulation and yield similar , using the quantity that DISPLAYFORM0 In this case, we have DISPLAYFORM1 f,1. Note that the projection matrix P DISPLAYFORM2 zα. In comparison, Sec. 4.1 is a special case with P(DISPLAYFORM3, where x 1, . . ., x N are the batch samples. One consequence is that forG α, we have 1 DISPLAYFORM4 is in the null space of 1 under the inner product ·, · zα . This property will be used in Sec. 5.2. With the help of the theoretical framework, we now can analyze interesting structures of gradient descent in deep models, when the data distribution P(z α, z β) satisfies specific conditions. Here we give two concrete examples: the role played by nonlinearity and in which condition disentangled representation can be achieved. Besides, from the theoretical framework, we also give general comments on multiple issues (e.g., overfitting, GD versus SGD) in deep learning. In the formulation, m α is the number of possible events within a region α, which is often exponential with respect to the size sz(α) of the region. The following analysis shows that a linear model cannot handle it, even with exponential number of nodes n α, while a nonlinear one with ReLU can. Definition 1 (Convex Hull of a Set). We define the convex hull Conv(P) of m points P ⊂ R n to be Conv(P) = P a, a ∈ ∆ n−1, where DISPLAYFORM0 ∈ Conv(P \p j). Definition 2. A matrix P of size m-by-n is called k-vert, or vert(P) = k ≤ m, if its k rows are vertices of the convex hull generated by its rows. P is called all-vert if k = m. Theorem 6 (Expressibility of ReLU Nonlinearity). Assuming m α = n α = O(exp(sz(α))), where sz(α) is the size of receptive field of α. If each P αβ is all-vert, then: (ω is top-level receptive field) DISPLAYFORM1 Note that here Loss(W) ≡ F ω − I 2 F. This shows the power of nonlinearity, which guarantees full rank of output, even if the matrices involved in the multiplication are low-rank. 
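The k-vert and all-vert notions just defined can be tested mechanically: a row p j is a vertex of the convex hull of the rows exactly when it cannot be written as a convex combination of the remaining rows, which is a small linear feasibility problem. Below is a sketch using scipy's linprog; the example matrices are toy stand-ins for P αβ.

```python
import numpy as np
from scipy.optimize import linprog

def is_vertex(P, j):
    """Row j of P is a vertex of Conv(rows of P) iff it cannot be written as a
    convex combination of the other rows (Definition 1)."""
    others = np.delete(P, j, axis=0)                       # (m-1, n)
    # Feasibility LP: find a >= 0 with others^T a = p_j and sum(a) = 1.
    A_eq = np.vstack([others.T, np.ones((1, len(others)))])
    b_eq = np.concatenate([P[j], [1.0]])
    res = linprog(np.zeros(len(others)), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return not res.success                                 # infeasible => vertex

def vert(P):
    """vert(P): number of rows that are vertices of the convex hull of P's rows."""
    return sum(is_vertex(P, j) for j in range(len(P)))

# Toy row-stochastic matrices standing in for P_alpha_beta.
P_allvert = np.eye(3)                                       # all rows are vertices
P_mixed = np.vstack([np.eye(3), [[0.5, 0.5, 0.0]]])         # last row is an average
print(vert(P_allvert), "of", len(P_allvert))                # 3 of 3 -> all-vert
print(vert(P_mixed), "of", len(P_mixed))                    # 3 of 4 -> not all-vert
```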
The following theorem shows that for intermediate layers whose input is not identity, the all-vert property remains. DISPLAYFORM2 This means that if all P αβ are all-vert and its input F β is full-rank, then with the same construction of Thm. 6, F α can be made identity. In particular, if we sample W randomly, then with probability 1, all F β are full-rank, in particular the top-level input F 1. Therefore, using top-level W 1 alone would be sufficient to yield zero generalization error, as shown in the previous works that random projection could work well. The analysis in Sec. 5.1 assumes that n α = m α, which means that we have sufficient nodes, one neuron for one event, to convey the information forward to the classification level. In practice, this is never the case. When n α m α = O(exp(sz(α))) and the network needs to represent the information in a proper way so that it can be sent to the top level. Ideally, if the factor z α can be written down as a list of binary factors: DISPLAYFORM0, the output of a node j could represent z α [j], so that all m α events can be represented concisely with n α nodes. DISPLAYFORM1 The j-th binary factor of region α. z α[j] can take 0 or 1. DISPLAYFORM2 2-by-1 marginal probability vector of binary factor DISPLAYFORM3 The j-th column of F α, G α andG α corresponding to j-th binary factor z α [j].1 / 0 All-1 / All-0 vector. Its dimension depends on context. DISPLAYFORM4 Out (or tensor) product of F 1 and DISPLAYFORM5 are the indices of downstream nodes in β to i-th binary factor in α FIG3 ). DISPLAYFORM6 The j-th subcolumn of weight matrix W βα, whose rows are selected by S αβ j. To come up with a complete theory for disentangled representation in deep nonlinear network is far from trivial and beyond the scope of this paper. In the following, we make an initial attempt by constructing factorizable P αβ so that disentangled representation is possible in the forward pass. First we need to formally define what is disentangled representation: DISPLAYFORM7 and 1 is a 2-by-1 vector. Definition 4. The gradientG α is disentangled, if its j-th columnG α,: DISPLAYFORM8 is a 2-by-1 vector. Intuitively, this means that each node j represents the binary factor z α [j]. A follow-up question is whether such disentangled properties carries over layers in the forward pass. It turns out that the disentangled structure carries if the data distribution and weights have compatible structures:Definition 5. The weights W βα is separable with respect to a disjoint set {S DISPLAYFORM9 Theorem 8 (Disentangled Forward). If for each β ∈ ch(α), P αβ can be written as a tensor product DISPLAYFORM10 where {S αβ i} are αβ-dependent disjointed set, W βα is separable with respect to {S αβ i}, F β is disentangled, then F α is also disentangled (with/without ReLU /Batch Norm). If the bottom activations are disentangled, by induction, all activations will be disentangled. The next question is whether gradient descent preserves such a structure. The answer is also conditionally yes: DISPLAYFORM11, F β andG α are both disentangled, 1 TG α = 0, then the gradient update ∆W βα is separable with respect to {S i}. Therefore, with disentangled F β andG α and centered gradient 1 TG α = 0, the separable structure is conserved over gradient descent, given the initial W βα is separable. Note that centered gradient is guaranteed if we insert Batch Norm (Eqn. 83) after linear layers. And the activation F remains disentangled if the weights are separable. 
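Definition 3 (partially elided above) states that a disentangled activation has columns of the form 1 ⊗ · · · ⊗ f j ⊗ · · · ⊗ 1; when the rows of F α are indexed by all joint assignments of the binary factors, one reading is that column j varies only with factor z α [j]. The sketch below checks that reading on small matrices; the row ordering and the example matrices are assumptions made purely for illustration.

```python
import itertools
import numpy as np

def is_disentangled(F, n_factors, tol=1e-8):
    """Check the 'function of one factor' reading of Definition 3: with the rows
    of F indexed by all 2^n assignments of the binary factors, column j must be
    constant on every set of rows that fixes factor z[j]."""
    assignments = list(itertools.product([0, 1], repeat=n_factors))
    for j in range(F.shape[1]):
        for bit in (0, 1):
            rows = [i for i, a in enumerate(assignments) if a[j] == bit]
            if np.ptp(F[rows, j]) > tol:
                return False
    return True

# Rows ordered as (z[0], z[1]) = (0,0), (0,1), (1,0), (1,1).
F_good = np.array([[0., 2.], [0., 5.], [1., 2.], [1., 5.]])   # column j encodes z[j]
F_bad = np.array([[0., 0.], [0., 1.], [1., 1.], [1., 0.]])    # column 1 is an XOR
print(is_disentangled(F_good, 2), is_disentangled(F_bad, 2))  # True False
```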
The hard part is whetherG β remains disentangled during backpropagation, if {G α} α∈pa(β) are all disentangled. If so, then the disentangled representation is self-sustainable under gradient descent. This is a non-trivial problem and generally requires structures of data distribution. We put some discussion in the Appendix and leave this topic for future work. In the proposed formulation, the input x in Eqn. 7 is integrated out, and the data distribution is now encoded into the probabilistic distribution P(z α, z β), and their marginals. A change of such distribution means the input distribution has changed. For the first time, we can now analyze many practical factors and behaviors in the DL training that is traditionally not included in the formulation. Over-fitting. Given finite number of training samples, there is always error in estimated factor-factor distributionP(z α, z β) and factor-observation distributionP(x α |z α). In some cases, a slight change of distribution would drastically change the optimal weights for prediction, which is overfitting. is a noisy factor. Here is one example. Suppose there are two different kinds of events at two disjoint reception fields: z α and z γ. The class label is z ω, which equals z α but is not related to z γ. Therefore, we have: DISPLAYFORM0 Although z γ is unrelated to the class label z ω, with finite samples z γ could show spurious correlation: DISPLAYFORM1 On the other hand, as shown in Fig. 5, P(x α |z α) contains a lot of detailed structures and is almost impossible to separate in the finite sample case, while P(x γ |z γ) could be well separated for z γ = 0/1. Therefore, for node j with rf(j) = α, f j (z α) ≈ constant (input almost indistinguishable): DISPLAYFORM2 where DISPLAYFORM3, which is a strong gradient signal backpropagated from the top softmax level, since z α is strongly correlated with z ω. For node k with rf(k) = γ, an easy separation of the input (e.g., random initialization) yields distinctive f k (z γ). Therefore, DISPLAYFORM4 where g 0 (z γ) = E zω|zγ [g 0 (z ω)] = 2 z γ = 1 −2 z γ = 0, a weak signal because of z γ is (almost) unrelated to the label. Therefore, we see that the weight w j that links to meaningful receptive field z α does not receive strong gradient, while the weight w k that links to irrelevant (but spurious) receptive field z γ receives strong gradient. This will lead to overfitting. With more data, over-fitting is alleviated sinceP(z ω |z γ) becomes more accurate and → 0; P(x α |z α) starts to show statistical difference for z α = 0/1 and thus f j (z α) shows distinctiveness. Note that there exists a second explanation: we could argue that z γ is a true but weak factor that contributes to the label, while z α is a fictitious discriminative factor, since the appearance difference between z α = 0 and z α = 1 (i.e.,P(x α |z α) for α = 0/1) could be purely due to noise and thus should be neglected. With finite number of samples, these two cases are essentially indistinguishable. Models with different induction bias might prefer one to the other, yielding drastically different generalization error. For neural network, SGD prefers the second explanation but if under the pressure of training, it may also explore the first one by pushing gradient down to distinguish subtle difference in the input. This may explain why the same neural networks can fit random-labeled data, and generalize well for real data BID16. Gradient Descent: Stochastic or not? 
Previous works show that empirically stochastic gradient decent (SGD) with small batch size tends to converge to "flat" minima and offers better generalizable solution than those uses larger batches to compute the gradient. From our framework, SGD update with small batch size is equivalent to using a perturbed/noisy version of P(z α, z β) at each iteration. Such an approach naturally reduces aforementioned over-fitting issues, which is due to hyper-sensitivity of data distribution and makes the final weight solution invariant to changes in P(z α, z β), yielding a "flat" solution. In this paper, we propose a novel theoretical framework for deep (multi-layered) nonlinear network with ReLU activation and local receptive fields. The framework utilizes the specific structure of neural networks, and formulates input data distributions explicitly. Compared to modeling deep models as non-convex problems, our framework reveals more structures of the network; compared to recent works that also take data distribution into considerations, our theoretical framework can model deep networks without imposing idealistic analytic distribution of data like Gaussian inputs or independent activations. Besides, we also analyze regularization techniques like Batch Norm, depicts its underlying geometrical intuition, and shows that BN is compatible with our framework. Using this novel framework, we have made an initial attempt to analyze many important and practical issues in deep models, and provides a novel perspective on overfitting, generalization, disentangled representation, etc. We emphasize that in this work, we barely touch the surface of these core issues in deep learning. As a future work, we aim to explore them in a deeper and more thorough manner, by using the powerful theoretical framework proposed in this paper. We consider a neuron (or node) j. Denote f j as its activation after nonlinearity and g j as the (input) gradient it receives after filtered by ReLU's gating. Note that both f j and g j are deterministic functions of the input x and label y. Since y is a deterministic function of x, we can write f j = f j (x) and g j = g j (x). Note that all analysis still holds with bias terms. We omit them for brevity. The activation f j and gradient g k can be written as (note that f j is the binary gating function): DISPLAYFORM0 And the weight update for gradient descent is: DISPLAYFORM1 Here is the expectation is with respect to a training dataset (or a batch), depending on whether GD or SGD has been used. We also use f raw j and g raw j as the counterpart of f j and g j before nonlinearity. Given the structure of locally connected network, the gradient g j has some nice structures. From Eqn. 17 we knows that DISPLAYFORM0. Define x −k = x\x k as the input image x except for x k. Then we can define the marginalized gradient: DISPLAYFORM1 as the marginalization (average) of x −k, while keep x k fixed. With this notation, we can write DISPLAYFORM2 On the other hand, the gradient which back-propagates to a node k can be written as DISPLAYFORM3 where f k is the derivative of activation function of node k (for ReLU it is just a gating function). If we take expectation with respect to x −k |x k on both side, we get DISPLAYFORM4 Note that all marginalized gradients g j (x k) are independently computed by marginalizing with respect to all regions that are outside the receptive field x k. 
Interestingly, there is a relationship between these gradients that respects the locality structure: Theorem 1 (Recursive Property of marginalized gradient). DISPLAYFORM5 Proof. We have: DISPLAYFORM6 Theorem 2 (Reformulation). Denote α = rf(j) and β = rf(k). k is a child of j. If the following two conditions hold:• Focus of knowledge. P(x k |z α, z β) = P(x k |z β).• Broadness of knowledge. P(x j |z α, z β) = P(x j |z α).• Decorrelation. Given z β, (g raw k (·) and f k (·)) and (f raw k (·) and f k (·)) are uncorrelatedThen the following two conditions holds: DISPLAYFORM0 Proof. For Eqn. 22a, we have: DISPLAYFORM1 And for each of the entry, we have: DISPLAYFORM2 For P(x k |z α), using focus of knowledge, we have: DISPLAYFORM3 Therefore, following Eqn. 26, we have: DISPLAYFORM4 Putting it back to Eqn. 25 and we have: DISPLAYFORM5 For Eqn. 22b, similarly we have: DISPLAYFORM6 Notice that we have: DISPLAYFORM7 since x j covers x k which determines z β. Therefore, for each item we have: DISPLAYFORM8 Then we use the broadness of knowledge: DISPLAYFORM9 DISPLAYFORM10 Following Eqn. 40, we now have: DISPLAYFORM11 DISPLAYFORM12 DISPLAYFORM13 Putting it back to Eqn. 36 and we have: DISPLAYFORM14 Using the definition of g k (z β): DISPLAYFORM15 The un-correlation between g raw k (·) and f k (·) means that DISPLAYFORM16 Similarly for f j (z α). The following theorem shows that the reformulation is exact if z α has all information of the region. Theorem 3. If P(x j |z α) is a delta function for all α, then the conditions of Thm. 2 hold and the reformulation becomes exact. Proof. The fact that P(x j |z α) is a delta function means that there exists a function φ j so that: DISPLAYFORM0 That is, z α contains all information of x j (or x α). Therefore,• Broadness of knowledge. z α contains strictly more information than z β for β ∈ ch(α), therefore P(x j |z α, z β) = P(x j |z α).• Focus of knowledge. z β captures all information of z k, so P(x k |z α, z β) = P(x k |z β).• Decorrelation. For any h 1 (x j) and h 2 (x j) we have DISPLAYFORM1 Theorem 4 (Matrix Representation of Reformulation). DISPLAYFORM0 n β -by-n α Weight matrix that links group β and α. P αβ, P b αβ m α -by-m β Prob P(z β |z α), P(z α |z β) of events between group β and α. Λ α m α -by-m α Diagonal matrix encoding prior prob P(z α). Proof. We first consider one certain group α and β, which uses x α and x β as the receptive field. For this pair, we can write Eqn. 22 in the following matrix form: DISPLAYFORM1 we could simplify Eqn. 60 as follows: DISPLAYFORM2 Therefore, using the fact that j∈pa(k) = α∈pa(β) j∈α (where β = rf(k)) and k∈ch(j) = β∈ch(α) k∈β (where α = rf(j)), and group all nodes that share the receptive field together, we have: DISPLAYFORM3 For the gradient update rule, from Eqn. 17 notice that: DISPLAYFORM4 We assume decorrelation so we have: DISPLAYFORM5 DISPLAYFORM6 DISPLAYFORM7, again we use focus of knowledge: DISPLAYFORM8 Put them together and we have: DISPLAYFORM9 Write it in concise matrix form and we get: DISPLAYFORM10 Theorem 5 (Backpropagation of Batch Norm). For a top-down gradient g, BN layer gives the following gradient update (P ⊥ f,1 is the orthogonal complementary projection of subspace {f, 1}): DISPLAYFORM0 Proof. We denote pre-batchnorm activations as DISPLAYFORM1 whitened to bef (i), then linearly transformed to yield the output f DISPLAYFORM2 DISPLAYFORM3 2 and c 1, c 0 are learnable parameters. 
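Before the derivation continues below, the projection property being established here can be sanity-checked numerically without re-deriving the backward formula: for a scalar loss whose gradient with respect to the BN output is an arbitrary vector g bn, the gradient with respect to the pre-BN activation f should be orthogonal to both the all-one vector and f itself. The sketch uses finite differences; the batch size, the values of c 1 and c 0, and the omission of the small variance epsilon are arbitrary choices made only for the check.

```python
import numpy as np

def batch_norm(f, c1=1.5, c0=0.3):
    """Forward Batch Norm for one unit over a batch: whiten with the batch mean
    and (biased) std, then scale and shift (epsilon omitted for the check)."""
    return c1 * (f - f.mean()) / f.std() + c0

rng = np.random.default_rng(0)
N = 8
f = rng.normal(size=N)          # pre-BN activations of one node over the batch
g_bn = rng.normal(size=N)       # arbitrary top-down gradient dL/df_bn

# Numerically compute g = dL/df for L = g_bn . f_bn(f) via central differences.
g = np.zeros(N)
h = 1e-6
for i in range(N):
    fp, fm = f.copy(), f.copy()
    fp[i] += h
    fm[i] -= h
    g[i] = (g_bn @ batch_norm(fp) - g_bn @ batch_norm(fm)) / (2 * h)

# Theorem 5: g lies in the orthogonal complement of span{f, 1}.
print("g . 1 =", g.sum())       # ~0  (zero-mean gradient)
print("g . f =", g @ f)         # ~0  (perpendicular to the activation)
```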
While in the original batch norm paper, the weight update rules are super complicated and unintuitive (listed here for a reference):Figure 7: Original BN rule from .It turns out that with vector notation, the update equations have a compact vector form with clear geometric meaning. To achieve that, we first write down the vector form of forward pass of batch normalization: DISPLAYFORM4 where f,f,f and f bn are vectors of size N, P DISPLAYFORM5 is 2-by-2 identity matrix) and thus S(x) is an column-orthogonal N -by-2 matrix. If we put everything together, then we have: DISPLAYFORM6 Using this notation, we can compute the Jacobian of batch normalization layer. Specifically, for any vector f, we have: DISPLAYFORM7 where P ⊥ f projects a vector into the orthogonal complementary space of f. Therefore we have: DISPLAYFORM8 where DISPLAYFORM9 is a symmetric projection matrix that projects the input gradient to the orthogonal complement space spanned byx and 1 FIG1 ). Note that the space spanned byf and 1 is also the space spanned by f and 1, sincef = (f − µ1)/σ can be represented linearly by f and 1. DISPLAYFORM10 An interesting property is that since f bn returns a vector in the subspace of f and 1, for the N -by-N Jacobian matrix of Batch Normalization, we have: DISPLAYFORM11 Following the backpropagation rule, we get the following gradient update for batch normalization. If g bn = ∂L/∂f is the gradient from top, then DISPLAYFORM12 Therefore, any gradient (vector of size N) that is back-propagated to the input of BN layer will be automatically orthogonal to that activation (which is also a vector of size N). The analysis of Batch Norm is compatible with the reformulation and we arrive at similar backpropagation rule, by noticing that DISPLAYFORM0 Note that we still have the projection property, but under the new inner product f j, g j zα = DISPLAYFORM1 zα. One can find an interesting quantity, by multiplying g j (x) on both side of the forward equation in Eqn. 16 and taking expectation: DISPLAYFORM0 Using the language of differential equation, we know that: DISPLAYFORM1 where DISPLAYFORM2 If we place Batch Normalization layer just after ReLU activation and linear layer, by BN property, since E x [g j f j] ≡ 0 for all iterations, the row energy E j (t) of weight matrix W of the linear layer is conserved over time. This might be part of the reason why BN helps stabilize the training. Otherwise energy might "leak" from one layer to nearby layers. With the help of the theoretical framework, we now can analyze interesting structures of gradient descent in deep models, when the data distribution P(z α, z β) satisfies specific conditions. Here we give two concrete examples: the role played by nonlinearity and in which condition disentangled representation can be achieved. Besides, from the theoretical framework, we also give general comments on multiple issues (e.g., overfitting, GD versus SGD) in deep learning. In the formulation, m α is the number of possible events within a region α, which is often exponential with respect to the size sz(α) of the region. The following analysis shows that a linear model cannot handle it, even with exponential number of nodes n α, while a nonlinear one with ReLU can. Definition 1 (Convex Hull of a Set). We define the convex hull Conv(P) of m points P ⊂ R n to be Conv(P) = P a, a ∈ ∆ n−1, where DISPLAYFORM0 ∈ Conv(P \p j). Definition 2. A matrix P of size m-by-n is called k-vert, or vert(P) = k ≤ m, if its k rows are vertices of the convex hull generated by its rows. 
P is called all-vert if k = m. Theorem 6 (Expressibility of ReLU Nonlinearity). Assuming m α = n α = O(exp(sz(α))), where sz(α) is the size of receptive field of α. If each P αβ is all-vert, then: (ω is top-level receptive field) DISPLAYFORM1 Here we define DISPLAYFORM2 Proof. We prove that in the case of nonlinearity, there exists a weight so that the activation F α = I for all α. We prove by induction. The base case is trivial since we already know that F α = I for all leaf regions. Suppose F β = I for any β ∈ ch(α). Since P αβ is all-vert, every row is a vertex of the convex hull, which means that for i-th row p i, there exists a weight w i and b i so that w DISPLAYFORM3 Put these weights and biases together into W βα and we have DISPLAYFORM4 All diagonal elements of F raw α are 1 while all off-diagonal elements are negative. Therefore, after ReLU, F α = I. Applying induction, we get F ω = I and G ω = I − F ω = 0. Therefore, DISPLAYFORM5 In the linear case, we know that rank(F α) ≤ β rank(P αβ F β W βα) ≤ β rank(F β), which is on the order of the size sz(α) of α's receptive field (Note that the constant relies on the overlap between receptive fields). However, at the top-level, m ω = n ω = O(exp(sz(ω))), i.e., the information contained in α is exponential with respect to the size of the receptive field. By Eckart-Young-Mirsky theorem, we know that there is a lower bound for low-rank approximation. Therefore, the loss for linear network Loss linear is at least on the order of m 0, i.e., Loss linear = O(m ω). Note that this also works if we have BN layer in-between, since BN does a linear transform in the forward pass. This shows the power of nonlinearity, which guarantees full rank of output, even if the matrices involved in the multiplication are low-rank. The following theorem shows that for intermediate layers whose input is not identity, the all-vert property remains. Theorem 7. If F is full row rank, then vert(P F) = vert(P). P F is all-vert iff P is all-vert. Proof. For, note that each row of P F is p T i F. If F is row full rank, then F has pseudo-inverse F so that F F = I. Therefore, if p i is not a vertex: DISPLAYFORM6 then p T i F is also not a vertex and vice versa. Therefore, vert(P F) = vert(P). follows from.This means that if all P αβ are all-vert and its input F β is full-rank, then with the same construction of Thm. 6, F α can be made identity. In particular, if we sample W randomly, then with probability 1, all F β are full-rank, in particular the top-level input F 1. Therefore, using top-level W 1 alone would be sufficient to yield zero generalization error, as shown in the previous works that random projection could work well. The analysis in the previous section assumes that n α = m α, which means that we have sufficient nodes, one neuron for one event, to convey the information forward to the classification level. In practice, this is never the case. When n α m α = O(exp(sz(α))) and the network needs to represent the information in a proper way so that it can be sent to the top level. Ideally, if the factor z α can be written down as a list of binary factors: DISPLAYFORM0, the output of a node j could represent z α [j], so that all m α events can be represented concisely with n α nodes. To come up with a complete theory for disentangled representation in deep nonlinear network is far from trivial and beyond the scope of this paper. 
In the following, we make an initial attempt by constructing factorizable P αβ so that disentangled representation is possible in the forward pass. First we need to formally define what is disentangled representation: Definition 3. The activation F α is disentangled, if its j-th column If the bottom activations are disentangled, by induction, all activations should be disentangled. The next question is whether gradient descent preserves such a structure. Here we provide a few theorems to discuss such issues. We first start with two lemmas. Both of them have simple proofs. Lemma 1. Distribution representations have the following property: DISPLAYFORM1 α is also disentangled. If F α is disentangled and h is any per-column element-wise function, then h(F α) is disentangled. DISPLAYFORM2 Proof. follows from properties of tensor product. For FORMULA7 and FORMULA9, note that the j-th column of F α is F α,:j = 1⊗... f j...⊗1, therefore h j (F α,:j) = 1⊗... h j (f j)...⊗1, and h We have P Sj 1 = 1 and 1 T p α[j] = 1. Note here for simplicity, 1 represents all-one vectors of any length, determined by the context. Since F α and G β are disentangled, their j-th column can be written as: For simplicity, in the following proofs, we just show the case that n α = 2, n β = 3, z α = z α, z α and S = {S 1, S 2} = {{1, 2}, {3}}. We write f 1,2 = [f 1 ⊗ 1, 1 ⊗ f 2] as a 2-column matrix. The general case is similar and we omit here for brevity. Theorem 8 (Disentangled Forward). If for each β ∈ ch(α), P αβ can be written as a tensor product DISPLAYFORM3 where {S αβ i} are αβ-dependent disjointed set, W βα is separable with respect to {S αβ i}, F β is disentangled, then F α is also disentangled (with/without ReLU /Batch Norm).Proof. For a certain β ∈ ch(α), we first compute the quantity P αβ F β: P αβ F β = (P 1,2 ⊗ P 3) [f 1,2 ⊗ 1, 1 ⊗ f 3] = [P 1,2 f 1,2 ⊗ 1, 1 ⊗ P 3 f 3] Therefore, the forward information sent from β to α is: One hope here is that if we consider α∈pa(β)G raw α→β, the summation over parent α could lead to a better structure, even for individual α, P.If each α ∈ pa(β) is informative in a diverse way, and |S 1 | is relatively small (e.g., 4), then v + α,S1 − v − α,S1 = 0 and spans the probability space of dimension 2 |S1| − 1. Then we can always find c α (or equivalently, weights) so that Eqn. 109 becomes rank-1 tensor (or disentangled). Besides, the gating D β, which is disentangled as it is an element-wise function of F β, will also play a role in regularizingG β.We will leave this part to future work.
This paper presents a theoretical framework that explicitly models the data distribution for deep and locally connected ReLU networks.
734
scitldr
Multi-agent cooperation is an important feature of the natural world. Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate. Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning (MARL) and evolutionary theory. Here, we study a particular class of multi-agent problems called intertemporal social dilemmas (ISDs), where the conflict between the individual and the group is particularly sharp. By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way. To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection. We present results in two challenging environments, and interpret these results in the context of cultural and ecological evolution. Nature shows a substantial amount of cooperation at all scales, from microscopic interactions of genomes and bacteria to species-wide societies of insects and humans BID36. This is in spite of natural selection pushing for short-term individual selfish interests (Darwin, 1859). In its purest form, altruism can be favored by selection when cooperating individuals preferentially interact with other cooperators, thus realising the rewards of cooperation without being exploited by defectors (BID19 BID31 BID9 BID48 BID12). However, many other possibilities exist, including kin selection, reciprocity and group selection (BID40; Úbeda & Duéñez-Guzmán, 2011; BID52 BID41 BID56 BID50). Lately, the emergence of cooperation among self-interested agents has become an important topic in multi-agent deep reinforcement learning (MARL). BID25 formalize the problem domain as an intertemporal social dilemma (ISD), which generalizes matrix game social dilemmas to Markov settings. Social dilemmas are characterized by a trade-off between collective welfare and individual utility. As predicted by evolutionary theory, self-interested reinforcement-learning agents are typically unable to achieve the collectively optimal outcome, converging instead to defecting strategies BID45. The goal is to find multi-agent training regimes in which individuals resolve social dilemmas, i.e., cooperation emerges. Previous work has found several solutions, belonging to three broad categories: 1) opponent modelling (BID13 BID31), 2) long-term planning using perfect knowledge of the game's rules (BID33 BID46), and 3) a specific intrinsic motivation function drawn from behavioral economics BID25. These hand-crafted approaches are at odds with more recent end-to-end model-free learning algorithms, which have been shown to have a greater ability to generalize (e.g. BID10). We propose that evolution can be applied to remove the hand-crafting of intrinsic motivation, similar to other applications of evolution in deep learning. Evolution has been used to optimize single-agent hyperparameters BID26, implement black-box optimization BID55, and to evolve neuroarchitectures BID38 BID51, regularization BID3, loss functions BID27 BID24, behavioral diversity BID6, and entire reward functions BID49. These principles tend to be driven by single-agent search and optimization or competitive multi-agent tasks. Therefore there is no guarantee of success when applying them in the ISD setting.
More closely related to our domain are evolutionary simulations of predator-prey dynamics BID57, which used enforced subpopulations to evolve populations of neurons which are sampled to form the hidden layer of a neural network. To address the specific challenges of ISDs, the system we propose distinguishes between optimization processes that unfold over two distinct time-scales: the fast time-scale of learning and the slow time-scale of evolution (similar to BID23 . In the former, individual agents repeatedly participate in an intertemporal social dilemma using a fixed intrinsic motivation. In the latter, that motivation is itself subject to natural selection in a population. We model this intrinsic motivation as an additional additive term in the reward of each agent BID5 . We implement the intrinsic reward function as a two-layer fully-connected feed-forward neural network, whose weights define the genotype for evolution. We propose that evolution can help mitigate this intertemporal dilemma by bridging between these two timescales via an intrinsic reward function. Evolutionary theory predicts that evolving individual intrinsic reward weights across a population who interact uniformly at random does not lead to altruistic behavior BID0 . Thus, to achieve our goal, we must structure the evolutionary dynamics BID40 . We first implement a "Greenbeard" strategy BID9 BID28 in which agents choose interaction partners based on an honest, real-time signal of cooperativeness. We term this process assortative matchmaking. Although there is ecological evidence of assortative matchmaking BID30, it cannot explain cooperation in all taxa BID15 BID22 BID14 . Moreover it isn't a general method for multi-agent reinforcement learning, since honest signals of cooperativeness are not normally observable in the ISD models typically studied in deep reinforcement learning. To address the limitations of the assortative matchmaking approach, we introduce an alternative modular training scheme loosely inspired by ideas from the theory of multi-level (group) selection BID56 BID22, which we term shared reward network evolution. Here, agents are composed of two neural network modules: a policy network and a reward network. On the fast timescale of reinforcement learning, the policy network is trained using the modified rewards specified by the reward network. On the slow timescale of evolution, the policy network and reward network modules evolve separately from one another. In each episode every agent has a distinct policy network but the same reward network. As before, the fitness for the policy network is the individual's reward. In contrast, the fitness for the reward network is the collective return for the entire group of co-players. In terms of multi-level selection theory, the policy networks are the lower level units of evolution and the reward networks are the higher level units. Evolving the two modules separately in this manner prevents evolved reward networks from overfitting to specific policies. This evolutionary paradigm not only resolves difficult ISDs without handcrafting but also points to a potential mechanism for the evolutionary origin of social inductive biases. We varied and explored different combinations of parameters, namely: environments {Harvest, Cleanup}, reward network features {prospective, retrospective}, matchmaking {random, assortative}, and reward network evolution {individual, shared, none}. We describe these in the following sections. 
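A minimal sketch of the two-level credit assignment described above: after an episode, each of the five policy networks is scored by its own return, while the single shared reward network is scored by the group's total return. The function and variable names are illustrative; in the actual system these scores are additionally smoothed over episodes by a moving average, as described later in the text.

```python
def episode_fitness_credits(individual_returns, policy_ids, reward_net_id):
    """Two-level credit assignment after one episode: each policy network is
    credited with its own return (lower-level unit of selection), while the
    shared reward network is credited with the group's total return
    (higher-level unit)."""
    policy_credit = dict(zip(policy_ids, individual_returns))
    reward_net_credit = {reward_net_id: sum(individual_returns)}
    return policy_credit, reward_net_credit

pc, rc = episode_fitness_credits(
    individual_returns=[3.0, 5.0, 0.0, 2.0, 4.0],
    policy_ids=["policy_12", "policy_03", "policy_44", "policy_27", "policy_08"],
    reward_net_id="reward_net_7",
)
print(pc)   # policies are later selected on these individual scores
print(rc)   # the shared reward network is selected on the collective score
```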
In this paper, we consider Markov games within a MARL setting. Specifically we study intertemporal social dilemmas BID25, defined as games in which individually selfish actions produce individual benefit on short timescales but have negative impacts on the group over a longer time horizon. This conflict between the two timescales characterizes the intertemporal nature of these games. The tension between individual and group-level rationality identifies them as social dilemmas (e.g. the famous Prisoner's Dilemma). We consider two dilemmas, each implemented as a partially observable Markov game on a 2D grid (see FIG0). In the Cleanup game, agents tried to collect apples (reward +1) that spawned in a field at a rate inversely related to the cleanliness of a geographically separate aquifer. Over time, this aquifer filled up with waste, lowering the respawn rate of apples linearly, until a critical point past which no apples could spawn. Episodes were initialized with no apples present and zero spawning, thus necessitating cleaning. The dilemma occurred because in order for apples to spawn, agents must leave the apple field and clean, which conferred no reward. However if all agents declined to clean (defect), then no rewards would be received by any. In the Harvest game, again agents collected rewarding apples. The apple spawn rate at a particular point on the map depended on the number of nearby apples, falling to zero once there were no apples in a certain radius. There is a dilemma between the short-term individual temptation to harvest all the apples quickly and the consequential rapid depletion of apples, leading to a lower total yield for the group in the long-term. For more details, see the Appendix. In our model, there are three components to the reward that enter into agents' loss functions total reward, which is used for the policy loss, extrinsic reward, which is used for the extrinsic value function loss and intrinsic reward, which is used for the intrinsic value function loss. The total reward for player i is the sum of the extrinsic reward and an intrinsic reward as follows: DISPLAYFORM0 (The extrinsic reward r E i (s, a) is the environment reward obtained by player i when it takes action a i from state s i, sometimes also written with a time index t. The intrinsic reward u(f) is an aggregate social preference across features f and is calculated according to the formula, DISPLAYFORM1 where σ is the ReLU activation function, and θ = {W, v, b} are the parameters of a 2-layer neural network with 2 hidden nodes. These parameters are evolved based on fitness (see Section 2.3). The elements of v = (v 1, v 2) can be seen to approximately correspond to a linear combination of the coefficients related to advantagenous and disadvantagenous inequity aversion mentioned in BID25, which were found via grid search in this previous work, but are here evolved. The feature vector f i is a player-specific quantity that other agents can transform into intrinsic reward via their reward network. Each agent has access to the same set of features, with the exception that its own feature is demarcated specially. The features themselves are a function of recently received or expected future (extrinsic) reward for each agent. In Markov games the rewards received by different players may not be aligned in time. Thus, any model of social preferences should not be overly influenced by the precise temporal alignment of different players' rewards. 
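The intrinsic reward u(f) defined above is a two-hidden-unit MLP over the social feature vector, with parameters θ = {W, v, b} acting as the evolvable genotype, and the total reward is the sum of the extrinsic and intrinsic terms. The sketch below mirrors that equation; the initialization scale, the example feature values and the class name are placeholders rather than the authors' implementation.

```python
import numpy as np

class RewardNetwork:
    """Intrinsic-reward module u(f) = v . ReLU(W f + b) with two hidden units;
    theta = {W, v, b} is the genotype that evolution operates on."""
    def __init__(self, n_features, rng):
        self.W = rng.normal(scale=0.1, size=(2, n_features))
        self.b = rng.normal(scale=0.1, size=2)
        self.v = rng.normal(scale=0.1, size=2)

    def intrinsic_reward(self, f):
        return float(self.v @ np.maximum(0.0, self.W @ f + self.b))

    def total_reward(self, extrinsic, f):
        # r_i(s, a) = r^E_i(s, a) + u(f_i)
        return extrinsic + self.intrinsic_reward(f)

rng = np.random.default_rng(0)
net = RewardNetwork(n_features=5, rng=rng)         # one feature per co-player
f_i = np.array([0.8, 0.1, 0.0, 0.3, 0.2])          # e.g. temporally smoothed rewards
print(net.total_reward(extrinsic=1.0, f=f_i))
```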
Intuitively, they ought to depend on comparing temporally averaged reward estimates between players, rather than instantaneous values. Therefore, we considered two different ways of temporally aggregating the rewards. Figure 2: (a) Agent A_j adjusts policy π_j(s, a|φ) using off-policy importance-weighted actor-critic (V-Trace) BID10 by sampling from a queue with (possibly stale) trajectories recorded from 500 actors acting in parallel arenas. (b) The architecture includes intrinsic and extrinsic value heads, a policy head, and evolution of the reward network. The retrospective method derives intrinsic reward from whether an agent judges that other agents have been actually (extrinsically) rewarded in the recent past. The prospective variant derives intrinsic reward from whether other agents are expecting to be (extrinsically) rewarded in the near future. For the retrospective variant, f_ij = e^t_j, where the temporally decayed reward e^t_j for the agents j = 1, ..., N is updated at each timestep t according to e^t_j = η e^{t-1}_j + r^E_j(t), with η = 0.975. The prospective variant uses the value estimates V^est_j for f_ij and has a stop-gradient before the reward network module so that gradients don't flow back into other agents. We used the same training framework as in BID27, which performs distributed asynchronous training in multi-agent environments, including population-based training (PBT) BID26. We trained a population of 50 agents with policies {π_i}, from which we sampled 5 players in order to populate each of 500 arenas running in parallel. Within each arena, an episode of the environment was played with the sampled agents, before resampling new ones. Agents were sampled using one of two matchmaking processes (described in more detail below). Episode trajectories lasted 1000 steps and were written to queues for learning, from which weights were updated using V-Trace (Figure 2(a)). More details are in the Appendix. The set of weights evolved included the learning rate, entropy cost weight, and reward network weights θ. The parameters of the policy network φ were inherited in a Lamarckian fashion as in BID26. Furthermore, we allowed agents to observe their last actions a_{i,t-1}, last extrinsic rewards r^E_{i,t-1}(s_i, a_i), and last intrinsic rewards u_{i,t-1}(f_i) as input to the LSTM in the agent's neural network. The objective function was identical to that presented in BID10 and comprised three components: the value function gradient, policy gradient, and entropy regularization, weighted according to hyperparameters baseline cost and entropy cost (see Figure 2(b)). Evolution was based on a fitness measure calculated as a moving average of total episode return, which was a sum of apples collected minus penalties due to tagging, smoothed as F^i_j = (1 - ν) F^{i-1}_j + ν R^i_j, where ν = 0.001 and R^i_j is the return obtained on episode i by agent j (or reward network j in the case of shared reward network evolution; see Section 2.5 and Appendix for details). Matches were determined according to two methods: random matchmaking and assortative matchmaking. Random matchmaking simply selected uniformly at random from the pool of agents to populate the game, while assortative matchmaking first ranked agents within the pool according to a metric of recent cooperativeness, and then grouped agents such that players of similar rank played with each other. This ensured that highly cooperative agents played only with other cooperative agents, while defecting agents played only with other defectors.
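The scalar bookkeeping described above can be sketched with a few small Python helpers (illustrative, not the paper's code); the exact update convention for the decayed reward and the grouping granularity of the matchmaking are assumptions, and the cooperativeness metric itself is defined in the next paragraph.

def update_retrospective_features(e, extrinsic_rewards, eta=0.975):
    """Temporally decayed rewards e_j used as the retrospective features f_ij."""
    return [eta * e_j + r_j for e_j, r_j in zip(e, extrinsic_rewards)]

def update_fitness(fitness, episode_return, nu=0.001):
    """Moving average of total episode return used as the evolutionary fitness."""
    return (1.0 - nu) * fitness + nu * episode_return

def assortative_matchmaking(agent_ids, cooperativeness, group_size=5):
    """Rank agents by cooperativeness and group players of similar rank together."""
    ranked = sorted(agent_ids, key=lambda a: cooperativeness[a], reverse=True)
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]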
For Cleanup, cooperativeness was calculated based on the number of steps in the last episode during which the agent chose to clean. For Harvest, it was calculated based on the difference between the agent's return and the mean return of all players, so that having less return than average yielded a high cooperativeness ranking. Cooperative metric-based matchmaking was only done with either individual reward networks or no reward networks (FIG2). We did not use cooperative metric-based matchmaking for our multi-level selection model, since these are theoretically separate approaches. Building on previous work that evolved either the intrinsic reward BID27 or the entire loss function BID24, we separately evolved the reward network within its own population, thereby allowing different modules of the agent to compete only with like components. This allowed for independent exploration of hyperparameters via separate credit assignment of fitness, and thus considerably more of the hyperparameter landscape could be explored compared with using only a single pool. In addition, reward networks could be randomly assigned to any policy network, and so were forced to generalize to a wide range of policies. In a given episode, 5 separate policy networks were paired with the same reward network, which we term a shared reward network. In line with BID26, the fitness determining the copying of policy network weights and evolution of optimization-related hyperparameters (entropy cost and learning rate) was based on individual agent return. By contrast, the reward network parameters were evolved according to fitness based on total episode return across the group of co-players (FIG2). This contribution is distinct from previous work which evolved intrinsic rewards (e.g. BID27) because (1) we evolve over social features rather than a remapping of environmental events, and (2) reward network evolution is motivated by dealing with the inherent tension in ISDs, rather than merely providing a denser reward signal. In this sense it's closer to evolving a form of communication for social cooperation, rather than learning reward-shaping in a sparse-reward environment. We allow for multiple agents to share the same components, and as we shall see, in a social setting, this winds up being critical. Shared reward networks provide a biologically principled method that mixes between group fitness on a long timescale and individual reward on a short timescale. This contrasts with hand-crafted means of aggregation, as in previous work BID4 BID35. As shown in FIG3, PBT without using an intrinsic reward network performs poorly on both games, where it asymptotes to 0 total episode reward in Cleanup and 400 for Harvest (the number of apples gained if all agents collect as quickly as they can). Figures 4(a) and (b) compare random and assortative matchmaking with PBT and reward networks using retrospective social features. When using random matchmaking, individual reward network agents perform no better than PBT on Cleanup, and only moderately better at Harvest. Hence there is little benefit to adding reward networks over social features if players have separate networks, evolved selfishly. The assortative matchmaking experiments used either no reward network (u(f) = 0) or individual reward networks. Without a reward network, performance was the same as the PBT baseline.
With individual reward networks, performance was very high, indicating that both conditioning the internal rewards on social features and a preference for cooperative agents to play together were key to resolving the dilemma. On the other hand, shared reward network agents perform as well as assortative matchmaking and the handcrafted inequity-aversion intrinsic reward from BID25, even using random matchmaking. This implies that agents didn't necessarily need to have immediate access to honest signals of other agents' cooperativeness to resolve the dilemma; it was enough to simply have the same intrinsic reward function, evolved according to collective episode return. Figures 4(c) and (d) compare the retrospective and prospective variants of reward network evolution. The prospective variant, although better than PBT when using a shared reward network, generally results in worse performance and more instability. This is likely because the prospective variant depends on agents learning good value estimates before the reward networks become useful, whereas the retrospective variant only depends on environmentally provided reward and thus does not suffer from this issue. We next plot various social outcome metrics in order to better capture the complexities of agent behavior (see FIG4 for Harvest, see Appendix for Cleanup). Sustainability measures the average time step on which agents received positive reward, averaged over the episode and over agents. Figure 5(a) shows that having no reward network results in players collecting apples extremely quickly, compared with much more sustainable behavior with reward networks. Equality is calculated as E(1 - G(R)), where G(R) is the Gini coefficient over individual returns. FIG4(b) demonstrates that having the prospective version of reward networks tends to lead to lower equality, while the retrospective variant has very high equality. Tagging measures the average number of times a player fined another player throughout the episode. FIG4(c) shows that there is a higher propensity for tagging when using either a prospective reward network or an individual reward network, compared to the retrospective shared reward network. This explains the performance shown in FIG3. Finally, we can directly examine the weights of the final retrospective shared reward networks which were best at resolving the ISDs. Interestingly, the final weights evolved in the second layer suggest that resolving each game might require a different set of social preferences. In Cleanup, one of the final layer weights, v_2, evolved to be close to 0, whereas in Harvest, v_1 and v_2 evolved to be of large magnitude but opposite sign. We can see a similar pattern with the biases b. We interpret this to mean that Cleanup required a less complex reward network: it was enough to simply find other agents' being rewarded as intrinsically rewarding. In Harvest, however, a more complex reward function was perhaps needed in order to ensure that other agents were not over-exploiting the apples. We found that the first layer weights W tended to take on arbitrary (but positive) values. This is because of random matchmaking: co-players were randomly selected and thus there was little evolutionary pressure to specialize these weights. Real environments don't provide scalar reward signals to learn from. Instead, organisms have developed various internal drives based on either primary or secondary goals BID1. Here we examined intrinsic rewards based on features derived from other agents in the environment.
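As a note on the metrics reported above, the equality measure E(1 - G(R)) can be computed directly from the vector of individual returns; the following is a small illustrative Python sketch (the exact Gini estimator used in the experiments is an assumption).

import numpy as np

def equality(returns):
    """Equality = 1 - G(R), where G is the Gini coefficient of individual returns."""
    r = np.sort(np.asarray(returns, dtype=float))
    n = r.size
    if r.sum() == 0:
        return 1.0
    # Standard Gini formula on sorted values.
    gini = 2.0 * np.sum(np.arange(1, n + 1) * r) / (n * r.sum()) - (n + 1) / n
    return 1.0 - gini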
In accord with evolutionary theory BID0 BID40, we found that naïvely implementing natural selection via genetic algorithms did not lead to the emergence of cooperation. Furthermore, assortative matchmaking was sufficient to generate cooperative behavior in cases where honest signals were available. Finally, we proposed a new multi-level evolutionary paradigm based on shared reward networks that achieves cooperation in more general situations. Why does evolving intrinsic social preferences promote cooperation? Firstly, evolution ameliorates the intertemporal choice problem by distilling the long timescale of collective fitness into the short timescale of individual reinforcement learning, thereby improving credit assignment between selfish acts and their temporally displaced negative group outcomes BID25. Secondly, it mitigates the social dilemma itself by allowing evolution to expose social signals that correlate with, for example, an agent's current level of selfishness. Such information powers a range of mechanisms for achieving mutual cooperation like competitive altruism BID21, other-regarding preferences BID7, and inequity aversion BID11. In accord, laboratory experiments show that humans cooperate more readily when they can communicate BID43 BID29. The shared reward network evolution model was inspired by multi-level selection; yet it does not correspond to the prototypical case of that theory since its lower level units of evolution (the policy networks) are constantly swapping which higher level unit (reward network) they are paired with. Nevertheless, there are a variety of ways in which we see this form of modularity arise in nature. For example, free-living microorganisms occasionally form multi-cellular structures to solve a higher order adaptive problem, like slime mold forming a spore-producing stalk for dispersal BID54, and many prokaryotes can incorporate plasmids (modules) found in their environment or received from other individuals as functional parts of their genome, thereby achieving cooperation in social dilemmas BID17 BID37. Alternatively, in humans a reward network may represent a shared "cultural norm", with its fitness based on cultural information accumulated from the groups in which it holds sway. In this way, the spread of norms can occur independently of the success of individual agents BID2. For future work, we suggest investigating alternative evolutionary mechanisms for the emergence of cooperation, such as kin selection BID16 and reciprocity BID52. It would be interesting to see whether these lead to different weights in a reward network, potentially hinting at the evolutionary origins of different social biases. Along these lines, one might consider studying an emergent version of the assortative matchmaking model along the lines suggested by BID22, adding further generality and power to our setup. Finally, it would be fascinating to determine how an evolutionary approach can be combined with multi-agent communication to produce that most paradoxical of cooperative behaviors: cheap talk. All episodes last 1000 steps, and the total size of the playable area is 25×18 for Cleanup and 36×16 for Harvest. Games are partially observable in that agents can only observe via a 15×15 RGB window, centered on their current location. The action space consists of moving left, right, up, and down, rotating left and right, and the ability to tag each other.
This action has a reward cost of 1 to use, and causes the player tagged to lose 50 reward points, thus allowing for the possibility of punishing free-riders BID42 BID18. The Cleanup game has an additional action for cleaning waste. Training was done via joint optimization of network parameters via SGD and hyperparameters/reward network parameters via evolution in the standard PBT setup. Gradient updates were applied for every trajectory up to a maximum length of 100 steps, using a batch size of 32. Optimization was via RMSProp with epsilon = 10^-5, momentum = 0, decay rate = 0.99, and an RL discount factor of 0.99. The baseline cost weight (see BID39) was fixed at 0.25, and the entropy cost was sampled from LogUniform(2 × 10^-4, 0.01) and evolved throughout training using PBT. The learning rates were all initially set to 4 × 10^-4 and then allowed to evolve. PBT uses evolution (specifically genetic algorithms) to search over a space of hyperparameters rather than manually tuning or performing a random search, resulting in an adaptive schedule of hyperparameters and joint optimization with network parameters learned through gradient descent BID26. There was a mutation rate of 0.1 when evolving hyperparameters, using multiplicative perturbations of ±20% for entropy cost and learning rate, and additive perturbations of ±0.1 for reward network parameters. We implemented a burn-in period for evolution of 4 × 10^6 agent steps, to allow network parameters and hyperparameters to be used in enough episodes for an accurate assessment of fitness before evolution.
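A rough Python sketch of the mutation step described above is given below (illustrative only; whether the mutation probability is applied per hyperparameter or per agent, and whether reward-network weights are perturbed element-wise, are assumptions).

import random

def mutate(entropy_cost, learning_rate, reward_weights, mutation_rate=0.1):
    """PBT-style mutation: multiplicative +/-20% perturbations for entropy cost and
    learning rate, additive +/-0.1 perturbations for reward network parameters."""
    if random.random() < mutation_rate:
        entropy_cost *= random.choice([0.8, 1.2])
    if random.random() < mutation_rate:
        learning_rate *= random.choice([0.8, 1.2])
    reward_weights = [w + random.choice([-0.1, 0.1])
                      if random.random() < mutation_rate else w
                      for w in reward_weights]
    return entropy_cost, learning_rate, reward_weights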
We introduce a biologically-inspired modular evolutionary algorithm in which deep RL agents learn to cooperate in a difficult multi-agent social game, which could help to explain the evolution of altruism.
735
scitldr
In adversarial attacks to machine-learning classifiers, small perturbations are added to input that is correctly classified. The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are misclassified. In standard neural networks used for deep learning, attackers can craft adversarial examples from most input to cause a misclassification of their choice. We introduce a new type of network units, called RBFI units, whose non-linear structure makes them inherently resistant to adversarial attacks. On permutation-invariant MNIST, in absence of adversarial attacks, networks using RBFI units match the performance of networks using sigmoid units, and are slightly below the accuracy of networks with ReLU units. When subjected to adversarial attacks based on projected gradient descent or fast gradient-sign methods, networks with RBFI units retain accuracies above 75%, while networks with ReLU or Sigmoid units see their accuracies reduced to below 1%. Further, RBFI networks trained on regular input either exceed or closely match the accuracy of sigmoid and ReLU networks trained with the help of adversarial examples. The non-linear structure of RBFI units makes them difficult to train using standard gradient descent. We show that networks of RBFI units can be efficiently trained to high accuracies using pseudogradients, computed using functions especially crafted to facilitate learning instead of their true derivatives. Machine learning via deep neural networks has been remarkably successful in a wide range of applications, from speech recognition to image classification and language processing. While very successful, deep neural networks are affected by adversarial examples: small, especially crafted modifications of correctly classified input that are misclassified (BID20). The trouble with adversarial examples is twofold. The modifications to regular input are so small as to be difficult or impossible to detect for a human: this has been shown both in the case of images (BID20; BID14) and sounds (BID9; BID5). Further, the adversarial examples are in some measure transferable from one neural network to another (BID7; BID14; BID16; BID22), so they can be crafted even without precise knowledge of the weights of the target neural network. At a fundamental level, it is hard to provide guarantees about the behavior of a deep neural network, when every correctly classified input is tightly encircled by very similar, yet misclassified, inputs. Thus far, the approach for obtaining neural networks that are more resistant to adversarial attacks has been to feed to the networks, as training data, an appropriate mix of the original training data, and adversarial examples (BID7; BID12). In training neural networks using adversarial examples, if the examples are generated via efficient heuristics such as the fast gradient sign method, the networks learn to associate the specific adversarial examples to the original input from which they were derived, in a phenomenon known as label leaking (BID10; BID12; BID21). This does not result in increased resistance to general adversarial attacks (BID12; BID4). If the adversarial examples used in training are generated via more general optimization techniques, as in BID12, networks with markedly increased resistance to adversarial attacks can be obtained, at the price of a more complex and computationally expensive training regime, and an increase in required network capacity.
We pursue here a different approach, proposing the use of neural network types that are, due to their structure, inherently impervious to adversarial attacks, even when trained on standard input only. In BID7, the authors connect the presence of adversarial examples to the (local) linearity of neural networks. In a purely linear form Σ_{i=1}^n x_i w_i, we can perturb each x_i by ε, taking x_i + ε if w_i > 0, and x_i - ε if w_i < 0. This causes an output perturbation of magnitude ε Σ_{i=1}^n |w_i|, or ε n w̄ for w̄ the average modulus of the w_i. When the number of inputs n is large, as is typical of deep neural networks, a small input perturbation can cause a large output change. Of course, deep neural networks are not globally linear, but the insight of BID7 is that they may be sufficiently locally linear to allow adversarial attacks. Following this insight, we develop networks composed of units that are highly non-linear. The networks on which we settled after much experimentation are a variant of the well-known radial basis functions (RBFs) (BID0; BID6; BID15); we call our variant RBFI units. RBFI units are similar to classical Gaussian RBFs, except for two differences that are crucial in obtaining both high network accuracy, and high resistance to attacks. First, rather than being radially symmetrical, RBFIs can scale each input component individually; in particular, they can be highly sensitive to some inputs while ignoring others. This gives an individual RBFI unit the ability to cover more of the input space than its symmetrical variants. Further, the distance of an input from the center of the Gaussian is measured not in the Euclidean, or ℓ_2, norm, but in the infinity norm ℓ_∞, which is equal to the maximum of the differences of the individual components. This eliminates all multi-input linearity from the local behavior of a RBFI: at any point, the output depends on one input only; the n in the above discussion is always 1 for RBFIs, so to speak. The "I" in RBFI stands for the infinity norm. Using deeply nonlinear models is hardly a new idea, but the challenge has been that such models are typically difficult to train. Indeed, we show that networks with RBFI units cannot be satisfactorily trained using gradient descent. To get around this, we show that the networks can be trained efficiently, and to high accuracy, using pseudogradients. A pseudogradient is computed just as an ordinary gradient, except that we artificially pretend that some functions have a derivative that is different from the true derivative, and especially crafted to facilitate training. In particular, we use pseudoderivatives for the exponential function, and for the maximum operator, that enter the definition of Gaussian RBFI units. Gaussians have very low derivative away from their center, which makes training difficult; our pseudoderivative artificially widens the region of detectable gradient around the Gaussian center. The maximum operator appearing in the infinity norm has non-zero derivative only for one of its inputs at a time; we adopt a pseudogradient that propagates back the gradient to all of its inputs, according to their proximity in value to the maximum input. Tampering with the gradient may seem unorthodox, but methods such as AdaDelta (BID23), and even gradient descent with momentum, cause training to take a trajectory that does not follow pure gradient descent. We simply go one step further, devising a scheme that operates at the granularity of the individual unit.
We show that with these two changes, RBFIs can be easily trained with standard random (pseudo)gradient descent methods, yielding networks that are both accurate and resistant to attacks. To conduct our experiments, we have implemented RBFI networks on top of the PyTorch framework (BID18). The code will be made available in a final version of the paper. We consider permutation-invariant MNIST, which is a version of MNIST in which the 28 × 28 pixel images are flattened into a one-dimensional vector of 784 values and fed as a feature vector to neural networks (BID7). On this test set, we show that for nets of 512, 512, 512, 10 units, RBFI networks match the classification accuracy of networks of sigmoid units ((96.96 ± 0.14)% for RBFI vs. (96.88 ± 0.15)% for sigmoid), and are close to the performance of networks with ReLU units ((98.62 ± 0.08)%). When trained over standard training sets, RBFI networks retain accuracies over 75% for adversarial attacks that reduce the accuracy of ReLU and sigmoid networks to below 2% (worse than random). We show that RBFI networks trained on normal input are superior to ReLU and sigmoid networks trained even with adversarial examples. Our experimental results can be summarized as follows:
• In absence of adversarial attacks, RBFI networks match the accuracy of sigmoid networks, and are slightly lower in accuracy than ReLU networks.
• When networks are trained with regular input only, RBFI networks are markedly more resistant to adversarial attacks than sigmoid or ReLU networks.
• In presence of adversarial attacks, RBFI networks trained on regular input provide higher accuracy than sigmoid or ReLU networks, even when the latter are trained also on adversarial examples, and even when the adversarial examples are obtained via general projected gradient descent (BID12).
• RBFI networks can be successfully trained with pseudogradients; training via standard gradient descent yields instead markedly inferior results.
• Appropriate regularization helps RBFI networks gain increased resistance to adversarial attacks.
Much work remains to be done, including experimenting with convolutional networks using RBFI units for images. However, the results seem promising, in that RBFI units seem to offer a viable alternative to current adversarial training regimes in achieving robustness to adversarial attacks. Adversarial examples were first noticed in BID20, where they were generated via the solution of general optimization problems. In BID7, a connection was established between linearity and adversarial attacks. A fully linear form Σ_{i=1}^n x_i w_i can be perturbed by using x_i + ε sign(w_i), generating an output change of magnitude ε · Σ_{i=1}^n |w_i|. In analogy, BID7 introduced the fast gradient sign method (FGSM) of creating adversarial perturbations, by taking x'_i = x_i + ε sign(∇_i L), where ∇_i L is the loss gradient with respect to input i. The work also showed how adversarial examples are often transferable across networks, and it asked the question of whether it would be possible to construct non-linear structures, perhaps inspired by RBFs, that are less linear and are more robust to adversarial attacks. This entire paper is essentially a long answer to the conjectures and suggestions expressed in BID7. It was later discovered that training on adversarial examples generated via FGSM does not confer strong resistance to attacks, as the network learns to associate the specific examples generated by FGSM to the original training examples in a phenomenon known as label leaking (BID10; BID12; BID21).
The FGSM method for generating adversarial examples was extended to an iterative method, I-FGSM, in BID9. In BID21, it is shown that using small random perturbations before applying FGSM enhances the robustness of the resulting network. The network trained in BID21 using I-FGSM and an ensemble method won the first round of the NIPS 2017 competition on defenses with respect to adversarial attacks. Carlini and Wagner in a series of papers show that training regimes based on generating adversarial examples via simple heuristics, or combinations of these, in general fail to convey true resistance to attacks (BID3 b). They further advocate measuring the resistance to attacks with respect to attacks found via more general optimization processes. In particular, FGSM and I-FGSM rely on the local gradient, and training techniques that break the association between the local gradient and the location of adversarial examples make networks harder to attack via FGSM and I-FGSM, without making the networks harder to attack via general optimization techniques. In this paper, we follow this suggestion by using a general optimization method, projected gradient descent (PGD), to generate adversarial attacks and evaluate network robustness. BID2 also shows that the technique of defensive distillation, which consists in appropriately training a neural network on the output of another BID17, protects the networks from FGSM and I-FGSM attacks, but does not improve network resistance in the face of general adversarial attacks. In BID12 it is shown that by training neural networks on adversarial examples generated via PGD, it is possible to obtain networks that are genuinely more resistant to adversarial examples. The price to pay is a more computationally intensive training, and an increase in the network capacity required. We provide an alternative way of reaching such resistance, one that does not rely on a new training regime. In BID7, the adversarial attacks are linked to the linearity of the models. Following this insight, we seek to use units that do not exhibit a marked linear behavior, and specifically, units which yield small output variations for small variations of their inputs measured in infinity norm. A linear form g(x) = Σ_i x_i w_i represents the norm-2 distance of the input vector x to a hyperplane perpendicular to the vector w, scaled by the norm of w and its orientation. It is not advantageous to simply replace this norm-2 distance with an infinity-norm distance, as the infinity-norm distance between a point and a plane is not a very useful concept. It is preferable to consider the infinity-norm distance between points. Hence, we define our units as variants of the classical Gaussian radial basis functions (BID1; BID15). We call our variant RBFI, to underline the fact that they are built using the infinity norm. An RBFI unit U(u, w) for an input in ℝ^n is parameterized by two vectors of weights u = (u_1, ..., u_n) and w = (w_1, ..., w_n). Given an input x ∈ ℝ^n, the unit produces output U(u, w)(x) = exp(-‖u ⊙ (x - w)‖²_∞), where ⊙ is the Hadamard, or element-wise, product. In this expression, the vector w is a point from which the distance to x is measured in infinity norm, and the vector u provides scaling factors for each coordinate. Without loss of expressiveness, we require the scaling factors to be non-negative, that is, u_i ≥ 0 for all 1 ≤ i ≤ n. The scaling factors provide the flexibility of disregarding some inputs x_i, by having u_i ≈ 0, while emphasizing the influence of other inputs.
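The following is a minimal NumPy sketch of a single RBFI unit as just defined (the And/Or distinction introduced in the next paragraph is included as a flag); it is an illustration, not the implementation used in the paper.

import numpy as np

def rbfi_unit(x, u, w, and_unit=True):
    """RBFI unit: exp(-||u * (x - w)||_inf^2); an Or unit is one minus an And unit.

    u holds non-negative per-coordinate scaling factors, w is the center.
    """
    z = np.max(np.abs(u * (x - w))) ** 2
    y = np.exp(-z)
    return y if and_unit else 1.0 - y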
Writing out explicitly, we have U(u, w)(x) = exp(-max_{1≤i≤n} (u_i (x_i - w_i))²). The output of a RBFI unit is close to 1 only when x is close to w in the coordinates that have large scaling factors. Thus, the unit is reminiscent of an And gate, with normal or complemented inputs, which outputs 1 only for one value of its inputs. Logic circuits are composed both of And and of Or gates. Thus, we introduce an Or RBFI unit by U_OR(u, w) = 1 - U(u, w). We construct neural networks out of RBFI units using layers consisting of And units, layers consisting of Or units, and mixed layers, in which the unit type is chosen at random at network initialization. To form an intuitive idea of why networks with RBFI units might resist adversarial attacks, it is useful to compute the sensitivity of individual units to such attacks. For x ∈ ℝ^n and ε > 0, let B_ε(x) = {x' | ‖x - x'‖_∞ ≤ ε} be the set of inputs within distance ε from x in infinity norm. Given a function f: ℝ^n → ℝ, we call its sensitivity to adversarial attacks the quantity s(f) = sup_{x, ε>0} (1/ε) max_{x' ∈ B_ε(x)} |f(x') - f(x)|. The sensitivity represents the maximum change in output we can obtain via an input change within ε in infinity norm, as a multiple of ε itself. For a single ReLU unit with weight vector w, the sensitivity is given by s = Σ_{i=1}^n |w_i| = ‖w‖_1. This formula can be understood by noting that the worst case for a ReLU unit corresponds to considering an input x for which the output is positive, and taking x'_i = x_i + ε if w_i > 0, and x'_i = x_i - ε if w_i < 0 (BID7). Similarly, for a single sigmoid unit with weight vector w, we have s = (1/4)‖w‖_1, where the factor of 1/4 corresponds to the maximum derivative of the sigmoid. For a RBFI unit U(u, w), on the other hand, the sensitivity is bounded by a constant multiple of ‖u‖_∞. Thus, the sensitivity of ReLU and Sigmoid units increases linearly with input size, whereas the sensitivity of RBFI units is essentially constant with respect to input size. These formulas can be extended to bounds for whole networks. For a ReLU network with K_0 inputs and layers of K_1, ..., K_M units, where w^(k)_ij is the weight for input i of unit j of layer k, for 1 ≤ k ≤ M, we can compute an upper bound ŝ for the sensitivity of the network by multiplying per-layer bounds, each taken as the maximum over the layer's units of the ℓ_1 norm of the unit's incoming weights. The formula for Sigmoid networks is identical except for the 1/4 factors. Using similar notation, an analogous layer-wise product bound holds for RBFI networks in terms of the ‖u‖_∞ values of their units. By connecting in a simple way the sensitivity to attacks to the network weights, these formulas suggest the possibility of using weight regularization to achieve robustness: by adding cŝ to the loss function for c > 0, we might be able to train networks that are both accurate and robust to attacks. We will show in Section 6.5 that such a regularization helps train more robust RBFI networks, but it does not help train more robust ReLU networks. These non-linearities make neural networks containing RBFI units difficult to train using standard gradient descent, as we will show experimentally. The problem lies in the shape of Gaussian functions. Far from its peak for x = w, a Gaussian of this form is rather flat, and its derivative may not be large enough to cause the vector of weights w to move towards useful places in the input space during training. To obtain networks that are easy to train, we replace the derivatives for exp and max with alternate functions, which we call pseudoderivatives. These pseudoderivatives are then used in the chain-rule computation of the loss gradient in lieu of the true derivatives, yielding a pseudogradient. Exponential function.
In computing the partial derivatives of the RBFI output via the chain rule, the first step consists in computing d/dz e^{-z}, which is of course equal to -e^{-z}. The problem is that -e^{-z} is very close to 0 when z is large, and z here is ‖u ⊙ (x - w)‖²_∞, which can be large. Hence, in the chain-rule computation of the gradient, we replace -e^{-z} with the alternate "pseudoderivative" -1/√(1 + z), which has a much longer tail. Max. The gradient of y = max_{1≤i≤n} z_i, of course, is given by ∂y/∂z_i = 1 if z_i = y, and ∂y/∂z_i = 0 if z_i < y. The problem is that this transmits feedback only to the largest input(s). This slows down training and can create instabilities. We use as pseudoderivative e^{z_i - y}, so that some of the feedback is transmitted to inputs z_i that approach y. One may be concerned that by using the loss pseudogradient as the basis of optimization, rather than the true loss gradient, we may converge to solutions where the pseudogradient is null, and yet, we are not at a minimum of the loss function. This can indeed happen. We experimented with switching to training with true gradients once the pseudogradients failed to yield improvements; this increased the accuracy on the training set, but barely improved it on the testing set. It is conceivable that more sophisticated ways of mixing training with regular and pseudo-gradients would allow training RBFI networks to higher accuracy on the testing set. Given a correctly classified input x for a network, and a perturbation size ε > 0, an input x̃ is an adversarial example for x if x̃ is misclassified, and ‖x - x̃‖_∞ ≤ ε. Consider a network trained with cost function J(θ, x, y), where θ is the set of network parameters, x is the input, and y is the output. Indicate with ∇_x J(θ, x, y) the gradient of J wrt its input x computed at values x of the inputs, parameters θ, and output y. For each input x belonging to the testing set, given a perturbation amount ε > 0, we produce adversarial examples x̃ with ‖x - x̃‖_∞ ≤ ε using the following techniques. Fast Gradient Sign Method (FGSM) (BID7). If the cost were linear around x, the optimal ε-max-norm perturbation of the input would be given by ε sign(∇_x J(θ, x, y)). This suggests taking as adversarial example x̃ = x + ε sign(∇_x J(θ, x, y)). Iterative FGSM (I-FGSM) (BID9). Instead of computing a single perturbation of size ε using the sign of the gradient, we apply M perturbations of size ε/M, each computed from the endpoint of the previous one. Precisely, the attack computes a sequence x̃_0, x̃_1, ..., x̃_M, where x̃_0 = x, and where each x̃_{i+1} is obtained, for 0 ≤ i < M, by x̃_{i+1} = x̃_i + (ε/M) sign(∇_x J(θ, x̃_i, y)). We then take x̃ = x̃_M as our adversarial example. This attack is more powerful than its single-step version, as the direction of the perturbation can better adapt to non-linear cost gradients in the neighborhood of x (BID9). Projected Gradient Descent (PGD) (BID12). For an input x ∈ ℝ^n and a given maximum perturbation size ε > 0, we consider the set B_ε(x) ∩ [0, 1]^n of valid inputs around x, and we perform projected gradient descent (PGD) in B_ε(x) ∩ [0, 1]^n of the negative loss with which the network has been trained (or, equivalently, projected gradient ascent wrt. the loss). By following the gradient in the direction of increasing loss, we aim at finding mis-classified inputs in B_ε(x) ∩ [0, 1]^n. As the gradient is non-linear, to check for the existence of adversarial attacks we perform the descent multiple times, each time starting from a point of B_ε(x) ∩ [0, 1]^n chosen uniformly at random.
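For concreteness, the three gradient-based attacks just described can be sketched as follows (illustrative PyTorch, assuming a single input with values in [0, 1] and a fixed PGD step size rather than the AdaDelta-tuned steps used in the experiments).

import torch

def fgsm(model, loss_fn, x, y, eps):
    # Single-step FGSM: x_tilde = x + eps * sign(grad_x J(theta, x, y)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def i_fgsm(model, loss_fn, x, y, eps, steps=10):
    # M perturbations of size eps / M, each taken from the previous endpoint.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = (x_adv + (eps / steps) * x_adv.grad.sign()).detach()
    return x_adv

def pgd(model, loss_fn, x, y, eps, steps=100, restarts=20, step_size=0.01):
    # Projected gradient ascent on the loss inside B_eps(x) intersected with [0, 1]^n,
    # restarted from random starting points; stop early on a misclassification.
    x_adv = x.clone().detach()
    for _ in range(restarts):
        x_adv = (x + (2 * torch.rand_like(x) - 1) * eps).clamp(0.0, 1.0).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss_fn(model(x_adv), y).backward()
            with torch.no_grad():
                x_adv = x_adv + step_size * x_adv.grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
            x_adv = x_adv.detach()
            if model(x_adv).argmax(dim=-1).item() != int(y):
                return x_adv
    return x_adv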
Noise. In addition to the above adversarial examples, we will study the robustness of our networks by feeding them inputs affected by noise. For a testing input x and a noise amount ε ∈ [0, 1], we produce an ε-noisy version x̃ via x̃ = (1 - ε)x + εχ, where χ is a random element of the input space, which for MNIST is [0, 1]^n. We have implemented FGSM, I-FGSM, and PGD attacks for RBFI both relying on standard gradients, and relying on pseudogradients. In the results, we denote the pseudogradient-based attacks via RBFI [psd]. The idea is that if pseudogradients are useful in training, they are likely to be useful also in attacking the networks, and an adversary may well rely on them. BID4 show that many networks that resist FGSM and I-FGSM attacks can still be attacked by using general optimization-based methods. Thus, they argue that the evaluation of attack resistance should include general optimization methods; the PGD attacks we consider are an example of such methods. 6.1 EXPERIMENTAL SETUP. Implementation. We implemented RBFI networks in the PyTorch framework (BID18). In order to extend PyTorch with a new function f, it is necessary to specify the function behavior f(x), and the function gradient ∇_x f. To implement RBFI, we extend PyTorch with two new functions: a LargeAttractorExp function, with forward behavior e^{-x} and backward gradient propagation according to -1/√(1 + x), and SharedFeedbackMax, with forward behavior y = max_{i=1..n} x_i and backward gradient propagation according to e^{x_i - y}. These two functions are used in the definition of RBFI units, with the AutoGrad mechanism of PyTorch providing backward (pseudo)gradient propagation for the complete networks. Dataset. We use the MNIST dataset (BID11) for our experiments, following the standard setup of 60,000 training examples and 10,000 testing examples. Each digit image was flattened to a one-dimensional feature vector of length 28 × 28 = 784, and fed to a fully-connected neural network; this is the so-called permutation-invariant MNIST. Neural networks. We compared the accuracy of the following fully-connected network structures.
• ReLU networks (BID13), whose output is fed into a softmax; the network is trained via cross-entropy loss.
• Sigmoid networks, trained with square-error loss.
• RBFI networks, trained using square-error loss. For a RBFI network with m layers, we denote its type as RBFI(K_1, ..., K_m | t_1, ..., t_m), where K_1, ..., K_m are the numbers of units in each layer, and where the units in layer i are And units if t_i = ∧, Or units if t_i = ∨, and are a random mix of And and Or units if t_i = *.
Table 1: Performance of 512-512-512-10 networks for MNIST testing input, and for input corrupted by adversarial attacks and noise computed with perturbation size ε = 0.3.
Square-error loss worked as well or better than other loss functions for Sigmoid and RBFI networks. Unless otherwise noted, we use networks with layers of 512, 512, 512, and 10 units, and in the case of RBFI networks, we used geometry RBFI(512, 512, 512, 10 | ∧, ∨, ∧, ∨). For RBFI networks we use a bound of [0.01, 3] for the components of the u-vectors, and of [0, 1] for the w-vectors, the latter corresponding to the value range of MNIST pixels. We experimented with RBFI networks with various geometries, and we found the performance differences to be rather small, for reasons we do not yet fully understand. We trained all networks with the AdaDelta optimizer (BID23), which yielded good results for all networks considered. Attacks. We applied FGSM, I-FGSM, and noise attacks to the whole test set. In I-FGSM attacks, we performed 10 iterations of the iterative update above.
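To show how the two custom functions described in the implementation paragraph above fit into PyTorch's autograd, here is a minimal sketch consistent with that description (not the paper's released code):

import torch

class LargeAttractorExp(torch.autograd.Function):
    """Forward: exp(-z). Backward pseudoderivative: -1 / sqrt(1 + z)."""
    @staticmethod
    def forward(ctx, z):
        ctx.save_for_backward(z)
        return torch.exp(-z)

    @staticmethod
    def backward(ctx, grad_output):
        (z,) = ctx.saved_tensors
        return grad_output * (-1.0 / torch.sqrt(1.0 + z))

class SharedFeedbackMax(torch.autograd.Function):
    """Forward: max over the last dimension. Backward: feedback exp(x_i - y) to every input."""
    @staticmethod
    def forward(ctx, x):
        y, _ = x.max(dim=-1)
        ctx.save_for_backward(x, y)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        x, y = ctx.saved_tensors
        return grad_output.unsqueeze(-1) * torch.exp(x - y.unsqueeze(-1))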
As PGD attacks are considerably more computationally intensive, we apply them to one run only, and we compute the performance under PGD attacks for the first 5,000 examples in the test set. For each input x in the test set, we perform 20 searches, or restarts. In each search, we start from a random point in B_ε(x) and we perform 100 steps of projected gradient descent using the AdaDelta algorithm to tune step size; if at any step a misclassified example is generated, the attack is considered successful. In Table 1 we summarize the results on the accuracy and resistance to adversarial examples for networks trained on the standard MNIST training set. The results are computed from 10 training runs for ReLU and Sigmoid networks, and from 5 runs for RBFI and RBFI [psd]. In each run we used different seeds for the random generator used for weight initialization; each run consisted of 30 training epochs. In a result of the form a ± e, a is the percentage accuracy, and e is the standard deviation in the accuracy of the individual runs. In absence of perturbations, RBFI networks lose (1.66 ± 0.21)% performance compared to ReLU networks (from (98.62 ± 0.07)% to (96.96 ± 0.14)%), and perform comparably to sigmoid networks (the difference is below the standard deviation of the results). When perturbations are present, in the form of adversarial attacks or noise, the performance of RBFI networks is superior. We note that the FGSM and I-FGSM attacks performed using regular gradients are not effective against RBFI networks. This phenomenon is called gradient masking: the gradient in proximity of valid inputs offers little information about the possible location of adversarial examples BID4. Pseudogradients do avoid gradient masking, and indeed the most effective attack against RBFI networks is I-FGSM performed using pseudogradients, which lowers the accuracy to (78.92 ± 1.91)% for ε = 0.3. Including adversarial examples in the training set is the most common method used to make neural networks more resistant to adversarial attacks (BID7; BID12). We explored whether ReLU and Sigmoid networks trained via a mix of normal and adversarial examples offer a resistance to adversarial attacks comparable to that offered by RBFI networks trained on standard examples only. For brevity, we omit the results for Sigmoid networks, as they were consistently inferior to those for ReLU networks. We compared the performance of a RBFI network with that of a ReLU network trained normally (indicated simply by ReLU), and with ReLU networks trained as follows:
• ReLU(FGSM) and ReLU(I-FGSM): for each (x, t) in the training set, we construct an adversarial example x̃ via FGSM or I-FGSM, and we feed both (x, t) and (x̃, t) to the network for training.
• ReLU(PGD): for each (x, t) in the training set, we perform 100 steps of projected gradient descent from a point chosen at random in B_ε(x) ∩ [0, 1]^n; denoting by x̃ the ending point of the projected gradient descent, we feed both (x, t) and (x̃, t) to the network for training.
We generated adversarial examples for training with ε = 0.3, which is consistent with BID12. Due to the high computational cost of adversarial training (and in particular, PGD adversarial training), we performed one run, and performed the training of ReLU networks for 10 epochs, which seemed sufficient for their accuracy to plateau. The results are given in FIG1.
Overall, the best networks may be the simple RBFI networks, trained without the use of adversarial examples: for each class of attack, they exhibit either the best performance, or they are very close in performance to the best performer; this is true for no other network type. For PGD attacks, the best performance is obtained by ReLU(PGD) networks trained on PGD attacks, but this may be simply due to gradient masking: note that ReLU(PGD) networks do not perform well with respect to I-FGSM attacks. We note that ReLU(FGSM) networks seem to learn that ε = 0.3 FGSM attacks are likely, but they have not usefully generalized the lesson, for instance, to attacks of size 0.1. The S-shaped performance curve of ReLU(FGSM) with respect to FGSM or noise is known as label leaking: the network learns to recognize the original input given its perturbed version (BID10). We compared the performance achieved by training RBFI networks with standard gradients, and with pseudogradients. After 30 epochs of training RBFI(512, 512, 512, 10 | *, *, *, ∨) networks, pseudogradients yielded (96.79 ± 0.17)% accuracy, while regular gradients yielded only (86.35 ± 0.75)%. On smaller networks, which should be easier to train, the gap even widened: for RBFI(128, 128, 10 | *, *, ∨) networks, it went from (95.00 ± 0.29)% for pseudogradients to (82.40 ± 3.72)% for regular gradients. In Section 3, we developed upper bounds for the sensitivity of ReLU and RBFI networks to adversarial attacks on the basis of network weights. It is reasonable to ask whether, using those upper bounds as weight regularizations, we might achieve robustness to adversarial attacks. For ReLU networks, the answer is substantially negative. We experimented with adding to the loss used to train the network a term cŝ, for c ≥ 0 and ŝ the sensitivity bound of Section 3. We experimented systematically for many values of c. Large values prevented the network from learning. Smaller values resulted in little additional robustness: for ε = 0.3, simple FGSM attacks lowered the network accuracy to below 10%. For RBFI networks, regularization did help. The choice of upper bound for the components of the u-vector influences the resistance of the trained networks to adversarial examples, as can be seen from the results. In the experiments reported thus far, we used an upper bound of 3. One may ask: would RBFI networks perform as well, if a higher bound were used? The answer is yes, provided weight regularization is used in place of a tighter bound. If we raise the bound to 10, and use no regularization, the accuracy under PGD attacks with ε = 0.3 drops from 93.32% to 83.62%. By adding to the loss the regularization cŝ, for c = 0.0001 and ŝ the sensitivity bound for RBFI networks, we can recover most of the lost accuracy, obtaining accuracy 89.38% at ε = 0.3. In this paper, we have shown that non-linear structures such as RBFI can be efficiently trained using artificial, "pseudo" gradients, and can attain both high accuracy and high resistance to adversarial attacks.
We introduce a type of neural network that is structurally resistant to adversarial attacks, even when trained on unaugmented training sets. The resistance is due to the stability of network units wrt input perturbations.
736
scitldr
We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the features learned by robust models tend to align better with salient data characteristics and human perception. Deep learning models have achieved impressive performance on a number of challenging benchmarks in computer vision, speech recognition and competitive game playing (BID24; BID25). However, it turns out that these models are actually quite brittle. In particular, one can often synthesize small, imperceptible perturbations of the input data and cause the model to make highly-confident but erroneous predictions (BID9; BID5). This problem of so-called adversarial examples has garnered significant attention recently and resulted in a number of approaches both to finding these perturbations, and to training models that are robust to them (BID23; BID16; BID7; BID14; BID1). However, building such adversarially robust models has proved to be quite challenging. In particular, many of the proposed robust training methods were subsequently shown to be ineffective (BID8; BID2). Only recently has there been progress towards models that achieve robustness that can be demonstrated empirically and, in some cases, even formally verified (BID13; BID11). The vulnerability of models trained using standard methods to adversarial perturbations makes it clear that the paradigm of adversarially robust learning is different from the classic learning setting. In particular, we already know that robustness comes at a cost. This cost takes the form of computationally expensive training methods (more training time), but also, as shown by recent work, the potential need for more training data. It is natural then to wonder: Are these the only costs of adversarial robustness? And, if so, once we choose to pay these costs, would it always be preferable to have a robust model instead of a standard one? The goal of this work is to explore these questions and thus, in turn, to bring us closer to understanding the phenomenon of adversarial robustness. Our contributions. It might be natural to expect that training models to be adversarially robust, albeit more resource-consuming, can only improve performance in the standard classification setting. In this work, we show, however, that the picture here is much more nuanced: these two goals might be fundamentally at odds. Specifically, even though applying adversarial training, the leading method for training robust models, can be beneficial in some regimes of training data size, in general, there is a trade-off between the standard accuracy and adversarially robust accuracy of a model. In fact, we show that this trade-off provably exists even in a fairly simple and natural setting.
At the root of this trade-off is the fact that features learned by the optimal standard and optimal robust classifiers are fundamentally different and, interestingly, this phenomenon persists even in the limit of infinite data. This thus also goes against the natural expectation that, given sufficient data, classic machine learning tools would be sufficient to learn robust models, and emphasizes the need for techniques specifically tailored to training robust models. Our exploration also uncovers certain unexpected benefits of adversarially robust models. In particular, adversarially robust learning tends to equip the resulting models with invariances that we would expect to be also present in human vision. This, in turn, leads to features that align better with human perception, and could also pave the way towards building models that are easier to understand. Consequently, the feature embeddings learnt by robust models yield also clean inter-class interpolations, similar to those found by generative adversarial networks (GANs) BID23 and other generative models. This hints at the existence of a stronger connection between GANs and adversarial robustness. Recall that in the canonical classification setting, the primary focus is on maximizing standard accuracy, i.e. the performance on (yet) unseen samples from the underlying distribution. Specifically, the goal is to train models that have low expected loss (also known as population risk): E_{(x,y)∼D} [L(x, y; θ)]. Adversarial robustness. The existence of adversarial examples largely changed this picture. In particular, there has been a lot of interest in developing models that are resistant to them, or, in other words, models that are adversarially robust. In this context, the goal is to train models that have low expected adversarial loss: E_{(x,y)∼D} [max_{δ∈∆} L(x + δ, y; θ)]. Here, ∆ represents the set of perturbations that the adversary can apply to induce misclassification. In this work, we focus on the case when ∆ is the set of ℓ_p-bounded perturbations, i.e. ∆ = {δ ∈ R^d | ‖δ‖_p ≤ ε}. This choice is the most common one in the context of adversarial examples and serves as a standard benchmark. It is worth noting though that several other notions of adversarial perturbations have been studied. These include rotations and translations BID15, and smooth spatial deformations. In general, determining the "right" ∆ to use is a domain-specific question. Adversarial training. The most successful approach to building adversarially robust models so far (BID13) was so-called adversarial training BID23. Adversarial training is motivated by viewing the adversarial loss as a statistical learning question, for which we need to solve the corresponding (adversarial) empirical risk minimization problem: min_θ E_{(x,y)∼D̂} [max_{δ∈∆} L(x + δ, y; θ)], where D̂ denotes the empirical distribution over the training set. The resulting saddle point problem can be hard to solve in general. However, it turns out to be often tractable in practice, at least in the context of ℓ_p-bounded perturbations BID13. Specifically, adversarial training corresponds to a natural robust optimization approach to solving this problem BID4. In this approach, we repeatedly find the worst-case input perturbations δ (solving the inner maximization problem), and then update the model parameters to reduce the loss on these perturbed inputs. Though adversarial training is effective, this success comes with certain drawbacks. The most obvious one is an increase in the training time (we need to compute new perturbations at each parameter update step). Another one is the potential need for more training data, as shown by recent work.
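Before returning to these costs, the robust-optimization recipe just described can be sketched as a single training step (illustrative PyTorch under an ℓ_∞ perturbation set; the number of inner PGD steps and the step size are assumptions, not values from the paper).

import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps,
                              pgd_steps=7, step_size=None):
    """Inner maximization over delta in the eps-ball, then a parameter update
    on the loss evaluated at the perturbed inputs."""
    if step_size is None:
        step_size = 2.5 * eps / pgd_steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(pgd_steps):
        loss_fn(model(x + delta), y).backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    optimizer.zero_grad()
    loss_fn(model((x + delta).detach()), y).backward()
    optimizer.step()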
These costs make training more demanding, but is that the whole price of being adversarially robust? In particular, if we are willing to pay these costs: Are robust classifiers better than standard ones in every other aspect? This is the key question that motivates our work. Figure caption: Comparison of the standard accuracy of models trained against an ℓ_2-bounded adversary as a function of the size of the training dataset. We observe that when training with few samples, adversarial training has a positive effect on model generalization (especially on MNIST). However, as training data increase, the standard accuracy of robust models drops below that of the standard model (ε_train = 0). Similar results for ℓ_∞-trained networks are shown in FIG7 of Appendix G. Adversarial Training as a Form of Data Augmentation. Our starting point is a popular view of adversarial training as the "ultimate" form of data augmentation. According to this view, the adversarial perturbation set ∆ is seen as the set of invariants that a good model should satisfy (regardless of the adversarial robustness considerations). Thus, finding the worst-case δ corresponds to augmenting the training data in the "most confusing" and thus also "most helpful" manner. A key implication of this view is that adversarial training should be beneficial for the standard accuracy of a model (BID23). Indeed, in FIG0, we see this effect when classifiers are trained with relatively few samples (particularly on MNIST). In this setting, the amount of training data available is insufficient to learn a good standard classifier, and the set of adversarial perturbations used is "compatible" with the learning task. (That is, good standard models for this task need to be also somewhat invariant to these perturbations.) In such a regime, robust training does indeed act as data augmentation, regularizing the model and leading to a better solution (from the standard accuracy point of view). (Note that this effect seems less pronounced for CIFAR-10, possibly because ℓ_p-invariance is not as important for a good standard CIFAR-10 classifier.) Surprisingly however, in FIG7 we see that as we include more samples in the training set, this positive effect becomes less significant. In fact, after some point adversarial training actually decreases the standard accuracy. In Figure 7 in Appendix G we study the behaviour of models trained using adversarial training with different ℓ_p-bounded adversaries. We observe a steady decline in standard accuracy as the strength of the adversary increases. (Note that this still holds if we train on batches that also contain natural examples, as in Kurakin et al. (2016a). See Appendix B.) Similar effects were also observed in prior work (BID13; BID12; BID3). The goal of this work is to illustrate and explain the roots of this phenomenon. In particular, we would like to understand: Why does there seem to be a trade-off between standard and adversarially robust accuracy? As we will show, this effect is not an artifact of our adversarial training methods but in fact is an inevitable consequence of the different goals of adversarial robustness and standard generalization. As we discussed above, we often observe that employing adversarial training leads to a decrease in a model's standard accuracy. In what follows, we show that this phenomenon is a manifestation of an inherent tension between standard accuracy and adversarially robust accuracy. In particular, we present a theoretical model that demonstrates it.
In fact, this phenomenon can be illustrated in a fairly simple setting which suggests that it is quite prevalent. Our binary classification task Our data model consists of input-label pairs (x, y) sampled from a distribution D as follows: the label y is drawn uniformly at random from {−1, +1}, the first feature is x1 = +y with probability p and x1 = −y with probability 1 − p, and the remaining features are drawn i.i.d. as x2,..., xd+1 ∼ N(ηy, 1), where N(µ, σ^2) is a normal distribution with mean µ and variance σ^2, and p ≥ 0.5. We chose η to be large enough so that a simple classifier attains high standard accuracy (>99%), e.g. η = Θ(1/√d) will suffice. The parameter p quantifies how correlated the feature x1 is with the label. For the sake of example, we can think of p as being 0.95. This choice is fairly arbitrary; the trade-off between standard and robust accuracy will be qualitatively similar for any p < 1. Standard classification is easy Note that samples from D consist of a single feature that is moderately correlated with the label and d other features that are only very weakly correlated with it. Despite the fact that each one of the latter type of features individually is hardly predictive of the correct label, this distribution turns out to be fairly simple to classify from a standard accuracy perspective. Specifically, the natural (linear) classifier that averages the weakly correlated features, f_avg(x) := sign(w_unif · x) with w_unif := [0, 1/d, ..., 1/d], achieves standard accuracy arbitrarily close to 100%, for d large enough. Indeed, observe that Pr[f_avg(x) = y] = Pr[(1/d) Σ_{i=2..d+1} y·xi > 0] = Pr[N(η, 1/d) > 0] > 99% whenever η ≥ 3/√d. Adversarially robust classification Note that in our discussion so far, we effectively viewed the average of x2,..., xd+1 as a single "meta-feature" that is highly correlated with the correct label. For a standard classifier, any feature that is even slightly correlated with the label is useful. As a result, a standard classifier will take advantage of (and thus rely on) the weakly correlated features x2,..., xd+1 (by implicitly pooling information) to achieve almost perfect standard accuracy. However, this analogy breaks completely in the adversarial setting. In particular, an ℓ∞-bounded adversary that is only allowed to perturb each feature by a moderate ε can effectively override the effect of the aforementioned meta-feature. For instance, if ε = 2η, an adversary can shift each weakly-correlated feature towards −y. The classifier would now see a perturbed input x such that each of the features x2,..., xd+1 are sampled i.i.d. from N(−ηy, 1) (i.e., now becoming anti-correlated with the correct label). Thus, when ε ≥ 2η, the adversary can essentially simulate the distribution of the weakly-correlated features as if belonging to the wrong class. Formally, the probability of the meta-feature correctly predicting y in this setting is Pr[N(−η, 1/d) > 0] < 1%. As a result, the simple classifier above that relies solely on these features cannot get adversarial accuracy better than 1%. Intriguingly, this discussion draws a distinction between robust features (x1) and non-robust features (x2,..., xd+1) that arises in the adversarial setting. While the meta-feature is far more predictive of the true label, it is extremely unreliable in the presence of an adversary. Hence, a tension between standard and adversarial accuracy arises. Any classifier that aims for high accuracy (say > 99%) will have to heavily rely on non-robust features (the robust feature provides only, say, 95% accuracy). However, since the non-robust features can be arbitrarily manipulated, this classifier will inevitably have low adversarial accuracy. We make this formal in the following theorem proved in Appendix C. Theorem 2.1 (Robustness-accuracy trade-off).
Any classifier that attains at least 1 − δ standard accuracy on D has robust accuracy at most (p/(1 − p)) · δ against an ℓ∞-bounded adversary with ε ≥ 2η. This bound implies that if p < 1, as standard accuracy approaches 100% (δ → 0), adversarial accuracy falls to 0%. As a concrete example, consider p = 0.95; then any classifier with standard accuracy more than 1 − δ will have robust accuracy at most 19δ. Also it is worth noting that the theorem is tight. If δ = 1 − p, both the standard and adversarial accuracies are bounded by p, which is attained by the classifier that relies solely on the first feature. Additionally, note that compared to the scale of the features ±1, the value of ε required to manipulate the standard classifier is very small: ε = 2η = Θ(1/√d). On the (non-)existence of an accurate and robust classifier It might be natural to expect that in the regime of infinite data, the standard classifier itself acts as a robust classifier. Note, however, that this is not true for the setting we analyze above. Here, the trade-off between standard and adversarial accuracy is an inherent trait of the data distribution itself and not due to having insufficient samples. In this particular classification task, we (implicitly) assumed that there does not exist a classifier that is both robust and very accurate (i.e. > 99% standard and robust accuracy). Thus, for this task, any classifier that is very accurate (including the Bayes classifier, i.e. the classifier minimizing classification error given full information about the distribution) will necessarily be non-robust. This seemingly goes against the common assumption in adversarial ML that humans are such perfect robust and accurate classifiers for standard datasets. However, note that there is no concrete evidence supporting this assumption. In fact, humans often have far from perfect performance in vision benchmarks (; 2014;) and are outperformed by ML models in certain tasks (b; BID19). It is plausible that standard ML models are able to outperform humans in these tasks by relying on brittle features that humans are naturally invariant to, and the observed decrease in performance might be the manifestation of that. As we have seen in the distributional model D, a classifier that achieves very high standard accuracy will inevitably have near-zero adversarial accuracy. This is true even when a classifier with reasonable standard and robust accuracy exists. Hence, in an adversarial setting, where the goal is to achieve high adversarial accuracy, the training procedure needs to be modified. We now make this phenomenon concrete for linear classifiers trained using the soft-margin SVM loss. Specifically, in Appendix D we prove the following theorem. Theorem 2.2. For η ≥ 4/√d and p ≤ 0.975 (the first feature is not perfect), a soft-margin SVM classifier of unit weight norm minimizing the distributional loss achieves a standard accuracy of > 99% and adversarial accuracy of < 1% against an ℓ∞-bounded adversary of ε ≥ 2η. Minimizing the distributional adversarial loss instead leads to a robust classifier that has standard and adversarial accuracy of p against any ε < 1. This theorem shows that if our focus is on robust models, adversarial training is crucial to achieve non-trivial adversarial accuracy in this setting. Simply optimizing the standard accuracy of the model (i.e. standard training) leads to poor robust accuracy. Soft-margin SVM classifiers and the constant 0.975 are chosen for mathematical convenience.
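As a quick numerical check of this model and trade-off (a sketch rather than a substitute for the proofs in Appendices C and D), the snippet below samples from D and evaluates the two classifiers discussed above: the one that averages the weakly correlated features and the one that uses only the robust feature x1. The constants n, d, p, and η are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 10_000, 1_000, 0.95
eta = 3.0 / np.sqrt(d)                   # eta = Theta(1/sqrt(d)), large enough for >99% standard accuracy
eps = 2 * eta                            # adversary's l_inf budget

# Sample (x, y) ~ D as described above.
y = rng.choice([-1.0, 1.0], size=n)
x1 = np.where(rng.random(n) < p, y, -y)                 # robust feature: agrees with y with probability p
weak = rng.normal(eta * y[:, None], 1.0, size=(n, d))   # weakly correlated features ~ N(eta*y, 1)

# Averaging ("meta-feature") classifier, clean vs. worst-case shifted by -eps*y per feature.
avg_clean = np.sign(weak.mean(axis=1))
avg_adv = np.sign((weak - eps * y[:, None]).mean(axis=1))
print("averaging classifier:      standard %.3f, adversarial %.3f"
      % ((avg_clean == y).mean(), (avg_adv == y).mean()))     # roughly 0.999 and 0.001
print("robust-feature classifier: standard %.3f, adversarial %.3f"
      % ((x1 == y).mean(), (x1 == y).mean()))                 # both roughly p; eps < 1 cannot flip sign(x1)
```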
Our proofs do not depend on them in a crucial way and can be adapted, in a straightforward manner, to other natural settings, e.g. logistic regression. Transferability An interesting implication of our analysis is that standard training produces classifiers that rely on features that are weakly correlated with the correct label. This will be true for any classifier trained on the same distribution. Hence, the adversarial examples that are created by perturbing each feature in the direction of −y will transfer across classifiers trained on independent Figure 2: Visualization of the loss gradient with respect to input pixels. Recall that these gradients highlight the input features which affect the loss most strongly, and thus are important for the classifier's prediction. We observe that the gradients are significantly more interpretable for adversarially trained networks -they align well with perceptually relevant features. In contrast, for standard networks they appear very noisy. We observe that gradients of ∞ -trained models tend to be sparser than those of 2 -trained models. (For MNIST, blue and red pixels denote positive and negative gradient regions respectively. For CIFAR-10 and ImageNet, we clip gradients to within ±3σ and rescale them to lie in the range.) Additional visualizations are in FIG0 of Appendix G.samples from the distribution. This constitutes an interesting manifestation of the generally observed phenomenon of transferability and might hint at its origin. Empirical examination In Section 2.1, we showed that the trade-off between standard accuracy and robustness might be inevitable. To examine how representative our theoretical model is of real-world datasets, we also experimentally investigate this issue on MNIST as it is amenable to linear classifiers. Interestingly, we observe a qualitatively similar behavior. For instance, in FIG6 (b) in Appendix E, we see that the standard classifier assigns weight to even weakly-correlated features. (Note that in settings with finite training data, such brittle features could arise even from noise -see Appendix E.) The robust classifier on the other hand does not assign any weight beyond a certain threshold. Further, we find that it is possible to obtain a robust classifier by directly training a standard model using only features that are relatively well-correlated with the label (without adversarial training). As expected, as more features are incorporated into the training, the standard accuracy is improved at the cost of robustness (see Appendix E FIG6 (c)). In Section 2, we established that robust and standard models might depend on very different sets of features. We demonstrated how this can lead to a decrease in standard accuracy for robust models. In this section, we will argue that the features learned by robust models can also be beneficial. At a high level, robustness to adversarial perturbations can be viewed as an invariance property of a model. A model that achieves small loss for all perturbations in the set ∆, will necessarily have learned features that are invariant to such perturbations. Thus, robust training can be viewed as a method to embed certain invariances in a model. Since we also expect humans to be invariant to these perturbations (e.g. small p -bounded changes of the pixels), robust models will be more aligned with human vision than standard models. In this section, we present evidence supporting the view. 
Loss gradients in the input space align well with human perception As a starting point, we want to investigate which features of the input most strongly affect the prediction of the classifier both for standard and robust models. To this end, we visualize the gradients of the loss with respect to individual features (pixels) of the input in Figure 2. We observe that gradients for adversarially: Visualizing large-ε adversarial examples for standard and robust (2 / ∞ -adversarial training) models. We construct these examples by iteratively following the (negative) loss gradient while staying with 2 -distance of ε from the original image. We observe that the images produced for robust models effectively capture salient data characteristics and appear similar to examples of a different class. (The value of ε is equal for all models and much larger than the one used for training.) Additional examples are visualized in Figure 8 and 9 of Appendix G.trained networks align well with perceptually relevant features (such as edges) of the input image. In contrast, for standard networks, these gradients have no coherent patterns and appear very noisy to humans. We want to emphasize that no preprocessing was applied to the gradients (other than scaling and clipping for visualization). On the other hand, extraction of interpretable information from the gradients of standard networks has so far only been possible using additional sophisticated techniques (; ;).This observation effectively outlines an approach to train models that align better with human perception by design. By encoding the correct prior into the set of perturbations ∆, adversarial training alone might be sufficient to yield interpretable gradients. We believe that this phenomenon warrants an in-depth investigation and we view our experiments as only exploratory. Adversarial examples exhibit salient data characteristics Given how the gradients of standard and robust models are concentrated on qualitatively different input features, we want to investigate how the adversarial examples of these models appear visually. To find adversarial examples, we start from a given test image and apply Projected Gradient Descent (PGD; a standard first-order optimization method) to find the image of highest loss within an p -ball of radius ε around the original image 2. This procedure will change the pixels that are most influential for a particular model's predictions and thus hint towards how the model is making its predictions. The ing visualizations are presented in FIG2 (details in Appendix A). Surprisingly, we can observe that adversarial perturbations for robust models tend to produce salient characteristics of another class. In fact, the corresponding adversarial examples for robust models can often be perceived as samples from that class. This behavior is in stark contrast to standard models, for which adversarial examples appear as noisy variants of the input image. These findings provide additional evidence that adversarial training does not necessarily lead to gradient obfuscation BID2. Following the gradient changes the image in a meaningful way and (eventually) leads to images of different classes. Hence, the robustness of these models does not stem from having gradients that are ill-suited for first-order methods. Smooth cross-class interpolations via gradient descent By linearly interpolating between the original image and the image produced by PGD we can produce a smooth, "perceptually plausible" interpolation between classes FIG3. 
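A sketch of the procedure used for these visualizations: ℓ2-constrained PGD to find a high-loss image within distance ε of the original, followed by linear interpolation between the original image and the result. The model and data names are placeholders, and the radius and step count are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps, step_size, n_steps):
    """Find an image of high loss within an l2-ball of radius eps around x (assumes inputs in [0, 1])."""
    x_adv = x.clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
            x_adv = x_adv + step_size * g                                    # normalized gradient ascent step
            delta = x_adv - x
            norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
            x_adv = (x + delta * (eps / norm).clamp(max=1.0)).clamp(0, 1)    # project onto the l2 ball
    return x_adv

# Large-eps adversarial example, then a linear interpolation path from the original image.
x_big = pgd_l2(model, x, y, eps=40.0, step_size=4.0, n_steps=60)
path = [(1 - t) * x + t * x_big for t in torch.linspace(0, 1, 8)]
```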
Such interpolation have thus far been restricted to generative models such as GANs BID22 and VAEs , involved manipulation of learned representations , and hand-designed methods . In fact, we conjecture that the similarity of these inter-class trajectories to GAN interpolations is not a coincidence. We postulate that the saddle point problem that is key in both these approaches may be at the root of this effect. We hope that future research will investigate this connection further and explore how to utilize the loss landscape of robust models as an alternative method to smoothly interpolate between classes. Due to the large body of related work, we will only focus on the most relevant studies here and defer the full discussion to Appendix F. BID18 prove upper bounds on the robust of classifiers and exhibit a standard vs. robust accuracy trade-off for a specific classifier families on a synthetic task. Their setting also (implicitly) utilizes the notion of robust and non-robust features, however these features have small magnitude rather than weak correlation. propose regularizing the gradient of the classifier with respect to its input. They find that the ing classifiers have more interpretable gradients and targeted adversarial examples resemble the target class for digit and character recognition tasks. There has been recent of work proving upper bounds on classifier robustness BID20; BID17. However, this work is orthogonal to ours as in these settings there exist classifiers that are both robust and accurate. In this work, we show that the goal of adversarially robust generalization might fundamentally be at odds with that of standard generalization. Specifically, we identify an inherent trade-off between the standard accuracy and adversarial robustness of a model, that provably manifests in a concrete, simple setting. This trade-off stems from intrinsic differences between the feature learned by standard and robust models. Our analysis also explains the drop in standard accuracy observed when employing adversarial training in practice. Moreover, it emphasizes the need to develop robust training methods, since robustness is unlikely to arise as a consequence of standard training. We discover that even though adversarial robustness comes at a price, it has some unexpected benefits. Robust models learn features that align well with salient data characteristics. The root of this phenomenon is that the set of adversarial perturbations encodes some prior for human perception. Thus, classifiers that are robust to these perturbations are also necessarily invariant to input modifications that we expect humans to be invariant to. We demonstrate a striking consequence of this phenomenon: robust models yield clean feature interpolations similar to those obtained from generative models such as GANs BID23. This emphasizes the possibility of a stronger connection between GANs and adversarial robustness. Finally, our findings show that the interplay between adversarial robustness and standard classification might be more nuanced that one might expect. This motivates further work to fully undertand the relative costs and benefits of each of these notions. Kaiming we filter out all the images from the MNIST dataset other than the "5" and "7" labelled examples. For the ImageNet dataset, adversarial training is significantly harder since the classification problem is challenging by itself and standard classifiers are already computationally expensive to train. 
We thus restrict our focus to a smaller subset of the dataset. We group together a subset of existing, semantically similar ImageNet classes into 8 different super-classes, as shown in TAB1. We train and evaluate only on examples corresponding to these classes. "Dog" 151 to 268 "Cat" 281 to 285 "Frog" 30 to 32 "Turtle" 33 to 37 "Bird" 80 to 100 "Primate" 365 to 382 "Fish" 389 to 397 "Crab" 118 to 121 "Insect" 300 to 319A.2 MODELS• Binary MNIST (Section 2.2): We train a linear classifier with parameters w ∈ R 784, b ∈ R on the dataset described in Section A.1 (labels −1 and +1 correspond to images labelled as "5" and "7" respectively). We use the cross-entropy loss and perform 100 epochs of gradient descent in training.• MNIST: We use the simple convolution architecture from the TensorFlow tutorial 3.• CIFAR-10: We consider a standard ResNet model BID25. It has 4 groups of residual layers with filter sizes and 5 residual units each 4.• Restricted ImageNet: We use a ResNet-50 BID25 architecture using the code from the tensorpack repository . We do not modify the model architecture, and change the training procedure only by changing the number of examples per "epoch" from 1,280,000 images to 76,800 images. We perform adversarial training to train robust classifiers following BID13. Specifically, we train against a projected gradient descent (PGD) adversary, starting from a random initial perturbation of the training data. We consider adversarial perturbations in p norm where p = {2, ∞}.Unless otherwise specified, we use the values of ε provided in TAB2 to train/evaluate our models. The images we generated for FIG2 were allowed a much larger perturbation from the original sample in order to produce visible changes to the images. These values are listed in Table 3. Since Table 3 these levels of perturbations would allow to truly change the class of the image, training against such strong adversaries would be impossible. Still, we observe that smaller values of ε suffices to ensure that the models rely on the most robust (and hence interpretable) features. In order to make sure that the standard accuracy drop in Figure 7 is not an artifact of only training on adversarial examples, we experimented with including unperturbed examples in each training batch, following the recommendation of (a). We found that while this slightly improves the standard accuracy of the classifier, it decreases it's robust accuracy by a roughly proportional amount, see TAB3. DISPLAYFORM0 The main idea of the proof is that an adversary with ε = 2η is able to change the distribution of features x 2,..., x d+1 to reflect a label of −y instead of y by subtracting εy from each variable. Hence any information that is used from these features to achieve better standard accuracy can be used by the adversary to reduce adversarial accuracy. We define G + to be the distribution of x 2,..., x d+1 when y = +1 and G − to be that distribution when y = −1. We will consider the setting where ε = 2η and fix the adversary that replaces x i by x i − yε for each i ≥ 2. This adversary is able to change G + to G − in the adversarial setting and vice-versa. Consider any classifier f (x) that maps an input x to a class in {−1, +1}. Let us fix the probability that this classifier predicts class +1 for some fixed value of x 1 and distribution of x 2,..., x d+1. Concretely, we define p ij to be the probability of predicting +1 given that the first feature has sign i and the rest of the features are distributed according to G j. 
Formally, DISPLAYFORM1 Using these definitions, we can express the standard accuracy of the classifier as DISPLAYFORM2 Similarly, we can express the accuracy of this classifier against the adversary that replaces G + with G − (and vice-versa) as DISPLAYFORM3 For convenience we will define a = 1 − p ++ + p −− and b = 1 − p −+ + p +−. Then we can rewrite DISPLAYFORM4 We are assuming that the standard accuracy of the classifier is at least 1 − δ for some small δ. This implies that DISPLAYFORM5 Since p ij are probabilities, we can guarantee that a ≥ 0. Moreover, since p ≥ 0.5, we have p/(1 − p) ≥ 1. We use these to upper bound the adversarial accuracy by DISPLAYFORM6 We consider the problem of fitting the distribution D of by using a standard soft-margin SVM classifier. Specifically, this can be formulated as: DISPLAYFORM7 for some value of λ. We will assume that we tune λ such that the optimal solution w * has 2 -norm of 1. This is without much loss of generality since our proofs can be adapted to the general case. We will refer to the first term of as the margin term and the second term as the regularization term. First we will argue that, due to symmetry, the optimal solution will assign equal weight to all the features x i for i = 2,..., d + 1. Lemma D.1. Consider an optimal solution w * to the optimization problem. Then, DISPLAYFORM8 Proof. Assume that ∃ i, j ∈ {2, ..., d + 1} such that w * i = w * j. Since the distribution of x i and x j are identical, we can swap the value of w i and w j, to get an alternative set of parametersŵ that has the same loss function value (ŵ j = w i,ŵ i = w j,ŵ k = w k for k = i, j).Moreover, since the margin term of the loss is convex in w, using Jensen's inequality, we get that averaging w * andŵ will not increase the value of that margin term. Note, however, that w * +ŵ 2 2 < w * 2, hence the regularization loss is strictly smaller for the average point. This contradicts the optimality of w *.Since every optimal solution will assign equal weight to all x i for k ≥ 2, we can replace these features by their sum (and divide by √ d for convenience). We will define DISPLAYFORM9 which, by the properties of the normal distribution, is distributed as DISPLAYFORM10 By assigning a weight of v to that combined feature the optimal solutions can be parametrized as DISPLAYFORM11 where the regularization term of the loss is λ(w DISPLAYFORM12 Recall that our chosen value of η is 4/ √ d, which implies that the contribution of vz is distributed normally with mean 4yv and variance v 2 . By the concentration of the normal distribution, the probability of vz being larger than v is large. We will use this fact to show that the optimal classifier will assign on v at least as much weight as it assigns on w 1 . Lemma D.2. Consider the optimal solution (w * 1, v *) of the problem. Then DISPLAYFORM13 Proof. Assume for the sake of contradiction that v * < 1/ √ 2. Then, with probability at least 1 − p, the first feature predicts the wrong label and without enough weight, the remaining features cannot compensate for it. Concretely, DISPLAYFORM14 We will now show that a solution that assigns zero weight on the first feature (v = 1 and w 1 = 0), achieves a better margin loss. DISPLAYFORM15 Hence, as long as p ≤ 0.975, this solution has a smaller margin loss than the original solution. Since both solutions have the same norm, the solution that assigns weight only on v is better than the original solution (w * 1, v *), contradicting its optimality. 
We have established that the learned classifier will assign more weight to v than w 1. Since z will be at least y with large probability, we will show that the behavior of the classifier depends entirely on z. Lemma D.3. The standard accuracy of the soft-margin SVM learned for problem is at least 99%.Proof. By Lemma D.2, the classifier predicts the sign of w 1 x 1 + vz where vz ∼ N (4yv, v 2) and v ≥ 1/ √ 2. Hence with probability at least 99%, vzy > 1/ √ 2 ≥ w 1 and thus the predicted class is y (the correct class) independent of x 1.We can utilize the same argument to show that an adversary that changes the distribution of z has essentially full control over the classifier prediction. Lemma D.4. The adversarial accuracy of the soft-margin SVM learned for is at most 1% against an ∞ -bounded adversary of ε = 2η. Proof. Observe that the adversary can shift each feature x i towards y by 2η. This will cause z to be distributed as DISPLAYFORM16 Therefore with probability at least 99%, vyz < −y ≤ −w 1 and the predicted class will be −y (wrong class) independent of x 1.It remains to show that adversarial training for this classification task with ε > 2η will in a classifier that has relies solely on the first feature. Lemma D.5. Minimizing the adversarial variant of the loss in a classifier that assigns 0 weight to features x i for i ≥ 2.Proof. The optimization problem that adversarial training solves is DISPLAYFORM17 which is equivalent to DISPLAYFORM18 Consider any optimal solution w for which w i > 0 for some i > 2. The contribution of terms depending on w i to 1 − yw x + ε w 1 is a normally-distributed random variable with mean 2η − ε ≤ 0. Since the mean is non-positive, setting w i to zero can only decrease the margin term of the loss. At the same time, setting w i to zero strictly decreases the regularization term, contradicting the optimality of w. Clearly, such a classifier will have standard and adversarial accuracy of p against any ε < 1 since such a value of ε is not sufficient to change the sign of the first feature. This concludes the proof of the theorem. Our theoretical analysis shows that there is an inherent tension between standard accuracy and adversarial robustness. At the core of this trade-off is the concept of robust and non-robust features. The robustness of a feature is characterized by the strength of its correlation with the correct label. It is natural to wonder whether this concept of robust features is an artifact of our theoretical analysis or if it manifests more broadly. We thus investigate this issue experimentally on a dataset that is amenable to linear classifiers, MNIST (details in Appendix A).Recall the goal of standard classification for linear classifiers is to predict accurately, i.e. y = sign(w x). Hence the correlation of a feature i with the true label, computed as |E[yx i]|, quantifies how useful this feature is for classification. In the adversarial setting, against an ε ∞ -bounded adversary we need to ensure that y = sign(w x − εy w 1). In that case we expect a feature i to be DISPLAYFORM0 This calculation suggests that in the adversarial setting, there is an implicit threshold on feature correlations imposed by the threat model (the perturbation allowed to the adversary). While standard models may utilize all features with non-zero correlations, a robust model cannot rely on features with correlation below this threshold. 
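A sketch of this correlation computation on the binary MNIST task ("5" vs "7" relabeled as ±1): estimate |E[y·x_i]| for every pixel, and fit a standard linear classifier using only the most correlated pixels. The loader `load_binary_mnist` is a hypothetical placeholder, the least-squares fit stands in for the trained linear classifier, and the values of ε and k are illustrative.

```python
import numpy as np

# X: (n, 784) pixel intensities scaled to [0, 1]; y: (n,) labels in {-1, +1} for the "5" vs "7" task.
X, y = load_binary_mnist()                      # hypothetical loader for the filtered dataset

corr = np.abs((y[:, None] * X).mean(axis=0))    # per-pixel estimate of |E[y * x_i]|
order = np.argsort(-corr)                       # pixels sorted by decreasing correlation with the label

def fit_on_top_k(k):
    """Standard (non-adversarial) least-squares fit using only the k most correlated pixels."""
    cols = order[:k]
    w_k, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    w = np.zeros(X.shape[1])
    w[cols] = w_k
    return w

def accuracies(w, eps):
    std = np.mean(np.sign(X @ w) == y)
    # Worst-case l_inf perturbation of a linear classifier shifts the margin by eps * ||w||_1.
    adv = np.mean(np.sign(X @ w - eps * y * np.abs(w).sum()) == y)
    return std, adv

for k in (20, 100, 784):
    print(k, accuracies(fit_on_top_k(k), eps=0.1))
```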
In FIG6 (b), we visualize the correlation of each pixel (feature) in the MNIST dataset along with the learned weights of the standard and robust classifiers. As expected, we see that the standard classifier assigns weights even to weakly-correlated pixels so as to maximize prediction confidence. On the other hand, the robust classifier does not assign any weight below a certain correlation threshold which is dictated by the adversary's strength (ε) FIG6 )Interestingly, the standard model assigns non-zero weight even to very weakly correlated pixels (FIG6). In settings with finite training data, such non-robust features could arise from noise.(For instance, in N tosses of an unbiased coin, the expected imbalance between heads and tails is O( √ N) with high probability.) A standard classifier would try to take advantage of even this "hallucinated" information by assigning non-zero weights to these features. TAB4.) (a) Visualization of network weights per input feature. (b) Comparison of feature-label correlation to the weight assigned to the feature by each network. Adversarially trained networks put weights only on a small number of strongly-correlated or "robust" features. (c) Performance of a model trained using standard training only on the most robust features. Specifically, we sort features based on decreasing correlation with the label and train using only the most correlated ones. Beyond a certain threshold, we observe that as more non-robust or (weakly correlated) features are available to the model, the standard accuracy increases at the cost of robustness. The analysis above highlights an interesting trade-off between the predictive power of a feature and its vulnerability to adversarial perturbations. This brings forth the question -Could we use these insights to train robust classifiers with standard methods (i.e. without performing adversarial training)? As a first step, we train a (standard) linear classifier on MNIST utilizing input features (pixels) that lie above a given correlation threshold (see FIG6). As expected, as more non robust features are incorporated in training, the standard accuracy increases at the cost of robustness. Further, we observe that a standard classifier trained in this manner using few robust features attains better robustness than even adversarial training. This suggest a more direct (and potentially better) method of training robust networks in certain settings. F derive parameter-dependent bounds on the robustness of any fixed classifier. Our focus on the statistical setting itself and provide lower bounds for all classifiers learned in this setting. analyze the adversarial robustness of nearest neighbor classifiers. Instead we focus on lower bounds that are inherent to the statistical setting itself and apply to all classifiers. study the generalization aspect of adversarially robustness. They show that the number of samples needed to achieve adversarially robust generalization is polynomially larger in the dimension than the number of samples needed to ensure standard generalization. However, in the limit of infinite data, one can learn classifiers that are both robust and accurate. BID20 demonstrate a setting where even a small amount of standard error implies that most points provably have a misclassified point close to them. In this setting, achieving perfect standard accuracy (easily achieved by a simple classifier) is sufficient to achieve perfect adversarial robustness. 
In contrast, our work focuses on a setting where adversarial training (provably) matters and there exists a trade-off between standard and adversarial accuracy. explore the connection between robustness and generalization, showing that, in a certain sense, robustness can imply generalization. This direction is orthogonal to our, since we work in the limit of infinite data, optimizing the distributional loss directly. BID17 prove lower bounds on the robustness of any classifier based on certain generative assumptions. Since these bounds apply to all classifiers, independent of architecture and training procedure, they fail to capture the situation we face in practice where robust optimization can significantly improve the adversarial robustness of standard classifiers BID13;; ).A recent work BID6 turns out to (implicitly) rely on the distinction between robust and non-robust features in constructing a distribution for which adversarial robustness is hard from a different, computational point of view. BID23 observed that adversarial training in feature weights that depend on fewer input features (similar to FIG6). Additionally, it has been observed that for naturally trained RBF classifiers on MNIST, targeted adversarial attacks resemble images of the target class BID21. empirically observe a similar trade-off between the accuracy and robustness of standard models across different deep architectures on ImageNet. BID3 study an extreme multi-label problem and observe that for classes with relatively few examples, 1 -regularization (which corresponds to adversarial training for linear models) is helpful, while for classes with more samples, it is harmful to the model accuracy. Comparison of standard accuracies of models trained against an ∞ -bounded adversary as a function of the size of the training dataset. We observe that in the low-data regime, adversarial training has an effect similar to data augmentation and helps with generalization in certain cases (particularly on MNIST). However, in the limit of sufficient training data, we see that the standard accuracy of robust models is less than that of the standard model (ε train = 0), which supports the theoretical analysis in Section 2.1. Figure 7: Standard test accuracy of adversarially trained classifiers. The adversary used during training is constrained within some p -ball of radius ε train (details in Appendix A). We observe a consistent decrease in accuracy as the strength of the adversary increases. FIG0: Visualization of the gradient of the loss with respect to input features (pixels) for standard and adversarially trained networks for 10 randomly chosen samples, similar to those in Figure 2. Gradients are significantly more interpretable for adversarially trained networks -they align almost perfectly with perceptually relevant features. For MNIST, blue and red pixels denote positive and negative gradient regions respectively. For CIFAR10 and Restricted ImageNet we clip pixel to 3 standard deviations and scale to.
We show that adversarial robustness might come at the cost of standard classification performance, but also yields unexpected benefits.
737
scitldr
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks. Large deep neural networks have enabled breakthroughs in fields such as computer vision BID22, speech recognition, and reinforcement learning BID28. In most successful applications, these neural networks share two commonalities. First, they are trained as to minimize their average error over the training data, a learning rule also known as the Empirical Risk Minimization (ERM) principle BID35. Second, the size of these state-of-theart neural networks scales linearly with the number of training examples. For instance, the network of BID31 used 10 6 parameters to model the 5 · 10 4 images in the CIFAR-10 dataset, the network of BID30 Strikingly, a classical in learning theory BID36 tells us that the convergence of ERM is guaranteed as long as the size of the learning machine (e.g., the neural network) does not increase with the number of training data. Here, the size of a learning machine is measured in terms of its number of parameters or, relatedly, its VC-complexity BID16.This contradiction challenges the suitability of ERM to train our current neural network models, as highlighted in recent research. On the one hand, ERM allows large neural networks to memorize (instead of generalize from) the training data even in the presence of strong regularization, or in classification problems where the labels are assigned at random. On the other hand, neural networks trained with ERM change their predictions drastically when evaluated on examples just outside the training distribution BID33, also known as adversarial examples. This evidence suggests that ERM is unable to explain or provide generalization on testing distributions that differ only slightly from the training data. However, what is the alternative to ERM?The method of choice to train on similar but different examples to the training data is known as data augmentation BID29, formalized by the Vicinal Risk Minimization (VRM) principle BID3. In VRM, human knowledge is required to describe a vicinity or neighborhood around each example in the training data. Then, additional virtual examples can be drawn from the vicinity distribution of the training examples to enlarge the support of the training distribution. For instance, when performing image classification, it is common to define the vicinity of one image as the set of its horizontal reflections, slight rotations, and mild scalings. While data augmentation consistently leads to improved generalization BID29, the procedure is dataset-dependent, and thus requires the use of expert knowledge. Furthermore, data augmentation assumes that the examples in the vicinity share the same class, and does not model the vicinity relation across examples of different classes. 
Contribution Motivated by these issues, we introduce a simple and data-agnostic data augmentation routine, termed mixup (Section 2). In a nutshell, mixup constructs virtual training examples DISPLAYFORM0 where x i, x j are raw input vectors y = λy i + (1 − λ)y j, where y i, y j are one-hot label encodings (x i, y i) and (x j, y j) are two examples drawn at random from our training data, and λ ∈. Therefore, mixup extends the training distribution by incorporating the prior knowledge that linear interpolations of feature vectors should lead to linear interpolations of the associated targets. mixup can be implemented in a few lines of code, and introduces minimal computation overhead. Despite its simplicity, mixup allows a new state-of-the-art performance in the CIFAR-10, CIFAR-100, and ImageNet-2012 image classification datasets (Sections 3.1 and 3.2). Furthermore, mixup increases the robustness of neural networks when learning from corrupt labels (Section 3.4), or facing adversarial examples (Section 3.5). Finally, mixup improves generalization on speech (Sections 3.3) and tabular (Section 3.6) data, and can be used to stabilize the training of GANs (Section 3.7). The source-code necessary to replicate our CIFAR-10 experiments is available at:https://github.com/facebookresearch/mixup-cifar10.To understand the effects of various design choices in mixup, we conduct a thorough set of ablation study experiments (Section 3.8). The suggest that mixup performs significantly better than related methods in previous work, and each of the design choices contributes to the final performance. We conclude by exploring the connections to prior work (Section 4), as well as offering some points for discussion (Section 5). In supervised learning, we are interested in finding a function f ∈ F that describes the relationship between a random feature vector X and a random target vector Y, which follow the joint distribution P (X, Y). To this end, we first define a loss function that penalizes the differences between predictions f (x) and actual targets y, for examples (x, y) ∼ P. Then, we minimize the average of the loss function over the data distribution P, also known as the expected risk: DISPLAYFORM0 Unfortunately, the distribution P is unknown in most practical situations. Instead, we usually have access to a set of training data DISPLAYFORM1, where (x i, y i) ∼ P for all i = 1,..., n. Using the training data D, we may approximate P by the empirical distribution DISPLAYFORM2 where δ(x = x i, y = y i) is a Dirac mass centered at (x i, y i). Using the empirical distribution P δ, we can now approximate the expected risk by the empirical risk: DISPLAYFORM3 Learning the function f by minimizing is known as the Empirical Risk Minimization (ERM) principle BID35. While efficient to compute, the empirical risk monitors the behaviour of f only at a finite set of n examples. When considering functions with a number parameters comparable to n (such as large neural networks), one trivial way to minimize is to memorize the training data. Memorization, in turn, leads to the undesirable behaviour of f outside the training data BID33. However, the naïve estimate P δ is one out of many possible choices to approximate the true distribution P. 
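Written out, and writing ℓ for the loss, the three quantities introduced in this paragraph are the expected risk, the empirical distribution, and the empirical risk:

```latex
R(f)           = \mathbb{E}_{(x,y)\sim P}\,[\,\ell(f(x), y)\,]          % expected risk (population risk)
P_\delta(x, y) = \tfrac{1}{n}\sum_{i=1}^{n} \delta(x = x_i,\ y = y_i)   % empirical distribution of the training data
R_\delta(f)    = \tfrac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i)           % empirical risk minimized by ERM
```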
For instance, in the Vicinal Risk Minimization (VRM) principle BID3, the distribution P is approximated by DISPLAYFORM4 where ν is a vicinity distribution that measures the probability of finding the virtual feature-target pair (x,ỹ) in the vicinity of the training feature-target pair (x i, y i). In particular, BID3 considered Gaussian vicinities ν(x,ỹ|x i, DISPLAYFORM5, which is equivalent to augmenting the training data with additive Gaussian noise. To learn using VRM, we sample the vicinal distribution to construct a dataset DISPLAYFORM6, and minimize the empirical vicinal risk: DISPLAYFORM7 The contribution of this paper is to propose a generic vicinal distribution, called mixup: DISPLAYFORM8 where λ ∼ Beta(α, α), for α ∈ (0, ∞). In a nutshell, sampling from the mixup vicinal distribution produces virtual feature-target vectorsx DISPLAYFORM9 where (x i, y i) and (x j, y j) are two feature-target vectors drawn at random from the training data, and λ ∈. The mixup hyper-parameter α controls the strength of interpolation between feature-target pairs, recovering the ERM principle as α → 0.The implementation of mixup training is straightforward, and introduces a minimal computation overhead. FIG2 shows the few lines of code necessary to implement mixup training in PyTorch. Finally, we mention alternative design choices. First, in preliminary experiments we find that convex combinations of three or more examples with weights sampled from a Dirichlet distribution does not provide further gain, but increases the computation cost of mixup. Second, our current implementation uses a single data loader to obtain one minibatch, and then mixup is applied to the same minibatch after random shuffling. We found this strategy works equally well, while reducing I/O requirements. Third, interpolating only between inputs with equal label did not lead to the performance gains of mixup discussed in the sequel. More empirical comparison can be found in Section 3.8.What is mixup doing? The mixup vicinal distribution can be understood as a form of data augmentation that encourages the model f to behave linearly in-between training examples. We argue that this linear behaviour reduces the amount of undesirable oscillations when predicting outside the training examples. Also, linearity is a good inductive bias from the perspective of Occam's razor, since it is one of the simplest possible behaviors. FIG2 shows that mixup leads to decision boundaries that transition linearly from class to class, providing a smoother estimate of uncertainty. FIG4 illustrate the average behaviors of two neural network models trained on the CIFAR-10 dataset using ERM and mixup. Both models have the same architecture, are trained with the same procedure, and are evaluated at the same points in-between randomly sampled training data. The model trained with mixup is more stable in terms of model predictions and gradient norms in-between training samples. We evaluate mixup on the ImageNet-2012 classification dataset BID27. This dataset contains 1.3 million training images and 50,000 validation images, from a total of 1,000 classes. For training, we follow standard data augmentation practices: scale and aspect ratio distortions, random crops, and horizontal flips BID13. During evaluation, only the 224 × 224 central crop of each image is tested. 
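Since the code figure referenced in Section 2 is not reproduced here, the following PyTorch sketch shows the minibatch-level mixup update consistent with the description above: draw λ ~ Beta(α, α), mix the minibatch with a shuffled copy of itself, and weight the loss accordingly. Names such as `model`, `loader`, and `optimizer` are placeholders.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=1.0):
    """Return mixed inputs, the pair of targets, and the mixing weight lam."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)   # shuffle the same minibatch
    x_mixed = lam * x + (1 - lam) * x[index]
    return x_mixed, y, y[index], lam

for x, y in loader:
    x_mixed, y_a, y_b, lam = mixup_batch(x, y, alpha=1.0)
    pred = model(x_mixed)
    # Equivalent to cross-entropy against the mixed one-hot target lam*y_a + (1-lam)*y_b.
    loss = lam * F.cross_entropy(pred, y_a) + (1 - lam) * F.cross_entropy(pred, y_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Mixing a shuffled copy of the same minibatch matches the single-data-loader strategy described above and avoids a second pass over the data.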
We use mixup and ERM to train several state-of-the-art ImageNet-2012 classification models, and report both top-1 and top-5 error rates in For all the experiments in this section, we use data-parallel distributed training in Caffe2 1 with a minibatch size of 1,024. We use the learning rate schedule described in BID13. Specifically, the learning rate is increased linearly from 0.1 to 0.4 during the first 5 epochs, and it is then divided by 10 after 30, 60 and 80 epochs when training for 90 epochs; or after 60, 120 and 180 epochs when training for 200 epochs. For mixup, we find that α ∈ [0.1, 0.4] leads to improved performance over ERM, whereas for large α, mixup leads to underfitting. We also find that models with higher capacities and/or longer training runs are the ones to benefit the most from mixup. For example, when trained for 90 epochs, the mixup variants of ResNet-101 and ResNeXt-101 obtain a greater improvement (0.5% to 0.6%) over their ERM analogues than the gain of smaller models such as ResNet-50 (0.2%). When trained for 200 epochs, the top-1 error of the mixup variant of ResNet-50 is further reduced by 1.2% compared to the 90 epoch run, whereas its ERM analogue stays the same. We conduct additional image classification experiments on the CIFAR-10 and CIFAR-100 datasets to further evaluate the generalization performance of mixup. In particular, we compare ERM and mixup training for: PreAct ResNet-18 as implemented in BID25, WideResNet-28-10 (a) as implemented in BID40, and DenseNet BID20 as implemented in BID37. For DenseNet, we change the growth rate to 40 to follow the DenseNet-BC-190 specification from BID20. For mixup, we fix α = 1, which in interpolations λ uniformly distributed between zero and one. All models are trained on a single Nvidia Tesla P100 GPU using PyTorch 2 for 200 epochs on the training set with 128 examples per minibatch, and evaluated on the test set. Learning rates start at 0.1 and are divided by 10 after 100 and 150 epochs for all models except WideResNet. For WideResNet, we follow BID39 and divide the learning rate by 10 after 60, 120 and 180 epochs. Weight decay is set to 10. We do not use dropout in these experiments. We summarize our in FIG5. In both CIFAR-10 and CIFAR-100 classification problems, the models trained using mixup significantly outperform their analogues trained with ERM. As seen in FIG5, mixup and ERM converge at a similar speed to their best test errors. Note that the DenseNet models in BID20 were trained for 300 epochs with further learning rate decays scheduled at the 150 and 225 epochs, which may explain the discrepancy the performance of DenseNet reported in FIG5 and the original of BID20. extract normalized spectrograms from the original waveforms at a sampling rate of 16 kHz. Next, we zero-pad the spectrograms to equalize their sizes at 160 × 101. For speech data, it is reasonable to apply mixup both at the waveform and spectrogram levels. Here, we apply mixup at the spectrogram level just before feeding the data to the network. For this experiment, we compare a LeNet BID23 ) and a VGG-11 BID30 architecture, each of them composed by two convolutional and two fully-connected layers. We train each model for 30 epochs with minibatches of 100 examples, using Adam as the optimizer BID21. Training starts with a learning rate equal to 3 × 10 DISPLAYFORM0 and is divided by 10 every 10 epochs. 
For mixup, we use a warm-up period of five epochs where we train the network on original training examples, since we find it speeds up initial convergence. TAB6 shows that mixup outperforms ERM on this task, specially when using VGG-11, the model with larger capacity. Following, we evaluate the robustness of ERM and mixup models against randomly corrupted labels. We hypothesize that increasing the strength of mixup interpolation α should generate virtual examples further from the training examples, making memorization more difficult to achieve. In particular, it should be easier to learn interpolations between real examples compared to memorizing interpolations involving random labels. We adapt an open-source implementation BID42 to generate three CIFAR-10 training sets, where 20%, 50%, or 80% of the labels are replaced by random noise, respectively. All the test labels are kept intact for evaluation. Dropout BID32 ) is considered the state-of-the-art method for learning with corrupted labels BID1. Thus, we compare in these experiments mixup, dropout, mixup + dropout, and ERM. For mixup, we choose α ∈ {1, 2, 8, 32}; for dropout, we add one dropout layer in each PreAct block after the ReLU activation layer between two convolution layers, as suggested in BID39. We choose the dropout probability p ∈ {0.5, 0.7, 0.8, 0.9}. For the combination of mixup and dropout, we choose α ∈ {1, 2, 4, 8} and p ∈ {0.3, 0.5, 0.7}. These experiments use the PreAct ResNet-18 model implemented in BID25. All the other settings are the same as in Section 3.2.We summarize our in TAB3, where we note the best test error achieved during the training session, as well as the final test error after 200 epochs. To quantify the amount of memorization, we also evaluate the training errors at the last epoch on real labels and corrupted labels. As the training progresses with a smaller learning rate (e.g. less than 0.01), the ERM model starts to overfit the corrupted labels. When using a large probability (e.g. 0.7 or 0.8), dropout can effectively reduce overfitting. mixup with a large α (e.g. 8 or 32) outperforms dropout on both the best and last epoch test errors, and achieves lower training error on real labels while remaining resistant to noisy labels. Interestingly, mixup + dropout performs the best of all, showing that the two methods are compatible. One undesirable consequence of models trained using ERM is their fragility to adversarial examples BID33. Adversarial examples are obtained by adding tiny (visually imperceptible) perturbations to legitimate examples in order to deteriorate the performance of the model. The adversarial noise is generated by ascending the gradient of the loss surface with respect to the legitimate example. Improving the robustness to adversarial examples is a topic of active research. Among the several methods aiming to solve this problem, some have proposed to penalize the norm of the Jacobian of the model to control its Lipschitz constant BID9 BID6 BID2 BID18. Other approaches perform data augmentation by producing and training on adversarial examples BID12. Unfortunately, all of these methods add significant computational overhead to ERM. Here, we show that mixup can significantly improve the robustness of neural networks without hindering the speed of ERM by penalizing the norm of the gradient of the loss w.r.t a given input along the most plausible directions (e.g. the directions to other training points). 
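This behaviour in-between training points can be measured directly; the sketch below evaluates the loss and the input-gradient norm at convex combinations of random training pairs for an ERM-trained and a mixup-trained network. The names `erm_model`, `mixup_model`, and `loader` are placeholders.

```python
import torch
import torch.nn.functional as F

def in_between_stats(model, x_a, y_a, x_b, lambdas):
    """Loss and input-gradient norm at lam*x_a + (1-lam)*x_b, measured against the labels of x_a."""
    stats = []
    for lam in lambdas:
        x = (lam * x_a + (1 - lam) * x_b).requires_grad_(True)
        loss = F.cross_entropy(model(x), y_a)
        grad, = torch.autograd.grad(loss, x)
        stats.append((float(lam), loss.item(), grad.flatten(1).norm(dim=1).mean().item()))
    return stats

# Compare an ERM-trained and a mixup-trained network on the same random pairs.
x_a, y_a = next(iter(loader))
x_b = x_a[torch.randperm(x_a.size(0))]
for name, net in [("ERM", erm_model), ("mixup", mixup_model)]:
    print(name, in_between_stats(net, x_a, y_a, x_b, lambdas=torch.linspace(0, 1, 5)))
```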
Indeed, FIG4 shows that mixup in models having a smaller loss and gradient norm between examples compared to vanilla ERM.To assess the robustness of mixup models to adversarial examples, we use three ResNet-101 models: two of them trained using ERM on ImageNet-2012, and the third trained using mixup. In the first set of experiments, we study the robustness of one ERM model and the mixup model against white box attacks. That is, for each of the two models, we use the model itself to generate adversarial examples, either using the Fast Gradient Sign Method (FGSM) or the Iterative FGSM (I-FGSM) methods BID12, allowing a maximum perturbation of = 4 for every pixel. For I-FGSM, we use 10 iterations with equal step size. In the second set of experiments, we evaluate robustness against black box attacks. That is, we use the first ERM model to produce adversarial examples using FGSM and I-FGSM. Then, we test the robustness of the second ERM model and the mixup model to these examples. The of both settings are summarized in TAB4.For the FGSM white box attack, the mixup model is 2.7 times more robust than the ERM model in terms of Top-1 error. For the FGSM black box attack, the mixup model is 1.25 times more robust than the ERM model in terms of Top-1 error. Also, while both mixup and ERM are not robust to white box I-FGSM attacks, mixup is about 40% more robust than ERM in the black box I-FGSM setting. Overall, mixup produces neural networks that are significantly more robust than ERM against adversarial examples in white box and black settings without additional overhead compared to ERM. ERM GAN mixup GAN (α = 0.2) Figure 5: Effect of mixup on stabilizing GAN training at iterations 10, 100, 1000, 10000, and 20000. To further explore the performance of mixup on non-image data, we performed a series of experiments on six arbitrary classification problems drawn from the UCI dataset BID24. The neural networks in this section are fully-connected, and have two hidden layers of 128 ReLU units. The parameters of these neural networks are learned using Adam BID21 with default hyper-parameters, over 10 epochs of mini-batches of size 16. TAB6 shows that mixup improves the average test error on four out of the six considered datasets, and never underperforms ERM. Generative Adversarial Networks, also known as GANs, are a powerful family of implicit generative models. In GANs, a generator and a discriminator compete against each other to model a distribution P. On the one hand, the generator g competes to transform noise vectors z ∼ Q into fake samples g(z) that resemble real samples x ∼ P. On the other hand, the discriminator competes to distinguish between real samples x and fake samples g(z). Mathematically, training a GAN is equivalent to solving the optimization problem DISPLAYFORM0 where is the binary cross entropy loss. Unfortunately, solving the previous min-max equation is a notoriously difficult optimization problem BID10, since the discriminator often provides the generator with vanishing gradients. We argue that mixup should stabilize GAN training because it acts as a regularizer on the gradients of the discriminator, akin to the binary classifier in FIG2. Then, the smoothness of the discriminator guarantees a stable source of gradient information to the generator. The mixup formulation of GANs is: DISPLAYFORM1 ), λ). Figure 5 illustrates the stabilizing effect of mixup the training of GAN (orange samples) when modeling two toy datasets (blue samples). 
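A sketch of the white-box FGSM evaluation described above (I-FGSM repeats the signed-gradient step with projection back onto the ε-ball; for the black-box evaluation, the adversarial examples would instead be generated with a separately trained ERM model). Assuming inputs scaled to [0, 1], ε = 4/255 corresponds to the maximum per-pixel perturbation of 4 used above; model and loader names are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step Fast Gradient Sign Method attack."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach().clamp(0, 1)

def robust_accuracy(model, loader, eps=4 / 255):
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# White-box setting: each model is attacked with its own gradients.
print("ERM  :", robust_accuracy(erm_model, test_loader))
print("mixup:", robust_accuracy(mixup_model, test_loader))
```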
The neural networks in these experiments are fullyconnected and have three hidden layers of 512 ReLU units. The generator network accepts twodimensional Gaussian noise vectors. The networks are trained for 20,000 mini-batches of size 128 using the Adam optimizer with default parameters, where the discriminator is trained for five iterations before every generator iteration. The training of mixup GANs seems promisingly robust to hyper-parameter and architectural choices. mixup is a data augmentation method that consists of only two parts: random convex combination of raw inputs, and correspondingly, convex combination of one-hot label encodings. However, there are several design choices to make. For example, on how to augment the inputs, we could have chosen to interpolate the latent representations (i.e. feature maps) of a neural network, and we could have chosen to interpolate only between the nearest neighbors, or only between inputs of the same class. When the inputs to interpolate come from two different classes, we could have chosen to assign a single label to the synthetic input, for example using the label of the input that weights more in the convex combination. To compare mixup with these alternative possibilities, we run a set of ablation study experiments using the PreAct ResNet-18 architecture on the CIFAR-10 dataset. Specifically, for each of the data augmentation methods, we test two weight decay settings (10 which works well for ERM). All the other settings and hyperparameters are the same as reported in Section 3.2.To compare interpolating raw inputs with interpolating latent representations, we test on random convex combination of the learned representations before each residual block (denoted Layer 1-4) or before the uppermost "average pooling + fully connected" layer (denoted Layer 5). To compare mixing random pairs of inputs (RP) with mixing nearest neighbors (KNN), we first compute the 200 nearest neighbors for each training sample, either from the same class (SC) or from all the classes (AC). Then during training, for each sample in a minibatch, we replace the sample with a synthetic sample by convex combination with a random draw from its nearest neighbors. To compare mixing all the classes (AC) with mixing within the same class (SC), we convex combine a minibatch with a random permutation of its sample index, where the permutation is done in a per-batch basis (AC) or a per-class basis (SC). To compare mixing inputs and labels with mixing inputs only, we either use a convex combination of the two one-hot encodings as the target, or select the one-hot encoding of the closer training sample as the target. For label smoothing, we follow BID34 and use 10 as the target for incorrect classes, and 1 − 9 10 as the target for the correct class. Adding Gaussian noise to inputs is used as another baseline. We report the median test errors of the last 10 epochs. Results are shown in TAB8.From the ablation study experiments, we have the following observations. First, mixup is the best data augmentation method we test, and is significantly better than the second best method (mix input + label smoothing). Second, the effect of regularization can be seen by comparing the test error with a small weight decay. For example, for ERM a large weight decay works better, whereas for mixup a small weight decay is preferred, confirming its regularization effects. 
We also see an increasing advantage of large weight decay when interpolating in higher layers of latent representations, indicating decreasing strength of regularization. Among all the input interpolation methods, mixing random pairs from all classes (AC + RP) has the strongest regularization effect. Label smoothing and adding Gaussian noise have a relatively small regularization effect. Finally, we note that the SMOTE algorithm BID4 does not lead to a noticeable gain in performance. Data augmentation lies at the heart of all successful applications of deep learning, ranging from image classification BID22 to speech recognition BID14 BID0. In all cases, substantial domain knowledge is leveraged to design suitable data transformations leading to improved generalization. In image classification, for example, one routinely uses rotation, translation, cropping, resizing, flipping BID23 BID30, and random erasing BID43 to enforce visually plausible invariances in the model through the training data. Similarly, in speech recognition, noise injection is a prevalent practice to improve the robustness and accuracy of the trained models BID0.More related to mixup, BID4 propose to augment the rare class in an imbalanced dataset by interpolating the nearest neighbors; BID8 show that interpolation and extrapolation the nearest neighbors of the same class in feature space can improve generalization. However, their proposals only operate among the nearest neighbors within a certain class at the input / feature level, and hence does not account for changes in the corresponding labels. Recent approaches have also proposed to regularize the output distribution of a neural network by label smoothing BID34, or penalizing high-confidence softmax distributions BID26. These methods bear similarities with mixup in the sense that supervision depends on multiple smooth labels, rather than on single hard labels as in traditional ERM. However, the label smoothing in these works is applied or regularized independently from the associated feature values.mixup enjoys several desirable aspects of previous data augmentation and regularization schemes without suffering from their drawbacks. Like the method of , it does not require significant domain knowledge. Like label smoothing, the supervision of every example is not overly dominated by the ground-truth label. Unlike both of these approaches, the mixup transformation establishes a linear relationship between data augmentation and the supervision signal. We believe that this leads to a strong regularizer that improves generalization as demonstrated by our experiments. The linearity constraint, through its effect on the derivatives of the function approximated, also relates mixup to other methods such as Sobolev training of neural networks BID7 or WGAN-GP BID15. We have proposed mixup, a data-agnostic and straightforward data augmentation principle. We have shown that mixup is a form of vicinal risk minimization, which trains on virtual examples constructed as the linear interpolation of two random examples from the training set and their labels. Incorporating mixup into existing training pipelines reduces to a few lines of code, and introduces little or no computational overhead. Throughout an extensive evaluation, we have shown that mixup improves the generalization error of state-of-the-art models on ImageNet, CIFAR, speech, and tabular datasets. 
Furthermore, mixup helps to combat memorization of corrupt labels, sensitivity to adversarial examples, and instability in adversarial training. In our experiments, the following trend is consistent: with increasingly large α, the training error on real data increases, while the generalization gap decreases. This sustains our hypothesis that mixup implicitly controls model complexity. However, we do not yet have a good theory for understanding the 'sweet spot' of this bias-variance trade-off. For example, in CIFAR-10 classification we can get very low training error on real data even when α → ∞ (i.e., training only on averages of pairs of real examples), whereas in ImageNet classification, the training error on real data increases significantly with α → ∞. Based on our ImageNet and Google commands experiments with different model architectures, we conjecture that increasing the model capacity would make training error less sensitive to large α, hence giving mixup a more significant advantage. mixup also opens up several possibilities for further exploration. First, is it possible to make similar ideas work on other types of supervised learning problems, such as regression and structured prediction? While generalizing mixup to regression problems is straightforward, its application to structured prediction problems such as image segmentation remains less obvious. Second, can similar methods prove helpful beyond supervised learning? The interpolation principle seems like a reasonable inductive bias which might also help in unsupervised, semi-supervised, and reinforcement learning. Can we extend mixup to feature-label extrapolation to guarantee a robust model behavior far away from the training data? Although our discussion of these directions is still speculative, we are excited about the possibilities mixup opens up, and hope that our observations will prove useful for future development.
Training on convex combinations between random training examples and their labels improves generalization in deep neural networks
738
scitldr
We present a novel approach to spike sorting for high-density multielectrode probes using the Neural Clustering Process (NCP), a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering. To optimally encode spike waveforms for clustering, we extended NCP by adding a convolutional spike encoder, which is learned end-to-end with the NCP network. Trained purely on labeled synthetic spikes from a simple generative model, the NCP spike sorting model shows promising performance for clustering multi-channel spike waveforms. The model provides higher clustering quality than an alternative Bayesian algorithm, finds more spike templates with clear receptive fields on real data and recovers more ground truth neurons on hybrid test data compared to a recent spike sorting algorithm. Furthermore, NCP is able to handle the clustering uncertainty of ambiguous small spikes by GPU-parallelized posterior sampling. The source code is publicly available. Large-scale neuronal population recordings using high-density multi-electrode arrays (MEA) are at the forefront of current progress in understanding neural circuit dynamics. In MEA recordings, each electrode channel reads extracellular signals from many neurons, and each neuron is recorded by multiple nearby electrodes. A key step in the analysis of MEA data is spike sorting, which converts the raw electrical signal into a set of neural spike trains belonging to individual neurons. As MEAs grow in scale and popularity, there is a new urgency in improving spike sorting performance. A typical spike sorting pipeline consists of three steps. The spike detection step extracts putative spike events from noisy recordings. The clustering step groups similar spike waveforms into clusters, each representing a putative neuron. To resolve colliding waveforms, a deconvolution step is often performed. Spike clustering is at the core of the pipeline, as the clustering performance determines both the accuracy of spike assignment and the quality of spike templates used for deconvolution. Spike clustering, however, poses significant challenges: Spike waveforms form highly nonGaussian clusters in spatial and temporal dimensions, and it is unclear what are the optimal features for clustering. It is unknown a priori how many clusters there are. Although existing methods perform well on spikes with high signal-to-noise ratios (SNR), there remain significant challenges in the lower-SNR regime with increased clustering uncertainty. Fully-Bayesian approaches proposed to handle this uncertainty do not scale to large datasets due to expensive Gibbs sampling. To address these challenges, we propose a novel approach to spike clustering using the recently introduced Neural Clustering Process (NCP) (Figure 1). NCP is based on a neural architecture that performs scalable amortized approximate Bayesian clustering. Rather than selecting arbitrary features for clustering, the spike waveforms are encoded with a convolutional neural network (ConvNet), which is learned end-to-end jointly with the NCP network to ensure optimal feature encoding. Using a variable-input softmax function, NCP is able to compute full posterior distributions on cluster labels and the number of clusters, without assuming a fixed or maximum number of clusters. 
NCP allows for efficient probabilistic clustering by GPU-parallelized posterior sampling, which is particularly useful for handling the clustering uncertainty of ambiguous small spikes. The computational cost of NCP training can be highly amortized, since neuroscientists often sort spikes form many statistically similar datasets. We trained NCP for spike clustering using synthetic spikes from a simple yet effective generative model that mimics the distribution of real spikes, and evaluated the performance on labeled synthetic data, unlabeled real data, and hybrid test data with partial ground truth. We show that using NCP for spike sorting provides high clustering quality, matches or outperforms a recent spike sorting algorithm, and handles clustering uncertainty by efficiently producing multiple plausible clustering configurations. These show substantial promise for incorporating NCP into a production-scale spike sorting pipeline.. The model is composed by the deep networks h, g, q, f. Bottom left: After assigning the cluster labels c 1:n−1, each possible discrete value k for c n gives a different symmetry-invariant encoding of x 1:n into the vector G k, using the functions h and g. The remaining, yet-unassigned points x n+1:N are encoded by q and summed into the vector Q. Bottom right: Each pair G k, Q is mapped by f into a real number (logit), which in turn is mapped into the multinomial distribution q θ (c n |c 1:n−1, x) via a variable-input softmax. 2 Spike Sorting using the Neural Clustering Process Data preprocessing. Training and test data come from the retinal recordings in using a 512-channel 2D hexagonal MEA with 20 kHz sampling rate. After spike detection, each multi-channel spike waveform was assigned to the channel where the waveform has the maximum peak-to-peak (PTP) amplitude (i.e. the center channel, ch0). This partitioned the recording data by channel such that each center-channel-based partition only contains multi-channel spike waveforms centered at that channel. Each spike waveform is represented as a 7 × 32 array containing the 32 time steps surrounding the peak from the center channel and the same time window from the 6 immediate neighbor channels (Figure 1 top). These 7 × 32 arrays are the spikes on which clustering was performed. Neural architecture for NCP spike sorting. The NCP architecture contains four neural networks, h, q, g, f, as shown in Figure 1 (bottom). We refer to for the detailed formulation and notations of NCP. To extract useful features from the spatial-temporal patterns of spike waveforms, we use a 1D ConvNet as the h and q encoder functions. The convolution is applied along the time axis, with each electrode channel treated as a feature dimension. The ConvNet uses a ResNet architecture with 4 residual blocks, each having 32, 64, 128, 256 feature maps (kernel size = 3, stride =). The last block is followed by an averaged pooling layer and a final linear layer. The outputs of the ResNet encoder are the h i and q i vectors of NCP, i.e. The other two functions, g and f, are multilayer perceptrons identical to those in the 2D Gaussian example in. Training NCP using synthetic data. To train NCP for spike clustering, we created synthetic labeled training data (Figure 2) using a mixture of finite mixtures (MFM) generative model of noisy spike waveforms that mimics the distribution of real spikes: Here, N is the number of spikes between. 
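Before the generative-model details, which continue below, here is a minimal sketch of the convolutional spike encoder described above, written in PyTorch. Only the feature-map widths (32, 64, 128, 256) and the kernel size are taken from the text; the residual-block internals, strides, and embedding size are assumptions.

import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    # One residual block of the spike encoder (internals are assumed, not specified above).
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv1 = nn.Conv1d(c_in, c_out, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(c_out, c_out, kernel_size=3, padding=1)
        self.proj = nn.Conv1d(c_in, c_out, kernel_size=1) if c_in != c_out else nn.Identity()
        self.act = nn.ReLU()
    def forward(self, x):
        h = self.act(self.conv1(x))
        h = self.conv2(h)
        return self.act(h + self.proj(x))

class SpikeEncoder(nn.Module):
    # 1D ResNet over the time axis; the 7 electrode channels are the input feature maps.
    def __init__(self, n_channels=7, emb_dim=256):
        super().__init__()
        self.blocks = nn.Sequential(
            ResBlock1d(n_channels, 32),
            ResBlock1d(32, 64),
            ResBlock1d(64, 128),
            ResBlock1d(128, 256),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(256, emb_dim)
    def forward(self, x):  # x: (batch, 7, 32) spike waveforms
        h = self.pool(self.blocks(x)).squeeze(-1)
        return self.fc(h)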
The number of clusters K is sampled from a shifted Poisson distribution with λ = 2 so that each channel has on average 3 clusters. π 1:K represents the proportion of each cluster and is sampled from a Dirichlet distribution with α 1:K = 1. The training spike templates µ k ∈ R 7×32 are sampled from a reservoir of 957 ground-truth templates not present in any test data, with the temporal axis slightly jittered by random resampling. Finally, each waveform x i is obtained by adding to µ ci Gaussian noise with covariance given by the Kronecker product of spatial and temporal correlation matrices estimated from the training data. This method creates spatially and temporally correlated noise patterns similar to real data (Figure 2). We trained NCP for 20000 iterations on a GPU with a batch size of 32 to optimize the NLL loss by the Adam optimizer. A learning rate of 0.0001 was used (reduced by half at 10k and 17k iterations). Probabilistic spike clustering using NCP. At inference time, we fed the 7 x 32 arrays of spike waveforms to NCP, and performed GPU-parallelized posterior sampling of cluster labels (Figure 1). Using beam search with a beam size of 150, we were able to efficiently sample 150 high-likelihood clustering configurations for 2000 spikes in less than 10 seconds on a single GPU. After clustering, we obtained a spike template for each cluster as the average shape of the spike waveforms. The clustering configuration with the highest probability was used in most experiments. We compared NCP spike sorting against two other methods: Variational inference on a Gaussian Mixture of Finite Mixtures (vGMFM), which is an alternative Bayesian clustering algorithm, and Kilosort, a state-of-the-art spike sorting pipeline described in. For vGMFM, the first 5 principal components of the spike waveforms from each channel were used as the input features. For Kilosort, we run the entire pipeline using the Kilosort2 package. Synthetic Data. We run NCP and vGMFM on 20 sets of synthetic test data each with 500, 1000, and 2000 spikes. As the ground-truth cluster labels are known, we compared the clustering quality using Adjusted Mutual Information (AMI). As shown in Figure 3, The AMI of NCP is on average 11% higher than vGMFM, showing better performance of NCP on synthetic data. Real Data. We run NCP, vGMFM and Kilosort on a 49-channel, 20-minute retina recording with white noise stimulus, and extracted the averaged spike template of each cluster (i.e. putative neuron). For NCP and vGMFM, we performed clustering on 2000 randomly sampled spikes from each channel (clusters containing less than 20 spikes were discarded), and assigned all remaining spikes to a cluster based on the L2 distance to the cluster centers. Then, a final set of unique spike templates were computed, and each detected spike was assigned to one of the templates. Example clustering of NCP and vGMFM in Figure 4 (top and bottom-left) show that NCP produces clean clusters with visually more distinct spike waveforms compared to vGMFM. As real data do not come with ground-truth cluster labels, we compared the receptive fields (RFs) extracted by NCP and Kilosort; the RF is computed for each cluster as the spike-triggered average of the stimulus (spatiotemporal white noise in this experiment). A clearly demarcated RF provides encouraging evidence that the spike template corresponds to a real neuron. 
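The receptive-field computation mentioned above is a standard spike-triggered average of the white-noise stimulus; the sketch below assumes a (frames, height, width) stimulus layout and a fixed number of preceding frames, neither of which is specified above.

import numpy as np

def spike_triggered_average(stimulus, spike_frames, n_lags=30):
    # stimulus: (T, H, W) spatiotemporal white-noise movie.
    # spike_frames: frame indices at which the cluster's spikes occurred.
    # Returns an (n_lags, H, W) receptive-field estimate.
    sta = np.zeros((n_lags,) + stimulus.shape[1:], dtype=np.float64)
    count = 0
    for t in spike_frames:
        if t >= n_lags:
            sta += stimulus[t - n_lags:t]
            count += 1
    return sta / max(count, 1)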
After extracting spike templates and RFs from each pipeline, we matched pairs of templates from different methods by L-infinity distance and pairs of RFs by cosine distance. Side-by-side comparisons of 5 example RF pairs are shown in Figure 4 (bottom-right). See Figure 7 in Appendix for more examples. Overall, the NCP pipeline found 103 templates with clear RFs, among which 48 were not found in Kilosort. Kilosort found 72 and 17 of them were not found by NCP (Figure 4 bottom-right). This shows that NCP performs at least as well as Kilosort, and finds many additional spike templates with clear RFs. Hybrid Data. We compared NCP against vGMFM and Kilosort on a hybrid recording with partial ground truth as in. 20 ground-truth spike templates were manually selected from a 49-channel test recording and injected into another test dataset according to the original spike times. This approach tests the clustering performance on realistic recordings with complex noise and colliding spikes. As shown in Figure 5, NCP recovered 13 of the 20 injected ground-truth templates, outperforming both Kilosort and vGMFM, which recovered 8 and 6, respectively. Probabilistic clustering of ambiguous small spikes. Spike sorting of small-amplitude waveforms has been challenging due to the low SNR and increased uncertainty of cluster assignment. Traditional methods could not handle the uncertainty and previous fully-Bayesian approaches do not scale. By efficient GPU-parallelized sampling of cluster labels from the posterior, NCP is able to handle the clustering uncertainty by producing multiple plausible clustering . Figure 6 shows examples where NCP separates spike clusters with amplitude as low as 3-4× the standard deviation of the noise into plausible units that are not mere scaled version of each other but have distinct shapes on different channels. Conclusions. Our show that NCP spike sorting achieves high clustering quality, matches or outperforms a state-of-the-art method, and is able to handle clustering uncertainty by efficient posterior sampling. Future directions include more realistic generative models, better spike encoders that utilize information from distant channels, and integrating NCP into a standard spike sorting pipeline.
We present a novel approach to spike sorting using the Neural Clustering Process (NCP), a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering.
739
scitldr
The goal of the paper is to propose an algorithm for learning the most generalizable solution from given training data. It is shown that Bayesian approach leads to a solution that dependent on statistics of training data and not on particular samples. The solution is stable under perturbations of training data because it is defined by an integral contribution of multiple maxima of the likelihood and not by a single global maximum. Specifically, the Bayesian probability distribution of parameters (weights) of a probabilistic model given by a neural network is estimated via recurrent variational approximations. Derived recurrent update rules correspond to SGD-type rules for finding a minimum of an effective loss that is an average of an original negative log-likelihood over the Gaussian distributions of weights, which makes it a function of means and variances. The effective loss is convex for large variances and non-convex in the limit of small variances. Among stationary solutions of the update rules there are trivial solutions with zero variances at local minima of the original loss and a single non-trivial solution with finite variances that is a critical point at the end of convexity of the effective loss in the mean-variance space. At the critical point both first- and second-order gradients of the effective loss w.r.t. means are zero. The empirical study confirms that the critical point represents the most generalizable solution. While the location of the critical point in the weight space depends on specifics of the used probabilistic model some properties at the critical point are universal and model independent. Finding a generalizable solution is a critical problem for any machine learning task. The ultimate goal of learning from the available ground truths is to make a good prediction for new data. The Bayesian method is a very powerful approach that gives a probabilistic measure of the ability of a proposed model to predict by estimating how well the model predicts known data. The accuracy of the predictions depends on how the found solution is able to overcome a sampling bias to avoid overfitting for given particular samples of training data. Specifically, in Bayesian method predictions of labels y for an input x are made by using a probabilistic model, for certainty a neural network, which defines a function parametrized by weights w that allows computing probabilities P (y|x, w) for each weight point. Each weight point contributes to predicted probabilities of labels P rob(y|x) in accordance with probability distribution of weights. The distribution of weights is learned from a known training data set {x n, y n ; n = 1..N} and its prior probability distribution P 0 (w) in the following way: P rob(y|x) = w P (y|x, w)P 0 (w) N n=1 P (y n |x n, w)/ w P 0 (w) N n=1 P (y n |x n, w)Here the predicted probability P rob(y|x) is an average of the model probability P (y|x, w) at a weight w over the learned weight distribution. To make predictions we are only interested in a method that allows to find averages in eq. FORMULA0 and not absolute values of the integrals. According to mean value theorem (Cauchy, also in Encyclopedia of ) values of the averages can be represented by a single point, which in our case means that there is a single point in the weight space w 0 that represents a of computing the integrals, so P rob(y|x) = P (y|x, w 0). That point w 0 is a solution of the training of the neural network. 
A standard approach to get the solution is a maximum likelihood method that finds a maximum of the integrand. However, there are some cases when the maximum likelihood fails to represent main contribution to the integral by weights. Consider this example: if log-likelihood for N data samples has a maximum at some weight point w 1, then in general its first derivative by weights is zero, second derivative is negative and proportional to N, so corresponding Gaussian integral by the weights is proportional to N −d/2, where d is number of weights. This will change if there is a flat maximum, which has not only first but also second and third derivatives equal to zero. In this case the integral is proportional to N −d/4. For large number of samples the flat maximum makes the most significant contribution to the integral by weights: DISPLAYFORM0 and DISPLAYFORM1. For a typical case when the number of weights d ∼ N and average sample probabilities at maxima are comparable O(P 1) ∼ O(P 2) the integral around flat maximum I 2 is always bigger than the integral around narrow maximum I 1, unless P 2 is zero. While in general a likelihood has a number of regular local maxima and no flat maximum the effect of integration over multiple frequent local maxima can in an effective flat maximum that defines a solution. We argue that any local or global maximum of likelihood gives a wrong solution that is not generalizable and so makes inaccurate predictions, because the locations for the global maximum and local maxima depend on specific samples in the training data and any modification of it by adding or removing samples will change the solution BID6. Instead we will show that there is another solution that more associated with properties of the distribution of training data and less with particular samples. The purpose of this paper is to show that the effective flat maximum always exists for specific parameters of prior weight distribution P 0 (w) (regularization parameters) and corresponding solution is the most generalizable solution that can be found in training. We show that the solution is a critical point in an effective loss that represents the of integration over the weights. In the next sections we derive the algorithm for the optimizer for finding the critical point solution and analyze properties of the solutions. The empirical study is outside of the scope of the paper and will be presented separately. For simplicity we use same notations for a vector of weights and its components, as well as corresponding parameters of distributions of weights because all weight components are independent in our consideration and it is clear from context when it is a vector or its component. We use a recurrent approach for estimating the integrals. First, we represent a probability of each training sample as a product of factors close to one, P (y|x, w) = (1 + 1/T ln P (y|x, w))T, where free parameter T 1 is a number of epochs: DISPLAYFORM0 then model each factor as a product of Gaussian distributions Q(w|µ, σ) one for each component of weight vector. 
For each iteration a running prior distribution of weights is updated by absorbing a single factor to produce a new prior DISPLAYFORM1, with Q 0 (w) = P 0 (w) and Q(w|µ, σ) = e DISPLAYFORM2 Under review as a conference paper at ICLR 2019Specifically, we do the following:First, let's enumerate all factors for all data samples DISPLAYFORM3 Then, under the integral by weights we use an identical re-writing for a product of a prior Q t (w) and one of the factors and a new prior Q t+1 (w) and normalization factor N t DISPLAYFORM4 where normalization factor DISPLAYFORM5 and distribution DISPLAYFORM6 Finally, we make an approximation by replacing ratio of distributions R t (w)/Q t+1 (w) by its mean for distribution Q t+1 which is equal to 1. Then the iterations are repeated until all factors from probabilities of data are replaced by a single final Gaussian distribution and some normalization factor. To minimize the introduced error on each iteration we select the means and variances of the new prior Q t+1 to minimize variance of the ratio R t /Q t+1. The variance is equal to DISPLAYFORM7 The lower bound of the variance is expressible via KL divergence BID3 ) of R t and Q t+1 DISPLAYFORM8 Finding the minimum of KL divergence is equivalent to minimizing the lower bound of the variance of the ratio of R t /Q t+1 which leads to equations DISPLAYFORM9 Then solving the above equations gives the update rules for means and variances of each weight component DISPLAYFORM10 where averages are defined as A(w) µ,σ = w Q(w|µ, σ)A(w) and averages of gradients by weights are equal to gradients of averages by means DISPLAYFORM11 Another useful identity allows to replace second gradient of (µ, σ)-average w.r.t mean on first gradient by variance DISPLAYFORM12 Under review as a conference paper at ICLR 2019For a full batch the log of probability of data in eqs. ln P (w) = N n ln P (y n |x n, w), while for a minibatch the sum goes over the size of the minibatch. In eqs. we used rescaled variances σ 2 → σ 2 /N so all gradients are normalized per data sample. With the rescaled variances the prior distribution of weights is a product over all weight dimensions d DISPLAYFORM13 The index t enumerates iterations over all minibatches in all epochs. Then for a minibatch size one the total number of iterations equals to a number of epochs T times number of data samples N in a training set. By recursively applying the update rules in eqs. FORMULA13 over N samples and T epochs we obtain the approximation of Bayesian integrals that allows to compute averages DISPLAYFORM14 It is important to emphasize that the averages are defined by distribution Q tmax (w) after a finite number of iterations t max, which for the minibatch one is equal to a product N × T and not by distribution at the infinite number of iterations Q ∞ (w). Number of epochs T controls the accuracy of the factorized representation and in practical computing any T larger than 10 or 100 is good enough. The prediction probability in eq. FORMULA0 is defined by weight w 0 that is a mean µ tmax of distribution Q tmax(w) P rob(y|x) ≈ P (y|x, µ tmax).That mean µ tmax is a final point of iterations in eqs.. The trajectory in mean-variance space is defined by starting point (µ 0, σ 0), the mean and variance of a prior distribution of weights in eq. which are regularization parameters: P 0 (w) = Q(w|µ 0, σ 0). Before going into detailed analysis of eqs. FORMULA13 let's formulate the in the form of the following statements. 
The update rules above are solving SGD-type optimization problem for an effective loss given by Gaussian average L(µ, σ) = − w Q(w|µ, σ) n ln P (y n |x n, w) for each mean-variance point (µ, σ).The following statements have been proved:1. The effective loss L(µ, σ) is convex for large variances σ 2. In particular, it is true for any neural network with ReLU activations and L1, L2 or cross entropy loss functions. For a convex effective loss its second-order gradient by mean is positive.2. The effective loss is converging to the original loss in the limit of small variances where it is generally non-convex (excluding completely trivial linear cases).3. For each mean there is a critical variance σ 2 c that separates convex effective loss from non-convex. At the critical variance second-order gradient of effective loss by mean is zero.4. There are trivial stationary solutions of the update rules that correspond to zero variances and zero gradients of loss w.r.t. weights. These solutions are unstable when training set is modified because they correspond to narrow minima that are changing drastically as data change BID6. Because second-order gradients by weights are large and positive, changes in loss are second-order by weights and changes in solutions are at least linear and often in jumps to a new location.5. There is a non-trivial stationary solution at critical point at the end of convexity where both firstorder and second-order gradients of effective loss are zero. That solution is much less sensitive to changes in data. It is responding by cubic changes in loss and quadratic changes in solutions and due to convexity there is no jumps. For that reason the critical point solution is the most generalizable solution as well as most stable against adversarial perturbations BID1.6. Trajectories in mean-variance space that follow from update rules show universal behavior in vicinity of the critical point. Analysis shows that not all trajectories are converging to the critical point. Typical trajectory that starts in convex area moves toward the critical point until it may cross to non-convex area then it moves away from critical point and finally ends as a trivial solution in a local minimum.7. Approximating Gaussian averaging by sampling in weight space in dropout method also known as "fast dropout" BID5 and "dropconnect" BID4. The update rules completely define the dropout rate for each weight component. With averaging via dropout the critical point and the convexity of effective loss exist only in an unreachable limit, nevertheless it allows to find a solution that is close to the critical point. To understand better an effect of averaging let's consider a simple polynomial case where loss l(w) as a function of weights has two symmetrical minima at points (−a, +a): DISPLAYFORM0 Then the effective loss is Gaussian average of the loss above L(µ, σ) = l(w) µ,σ, specifically DISPLAYFORM1 And the first and second gradients of the effective loss are DISPLAYFORM2 When variance is large, σ 2 > a 2 /3, second gradient of the effective loss L w.r.t. µ is positive and there is only one minimum at µ = 0. When variance is small σ 2 < a 2 /3, the effective loss has a maximum at µ = 0 and two minima at µ = ± √ a 2 − 3σ 2.When σ 2 = a 2 /3 there is a critical point at µ = 0 where both first and second gradients of the effective loss w.r.t. µ are zero: DISPLAYFORM3 ∂µ 2 = 0. By using the update rules from eqs. 
FORMULA13 where average log of probability is equal to the negative effective loss L = − ln P with first and second gradients defined above in eqs. FORMULA20 we can consider trajectories in the mean-variance space: DISPLAYFORM4 We can see that the critical point is a saddle point in the mean-variance space. Any trajectory that is missing a critical point after an infinite number of iterations ends in a local minimum of an effective loss with zero variance σ 2, which is an original local minimum. There are only two trajectories starting with µ = 0 from convex area (3σ 2 > a 2) and non-convex area (3σ 2 < a 2) that after a large enough number of iterations will always converge to the critical point. However, for a finite number of iterations there is an area of starting points (µ 0, σ 0) that defines trajectories with end points arbitrary close to a critical point. Let's consider a multilayered neural network where predictions y are computed via ReLU activations and loss function l(y) is a convex function of predictions. For each weight w predictions y are piecewise functions of the weight. Then second gradient of loss function by weight is DISPLAYFORM0 There are two terms in second gradient of loss by weight: first is always positive due to convexity of l(y) and second contains second gradient of predictions. Average of the first term is always positive. Average of the second term could be positive or negative. Due to piecewise weight dependency second gradient of predictions ∂ 2 y/∂w 2 is singular and not zero, however for any network with finite number of layers predictions y are linear functions of the weight when the value of the weight goes to infinity. For that reason second gradient of predictions for large values of the weight is zero and second gradient of loss by weight is positive for large weight values. If variance of Gaussian distribution of the weight is very large then averages are defined by contributions of large weight values and then average of the second gradient of loss by weight is positive. DISPLAYFORM1 That proves the convexity of the effective loss for large variances. On other side when the variance goes to zero the effective loss converges to original loss which generally non-convex. Because an effective loss is a continuous function of the variance there exists a critical value of the variance where second gradient of the effective loss may have zero at some mean. That means the mean-variance plane could be divided on two convex and non-convex areas. These areas are separated by the line on mean-variance plane where second gradient of the effective loss w.r.t. means is zero. The general properties of the critical point are universal due to the identity in eq.. Let's consider an expansion of the effective loss L(µ, σ) in a vicinity of a point µ c, σ c where both first gradients of L by mean and variance are zero: DISPLAYFORM2 Then using the identity in equation FORMULA15 we can see that 1 2 ∂ 2 L ∂µ 2 = a 2 + 3a 3 (µ − µ c) and a 2 = 0 because zero-order term by (µ − µ c) in ∂L ∂σ 2 is zero. Also it gives c = 3a 3. Due to the identity the universal structure of the effective loss at the critical point is as follows DISPLAYFORM3 Essentially, the condition makes a regular minimum of the effective loss to be a critical point. At critical point both first and second gradients by means of the effective loss are zero. At critical point the effective loss is a flat function of the means. 
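The two-minimum example can also be checked numerically. The sketch below assumes the quartic per-sample loss l(w) = (w² − a²)² introduced above and verifies both the closed-form Gaussian average and the critical variance σ²c = a²/3.

import numpy as np

def effective_loss(mu, sigma, a):
    # Closed-form Gaussian average of (w^2 - a^2)^2 for w ~ N(mu, sigma^2):
    # E[w^2] = mu^2 + s^2, E[w^4] = mu^4 + 6 mu^2 s^2 + 3 s^4.
    s2 = sigma ** 2
    return mu**4 + 6 * mu**2 * s2 + 3 * s2**2 - 2 * a**2 * (mu**2 + s2) + a**4

def effective_loss_mc(mu, sigma, a, n=1_000_000, seed=0):
    # Monte Carlo estimate of the same average, as a sanity check.
    w = np.random.default_rng(seed).normal(mu, sigma, size=n)
    return np.mean((w**2 - a**2) ** 2)

a = 1.0
print(effective_loss(0.3, 0.4, a), effective_loss_mc(0.3, 0.4, a))  # the two values should agree closely
# The second derivative of the effective loss w.r.t. mu at mu = 0 is 12*sigma^2 - 4*a^2:
# convex for sigma^2 > a^2/3, non-convex for sigma^2 < a^2/3, and zero at sigma_c^2 = a^2/3.
for s2 in (a**2 / 3 + 0.1, a**2 / 3, a**2 / 3 - 0.1):
    print(s2, 12 * s2 - 4 * a**2)

At σ² = a²/3 and µ = 0 both the first and second derivatives with respect to µ vanish, so the effective loss is locally flat in the mean, exactly as described above.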
That makes the solution stable when training data set is modified unlike any maximum log-likelihood solutions that changes when data changes. For that reason the critical point solution is most generalizable. Any available data set is a collection of samples from an unknown true distribution. The empirical loss is defined as loss per sample for sampled data for a given weight point. The expected loss is average over unknown true distribution of data at a given weight point. In a two-minimum model loss for a data point is l(w) = (w 2 − a 2) 2, where a is a sample specific parameter. The empirical loss per sample for m samples is DISPLAYFORM0 while the expected loss per data point for the same weight point is DISPLAYFORM1 The generalization error is a difference between an expected loss and empirical loss for the same solution w: DISPLAYFORM2 For a given weight point the expectation of the generalization error is zero: DISPLAYFORM3 However, we only can find solution for a minimum of the empirical loss. DISPLAYFORM4 Due to sampling a minimum loss solution for the empirical loss depends on specific samples. For that reason the expectation of the generalization error for a minimum of the empirical loss solution is not zero and is equal to DISPLAYFORM5 The expectation of the generalization error is positive which reflects overfitting and goes to zero when number of samples m goes to infinity. Now, let's compute generalization error for the effective loss for m data samples. The solution for empirical effective loss are found from equations DISPLAYFORM6 There are trivial solutions with zero variance σ 0 = 0, µ DISPLAYFORM7 And there is a critical point solution with non-zero variance: µ c = 0, 3σ DISPLAYFORM8 n. That solution gives the expected generalization error that is 3 times smaller than the expected generalization error for trivial solutions: m E (a 2 − E a 2) 2 /3 = E [GErr ef f (µ 0, σ 0)] /3.This example supports the claim of the paper that the critical point solution is more generalizable than a trivial minimum loss solution. In the paper we consider a learning of a predictive model from training data by approximately computing Bayesian integral over weights -the parameters of the model. By using recurrent variational approximations with Gaussian weight distributions we are able to find a solution -a single point in weight space that represents an effect of averaging over distribution of weights in the Bayesian integrals. We show that this approach leads to SGD-type optimization problem for an effective loss in meanvariance space. For each mean-variance point the effective loss is defined by average of the loglikelihood over Gaussian distribution at same mean-variance point. Due to averaging the effective loss and its gradients of any order are continuous function of means even for ReLU based neural networks. The recurrent update rules define trajectories in mean-variance space. Starting points of the trajectories are defined by regularization parameters, which are parameters of the Gaussian weight prior in Bayesian integrals. It is shown that there are two types of stationary solutions of the update rules. First solution type corresponds to local minima of the original loss or maxima of the log-likelihood. Second solution type is a critical point in mean-variance space that is a of the integration over multiple maxima of the log-likelihood. At the critical point both first and second gradient of the effective loss are zero. 
That leads to stability of the solution against perturbations of the training data set, whether due to the addition or removal of data samples or to the creation of adversarial examples.
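As a numerical illustration of the generalization-error example above, the following Monte Carlo sketch compares the expected generalization error of the trivial minimum-loss solution with that of the critical-point solution of the effective loss; the uniform distribution assumed for the per-sample parameter a, the training-set size, and all names are ours rather than the paper's.

import numpy as np

rng = np.random.default_rng(0)

def emp_loss(w, a):            # empirical loss per sample at weight w
    return np.mean((w**2 - a**2) ** 2)

def emp_eff_loss(mu, s2, a):   # empirical Gaussian-averaged (effective) loss
    return np.mean(mu**4 + 6 * mu**2 * s2 + 3 * s2**2 - 2 * a**2 * (mu**2 + s2) + a**4)

a_ref = rng.uniform(0.5, 1.5, size=1_000_000)     # assumed "true" distribution of a
Ea2, Ea4 = np.mean(a_ref**2), np.mean(a_ref**4)

def exp_loss(w):               # expected loss under the true distribution
    return w**4 - 2 * w**2 * Ea2 + Ea4

def exp_eff_loss(mu, s2):      # expected effective loss under the true distribution
    return mu**4 + 6 * mu**2 * s2 + 3 * s2**2 - 2 * Ea2 * (mu**2 + s2) + Ea4

m, trials = 8, 20000
g_trivial, g_critical = [], []
for _ in range(trials):
    a = rng.uniform(0.5, 1.5, size=m)             # one training set of m samples
    a2_bar = np.mean(a**2)
    w0 = np.sqrt(a2_bar)                          # trivial solution: sigma = 0, empirical-loss minimum
    g_trivial.append(exp_loss(w0) - emp_loss(w0, a))
    mu, s2 = 0.0, a2_bar / 3.0                    # critical-point solution: mu = 0, 3*sigma^2 = mean(a_n^2)
    g_critical.append(exp_eff_loss(mu, s2) - emp_eff_loss(mu, s2, a))
print(np.mean(g_trivial) / np.mean(g_critical))   # the analysis above predicts a ratio close to 3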
Proposed method for finding the most generalizable solution that is stable w.r.t. perturbations of training data.
740
scitldr
Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car. Endowing reinforcement learning agents with episodic memory is a key step on the path toward replicating human-like general intelligence. We analyze why standard RL agents lack episodic memory today, and why existing RL tasks don't require it. We design a new form of external memory called Masked Experience Memory, or MEM, modeled after key features of human episodic memory. To evaluate episodic memory we define an RL task based on the common children's game of Concentration. We find that a MEM RL agent leverages episodic memory effectively to master Concentration, unlike the baseline agents we tested. From a neurobiological perspective, episodic memory is a key component of human life -remembering the name of a new acquaintance, recalling the plot of a movie as it unfolds, or realizing where the car is parked, are all examples of how we use episodic memory 1 to store and recall novel information. If a person's ability to form and retrieve new episodic memories is lost, as in advanced Alzheimer's disease, the person is severely incapacitated as a . Although today's standard Reinforcement Learning (RL) agents possess forms of procedural and semantic memory BID10, they lack any functional equivalent of episodic memory. Our motivation is to expand the general intelligence of RL agents by imbuing them with a useful form of episodic memory. Human episodic memories appear to be records of experience that are re-experienced when associatively recalled BID8. In RL, fundamental experiences are termed observations. Accordingly, we propose the following working definition: Episodic memory for an RL agent is the ability to leverage details of a past observation that is similar to the current observation. This definition implies that an agent would exercise episodic memory by doing certain things at specific points in time, including 1. At the time of the old observation, the details of that observation must be stored somewhere in the agent. This stored record is the episodic memory. 2. Later, when another observation arrives, it must somehow be compared with the stored observations. If one of those is sufficiently similar, then the details of the old observation must be retrieved from memory. There are different implementations of similarity and retrieval. We will propose a concrete one later. 3. After retrieving the details of the old observation that is similar to the new one, the agent must be able to utilize that information to benefit it's pursuit of reward. Designing an RL agent with episodic memory is one challenge, and designing an RL task to evaluate episodic memory in an agent is another. The main difficulty is that unless the task is very carefully designed, the RL agent may find a way to solve the task using other learning abilities besides episodic memory. To illustrate, we briefly introduce the RL task that we will present later in detail. To evaluate an agent's episodic memory ability, we introduce the Concentration task based on the card game of the same name. Concentration is a memory game with the goal of identifying matching pairs of cards among a large set of face-down cards. During play, one card at a time is temporarily revealed to the player who must correctly memorize and recall the locations of each pair. 
Concentration tests episodic memory by requiring an agent to leverage past observations of cards and their locations in order to succeed. In our variant of Concentration, cards are not limited to the standard deck and are instead randomly generated for each game, so each card pair is unique and never before seen in the agent's lifetime. Unique cards test the agent's ability to use episodic memory to reason about the identities and locations of the cards that are seen within the current episode, rather than learning to recognize specific cards. Recently, the capabilities of intelligent agents have greatly expanded through the combination of deep learning and reinforcement learning. Deep RL agents have achieved notable success outperforming humans on Atari games BID15. However, many of the hardest tasks in which RL agents still fail to surpass humans are fraught with the difficulties of sparse rewards, partial observability, and a limited amount of samples. Equipping an RL agent with memory is a promising approach to tackling some of these challenges, and has attracted a growing amount of interest in the research community. Recurrent neural networks such as LSTMs are commonly used as controllers BID13. LSTMs can be trained to maintain and use information on timescales of tens of steps, but have trouble learning over longer sequences. Additionally, LSTMs do not store observations as discrete entities, so it is unclear how an LSTM could compare a never-before-seen observation (such as a unique card) with detailed instances of past observations, which also may have occurred only once. Memory augmented neural networks provide storage capabilities beyond those of an LSTM. One such architecture, the differentiable neural computer (DNC) has been shown to be capable of handling several different memory-based tasks. We evaluate the DNC on Concentration, but discover that it has difficulty reusing elements of its memory matrix. The key contributions of this paper are:• We propose a working definition of episodic memory for RL agents.• We introduce the Concentration task for evaluating episodic memory.• We present the Masked Experience Memory (MEM) architecture, a new type of external memory designed to provide an RL agent with human-inspired episodic memory, and incorporating a novel improvement over cosine similarity for content-based addressing.• We empirically demonstrate that MEM successfully enables an RL agent to solve the Concentration task by remembering the identities and locations of cards it has seen only once.• We show that baseline RL agents (LSTM-based and DNC-based) fail to solve the task. Neither neural network weights nor activation states support the storage and associative retrieval of discrete, one-shot experiences necessary for episodic memory. Neural network weights change too slowly to store samples as individual experiences. Fast weights are one approach to storing information from discrete samples in weights, but we find no published evaluations of fast weights on episodic memory tasks. Many external memory architectures have been proposed for augmenting the capabilities of neural networks in the supervised learning setting. Some of these were evaluated on data samples that occur a relatively small number of times BID28 BID19 BID23 ). Adapting these architectures to RL tasks is non-trivial. For instance, the memory module of requires ground truth output labels to create and modify memories. 
A few augmented memory architectures have been applied to the more challenging setting of deep reinforcement learning BID29 BID17 BID20. These external memory schemes were shown to improve learning on certain tasks, typically by increasing sample efficiency. But to our knowledge none of them were evaluated on episodic memory tasks. Despite the growing body of literature on active memory, most environments do not capture the diversity of observations required to test episodic memory. For instance, in maze tasks such as the the T-Maze studied by BID17, the agent must remember a color seen at the start of the episode, then use that information later to move in the correct direction at the T-junction. But since only two colors are ever displayed at the start of the maze, the agent can learn to associate the floor tile color with the correct actions using neural network weights. In contrast, an episodic memory task like Concentration presents many previously unseen observations which must be handled correctly without prior exposure. With the differentiable neural computer (DNC), showed than an RL agent could use a memory matrix to buffer and use data to complete a moving blocks puzzle (Mini-SHRDLU). We show that Mini-SHRDLU can be decomposed into separate data buffering and problem-solving subtasks (Appendix B), neither of which requires human-like episodic memory. Episodic memory tasks can be viewed as a special type of transfer learning. Transfer and multitask learning BID26 ) involve evaluating the agent on a novel task (e.g. with new observations, rewards, transition dynamics). Episodic memory tasks such as Concentration feature an endless stream of novel observations but unchanged rewards and dynamics. Prior work on transfer learning has relied on techniques from DeepRL and model compression BID18 BID22 BID9.Concentration can be viewed as an episodic one-shot learning task in which novel observations must be correctly memorized and recalled after a single viewing. Prior work on few-shot image classification BID28 BID21 has used learned metric spaces and siamese networks . The Masked Experience Memory (MEM) architecture imbues an RL agent with the ability to leverage details of a past observation that is similar to the current observation. MEM's focus on observations differentiates it from DNC, which writes vectors to memory which are abstract in the sense that they are not bound to agent observations. Similarly, while MEM's read operation compares past observations with the current observation, DNC's read operation compares previously written memory vectors to an abstract read vector having no necessary connection to any observation. In these respects, DNC's memory mechanism is strictly more general than that of MEM, possessing more freedom of representation, as well as more potential challenges in training. This makes DNC a valuable baseline for comparison to MEM.Each MEM memory write operation copies the last observation into a fixed-size memory store, while the oldest memory is dropped from the store. Other external memory implementations share this general method of writing memories BID24 BID17. This corresponds to the rapid forgetting of human episodic memories BID8. Despite its simplicity, we view this design as applying the useful prior assumption that the most recent history is often the most relevant to selecting a good next action. 
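A minimal sketch of this write scheme, assuming a simple ring buffer that is cleared at the start of each episode; the class and method names are ours.

from collections import deque
import numpy as np

class ExperienceBuffer:
    # Fixed-size episodic store: each write keeps the newest observation and drops the oldest.
    def __init__(self, capacity, obs_dim):
        self.obs_dim = obs_dim
        self.memories = deque(maxlen=capacity)
    def write(self, observation):
        self.memories.append(np.asarray(observation, dtype=np.float32))
    def matrix(self):
        # Memory matrix M (N x D) consumed by the read operation described next.
        if not self.memories:
            return np.zeros((0, self.obs_dim), dtype=np.float32)
        return np.stack(self.memories)
    def clear(self):
        # Called at the start of each episode.
        self.memories.clear()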
Each memory read operation compares the current observation with all past observations in memory, and returns a vector calculated as a weighted sum of all memories. This part of the read operation is the same as that used by most content-based addressing memory architectures, given by: DISPLAYFORM0, where the memory matrix M contains N memories, each implemented as a real-valued vector of D dimensions (elements); the read vector R (also of length D) is a weighted average of all memories; the read weighting vector b is a normalized probability distribution over memories; finally, each memory's weight b i is a function of that memory's similarity (Q i) to the current read key vector. There are various choices for the similarity function Q, such as the popular cosine function. Here, we propose to use vector quadrance scaled by an explicit mask over vector elements: DISPLAYFORM1 where each memory's similarity (Q i) to the current read key (state vector s) is the squared Euclidean distance (quadrance) between them scaled by the corresponding element of the mask vector a; the mask vector a is a learned distribution over memory dimensions; the mask weight vector w and attention distribution sharpness parameter z are trained by gradient descent. The mask weight vector w is intended to learn which memory dimensions should be used as the lookup key for memory read operations, which in turn determines the attention distribution over memories. For instance, as a particular mask weight w i increases, the corresponding mask element a i will also increase, causing that element of each memory to contribute more to all the similarity calculations. Figure 1: Pathological Example of Cosine Similarity: Although memory vector 1 is identical to the completed (non-zero) portion of the key vector, cosine similarity judges memory vector 2 to be more similar to the key vector. MEM's usage of a mask vector in calculating vector similarity is designed to avoid a potential source of noise associated with a commonlyused similarity calculation, cosine similarity: DISPLAYFORM2 Since cosine similarity measures the angular similarity between two vectors by normalizing out their magnitudes, it is ideally suited for comparing word count vectors from documents of different lengths, for instance. But in the general case of content-based addressing, it is often intuitive to view the read key as partially specified, with zeros in the unspecified elements. From that perspective the read operation replaces the zeros with values from a memory or memories that best match the read key. This avoids the complexity of separate key and value vectors. However, applying cosine similarity in this way can add noise to the similarity calculation, as illustrated in Figure 1. Since cosine similarity normalizes the dot product by the magnitudes of both vectors being compared, the supposedly masked-out elements of the memory vector can still affect the . This noise becomes large as the non-zero portion of the key vector becomes small. MEM avoids this problem by using an explicit mask to select which vector elements will participate in the similarity measurement. In this section, we discuss how to use MEM in an RL agent. An RL agent aims to maximize its expected long-term return by acting in an initially unknown environment BID25. In each step, it makes an observation about the environment, takes an action, then receives an immediate reward and next observation. 
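Since the displayed similarity and weighting expressions above leave the exact normalizations unspecified, the sketch below fills in one plausible reading: the mask a is a softmax over the learned weights w, the similarity Q_i is the negative masked squared distance scaled by the sharpness z, and the read weights b are a softmax over the Q_i. Those normalization choices are assumptions.

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

def mem_read(M, s, mask_weights, z):
    # M: (N, D) memory matrix of stored observations.
    # s: (D,) current observation, used as the read key.
    # mask_weights: (D,) learned weights w; a = softmax(w) is the mask over dimensions.
    # z: scalar attention sharpness.
    a = softmax(mask_weights)                    # mask: a distribution over memory dimensions
    q = -z * np.sum(a * (s - M) ** 2, axis=1)    # masked squared-distance similarity Q_i
    b = softmax(q)                               # normalized attention over memories
    r = b @ M                                    # read vector R = sum_i b_i M_i
    return r, b

Because only the masked dimensions contribute to Q_i, an all-zero (unspecified) portion of the key cannot distort the match, which is the failure mode of cosine similarity illustrated in Figure 1.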
While there exist many algorithms in the literature, for concreteness, we use one of the most effective algorithms known as Asynchronous Advantage Actor-Critic (A3C) to explain how to incorporate MEM, and run experiments with this instantiation. The use of our memory architecture is similar for other RL algorithms. Since the environment is only partially observable in many real-world problems as well as in the game of Concentration, the observations received in individual time steps are not Markovian BID25. Information collected from past observations should therefore be remembered and used to make a decision at each step. One possibility is to use an LSTM to compress past observations into a fixed-length vector, which is used to approximate a Markovian state of the environment. This approach makes use of a limited form of memory, and is illustrated in FIG0 (left panel) where LSTM is used inside both the actor and critic networks of A3C.The more general DNC-based agent uses separate actor and critic DNCs, each containing its own LSTM controller and memory matrix, as shown in the right panel of FIG0. Note that the same approach is taken by for the Mini-SHRDLU task. Our proposed architecture, based on MEM, is given in the middle panel of FIG0. It uses separate actor and critic LSTM controllers which share the same memory store. For episodic RL tasks, we clear MEM's memory store at the beginning of each episode, although other possibilities exist. For all agents on every time step, each LSTM controller receives as inputs the current observation vector from the environment, concatenated with a one-hot vector representing the last action taken, plus the reward just received. Each LSTM also receives as input the most recent output from the memory store (whether MEM or DNC). In the case of MEM, a memory similarity strength value is also passed as an additional feature to the LSTMs. In the case of DNC, the memory store's output is immediately concatenated with the output from its LSTM running to the output layer of its network. To the best of our knowledge, no existing RL benchmark task could unambiguously evaluate episodic memory in RL agents. We therefore designed a new task for this purpose, derived from the common children's memory game of Concentration, as described in Wikipedia. BID0 The game is played with a deck of cards in which each card face appears twice. At the start of each game, the cards are arranged face down on a flat surface. A player's turn consists of turning over any two of the cards. If their faces are found to match, the player wins those two cards and removes them from the table, then plays again. If the two cards do not match, the player turns them face down again, then play passes to the next player. The game proceeds until all cards have been matched and removed from the table. The winning strategy is to remember the locations of the cards as their faces are revealed, then use those memories to find matching pairs. We convert the Concentration game into a single-player, episodic RL task. The agent occupies one cell at a time within a square grid of cells, each of which may be empty or may contain one card. The grid is just large enough to hold all the cards. Each card may be either face up or face down on any given time step. The agent's available actions are to take one step in any of the four directions, or to flip over the card at the current location. Whenever two cards fail to match, they automatically turn face down on the next time step. 
Whenever two cards match, the agent is rewarded, and the two cards are automatically removed from the grid on the next time step. The episode terminates when the last two cards are matched and removed, or when enough time steps have passed for all cards to have been removed. The agent receives a small penalty for each card it flips over. The major issue in designing this as a test of episodic memory is how to represent the agent's observation of a card face. The simplest scheme would be to represent each card face by a onehot vector of length N/2, where N is the number of cards in the deck. But this would allow the agent to solve the task without relying on episodic memory, because the total number of card faces and card positions could be small enough for the agent's network (over many training episodes) to dedicate a different unit to every possible card-plus-position combination. Then in the course of play, whenever a card was revealed, the network would only need to toggle the activation state of the unit representing that particular card and position. If two units corresponding to a matching pair were both active, the agent would know their locations and could then proceed to flip both of them over. Instead of using one-hot vectors, each card face could be represented by a complex image, such as an Omniglot character, to be processed by a convolutional neural network. But the network could still employ the strategy described above, only at a higher embedding level in the network, after learning the fixed identities of the cards. So to make this an unambiguous test of episodic memory, we generate new images for all card faces at the start of each episode. This is equivalent to playing just one game of Concentration with a deck of cards, then replacing it with a new deck of cards having different face images for the next game, etc. This ensures that each image appears in no more than one episode, making it impractical for an agent's neural network to learn each image as a persistent entity from game to game. Instead of using images composed of pixels, we define each card face to be a random real-valued vector of length 6. This can be thought of as an image embedding vector learned at some upper level of a CNN. We did not try any other image sizes. Each card face image (to appear on two cards) is generated by randomly selecting six real numbers in the range. If the ing vector is too close to an already generated vector, based on a fixed Euclidean distance threshold, the vector is randomly regenerated. The agent's performance is evaluated in terms of card-pair matches per card flip, which is closely tied to the reward received. Agents are not directly penalized for spending time wandering around the grid, but reward per time step is maximized by clearing the board quickly. For the experiments reported here, all tasks used 8 cards on a 3 × 3 grid. We tested the following six agents on the Concentration task: During training, counters keep track of the number of cards flipped by all 16 worker agents, as well as the number of card-pair matches obtained by the agents. After every 100, 000 training steps these counts are used to calculate the matches per flip for that period, then reset to zero. This produces a trailing estimate (including exploratory actions) of the agent's performance on the task. Table 1: Hyper-parameter settings agent using its optimal settings on a number of additional runs. Table 1 gives the hyper-parameter settings used on Concentration, both tuned and fixed. 
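For concreteness, here is a minimal sketch of the card-face generation described above; the sampling range and the rejection threshold are assumptions, since their exact values are not given above.

import numpy as np

def generate_card_faces(n_pairs, dim=6, min_dist=0.5, rng=None):
    # One unique face vector per pair; each face appears on exactly two cards.
    rng = rng or np.random.default_rng()
    faces = []
    while len(faces) < n_pairs:
        candidate = rng.uniform(0.0, 1.0, size=dim)   # sampling range assumed to be [0, 1]
        if all(np.linalg.norm(candidate - f) >= min_dist for f in faces):
            faces.append(candidate)                   # keep only candidates far enough from existing faces
    deck = np.array(faces + faces)                    # duplicate each face onto two cards
    rng.shuffle(deck)                                 # shuffle card positions on the grid
    return deck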
Key are collected in FIG2. The optimal mean performance attainable by an agent with perfect episodic memory is shown at the top of FIG2 BID27. Only the MEM agent learned a near-optimal policy. The baseline LSTM-A3C agent's were overlapped with those of its colorblind version 3b, demonstrating that the LSTM-A3C agent never learned to remember the locations of the cards it saw. The Sonnet LSTM agent performed consistently better than the TensorFlow LSTM agent 3b, though not by a large amount. Both implementations claim to be based on BID30, so the difference in behavior is unexpected. Despite being unable to see the card faces, the colorblind MEM agent 3b still performed a bit better than any of the LSTM agents, indicating that it found some other strategy (not based on card faces) to derive a small amount of gain from its external memory. Even after dozens of trial settings over a wide range of hyper-parameters, the DNC agent performed only very slightly better than the LSTM-A3C agent, and noticeably worse than its own recurrent controller alone, the Sonnet LSTM agent. We did not attempt curriculum learning. Appendix A presents a detailed investigation into the causes of DNC's poor performance on this type of task. Performing ablation studies on the MEM architecture, we found that using the mask (instead of cosine similarity) and Euclidean distance squared were both essential to scoring above the LSTM-A3C baseline. Adaptation of the sharpness term turned out to be essential for stable . On the other hand, the similarity strength feature provided no measurable benefit. As intended, MEM's most positive learned mask weights were the ones for the six card face dimensions. At convergence of the best MEM model, 83% of the mask's mass was concentrated on those six elements, even though they constitute only 11% of the observation vector's 54 elements. We have defined episodic memory for RL agents, provided an unambiguous test for evaluating it, and presented an implementation of episodic memory that corrects a problem with current content-based addressing methods. Our show that this MEM architecture, designed to emulate specific aspects of human episodic memory, is able to use that memory effectively in the Concentration task by remembering the locations of cards it has seen only once before. This is in sharp contrast to the other agents tested, which never learned to remember card locations. The code to replicate this work will be made public prior to the conference. MEM represents the initial step on a path towards more robust and powerful episodic memory for RL agents. We plan to extend MEM in several significant ways:1. Making the mask weights context-sensitive so that read key vectors can quickly shift to cover different aspects of experience depending on the situation. 2. Expanding the memory dimensions beyond the current observation to also include recurrent network activations, so that an agent's internal thought vectors can themselves be stored as experiences for later recall, and can be used as read keys. 3. Rendering memory deletion a function of memory importance, so that certain experiences can be remembered longer than others. 4. Introducing an additional mask over dimensions for write operations, so that memories need not cover all available dimensions. The human mind offers a remote, shining existence proof of general intelligence still beyond our reach. Despite the distance, it lights our path, and grows brighter with each step we take toward it. 
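The ablation findings above concern MEM's content-based read. A minimal sketch of that style of addressing is given below, assuming a softmax-normalised learned mask over observation dimensions, squared Euclidean distance with a learned sharpness, and a weighted-average read; the exact read rule and the definition of the similarity-strength feature in the paper may differ.

```python
import numpy as np

def mem_read(memory, read_key, mask_logits, sharpness):
    """Content-based read over stored observation vectors.

    memory:      (n_slots, d) matrix of stored observations
    read_key:    (d,) query vector, e.g. the current observation
    mask_logits: (d,) learned logits defining a mask over dimensions
    sharpness:   learned positive scalar controlling softmax temperature
    """
    mask = np.exp(mask_logits) / np.exp(mask_logits).sum()
    diff = memory - read_key                      # (n_slots, d)
    dist2 = (mask * diff ** 2).sum(axis=1)        # masked squared Euclidean distance
    scores = -sharpness * dist2
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    retrieved = weights @ memory                  # weighted average of slots
    similarity_strength = weights.max()           # assumed extra feature for the LSTMs
    return retrieved, similarity_strength
```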
The differentiable neural computer (DNC) employs a differentiable memory matrix as external memory, which shares certain sequential addressing features with computer random access memory, with the objective of combining the advantages of neural and computational processing in one trainable system. To explore why DNC never learned to use its memory matrix on the Concentration task, we applied DNC to a series of simpler tests of associative recall. The general pattern was that DNC performance seemed to deteriorate with episode length, as if DNC had difficulty reusing previously allocated locations in its memory matrix. We found the simplest demonstration of this problem to be a small modification of the copy task included in DNC's GitHub repository. For Figures 4 and 5 the memory matrix was of size 16x16, exactly large enough to store the 16x16 random input data in each copy round, even if none of the data was stored in the LSTM (size 64). But when given two copy rounds in a sequence with no reset in between, the only way to achieve zero error was to reuse some locations in the memory matrix. This happened on two of our runs, but not on the other three runs. The instabilities in FIG4 demonstrate the difficulty that DNC had in learning to reuse its memory matrix. B MINI-SHRDLU. DNC was evaluated on multiple supervised tasks and one RL task: Mini-SHRDLU. The Mini-SHRDLU task was actually composed of two separate sub-tasks: data buffering and puzzle solving. The constraints defining the problem, along with many other decoy constraints, were fed to the RL agent once while the agent was not allowed to work on the puzzle. Only after termination of the constraint presentation phase was the agent allowed to reposition the blocks to solve the puzzle. The results demonstrated that DNC used its external memory to buffer the incoming constraint information in the memory matrix, then used that data to achieve significantly better results on the combined buffer-puzzle task than did a baseline LSTM-based RL agent without external memory. We considered using the Mini-SHRDLU task as a test of an RL agent's episodic memory. The data-buffering stage of the task did not seem relevant to this goal, since human memory seems ill-suited for memorizing long lists of data seen only once. Since the buffer and puzzle sub-tasks were not evaluated separately, we couldn't be sure whether DNC's external memory helped in the puzzle-solving component of the combined task. We investigated this question by implementing Mini-SHRDLU without the data-buffering subtask, and giving an LSTM-based A3C RL agent access to a simple, non-differentiable, circular array of constraints. This allowed the agent to read the instructions at its own pace using 3 additional actions (move to next element, move to previous element, stay at current element). Figure 6: Mini-SHRDLU results. The DNC results were copied from the original DNC paper. The LSTM-A3C results were achieved by an RL agent with no external memory that read the constraints from a simple array at its own pace. After training on a lesson, the LSTM agent was tested on that lesson using 10,000 random problems. Direct numeric comparisons between these two performance curves are not meaningful for various reasons: the LSTM agent was advanced to the next lesson only after its performance on the previous lesson had plateaued, which gave it more training problems than used by DNC, and the LSTM agent was always given problems using the full 6 blocks on the first 6 lessons, which is why those are not shown.
Twelve of the subsequent lessons were actually skipped during training, and the LSTM agent was tested on those lessons using the model from the next trained lesson. As shown in Figure 6, the LSTM-based RL agent learns to solve Mini-SHRDLU problems as well as DNC, but without using a differentiable memory matrix. These results demonstrate that external memory was not required for the problem-solving stage of the task.
Implementing and evaluating episodic memory for RL.
741
scitldr
Parameters are one of the most critical components of machine learning models. As datasets and learning domains change, it is often necessary and time-consuming to re-learn entire models. Rather than re-learning the parameters from scratch, replacing learning with optimization, we propose a framework building upon the theory of \emph{optimal transport} to adapt model parameters by discovering correspondences between models and data, significantly amortizing the training cost. We demonstrate our idea on the challenging problem of creating probabilistic spatial representations for autonomous robots. Although recent mapping techniques have facilitated robust occupancy mapping, learning all spatially-diverse parameters in such approximate Bayesian models demand considerable computational time, discouraging them to be used in real-world robotic mapping. Considering the fact that the geometric features a robot would observe with its sensors are similar across various environments, in this paper, we demonstrate how to re-use parameters and hyperparameters learned in different domains. This adaptation is computationally more efficient than variational inference and Monte Carlo techniques. A series of experiments conducted on realistic settings verified the possibility of transferring thousands of such parameters with a negligible time and memory cost, enabling large-scale mapping in urban environments. The quintessential paradigm in the machine learning pipeline consists of the stages of data acquisition and inference of the given data. As data become plentiful, or as ones problem set become more diverse over time, it is common to learn new models tailored to the new data or problem. Contrasting this conventional modeling archetype, we argue that it is often redundant to perform inference and re-learn parameters from scratch. Such model adaptation procedures are indispensable in application domains such as robotics in which the operating environments change continuously. For instance, if the model is represented as a Bayesian model, its distribution should be redetermined regularly to adjust for changes in new data. In this paper, we focus on significantly improving the training time of building Bayesian occupancy maps such as automorphing Bayesian Hilbert maps (ABHMs) by transferring model parameters associated with a set of source datasets to a target dataset in a zero-shot fashion. Despite having attractive theoretical properties and being robust, the main reason that hinders models such as ABHM being used in real-world settings is the run-time cost of learning thousands of parameters (main parameters and hyperparameters). Moreover, these parameters not only vary across different places in the same environment, but also change over time. We demonstrate domain adaptation of "geometry-dependent spatial features" of the ABHM model from a pool of source domains to the current target domain. This is efficiently done using the theory of. Since the proposed approach completely bypasses explicitly learning parameters of the Bayesian model using domain adaptation, this process can be thought of as "replacing parameter learning with domain adapatation." The notation given in Table 1 will be used throughout the rest of the paper. An occupancy model is typically a parameterized function which gives the probability of a given point in the environment being occupied. 
For instance, having learned a function with parameters θ, it is possible to query y * = p(occupied|x *, θ) ∈ for anywhere in the space x * = (longitude, latitude) ∈ R 2. The parameters θ must be estimated from data gathered using a LIDAR sensor with labels y = {0, 1} = {free, hit}. The high level idea of ABHM is projecting LIDAR data into the reproducing kernel Hilbert space (RKHS)-a rich high dimensional feature space-and performing Bayesian logistic regression. The occupancy probability of a point x is given by p(y|x) = sigmoid) with weights w ∈ R, kernel hinged at spatial locations h ∈ R 2, and width of the squaredexponential (SE) kernel γ ∈ R +. As shown in Figure 1, here, M SE kernels positioned at M sparial locations {h m} M m=1 are used to project 2D data into a M dimensional vector such that each kernel has more effect from data in its locality. must be learned from LIDAR data. Slightly abusing standard notations, in this paper,¯ands ymbols are used to represent the mean and dispersion parameters, respectively. One of the most important parameters for later discussions is the location parameterh m ∈ R 2. Because of the intractable posterior, the parameters of the model are learned using variational inference through probabilistic programming. In this section, we propose a framework for swiftly adapting thousands of parameter and hyperparameters of the Bayesian mapping model. To adapt to domains, we require accurately pre-trained maps from which we can extract spatially relevant features. In the context of our problem we must extract LIDAR scans (hits and free) with their corresponding model parameters {(h, θ)}. To simplify further discussions, as in Figure 1, θ is defined as all parameters except the mean location parameterh. We define source LIDAR data with corresponding parameters learned from ABHM {θ as the source atom. The source is an environment small enough to be trainable with ABHM. Having determined the source atom, our objective is to determine the new set of parameters . As illustrated in Figure 7, we are looking for a nonlinear mapping technique to convert a source (S) to a target (T). We recognize this as an optimal transport (OT) problem. In occupancy mapping, the probability measures are from LIDAR data. For a new target dataset, we attempt to obtain the optimal coupling, for a given D ∈ R N (S) ×N (T) distance matrix (e.g. Euclidean distance between sourcetarget pairs) with the information entropy of P, r(P) = − ij P ij log P ij. This entropic regularization, commonly known as the Sinkhorn distance; , enables solving the otherwise hard integer programming problem using an efficient iterative algorithm. Here, λ controls the amount of regularization. Having obtained the optimal coupling between source and target LIDAR, as illustrated in Figures 7 (b) -(c), now it is possible to transport corresponding source parameters θ (S) to the target domain. This is done by associating the parameter positions with source samplesh (S) as a linear map x (S). Note that all other θ (S) parameters associated with h (S) will also be transported. This implicit transfer process is depicted in Figure 5. Since ABHM can only be executed in small areas due to the high computational cost, we learn individual ABHM maps for different areas and construct a dictionary of source atoms which we call a dictionary of atoms X (S). As a , as depicted in Figure 2, atoms from various domains will be transferred to the target. The entire algorithm is given in Algorithm 1. 
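Algorithm 1 is not reproduced here, but its core steps, computing the entropically regularised coupling with Sinkhorn iterations and carrying the source parameters over to the target frame, can be sketched as below. Uniform marginals, the sign convention for λ, and the barycentric projection used to realise the linear map of source samples are assumptions made for illustration.

```python
import numpy as np

def sinkhorn_coupling(D, lam=10.0, n_iters=200):
    """Entropy-regularised OT coupling between (assumed) uniform marginals.

    D:   (n_src, n_tgt) pairwise distance matrix between source and target LIDAR points
    lam: regularisation strength; larger values approach the unregularised plan
    """
    n_src, n_tgt = D.shape
    r = np.full(n_src, 1.0 / n_src)     # source marginal
    c = np.full(n_tgt, 1.0 / n_tgt)     # target marginal
    K = np.exp(-lam * D)
    v = np.ones(n_tgt)
    for _ in range(n_iters):            # Sinkhorn fixed-point iterations
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]  # coupling P with marginals (r, c)

def barycentric_map(P, x_tgt):
    """Map each source point into the target frame as a P-weighted average of
    target points; transported kernel locations keep their attached theta^(S)."""
    return (P @ x_tgt) / P.sum(axis=1, keepdims=True)

# Usage sketch (random points for shape only): x_src contains the source LIDAR
# points, including the rows where the kernel centres h^(S) live, so their
# transported positions can be read off the mapped source points while the
# remaining parameters theta^(S) are carried along unchanged.
x_src = np.random.rand(100, 2); x_tgt = np.random.rand(120, 2)
D = np.linalg.norm(x_src[:, None, :] - x_tgt[None, :, :], axis=-1)
P = sinkhorn_coupling(D)
x_src_in_target = barycentric_map(P, x_tgt)
```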
We used the Carla simulator and KITTI benchmark dataset for experiments. A summary of datasets is listed in Table 5. We compared against vanilla variational inference; and variational inference with reparameterization trick;. Intra-domain and inter-domain adaptation Here we consider two paradigms: intradomain and inter-domain transfer. In intra-domain transfer, the source atoms are generated from the first 10 frames of a particular dataset and parameters are transferred within the Figure 2: A high-level overview of our method: Parameter Optimal Transport. Training domains correspond to potentially independent, data-intensive, expensive, yet small-scale prelearned models. After storing in a dictionary of atoms, representative data-space and modelparameter tuples from the pre-learned set of models, we find data-space correspondences using optimal transport maps via the ranking procedure. These maps are then used to transport pre-learned parameters to out-of-sample test domains. Our method is largely insensitive to data-space invariances between source training domains and test domains reducing knowledge loss during the transfer process. same dataset. In inter-domain transfer they are transferred to a completely new town. Results are in Table 4 with 20% randomly sampled test LIDAR beams. We consider two paradigms: intra-domain and inter-domain transfer. In intra-domain transfer, the source atoms are generated from the first 10 frames of a particular dataset and parameters are transferred within the same dataset. In inter-domain transfer they are transferred to a new town. Results are in Table 4 Building instantaneous maps This experiment demonstrates performance of building instantaneous maps. For this purpose, we use the two dynamic datasets: SimCarla and RealKITTI. The source dictionary of atoms was prepared similar to the intra/inter-domain Table 2: Instantaneous map building in dynamic environments. Mean and SD are given. We evaluated the test performance of our model using accuracy (ACC), area under ROC curve (AUC), and negative log-likelihood (NLL). The higher the ACC and AUC or lower the NLL, the better. Figure 6. Table 2 shows the performance of transferring features extracted from each town to the dynamic datasets.
We present a method of adapting hyperparameters of probabilistic models using optimal transport with applications in robotics.
742
scitldr
An algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning. There are three components being learned simultaneously: the off-policy predictions as a compact representation of state, the behavior policy distribution for estimating the off-policy predictions, and the deterministic policy gradient for learning to act. A behavior policy discriminator is learned and used for estimating the important sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs). A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned. All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment. Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller. Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model. Predicting the future is an important topic in machine learning and is believed to be an important part of how humans process and interact with the world, cf. Study of the brain shows that it is highly predictive of future events and outcomes. Despite these advances, there is still much work needed to bridge the worlds of predictive learning and control. Most predictive control approaches learn either a forward model or a backward model however these next-step models suffer from compounding errors. This paper introduces a predictive control architecture using one kind of off-policy predictive learning, called general value functions (GVFs) Modayil et al. (2012 , that learns to predict the relevant aspects of the environment, decided by an expert, from raw sensor data such as pixel data captured from a camera. GVFs answer the predictive question, "if I follow policy τ, how much total cumulant will I see in the future?" The value of the GVF framework is not yet fully understood and realized despite the connections to neuroscience; but some early work has investigated its advantages for predictive representations and found that the representations are compact and general . An objective of this research is to better understand the value that GVFs have to offer in real-world applications. Our work is based on the hypothesis that predictive representations are good for generalization . We are motivated by the belief that GVFs, like RL, could allow for behavior that is anticipative of future consequences rather than reactive to the current state. General value functions (GVFs) are an understudied topic of interest in AI research fields and applications. There is a considerable focus on understanding how to learn these predictions but limited efforts on understanding how to use them in real applications. This is unfortunate, as todate, research into applications of GVFs suggest they have potential in real world robotics and its applications Günther et al. )White (2015 . 
However, several elements have been missing to apply these predictions to a larger scale problem such as autonomous driving: how to characterize the behavior policy to achieve off-policy learning when it is unknown, what predictions are useful, and how to use those predictions to control the vehicle. Our objective is two-fold: introduce a novel architecture combining elements of predictive learning, adversarial learning and reinforcement learning, and demonstrate how this architecture can be used to steer a vehicle in a racing simulator. Steering a vehicle is a challenging problem where the bicycle model is the classical approach. However, the bicycle model requires knowing the angle of the vehicle with respect to the road direction in order to compute the desired steering angle. Steering directly from images has been a long desired goal in autonomous driving where approaches like advocate for a two point model which inspired the multi-point predictive representation proposed in this paper. In comparison, learning to regress an image directly to the steering angle in an end-to-end manner has been a recent hot topic Chen & HuangSallab et al. (2017 . However, a serious challenge is ensuring robustness of the controller when learning end-to-end . In particular, the agent is not typically trained on recovery mode scenarios and so there are generalization and data coverage issues; for this reason, authors introduced augmented images in training by artificially shifting and rotating them to help the network learn to recover with some limited success. The approach in learns to predict the current road angle directly from images and then uses a classical steering controller to control the vehicle. The proposed approach is similar except we predict future road angles and lane centeredness at different temporal horizons which is then passed to a controller module to choose steering angles. Policy gradient with the predictive state representation is the approach used in this paper but this can also be replaced with other controllers. This architecture allows for a degree of interpretability in the controller that is not easily achieved with end-to-end approaches despite work on understanding and improving its robustness. We consider an environment described by a set of states S, a set of actions A, and Markov transition dynamics with probability P (s |s, a) of transitioning to next state s after taking action a from state s. This setting is nearly identical to a Markov Decision Process (MDP) where the only difference is the absence of a reward signal to maximize. The goal is to learn an estimator that predicts the return G t of a cumulant c t defined by where c t is a cumulant signal to be predicted, and 0 ≤ γ t < 1 is the continuation function. The general value function is defined as where τ (a|s), γ(s, a, s), and c(s, a, s) are the policy, continuation and cumulant functions, respectively, that make up the predictive question where V τ (s) represents the total discounted cumulant starting from state s and acting under policy τ. Unfortuantely, there are currently no algorithms to learn the predictive question through interaction with the environment; thus, τ, γ, and c are typically defined by an expert. Cumulants are commonly scaled by a factor of 1 − γ when γ is a constant in non-episodic predictions. A GVF can be approximated with a function approximator, such as a neural network, parameterized by θ to predict equation 1. 
The agent usually collects experience under a different behavior policy µ(a|s) where off-policy policy evaluation methods are needed to learn the GVF. The parameters θ are optimized with gradient descent minimizing the following loss function where δ = E[y −v τ (s; θ)|s, a] is the TD error and ρ = τ (a|s) µ(a|s) is the importance sampling ratio to correct for the difference between the target policy distribution τ and behavior distribution µ. Note that only the behavior policy distribution is corrected rather than the state distribution d µ. The target y is produced by bootstrapping a prediction of the value of the next state following target policy τ given by where y is a bootstrap prediction using recent parameters θ that are assumed constant in the gradient computation. Some approaches use older parameters θ of the network to make a bootstrapped prediction to improve stability in the learning. However, this was not found to be necessary when learning GVFs since the target policy is fixed and the learning is simply off-policy policy evaluation. d µ is the state distribution of the behavior policy µ and the time subscript on c and γ has been dropped to simplify notation. The gradient of the loss function equation 3 is given by An alternative approach to using importance sampling ratios ρ is to apply importance resampling. With importance resampling, a replay buffer D of size N is required and the gradient is multiplied with the average importance sampling ratio of the samples in the buffer. The importance resampling gradient is given by where the transitions in the replay buffer are sampled according to for transition with state s i and action a i in replay buffer D. This approach is proven to have lower variance than equation 5 with linear function approximation. An efficient data structure for the replay buffer is the SumTree used in prioritized experience replay. This is a natural approach to learning predictions with deep reinforcement learning since sampling a mini-batch from the replay buffer helps to decorrelate sample updates in deep function approximation. A behavior policy needs to be defined to adequately explore the environment when learning GVFs. This may be an evolving policy that is learned by RL, a random policy for exploring the environment, or a human driver collecting data safely. It is common, especially in the case of human drivers, for the behavior policy distribution µ(a|s) of the agent to be unknown. We propose an algorithm using the density ratio trick to learn the behavior policy distribution in an adversarial way. It is well suited for problems with low dimensional action spaces like autonomous driving. The ratio of two probability densities can be expressed as a ratio of discriminator class probabilities that distinguish samples from the two distributions. Let us define a probability density function η(a|s) for the distribution to compare to the behavior distribution µ(a|s) and class labels y = +1 and y = −1 that denote the class of the distribution that the state action pair was sampled from: µ(a|s) or η(a|s) respectively. A discriminator g(a, s) is learned that distinguishes state action pairs from these two distributions using the cross-entropy loss. The ratio of the densities can be computed using only the discriminator g(a, s). Here we assume that p(y = +1) = p(y = −1). From this , we can estimate µ(a|s) withμ(a|s) as followsμ where η(a|s) is a known distribution over action conditioned on state. 
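Putting the two pieces above together, the discriminator-based estimate of the unknown behaviour density and the importance-weighted TD update, a minimal sketch looks as follows. It assumes a linear GVF and a pre-trained discriminator g(a, s) giving the probability that a state-action pair came from the behaviour data rather than from the known distribution η; the paper itself uses deep networks and importance resampling from a SumTree replay buffer.

```python
import numpy as np

def behavior_density(g_prob, eta_density):
    """Density-ratio trick: with p(y=+1) = p(y=-1), the behaviour density is
    mu_hat(a|s) = eta(a|s) * g(a, s) / (1 - g(a, s))."""
    return eta_density * g_prob / (1.0 - g_prob)

def gvf_td_step(theta, x, x_next, cumulant, gamma_next, tau_prob, mu_hat, lr=1e-3):
    """One off-policy TD(0) update for a linear GVF v(s) = theta . x(s),
    corrected by the importance sampling ratio rho = tau(a|s) / mu_hat(a|s)."""
    rho = tau_prob / mu_hat
    y = cumulant + gamma_next * (theta @ x_next)   # bootstrapped target
    delta = y - theta @ x                          # TD error
    return theta + lr * rho * delta * x            # semi-gradient step

# Toy usage: uniform eta over a 1-D action range of width 2 (density 0.5).
theta = np.zeros(4)
mu_hat = behavior_density(g_prob=0.8, eta_density=0.5)
theta = gvf_td_step(theta, x=np.ones(4), x_next=np.ones(4),
                    cumulant=0.1, gamma_next=0.95, tau_prob=0.4, mu_hat=mu_hat)
```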
The uniform distribution over the action is independent of state and has the advantage of being effective and easy to implement. The algorithm for training a GVF off-policy with an unknown behavior distribution is given by Algorithm 1 Off-policy GVF training algorithm with unknown µ(a|s) Compute cumulant c t+1 = c(s t, a t, s t+1) 7: Compute behavior density valueμ(a t |s t) according to equation 8 9: Compute importance sampling ratio Sample random minibatch A of transitions (s i, a i, c i+1, γ i+1, s i+1) from D according to probability Compute Perform gradient descent step on (y i −v τ (s i ; θ)) 2 according to equation 6 for minibatch A Sample random minibatch B of state action pairs (s i, a i) from D according to a uniform probability and assign label y = +1 to each pair 15: Randomly select half the samples in the minibatch B and temporarily replace the label with y = −1 and action with a t ∼ η(a|s) Update behavior discriminator g(a, s) with modified minibatch B Let us consider an MDP with a predictive representation φ(s) mapping state s to predictions φ(s). The reward for the problem is denoted as r. The problem is to find a policy π(a|φ(s)) that maximizes future return or accumulated discounted reward. We hypothesize that this approach should be easier to train than learning π(a|s) directly for the following reasons: • the target policy of the predictions τ is fixed making for faster learning • the compact abstraction φ(s) allows for simple (possibly even linear) policy and action-value functions • the cumulant signal c may only be available during training or is expensive to obtain The last advantage is particularly important in autonomous driving where localization techniques often require a collection of expensive sensors and high definition map data that is not always available or easily scalable to a fleet of autonomous vehicles. In this way, one can train a neural network to map images captured by inexpensive cameras to predictions of lane centeredness and road angle captured by any number of highly accurate but expensive localization approaches with the hopes of generalizing features for lane control. The agent learns to steer with deterministic policy gradient (DPG) using the predictions as the state of the agent. When linear policy function approximation is used, the controller learned is essentially equivalent to a prediction-based PID controller only where there is a deep mapping from images captured by a camera to predictions of the future used to control the vehicle. Using predictions for PID control is not new; this approach can be used to tackle problems with high temporal delay between the error signal and the corrective actions. One can also add integral and derivative terms of the predictions to the state space representation of the agent. Action-value Q π (φ(s), a) and policy π(φ(s)) networks are trained according to where the action value approximates the expected discounted return, i.e.. In addition, a policy network π(φ(s)) produces an action according to the current predictive state representation φ(s) that maximizes the expected discounted return. Because of the interesting connection between DPG and PID control when linear function approximation is used, the policy network is parameterized as where ψ is a matrix denoting parameters to be learned by DPG. They also represent the gain coefficients for a proportional controller which allows for interpretability of the learned parameters. 
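A sketch of this linear, proportional-controller-like policy is given below; φ(s) stands for the vector of GVF predictions described in the next section, and the feature ordering and gain values are purely illustrative.

```python
import numpy as np

def linear_dpg_policy(phi, Psi):
    """Deterministic policy a = Psi @ phi(s): every entry of Psi acts as a
    proportional gain on one predicted deviation, which is what gives the
    learned controller its PID-like interpretability."""
    return Psi @ phi

# Illustrative 8-dimensional predictive state: four lane-centeredness and four
# road-angle predictions at increasing temporal horizons (values are made up).
phi = np.array([0.05, 0.08, 0.12, 0.15,    # predicted alpha deviations
                0.02, 0.03, 0.05, 0.06])   # predicted beta deviations
Psi = -0.5 * np.ones((1, 8))               # gain matrix learned by DPG
steering = linear_dpg_policy(phi, Psi)     # scalar steering command
```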
The action-value network Q π (φ(s), a) is given by a small neural network that maps the predictive state representations φ(s) and action a to an estimate of the action-value in the current state s if a is taken and the optimal policy π followed thereafter. In autonomous driving, knowing the road curvature ahead can be informative for making decisions to maintain lane centeredness in a tight turn. In Figure 1, it is demonstrated how future off-policy predictions of lane centeredness can be used to predict the deviation (or error) between the true center of lane and the projected lane centeredness along the current direction of the vehicle. These predictions must be off-policy because if they were on-policy they would tell us no information to inform the agent how much adjustment is needed to make corrective actions to stay in the center of the lane. The lane centeredness α and road angle β are two kinds of predictions that are useful in steering the vehicle as depicted in Figure 2. We represent the road curvature as a set of predictions of future lane centeredness α and road angles β at different time horizons. The predictive state representation is given by the feature vector φ(s) where V τ α (s t ; γ = γ 0) is a GVF prediction under target policy τ, cumulant α (lane centeredness) and continuation function γ 0 while V τ β (s t ; γ = γ 0) is a GVF prediction of cumulant β (road angle). There are m α number of γ functions for predicting lane centeredness and m β number of γ functions for predicting road angle at different temporal horizons. Because the predictions represent deviations from the desired lane centeredness and road angle, the policy network of DPG can be linear. The predictive learning approach is applied to the challenging problem of learning to steer a vehicle in the racing environment. A kinematic-based steering approach is used as a baseline for all the experiments. The reward in the TORCS environment is given by r t = v t cos β t where v t is the speed of the vehicle in km/h, β t is the angle between the road direction and the vehicle direction. A simple scaling factor of 0.01 was applied to the reward in order to reduce the variance of the action-value. Notice that this reward doesn't force the agent to stay in the center of the lane; however, this strategy is likely a good idea to achieve high total reward on all the test tracks. The target speed of all the agents is 50 km/h where vehicle speed was controlled by a separate manually tuned PID controller. The agents were trained on 85% of the 40 tracks available in TORCS. The rest of the tracks were used for testing (6 in total); all of the in this section are on the testing tracks which were never presented to the agents during training. This was done to measure the generalization performance of the policies learned by the agents which can be a serious problem in RL, cf. Zhao et al.Farebrother et al. (2018 . Tracks where the road was not level were excluded since they proved to be challenging likely because a non-zero action was required to keep the vehicle centered. In all the figures, blue corresponds to DDPG-Image with only image and speed as input, green corresponds to DDPG-ImageLowDim with images, current speed, α and β provided as input, orange corresponds to the new GVF-DPG approach with image, current speed and last two actions since the target policy of the prediction depends on the last action taken, and red corresponds to the classical front wheel steering model. 
A history of two images was provided to each method, as noted in the next section.
A history of two images were provided to each method. A supervised method of predicting the current lane centeredness and road angle directly from the image was attempted with negative : the controller learned was consistently unstable. Results were repeated over 5 runs. The total score achieved on the test tracks by the agents is plotted in Figure 3 over 1M training iterations. It is clear that DDPG-ImageLowDim performs best of the learned methods on all test tracks. This low dimensional information provides ground truth to the agent which may not always be available in all locations; however, it makes a good baseline target for what we hope our proposed method could achieve through generalization. With DDPG-Image, it is clear that it does not learn to steer from images very well. Figure 4 where DDPG-Image consistently converges to a solution that oscillates between the extreme left and right steering actions very rapidly. This oscillation is so extreme that the agent is unable to achieve the target speed of 50 km/h; instead it travels at 40 km/h on average for all the test tracks. The performance gap also suggests that DDPG-ImageLowDim may be relying more on the low dimensional lane information rather than the image. To highlight how uncomfortable the two DDPG agents drive compared to the GVF-DPG agent, Figure 4 shows the standard deviation of the change in action during 1M iterations of training. On most tracks, it is apparent that the GVF-DPG approach controls the vehicle more smoothly than the other learned methods. The performance of the individual test tracks is given in the following Figure 5 . The GVF-DPG approach does not steer successfully on all test tracks: it fails immediately on wheel-2, part-way through on a-wheelway, and drives well on most of alpine-2. The DDPG-Image fails to complete dirt-4, wheel-2, spring, and a-speedway. Finally, DDPG-ImageLowDim successfully completes all the test tracks; however, the agent has a strong bias to the left side of the track. The GVF-DPG agent often follows the classical controller relatively well except on the tracks where the agent fails. This suggests that using a predictive representation of lane centeredness α and road angle β achieves closer performance to a classical controller than an end-to-end learned approach. However, more work is needed to improve the generalization abilities of the approach. The learning curves for the predictors and the policy gradient agents is given in the following Figure 6 . It is interesting that the learning curves of the action-value function estimator of the GVF-DPG agent is much smaller than the other agents and quite smooth. The reason is believed to be because the predictive state representation is constrained to values between [−1, +1] acting as a sort of regularizer to the state representation of the agent. The learning curve of the DDPG-ImageLowDim however eventually approaches the low error of the GVF-DPG agent. The predictors converge relatively quickly as shown in Figure 6 (b). The behavior estimator in Figure 6 (c) stabilizes relatively quickly during learning as well; it is postulated that the error does not decrease further since the behavior policy is changing slowly over time and the behavior estimator must track this change. A method of learning a predictive representation off-policy is presented where the behavior policy distribution is estimated via an adversarial method employing the density ratio trick. 
It is demonstrated that deep off-policy predictions can be learned with a deep behavior policy estimation to predict future lane centeredness and road angles from images. The predictive representation is learned with linear deterministic policy gradient. All of these components are combined together in a framework called GVF-DPG and learned simultaneously on the challenging problem of steering a vehicle in TORCS from only images. The show that the GVF-DPG is able to steer smoothly with less change in action and achieve better performance than DDPG from only images and similar performance to the kinematics model in several but not all of the test tracks. This work is also a demonstration that we can learn off-policy predictions, characterize the behavior policy and learn the controller all at the same time despite the challenges of the behavior policy evolving with the agent and the predictive state representation changing over time. Our work demonstrates that a learned prediction-based vision-only steering controller could potentially be viable with more work on improving the generalizability of the off-policy predictions. This work supports the predictive state representation hypothesis in that deep predictions can improve the generalization of RL to new road environments when using only images as input. For future work, we hope to study how to learn the question for the predictive state representation: τ, γ, and c. Moreover, because the behavior policy is unknown and estimated, our suggest that collecting real-world human driving to train predictions off-policy without the need for a simulator could be a viable approach to steering a vehicle from images. This is potentially advantageous since the human driver can explore the road safely. The predictive learning approach presented in this paper is called GVF-DPG (general value function deterministic policy gradient). Exploration followed the same approach as where an Ornstein Uhlenbeck process is used to explore the track where the parameters of the process (θ = 0.01, σ = 0.01) were tuned to provide a gradual wandering behavior on the track without excessive oscillations in the action. The reason is to improve the learning of the off-policy predictions for GVF-DPG since the behavior policy µ(a|s) is closer to the target policy τ (a|s) of the predictions. The GVF-DPG approach learned 8 predictions: 4 predictions of lane centeredness α, and 4 predictions of road angle β. Each of the 4 predictions had different values of γ for different temporal horizons: 0.5, 0.9, 0.95, 0.97. This allowed the agent to make short-term, medium term and long term predictions of lane centeredness α and road angle β. The GVF predictors all share the same deep convolutional neural network where the convolutional layers are identical to the architecture in followed by three fully connected layers of 512, 384 and 8 outputs, respectively. The behavior estimator µ(a|s) also uses a very similar neural network with identical convolutional layers followed by three fully connected layers of 512, 256, and 1 output, respectively. The GVF predictors and behavior estimator are both given the current image, previous image, current speed and the last two actions taken. The policy network of the DPG agent is linear with respect to the predictions. The action-value network is a small 4 layer network of 64x64x32x1 and outputs the change in the action from the previous action. The predictions are supplied to the agent and the last action is needed to compute the next action. 
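A minimal sketch of the Ornstein-Uhlenbeck exploration process mentioned above (θ = 0.01, σ = 0.01 for GVF-DPG) is given below; the time step, zero mean, and clipping to the steering range are assumptions.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise added to the deterministic action;
    theta and sigma follow the quoted values, dt and mu=0 are assumptions."""
    def __init__(self, theta=0.01, sigma=0.01, mu=0.0, dt=1.0, rng=None):
        self.theta, self.sigma, self.mu, self.dt = theta, sigma, mu, dt
        self.rng = rng if rng is not None else np.random.default_rng()
        self.x = mu

    def sample(self):
        self.x += self.theta * (self.mu - self.x) * self.dt \
                  + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal()
        return self.x

noise = OUNoise()
exploratory_action = np.clip(0.1 + noise.sample(), -1.0, 1.0)  # steering in [-1, 1]
```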
All networks use rectified linear unit activations except for the last layers which are linear. The learning rates for the linear policy network, action-value network, predictors and behavior policy network were 1e −4, 1e −8, 1e −4, and 1e −4 respectively. It was noted that the action-value learning rate needed to be small in order to faciliate two time-scale learning of the predictive representation and the action-values. The GVF-DPG approach is compared to two DDPG (deep deterministic policy gradient) baselines where the differences are given by only the information provided in the state. The first method is a vision-based approach where the image and current speed is provided to the agent; the only information available to the agent about the road is supplied via images. This proved challenging for the DDPG agent. Thus, a second agent was trained with the image and the lane centeredness α and road angle β that the GVF-DPG agent had access to during training. This is cheating in a sense because the agent must have the lane centeredness and road angle available at all times including at test time, whereas the GVF-DPG agent is hopefully learning features that are generalizable to unseen roads where this information may not be always available. However, it provides a good target for evaluating the GVF-DPG agent. An Ornstein Uhlenbeck process is used to explore the track (θ = 0.05, σ = 0.05). In previous works with DDPG, target networks were utilized for both the action-value and the policy networks. The target networks are copies of the action-value and policy networks that are updated more slowly in order make the bootstrapped prediction of the action-values more stable. However, in our experiments it was found that removing the target networks improved learning and so no target networks were deployed in any of the following experiments. Identical network architectures were used for the policy and action-value networks of the DDPG agents as was used for the behavior network for GVF-DPG. The low dimensional state information was feed through a separate branch of 12 neurons that were then merged with the fully connected layer of 512 neurons. This method of merging can present challenges due to mismatching statistical properties of the branches but that did not seem to present a significant challenge in learning. Future work would be to find better ways to bridge these two different pieces of information. The hyperbolic tangent activation was used for output layer of the policy network, the linear activation was used for the output layer of the action-value network and rectified linear activation was used everywhere else. During training, a track is selected randomly according to a priority sampling method, since the tracks were not balanced, and the agent is allowed to interact with the environment and learn until termination. Termination occurs when either the agent leaves the lane or the maximum number of steps has been reached (1200 steps = 120 seconds). A priority sampling method was chosen so that tracks that were more difficult were selected more often. The probability of sampling a track i is given by e − n i κ N j=1 e − n j κ where n i is the number of steps that the agent was able to achieve in the last episode for that track and κ controls the spread of the distribution. A value of κ = 1 N N j=1 n j was found to perform well. The initial probabilities are equal for all tracks. 
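The track priority sampling rule just described, p_i = exp(-n_i / κ) / Σ_j exp(-n_j / κ) with κ set to the mean episode length, can be sketched as follows.

```python
import numpy as np

def track_sampling_probs(steps_last_episode):
    """Priority over training tracks: tracks the agent survived on for fewer
    steps are sampled more often; kappa is set to the mean episode length."""
    n = np.asarray(steps_last_episode, dtype=np.float64)
    kappa = n.mean() if n.mean() > 0 else 1.0   # equal probabilities initially
    logits = -n / kappa
    p = np.exp(logits - logits.max())
    return p / p.sum()

p = track_sampling_probs([1200, 300, 75, 950])
track = np.random.default_rng().choice(len(p), p=p)
```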
The TORCS environment was modified to provide higher resolution images in grayscale rather than RGB with most of the image above the horizon cropped out of the image. The grayscale images were 128 pixels wide by 64 pixels high. This allowed the agent to see more detail farther away which is very helpful in making long term predictions which is beneficial to both policy gradient methods and predictive learning.
An algorithm to learn a predictive state representation with general value functions and off-policy learning is applied to the problem of vision-based steering in autonomous driving.
743
scitldr
For numerous domains, including for instance earth observation, medical imaging, astrophysics,..., available image and signal datasets often irregular space-time sampling patterns and large missing data rates. These sampling properties is a critical issue to apply state-of-the-art learning-based (e.g., auto-encoders, CNNs,...) to fully benefit from the available large-scale observations and reach breakthroughs in the reconstruction and identification of processes of interest. In this paper, we address the end-to-end learning of representations of signals, images and image sequences from irregularly-sampled data, {\em i.e.} when the training data involved missing data. From an analogy to Bayesian formulation, we consider energy-based representations. Two energy forms are investigated: one derived from auto-encoders and one relating to Gibbs energies. The learning stage of these energy-based representations (or priors) involve a joint interpolation issue, which resorts to solving an energy minimization problem under observation constraints. Using a neural-network-based implementation of the considered energy forms, we can state an end-to-end learning scheme from irregularly-sampled data. We demonstrate the relevance of the proposed representations for different case-studies: namely, multivariate time series, 2{\sc} images and image sequences. In numerous application domains, the available observation datasets do not involve gap-free and regularly-gridded signals or images. The irregular-sampling may both from the characteristics of the sensors and sampling strategy, e.g. considered orbits and swaths in spacebone earth observation and astrophysics, sampling schemes in medical imaging, as well as environmental conditions which may affect the sensor, e.g. atmospheric conditions and clouds for earth observation. A rich literature exists on interpolation for irregularly-sampled signals and images (also referred to as inpainting in image processing). A classic framework states the interpolation issue as the miminisation of an energy, which may be interpreted in a Bayesian framework. A variety of energy forms, including Markovian priors, patch-based priors, gradient norms in variational and/or PDE-based formulations, Gaussian priors as well as dynamical priors in fluid dynamics. The later relates to optimal interpolation and kriging, which is among the state-of-the-art and operational schemes in geoscience. Optimal schemes classically involve the inference of the considered covariance-based priors from irregularly-sampled data. This may however be at the expense of Gaussianity and linearity assumptions, which do not often apply for real signals and images. For the other types of energy forms, their parameterization are generally set a priori and not learnt from the data. Regarding more particularly data-driven and learning-based approaches, most previous works (2; 11; 20) have addressed the learning of interpolation schemes under the assumption that a representative gap-free dataset is available. This gap-free dataset may be the image itself (9; 20; 18). For numerous application domains, as mentionned above, this assumption cannot be fulfilled. Regarding recent advances in learning-based schemes, a variety of deep learning models, e.g. (7; 16; 24; 23), have been proposed. Most of these works focus on learning an interpolator. One may however expect to learn not only an interpolator but also some representation of considered data, which may be of interest for other applications. 
In this respect, RBM models (Restricted Boltzmann In this section, we formally introduce the considered issue, namely the end-to-end learning of representations and interpolators from irregularly-sampled data. Within a classic Bayesian or energybased framework, interpolation issues may be stated as a minimization issue where X is the considered signal, image or image series (referred to hereafter as the hidden state), Y the observation data, only available on a subdomain Ω of the entire domain D, and U θ the considered energy prior parameterized by θ. As briefly introduced above, a variety of energy priors have been proposed in the literature, e.g. (4; 20; 5). We assume we are provided with a series of irregularly-sampled observations, that is to say a set is only defined on subdomain Ω (i). Assuming that all X (i) share some underlying energy representation U θ , we may define the following operator such that I(Here, we aim to learn the parameters θ of the energy U θ from the available observation dataset {Y (i), Ω (i) } i. Assuming operator I is known, this learning issue can be stated as the minimization of reconstruction error for the observed data where. 2 Ω refers to the L2 norm evaluated on subdomain. Learning energy U θ from observation dataset {Y (i), Ω (i) } i clearly involves a joint interpolation issue solved by operator I. Given this general formulation, the end-to-end learning issue comes to solve minimization according to some given parameterization of energy U θ . In, interpolation operator I is clearly critical. In Section 3, we investigate a neural-network implementation of this general framework, which embeds a neural-network formulations both for energy U θ and interpolation operator I. In this section, we detail the proposed neural-network-based implementation of the end-to-end formulation introduced in the previous section. We first present the considered paramaterizations for energy U θ in (Section 3.1). We derive associated NN-based interpolation operators I (Section 3.2) and describe our overall NN architectures for the end-to-end learning of representations and interpolators from irregularly-sampled datasets (Section 3.3). We first investigate NN-based energy representations based on auto-encoders. Let us denote by φ E and φ D the encoding and decoding operators of an auto-encoder (AE), which may comprise both dense auto-encoders (AEs), convolutional AEs as well as recurrent AEs when dealing with time-related processes. The key feature of AEs is that the encoding operator φ E maps the state X into a low-dimensional space. Auto-encoders are naturally associated with the following energy Minimizing according to this energy amounts to retrieving the hidden state whose lowdimensional representation in the encoding space matches the observed data in the original decoded space. Here, parameters θ refer to the parameters of the encoder φ E and decoder φ D, respectively θ E and θ D. The mapping to lower-dimensional space may be regarded as a potential loss in the representation potential of the representation. Gibbs models provide an appealing framework for an alternative energy-based representation, with no such dimensionality reduction constraint. Gibbs models introduced in statistical physics have also been widely explored in computer vision and pattern recognition from the 80s. Gibbs models relate to the decomposition of U θ as a sum of potentials U θ (X) = c∈C V c (X c) where C is a set of cliques, i.e. 
a set of interacting sites (typically, local neighbors), and V c the potential on clique c. In statistical physics, this formulation states the global energy of the system as the sum of local energies (the potential over locally-interacting sites). Here, we focus on the following parameterization of the potential function with N s the set of neighbors of site s for the entire domain D and ψ a potential function. Lowenergy state for this energy refers to state X which operator ψ provides a good prediction at any site s knowing the state in the neighborhood N s of s. This type of Gibbs energy relates to Gaussian Markov random fields, where the conditional likelihood at one site given its neighborhood follows a Gaussian distribution. We implement this type of Gibbs energy using the following NN-based parameterization of operator ψ: ψ(X) = ψ 2 (ψ 1 (X)) It involves the composition of a space and/or time convolutional operator ψ 1 and a coordinate-wise operator ψ 2. The convolutional kernel for operator ψ 1 is such that the coefficients for the center of convolutional window are set to zero. This property fulfills the constraint that X(s) is not involved in the computation of ψ (X Ns) at site s. As an example, for a univariate image, ψ 1 can be set as a convolutional layer with N F filters with kernels of size 3x3x1, such that for each kernel K f K f = 0 (the same applies to biases). In such a case, operator ψ 2 would be a convolution layer with one filter with a kernel of size 1x1xN F. Both ψ 1 and ψ 2 can also involve non-linear activations. Without loss of generality, given this parameterization for operator ψ, we may rewrite energy U θ as U θ (X) = X − ψ (X) 2 where ψ (X)) at site s is given by ψ (X Ns). Overall, we may use the following common formulation for the two types of energy-based representation They differ in the parameterization chosen for operator ψ. Besides the NN-based energy formulation, the general formulation stated in involves the definition of interpolation operator I, which refers to minimization. We here derive NN-based interpolation architectures from the considered NN-based energy parameterization. Given parameterization, a simple fixed-point algorithm may be considered to solve for. This algorithm at the basis of DINEOF algorithm and XXX for matrix completion under subspace constraints (2; 14) involves the following iterative update Interestingly, the algorithm is parameter-free and can be readily implemented in a NN architecture given the number of iterations to be considered. Given some initialisation, one may typically consider an iterative gradient-based descent which applies at each iteration k with J U θ the gradient of energy U θ w.r.t. state X, λ the gradient step and Ω the missing data area. Automatic differentiation tool embedded in neural network frameworks may provide the numerical computation for gradient J U θ given the NN-based parameterization for energy U θ. This proved numerically too expensive and was not further investigated in our experiments. Given the considered form for energy U θ, its gradient w.r.t. X decomposes as a product and X − ψ (X) may be regarded as a suboptimal gradient descent. Hence, rather than considering the true Jacobian J ψ for operator ψ, we may consider an approximation through a trainable CNN G such that the gradient descent becomes where andG is a CNN to be learnt jointly to ψ during the learning stage. Interestingly, this gradient descent embeds the fixed-point algorithm whenG is the identity. 
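The two interpolators described above can be sketched as follows, assuming a trained operator ψ (and, for the gradient-like variant, a trainable operator G). The zero-filled initialisation and the enforcement of the observation constraint on Ω at every iteration follow the text; the exact form of the update equations, which are not rendered here, is our reading of the description.

```python
import numpy as np

def fixed_point_interpolation(y, mask, psi, n_iters=15):
    """Parameter-free fixed-point interpolator (a sketch of I_FP).

    y:    observed field (values outside the observed domain are ignored)
    mask: boolean array, True on the observed subdomain Omega
    psi:  trained operator such that U(X) = ||X - psi(X)||^2
    """
    x = np.where(mask, y, 0.0)                 # zero-filled initialisation
    for _ in range(n_iters):
        x = np.where(mask, y, psi(x))          # keep observations, update gaps
    return x

def gradient_interpolation(y, mask, psi, G, lam=0.1, n_iters=15):
    """Gradient-like interpolator (a sketch of I_G): the residual X - psi(X) is
    passed through a trainable operator G approximating the true gradient."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iters):
        x = np.where(mask, y, x - lam * G(x - psi(x)))  # update missing area only
    return x

# Tiny demo with a toy smoothing operator standing in for the trained psi.
smooth = lambda x: 0.5 * (np.roll(x, 1) + np.roll(x, -1))
y = np.sin(np.linspace(0, 6, 100)); mask = np.random.rand(100) > 0.75
x_hat = fixed_point_interpolation(y, mask, smooth)
```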
Let us denote respectively by I F P and I G the fixed-point and gradient-based NN-based interpolators, which implement N I iterations of the proposed interpolation updates. Below, I N N will denote both I F P and I G. Whereas I F P is parameter-free, I G involves the parameterization of operator G. We typically consider a CNN with ReLu activations with increasing numbers of filter through layers up to the final layer which applies a linear convolutional with a number of filters given by the dimension of the state. Figure 1: Sketch of the considered end-to-end architecture: we depict the considered N I -block architecture which implements a N I -step interpolation algorithm described in Section 3.2. Operator ψ is defined through energy representation and G refers to the NN-based approximation of the gradient-based update for minimization. This architecture uses as input a mask Ω corresponding to the missing-data-free domain and an initial gap-filling X for state X. We typically initially fill missing data with zeros for centered and normalized states. Given the parameterizations for energy U θ and the associated NN-based interpolators presented previously, we design an end-to-end learning for energy representation U θ and associated interpolator I N N, which uses as inputs an observed sample Y (i) and the associated missing-data-free domain Ω (i). Using a normalization preprocessing step, we initially fill missing data with zeros to provide an initial interpolated state to the architecture. We provide a sketch of the architecture in Fig.1. Regarding implementation details, beyond the design of the architectures, which may be applicationdependent for operators ψ andG (see Section 4), we consider an implementation under keras using tensorflow as backend. Regarding the training strategy, we use adam optimizer. We iteratively increase the number of blocks N I (number of gradient steps) to avoid the training to diverge. Similarly, we decrease the learning rate accross iterations, typically from 1e-3 to 1e-6. In our experiments, we typically consider from 5 to 15 blocks. All the experiments were run under workstations with a single GPU (Nvidia GTX 1080 and GTX 1080 Ti). In this section, we report numerical experiments on different datasets to evaluate and demonstrate the proposed scheme. We consider three different case-studies: an image dataset, namely MNIST; a multivariate time-series through an application to Lorenz-63 dynamics and an image sequence dataset through an application to ocean remote sensing data with real missing data patterns. In all experiments, we refer to the AE-based framework, respectively as FP(d)-ConvAE and G(d)-ConvAE using the fixed-point or gradient-based interpolator where the value of d refers to the number of interpolation steps. Similarly, we refer to the Gibbs-based frameworks respectively as FP(d)-GENN and G(d)-GENN. We evaluate the proposed framework on MNIST datasets for which we simulate missing data patterns. The dataset comprises 60000 28x28 grayscale images. For this dataset, we only evaluate the AE-based setting. We consider the following convolutional AE architecture with a 20-dimensional encoding space: • Encoder operator φ E: Conv2D+ ReLU + AvPooling + Conv2D + ReLU + AveragePooling + Dense + ReLU + Dense; • Decoder operator φ E: Conv2DTranspose + ResNet, ResNet: Conv2D+ReLU+Conv2D We generate random missing data patterns composed of N S squares of size W S xW S, the center of the square is randomly sampled uniformly over the image grid. 
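One implementation detail worth making explicit is the zero-weight constraint at the centre of ψ1's convolution kernels mentioned earlier. In the keras/tensorflow setting referred to above, it can be enforced with a kernel constraint such as the sketch below; the class name, filter count and bias handling are assumptions, not the released code.

```python
import numpy as np
import tensorflow as tf

class ZeroCenterKernel(tf.keras.constraints.Constraint):
    """Kernel constraint zeroing the centre tap of every 2D convolution filter,
    so the prediction psi(X) at a pixel never reads the value of X at that pixel."""
    def __call__(self, w):
        shape = tuple(int(d) for d in w.shape)       # (kh, kw, in_ch, out_ch)
        mask = np.ones(shape, dtype=np.float32)
        mask[shape[0] // 2, shape[1] // 2, :, :] = 0.0
        return w * tf.constant(mask)

# A hypothetical psi_1 layer; the bias is disabled as one way to honour the
# stated requirement that biases are also kept at zero.
psi_1 = tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same",
                               activation="relu", use_bias=False,
                               kernel_constraint=ZeroCenterKernel())
```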
As illustrated in Fig.3, we consider four missing data patterns: N S = 20 and W S = 5, N S = 30 and W S = 5, N S = 3 and W S = 9, N S = 6 and W S = 9. As performance measure, we evaluate an interpolation score (I-score), a global reconstruction score (R-score) for the interpolated images and an auto-encoding (AE-score) score of the trained auto-encoder applied to gap-free data, in terms of explained variance. We also evaluate a classification score (C-score), in terms of mean accurcay, using the 20-dimensional encoding space as feature space for classification with a 3-layer MLP. We report all performance measures for both the test dataset in Tab.1 for MNIST dataset. For benchmarking purposes, we also report the performance of DINEOF framework, which uses a 20-dimensional PCA trained on the gap-free dataset, the auto-encoder architecture trained on gap-free dataset as well as the considered convolutional auto-encoder trained using an initial zero-filling for missing data areas and a training loss computed only of observed data areas. The later can be regarded as a FP-ConvAE architecture using a single block in Fig.1. Overall, these illustrate that representations trained from gap-free data may not apply when considering significant missing data rates as illustrated by relatively poor performance of PCA-based and AE schemes, when trained from gap-free data. Similarly, training an AE representations using as input a zero-filling strategy lowers the auto-encoding power when applied to gap-free data. Overall, the proposed scheme guarantees a good representation in terms of AE score with an additional gain in terms of interpolation performance, typically between ≈ 15% and 30% depending of the missing data patterns, the gain being greater when considering larger missing data areas. Table 1: Performance of AE schemes in presence of missing data for Fashion MNIST dataset: for a given convolutional AE architecture (see main text for details), a PCA and ConvAE models trained on gap-free data with a 15-iteration projection-based interpolation (resp., DINEOF and ConvAE), a zero-filling stratefy with the same ConvAE architecture (Zero-ConvAE) and the fixedpoint and gradient-based versions of the proposed scheme. For each experiment, we evaluate four measures: the reconstruction performance for the known image areas (R-score), the interpolation performance for the missing data areas (I-score), the reconstruction performance of the trained AE when applied to gap-free images (AE-score), the classification score of a MLP classifier trained in the trained latent space for training images involving missing data. We present an application to the Lorenz-63 dynamics, which involve a 3-dimensional state governed by the following ordinary differential equation: Under parameterization σ = 10, ρ = 28 and β = 8/3 considered here, Lorenz-63 dynamics are chaotic dynamics, which make then challening in our context. They can be regarded as a reducedorder model of turbulence dynamics. We simulate Lorenz-63 time series of 200 time steps using a Runge-Kutta-4 ODE solver with an integration step of 0.01 from an initial condition in the attractor. For a given experiment, we first subsample the simulated series to a given time step dt and then generate using a uniform random samplong a missing data mask accounting for 75% of the data. Overall, training and test time series are formed by subsequences of 200 time steps. We report experiments with the GE-NN setting. 
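For reference, a minimal sketch of this data generation, i.e. RK4 integration of the Lorenz-63 equations followed by subsampling and a 75% uniform random mask (function names, the initial condition and variable names are illustrative):

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # dx/dt = sigma (y - x), dy/dt = x (rho - z) - y, dz/dt = x y - beta z
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def simulate_rk4(x0, n_steps, h=0.01):
    """4th-order Runge-Kutta integration with step h = 0.01."""
    traj, x = np.empty((n_steps, 3)), np.asarray(x0, dtype=float)
    for i in range(n_steps):
        k1 = lorenz63(x)
        k2 = lorenz63(x + 0.5 * h * k1)
        k3 = lorenz63(x + 0.5 * h * k2)
        k4 = lorenz63(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = x
    return traj

# Simulate, subsample to dt = 0.02, and mask 75% of the entries uniformly at random.
series = simulate_rk4(x0=[1.0, 1.0, 25.0], n_steps=400)[::2]     # 200 steps at dt = 0.02
mask = (np.random.rand(*series.shape) > 0.75).astype("float32")  # 1 = observed
observations = mask * series
```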
The AE-based framework showed lower performance and is not included here. The considered GE-NN architecture is as follows: a 1D convolution layer with 120 filters with a kernel width of 3, zero-weight-constraints for the center of the convolution kernel and a Relu activation, a 1D convolution layer with 6 filters a kernel width of 1 and a Relu activation, a residual network with 4 residual units using 6 filters with a kernel width of 1 and a linear activation. The last layer is a convolutional layer with 3 filters, a kernel width of 1 and a linear activation. Figure 2: Example of missing data interpolation for Lorenz-63 dynamics: from left to right, the time series of each of the three components of Lorenz-63 states for dt = 0.02 and a 75% missing data rate. We depict the irregularly-sampled observed data (black dots), the true state (green,-), the interpolated states using DINEOF (blue, -) and the interpolated states using the proposed approach (G-NN-FP-OI) (red, -). Visually, the interpolated sequence using our approach can hardly be distinguished from the true states. For benchmarking purposes, we report the interpolation performance issued from an ensemble Kalman smoother (EnKS) knowing the true model, regarded as a lower-bound of the interpolation performance. The parameter setting of the EnKS is as follows: 200 members, noise-free dynamical model and spherical observation covariannce to 0.1 · I. We also compare the proposed approach to DINEOF (2; 21). Here, the learning of the PCA decomposition used in the DINEOF scheme relies on gap-free data. Fig.2 illustrates this comparison for one sequence of 200 time steps with dt = 0.02. In this example, one can hardly distinguish the interpolated sequence using the proposed approach (FP-GE-NN). By contrast, DINEOF scheme cannot retrieve some of the largest deviations. We report in Appendix Tab.3 the performance of the different interpolation schemes. The proposed approach clearly outperforms DINEOF by about one order of magnitude for the experiments with a time-step dt = 0.02. The interpolation error for observed states (first line in Tab.3) also stresses the improved prior issued from the proposed Gibbs-like energy setting. For chaotic dynamics, global PCA representation seems poorly adapted where local representations as embedded by the considered Gibbs energy setting appear more appealing. The third case-study addresses satellite-derived Sea Surface Temperature (SST) image time series. Due to their sensitivity to the cloud cover, such SST datasets issued from infrared sensors may involve large missing data rates (typically, between 70% and 90%, Fig.? ? for an illustration). For evaluation purposes, we build a groundtruthed dataset from high-resolution numerical simulations, namely NATL60 data, using real cloud masks from METOP AVHRR sensor (19 For this case-study, we consider the following four architectures for the AEs and the GE-NNs: • ConvAE 1 : the first convolutional auto-encoder involves the following encoder architecture: five consecutive blocks with a Conv2D layer, a ReLu layer and a 2x2 average pooling layer, the first one with 20 filters the following four ones with 40 filters, and a final linear convolutional layer with 20 filters. The output of the encoder is 4x16x20. The decoder involves a Conv2DTranspose layer with ReLu activation for an initial 16x16 upsampling stage a Conv2DTranspose layer with ReLu activation for an additional 2x2 upsampling stage, a Conv2D layer with 16 filters and a last Conv2D layer with 5 filters. 
All Conv2D layers use 3x3 kernels. Overall, this model involves ≈ 400,000 parameters. • ConvAE 2 : we consider a more complex auto-encoder with an architecture similar to ConvAE 1 where the number of filters is doubled (e.g., The output of the encoder is a 4x16x40 tensor). Overall, this model involves ≈ 900,000 parameters. • GE-NN 1,2: we consider two GE-NN architectures. They share the same global architecture with an initial 4x4 average pooling, a Conv2D layer with ReLu activation with a zeroweight constraint on the center of the convolution window, a 1x1 Conv2D layer with N filters, a ResNet with a bilinear residual unit, composed of an initial mapping to an initial 32x128x(5*N) space with a Conv2D+ReLu layer, a linear 1x1 Conv2D+ReLu layer with N filters and a final 4x4 Conv2DTranspose layer with a linear activation for an upsampling to the input shape. GE-NN 1 and GE-NN 2 differ in the convolutional parameters of the first Conv2D layers and in the number of residual units. GE-NN 1 involves 5x5 kernels, N = 20 and 3 residual units for a total of ≈ 30,000 parameters. For GE-NN 2, we consider 11x11 kernels, N = 100 and 10 residual units for a total of ≈ 570,000 parameters. These different parameterizations were selected so that ConvAE 1 and GE-NN 2 involve a modeling complexity in the same range. We may point out that the considered GE-NN architecture are not applied to the finest resolution but to downscaled grids by a factor of 4. The application of GENNs to the finest resolution showed poor performance. This is regarded as an illustration of the requirement for considering a scale-selection problem when applying a given prior. The upscaling involves the combination of a Conv2DTranspose layer with 11 filters, a Conv2D layer with a ReLu activation with 22 filters and a linear Conv2D layer with 11 filters. Similarly to MNIST dataset, we report the performance of the different models in terms of interpolation score (I-score), reconstruction score (R-score) and auto-encoding score (AE-score) both for the training and test dataset. We compare the performance of the four models using the fixed-point and gradient-based interpolation. Overall, we can draw conlusions similar to MNIST case-study. Representations trained from gap-free data lead to poor performance and the proposed scheme reaches the best performance (gain over 50% in terms of explained variance for the interpolation and reconstruction score). Here, models trained with a zero-filling strategy show good interpolation and reconstruction performance, but very poor AE score, stressing that cannot apply beyond the considered interpolation task. When comparing GE-NN and AE settings, GE-NNs show slightly better performance with a much lower complexity (e.g., 30,000 parameters for GE-NN 1 vs. 400,000 parameters for ConvAE 1). Regarding the comparison between the fixed-point and gradient-based interpolation strategies, the later reaches slightly better interpolation and reconstruction score. We may point out the significant gain w.r.t. OI, which is the current operational tool for ocean remote sensing data. We illustrate these in Appendix (Fig.6), which further stresses the gain w.r.t. OI for the reconstruction of finer-scale structures. Table 2: Performance on SST dataset: We evaluate for each model interpolation, reconstruction and auto-encoding scores, resp. I-score, R-score and AE-score, in terms of percentage of explained variance resp. 
for the interpolation of missing data areas, the reconstruction of the whole image with missing data and the reconstruction of gap-free images. For each model, we evaluate these score for the training data (first row) and the test dataset (second row in brackets). We consider four different auto-encoder models, namely 20 and 80-dimensional PCAs and ConvAE 1,2 models, and two GE-NN models, GE-NN 1,2, combined with three interpolation strategies: the classic zero-filling strategy (Zero) and proposed iterative fixed-point (FP) and gradient-based (G) schemes, the figure in brackets denoting the number of iterations. For instance, FP-GE-NN 1 refers to GE-NN 1 with a 10-step fixed-point interpolation scheme. The PCAs are trained from gap-free data. We also consider an Optimal Interpolation (OI) with a space-time Gaussian covariance with empiricallytuned parameters. We refer the reader to the main text for the detailed parameterization of the considered models. In this paper, we have addressed the learning of energy-based representations of signals and images from observation datasets involving missing data (with possibly very large missing data rates). Using the proposed architectures, we can jointly learn relevant representations of signals and images while jointly providing the associated interpolation schemes. Our experiments stress that learning representations from gap-free data may lead to representations poorly adapted to the analysis of data with large missing data areas. We have also introduced a Gibbs priors embedded in a neural network architecture. Relying on local characteristics rather than global ones as in AE schemes, these priors involve a much lower complexity. Our experiments support their relevance for addressing inverse problems in signal and image analysis. Future work may further explore multi-scale extensions of the proposed schemes along with couplings between global and local energy representations and hybrid minimization schemes combining both gradient-based and fixed-point strategies in the considered end-to-end formulation. A.1 SUPPLEMENTARY FOR MNIST DATASET We illustrate below both the considered masking patterns as well as reconstruction examples for the proposed framework applied to MNIST dataset. The first row depicts the reference image, the second row the missing data mask and the third one the interpolated image. The first two panels illustrate interpolation for training data and last two for test data. We depict grayscale mnist images using false colors to highmight differences. We report below a Table which details the interpolation performance of the proposed GE-NN representation applied to Lorenz-63 time series in comparison with a PCA-based scheme and a lowerbound provided by the interpolation assuming the ODE model is known. A.3 SUPPLEMENTARY FOR SST DATASET We report below reconstruction examples for the application of the proposed GE-NN approach to SST time series with real missing data masks, which involve very large missing data rates (typically above 80%). The consistency between the interpolation and the reconstruction of the gap-free image from the learnt energy-based representation further stresses the ability of the proposed approach to extract a generic representation from irregularly-sampled data. These reulsts is known, and a DINEOF scheme. We report interpolation for a 75% missing data rate with uniform random sampling for three different sampling time steps, dt = 0.01, dt = 0.02 and dt = 0.04. 
We report the mean square error of the interpolation for the observed data (first row) and for the masked data (second row). These results also emphasize a much greater ability of the proposed learning-based scheme to reconstruct fine-scale structures, which can hardly be recovered by an OI scheme with a Gaussian space-time covariance model. We may recall that the latter is the state-of-the-art approach for the processing of satellite-derived earth observation data. Interpolation examples for SST data used during training: first row, reference SST images corresponding to the center of the considered 11-day time window; second row, associated SST observations with missing data; third row, interpolation issued from the FP-GE-NN 2 model; fourth row, reconstruction of the gap-free image series issued from the FP-GE-NN 2 model; last row, interpolation issued from an optimal interpolation scheme using a Gaussian covariance model with empirically tuned parameters. Figure 6: Interpolation examples for SST data never seen during training: first row, reference SST images corresponding to the center of the considered 11-day time window; second row, associated SST observations with missing data; third row, interpolation issued from the FP-GE-NN 2 model; fourth row, reconstruction of the gap-free image series issued from the FP-GE-NN 2 model; last row, interpolation issued from an optimal interpolation scheme using a Gaussian covariance model with empirically tuned parameters.
We address the end-to-end learning of energy-based representations for signal and image observation datasets with irregular sampling patterns.
Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies. This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm endows efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance. A remarkable trait of human intelligence is the ability to adapt to new situations in the face of limited experience. In contrast, our most successful artificial agents struggle in such scenarios. While achieving impressive , they suffer from high sample complexity in learning even a single task, fail to generalize to new situations, and require large amounts of additional data to successfully adapt to new environments. Meta-learning addresses these shortcomings by learning how to learn. Its objective is to learn an algorithm that allows the artificial agent to succeed in an unseen task when only limited experience is available, aiming to achieve the same fast adaptation that humans possess .Despite recent progress, deep reinforcement learning (RL) still relies heavily on hand-crafted features and reward functions as well as engineered problem specific inductive bias. Meta-RL aims to forego such reliance by acquiring inductive bias in a data-driven manner. Recent work proves this approach to be promising, demonstrating that Meta-RL allows agents to obtain a diverse set of skills, attain better exploration strategies, and learn faster through meta-learned dynamics models or synthetic returns BID8; BID14 ).Meta-RL is a multi-stage process in which the agent, after a few sampled environment interactions, adapts its behavior to the given task. Despite its wide utilization, little work has been done to promote theoretical understanding of this process, leaving Meta-RL grounded on unstable foundations. Although the behavior prior to the adaptation step is instrumental for task identification, the interplay between pre-adaptation sampling and posterior performance of the policy remains poorly understood. In fact, prior work in gradient-based Meta-RL has either entirely neglected credit assignment to the pre-update distribution BID9 or implemented such credit assignment in a naive way BID10 ).To our knowledge, we provide the first formal in-depth analysis of credit assignment w.r.t. preadaptation sampling distribution in Meta-RL. Based on our findings, we develop a novel Meta-RL algorithm. First, we analyze two distinct methods for assigning credit to pre-adaptation behavior.of MAML, was first introduced by BID9. We refer to it as formulation I which can be expressed as maximizing the objective DISPLAYFORM0 In that U denotes the update function which depends on the task T, and performs one VPG step towards maximizing the performance of the policy in T. For national brevity and conciseness we assume a single policy gradient adaptation step. 
Nonetheless, all presented concepts can easily be extended to multiple adaptation steps. Later work proposes a slightly different notion of gradient-based Meta-RL, also known as E-MAML, that attempts to circumvent issues with the meta-gradient estimation in MAML BID10 ): DISPLAYFORM1 R(τ) with θ:= U (θ, τ 1: DISPLAYFORM2 Formulation II views U as a deterministic function that depends on N sampled trajectories from a specific task. In contrast to formulation I, the expectation over pre-update trajectories τ is applied outside of the update function. Throughout this paper we refer to π θ as pre-update policy, and π θ as post-update policy. This section analyzes the two gradient-based Meta-RL formulations introduced in Section 3. The red arrows depict how credit assignment w.r.t the pre-update sampling distribution P T (τ |θ) is propagated. Formulation I (left) propagates the credit assignment through the update step, thereby exploiting the full problem structure. In contrast, formulation II (right) neglects the inherent structure, directly assigning credit from post-update return R to the pre-update policy π θ which leads to noisier, less effective credit assignment. Both formulations optimize for the same objective, and are equivalent at the 0 th order. However, because of the difference in their formulation and stochastic computation graph, their gradients and the ing optimization step differs. In the following, we shed light on how and where formulation II loses signal by analyzing the gradients of both formulations, which can be written as (see Appendix A for more details and derivations) ∇ θ J post (τ, τ) simply corresponds to a policy gradient step on the post-update policy π θ w.r.t θ, followed by a linear transformation from post-to pre-update parameters. It corresponds to increasing the likelihood of the trajectories τ that led to higher returns. However, this term does not optimize for the pre-update sampling distribution, i.e., which trajectories τ led to better adaptation steps. The credit assignment w.r.t. the pre-updated sampling distribution is carried out by the second term. In formulation II, ∇ θ J II pre can be viewed as standard reinforcement learning on π θ with R(τ) as reward signal, treating the update function U as part of the unknown dynamics of the system. This shifts the pre-update sampling distribution to better adaptation steps. Formulation I takes the causal dependence of P T (τ |θ) on P T (τ |θ) into account. It does so by maximizing the inner product of pre-update and post-update policy gradients (see Eq. 4). This steers the pre-update policy towards 1) larger post-updates returns 2) larger adaptation steps α∇ θ J inner, 3) better alignment of pre-and post-update policy gradients . When combined, these effects directly optimize for adaptation. As a , we expect the first meta-policy gradient formulation, J I, to yield superior learning properties. In the previous section we show that the formulation introduced by BID9 in superior meta-gradient updates, which should in principle lead to improved convergence properties. However, obtaining correct and low variance estimates of the respective meta-gradients proves challenging. As discussed by BID10, and shown in Appendix B.3, the score function surrogate objective approach is ill suited for calculating higher order derivatives via automatic differentiation toolboxes. This important fact was overlooked in the original RL-MAML implementation BID9 leading to incorrect meta-gradient estimates 1. 
As a , ∇ θ J pre does not appear in the gradients of the meta-objective (i.e. ∇ θ J = ∇ θ J post). Hence, MAML does not perform any credit assignment to pre-adaptation behavior. But, even when properly implemented, we show that the meta-gradients exhibit high variance. Specifically, the estimation of the hessian of the RL-objective, which is inherent in the metagradients, requires special consideration. In this section, we motivate and introduce the low variance curvature estimator (LVC): an improved estimator for the hessian of the RL-objective which promotes better meta-policy gradient updates. As we show in Appendix A.1, we can write the gradient of the meta-learning objective as DISPLAYFORM0 H−1 t =t r(s t, a t)|s t, a t denotes the expected state-action value function under policy π θ at time t. Computing the expectation of the RL-objective is in general intractable. Typically, its gradients are computed with a Monte Carlo estimate based on the policy gradient theorem (Eq. 82). In practical implementations, such an estimate is obtained by automatically differentiating a surrogate objective (b). However, this in a highly biased hessian estimate which just computes H 2, entirely dropping the terms H 1 and H 12 +H 12. In the notation of the previous section, it leads to neglecting the ∇ θ J pre term, ignoring the influence of the pre-update sampling distribution. The issue can be overcome using the DiCE formulation, which allows to compute unbiased higherorder Monte Carlos estimates of arbitrary stochastic computation graphs BID10. The DiCE-RL objective can be rewritten as follows DISPLAYFORM1 DISPLAYFORM2 In that, ⊥ denotes the "stop gradient" operator, i.e., DISPLAYFORM3 The sequential dependence of π θ (a t |s t) within the trajectory, manifesting itself through the product of importance weights in FORMULA4, in high variance estimates of the hessian DISPLAYFORM4 As noted by BID12, H 12 is particularly difficult to estimate, since it involves three nested sums along the trajectory. In section 7.2 we empirically show that the high variance estimates of the DiCE objective lead to noisy meta-policy gradients and poor learning performance. To facilitate a sample efficient meta-learning, we introduce the low variance curvature (LVC) estimator: DISPLAYFORM5 By removing the sequential dependence of π θ (a t |s t) within trajectories, the hessian estimate neglects the term H 12 + H 12 which leads to a variance reduction, but makes the estimate biased. The choice of this objective function is motivated by findings in BID12: under certain conditions the term H 12 + H 12 vanishes around local optima θ DISPLAYFORM6 Hence, the bias of the LVC estimator becomes negligible close to local optima. The experiments in section 7.2 underpin the theoretical findings, showing that the low variance hessian estimates obtained through J LVC improve the sample-efficiency of meta-learning by a significant margin when compared to J DiCE. We refer the interested reader to Appendix B for derivations and a more detailed discussion.6 PROMP: PROXIMAL META-POLICY SEARCH Building on the previous sections, we develop a novel meta-policy search method based on the low variance curvature objective which aims to solve the following optimization problem: DISPLAYFORM7 Prior work has optimized this objective using either vanilla policy gradient (VPG) or TRPO (a). TRPO holds the promise to be more data efficient and stable during the learning process when compared to VPG. 
However, it requires computing the Fisher information matrix (FIM). Estimating the FIM is particularly problematic in the meta-learning set up. The meta-policy gradients already involve second order derivatives; as a , the time complexity of the FIM estimate is cubic in the number of policy parameters. Typically, the problem is circumvented using finite difference methods, which introduce further approximation errors. The recently introduced PPO algorithm achieves comparable to TRPO with the advantage of being a first order method. PPO uses a surrogate clipping objective which allows it to safely take multiple gradient steps without re-sampling trajectories. for step n = 0,..., N − 1 do DISPLAYFORM8 if n = 0 then 6: DISPLAYFORM0 for all DISPLAYFORM1 Sample pre-update trajectories D i = {τ i} from T i using π θ 9:Compute adapted parameters DISPLAYFORM2 Sample post-update trajectories DISPLAYFORM3 11: DISPLAYFORM4 In case of Meta-RL, it does not suffice to just replace the post-update reward objective with J CLIP T. In order to safely perform multiple meta-gradient steps based on the same sampled data from a recent policy π θo, we also need to 1) account for changes in the pre-update action distribution π θ (a t |s t), and 2) bound changes in the pre-update state visitation distribution .We propose Proximal Meta-Policy Search (ProMP) which incorporates both the benefits of proximal policy optimization and the low variance curvature objective (see Alg. 1.) In order to comply with requirement 1), ProMP replaces the "stop gradient" importance weight DISPLAYFORM5, which in the following objective DISPLAYFORM6 An important feature of this objective is that its derivatives w.r.t θ evaluated at θ o are identical to those of the LVC objective, and it additionally accounts for changes in the pre-update action distribution. To satisfy condition 2) we extend the clipped meta-objective with a KL-penalty term between π θ and π θo. This KL-penalty term enforces a soft local "trust region" around π θo, preventing the shift in state visitation distribution to become large during optimization. This enables us to take multiple meta-policy gradient steps without re-sampling. Altogether, ProMP optimizes DISPLAYFORM7 ProMP consolidates the insights developed throughout the course of this paper, while at the same time making maximal use of recently developed policy gradients algorithms. First, its meta-learning formulation exploits the full structural knowledge of gradient-based meta-learning. Second, it incorporates a low variance estimate of the RL-objective hessian. Third, ProMP controls the statistical distance of both pre-and post-adaptation policies, promoting efficient and stable meta-learning. All in all, ProMP consistently outperforms previous gradient-based meta-RL algorithms in sample complexity, wall clock time, and asymptotic performance (see Section 7.1). In order to empirically validate the theoretical arguments outlined above, this section provides a detailed experimental analysis that aims to answer the following questions: (i) How does ProMP perform against previous Meta-RL algorithms? (ii) How do the lower variance but biased LVC gradient estimates compare to the high variance, unbiased DiCE estimates? (iii) Do the different formulations in different pre-update exploration properties? 
(iv) How do formulation I and formulation II differ in their meta-gradient estimates and convergence properties?To answer the posed questions, we evaluate our approach on six continuous control Meta-RL benchmark environments based on OpenAI Gym and the Mujoco simulator BID5 ). A description of the experimental setup is found in Appendix D. In all experiments, the reported curves are averaged over at least three random seeds. Returns are estimated based on sampled trajectories from the adapted post-update policies and averaged over sampled tasks. The source code and the experiment data are available on our supplementary website. We compare our method, ProMP, in sample complexity and asymptotic performance to the gradientbased meta-learning approaches MAML-TRPO BID9 ) and E-MAML-TRPO (see FIG2). Note that MAML corresponds to the original implementation of RL-MAML by BID9 where no credit assignment to the pre-adaptation policy is happening (see Appendix B.3 for details). Moreover, we provide a second study which focuses on the underlying meta-gradient estimator. Specifically, we compare the LVC, DiCE, MAML and E-MAML estimators while optimizing meta-learning objective with vanilla policy gradient (VPG) ascent. This can be viewed as an ablated version of the algorithms which tries to eliminate the influences of the outer optimizers on the learning performance (see Fig. 3).These algorithms are benchmarked on six different locomotion tasks that require adaptation: the half-cheetah and walker must switch between running forward and backward, the high-dimensional agents ant and humanoid must learn to adapt to run in different directions in the 2D-plane, and the hopper and walker have to adapt to different configuration of their dynamics. The in FIG2 highlight the strength of ProMP in terms of sample efficiency and asymptotic performance. In the meta-gradient estimator study in Fig. 3, we demonstrate the positive effect of the LVC objective, as it consistently outperforms the other estimators. In contrast, DiCE learns only slowly when compared to the other approaches. As we have motivated mathematically and substantiate empirically in the following experiment, the poor performance of DiCE may be ascribed to the high variance of its meta-gradient estimates. The fact that the of MAML and E-MAML are comparable underpins the ineffectiveness of the naive pre-update credit assignment (i.e. formulation II), as discussed in section 4.Results for four additional environments are displayed in Appendix D along with hyperparameter settings, environment specifications and a wall-clock time comparison of the algorithms. In Section 5 we discussed how the DiCE formulation yields unbiased but high variance estimates of the RL-objective hessian and served as motivation for the low variance curvature (LVC) estimator. Here we investigate the meta-gradient variance of both estimators as well as its implication on the learning performance. Specifically, we report the relative standard deviation of the metapolicy gradients as well as the average return throughout the learning process in three of the metaenvironments. The , depicted in Figure 4, highlight the advantage of the low variance curvature estimate. The trajectory level dependencies inherent in the DiCE estimator leads to a meta-gradient standard deviation that is on average 60% higher when compared to LVC. As the learning curves indicate, the noisy gradients may be a driving factor for the poor performance of DiCE, impeding sample efficient meta-learning. 
Meta-policy search based on the LVC estimator leads to substantially better sample-efficiency and asymptotic performance. In case of HalfCheetahFwdBack, we observe some unstable learning behavior of LVC-VPG which is most likely caused by the bias of LVC in combination with the naive VPG optimizer. However, the mechanisms in ProMP that ensure proximity w.r.t. to the policys KL-divergence seem to counteract these instabilities during training, giving us a stable and efficient meta-learning algorithm. Here we evaluate the effect of the different objectives on the learned pre-update sampling distribution. We compare the low variance curvature (LVC) estimator with TRPO (LVC-TRPO) against MAML BID9 ) and E-MAML-TRPO in a 2D environment on which the exploration behavior can be visualized. Each task of this environment corresponds to reaching a different corner location; however, the 2D agent only experiences reward when it is sufficiently close to the corner (translucent regions of FIG5). Thus, to successfully identify the task, the agent must explore the different regions. We perform three inner adaptation steps on each task, allowing the agent to fully change its behavior from exploration to exploitation. functions. Through its superior credit assignment, the LVC objective learns a pre-update policy that is able to identify the current task and respectively adapt its policy, successfully reaching the goal (dark green circle).The different exploration-exploitation strategies are displayed in FIG5. Since the MAML implementation does not assign credit to the pre-update sampling trajectory, it is unable to learn a sound exploration strategy for task identification and thus fails to accomplish the task. On the other hand, E-MAML, which corresponds to formulation II, learns to explore in long but random paths: because it can only assign credit to batches of pre-update trajectories, there is no notion of which actions in particular facilitate good task adaptation. As consequence the adapted policy slightly misses the task-specific target. The LVC estimator, instead, learns a consistent pattern of exploration, visiting each of the four regions, which it harnesses to fully solve the task. To shed more light on the differences of the gradients of formulation I and formulation II, we evaluate the meta-gradient updates and the corresponding convergence to the optimum of both formulations in a simple 1D environment. In this environment, the agent starts in a random position in the real line and has to reach a goal located at the position 1 or -1. In order to visualize the convergence, we parameterize the policy with only two parameters θ 0 and θ 1. We employ formulation I by optimizing the DiCE objective with VPG, and formulation II by optimizing its (E-MAML) objective with VPG. Figure 6 depicts meta-gradient updates of the parameters θ i for both formulations. Formulation I (red) exploits the internal structure of the adaptation update yielding faster and steadier convergence to the optimum. Due to its inferior credit assignment, formulation II (green) produces noisier gradient estimates leading to worse convergence properties. In this paper we propose a novel Meta-RL algorithm, proximal meta-policy search (ProMP), which fully optimizes for the pre-update sampling distribution leading to effective task identification. 
Our method is the of a theoretical analysis of gradient-based Meta-RL formulations, based on which we develop the low variance curvature (LVC) surrogate objective that produces low variance meta-policy gradient estimates. Experimental demonstrate that our approach surpasses previous meta-reinforcement learning approaches in a diverse set of continuous control tasks. Finally, we underpin our theoretical contributions with illustrative examples which further justify the soundness and effectiveness of our method. In this section we discuss two different gradient-based meta-learning formulations, derive their gradients and analyze the differences between them. The first meta-learning formulation, known as MAML BID9, views the inner update rule U (θ, T) as a mapping from the pre-update parameter θ and the task T to an adapted policy parameter θ. The update function can be viewed as stand-alone procedure that encapsulates sampling from the task-specific trajectory distribution P T (τ |π θ) and updating the policy parameters. Building on this concept, the meta-objective can be written as DISPLAYFORM0 The task-specific gradients follow as DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 In order to derive the gradients of the inner update ∇ θ θ = ∇ θ U (θ, T) it is necessary to know the structure of U. The main part of this paper assumes the inner update rule to be a policy gradient descent step DISPLAYFORM4 DISPLAYFORM5 Thereby the second term in is the local curvature (hessian) of the inner adaptation objective function. The correct hessian of the inner objective can be derived as follows: DISPLAYFORM6 The second meta-reinforcement learning formulation views the the inner update θ = U (θ, τ 1:N) as a deterministic function of the pre-update policy parameters θ and N trajectories τ 1:N ∼ P T (τ 1:N |θ) sampled from the pre-update trajectory distribution. This formulation was introduced in BID10 and further discussed with respect to its exploration properties in.Viewing U as a function that adapts the policy parameters θ to a specific task T given policy rollouts in this task, the corresponding meta-learning objective can be written as DISPLAYFORM0 Since the first part of the gradient derivation is agnostic to the inner update rule U (θ, τ 1:N), we only assume that the inner update function U is differentiable w.r.t. θ. First we rewrite the meta-objective J(θ) as expectation of task specific objectives J II T (θ) under the task distribution. This allows us to express the meta-policy gradients as expectation of task-specific gradients: DISPLAYFORM1 The task specific gradients can be calculated as follows DISPLAYFORM2 As in A.1 the structure of U (θ, τ 1:N) must be known in order to derive the gradient ∇ θ θ. Since we assume the inner update to be vanilla policy gradient, the respective gradient follows as DISPLAYFORM3 The respective gradient of U (θ, τ 1:N) follows as DISPLAYFORM4 DISPLAYFORM5 In the following we analyze the differences between the gradients derived for the two formulations. To do so, we begin with ∇ θ J I T (θ) by inserting the gradient of the inner adaptation step into FORMULA4: DISPLAYFORM0 We can substitute the hessian of the inner objective by its derived expression from FORMULA26 and then rearrange the terms. Also note that ∇ θ log P T (τ |θ) = ∇ θ log π θ (τ) = H−1 t=1 log π θ (a t |s t) where H is the MDP horizon. DISPLAYFORM1 Next, we rearrange the gradient of J II into a similar form as ∇ θ J I T (θ). 
For that, we start by inserting for ∇ θ θ and replacing the expectation over pre-update trajectories τ 1:N by the expectation over a single trajectory τ. DISPLAYFORM2 While the first part of the gradients match ( and ), the second part ( and FORMULA35 ) differs. Since the second gradient term can be viewed as responsible for shifting the pre-update sampling distribution P T (τ |θ) towards higher post-update returns, we refer to it as ∇ θ J pre (τ, τ). To further analyze the difference between ∇ θ J I pre and ∇ θ J II pre we slightly rearrange and put both gradient terms next to each other: DISPLAYFORM3 In the following we interpret and and compare of the derived gradient terms, aiming to provide intuition for the differences between the formulations:The first gradient term J post that matches in both formulations corresponds to a policy gradient step on the post-update policy π θ. Since θ itself is a function of θ, the term I + αR(τ)∇ 2 θ log π θ (τ)) can be seen as linear transformation of the policy gradient update R(τ)∇ θ log π θ (τ) from the post-update parameter θ into θ. Although J post takes into account the functional relationship between θ and θ, it does not take into account the pre-update sampling distribution P T (τ |θ).This is where ∇ θ J pre comes into play: ∇ θ J I pre can be viewed as policy gradient update of the preupdate policy π θ w.r.t. to the post-update return R(τ). Hence this gradient term aims to shift the pre-update sampling distribution so that higher post-update returns are achieved. However, ∇ θ J II pre does not take into account the causal dependence of the post-update policy on the pre-update policy. Thus a change in θ due to ∇ θ J II pre may counteract the change due to ∇ θ J II post. In contrast, ∇ θ J I pre takes the dependence of the the post-update policy on the pre-update sampling distribution into account. Instead of simply weighting the gradients of the pre-update policy ∇ θ log π θ (τ) with R(τ) as in ∇ θ J I post, ∇ θ J I post weights the gradients with inner product of the pre-update and post-update policy gradients. This inner product can be written as DISPLAYFORM4 wherein δ denotes the angle between the the inner and outer pre-update and post-update policy gradients. Hence, ∇ θ J I post steers the pre-update policy towards not only towards larger post-updates returns but also towards larger adaptation steps α∇ θ J inner, and better alignment of pre-and postupdate policy gradients. This directly optimizes for maximal improvement / adaptation for the respective task.; for a comparable analysis in case of domain generalization and supervised meta-learning. Also note that allows formulation I to perform credit assignment on the trajectory level whereas formulation II can only assign credit to entire batches of N pre-update trajectories τ 1:N.As a , we expect the first meta-policy gradient formulation to learn faster and more stably since the respective gradients take the dependence of the pre-update returns on the pre-update sampling distribution into account while this causal link is neglected in the second formulation. When employing formulation I for gradient-based meta-learning, we aim maximize the loss DISPLAYFORM0 by performing a form of gradient-descent on J(θ). Note that we, from now on, assume J:= J I and thus omit the superscript indicating the respective meta-learning formulation. As shown in A.2 the gradient can be derived as DISPLAYFORM1 where DISPLAYFORM2 ] denotes hessian of the inner adaptation objective w.r.t. θ. 
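Written out explicitly, and assuming the one-step VPG inner update stated above, the formulation-I meta-gradient reads as follows (a reconstruction of the displays lost in extraction; notation otherwise as above):

```latex
% One-step VPG inner update (formulation I)
\theta' \;=\; U(\theta,\mathcal{T}) \;=\; \theta + \alpha\,\nabla_\theta J^{\mathcal{T}}_{\text{inner}}(\theta)

% Resulting meta-gradient: the post-update policy gradient, transformed by
% (I + alpha * hessian of the inner objective)
\nabla_\theta J^{\mathrm{I}}(\theta)
  \;=\; \mathbb{E}_{\mathcal{T}\sim\rho(\mathcal{T})}\!\left[
        \left(I + \alpha\,\nabla^2_\theta J^{\mathcal{T}}_{\text{inner}}(\theta)\right)
        \nabla_{\theta'} J^{\mathcal{T}}_{\text{outer}}(\theta')
        \right]
```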
This section concerns the question of how to properly estimate this hessian. Since the expectation over the trajectory distribution P T (τ |θ) is in general intractable, the score function trick is typically used to used to produce a Monte Carlo estimate of the policy gradients. Although the gradient estimate can be directly defined, when using a automatic-differentiation toolbox it is usually more convenient to use an objective function whose gradients correspond to the policy gradient estimate. Due to the Policy Gradient Theorem (PGT) such a "surrogate" objective can be written as: DISPLAYFORM0 While and FORMULA41 are equivalent , the more popular formulation formulation can be seen as forward looking credit assignment while can be interpreted as backward looking credit assignment BID10. A generalized procedure for constructing "surrogate" objectives for arbitrary stochastic computation graphs can be found in Schulman et al. (2015a). Estimating the the hessian of the reinforcement learning objective has been discussed in BID12 and BID4 with focus on second order policy gradient methods. In the infinite horizon MDP case, BID4 derive a decomposition of the hessian. In the following, we extend their finding to the finite horizon case. Proof. As derived in, the hessian of J inner (θ) follows as: DISPLAYFORM0 DISPLAYFORM1 The term in is equal to H 2. We continue by showing that the remaining term in is equivalent to H 1 + H 12 + H 12. For that, we split the inner double sum in into three components: DISPLAYFORM2 DISPLAYFORM3 By changing the backward looking summation over outer products into a forward looking summation of rewards, can be shown to be equal to H 1: DISPLAYFORM4 DISPLAYFORM5 By simply exchanging the summation indices t and h in FORMULA45 it is straightforward to show that FORMULA45 is the transpose of. Hence it is sufficient to show that is equivalent to H 12. However, instead of following the direction of the previous proof we will now start with the definition of H 12 and derive the expression in. DISPLAYFORM6 The gradient of Q π θ t can be expressed recursively: DISPLAYFORM7 By induction, it follows that DISPLAYFORM8 When inserting FORMULA51 into FORMULA48 and swapping the summation, we are able to show that H 12 is equivalent to. DISPLAYFORM9 This concludes the proof that the hessian of the expected sum of rewards under policy π θ and an MDP with finite time horizon H can be decomposed into H 1 + H 2 + H 12 + H 12. As pointed out by BID10 and BID10, simply differentiating through the gradient of surrogate objective J PGT as done in the original MAML version BID9 leads to biased hessian estimates. Specifically, when compared with the unbiased estimate, as derived in FORMULA26 and decomposed in Appendix B.2, both H 1 and H 12 + H 12 are missing. Thus, ∇ θ J pre does not appear in the gradients of the meta-objective (i.e. ∇ θ J = ∇ θ J post). Only performing gradient descent with ∇ θ J post entirely neglects influences of the pre-update sampling distribution. This issue was overseen in the RL-MAML implementation of BID9. As discussed in this leads to poor performance in meta-learning problems that require exploration during the pre-update sampling. Addressing the issue of incorrect higher-order derivatives of monte-carlo estimators, BID10 propose DICE which mainly builds upon an newly introduced MagicBox operator. This operator allows to formulate monte-carlo estimators with correct higher-order derivatives. 
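To make the distinction concrete, a small PyTorch-style sketch of the MagicBox operator and of the resulting DiCE and LVC surrogates is given below; function names are illustrative, and log_probs and rewards denote per-timestep tensors of length H for a single trajectory:

```python
import torch

def magic_box(x):
    # Evaluates to 1 in the forward pass but exposes the gradient of x in the backward pass.
    return torch.exp(x - x.detach())

def j_dice(log_probs, rewards):
    # DiCE: each reward r_t is weighted by the MagicBox of all log-probs up to t,
    # which couples time steps and inflates the variance of higher-order derivatives.
    cum_logp = torch.cumsum(log_probs, dim=0)
    return (magic_box(cum_logp) * rewards).sum()

def j_lvc(log_probs, rewards):
    # LVC: per-timestep "dry" importance weights pi / stop_grad(pi) remove the
    # sequential coupling; returns-to-go weight each step (biased but lower variance).
    returns_to_go = torch.flip(torch.cumsum(torch.flip(rewards, [0]), dim=0), [0])
    ratio = torch.exp(log_probs - log_probs.detach())
    return (ratio * returns_to_go).sum()
```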
A DICE formulation of a policy gradient estimator reads as: DISPLAYFORM0 DISPLAYFORM1 In that, ⊥ denotes a "stop gradient" operator (i.e. DISPLAYFORM2 Note that → denotes a "evaluates to" and does not necessarily imply equality w.r.t. to gradients. Hence, J DICE (θ) evaluates to the sum of rewards at 0th order but produces the unbiased gradients ∇ n θ J DICE (θ) when differentiated n-times (see BID10 for proof). To shed more light on the maverick DICE formulation, we rewrite as follows: DISPLAYFORM3 Interpreting this novel formulation, the MagicBox operator θ ({a t ≤t}) can be understood as "dry" importance sampling weight. At 0th order it evaluates to 1 and leaves the objective function unaffected, but when differentiated once it yields an estimator for the marginal rate of return due to a change in the policy-implied trajectory distribution. In addition to the expected reward J(π) under policy π, we will use the state value function V π, the state-action value function Q π as well as the advantage function A π: DISPLAYFORM4 with a t ∼ π(a t |s t) and s t+1 ∼ p(s t+1 |s t, a t).The expected return under a policyπ can be expressed as the sum of the expected return of another policy π and the expected discounted advantage ofπ over π (see Schulman et al. (2015a) for proof). DISPLAYFORM5 Let d π denote the discounted state visitation frequency: DISPLAYFORM6 We can use d π to express the expectation over trajectories τ ∼ p π (τ) in terms of states and actions: DISPLAYFORM7 Local policy search aims to find a policy update π →π in the proximity of π so that J(π) is maximized. Since J(π) is not affected by the policy update π →π, it is sufficient to maximize the expected advantage underπ. However, the complex dependence of dπ(s) onπ makes it hard to directly maximize the objective in. Using a local approximation of where it is assumed that the state visitation frequencies d π and dπ are identical, the optimization can be phrased as DISPLAYFORM8 In the following we refer toJ(π) as surrogate objective. It can be shown that the surrogate objectivẽ J matches J to first order when π =π (see). If π θ is a parametric and differentiable function with parameter vector θ, this means that for any θ o: DISPLAYFORM9 When π =π, an approximation error of the surrogate objectiveJ w.r.t. to the true objective J is introduced. BID0 derive a lower bound for the true expected return ofπ: DISPLAYFORM10 DISPLAYFORM11 Trust region policy optimization (TPRO) (a) attempts to approximate the bound in by phrasing local policy search as a constrained optimization problem: DISPLAYFORM12 Thereby the KL-constraint δ induces a local trust region around the current policy π θo. A practical implementation of TPRO uses a quadratic approximation of the KL-constraint which leads to the following update rule: DISPLAYFORM13 with g:= ∇ θ E s∼dπ θo (s) DISPLAYFORM14 π θo (a|s) A π θo (s, a) being the gradient of the objective and F = ∇ 2 θD KL [π θo ||π θ] the Fisher information matrix of the current policy π θo. In order to avoid the cubic time complexity that arise when inverting F, the Conjugate Gradient (CG) algorithm is typically used to approximate the Hessian vector product F −1 g. While TPRO is framed as constrained optimization, the theory discussed in Appendix C.1 suggest to optimize the lower bound. 
Based on this insight, propose adding a KL penalty to the objective and solve the following unconstrained optimization problem: DISPLAYFORM0 However, they also show that it is not sufficient to set a fixed penalty coefficient β and propose two alternative methods, known as Proximal Policy Optimization (PPO) that aim towards alleviating this issue:1) Adapting the KL coefficient β so that a desired target KL-divergenceD KL [π θo ||π θ] between the policy before and after the parameter update is achieved 2) Clipping the likelihood ratio so that the optimization has no incentive to move the policy π θ too far away from the original policy π θo. A corresponding optimization objective reads as: DISPLAYFORM1 Empirical show that the latter approach leads to better learning performance .Since PPO objective keeps π θ in proximity of π θo, it allows to perform multiple gradient steps without re-sampling trajectories from the updated policy. This property substantially improves the data-efficiency of PPO over vanilla policy gradient methods which need to re-estimate the gradients after each step. The optimal hyperparameter for each algorithm was determined using parameter sweeps. Table 1 contains the hyperparameter settings used for the different algorithms. Any environment specific modifications are noted in the respective paragraph describing the environment. PointEnv (used in the experiment in 7.3)• Trajectory Length: 100• Num Adapt Steps: 3 In this environment, each task corresponds to one corner of the area. The point mass must reach the goal by applying directional forces. The agent only experiences a reward when within a certain radius of the goal, and the magnitude of the reward is equal to the distance to the goal. HalfCheetahFwdBack, AntFwdBack, WalkerFwdBack, HumanoidFwdBack• Trajectory Length: 100 (HalfCheetah, Ant); 200 (Humanoid, Walker)• Num Adapt Steps: 1The task is chosen between two directions -forward and backward. Each agent must run along the goal direction as far as possible, with reward equal to average velocity minus control costs. • Trajectory Length: 100 (Ant); 200 (Humanoid)• Num Adapt Steps: 1Each task corresponds to a random direction in the XY plane. As above, each agent must learn to run in that direction as far as possible, with reward equal to average velocity minus control costs. • Trajectory Length: 200• Num Adapt Steps: 2In this environment, each task is a location randomly chosen from a circle in the XY plane. The goal is not given to the agent -it must learn to locate, approach, and stop at the target. The agent receives a penalty equal to the distance from the goal. • Trajectory Length: 200• Inner LR: 0.05• Num Adapt Steps: 1The agent must move forward as quickly as it can. Each task is a different randomization of the simulation parameters, including friction, joint mass, and inertia. The agent receives a reward equal to its velocity. Published as a conference paper at ICLR 2019 In addition to the six environments displayed in 2, we ran experiments on the other four continuous control environments described above. The are displayed in 7. In addition to the improved sample complexity and better asymptotic performance, another advantage of ProMP is its computation time. FIG8 shows the average time spent per iteration throughout the learning process in the humanoid environment differences of ProMP, LVC-VPG, and MAML-TRPO. Due to the expensive conjugate gradient steps used in TRPO, MAML takes far longer than either first order method. 
Since ProMP takes multiple stochastic gradient descent steps per iteration, it leads to longer outer update times compared to VPG, but in both cases the update time is a fraction of the time spent sampling from the environment. The difference in sampling time is due to the reset process: resetting the environment when the agent "dies" is an expensive operation. ProMP acquires better performance more quickly, and as a result the agent experiences longer trajectories and the environment is reset less often. In our setup, instances of the environment are run in parallel and performing a reset blocks all environments.
A novel and theoretically grounded meta-reinforcement learning algorithm.
When training a neural network for a desired task, one may prefer to adapt a pretrained network rather than start with a randomly initialized one -- due to lacking enough training data, performing lifelong learning where the system has to learn a new task while being previously trained for other tasks, or wishing to encode priors in the network via preset weights. The most commonly employed approaches for network adaptation are fine-tuning and using the pre-trained network as a fixed feature extractor, among others. In this paper we propose a straightforward alternative: Side-Tuning. Side-tuning adapts a pretrained network by training a lightweight "side" network that is fused with the (unchanged) pretrained network using a simple additive process. This simple method works as well as or better than existing solutions while it resolves some of the basic issues with fine-tuning, fixed features, and several other common baselines. In particular, side-tuning is less prone to overfitting when little training data is available, yields better results than using a fixed feature extractor, and doesn't suffer from catastrophic forgetting in lifelong learning. We demonstrate the performance of side-tuning under a diverse set of scenarios, including lifelong learning (iCIFAR, Taskonomy), reinforcement learning, imitation learning (visual navigation in Habitat), NLP question-answering (SQuAD v2), and single-task transfer learning (Taskonomy), with consistently promising results. The goal of side-tuning is to capitalize on a pretrained model to better learn one or more novel tasks. By design, side-tuning does so without degrading performance of the base model. The framework is straightforward: it assumes access to the frozen base model B: X → Y that maps inputs into some representation space that is shared between the base task and the current (target) task. This representation space is flexible and could either be a latent space (e.g. in R^N) or actual model predictions. Side-tuning then learns a side model S: X → Y, so that the representations for the target task are R(x) = B(x) ⊕ S(x), for some combining operation ⊕. (Fine-tuning adapts too easily and forgets old information; side-tuning is a simple method to address these limitations.) We use a learned alpha-blending, a ⊕ b = αa + (1 − α)b, for this purpose (other options are discussed in Section 3.0.3). Certain pre-set curricula of α reduce the side-tuning framework to fine-tuning, feature extraction, and stage-wise training (see Fig. 3, right). Hence those can be viewed as special cases of the general side-tuning framework. Also, other curricula suggest, for example, a maximum a posteriori estimator that integrates the B(x) prior with the evidence from S(x). Side-tuning is an example of an additive learning approach as it adds (strategically placed) parameters for each new task. Fixed feature extraction would be a simple example of an additive approach with zero new parameters. As a result, fixed features don't adapt the base network over the lifetime of the agent. A number of existing approaches address this by learning new parameters (the number of which scales with the size of the base network) for each new task. Unlike these approaches, side-tuning places no constraints on the structure of the side network, allowing the parameters to be strategically allocated. In particular, side-tuning can use tiny networks when the base requires only minor updates.
By adding fewer parameters per task, side-tuning can learn more tasks before the model grows large enough to require parameter consolidation. These approaches stand in contrast to most existing methods for incremental learning, which do not increase the number of parameters over time and instead gradually fill up the capacity of a large base model. For example, fine-tuning updates all the parameters. A large body of constraint-based methods focus on how to regularize these updates in order to prevent inter-task interference . Side-tuning does not require such regularization since the additive structure means inter-task interference is not possible. We compare side-tuning to alternative approaches on both the iCIFAR and Taskonomy datasets. iCIFAR consists of ten distinct 10-class image classification problems. Taskonomy covers multiple tasks of varied complexity from across computer vision (surface normal and depth estimation, edge detection, image 1000-way classification, etc.). On these datasets, side-tuning uses side networks that are much smaller than the base. Consequently, even without consolidation, side-tuning uses fewer learnable parameters than the alternative methods. This remarkably simple approach deals with the key challenges of incremental learning. Namely, it does not suffer from either: • Catastrophic forgetting: which is the tendency of a network to abruptly lose previously learned knowledge upon learning new information. We show this in Section 4.2.1. • Rigidity: where networks become increasingly unable to adapt to new problems as they accrue constraints from previous problems. We explore this in Section 4.2.2. Side-tuning avoids these problems while remaining highly performant, which we demonstrate in Section 4.2.3. Broadly speaking, network adaptation methods either overwrite existing knowledge (substitutive methods) or save it and add new parameters (additive learning). In incremental (lifelong) learning, substitutive methods like fine-tuning are at risk of forgetting early tasks. To prevent forgetting, existing methods add non-interference constraints that eventually slow down learning or they force tasks to be independent which prevents reusing knowledge. Side-tuning is an additive approach that performs well and scales well, and, by design, does not suffer from the aforementioned problems. We show this experimentally on various tasks and datasets, including iCIFAR (b), Habitat , SQuAD v2 , and Taskonomy . In the remainder of this section we overview sidetuning's connection to related fields. Network Adaptation modifies an existing network to solve a single new task. The most common approach is to update some or all of the network weights (fine-tuning), possibly by adding constraints. Other approaches freeze the weights and modulate the output by learning additional task-specific parameters. An economical approach is to use off-the-shelf-features with one or more readout layers . Other approaches use custom connection schema. instead modulate the output by applying learned weight masks. These approaches, like side-tuning, are examples of additive learning. Incremental learning has the objective of learning a sequence of tasks T 1,..., T m and, at the end of training, performing well on the entire set. The sequential presentation creates two problems for neural networks. The first is catastrophic forgetting and the second is learning speed, which should not slow down as more tasks are added. 
Because of these issues (and scaling), not every network adaptation approach lends itself to incremental learning. Standard incremental learning approaches avoid catastrophic forgetting by imposing constraints how the parameters are updated. relegates each task to approximately orthogonal subspaces.;; add a parameter regularization term per task. Imposing constraints tends to slow down learning on later tasks (intransigence,) while making tasks independent ignores the possibility for useful transfer from relevant previous tasks. Additive methods in general, and side-tuning in particular, have the advantage that they do not suffer from catastrophic forgetting and are capable of transfer. However, additive methods have not been much explored because it is assumed that they either have poor performance or scale poorly. We show that side-tuning has good performance and scaling, and demonstrate the additive advantages experimentally; using the iCIFAR and Taskonomy datasets. Meta-learning seeks to create agents that rapidly adapt to new problems by first training on tasks sampled from a standing distribution of tasks. Side-tuning is fundamentally compatible with this formulation and with existing approaches (e.g.). Moreover, recent work suggests that these approaches work primarily by feature adaptation rather than rapid learning , and feature adaptation is also the motivation for our method. Residual Learning exploits the fact that it is sometimes easier to approximate a difference rather than the original function. This has been successfully used in ResNets and robotics, where residual RL learns a single task by first training a coarse policy (e.g. behavior cloning) and then training a residual network on top (using RL). Additive Learning in Other Literature. Concepts similar to additive learning have been studied in a number of fields. For instance, developing infants are hypothesized to learn separate, discontinuous, and context-dependent perception systems during development . Adults are able to rapidly learn new affordances, but only when those are minor updates to familiar, well-practiced systems . On a more fine-grained scale, there are areas of functional specificity within the brain , including wholly separate pathways where output is mutually conditioned on one another . Side-tuning learns a side model S(X) and combines this with a base model B(x) so that the representations for the target task are computed as R(x) B(x) ⊕ S(x). The base model B(x) provides some core cognition or perception, and we put no restrictions on how B(x) is computed. We never update B(x), and in our approach it has zero learnable parameters. In general B(x) could be nonparametric, and it might not be optimized for any particular task. We consider several choices for B(x) in Section 4.4, but the simplest choice is just a pretrained network. Unlike the base model, the side network S(x) is updated during training; learning a residual that we apply on top of the base encoding. Iteratively learning residuals for a single task is known as gradient boosting (see Section 4.4 for a comparison). Side-tuning is instead focused on learning multiple tasks. One crucial component of the framework is that the complexity of the side network can scale to the difficulty of the problem at hand. When the base is relevant and requires only a minor update, a very small network can suffice. Section 4.4 explores the effect of network size, how that changes with the choice of base and target tasks. 
While the side network can be initialized using a variety of methods, we initialize the side network with a copy of the base network. When the forms of the base and side networks differ, we initialize the side network with weights distilled from the base network using knowledge distillation. We test alternatives in Section 4.4. The final side-tuning representation is a combination, B(x) ⊕ S(x). What should ⊕ be? Side-tuning admits several options for this combination operator. Choosing max yields the subsumption architecture. Concatenation and summation are other viable choices. We observe that alpha blending, a ⊕ b ≜ αa + (1 − α)b, works well in practice. Alpha blending preserves the dimensions of the inputs and is simpler than concatenation. In fact, concatenation followed by a channel-collapsing operation (e.g. a 1x1 convolution) is a strict generalization of alpha-blending. While simple, alpha blending is expressive enough that it encompasses several common transfer learning approaches. As shown in Figure 3, when the side network is the same as the base, side-tuning is equivalent to feature extraction when α = 1. When α = 0, side-tuning is instead equivalent to fine-tuning. If we allow α to vary during training (which we generally do), then switching α from 1 to 0 is equivalent to the common (stage-wise) training curriculum in RL where a policy is trained on top of some fixed features that are unlocked partway through training. Another notable curriculum is α(N) = k/(k + N) for k > 0 (hyperbolic decay). In this curriculum, α controls the weighting of the prior (B(x)) against the learned estimate (S(x)), and the weight of the evidence scales with the amount of data. This curriculum is suggestive of a maximum a posteriori estimate and, like the MAP estimate, it converges to the MLE (fine-tuning, α = 0). Finally, α can be treated as a learnable free parameter that determines how heavily to weight the base model. In practice, the value of α correlates with task relevance (see Section 4.4). When minimizing estimation error there is often a tradeoff between the bias and variance contributions. Choosing between feature extraction and fine-tuning exemplifies this dilemma. Feature extraction (α = 1) locks the weights and corresponds to a point-mass prior that, unless the weights are already optimal, yields a very biased estimator. In fact, the estimator allows no adaptation to new evidence and is asymptotically inconsistent. On the other hand, fine-tuning (α = 0) is an uninformative prior yielding a low-bias, high-variance estimator. With enough data, fine-tuning can produce better estimates, but this usually takes more data than feature extraction. Side-tuning addresses both of these problems. It reduces variance by including the fixed features in the representation, and it is consistent because it allows updating via the residual side network. While α provides a way to control the importance of the prior, another natural approach for enforcing a prior is to penalize deviations from the original feature representation. Typically, it is easier to specify meaningful explicit priors on outputs (e.g. L2 for pixels) than on the latent representations, which can be difficult if not impossible to interpret. As long as the decoder D: Y → A is differentiable, any distance measure on the outputs can be pulled back through the decoder and into the latent space.
This induced distance d_D on the latent representations is called the pullback metric in differential geometry, and in deep learning it is called the perceptual loss. This may be a useful method for knowledge transfer when (i) the previous task is relevant to the new task and (ii) there is limited training data. A recent successful application of this approach would be the auxiliary losses in GPT, though we did not find it effective. Perceptual regularization is often used to dampen catastrophic forgetting. For example, Elastic Weight Consolidation uses a diagonalized second-order Taylor expansion of the expectation of the pullback metric. Learning Without Forgetting uses a decoder-based approach that can be interpreted as jointly updating both the base network and the pullback metric. We show that such regularization does not fully address the problem of catastrophic forgetting (Section 4.2.1). Side-tuning avoids catastrophic forgetting by design (as the base network is never updated). Network adaptability is the sole criterion only if we care solely about raw performance on a single target task. In reality we often care about the performance on both the current and previous tasks. This is the case for incremental learning, where we want an agent that can learn a sequence of tasks T_1, ..., T_m and, at the end, is capable of reasonable performance across the entire set. Thus, catastrophic forgetting becomes a major issue. In our experiments we dedicate one new side network to each new task and train it independently of the earlier side networks (a minimal sketch of this one-side-network-per-task setup is given below). In principle, learning of new tasks can benefit from all the side networks learned in previous tasks (i.e. the nth task can use all n − 1 previous tasks). Since we do not make use of this available information, our results should be considered a lower bound on side-tuning performance. We show that this simple approach provides a strong baseline for incremental learning, outperforming existing approaches in the literature while using fewer parameters on more tasks (in Section 4.2). Side-tuning takes an additive learning approach to incremental learning, which means that already-learned components are never updated and performance across the whole set can only increase as the agent sees more tasks. This monotonicity is the key property of the additive family of algorithms. It is worth repeating that there is No Catastrophic Forgetting in Additive Learning, and a typical learning curve for one of the tasks is shown in Figure 4. Furthermore, because our implementation of side-tuning treats problems independently of their order in the sequence (always using the fixed base and one side network), side-tuning incurs no rigidity during training. We show this in Section 4.2.2. Side-tuning naturally handles other continuous learning scenarios besides incremental learning. A related problem is that of continuous adaptation, where the agent needs to perform well (e.g. minimizing regret) on a stream of tasks with undefined boundaries and where there might be very little data per task and no task repeats. As we show in Section 4.2, inflexibility becomes a serious problem for constraint-based methods and task-specific performance declines after learning more than a handful of tasks. Moreover, continuous adaptation requires an online method, as task boundaries must be detected and data cannot be replayed (e.g. to generate constraints for EWC).
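Extending the earlier sketch to this incremental setting, the bookkeeping is just one frozen base shared across tasks plus one small side network and blending weight per task, so parameters learned for earlier tasks are never touched. As before, the names and the factory pattern are illustrative assumptions, not the released code.

import torch
import torch.nn as nn

class IncrementalSideTuner(nn.Module):
    """One frozen base shared across tasks; one small side network (and alpha) per task."""
    def __init__(self, base: nn.Module, make_side):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # the base is never updated
        self.make_side = make_side             # factory that builds a fresh side network
        self.sides = nn.ModuleDict()           # task name -> side network
        self.alphas = nn.ParameterDict()       # task name -> blending logit

    def add_task(self, task: str):
        self.sides[task] = self.make_side()
        self.alphas[task] = nn.Parameter(torch.zeros(()))   # sigmoid(0) = 0.5

    def forward(self, x, task: str):
        alpha = torch.sigmoid(self.alphas[task])
        with torch.no_grad():
            b = self.base(x)
        return alpha * b + (1.0 - alpha) * self.sides[task](x)

# Usage sketch (hypothetical names): when task "normals" arrives, optimize only its own
# side network and alpha, leaving the base and all earlier side networks untouched.
# tuner = IncrementalSideTuner(base=pretrained_encoder, make_side=lambda: SmallConvNet())
# tuner.add_task("normals")
# params = list(tuner.sides["normals"].parameters()) + [tuner.alphas["normals"]]
# optimizer = torch.optim.Adam(params, lr=1e-4)

Because nothing trained earlier is ever revisited, losses on earlier tasks cannot degrade; the price is that later tasks do not reuse earlier side networks, which is why the paper treats its incremental-learning numbers as a lower bound.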
Side-tuning could be applied to continuous adaptation by keeping a small working memory of cheap side networks that constantly adapt the base network to the input task. These side networks are small, easy to train, and when one of the networks begins performing poorly (e.g. signaling a distribution shift) that network can simply be discarded. This is an online approach, and online adaptation with small, cheap networks has found recent success elsewhere. In the first section we show that side-tuning compares favorably to existing incremental learning approaches on both iCIFAR and the more challenging Taskonomy dataset. We then extend to multiple domains (computer vision, RL, imitation learning, NLP) in the simplified (transfer learning) scenario for N = 2 tasks. Finally, we interpret side-tuning in a series of analysis experiments. We provide comparisons of side-tuning against the following methods. Scratch: The network is given a good random initialization and then trained normally. Features: The base network is used as-is and is not updated during training. Fine-tuning: An umbrella term that encompasses a variety of techniques; we consider a more narrow definition where pretrained weights are used as initialization and then training proceeds as in scratch. Elastic Weight Consolidation (EWC): A constraint-based incremental learning approach from. We use the formulation from, which scales better, giving an advantage to EWC since otherwise we could use a larger side-tune network and maintain parameter parity. Parameter Superposition (PSP): A parameter-masking approach from. Progressive Neural Network (PNN): A network adaptation approach from. Independent: Each task uses a network trained independently for the target task. This method uses far more learnable parameters than all the alternatives (e.g. saving a separate ResNet-50 for each task) and achieves very strong performance. Due to this scaling, it is generally not considered an incremental learning method. (Figure caption: incremental learning experiments for three tasks on Taskonomy (left) and the iCIFAR dataset (right). The fact that side-tuning losses are flat after training (as we go right) shows that it does not forget previously learned tasks. Similarly, the performance remains consistent even on later tasks (as we go down), showing that side-tuning does not become rigid. Alternative methods clearly forget (e.g. PSP) and/or become rigid (e.g. EWC). In Taskonomy, PNN and Independent are hidden under Sidetune. In iCIFAR, Sidetune (A) merges base and side information with a multilayer perceptron (adapter).) On both the Taskonomy dataset and incremental CIFAR (iCIFAR), side-tuning outperforms existing incremental learning approaches while using fewer parameters. Moreover, the performance gap is larger on more challenging datasets. Taskonomy includes labels for multiple computer vision tasks including 2D (e.g. edge detection), 3D (e.g. surface normal estimation), and semantic (e.g. object classification) tasks. We first selected the twelve tasks that make predictions from a single RGB image, and then created an incremental learning setup by selecting a random order in which to learn these tasks (starting with curvature). As images are 256x256, we use a ResNet-50 for the base network and a 5-layer convolutional network for the side-tuning side network. The number of learnable network parameters used across all tasks is 24.6M for EWC and PSP, and 11.0M for side-tuning. iCIFAR. First, we pretrain the base network (ResNet-44) on CIFAR-10.
Then the 10 subsequent tasks are formed by partitioning CIFAR-100 classes into 10 disjoint sets of 10 classes each. We train on each subtask for 20k steps before moving to the next one. Our state-of-the-art substitutive baselines (EWC and PSP) update the base network for each task (683K parameters), while side-tuning updates a four-layer convolutional network per task (259K parameters after 10 tasks). As expected, there is no catastrophic forgetting in side-tuning. Figure 6 shows that the error for side-tuning does not increase after training (blue shaded region), while it increases sharply for the other methods on both Taskonomy and iCIFAR. The difference is meaningful, and Figure 5 shows sample predictions from side-tuning vs. EWC for a few tasks during and after training. As is evident from the bottom rows, EWC exhibits catastrophic forgetting on all tasks (worse image quality as we move right). In contrast, side-tuning (top) shows no forgetting and the final predictions are significantly closer to the ground truth (boxed red). Side-tuning learns later tasks as easily as the first, while constraint-based methods such as EWC stagnate. The predictions for later tasks such as surface normals (in Figure 5) are significantly better using side-tuning, even immediately after training and before any forgetting can occur. Figure 7: Rigidity and average rank on Taskonomy and iCIFAR. From left: Side-tuning always learns new tasks easily; EWC becomes increasingly unable to learn new tasks as training progresses. Center: The same trend holds on iCIFAR, and the average rigidity is zero for side-tuning (and almost zero for PSP). Right: Side-tuning outperforms alternatives on both datasets, achieving a significantly better average rank on all tasks. Figure 7 quantifies this slowdown. We measure rigidity as the log-ratio of the actual loss of the ith task over the loss when that task is instead trained first in the sequence (a small helper computing this metric is sketched below). As expected, side-tuning experiences zero slowdown on both datasets. For EWC, the increasing constraints make learning new tasks increasingly difficult, and the log-ratio increases with the number of tasks (Taskonomy, left). It is too rigid (log-ratio > 0) even in iCIFAR, where the later tasks are similar to earlier ones. Overall, side-tuning significantly outperforms the other methods while using fewer than half the number of trainable parameters of the other methods. When the other methods use smaller networks, their performance decreases further. On both iCIFAR and Taskonomy, side-tuning achieves the best average rank (1.12 of 4 on Taskonomy, while the next best is 2.33 (PSP)). This is a direct result of the fact (shown above) that side-tuning does not suffer from catastrophic forgetting or rigidity. Nor is it because the side-tuning structure is specially designed for these types of image tasks; it is not (we show in Sec. 4.3 that it performs well on other domains). In fact, the much larger networks used in EWC and PSP should achieve better performance on any single task. For example, EWC produces sharper images early on in training, before it has had a chance to accumulate too many constraints (e.g. reshading in Fig. 5). But this factor was outweighed by side-tuning's immunity from the effects of catastrophic forgetting and creeping rigidity. In order to address the possibility that side-tuning is somehow domain- or task-specific, we provide results showing that it is well-behaved in other settings.
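For concreteness, the rigidity metric just described is a one-line computation; the helper below is an illustrative sketch rather than the authors' evaluation code.

import math

def rigidity(loss_at_position_i: float, loss_when_trained_first: float) -> float:
    """Log-ratio of the loss a task attains at position i in the sequence to the loss
    it attains when trained first. 0 means no slowdown; values > 0 indicate rigidity."""
    return math.log(loss_at_position_i / loss_when_trained_first)

# Example: a task that reaches loss 0.12 when trained first but only 0.18 when trained
# fifth in the sequence has rigidity log(0.18 / 0.12) ≈ 0.41.
print(rigidity(0.18, 0.12))

For side-tuning this quantity is zero by construction, since each task is trained with the same fixed base and a fresh side network regardless of its position in the sequence.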
As the concern with additive learning is mainly that it is too inflexible to learn new tasks, we compare with fine-tuning (which outperforms other lifelong learning methods when forgetting is not an issue). For extremely limited amounts of data, feature extraction can outperform fine-tuning. We show that side-tuning generally performs as well as features or fine-tuning, whichever is better. We trained networks to perform one of three target tasks (object classification, surface normal estimation, and curvature estimation) on the Taskonomy dataset and varied the size of the training set N ∈ {100, ..., 4 × 10^6}. In each scenario, the base network was trained (from scratch) to predict one of the non-target tasks. The side network was a copy of the original base network. We experimented with a version of fine-tuning that updated both the base and side networks; the results were similar to standard fine-tuning. In all scenarios, side-tuning successfully matched the adaptiveness of fine-tuning, and significantly outperformed learning from scratch, as shown in Figure 4.3. The additional structure of the frozen base did not constrain performance with large amounts of data (4M images), and side-tuning performed as well as (and sometimes slightly better than) fine-tuning. We also evaluated side-tuning on a question-answering task (SQuAD v2) using a non-convolutional architecture. We use a pretrained BERT model for our base, and a second one for the side network. Unlike in the previous experiments, BERT uses attention and no convolutions. Still, side-tuning adapts to the new task just as well as fine-tuning, outperforming features and scratch (Figure 4.3). We trained an agent to navigate to a target coordinate in the Habitat environment. The agent is provided with both an RGB input image and an occupancy map of previous locations. The map does not contain any information about the environment, just previous locations. In this section we use Behavior Cloning to train an agent to imitate experts following the shortest path on 49k trajectories in 72 buildings. The agents are evaluated in 14 held-out validation buildings. Depending on what the base network was trained on, the source task might be useful (Curvature) or harmful (Denoising) for imitating the expert, and this determines whether features or learning from scratch performs best. Figure 4.3 shows that regardless of which approach worked best, side-tuning consistently matched or beat it. Reinforcement Learning for Navigation in Habitat. Using a different learning algorithm (PPO) and using direct interaction instead of expert trajectories, we observe identical trends. We trained agents directly in Habitat (74 buildings). Figure 4.3 shows performance in 16 held-out buildings after 10M frames of training. Side-tuning performs comparably to the max of competing approaches. Task relevance predicts alpha α. In our experiments, we treat α as a learnable parameter and find that the relative values of α are predictive of empirical performance. In imitation learning (Fig. 4.3), curvature (α = 0.557) outperformed denoising (α = 0.252). In Taskonomy, the α values from training on just 100 images predicted the actual transfer performance to normals (e.g. curvature (α = 0.56) outperformed object classification (α = 0.50)). For small datasets, α ≈ 0.5 is typical, and it is the relative order, rather than the actual value, that is important; a small readout sketch based on the earlier module follows below.
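Building on the SideTune sketch from earlier (and assuming one such model has been trained per candidate source task, which is our illustrative setup rather than the authors' exact protocol), reading out the learned blending weight gives a cheap task-relevance signal.

import torch

def task_relevance(models: dict) -> dict:
    """Map each candidate source task to its learned blend weight alpha.
    Larger alpha means the frozen base was weighted more heavily, which the paper
    observes correlates with how relevant the source task is to the target task."""
    return {name: torch.sigmoid(m.alpha_logit).item() for name, m in models.items()}

# Hypothetical usage: rank source tasks by learned alpha after a short training run.
# relevance = task_relevance({"curvature": model_curv, "denoising": model_denoise})
# ranked = sorted(relevance, key=relevance.get, reverse=True)

As the text notes, it is the ordering of these values across source tasks, not their absolute magnitude, that is informative when training data is scarce.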
We showed in the previous section that side-tuning performs like the best of {features, fine-tuning, scratch} in domains with abundant or scant data. In order to test whether side-tuning could profitably synthesize the features with intermediate amounts of data, we evaluated each approach's ability to learn to navigate using 49, 490, 4900, or 49k expert trajectories and pretrained denoising features. Side-tuning was always the best-performing approach and, on intermediate amounts of data (e.g. 4.9k trajectories), outperformed the other techniques (side-tune: 9.3 vs. fine-tune: 7.5, features: 6.7, scratch: 6.6; Figure 9b). Network size. Does network size matter? We find that (i) if the target problem benefits from a large network (e.g. classification tasks), then performance is sensitive to the side network size but not to the size of the base; and (ii) the base network can usually be distilled to a smaller network and side-tuning will still offer advantages over alternatives. In the supplementary material we provide supporting experiments from Taskonomy using both high- and low-data settings (curvature → {obj. class, normals}, obj. class → normals), and in Habitat (RL using {curvature, denoise} → navigation). Not Boosting. Since the side network learns a residual on top of the base network, we ask what benefits could be gleaned by extending side-tuning to do boosting. Although network boosting does improve performance on iCIFAR (Figure 9a), if catastrophic forgetting is not a concern then the parameters would've been better used in a deeper network rather than many shallow networks. Initialization. A good side network initialization can yield a minor boost in performance. We found that initializing from the base network slightly outperforms a low-energy initialization, which slightly outperforms Xavier initialization. However, we found that these differences were not statistically significant across tasks (H_0: pretrained = Xavier; p = 0.07, Wilcoxon signed-rank test). We suspect that initialization might be more important on harder problems. We test this by repeating the analysis without the simple texture-based tasks (2D keypoint + edge detection and autoencoding) and find the difference in initialization is now significant (p = 0.01). More than just stable updates. In RL, fine-tuning often fails to improve performance. One common rationalization is that the early updates in RL are 'high variance'. The usual solution is to first train using fixed features and then unfreeze the weights at some point in training (via a hyperparameter to be set). We found that this stage-wise approach performs as well as (but no better than) keeping the features fixed, and side-tuning performed as well as both while being simpler than stage-wise (Fig. 9c). We tested the 'high-variance update' theory by fine-tuning with both gradient clipping and an optimizer designed to prevent such high-variance updates by adaptively warming up the learning rate. This provided no benefits over vanilla fine-tuning, suggesting that the benefits of side-tuning are not solely due to gradient stabilization early in training. We have introduced the side-tuning framework, a simple yet effective approach for additive learning. Since it does not suffer from catastrophic forgetting or rigidity, it is naturally suited to incremental learning. The theoretical advantages are reflected in empirical results, and side-tuning outperforms existing approaches in challenging contexts and with state-of-the-art neural networks.
We further demonstrated that the approach is effective in multiple domains and with various network types. The naïve approach to incremental learning used in this paper made a number of design decisions. These decisions could be analyzed and subsequently relaxed. In particular: Flexible parameterizations for side networks: Our incremental learning experiments used the same side network architecture for all subtasks. A method for automatically adapting the networks to the subtask at hand could make more efficient use of the computation and supervision. Better forward transfer: Our experiments used only a single base and single side network. Leveraging the previously trained side networks could yield better performance on later tasks. Learning when to deploy side networks: Like most incremental learning setups, we assume that the tasks are presented in a sequence and that task identities are known. Using several active side networks in tandem would provide a natural way of detecting distribution shift. Using side-tuning to measure task relevance: We noted that α tracked task relevance, but a more rigorous treatment of the interaction between the base network, side network, α, and final performance could yield insight into how tasks relate to one another. We test the effect of base model architecture on performance and find that the small five-layer convolutional network performs comparably to the ResNet-50 when using features. Rectified Adam is a method introduced to deal with destructive high-variance updates at the beginning of training. We tried using this for RL but found no improvements (shown in Figure 15). We ablate over different quantities of expert trajectories. We observe that when data is scarce, feature extraction is a powerful choice, whereas when data is plentiful, fine-tuning performs well. In both scenarios, side-tuning is able to perform as well as the stronger approach. In domains with very few examples, we found that side-tuning is unable to match the performance of other methods. We evaluated our setup on vision transfer with 5 images from the same building, and on imitation learning given 5 expert trajectories. Taskonomy. Our data is 4M images on 12 single-image tasks. The tasks that we use are the following: curvature, semantic segmentation, reshading, keypoints3d, keypoints2d, texture edges, occlusion edges, distance, depth, surface normals, object classification, and autoencoding. The tasks were chosen in no particular order. Our base model and side model are ResNet-50s. We pretrain on curvature. Then, we train each task for three epochs before moving on to the next task. We use cross-entropy loss for classification tasks (semantic segmentation and object classification), L2 loss for curvature, and L1 loss for the other tasks. We use the Adam optimizer with an initial learning rate of 1e-4, weight decay coefficient of 2e-6, gradient clipping to 1.0, and batch size of 32. We evaluate our performance on a held-out set of images, both immediately after training a specific task and after training on all the tasks is complete. iCIFAR. We start by pretraining a model on CIFAR-10 (from https://github.com/akamaster/pytorch_resnet_cifar10). Then we partition CIFAR-100 into 10 distinct sets of 10 classes. Then, we train for 4 epochs on these tasks using the Adam optimizer, a learning rate of 1e-3, and a batch size of 128. We train and test on the question answering dataset SQuAD2.0, a reading comprehension dataset consisting of 100,000 questions with 50,000 unanswerable questions.
Both our base encoding and our side network are BERT transformers pretrained on a larger corpus. Fine-tuning trains a single BERT transformer. We use the training setup found at https://github.com/huggingface/pytorch-transformers (train for 2 epochs at a learning rate of 3e-5) with one caveat: we use an effective batch size of 3 (vs. their 24). We borrow the experimental setup from work to be published in October 2019: We use the Habitat environment with the Gibson dataset. The dataset virtualizes 572 actual buildings, reproducing the intrinsic visual and semantic complexity of real-world scenes. We train and test our agents in two disjoint sets of buildings. During testing we use buildings that are different and completely unseen during training. We use up to 72 buildings for training and 14 test buildings for testing. The train and test spaces comprise 15678.4 m² and 1752.4 m² (square meters), respectively. The agent must direct itself to a given nonvisual target destination (specified using coordinates), avoiding obstacles and walls as it navigates to the target. The maximum episode length is 500 timesteps, and the target distance is between 1.4 and 15 meters from the start. This setup is shared between imitation learning and RL, which differ in the data, architecture, and optimization process. Imitation Learning. We collect 49,325 shortest-path expert trajectories in Habitat, comprising 2,813,750 state-action pairs. We learn a neural network mapping from states to actions. Our base encoding is a ResNet-50 and the side network is a five-layer convolutional network. The representation output is then fed into a neural network policy. We train the model for 10 epochs using cross-entropy loss and Adam at an initial learning rate of 2e-4 and a weight decay coefficient of 3.8e-7. We initialize alpha to 0.5. Fine-tuning uses the same model architecture but updates all the weights. Feature extraction only uses the ResNet-50 to collect features. RL. Similarly, we borrow the RL setup from the same work. In all experiments we use the common Proximal Policy Optimization (PPO) algorithm with Generalized Advantage Estimation. Due to the computational load of rendering perceptually realistic images in Gibson we are only able to use a single rollout worker, and we therefore decorrelate our batches using experience replay and an off-policy variant of PPO. The formulation is similar to Actor-Critic with Experience Replay (ACER) in that full trajectories are sampled from the replay buffer and reweighted using the first-order approximation for importance sampling. During training, the agent receives a large one-time reward for reaching the goal, a positive reward proportional to Euclidean distance toward the goal, and a small negative reward each timestep. Due to this paradigm's compute and memory constraints, it would be difficult for us to use large architectures in this setting. Thus, our base encoding is a five-layer convolutional network distilled from the trained ResNet-50. Our side network is also a five-layer convolutional network. Fine-tuning is handled the same way: all the weights are updated in this setup. Feature extraction uses the five-layer network to collect features. Low-energy initialization. In classical teacher-student distillation, the student is trained to minimize the distance between its output and the teacher's output.
In this setting, we minimize the distance between the teacher's output and the summation of the student's output and the teacher's output. The output space may have a different geometry than that of the input space. A.5 Additional analysis. We provide alternative perspectives and additional insights for our lifelong learning tasks. iCIFAR. In Fig. 7 (right), we see that the average rank of side-tuning is higher than that of PNN. We find that side-tuning can bridge this gap with a multilayer perceptron (adapter) to merge the base and side networks. This is a common practice in PNN. In Fig. 20, we see that with the adapter network, the two methods are very similar when measuring classification error. Taskonomy. In Fig. 7 (right), we found that the ranking of our method is better than all other methods, including PNN. By altering the connections in the PNN, we found an alternative (PNN3) that has comparable performance to side-tuning. In Fig. 21, we show all the losses normalized by the single-task (independent) loss. Quantitatively, our method outperforms all other methods and closely matches PNN. We show qualitatively in Figures 22, 23, and 24 that these methods are comparable in performance. An alternative perspective views these methods as various fusions between some base information and new side information. In this framework, side-tuning is a late-fusion approach whereas PNN is a distributed-fusion approach. In Fig. 25, we compare various fusion methods on iCIFAR and find that late fusion performs better than early fusion and comparably to, if not better than, distributed fusion. We run this analysis on Taskonomy as well; while the loss values differ somewhat, we find that the qualitative results seen in Figures 26, 27, and 28 are rather similar. Thus, we conclude that the methods do not vary much. Distributed and early fusion require knowledge about the structure of how the information is computed. Late fusion is agnostic to this and can treat each information column as a black box, which is useful when the base information is not a neural network and is perhaps non-parametric. In Fig. A.5.2, we show that side-tuning can effectively use ground-truth curvature as a base for lifelong learning, whereas all the methods we compare against cannot use this information. Specifically, we downsample the curvature image and stack it into the same dimensions as the side output. Side-tuning with ground-truth curvature achieves a better rank on the Taskonomy dataset than all other methods and comparable performance. Figure 29: Side-tuning can be used successfully even with black-box side information. When the base information comes from a black-box process for which we have no other information, side-tuning can still be used (and performance improves vis-à-vis not using the inputs, and vs. using inputs generated from a neural network). Existing lifelong learning approaches have no standard way to make use of this type of information.
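To illustrate the black-box point just made: in the fusion sketched earlier, the base does not have to be a neural network at all; any callable that produces a tensor of the right shape can be blended with a learned side network. The resizing and channel handling below are illustrative assumptions (the text only states that the ground-truth curvature image is downsampled and stacked to match the side output).

import torch
import torch.nn.functional as F

def curvature_base(gt_curvature: torch.Tensor, out_hw: int, out_channels: int) -> torch.Tensor:
    """A non-parametric 'base': downsample a ground-truth curvature image (N, C, H, W)
    and repeat it across channels so it matches the side network's output shape.
    Nothing here is learned, so there is nothing to forget."""
    down = F.interpolate(gt_curvature, size=(out_hw, out_hw), mode="bilinear", align_corners=False)
    reps = -(-out_channels // down.shape[1])        # ceil division
    return down.repeat(1, reps, 1, 1)[:, :out_channels]

# The blended representation is then alpha * curvature_base(...) + (1 - alpha) * side(x),
# exactly as in the earlier sketches, with only the side network and alpha being trained.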
Side-tuning adapts a pre-trained network by training a lightweight "side" network that is fused with the (unchanged) pre-trained network using a simple additive process.
746
scitldr
In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data. Following recent advancements in deep learning BID28 BID12 BID30, more and more people and companies are interested in putting their data in use as they see that machine learning is able to generate a wide range of benefits, including financial, social, medical, security, and so on. At the same time, however, such models are often able to capture a fine level of detail in training data potentially compromising privacy of individuals who's features sharply differ from others. This problem is partially mitigated by the use of regularisation techniques that "smooth out" outstanding details and avoid overfitting, but it does not give any theoretical privacy guarantees. Recent research by BID8 suggests that even without access to internal model parameters, by using hill climbing on output probabilities of a neural network, it is possible to recover (up to a certain degree) individual faces from a training set. The latter is especially disturbing knowing that deep learning models are becoming an integral part of our lives, making its way to phones, smart watches, cars, and appliances. And since these models are often trained on customers data, such training set recovery techniques will endanger privacy even without access to the manufacturer's servers where these models are being trained. In order to protect privacy while still benefiting from the use of statistics and machine learning, a number of techniques for data anonymisation has been developed over the years, including kanonymity BID29, l-diversity BID18, t-closeness BID17, and differential privacy BID2 BID3 BID7. The latter has been recognised as a strong standard and is widely accepted by the research community. We study the task of publishing datasets in a differentially private manner. In particular, we are interested in solving two problems. First, we want to be able to benefit from the use of machine learning by third parties while protecting sensitive information of individuals in our dataset. Second, we want to be sure that even if adversaries get access to the third-party model trained on our data, they would not be able to recover private information. An additional challenge is to be able to publish an entire dataset, as opposed to being required to use a query interface like in a typical differentially private framework. In this paper, we propose a simple solution to this problem. The main idea of our approach is to use generative adversarial networks (GANs) introduced in BID9, trained with addition of Gaussian noise in the embedding space, to create artificial datasets that follow the same distribution as the real data while providing differential privacy guarantees. This method has a number of advantages over the methods proposed earlier. First of all, this solution is simple to implement, e.g. it does not require training ensembles of models on disjoint data. 
Second, it can be done on a user side, and not on the side of the machine learning service provider, which eliminates the necessity of trusting this service provider or implementing privacy-preserving models locally. Third, similarly to, privacy cannot be compromised even if the entire trained model is accessible to an adversary. Our contributions in this paper are the following:• we propose a novel mechanism for non-interactive differentially private data release, and to the best of our knowledge this is the first practical solution for complex real-world data; • we introduce a new technique of preserving privacy in neural networks via adding noise in the forward pass during training; • we show that this technique guarantees differential privacy for both the outputs and the learned weights of the network; • we demonstrate that we are able to achieve high accuracy in learning tasks while maintaining a reasonable (single-digit) privacy budget. The remainder of the paper is structured as follows. In Section 2, we give an overview of related work. Section 3 contains necessary on differential privacy and generative adversarial networks. In Section 4, we describe our approach and provide its theoretical analysis and some practical aspects. Experimental and implementation details are presented in Section 5, and Section 6 concludes the paper. The theorem proofs and additional details can be found in the Appendix. Given the level of attention to deep learning and the rising importance of privacy, it is unsurprising that there has been a significant increase in the number of publications on the topic of privacypreserving deep learning (and machine learning in general) in recent years. One take on the problem is to distribute training and use disjoint sets of training data. An example of such approach is the paper of BID27, where they propose to train in a distributed manner by communicating sanitised updates from participants to a central authority. Such a method, however, yields high privacy losses as pointed out by and BID23. An alternative technique, also using disjoint training sets, suggested by BID23, applies an ensemble of independently trained teacher models and semi-supervised knowledge transfer to a student model to achieve almost state-of-the-art (non-private) accuracy on MNIST BID16 and SVHN BID20 with single-digit differential privacy bounds. This work was based on a paper by BID11 and extends their method to generic learning models with any type of loss functions or optimisation algorithms. To the best of our knowledge, this is the most accurate privacy-preserving learning to date, although one has to make sure that all the teaching ensemble and the aggregator are inaccessible to an adversary and the model is queried for teachers' votes only a small number of times. A somewhat different approach is taken in. They suggest using differentially private stochastic gradient descent (for brevity, we will refer to it as DP-SGD in the remainder of the paper) to train deep learning models in a private manner. This approach allows to achieve high accuracy while maintaining low differential privacy bounds, and does not require distributed training. As stated above, our goal is to enable data usage by third party machine learning service providers to benefit from their expertise. All of the aforementioned methods, however, require every provider of such service to comply with the chosen privacy-preserving procedure which is not realistic. 
An alternative solution to this problem is to focus on sanitising data and making sure that training machine learning models on it would not compromise privacy. This direction is taken, for example, by BID1. The authors use a graphical probabilistic model to learn an underlying data distribution and transform real data points (seeds) into synthetic data points. Synthetic data is then filtered by a privacy test based on a plausible deniability criterion, which can be equivalent to differential privacy under certain conditions. Our approach, on the other hand, is to generate private data without requiring any real seeds. Thus, there is no need for privacy tests at the release stage, and the only requirement is that the generative model is privacy-preserving. By using GANs BID9 we ensure that our method is scalable and applicable to complex real-world data. This section gives a short introduction to GANs and differential privacy. Another important notion is the moments accountant method used to compute actual privacy bounds during training. However, since it is not essential for understanding the paper, we defer its description to the Appendix. In recent years, generative adversarial networks BID9 BID26 and their extensions, such as DCGAN BID24 and EBGAN BID31, have received great attention and pushed the boundaries for deep generative models along with variational autoencoders (VAEs) BID15 BID25 BID10 and recursive neural networks (e.g. PixelRNN by Oord et al.). The most successful application for such generative models so far has been realistic image generation, perhaps due to the abundance of training data and inherent geometric structure. In our work, we decided to choose GANs for several reasons. Firstly, GANs have shown very good results in practice, generating sharper images compared to other generative models. Secondly, the forward pass for generating data is much faster than that of, for instance, RNNs. Thirdly, the generator part of the model, the one we are eventually interested in, does not interact with real training data at any point in the learning process, only getting gradients from the discriminator. In short, GANs can be described as follows. The model consists of two separate components: the generator G(z) and the discriminator D(x). The generator's goal is to produce realistic samples of data based on a random variable z ∼ p_z(z), while the discriminator is tasked with distinguishing real data samples x ∼ p_data(x) from generated samples x̃ ∼ p_g(x). These two models are trained in an adversarial fashion, essentially playing a two-player game, with the goal to converge to the Nash equilibrium. Since training GANs in practice can be challenging, there are a number of commonly used tricks to improve convergence, such as using the Adam optimisation method BID14, feature matching, batch normalisation, and one-sided label smoothing BID26. We also observe improvements with adding labels to the discriminator BID21 and unrolling discriminator updates BID19. The notion of differential privacy has been introduced and extended in a series of papers by Dwork et al. BID2 BID3 BID7, and is regarded as a strong privacy standard. It is defined for two adjacent datasets that differ by a single element: a randomised mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent datasets d, d' ∈ D and for any subset of outputs S ⊆ R it holds that Pr[M(d) ∈ S] ≤ e^ε Pr[M(d') ∈ S] + δ. Among the mechanisms to achieve differential privacy, two of the most widely used are the Laplacian and Gaussian noise mechanisms.
We are primarily interested in the latter, because of the improved privacy bounds analysis provided by the moments accountant method described in the Appendix. The Gaussian noise mechanism is defined as follows: M(d) = f(d) + N(0, s_f^2 σ^2), where s_f is the sensitivity of f (i.e. s_f = max over adjacent d, d' of ||f(d) − f(d')||_2) and N(0, s_f^2 σ^2) is the Gaussian distribution with mean 0 and standard deviation s_f σ. (FIG1 caption: Sensitive data X is fed into a discriminator D with a privacy-preserving layer (dashed line). This discriminator is used to train a differentially private generator G to produce a private artificial dataset X̃.) In this section, we describe our solution and provide a theoretical proof of privacy guarantees, as well as discuss limitations of the method. Let us begin with the formal problem statement. Problem Statement. Given the dataset X ∼ p_data(x), generate an artificial dataset X̃ = M(X) using the privacy mechanism M: 𝒳 → 𝒳, such that (1) it follows the same data distribution: X̃ ∼ p_data(x); and (2) it provides differential privacy guarantees: Pr[M(X) ∈ S] ≤ e^ε Pr[M(X') ∈ S] + δ for any adjacent datasets X, X', and for any S ⊆ 𝒳. Here 𝒳 = {X | X ∼ p_data(x)} is the space of all datasets formed by points drawn from the same distribution p_data(x). In most real-world problems, the true data distribution p_data(x) is unknown and needs to be estimated empirically. Since we are primarily interested in data synthesis, we will turn to generative models, and in particular we are going to use GANs as the mechanism to estimate p_data(x) and draw samples from it. If trained properly, a GAN will provide a solution to sub-problem (1). Despite the fact that the generator does not have access to the real data X in the training process, one cannot guarantee differential privacy because of the information passed through with the gradients from the discriminator. A simple high-level example will illustrate such a breach of privacy. Let the datasets X, X' contain small real numbers. The only difference between these two datasets is the number x ∈ X, which happens to be extremely large. Since the gradients of the model depend on x, one of the updates of the discriminator trained on X may be very different from the rest, and this difference will then be propagated to the generator, breaking privacy in the general case. In order to maintain differential privacy guarantees, we propose the following solution. Proposition. Introduce a Gaussian noise layer in the discriminator network of the GAN, so that its output, and therefore the weights of the trained generator, are differentially private with respect to the input data X. Use this generator to create a publishable differentially private dataset. The components of our solution are depicted in FIG1. To validate the proposed solution, we first analyse it theoretically and show that the addition of a Gaussian noise layer in the discriminator network yields differential privacy in the generator. We will take the following steps to do that: 1. analyse privacy of the output of the noise layer w.r.t. the inputs X and X'; 2. determine privacy bounds on the output of the whole network; 3. show that the same bounds hold for gradient updates. Let us start by describing the setting and notation used in the remainder of the section. We are given two adjacent datasets (X, y) and (X', y') and a deterministic feed-forward neural network N with a Gaussian noise layer π. We denote the inputs of the layer π as x_π and x'_π, and the outputs of the final layer of the network ŷ = N(X) and ŷ' = N(X') correspondingly. A minimal sketch of such a noise layer is given below; the formal guarantees follow.
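Here is a minimal PyTorch-style sketch of the kind of privacy-preserving layer the Proposition describes: per-example activations are clipped to bound the sensitivity C, and Gaussian noise calibrated to (ε, δ) is added. Module and helper names are illustrative assumptions, not the authors' released code; in the experiments the layer is inserted after an intermediate convolution of the DCGAN discriminator.

import math
import torch
import torch.nn as nn

def gaussian_noise_std(clip_norm: float, eps: float, delta: float) -> float:
    # sigma >= C * sqrt(2 * log(1.25 / delta)) / eps for the Gaussian mechanism,
    # where C bounds the L2 norm of each example's activation (enforced by clipping).
    return clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

class GaussianNoiseLayer(nn.Module):
    """Clip each example's activation to L2 norm <= clip_norm, then add N(0, sigma^2) noise."""
    def __init__(self, clip_norm: float, sigma: float):
        super().__init__()
        self.clip_norm = clip_norm
        self.sigma = sigma

    def forward(self, x):
        flat = x.flatten(start_dim=1)
        norms = flat.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
        clipped = (flat * (self.clip_norm / norms).clamp(max=1.0)).view_as(x)
        return clipped + self.sigma * torch.randn_like(clipped)

# Example: calibrate the noise for a single application of the mechanism at (eps, delta) = (8, 1e-6).
layer = GaussianNoiseLayer(clip_norm=1.0, sigma=gaussian_noise_std(1.0, eps=8.0, delta=1e-6))

Note that this calibration covers a single application of the mechanism; the privacy cost of repeated applications over training is tracked with composition results or the moments accountant, as discussed below.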
To ensure (ε, δ)-differential privacy of π, the standard deviation of the noise has to be at least σ = C√(2 log(1.25/δ))/ε, where C is the sensitivity of the preceding layer's output x_π. Lemma 1. If the output of the noise layer π(x_π) is (ε, δ)-differentially private w.r.t. x_π and the network layers before π preserve adjacency of X and X', then π(X) is also (ε, δ)-differentially private w.r.t. X. The proof of this lemma and the following Theorems 1 and 2 can be found in the appendix. Using Lemma 1, we are able to demonstrate that the outputs of a feed-forward neural network with a Gaussian noise layer are differentially private with respect to the input data, which is expressed in the following theorem. Theorem 1. (Forward pass) The output ŷ of a deterministic feed-forward neural network N with an (ε, δ)-differentially private layer π is also (ε, δ)-differentially private with respect to X. Now, given that the forward pass is differentially private, we can formulate the main theoretical result of the paper: differential privacy of the gradients, and thus, the weights of the network N. Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, the weight updates ω_X^(i) are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Since we are interested in generating data using GANs, we will also need the following corollary to finalise the theoretical foundation for our framework. Corollary 1. (GANs) Given a generative adversarial network consisting of the generator G and the discriminator D with a privacy-preserving layer, gradient updates of G will have the same privacy bounds as gradient updates of D. Proof. This trivially follows from Theorem 2 once we observe that generator updates are a function of discriminator updates. The above analysis is applicable for each individual iteration of the gradient descent, and privacy bounds on the final parameters can be obtained using composition theorems or the more efficient moments accountant method. Note that Theorems 1 and 2 define differential privacy of the neural network with respect to the inputs X only, not taking into account the labels y. In certain cases, when the labels of interest are already public knowledge and do not reveal any information about the data, this may be sufficient. However, if label privacy is required, it is possible to incorporate it in the proposed approach in two ways. A first solution is to modify the learning problem so that labels become a part of the data. For example, if one wants to train a face recognition model with privacy-breaking labels (e.g. specific names John, Bob, Julia, etc.), it is possible to add these labels to X, and instead use True and False labels in y, indicating whether the input image and the input name correspond to each other. This way, label privacy will be handled by the same framework. Alternatively, one can use a separate privacy-preserving mechanism to retrieve labels during training. In this case, the eventual privacy w.r.t. the pair (X, y) may be derived from a composition of two mechanisms, which is shown in the theorem below. One possible candidate for such a mechanism is the noisy voting scheme as used in BID23. Theorem 3. (Private labels) Given a feed-forward neural network N with (ε_1, δ_1)-differentially private outputs ŷ, and the training labels y satisfying (ε_2, δ_2)-differential privacy w.r.t.
the true labels y, the gradient updates ω_X^(i) are (ε_1 + ε_2, δ_1 + δ_2)-differentially private with respect to (X, y) on each iteration i of gradient descent. Proof. There are two privacy mechanisms, M_1 and M_2, applied to X and y correspondingly. Observe that M_1 does not have access to y, and thus y cannot influence the output probabilities of M_1. The same is true for M_2 and X. Consequently, we can assume that both mechanisms are applied to a pair (X, y). This allows us to employ a basic sequential composition theorem for differential privacy BID4 to obtain the privacy bounds. While it may be appealing to use parallel composition instead of sequential composition to obtain a tighter bound, since X and y appear to be disjoint, this would be incorrect. The reason is that X and y are strongly correlated and breaking privacy of one can reveal the other. Alternatively, one could use advanced composition theorems (see e.g. BID6; BID13) to prove tighter privacy bounds, but that is not the goal of our paper. Based on the analysis above, we can make a number of important observations regarding the applicability of this technique. First of all, the analysis is performed for feed-forward networks. Other architectures, such as RNNs, LSTMs, or memory networks, require additional investigation. Second, we focused on deterministic networks, meaning that the only two sources of stochasticity are data shuffling and the privacy-preserving noise layer π. Additional randomness in the network would complicate the proofs by introducing uncertainty in mappings. Third, the conditions of Lemma 1 dictate that the network layers prior to π must preserve adjacency of the input. One layer breaking this condition is batch normalisation, because it introduces interdependencies between examples inside a batch, and just one different instance can change an entire batch. Summarising these limitations, the neural network in question must: • be a feed-forward network; • not have randomised layers, e.g. dropout; • not have adjacency-breaking layers before the privacy layer, e.g. batch normalisation. In the following section, we will touch upon some implications of these restrictions that affect practical performance. Note that these restrictions only apply to the network in which we insert a privacy-preserving layer, i.e. only the discriminator in our case. In this section, we provide some implementation details and discuss evaluation results obtained on the MNIST BID16 and SVHN BID20 datasets. We evaluate our solution as follows. First, we train a generative model on the original datasets (using only the training part of each) with differential privacy by adding a Gaussian noise layer to the discriminator. We will call this model a teacher, analogously to BID23. Then, we generate an artificial dataset of comparable size using the obtained model. Finally, we train a separate (non-private) classifier, which we call a student, on the generated data and test it using the held-out test sets. The last step is important from two perspectives: we can quantify the quality of generated samples as opposed to the visual inspection typically done with GANs, and we can compare test errors to previously reported values. Note that there are no dependencies between the teacher and the student models. Moreover, student models are not constrained to neural networks and can be implemented as any type of machine learning algorithm. We choose two commonly used image classification datasets for our experiments: MNIST and SVHN.
MNIST is a handwritten digit recognition dataset consisting of 60'000 training examples and 10'000 test examples, each example being a 28x28 greyscale image. SVHN is also a digit recognition task, with 73'257 images for training and 26'032 for testing. The examples are coloured 32x32 pixel images of house numbers from Google Street View. Implementation was done in Python using Pytorch 1. For the generative model, we used a modified version of DCGAN by BID24. More specifically, the discriminator consists of five (four for MNIST) convolutional layers followed by leaky ReLU activations and a linear classifier with sigmoid output. We clip the output of the third convolutional layer (to ensure bounded sensitivity) and add Gaussian noise before passing it to the remaining convolutions with batch normalisation. The generator has two linear layers in front of five deconvolutions with batch normalisation and ReLU activations, followed by fractional max pooling with tanh activation at the end. Both networks were trained using the Adam optimiser BID14 with parameters typical for GAN training: learning rate set to 0.0002, β_1 = 0.5, β_2 = 0.999, and a batch size of 32. Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem, and therefore, are data-dependent and tighter than those obtained using normal composition theorems. The student network is constructed of two convolutional layers with ReLU activations, batch normalisation and max pooling, followed by two fully connected layers with ReLU, and a softmax output layer. Again, training is performed by the Adam algorithm. It is worth mentioning that this network does not achieve state-of-the-art performance on the used datasets, but we are primarily interested in evaluating the performance drop compared to a non-private model rather than getting the best test score. Using the experimental setup and implementation described above, we were able to get close to BID23, although not quite matching their accuracy for the same privacy bounds on SVHN. A performance gap is expected due to the more generic nature of our method and a simpler privacy-preserving procedure. Overall, we managed to achieve 98.19% accuracy on MNIST and 83.49% accuracy on SVHN while maintaining approximately (3.45, 10^−5)- and (8, 10^−6)-differential privacy. These numbers, along with the corresponding results of BID23, can be found in Table 1.
Table 1: Accuracy of student models for given privacy bounds for our method and the semi-supervised knowledge transfer approach of BID23. In both cases, we restricted our method to have tighter privacy bounds.
It is also worth noting that we did not perform rigorous hyper-parameter tuning due to limited computational resources; even better accuracy could be achieved had we done that. Additionally, we trained a simple logistic regression model on MNIST, and obtained 88.96% accuracy on privately generated data compared to 92.58% on the original data, which confirms that any model can be used as a student. Examples of real and generated privacy-preserving images for MNIST and SVHN data are depicted in FIG2. It can be seen that generated images don't have the same contrast and dynamic range as real examples, which is not a problem in non-private GANs. We attribute this to the lack of batch normalisation in the discriminator. In addition to the quantitative analysis of test errors and privacy bounds, we perform a visual inspection of generated examples and their corresponding nearest neighbours in real data.
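Before turning to the visual comparison in FIG3, here is a compact PyTorch sketch of the discriminator described above: convolutions with leaky ReLU, the output of the third convolution clipped and perturbed with Gaussian noise, and batch normalisation only after the privacy layer. Exact channel widths, kernel sizes and the input resolution (32x32, as for SVHN) are illustrative assumptions rather than values from the paper, and the GaussianNoiseLayer is the sketch given earlier.

```python
import torch
import torch.nn as nn

class DPDiscriminator(nn.Module):
    """DCGAN-style discriminator with a privacy-preserving noise layer inserted
    after the third convolution (channel widths and kernel sizes are assumptions)."""

    def __init__(self, noise_layer: nn.Module, in_channels: int = 3):
        super().__init__()
        self.pre = nn.Sequential(                 # layers before the privacy layer:
            nn.Conv2d(in_channels, 64, 4, 2, 1),  # no batch norm here, so adjacency is preserved
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.noise = noise_layer                  # clip + Gaussian noise (bounded sensitivity)
        self.post = nn.Sequential(                # batch norm is allowed after the privacy layer
            nn.Conv2d(256, 512, 4, 2, 1),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 2, 1, 0),           # sized for 32x32 inputs such as SVHN
        )

    def forward(self, x):
        h = self.pre(x)
        h = self.noise(h)                         # outputs from here on are (eps, delta)-DP w.r.t. x
        return torch.sigmoid(self.post(h)).view(x.size(0), -1)
```

As stated above, both networks would then be trained with Adam (learning rate 0.0002, β_1 = 0.5, β_2 = 0.999) and a batch size of 32.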
FIG3 depicts a set of generated private examples and their nearest real counterparts. We observe that while some generated images are very close to real examples they don't match exactly, differing either in shape, colour or surrounding digits. Moreover, a lot of pairs come from entirely different classes. We investigate the problem of non-interactive private data release with differential privacy guarantees. We employ generative adversarial networks to produce artificial privacy-preserving datasets. Contrary to existing privacy protection work in deep learning, this method allows to publish sanitised data and train any non-private models on it. The choice of GANs as a generative model ensures scalability and makes the technique suitable for real-world data with complex structure. Moreover, this method does not require running privacy tests on generated data before releasing it. Additionally, we introduce a novel method for preserving privacy of training data specific to deep neural networks based on adding noise in the embedding space during forward pass. It provides differential privacy guarantees and allows to construct privacy-preserving models in a simple and straightforward fashion, without modifying optimisation algorithms. In our experiments, we show that student models trained on artificial data can achieve high utility on MNIST dataset, while maintaining performance costs of added privacy and flexibility at acceptable levels on a more complicated SVHN data. Adding privacy directly to the trained model still provides better accuracy, and therefore, one of the possible directions for future work is to improve the quality of generated data for given privacy bounds. Extending presented technique and analysis to other types of deep neural networks provides another exciting opportunity for further research. In this appendix, we state again and prove lemmas and theorems from Section 4.1. Lemma 2. If the output of the noise layer π(x π) is (ε, δ)-differentially private w.r.t. x π and the network layers before π preserve adjacency of X and X, then π(X) is also (ε, δ)-differentially private w.r.t. X.Proof. By definition of differential privacy: DISPLAYFORM0 for all adjacent x π and x π.We need to show that the same holds for all adjacent inputs X, X, i.e. DISPLAYFORM1 Observe that we defined our network as deterministic (i.e. not having any randomness apart from initial data shuffling). Therefore, P [X π |X] = δ xπ (X π), where δ x (X) is a Dirac delta function. Conceptually, it means that the entire mass of the distribution of X π is concentrated on the point x π.Using the above observation, DISPLAYFORM2 Remark. Allowing randomised layers in the network would complicate the proof due to marginalisation over all possible outcomes X π corresponding to the input X. Theorem 1. (Forward pass) The outputŷ of a deterministic feed-forward neural network N with (ε, δ)-differentially private layer π, is also (ε, δ)-differentially private with respect to X.Proof. Using the lemma above, we can show that outputs of the layer π are (ε, δ)-differentially private w.r.t. the inputs X, i.e. DISPLAYFORM0 Since we require all the layers of N (except π) to be deterministic, there is a deterministic mapping from the outputs of π toŷ. Let us denote this mapping f (π), and the preimage of a set S under this mapping DISPLAYFORM1 Note that we treat X and X as points in the space of all datasets X, and thus, π and f are not set-valued functions. 
Also, to avoid confusion, let us restate that f⁻¹[S] is the preimage of a set S under f, and not a function inverse. Hence, we do not require f to be bijective, or even injective. Using the above, P[ŷ ∈ S] = P[f(π(X)) ∈ S] = P[π(X) ∈ f⁻¹[S]] ≤ e^ε P[π(X′) ∈ f⁻¹[S]] + δ = e^ε P[f(π(X′)) ∈ S] + δ = e^ε P[ŷ′ ∈ S] + δ, for any pair of adjacent datasets X and X′ (differing in one training example), thus proving the theorem. Theorem 2. (Backward pass) Given a feed-forward neural network N with (ε, δ)-differentially private outputs ŷ, the weight updates ω_X^(i) are also (ε, δ)-differentially private with respect to X in each iteration i of gradient descent. Proof. Since the gradient is a deterministic function g of the network outputs and the labels, we have ω_X^(i) = g(ŷ, y). Combining the above with the differential privacy of ŷ, P[ω_X^(i) ∈ S] = P[g(ŷ, y) ∈ S] = P[ŷ ∈ g⁻¹[S]] ≤ e^ε P[ŷ′ ∈ g⁻¹[S]] + δ = e^ε P[ω_{X′}^(i) ∈ S] + δ, for any pair of adjacent datasets X and X′, demonstrating that the weight updates stay (ε, δ)-differentially private w.r.t. the input. The privacy bound produced by the strong composition theorem is often too loose, and therefore, we exploit the moments accountant technique developed for analysing the DP-SGD algorithm. To give the main idea of the method, let us start with defining the privacy loss. Definition 2. Let M: D → R be a randomized mechanism and d, d′ a pair of adjacent databases. Let aux denote an auxiliary input. For an outcome o ∈ R, the privacy loss at o is defined as: C(o; M, aux, d, d′) = log ( P[M(aux, d) = o] / P[M(aux, d′) = o] ). The moments accountant is then defined as follows: Definition 3. Again, let M: D → R be a randomized mechanism, d, d′ a pair of adjacent databases, and aux denote an auxiliary input. The moments accountant is α_M(λ) = max_{aux, d, d′} α_M(λ; aux, d, d′), where α_M(λ; aux, d, d′) = log E[exp(λ C(M, aux, d, d′))] is a moment-generating function of the privacy loss. In short, the moments accountant method tracks bounds on the moments of the privacy loss random variable and then uses the Markov inequality to obtain the tail bound on this random variable corresponding to the values of ε and δ.
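For completeness, the tail bound that converts the logged moments into an (ε, δ) guarantee can be stated as follows; this is the standard conversion used with the moments accountant, written here from that literature rather than quoted from the paper.

```latex
\[
\delta \;\le\; \min_{\lambda}\, \exp\bigl(\alpha_{\mathcal{M}}(\lambda) - \lambda\varepsilon\bigr),
\qquad\text{equivalently}\qquad
\varepsilon \;\le\; \min_{\lambda}\, \frac{\alpha_{\mathcal{M}}(\lambda) + \log(1/\delta)}{\lambda}.
\]
```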
Train GANs with differential privacy to generate artificial privacy-preserving datasets.
747
scitldr
This paper presents two methods to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. Unlike convolutional studies that visualize image appearances corresponding to the network output or a neural activation from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special values in real applications. In particular, we used our methods to explain the gaming strategy of the alphaGo Zero model in experiments, and our method successfully disentangled the rationale of each move during the game. Interpreting the decision-making logic hidden inside neural networks is an emerging research direction in recent years. The visualization of neural networks and the extraction of pixel-level inputoutput correlations are two typical methodologies. However, previous studies usually interpret the knowledge inside a pre-trained neural network from a global perspective. For example, BID17 BID14 BID10 mined input units (dimensions or pixels) that the network output is sensitive to; BID2 visualized receptive fields of filters in intermediate layers; BID33 BID15 BID24 BID5 BID6 BID20 illustrated image appearances that maximized the score of the network output, a filter's response, or a certain activation unit in a feature map. However, instead of visualizing the entire appearance that is responsible for a network output or an activation unit, we are more interested in the following questions.• How does a local input unit contribute to the network output? Here, we can vectorize the input of the network into a high-dimensional vector, and we treat each dimension as a specific "unit" without ambiguity. As we know, a single input unit is usually not informative enough to make independent contributions to the network output. Thus, we need to clarify which other input units the target input unit collaborates with to constitute inference patterns of the neural network, so as to pass information to high layers.• Can we quantitatively measure the significance of above contextual collaborations between the target input unit and its neighboring units?Method: Therefore, given a pre-trained convolutional neural network (CNN), we propose to disentangle contextual effects w.r.t. certain input units. As shown in Fig. 1, we design two methods to interpret contextual collaborations at different scales, which are agnostic to the structure of CNNs. The first method estimates a rough region of contextual collaborations, i.e. clarifying whether the target input unit mainly collaborates with a few neighboring units or most units of the input. This method distills knowledge from the pre-trained network into a mixture of local models (see Fig. 2), where each model encodes contextual collaborations within a specific input region to make predictions. We hope that the knowledge-distillation strategy can help people determine quantitative contributions from different regions. Then, given a model for Extracting fine-grained contextual effects from a student net A lattice within the Go board Figure 1: Explaining the alphaGo model. Given the state of the Go board and the next move, we use the alphaGo model to explain the rationale of the move. We first estimate a rough region of contextual collaborations w.r.t. 
the current move by distilling knowledge from the value net to student nets that receive different regions of the Go board as inputs. Then, given a student net, we analyze fine-grained contextual collaborations within its region of the Go board. In this figure, we use a board state from a real Go game between humans for clarity.local collaborations, the second method further analyzes the significance of detailed collaborations between each pair of input units, when we use the local model to make predictions on an image. The quantitative analysis of contextual collaborations w.r.t. a local input unit is of special values in some tasks. For example, explaining the alphaGo model BID22 BID7 October 2017) is a typical application. The alphaGo model contains a value network to evaluate the current state of the game-a high output score indicates a high probability of winning. As we know, the contribution of a single move (i.e. placing a new stone on the Go board) to the output score during the game depends on contextual shapes on the Go board. Thus, disentangling explicit contextual collaborations that contribute to the output of the value network is important to understand the logic of each new move hidden in the alphaGo model. More crucially, in this study, we explain the alphaGo Zero model BID7, which extends the scope of interests of this study from diagnosing feature representations of a neural network to a more appealing issue letting self-improving AI teach people new knowledge. The alphaGo Zero model is pre-trained via self-play without receiving any prior knowledge from human experience as supervision. In this way, all extracted contextual collaborations represent the automatically learned intelligence, rather than human knowledge. As demonstrated in well-known Go competitions between the alphaGo and human players (alp, Retrieved 17 March 2016; 2017-05-27), the automatically learned model sometimes made decisions that could not be explained by existing gaming principles. The visualization of contextual collaborations may provide new knowledge beyond people's current understanding of the Go game. Contributions of this paper can be summarized as follows.(i) In this paper, we focus on a new problem, i.e. visualizing local contextual effects in the decisionmaking of a pre-trained neural network w.r.t. a certain input unit.(ii) We propose two new methods to extract contextual effects via diagnosing feature representations and knowledge distillation.(iii) We have combined two proposed methods to explain the alphaGo Zero model, and experimental have demonstrated the effectiveness of our methods. Understanding feature representations inside neural networks is an emerging research direction in recent years. Related studies include 1) the visualization and diagnosis of network features, 2) disentangling or distilling network feature representations into interpretable models, and 3) learning neural networks with disentangled and interpretable features in intermediate layers. Network visualization: Instead of analyzing network features from a global view BID30 BID19 BID16, BID2 BID33 BID15 BID24 BID5 BID32 BID34 showed the appearance that maximized the score of a given unit. BID5 used up-convolutional nets to invert CNN feature maps to their corresponding images. Pattern retrieval: Some studies retrieved certain units from intermediate layers of CNNs that were related to certain semantics, although the relationship between a certain semantics and each neural unit was usually convincing enough. 
People usually parallel the retrieved units similar to conventional mid-level features BID25 of images. BID37 selected units from feature maps to describe "scenes". BID23 discovered objects from feature maps. Model diagnosis and distillation: Model-diagnosis methods, such as the LIME BID17, the SHAP , influence functions BID11, gradientbased visualization methods BID6 BID20, and BID12 extracted image regions that were responsible for network outputs. BID29 BID36 ) distilled knowledge from a pre-trained neural network into explainable models to interpret the logic of the target network. Such distillation-based network explanation is related to the first method proposed in this paper. However, unlike previous studies distilling knowledge into explicit visual concepts, our using distillation to disentangle local contextual effects has not been explored in previous studies. A new trend is to learn networks with meaningful feature representations in intermediate layers BID9 BID26 BID13 in a weakly-supervised or unsupervised manner. For example, capsule nets BID18 and interpretable RCNN learned interpretable middle-layer features. InfoGAN BID3 and β-VAE BID8 learned meaningful input codes of generative networks. BID35 ) developed a loss to push each middle-layer filter towards the representation of a specific object part during the learning process without given part annotations. All above related studies mainly focused on semantic meanings of a filter, an activation unit, a network output. In contrast, our work first analyzes quantitative contextual effects w.r.t. a specific input unit during the inference process. Clarifying explicit mechanisms of how an input unit contributes to the network output has special values in applications. In the following two subsections, we will introduce two methods that extract contextual collaborations w.r.t. a certain input unit from a CNN at different scales. Then, we will introduce the application that uses the proposed methods to explain the alphaGo Zero model. Since the input feature usually has a huge number of dimensions (units), it is difficult to accurately discover a few input units that collaborate with a target input unit. Therefore, it is important to first approximate the rough region of contextual collaborations before the unit-level analysis of contextual collaborations, i.e. clarifying in which regions contextual collaborations are contained. Given a pre-trained neural network, an input sample, and a target unit of the sample, we propose a method that uses knowledge distillation to determine the region of contextual collaborations w.r.t. the target input unit. Let I ∈ I denote the input feature (e.g. an image or the state in a Go board).Note that input features of most CNNs can be represented as a tensor I ∈ R H×W ×D, where H and W indicate the height of the width of the input, respectively; D is the channel number. We clip different lattices (regions) Λ 1, Λ 2,..., Λ N ∈ Λ from the input tensor, and input units within the i-th lattice are given as I Λi ∈ R h×w×D, h ≤ H, w ≤ W. Different lattices overlap with each other. The core idea is that we use a mixture of models to approximate the function of the given pretrained neural network (namely the teacher net), where each model is a student net and uses input information within a specific lattice I Λi to make predictions. DISPLAYFORM0 Generate weights 2x2 lattices for the first type of student nets 3x3 lattices for the second type of student nets. 
We only illustrate three of the nine lattices for clarity. Figure 2: Division of lattices for two types of student nets. We distill knowledge from the value net into a mixture of four/nine student nets to approximate decision-making logic of the value net. whereŷ = f (I) and y i = f i (I Λi) denote the output of the pre-trained teacher net f and the output of the i-th student net f i, respectively. α i is a scalar weight, which depends on the input I. Because different lattices within the input are not equally informative w.r.t. the target task, input units within different lattices make different contributions to final network output. More crucially, given different inputs, the importance for the same lattice may also change. For example, as shown in BID20, the head appearance is the dominating feature in the classification of animal categories. Thus, if a lattice corresponds to the head, then this lattice will contribute more than other lattices, thereby having a large weight α i. Therefore, our method estimates a specific weight α i for each input I, i.e. α i is formulated as a function of I (which will be introduced later).Significance of contextual collaborations: Based on the above equation, the significance of contextual collaborations within each lattice Λ i w.r.t. an input unit can be measured as DISPLAYFORM0 Impacts from the first lattice Λ1 DISPLAYFORM1 where we revise the value of the target unit in the input and check the change of network outputs, DISPLAYFORM2 If contextual collaborations w.r.t. the target unit mainly localize within the i-th lattice Λ i, then α i · ∆y i can be expected to contribute the most to the change ofŷ. We conduct two knowledge-distillation processes to learn student nets and a model of determining {α i}, respectively. The first process distills knowledge from the teacher net to each student net f i with parameters θ i based on the distillation loss min θi I∈I y I,i −ŷ I 2, where the subscript I indicates the output for the input I. Considering that Λ i only contains partial information of I, we do not expect y I,i to reconstructŷ I without any errors. Distilling knowledge to weights: Then, the second distillation process estimates a set of weights α = [α I,1, α I,2, . . ., α I,n] for each specific input I. We use the following loss to learn another neural network g with parameters θ g to infer the weight. DISPLAYFORM0 3.2 FINE-GRAINED CONTEXTUAL COLLABORATIONS w.r.t. AN INPUT UNITIn the above subsection, we introduce a method to distill knowledge of contextual collaborations into student nets of different regions. Given a student net, in this subsection, we develop an approach to disentangling from the student net explicit contextual collaborations w.r.t. a specific input unit u, i.e. identifying which input unit v collaborates with u to compute the network output. We can consider a student net as a cascade of functions of N layers, i.e. DISPLAYFORM1, where x (l) denotes the output feature of the l-th layer. In particular, x and x (n) indicate the input and output of the network, respectively. We only focus on a single scalar output of the network (we may handle different output dimensions separately if the network has a high-dimensional output). If the sigmoid/softmax layer is the last layer, we use the score before the softmax/sigmoid operation as x (n) to simplify the analysis. 
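Before moving to the fine-grained analysis, a minimal PyTorch sketch of the two distillation steps from Section 3.1 above may help: the first loss fits each student f_i to the teacher output on its own lattice, and the second fits the weighting network g so that the weighted mixture of student outputs reproduces the teacher. The tensor shapes, the use of a softmax to keep the weights positive, and the way the lattices are concatenated are our own illustrative choices, not details from the paper.

```python
import torch
import torch.nn.functional as F

def student_distillation_loss(student, teacher_out, lattice_input):
    """First distillation step: each student f_i mimics the teacher on its own lattice."""
    y_i = student(lattice_input)                 # student prediction from partial input, shape (B, 1)
    return F.mse_loss(y_i, teacher_out)

def weighting_distillation_loss(g, students, teacher_out, lattice_inputs):
    """Second distillation step: learn alpha = g(concatenated lattices) so that
    sum_i alpha_i * y_i approximates the teacher output y_hat."""
    with torch.no_grad():                        # students are already trained at this stage
        y = torch.stack([f(x) for f, x in zip(students, lattice_inputs)], dim=1)  # (B, n, 1)
    alpha = torch.softmax(g(torch.cat(lattice_inputs, dim=1)), dim=1)             # (B, n), assumed form
    mixture = (alpha.unsqueeze(-1) * y).sum(dim=1)                                # (B, 1)
    return F.mse_loss(mixture, teacher_out)
```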
As preliminaries of our algorithm, we extend the technique of BID21 to estimate the quantitative contribution of each neural activation in a feature map to the final prediction. We use C x ∈ R H l ×W l ×D l to denote the contribution distribution of neural activations on the l-th layer x ∈ R H l ×W l ×D l. The score of the i-th element C xi denotes the ratio of the unit x i's score contribution w.r.t. the entire network output score. Because x (n) is the scalar network output, it has a unit contribution C x (n) = 1. Then, we introduce how to back-propagate contributions to feature maps in low layers. The method of contribution propagation is similar to network visualization based on gradient backpropagation BID15 BID32. However, contribution propagation reflects more objective distribution of numerical contributions over {x i}, instead of biasedly boosting compacts of the most important activations. Without loss of generality, in this paragraph, we use o = φ(x) to simplify the notation of the function of a certain layer. If the layer is a conv-layer or a fully-connected layer, then we can represent the convolution operation for computing each elementary activation score o i of o in a vectorized form DISPLAYFORM0 We consider x j w j as the numerical contribution of x j to o i. Thus, we can decompose the entire contribution of o i, C oi, into elementary contributions of x j, i.e. C oi→xj = C oi · xj wj oi+max{−b,0}, which satisfies C oi→xj ∝ x j w j (see the appendix for details). Then, the entire contribution of x j is computed as the sum of elementary contributions from all o i in the above layer, i.e. C xj = i C oi→xj.A cascade of a conv-layer and a batch-normalization layer can be rewritten in the form of a single conv-layer, where normalization parameters are absorbed into the conv-layer 1. For skip connections, a neural unit may receive contributions from different layers, C x DISPLAYFORM1. If the layer is a ReLU layer or a Pooling layer, the contribution propagation has the same formulation as gradient back-propagations of those layers 1. As discussed in BID2, each neural activation o i of a middle-layer feature o can be considered as the detection of a mid-level inference pattern. All input units must collaborate with neighboring units to activate some middle-layer feature units, in order to pass their information to the network output. Therefore, in this research, we develop a method to 1. determine which mid-level patterns (or which neural activations o i) the target unit u constitutes; 2. clarify which input units v help the target u to constitute the mid-level patterns; 3. measure the strength of the collaboration between u and v. Let o bfr and o denote the feature map of a certain conv-layer o = f (x) when the network receives input features with the target unit u being activated and the feature map generated without u being activated, respectively. In this way, we can use |o − o bfr | to represent the absolute effect of u on the feature map o. The overall contribution of the i-th neural unit C oi depends on the activation score o i, C oi ∝ max{o i, 0}, where max{o i, 0} measures the activation strength used for inference. The proportion of the contribution is affected by the target unit u can be roughly formulated asC o. 
DISPLAYFORM0 where C oi = 0 and thusC oi = 0 if o i ≤ 0, because negative activation scores of a conv-layer cannot pass information through the following ReLU layer (o is not the feature map of the last conv-layer before the network output).In this way,C oi highlights a few mid-level patterns (neural activations) related to the target unit u. C o measures the contribution proportion that is affected by the target unit u. We can useC o to replace C o and use techniques in Section 3.2.1 to propagateC o back to input units DISPLAYFORM1 represents a map of fine-grained contextual collaborations w.r.t. u. Each element in the mapC DISPLAYFORM2 j's collaboration with u. We can understand the proposed method as follows. The relative activation change DISPLAYFORM3 can be used as a weight to evaluate the correlation between u and the i-th activation unit (inference pattern). In this way, we can extract input units that make great influences on u's inference patterns, rather than affect all inference patterns. Note that both u and v may either increase or decrease the value of o i. It means that the contextual unit v may either boost u's effects on the inference pattern, or weaken u's effects. We use the ELF OpenGo BID28 BID30 as the implementation of the alphaGo Zero model. We combine the above two methods to jointly explain each move's logic hidden in the value net of the alphaGo Zero model during the game. As we know, the alphaGo Zero model contains a value net, policy nets, and the module of the Monte-Carlo Tree Search (MCTS). Generally speaking, the superior performance of the alphaGo model greatly relies on the enumeration power of the policy net and the MCTS, but the value net provides the most direct information about how the model evaluates the current state of the game. Therefore, we explain the value net, rather than the policy net or the MCTS. In the ELF OpenGo implementation, the value net is a residual network with 20 residual blocks, each containing two conv-layers. We take the scalar output 2 before the final (sigmoid) layer as the target value to evaluate the current state on the Go board. Given the current move of the game, our goal is to estimate unit-level contextual collaborations w.r.t. the current move. I.e. we aim to analyze which neighboring stones and/or what global shapes help the current move make influences to the game. We distill knowledge from the value net to student networks to approximate contextual collaborations within different regions. Then, we estimate unitlevel contextual collaborations based on the student net. Determining local contextual collaborations: We design two types of student networks, which receive lattices at the scales of 13 × 13 and 10 × 10, respectively. In this way, we can conduct two distillation processes to learn neural networks that encode contextual collaborations at different scales. As shown in Fig. 2, we have four student nets {f i |i = 1, . . ., 4} oriented to 13 × 13 lattices. Except for the output, the four student nets have the same network structure as the value net. The four student nets share parameters in all layers. The input of a student net only has two channels corresponding to maps of white stones and black stones, respectively, on the Go board. We crop four overlapping lattices at the four corners of the Go board for both training and testing. Note that we rotate the board state within each lattice I Λi to make the top-left position corresponds to the corner of the board, before we input I Λi to the student net. 
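A small NumPy sketch of the corner-lattice preprocessing just described: the board is encoded as two binary 19x19 planes (white and black stones), a 13x13 lattice is cropped at each corner, and each crop is rotated so that its top-left position coincides with the board corner. The function name and the exact rotation convention are illustrative assumptions.

```python
import numpy as np

def corner_lattices(board, size=13):
    """board: array of shape (2, 19, 19) with white/black stone planes.
    Returns four (2, size, size) crops, one per corner, each rotated so that
    the board corner maps to the top-left position of the crop."""
    n = board.shape[-1]
    crops = {
        "top_left":     board[:, :size, :size],
        "top_right":    board[:, :size, n - size:],
        "bottom_left":  board[:, n - size:, :size],
        "bottom_right": board[:, n - size:, n - size:],
    }
    # number of 90-degree counterclockwise rotations bringing each corner to the
    # top-left of the crop (assumed convention)
    rot = {"top_left": 0, "top_right": 1, "bottom_left": 3, "bottom_right": 2}
    return [np.rot90(crops[k], k=rot[k], axes=(1, 2)).copy() for k in crops]
```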
The neural network g has the same settings as the value net. g receives a concatenation of [I Λ1, . . ., I Λ4] as the input. g outputs four scalar weights {α i} for the four local student networks {y i}. We learn g via knowledge distillation. Student nets for 10×10 lattices have similar settings as those for 13×13 lattices. We divide the entire Go board into 3 × 3 overlapping 10 × 10 lattices. Nine student nets encode local knowledge from nine local lattices. We learn another neural network g, which uses a concatenation of [I Λ1, . . ., I Λ9] to weight for the nine local lattices. Finally, we select the most relevant 10 × 10 lattice and the most relevant 13 × 13 lattice, via max i s i, for explanation. Estimating unit-level contextual collaborations: In order to obtain fine-grained collaborations, we apply the method in Section 3.2.2 to explain two student nets corresponding to the two selected relevant lattices. We also use our method to explain the value net. We compute a map of contextual collaborations for each neural network and normalize values in the map. We sum up maps of the three networks together to obtain the final map of contextual collaborationsĈ.More specifically, given a neural network, we use the feature of each conv-layer to compute the initialC o in Equation and propagatedC o to obtain a map of collaborationsC x. We sum up maps based on the 1st, 3rd, 5th, and 7th conv-layers to obtain the collaboration map of the network. In experiments, we distilled knowledge of the value network to student nets, and disentangled finegrained contextual collaborations w.r.t. each new move. We compared the extracted contextual collaborations and human explanations for the new move to evaluate the proposed method. In this section, we propose two metrics to evaluate the accuracy of the extracted contextual collaborations w.r.t. the new move. Note that considering the high complexity of the Go game, there is no exact ground-truth explanation for contextual collaborations. Different Go players usually have different analysis of the same board state. More crucially, as shown in competitions between the alphaGo and human players (alp, Retrieved 17 March 2016; 2017-05-27), the knowledge encoded in the alphaGo was sometimes beyond humans' current understanding of the Go game and could not be explained by existing gaming principles. In this study, we compared the similarity between the extracted contextual collaborations and humans' analysis of the new move. The extracted contextual collaborations were just rough explanations from the perspective of the alphaGo. We expected these collaborations to be close to, but not exactly the same as human understanding. More specifically, we invited Go players who had obtained four-dan grading rank to label contextual collaborations. To simplify the metric, Go players were asked to label a relative strength value of the collaboration between each stone and the target move (stone), no matter whether the relationship between the two stones was collaborative or adversarial. Considering the double-blind policy, the paper will introduce the Go players if the paper is accepted. Let Ω be a set of existing stones except for the target stone u on the Go board. p v ≥ 0 denotes the labeled collaboration strength between each stone v ∈ Ω and the target stone u. q v = |Ĉ v | is referred to as the collaboration strength estimated by our method, whereĈ v denotes the final estimated collaboration value on the stone v. 
We normalized the collaboration strength,p v = p v / v p v, q v = q v / v q v and computed the Jaccard similarity between the distribution of p and the distribution of q as the similarity metric. In addition, considering the great complexity of the Go game, different Go players may annotate different contextual collaborations. Therefore, we also required Go players to provide a subjective rating for the extracted contextual collaborations of each board state, i.e. selecting one of the five ratings: 1-Unacceptable, 2-Problematic, 3-Acceptable, 4-Good, and 5-Perfect. FIG0 shows the significance of the extracted contextual collaborations, as well as possible explanations for contextual collaborations, where the significance of the stone v's contextual collaboration was reported as the absolute collaboration strength q v instead of the original scoreĈ v in experiments. Without loss of generality, let us focus on the winning probability of the black. Considering the complexity of the Go game, there may be two cases of a positive (or negative) value of the collaboration scoreĈ v. The simplest case is that when a white stone had a negative value ofĈ v, it means that the white stone decreased the winning probability of the black. However, sometimes a white stone had a positiveĈ v. It may be because that this white stone did not sufficiently exhibit its power due to its contexts. Since the white and the white usually had a very similar number of stones in the Go board, putting a relatively ineffective white stone in a local region also wasted the opportunity of winning advantages in other regions in the zero-sum game. Similarly, the black stone may also have either a positive or a negative value ofĈ v. The Jaccard similarity between the extracted collaborations and the manually-annotated collaborations was 0.3633. Nevertheless, considering the great diversity of explaining the same game state, The black stone at has a high value, because it collaborates with the new stone to escape from the surrounding of the white. The white stone at has a high value, because it is about to be eaten by the new black stone. The black stone at has a high value, because it collaborates with the new stone to eat the white stone at. Explanations for the estimated collaborations The black stone at has a high value, because it collaborates with the new stone, because it collaborates with the new black stone to get a head out of the white's regime. The two black stone also communicate with black stones on the top. The white stone at has a high value, because future white stones can only be placed to the right to escape from the regime of the new black stone. The black stone at has a high value, because it collaborates with the new black stone to separate white stones into the left and right groups, which increases the probability of attacking white stones on the left in the future. The white stone at has a high value, because the new black stone reduces the white stone's space of "making eyes" in the future. The black stone at has a high value, because the new black stone helps this black stone to get a head out of the white's regime. The white stone at has a high value, because the new black stone limits the potential of the white's future development in its neighboring area. the average rating score that was made by Go players for the extracted collaborations was 3.7 (between 3-Acceptable and 4-Good). Please see the appendix for more . 
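The evaluation metric above can be computed in a few lines. The paper does not spell out which variant of the Jaccard similarity is applied to the two normalized strength distributions; the sketch below assumes the weighted (min/max) form, a common choice for comparing non-negative distributions, so this is an assumption rather than the authors' exact definition.

```python
import numpy as np

def weighted_jaccard(p, q, eps=1e-12):
    """Weighted Jaccard similarity between two non-negative strength vectors,
    each normalized to sum to one (an assumed reading of the paper's metric)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / max(p.sum(), eps)
    q = q / max(q.sum(), eps)
    return np.minimum(p, q).sum() / max(np.maximum(p, q).sum(), eps)
```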
In this paper, we have proposed two typical methods for quantitative analysis of contextual collaborations w.r.t. a certain input unit in the decision-making of a neural network. Extracting fine-grained contextual collaborations to clarify the reason why and how an input unit passes its information to the network output is of significant values in specific applications, but it has not been well explored before, to the best of our knowledge. In particular, we have applied our methods to the alphaGo Zero model, in order to explain the potential logic hidden inside the model that is automatically learned via self-play without human annotations. Experiments have demonstrated the effectiveness of the proposed methods. Note that there is no exact ground-truth for contextual collaborations of the Go game, and how to evaluate the quality of the extracted contextual collaborations is still an open problem. As a pioneering study, we do not require the explanation to be exactly fit human logics, because human logic is usually not the only correct explanations. Instead, we just aim to visualize contextual collaborations without manually pushing visualization towards human-interpretable concepts. This is different from some previous studies of network visualization BID15 BID32 that added losses as the natural image prior, in order to obtain beautiful but biased visualization . In the future, we will continue to cooperate with professional Go players to further refine the algorithm to visualize more accurate knowledge inside the alphaGo Zero model. Let o = ω ⊗ x + β denote the convolutional operation of a conv-layer. We can rewrite the this equation in a vectorized form as DISPLAYFORM0 If the conv-layer is a fully-connected layer, then each element W ij corresponds to an element in ω. Otherwise, W is a sparse matrix, i.e. W ij = 0 if o i and x j are too far way to be covered by the convolutional filter. Thus, we can write o i = j x j w j + b to simplify the notation. Intuitively, we can propagate the contribution of o i to its compositional elements x j based on their numerical scores. Note that we only consider the case of o i > 0, because if o i ≤ 0, o i cannot pass information through the ReLU layer, and we obtain C oi = 0 and thus C oi→xj = 0. In particular, when b ≥ 0, all compositional scores just contribute an activation score o i − b, thereby receiving a total contribution of C oi oi−b oi. When b < 0, we believe the contribution of C oi all comes from elements of {x j}, and each element's contribution is given a C oi · xj wj oi−b. Thus, we get DISPLAYFORM1 When a batch-normalization layer follows a conv-layer, then the function of the two cascaded layers can be written as DISPLAYFORM2 Thus, we can absorb parameters for the batch normalization into the conv-layer, i.e. w j ← For ReLU layers and Pooling layers, the formulation of the contribution propagation is identical to the formulation for the gradient back-propagation, because the gradient back-propagation and the contribution propagation both pass information to neural activations that are used during the forward propagation. Considering the great complexity of the Go game, there do not exist ground-truth annotations for the significance of contextual collaborations. Different Go players may have different understanding of the same Go board state, thereby annotating different heat maps for the significance of contextual collaborations. 
More crucially, our reflect the logic of the automatically-learned alphaGo Zero model, rather than the logic of humans. Therefore, in addition to manual annotations of collaboration significance, we also require Go players to provide a subjective evaluation for the extracted contextual collaborations. 94.63% Figure 6: We show the significance of contextual collaborations within a local lattice. The score for the i-th lattice is reported as si j sj.
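To make the contribution-propagation rule from the appendix above concrete, here is a small NumPy sketch for a single fully-connected layer with weights W and bias b, following C_{o_i→x_j} = C_{o_i} · x_j w_j / (o_i + max{−b, 0}) and C_{x_j} = Σ_i C_{o_i→x_j}, with zero contribution for activations o_i ≤ 0. The function name and the handling of near-zero denominators are our own choices.

```python
import numpy as np

def propagate_contribution(C_o, x, W, b):
    """Propagate contributions C_o of a layer's outputs o = W @ x + b back to inputs x.
    Implements C_{o_i -> x_j} = C_{o_i} * x_j * W_{ij} / (o_i + max(-b_i, 0)),
    with C_{o_i} = 0 whenever o_i <= 0, and C_{x_j} = sum_i C_{o_i -> x_j}."""
    o = W @ x + b
    C_o = np.where(o > 0, C_o, 0.0)                          # negative activations pass no contribution
    denom = o + np.maximum(-b, 0.0)
    denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)    # avoid division by zero (our choice)
    elementary = (C_o / denom)[:, None] * (W * x[None, :])   # shape (out, in)
    return elementary.sum(axis=0)                            # C_x, shape (in,)
```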
This paper presents methods to disentangle and interpret contextual effects that are encoded in a deep neural network.
748
scitldr
The main goal of network pruning is imposing sparsity on the neural network by increasing the number of parameters with zero value in order to reduce the architecture size and the computational speedup. Recent advances in deep neural networks came with ideas to train deep architectures that have led to near-human accuracy for image recognition, object categorization and a wide variety of other applications;;;;. One possible issue is that an over-parameterized network may make the architecture overcomplicated for the task at hand and it might be prone to over-fitting as well. In addition to the model complexity, a huge amount of computational power is required to train such deep models due to having billions of weights. Moreover, even if a huge model is trained, it cannot be effectively employed for model evaluation on low-power devices mainly due to having exhaustive matrix multiplications. So far, a wide variety of approaches have been proposed for creating more compact models. Traditional methods include model compression;, network pruning Han et al. (2015b), sparsity-inducing regularizer , and low-rank approximation;;;. The aforementioned methods usually induce random connection pruning which yields to few or no improvement in the computational cost. On the other hand, structured pruning methods proposed to compress the architecture with significant computational efficiency;. One of the critical subjects of interest in sparsity learning is to maintain the accuracy level. In this paper, we discuss the intuitive reasons behind the accuracy drop and propose a method to prevent it. The important step is to determine how the sparsity and accuracy are connected together in order to be able to propose a mechanism for controlling the sparsity to prevent severe accuracy drop. In order to connect the sparsity to accuracy, intuitively, the accuracy drop is caused by imposing too much sparsity on the network in a way that the remaining elements cannot transfer enough information for optimal feature extraction for the desired task. Another intuitive reasoning is to argue that the sparsity is not supervised with any attention towards the model performance during optimization. For effective network pruning and feature selection, different approaches such as employing the group lasso for sparse structure learning , structure scale constraining , and structured regularizing deep architectures known as Structured Sparsity Learning (SSL) have previously been proposed. For most of the previous research efforts, there is lack of addressing the direct effect of the proposed method on the combination of the sparsity and accuracy drop. One may claim that successful sparsity imposing with negligible accuracy drop might be due to the initial over-parameterizing the network. Moreover, there is no control mechanism to supervise the sparsity operation connected to the model performance which limits the available methods to intensive hyper-parameter tuning and multiple stages of training. Our contribution. We designed and employed a supervised attention mechanism for sparsity learning which: performs model compression for having less number of parameters prevents the accuracy drop by sparsity supervision by paying an attention towards the network using variance regularization and is a generic mechanism that is not restricted by the sparsity penalty or any other limiting assumption regarding the network architecture. 
To the best of our knowledge, this is the first research effort which proposes a supervised attention mechanism for sparsity learning. Paper Organization. At first, we provide a review of the related research efforts (Section 2). Then, we introduce the attention mechanism which is aimed at forcing some sections of the network to be active (Section 3). Later in Section 4, we propose an algorithm only for the attention supervision. We complement our proposed method in Section 5, by providing experimental for which we target the sparsity level, accuracy drop and robustness of the model to hyper-parameter tuning. As will be observed, the proposed mechanism prevents the severe accuracy drop in higher levels of sparsity. We will empirically show the robustness to exhaustive hyper-parameter tuning in addition to performance superiority of the proposed method in higher sparsity levels. Network weights pruning. Network compression for parameter reduction has been of great interest for a long time and a large number of research efforts are dedicated to it. In Han et al. (2015b; a);;, network pruning has been performed with a significant reduction in parameters, although they suffer from computational inefficiency due to the mere weight pruning rather than the structure pruning. Network structure pruning. In Louizos et al. (2017a);; , pruning the unimportant parts of the structure 1 rather than simple weight pruning has been proposed and significant computational speedup has been achieved. However, for the aforementioned methods, the architecture must be fully trained at first and the potential training speedup regarding the sparsity enforcement cannot be attained. A solution for training speedup has been proposed by 0 -regularization technique by using online sparsification Louizos et al. (2017b). Training speedup is of great importance but adding a regularization for solely speeding up the training (because of the concurrent network pruning) is not efficient due to adding a computational cost for imposing 0 -regularization itself. Instead, we will use an adaptive gradient clipping for training speedup. Attention. In this paper, the goal is to impose the sparsity in an accurate and interpretable way using the attention mechanism. So far, attention-based deep architectures has been proposed for image;;; and speech domains;; , as well as machine translation;;. Recently, the supervision of the attention mechanism became a subject of interest as well Liu et al. (2016; ; ; for which they proposed to supervise the attention using some external guidance. We propose the use of guided attention for enforcing the sparsity to map the sparsity distribution of the targeted elements 2 to the desired target distribution. The main objective of the attention mechanism is to control and supervise the sparsity operation. For this aim, it is necessary to propose a method which is neither dependent on the architecture of the model nor to any layer type while maintaining the model accuracy and enforcing the compression objective. Considering the aforementioned goals, we propose the variance loss as an auxiliary cost term to force the distribution of the weights 3 to be skewed. A skewed distribution with a high variance and a concentration on zero (to satisfy the sparsity objective) is desired. 
Our proposed scheme supervises the sparsity operation to keep a portion of the targeted elements (such as weights) to be dominant (with respect to their magnitude) as opposed to the other majority of the weights to simultaneously impose and control sparsity. Intuitively, this mechanism may force a portion of weights to survive for sufficient information transmission through the architecture. Assume enforcing the sparsity is desired on a parametric model; let's have the training samples with {x i, y i} as pairs. We propose the following objective function which is aimed to create a sparse structure in addition to the variance regularization: in which Γ corresponds to the cross-entropy loss function and θ can be any combination of the target parameters. Model function is defined as F , R is some regularization function, G and H are some arbitrary functions 4 on parameters (such as grouping parameters), N is the number of samples, λ parameters are the weighting coefficients for the associated losses and and Ψ are the sparsity and variance functions 5, respectively. The variance function is the utilized regularization term for any set of θ parameters 6. The inverse operation on top of the Ψ in Eq. 1 is necessary due to the fact that the higher variance is the desired objective. The power of the variance as a regularizer has been investigated in. In this work, we expand the scope of variance regularization to the sparsity supervision as well. Adding a new term to the loss function can increase the model complexity due to adding a new hyper-parameter (the coefficient of the variance loss). For preventing this issue, we propose to have a dependent parameter as the variance loss coefficient. If the new hyperparameter is defined in terms of a variable dependent upon another hyperparameter, then it does not increase the model complexity. Considering the suggested approach, a dependency function must be defined over the hyperparameters definition. The dependency is defined as λ v = f (λ s) = α × λ s in which α is a scalar multiplier. Group Sparsity. Group sparsity has widely been utilized mostly for its feature selection ability by deactivating neurons 7 via imposing sparsity on the whole cluster of weights in a group;. Regarding the Eq. 1, the group sparsity objective function can be defined by following expression: in which w (j) is the j th group of weights in w and |w (j) | is the number of weights in the associated group in the case of having M groups. The l indicates the layer index, |G(W l)| is a normalizer factor which is in essence the number of groups for the l th layer and l demonstrates the elements (weights) belonging to the the l th layer. Structured attention. We propose a Structured Attention (SA) regularization, which adds the attention mechanism on group sparsity learning (the sparsity imposition operation is similar to). The attention is on the predefined groups. Under our general framework, it can be expressed by the following substitutions in Eq. 1: which is simply the variance of the group values for each layer, normalized by a factor and aggregated for all the layers. Generalizability. It is worth noting that our proposed mechanism is not limited to the suggested structured method. It can operate on any function as sparsity objective because the definition of the attention is independent of the type of the sparsity. 
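A minimal PyTorch sketch of the structured objective above for a single convolutional layer, where each output channel's weights form a group: the first term plays the role of the group-sparsity penalty of Eq. 2 and the second is the inverse-variance attention term of Eq. 3, with λ_v = α · λ_s as in Section 3.1. The exact group normalisation of Eq. 2 is not fully legible in this extraction, so the plain average of group norms used here is a simplification, and the way groups are extracted from the 4-D weight tensor is our own illustrative choice; the cross-entropy and other regularizers of Eq. 1 are omitted.

```python
import torch

def structured_attention_loss(conv_weight, lambda_s, alpha=1.0, eps=1e-8):
    """Group sparsity plus inverse-variance attention for one conv layer.
    conv_weight: tensor of shape (out_channels, in_channels, kH, kW);
    each output channel is treated as one group of weights."""
    groups = conv_weight.flatten(start_dim=1)            # (num_groups, weights_per_group)
    group_norms = groups.norm(p=2, dim=1)                # ||w^(j)||_2 for each group

    sparsity = group_norms.sum() / group_norms.numel()   # normalized group-norm penalty (cf. Eq. 2)
    attention = 1.0 / (group_norms.var(unbiased=False) + eps)  # inverse variance of groups (cf. Eq. 3)

    lambda_v = alpha * lambda_s                          # dependent coefficient, Section 3.1
    return lambda_s * sparsity + lambda_v * attention
```

The same two-term template applies if a different sparsity function is plugged in place of the group norms.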
As an example, one can utilize an unstructured attention which is simply putting the attention function Ψ on all the network weights without considering any special groups of weights or prior objectives such as pruning unimportant channels or filters. The attention mechanism observes the areas of structure 8 on which the sparsity is supposed to be enforced. we propose the Guided Attention in Sparsity Learning (GASL) mechanism, which aims at the attention supervision toward mapping the distribution of the elements' values to a certain target distribution. The target distribution characteristics must be aligned with the attention objective function with the goal of increasing the variance of the elements for sparsity imposition. Assume we have the vector T that is the values of the elements in the group [θ] = {θ 1, θ 2, ..., θ |θ|} and for which we want to maximize the variance. , variational Bayesian inference has been employed for the gradient computation of variational lower bound. Inspired by , in which random vectors are used for stochastic gradient optimization, we utilize the additive random vectors for variance regularization. The random vector is defined as The formulation is as below: where M is a |θ| × |θ| matrix. The ed vectorV (θ) does not make any changes in the mean of the parameters distribution since it has the same mean as the initial V (θ) vector. Proof. For that aim, the task breaks to the subtask of finding the optimal M for which the trace of theV (θ) is maximized. For the mini-batch optimization problem, we prove that the proposed model is robust to the selection of M. The maximizer M can be obtained when the trace of the variance matrix is maximized and it can be demonstrated as follows: As can be observed from Eq. 5, as long as M is a positive definite matrix, the additive random can add to the value of the matrix trace without any upper bound. The detailed mathematical proof is available in the Appendix. Considering the mathematical proof, one can infer that the mere utilization of the variance loss term in Eq. 1, as the attention mechanism and without the additive random vector, can supervise the sparsity operation. However, we will empirically show that the additive random vectors can improve the accuracy due to the supervision of the attention. The supervision of the variance is important regarding the fact that the high variance of the parameters may decrease the algorithm speed and performance for sparsity enforcement. This is due to the large number of operations that is necessary for gradient descent to find the trade-off between the sparsity and variance regularization. From now one, without the loss of generality, we assume M to be identity matrix in our experiments. In practice, the choice of V r should be highly correlated with V. Furthermore, Eq. 5 shows that without being correlated, the terms associated with Cov[V r (θ), V (θ)] may go to zero which affects the high variance objective, negatively. The algorithm for random vector selection is declared in Algorithm. 1. The distribution pdf should be specified regarding the desired output distribution. We chose log-normal distribution due to its special characteristics which create a concentration around zero and a skewed tail. If the variance of the random vector V r (θ) is less than the main vector V (θ), no additive operation will be performed. In case the [θ] parameters variance is high-enough compared to the V (θ) vector, then there is no need for additive random samples. 
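A sketch of the random-vector step of Algorithm 1 in PyTorch: if the sampled log-normal vector does not have higher variance than the current group values, nothing is added; otherwise the additive vector V_r is mixed in, with M set to the identity as in our experiments. The log-normal parameter and the rank-alignment used to correlate V_r with V are illustrative assumptions rather than details from the paper.

```python
import torch

def gasl_additive_vector(v, sigma=1.0):
    """GASL random-vector step (Algorithm 1), with M = identity.
    v: 1-D tensor of group values V(theta). Samples a log-normal vector V_r,
    rank-aligns it with v to induce positive correlation (an assumption),
    and returns v + V_r only if V_r has higher variance than v."""
    v_r = torch.exp(torch.randn_like(v) * sigma)        # log-normal samples: skewed, heavy right tail
    # Rank-align: the largest sampled values go where v is largest (our way of correlating V_r with V).
    v_r_sorted, _ = torch.sort(v_r)
    ranks = torch.argsort(torch.argsort(v))
    v_r = v_r_sorted[ranks]
    if v_r.var(unbiased=False) > v.var(unbiased=False):
        return v + v_r                                   # replacement operation: V_hat = V + V_r
    return v                                             # variance already high enough; no addition
```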
This preventing mechanism has been added due to the practical speedup. Replacement operation: ReplaceV (θ) with V (θ); Return:V (θ); else Return: V (θ); Computation: Update gradient; 4.3 COMBINATION OF GASL AND SA GASL algorithm can operate on the top of the structured attention for attention supervision. The schematic is depicted in Fig. 1. Furthermore, a visualized example of the output channels from the second convolutional layer in the MNIST experiments has also demonstrated in Fig. 1. The structured attention is dedicated to the output channels of a convolutional layer. Figure 1: The combination of GASL and structured attention. The cube demonstrates the output feature map of a convolutional layer. The weights associated with each channel, form a group. For visualization, the right column is the activation visualization of the attention-based sparsity enforcement on output channels and the left one is the of imposing sparsity without attention. As can be observed, some of the channels are turned off and the remaining ones are intensified. We use three databases for the evaluation of our proposed method: , CIFAR-10 and CIFAR-100. For increasing the convergence speed without degrading the overall performance, we used gradient clipping. A common approach is to clip individual gradients to some fixed predefined range [−ζ, ζ]. As the learning rate becomes smaller continuously, the effective gradient 9 will approach zero and training convergence may become extremely slow. For tackling this issue, we used the method proposed in for gradient clipping which defined the range dynamically as [−ζ/γ, ζ/γ] for which γ is the current learning rate. We chose ζ = 0.1 in our experiments. Hyper-parameters are selected by cross-validation. For all our experiments, the output channels of convolutional layers and neurons in fully connected layers are considered as groups. For experiments on MNIST dataset, we use 2 -regularization with the default hyperparameters. Two network architectures have been employed: LeNet-5-Caffe 10 and a multilayer perceptron (MLP). For the MLP network, the group sparsity is enforced on each neuron's outputs for feature selection; Same can be applied on the neurons' inputs as well. The are shown in Table. 1. The percentage of sparsity is reported layer-wise. One important observation is the superiority of the SA method to the SSL in terms of accuracy, while the sparsity objective function for both is identical and the only difference is the addition of structural attention (Eq. 3). As a comparison to Louizos et al. (2017a), we achieved closed sparsity level with better accuracy for the MLP network. For the LeNet network, we obtained the highest sparsity level with competitive accuracy compared to the best method proposed in. 9 Which is gradient × learning rate 10 https://github.com/BVLC/caffe/blob/master/examples/mnist Table 1: Experiments on LeNet-5-Caffe architecture with 20-50-800-500 number of output filters and hidden layers and MLP with the architecture of 784-500-300 as the number of hidden units for each layer. The sparsity level is reported layer-wise. The sparsity and error are both reported as %. For experiments in this section, we used VGG-16 as the baseline architecture. Random cropping, horizontal flipping, and per-image standardization have been performed for data augmentation in the training phase and in the evaluation stage, only center cropping has been used. Batch-normalization has also been utilized after each convolutional layer and before the activation. 
The initial learning rate of 0.1 has been chosen and the learning rate is dropped by a factor of 10 when the error plateaus for 5 consecutive epochs. As can be observed from Table. 2, the combination of the GASL algorithm and SA dominates regarding the achieved sparsity level and demonstrates competitive in terms of accuracy for Cifar-100. We terminate the training after 300 epochs or if the averaged error is not improving for 20 consecutive epochs, whichever comes earlier. For Cifar-10 , we obtained the second best for both accuracy and sparsity level. The advantage of the proposed method for higher sparsity levels. For Cifar-100 experiments, we continued the process of enforcing sparsity for achieving the desired level of compression 11. We chose three discrete level of sparsity and for any of which, the accuracy drop for different methods is reported. Table. 3 demonstrates the comparison of different methods with regard to their accuracy drops at different levels of sparsity. For some levels of sparsity, it was observed that some methods performed better than the baseline. We deliberately selected higher levels of sparsity for having some performance drop as opposed to the baseline for all the implemented methods. As can be observed, our method shows its performance superiority in accuracy for the higher levels of sparsity. In another word, the proposed method outperforms in preventing the accuracy drop in the situation of having high sparsity level. Robustness to the hyperparameter tuning. Regarding the discussion in Section 3.1, it is worth to investigate the effect of λ v on the accuracy drop. In another word, we investigate the relative importance of tuning the variance loss coefficient. The accuracy drop is reported for Cifar-100 experiments using different α values and sparsity levels. The depicted in Table. 4, empirically shows the robustness of the proposed method to the selection of α, as the dependent factor, for which in the dynamic range of [0.1, 10], the accuracy drop is not changing drastically. This clearly demonstrates the robustness of the proposed method to the selection of the new hyperparameter associated with the attention mechanism as it is only a dependent factor to the sparsity penalty coefficient. In this paper, we proposed a guided attention mechanism for controlled sparsity enforcement by keeping a portion of the targeted elements to be alive. The GASL algorithm has been utilized on top of the structured attention for attention supervision to prune unimportant channels and neurons of the convolutional and fully-connected layers. We demonstrated the superiority of the method for preventing the accuracy drop in high levels of sparsity. Moreover, it has been shown that regardless of adding a new term to the loss function objective, the model complexity remains the same and the proposed approach is relatively robust to exhaustive hyper-parameter selection. Without the loss of generality, the method can be adapted to any layer type and different sparsity objectives such as weight pruning for unstructured sparsity or channel, neuron or filter cancellation for structured sparsity.
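For completeness, here is a hedged sketch of the kind of structured group-sparsity penalty and layer-wise sparsity measurement used throughout the experiments, with one group per convolutional output channel; whether this standard group-lasso form coincides exactly with the sparsity objective of Eq. 3 is an assumption of the sketch, and the helper names are ours.

```python
import torch

def channel_group_lasso(conv_weight, eps=1e-12):
    """Group-lasso penalty with one group per output channel of a conv layer
    (weight shape: [out_channels, in_channels, kH, kW])."""
    per_channel = conv_weight.flatten(start_dim=1)       # [out_channels, rest]
    return torch.sqrt((per_channel ** 2).sum(dim=1) + eps).sum()

def layerwise_channel_sparsity(conv_weight, tol=1e-3):
    """Percentage of output channels whose weights are numerically zero,
    i.e. the layer-wise sparsity level reported in the tables above."""
    per_channel_norm = conv_weight.flatten(start_dim=1).norm(dim=1)
    return 100.0 * (per_channel_norm < tol).float().mean().item()
```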
Proposing a novel method based on guided attention to enforce sparsity in deep neural networks.
749
scitldr
Giving provable guarantees for learning neural networks is a core challenge of machine learning theory. Most prior work gives parameter recovery guarantees for one hidden layer networks, however, the networks used in practice have multiple non-linear layers. In this work, we show how we can strengthen such to deeper networks -- we address the problem of uncovering the lowest layer in a deep neural network under the assumption that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial and the input distribution is gaussian. Understanding the landscape of learning neural networks has been a major challege in machine learning. Various works gives parameter recovery guarantees for simple one-hidden-layer networks where the hidden layer applies a non-linear activation u after transforming the input x by a matrix W, and the upper layer is the weighted sum operator: thus f (x) = a i u(w T i x). However, the networks used in practice have multiple non-linear layers and it is not clear how to extend these known techniques to deeper networks. We consider a multilayer neural network with the first layer activation u and the layers above represented by an unknown polynomial P such that it has non-zero non-linear components. More precisely, the function f computed by the neural network is as follows: f W (x) = P (u(w We assume that the input x is generated from the standard Gaussian distribution and there is an underlying true network (parameterized by some unknown W *) 1 from which the labels are generated. In this work we strengthen previous for one hidden layer networks to a larger class of functions representing the transform made by the upper layer functions if the lowest layer uses a high threshold (high bias term) before applying the activation: u(a − t) instead of u(a). Intuitively, a high threshold is looking for a high correlation of the input a with a direction w * i. Thus even if the function f is applying a complex transform after the first layer, the identity of these high threshold directions may be preserved in the training data generated using f.Learning with linear terms in P. Suppose P has a linear component then we show that increasing the threshold t in the lowest layer is equivalent to amplifying the coefficients of the linear part. Instead of dealing with the polynomial P it turns out that we can roughly think of it as P (µX 1, ..., µX d) where µ decreases exponentially in t (µ ≈ e −t 2). As µ decreases it has the effect of diminishing the non-linear terms more strongly so that relatively the linear terms stand out. Taking advantage of this effect we manage to show that if t exceeds a certain threshold the non linear terms drop in value enough so that the directions w i can be learned by relatively simple methods. We show that we can get close to the w i applying a simple variant of PCA. While an application of PCA can be thought of as finding principal directions as the local maxima of max ||z||=1 E[f (x)(z T x) 2 ], DISPLAYFORM0 If W * has a constant condition number then the local maxima can be used to recover directions that are transforms of w i. Theorem 1 (informal version of Claim 2, Theorem 11). 
If t > c √ log d for large enough constant c > 0 and P has linear terms with absolute value of coefficients at least 1/poly(d) and all coefficients at most O, we can recover the weight vector w i within error 1/poly(d) in time poly(d).These approximations of w i obtained collectively can be further refined by looking at directions along which there is a high gradient in f; for monotone functions we show how in this way we can recover w i exactly (or within any desired precision. Theorem 2. (informal version of Theorem 5) Under the conditions of the previous theorem, for monotone P, there exists a procedure to refine the angle to precision in time poly(1/, d) starting from an estimate that is 1/poly(d) close. The above mentioned theorems hold for u being sign and ReLU. 3 When P is monotone and u is the sign function, learning W is equivalent to learning a union of half spaces. We learn W * by learning sign of P which is exactly the union of halfspaces w T i x = t. Thus our algorithm can also be viewed as a polynomial time algorithm for learning a union of large number of half spaces that are far from the origin -to our knowledge this is the first polynomial time algorithm for this problem but with this extra requirement (see earlier work BID12 for an exponential time algorithm). Refer to Appendix B.6 for more details. Such linear components in P may easily be present: consider for example the case where P (X) = u(v T X − b) where u is say the sigmoid or the logloss function. The taylor series of such functions has a linear component -note that since the linear term in the taylor expansion of u(x) has coefficient u, for expansion of u(x−b) it will be u (−b) which is Θ(e −b) in the case of sigmoid. In fact one may even have a tower (deep network) or such sigmoid/logloss layers and the linear components will still be present -unless they are made to cancel out precisely; however, the coefficients will drop exponentially in the depth of the networks and the threshold b. Sample complexity with low thresholds and no explicit linear terms. Even if the threshold is not large or P is not monotone, we show that W * can be learned with a polynomial sample complexity (although possibly exponential time complexity) by finding directions that maximize the gradient of f. Theorem 3 (informal version of Corollary 1). If u is the sign function and w i's are orthogonal then in poly(1/, d) samples one can determine W * within precision if the coefficient of the linear terms in P (µ(X 1 + 1), µ(X 2 + 1), µ(X 3 + 1),...) is least 1/poly(d)Learning without explicit linear terms. We further provide evidence that P may not even need to have the linear terms -under some restricted cases (section 4), we show how such linear terms may implicitly arise even though they may be entirely apparently absent. For instance consider the case when P = X i X j that does not have any linear terms. Under certain additional assumptions we show that one can recover w i as long as the polynomial P (µ(X 1 + 1), µ(X 2 + 1), µ(X 3 + 1),..) (where µ is e −t has linear terms components larger than the coefficients of the other terms). Note that this transform when applied to P automatically introduces linear terms. Note that as the threshold increases applying this transform on P has the effect of gathering linear components from all the different monomials in P and penalizing the higher degree monomials. 
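The effect of this substitution is easy to check symbolically. The snippet below takes P = X1·X2, which has no linear part, and expands P(µ(X1 + 1), µ(X2 + 1)); linear terms in X1 and X2 appear with coefficient µ², which is exactly the mechanism described above (here µ stands for the small factor contributed by the high threshold).

```python
import sympy as sp

# P = X1*X2 has no linear terms, but the transform introduces them.
X1, X2, mu = sp.symbols('X1 X2 mu')
P = X1 * X2
P_transformed = sp.expand(P.subs({X1: mu * (X1 + 1), X2: mu * (X2 + 1)}))
print(P_transformed)
# mu**2*X1*X2 + mu**2*X1 + mu**2*X2 + mu**2
print(P_transformed.coeff(X1, 1).coeff(X2, 0))   # linear coefficient of X1: mu**2
```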
We show that if W * is a sparse binary matrix then we can recover W * when activation u(a) = e ρa under certain assumptions about the structure of P. When we assume the coefficients are positive then these extend for binary low l 1 -norm vectors without any threshold. Lastly, we show that for even activations (∀a, u(a) = u(−a)) under orthogonal weights, we can recover the weights with no threshold. Learning with high thresholds at deeper layers. We also point out how such high threshold layers could potentially facilitate learning at any depth, not just at the lowest layer. If there is any cut in the network that takes inputs X 1,..., X d and if the upper layers operations can be modelled by a polynomial P, then assuming the inputs X i have some degree of independence we could use this to modularly learn the lower and upper parts of the network separately (Appendix E) Related Work. Various works have attempted to understand the learnability of simple neural networks. Despite known hardness BID8; BID2, there has been an array of positive under various distributional assumptions on the input and the underlying noise in the label. Most of these works have focused on analyzing one hidden layer neural networks. A line of research has focused on understanding the dynamics of gradient descent on these networks for recovering the underlying parameters under gaussian input distribution Du et al. FIG1; BID10; BID16; BID14; BID17. Another line of research borrows ideas from kernel methods and polynomial approximations to approximate the neural network by a linear function in a high dimensional space and subsequently learning the same BID15; BID8; BID7 a). Tensor decomposition methods BID0 BID9 have also been applied to learning these simple architectures. The complexity of recovering arises from the highly non-convex nature of the loss function to be optimized. The main we extend in this work is by BID5. They learn the neural network by designing a loss function that allows a "well-behaved" landscape for optimization avoiding the complexity. However, much like most other , it is unclear how to extend to deeper networks. The only known for networks with more than one hidden layer is by BID7. Combining kernel methods with isotonic regression, they show that they can provably learn networks with sigmoids in the first hidden layer and a single unit in the second hidden layer in polynomial time. We however model the above layer as a multivariate polynomial allowing for larger representation. Another work BID1 deals with learning a deep generative network when several random examples are generated in an unsupervised setting. By looking at correlations between input coordinates they are able to recover the network layer by layer. We use some of their ideas in section 4 when W is a sparse binary matrix. Notation. We denote vectors and matrices in bold face. || · || p denotes the l p -norm of a vector. || · || without subscript implies the l 2 -norm. For matrices || · || denotes the spectral norm and || · || F denotes the forbenius norm. N (0, Σ) denotes the multivariate gausssian distribution with mean 0 and covariance Σ. For a scalar x we will use φ(x) to denote the p.d.f. of the univariate standard normal distribution with mean zero and variance 1.For a vector x we will use φ(x) to denote the p.d.f. of the multivariate standard normal distribution with mean zero and variance 1 in each direction. Φ denotes the c.d.f. of the standard gausssian distribution. Also define Φ c = 1 − Φ. 
Let h i denote the ith normalized Hermite polynomial Wikipedia contributors. For a function f, letf i denote the ith coefficient in the hermite expansion of f, that is, DISPLAYFORM1 For a given function f computed by the neural network, we assume that the training samples (x, y) are such that x ∈ R n is distributed according to N and label has no noise, that is, y = f (x).Note: Most proofs are deferred to the Appendix due to lack of space. In this section we consider the case when P has a positive linear component and we wish to recover the parameters of true parameters W *. The algorithm has two-steps: 1) uses existing one-hidden layer learning algorithm (SGD on carefully designed loss BID5) to recover an approximate solution, 2) refine the approximate solution by performing local search (for monotone P). The intuition behind the first step is that high thresholds enable P to in expectation be approximately close to a one-hidden-layer network which allows us to transfer algorithms with approximate guarantees. Secondly, with the approximate solutions as starting points, we can evaluate the closeness of the estimate of each weight vector to the true weight vector using simple correlations. The intuition of this step is to correlate with a function that is large only in the direction of the true weight vectors. This equips us with a way to design a local search based algorithm to refine the estimate to small error. For simplicity in this section we will work with P where the highest degree in any X i is 1. The degree of the overall polynomial can still be n. See Appendix B.8 for the extension to general P. More formally, Assumption 1 (Structure of network). We assume that P has the following structure DISPLAYFORM0 to be the linear part of f.Next we will upper bound expected value of u(x): for "high-threshold" ReLU, that is, DISPLAYFORM1 2σ 2 (see Lemma 10). We also get a lower bound on |û 4 | in terms of ρ (t, σ) 5 This enables us to make the following assumption. Assumption 2. Activation function u is a positive high threshold activation with threshold t, that is, the bias term is t. DISPLAYFORM2 where ρ is a positive decreasing function of t. Also, DISPLAYFORM3 Assumption 3 (Value of t). t is large enough such that ρ(t, ||W DISPLAYFORM4 with for large enough constant η > 0 and p ∈.For example, for high threshold ReLU, ρ(t, 1) = e −t 2 /2 and µ = ρ(t, ||W DISPLAYFORM5, thus t = √ 2η log d for large enough d suffices to get the above assumption (κ(W *) is a constant).These high-threshold activation are useful for learning as in expectation, they ensure that f is close to f lin since the product terms have low expected value. This is made clear by the following lemmas: Lemma 1. For |S| > 1, under Assumption 2 we have, DISPLAYFORM6. Under Assumptions 1, 2 and 3, if t is such that dρ(t, ||W * ||) ≤ c for some small enough constant c > 0 we have, DISPLAYFORM7 Note: We should point out that f (x) and f lin (x) are very different point wise; they are just close in expectation under the distribution of x. In fact, if d is some constant then even the difference in expectation is some small constant. This closeness suggests that algorithms for recovering under the labels from f lin can be used to recover with labels from f approximately. Learning One Layer Neural Networks using Landscape Design. BID5 proposed an algorithm for learning one-hidden-layer networks. Intuitively, the approach of BID5 is to design a well behaved loss function based on correlations to recover the underlying weight vectors. 
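The Hermite quantities appearing in Assumption 2 and Lemma 10 can be checked numerically. The sketch below builds the normalized probabilists' Hermite polynomials h_i = He_i/√(i!) and Monte-Carlo estimates û_i = E_{g∼N(0,1)}[u(g) h_i(g)] for the high-threshold ReLU; the sample size and thresholds are arbitrary illustrative choices.

```python
import numpy as np
from math import factorial

def normalized_hermite(x, i):
    """h_i(x) = He_i(x) / sqrt(i!), the normalized probabilists' Hermite polynomial."""
    coeffs = np.zeros(i + 1)
    coeffs[i] = 1.0
    return np.polynomial.hermite_e.hermeval(x, coeffs) / np.sqrt(factorial(i))

def hermite_coefficient(u, i, n_samples=2_000_000, seed=0):
    """Monte-Carlo estimate of u_i = E_{g~N(0,1)}[u(g) h_i(g)]."""
    g = np.random.default_rng(seed).standard_normal(n_samples)
    return np.mean(u(g) * normalized_hermite(g, i))

# High-threshold ReLU u_t(a) = max(0, a - t); as t grows, |u_4| decays
# roughly like e^{-t^2/2} (cf. Lemma 10 above), which is what makes f
# close to f_lin in expectation.
for t in [1.0, 2.0, 3.0]:
    u_t = lambda a, t=t: np.maximum(0.0, a - t)
    print(t, hermite_coefficient(u_t, 4))
```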
They show that the local minima of the following optimization corresponds to some transform of each of the w * i -thus it can be used to recover a transform of w * i, one at a time. max DISPLAYFORM8 which they optimize using the Lagrangian formulation (viewed as a minimization): DISPLAYFORM9 where DISPLAYFORM10 and DISPLAYFORM11 (see Appendix A.1 for more details). Using properties 4 We can handle DISPLAYFORM12 for some constant C by changing the scaling on t. 5 For similar bounds for sigmoid and sign refer to Appendix B.7.We previously showed that f is close to f lin in expectation due to the high threshold property. This also implies that G lin and G are close and so are the gradients and (eignevalues of) hessians of the same. This closeness implies that the landscape properties of one approximately transfers to the other function. More formally, Theorem 4. Let Z be an (, τ)-local minimum of function A. If ||∇(B −A)(Z)|| ≤ ρ and ||∇ 2 (B − A)(Z)|| ≤ γ then Z is an (+ ρ, τ + γ)-local minimum of function B and vice-versa. We will now apply above lemma on our G lin (z) and G(z). DISPLAYFORM13 where w i are columns of (TW *) −1 (ignoring log d factors).Note: For ReLU, setting t = √ C log d for large enough C > 0 we can get closeness 1/poly(d) to the columns of (TW *) −1. Refer Appendix B.7 for details for sigmoid. The paper BID5 also provides an alternate optimization that when minimized simultaneously recovers the entire matrix W * instead of having to learn columns of (TW *) −1 separately. We show how applying our methods can also be applied to that optimization in Appendix B.4 to recover W * by optimizing a single objective. Assuming P is monotone, we can show that the approximate solution from the previous analysis can be refined to arbitrarily closeness using a random search method followed by approximately finding the angle of our current estimate to the true direction. The idea at a high level is to correlate with δ (z T x − t) where δ is the Dirac delta function. It turns out that the correlation is maximized when z is equal to one of the w i. Correlation with δ (z T x−t) is checking how fast the correlation of f with δ(z T x−t) is changing as you change t. To understand this look at the case when our activation u is the sign function then note that correlation of u t (w T x − t) with δ (w T x − t) is very high as its correlation with δ(w T x − t) is 0 when t < t and significant when t > t. So as we change t' slightly from t − to t + there is a sudden increase. If z and w differ then it can be shown that correlation of u t (w T x − t) with δ (z T x − t) essentially depends on cot(α) where α is the angle between w and z (for a quick intuition note that one can DISPLAYFORM0 . See Lemma 16 in Appendix). In the next section we will show how the same ideas work for non-monotone P even if it may not have any linear terms but we only manage to prove polynomial sample complexity for finding w instead of polynomial time complexity. In this section we will not correlate exactly with δ (z T x − t) but instead we will use this high level idea to estimate how fast the correlation with δ(z T x − t) changes between two specific values as one changes t, to get an estimate for cot(α). Secondly since we can't to a smooth optimization over z, we will do a local search by using a random perturbation and iteratively check if the correlation has increased. 
We can assume that the polynomial P doesn't have a constant term c 0 as otherwise it can easily be determined and cancelled out 6.We will refine the weights one by one. WLOG, let us assume that w * 1 = e 1 and we have z such that DISPLAYFORM1 Algorithm 1 RefineEstimate 1: Run EstimateT anAlpha on z to get s = tan(α) where α is the angle between z and w * 1. 2: Perturb current estimate z by a vector along the d − 1 dimensional hyperplane normal to z with the distribution n(0, DISPLAYFORM2 Run EstimateT anAlpha on z to get s = tan(α) where α is the angle between z and w * DISPLAYFORM3 Algorithm 2 EstimateTanAlpha 1: Find t 1 and t 2 such that P r[sgn(f (x))|x ∈ l(z, t,)] at t 1 is 0.4 and at t 2 is 0.6. 2: Return t2−t1 DISPLAYFORM4 The algorithm (Algorithm 1) estimates the angle of the current estimate with the true vector and then subsequently perturbs the vector to get closer after each successful iteration. Theorem 5. Given a vector z ∈ S d−1 such that it is 1/poly(d)-close to the underlying true vector DISPLAYFORM5 We prove the correctness of the algorithm by first showing that EstimateT anAlpha gives a multiplicative approximation to tan(α). The following lemma captures this property. Lemma 3. EstimateT anAlpha(z) outputs y such that y = (1 ± O(η)) tan(α) where α is the angle between z and w * 1.Proof. We first show that the given probability when computed with sgn(x T w * 1 −t) is a well defined function of the angle between the current estimate and the true parameter up to multiplicative error. Subsequently we show that the computed probability is close to the one we can estimate using f (x) since the current estimate is close to one direction. The following two lemmas capture these properties. Lemma 4. For t, t and ≤ 1/t, we have DISPLAYFORM6 6 for example with RELU activation, f will be c0 most of the time as other terms in P will never activate. So c0 can be set to say the median value of f.Using the above, we can show that, DISPLAYFORM7 where η 1, η 2 > 0 are the noise due to estimating using f and DISPLAYFORM8 The following lemma bounds the range of t 1 and t 2.Lemma 6. We have 0 ≤ t 1 ≤ t 2 ≤ t cos(α1).Thus, we have, DISPLAYFORM9 as long as η 2 +Ot 2 ≤ c for some constant c > 0. Thus, we can get a multiplicative approximation to tan(α) up to error η (can be chosen to make its contribution smaller than η).Finally we show (proof in Appendix ??) that with constant probability, a random perturbation reduces the angle by a factor of (1 − 1/d) of the current estimate hence the algorithm will halt after DISPLAYFORM10 Lemma 7. By applying a random Gaussian perturbation along the d − 1 dimensional hyperplane normal to z with the distribution n(0, Θ(α/d)) d−1 and scaling back to the unit sphere, with constant probability, the angle α (< π/2) with the fixed vector decreases by at least Ω(α/d). We extend the methods of the previous section to a broader class of polynomials but only to obtain in terms of sample complexity. The main idea as in the previous section is to correlate with δ (z T x−t) (the derivative of the dirac delta function) and find arg max ||z||2=1 E[f (x)δ (z T x−t)]. We will show that the correlation goes to infinity when z is one of w * i and bounded if it is far from all of them. From a practical standpoint we calculate δ (z T x − s) by measuring correlation with DISPLAYFORM0, as in the previous section, for an even smaller; however, for ease of exposition, in this section, we will assume that correlations with δ(z T x − s) can be measured exactly. 
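The geometric step behind RefineEstimate and Lemma 7 can be sketched as follows: perturb the current estimate by Gaussian noise restricted to the hyperplane normal to z, with scale proportional to α/d, renormalize, and keep the move only if the estimated angle decreased. In the algorithm above the angle is obtained from f via EstimateTanAlpha; the oracle angle_estimator below stands in for that procedure, and the demonstration at the end uses the true direction only to illustrate the geometry.

```python
import numpy as np

def refine_direction(z, angle_estimator, n_iters=2000, seed=0):
    """Perturb z along the hyperplane normal to z with scale ~ alpha/d,
    renormalize, and accept the move if the estimated angle shrank."""
    rng = np.random.default_rng(seed)
    d = z.size
    z = z / np.linalg.norm(z)
    alpha = angle_estimator(z)
    for _ in range(n_iters):
        noise = rng.normal(scale=alpha / d, size=d)
        noise -= (noise @ z) * z                 # project onto the hyperplane normal to z
        candidate = z + noise
        candidate /= np.linalg.norm(candidate)   # scale back to the unit sphere
        new_alpha = angle_estimator(candidate)
        if new_alpha < alpha:                    # keep only improving perturbations
            z, alpha = candidate, new_alpha
    return z

# Illustration with a known target direction standing in for EstimateTanAlpha:
d = 50
w_star = np.eye(d)[0]
true_angle = lambda v: np.arccos(np.clip(v @ w_star, -1.0, 1.0))
z0 = w_star + 0.1 * np.random.default_rng(1).normal(size=d)
z_refined = refine_direction(z0, true_angle)
print(true_angle(z_refined))   # should be much smaller than the initial angle
```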
DISPLAYFORM1 If u = sgn then P has degree at most 1 in each X i. Let ∂P ∂Xi denote the symbolic partial derivative of P with respect to X i; so, it drops monomials without X i and factors off X i from the remaining ones. Let us separate dependence on X i in P as follows: DISPLAYFORM2 We will overload the polynomial P such that P [x] to denote the polynomial computed by substituting X i = u((w * 1) T x) and similarly for Q and R. Under this notation f (x) = P [x]. We will also assume that |P (X)| ≤ ||X|| O = ||X|| c1 (say). By using simple correlations we will show: DISPLAYFORM3 ) samples one can determine the w * i's within error 2. Note that if all the w * i's are orthogonal then X i are independent and E Q i [x] (w * i) T x = t is just value of Q i evaluated by setting X i = 1 and setting all the the remaining X j = µ where µ = E[X j]. This is same as 1/µ times the coefficient of X i in P (µ(X 1 + 1),..., µ(X d + 1)). ) one can determine W * within error 2 in each entry, if the coefficient of the linear terms in DISPLAYFORM0 The main point behind the proof of Theorem 6 is that the correlation is high when z is along one of w * i and negligible if it is not close to any of them. DISPLAYFORM1. Otherwise if all angles α i between z and w * i are at least 2 it is at most DISPLAYFORM2 We will use the notation g(x) x=s to denote g(x) evaluated at x = s. Thus Cauchy's mean value theorem can be stated as g(DISPLAYFORM3 . We will over load the notation a bit: φ(z T x = s) will denote the probability density that vz T x = s; so if z is a unit vector this is just φ(s); φ(z DISPLAYFORM4 denotes the probability density that both z DISPLAYFORM5 The following claim interprets correlation with δ(z T x − s) as the expected value along the corresponding plane z T x = s. DISPLAYFORM6 The following claim computes the correlation of P with δ (z T x − s). DISPLAYFORM7 We use this to show that the correlation is bounded if all the angles are lower bounded. Claim 5. If P (X) ≤ ||X|| c1 and if z has an angle of at least 2 with all the w * DISPLAYFORM8 Above claims can be used to prove main Lemma 8. Refer to the Appendix C for proofs. Proof of Theorem 6. If we wish to determine w * i within an angle of accuracy 2 let us set to be O(3 2 φ(t)d −c ). From Lemma 8, for some large enough c, this will ensure that if all α i > 2 the correlation is o(φ(t) 3 ). Otherwise it is φ(t) 3 (1±o). Since φ(t) = poly(1/d), given poly samples, we can test if a given direction is within accuracy 2 of a w * i or not. Under additional structural assumptions on W * such as the weights being binary, that is, in {0, 1}, sparsity or certain restrictions on activation functions, we can give stronger recovery guarantees. Proofs have been deferred to Appendix D.Theorem 7. For activation u t (a) = e ρ(a−t). Let the weight vectors w * i be 0, 1 vectors that select the coordinates of x. For each i, there are exactly d indices j such that w ij = 1 and the coefficient of the linear terms in P (µ(X 1 + 1), µ(X 2 + 1), µ(X 3 + 1),..) for µ = e −ρt is larger than the coefficient of all the product terms (constant factor gap) then we can learn the W *.In order to prove the above, we will construct a correlation graph over x 1,..., x n and subsequently identify cliques in the graph to recover w * i' s. With no threshold, recovery is still possible for disjoint, low l 1 -norm vector. The proof uses simple correlations and shows that the optimization landscape for maximizing these correlations has local maximas being w * i' s. Theorem 8. 
For activation u(a) = e a. If all w * i ∈ {0, 1} n are disjoint, then we can learn w * i as long as P has all positive coefficients and product terms have degree at most 1 in each variable. For even activations, it is possible to recover the weight vectors even when the threshold is 0. The technique used is the PCA like optimization using hermite polynomials as in Section 2. Denote DISPLAYFORM0 Theorem 9. If the activation is even and for every i, j: DISPLAYFORM1 u0û4 C({i, j},û 0 ) then there exists an algorithm that can recover the underlying weight vectors. In this work we show how activations in a deep network that have a high threshold make it easier to learn the lowest layer of the network. We show that for a large class of functions that represent the upper layers, the lowest layer can be learned with high precision. Even if the threshold is low we show that the sample complexity is polynomially bounded. An interesting open direction is to apply these methods to learn all layers recursively. It would also be interesting to obtain stronger if the high thresholds are only present at a higher layer based on the intuition we discussed. Hermite polynomials form a complete orthogonal basis for the gaussian distribution with unit variance. For more details refer to Wikipedia contributors. Let h i be the normalized hermite polynomials. They satisfy the following, DISPLAYFORM0 This can be extended to the following:Fact 2. For a, b with marginal distribution N and correlation ρ, DISPLAYFORM1 Consider the following expansion of u into the hermite basis (h i), DISPLAYFORM2 Proof. Observe that v T x and w T x have marginal distribution N and correlation v T w. Thus using Fact 2, DISPLAYFORM3 For gaussians with mean 0 and variance σ 2 define weighted hermite polynomials H σ l (a) = |σ| l h l (a/σ). Given input v T x for x ∼ N (0, I), we suppress the superscript σ = ||v||.Corollary 2. For a non-zero vector v (not necessarily unit norm) and a unit norm vector w, DISPLAYFORM4 Proof. It follows as the proof of the previous lemma, DISPLAYFORM5 Consider matrix A ∈ R m×m. Let σ i (A) to be the ith singular value of A such that DISPLAYFORM0 Fact 7. Let B be a (mk) × (mk) principal submatrix of A, then κ(B) ≤ κ(A). Lemma 10. For u being a high threshold ReLU, that is, u t (a) = max(0, a − t) we have for t ≥ C for large enough constant DISPLAYFORM0 Proof. We have DISPLAYFORM1 Also,û DISPLAYFORM2 To upper bound,û DISPLAYFORM3 Similar analysis holds forû 2.Observe that sgn can be bounded very similarly replacing g − t by 1 which can affect the bounds up to only a polynomial in t factor. Lemma 11. For u being a high threshold sgn, that is, u t (a) = sgn(a − t) we have for t ≥ C for DISPLAYFORM4 For sigmoid, the dependence varies as follows: Lemma 12. For u being a high threshold sigmoid, that is, u t (a) = 1 1+e −(a−t) we have for t ≥ C for large enough constant DISPLAYFORM5 Proof. We have DISPLAYFORM6 Also,û DISPLAYFORM7 = Ω(e −t).We can upper bound similarly and boundû 2. Let us consider the linear case with w * i's are orthonormal. Consider the following maximization problem for even l ≥ 4, max DISPLAYFORM0 where h l is the lth hermite polynomial. Then we have, DISPLAYFORM1 It is easy to see that for z ∈ S n−1, the above is maximized at exactly one of the w i's (up to sign flip for even l) for l ≥ 3 as long as u l = 0. Thus, each w i is a local minima of the above problem. DISPLAYFORM2 For constraint ||z|| 2 = 1, we have the following optimality conditions (see BID11 for more details). 
For all w = 0 such that w DISPLAYFORM3 For our function, we have: DISPLAYFORM4 The last follows from using the first order condition. For the second order condition to be satisfied we will show that |S| = 1. Suppose |S| > 2, then choosing w such that w i = 0 for i ∈ S and such that w T z = 0 (it is possible to choose such a value since |S| > 2), we get w T (∇ 2 L(z) − 2λI)w = 2(l − 2)λ||w|| 2 which is negative since λ < 0, thus these cannot be global minima. However, for |S| = 1, we cannot have such a w, since to satisfy w T z = 0, we need w i = 0 for all i ∈ S, this gives us w T (∇ 2 L(z) − 2λI)w = −2λ||w|| 2 which is always positive. Thus z = ±e i are the only local minimas of this problem. Lemma 13 BID5 ). If z is an (, τ)-local minima of F (z) = − i α i z • (Derived from Proposition 5.7) z max = ±1 ± O(dτ /α min) ± O(/λ) where |z| max is the value of the largest entry in terms of magnitude of z. Proof of Lemma 1. Let O ∈ R d×d be the orthonormal basis (row-wise) of the subspace spanned by w * i for all i ∈ [d] generated using Gram-schmidt (with the procedure done in order with elements of |S| first). Now let O S ∈ R |S|×d be the matrix corresponding to the first S rows and let O ⊥ S ∈ R (d−|S|)×n be that corresponding to the remaining rows. Note that OW * (W * also has the same ordering) is an upper triangular matrix under this construction. DISPLAYFORM0 Now observe that O S W * S is also an upper triangular matrix since it is a principal sub-matrix of OW *. Thus using Fact 6 and 7, we get the last equality. Also, the single non-zero entry row has non-zero entry being 1 (||w * i || = 1 for all i). This gives us that the inverse will also have the single non-zero entry row has non-zero entry being 1. WLOG assume index 1 corresponds to this row. Thus we can split this as following DISPLAYFORM1 Proof of Claim 1. Consider the SVD of matrix M = UDU T. Let W = UD −1/2 and y i = √ c i W T w * i for all i. It is easy to see that y i are orthogonal. Let F (z) = G(Wz): DISPLAYFORM2 Since y i are orthogonal, for means of analysis, we can assume that y i = e i, thus the formulation reduces to max z |û 4 | i 1 ci (z i) 4 − λ ||z|| 2 − 1 2 up to scaling of λ = λû 2 2. Note that this is of the form in Lemma 13 hence using that we can show that the approximate local minimas of F (z) are close to y i and thus the local maximas of G(z) are close to DISPLAYFORM3 due to the linear transformation. This can alternately be viewed as the columns of (TW DISPLAYFORM4 Proof of Theorem 4. Let Z be an (, τ)-local minimum of A, then we have ||∇A(Z)|| ≤ and DISPLAYFORM5 Also observe that DISPLAYFORM6 Here we use |λ min (M)| ≤ ||M|| for any symmetric matrix. To prove this, we have ||M|| = max x∈S n−1 ||Mx||. We have x = i x i v i where v i are the eigenvectors. Thus we have Mx = DISPLAYFORM7 Proof of Lemma 2. Expanding f, we have DISPLAYFORM8 Proof. We have DISPLAYFORM9, for c = Θ(√ η log d we get the required . Lemma 15. For ||z|| = Ω and λ = Θ(|û 4 |/û DISPLAYFORM0 Proof. Let K = κ(W *) which by assumption is θ. We will argue that local minima of G cannot have z with large norm. First lets argue this for G lin (z). We know that DISPLAYFORM1 2 where α = |û 4 | and β =û 2. We will argue that z T ∇G lin (z) is large if z is large. DISPLAYFORM2 Let y = W * z then K||z|| ≥ ||y|| ≥ ||z||/K since K is the condition number of W *. Then this implies DISPLAYFORM3 Now we need to argue for G. DISPLAYFORM4 We know that E[f lin (x)h 2 (z T x/||z||)] has a factor of β giving us using Lemma 14: DISPLAYFORM5 Proof of Claim 2. 
We have G − G lin as follows, DISPLAYFORM6 Thus we have, DISPLAYFORM7 Observe that H 2 and H 4 are degree 2 and 4 (respectively) polynomials thus norm of gradient and hessian of the same can be bounded by at most O(||z||||x|| 4). Using Lemma 14 we can bound each term by roughly O(log d)d −(1+p)η+3 ||z|| 4. Note that λ being large does not hurt as it is scaled appropriately in each term. Subsequently, using Lemma 15, we can show that ||z|| is bounded by a constant since ||G(z)|| ≤ d −2η. Similar analysis holds for the hessian too. ≥. Now using Claim 1, we get the required . BID5 also showed simultaneous recovery by minimizing the following loss function G lin defined below has a well-behaved landscape. They gave the following . Theorem 10 (We show that this minimization is robust. Let us consider the corresponding function G to G lin with the additional non-linear terms as follows: DISPLAYFORM0 Now we can show that G and G lin are close as in the one-by-one case. DISPLAYFORM1 Using similar analysis as the one-by-one case, we can show the required closeness. It is easy to see that ||∇L|| and ||∇ 2 L|| will be bounded above by a constant degree polynomial in DISPLAYFORM2 No row can have large weight as if any row is large, then looking at the gradient for that row, it reduces to the one-by-one case, and there it can not be larger than a constant. Thus we have the same closeness as in the one-by-one case. Combining this with Theorem 10 and 4, we have the following theorem:Theorem 11. Let c be a sufficiently small universal constant (e.g. c = 0.01 suffices), and under Assumptions 1, 2 and 3. Assume γ ≤ c, λ = Θ(d η), and W * be the true weight matrix. The function G satisfies the following 1. Any saddle point W has a strictly negative curvature in the sense that DISPLAYFORM3 −Ω )-approximate local minimum, then W can be written as DISPLAYFORM4 )}, P is a permutation matrix, and the error term ||E|| ≤ O(log d)d−Ω.Using standard optimization techniques we can find a local minima. Lemma 16. If u is the sign function then E[u(w T x)δ (z T x)] = c| cot(α)| where w, z are unit vectors and α is the angle between them and c is some constant. Proof. WLOG we can work the in the plane spanned by z and w and assume that z is the vector i along and w = i cos α + j sin α. Thus we can replace the vector x by ix + jy where x, y are normally distributed scalars. Also note that u = δ (Dirac delta function). DISPLAYFORM0 Using the fact that x δ (x)h(x)dx = h this becomes DISPLAYFORM1 Substituting s = y sin α this becomes DISPLAYFORM2 Proof of Lemma 4. Let us compute the probability of lying in the -band for any t: DISPLAYFORM3 where the last equality follows from the mean-value theorem for somet ∈ [t −, t].Next we compute the following: DISPLAYFORM4 where the last equality follows from the mean-value theorem for some t * ∈ [t −, t]. Combining, we get: DISPLAYFORM5 Proof of Lemma 5. Recall that P is monotone with positive linear term, thus for high threshold u (0 unless input exceeds t and positive after) we have sgn(f (x)) = ∨sgn(x T w * i − t). This is because, for any i, P applied to X i > 0 and ∀j = i, X j = 0 gives us c i which is positive. Also, P = 0. Thus, sgn(P) is 1 if any of the inputs are positive. Using this, we have, DISPLAYFORM6 We will show that η is not large since a z is close to one of the vectors, it can not be close to the others thus α i will be large for all i = j. Let us bound η, DISPLAYFORM7 DISPLAYFORM8 | sin(αi)|. The above follows since γ i ≥ 0 by assumption on t. 
Under the assumption, let β = max i =1 cos(α i) we have DISPLAYFORM9 under our setting. Thus we have, DISPLAYFORM10 for small enough.Proof of Lemma 6. Let us assume that < c/t for sufficiently small constant c, then we have that DISPLAYFORM11 Similarly for t 1. Now we need to argue that t 1, t 2 ≥ 0. Observe that DISPLAYFORM12 Thus for sufficiently large t = Ω(√ log d), this will be less than 0.4. Hence there will be some t 1, t 2 ≥ 0 with probability evaluating to 0.4 since the probability is an almost increasing function of t up to small noise in the given range (see proof of Lemma 5).Proof of Lemma 7. Let V be the plane spanned by w * 1 and z and let v 1 = w * 1 and v 2 be the basis of this space. Thus, we can write z = cos(α)v 1 + sin(α)v 2.Let us apply a Gaussian perturbation ρ along the tangential hyperplane normal to z. Say it has distribution N along any direction tangential to the vector z. Let 1 be the component of ρ on to V and let 2 be the component perpendicular to it. We can write the perturbation as ρ = 1 (sin(α)v 1 − cos(α)v 2 ) + 2 v 3 where v 3 is orthogonal to both v 1 and v 2. So the new angle α of z after the perturbation is given by DISPLAYFORM13 Note that with constant probability 1 ≥ as ρ is a Gaussian variable with standard deviation. And with high probability ||ρ|| < O(DISPLAYFORM14 Thus with constant probability: DISPLAYFORM15 Thus change in cos(α) is given by ∆ cos(α) ≥ Ω(sin(α)). Now change in the angle α satisfies by the Mean Value Theorem: DISPLAYFORM16 Theorem 12. Given non-noisy labels from a union of halfspaces that are at a distance Ω(√ log d) and are each a constant angle apart, there is an algorithm to recover the underlying weights to closeness in polynomial time. Proof. Observe that X i is equivalent to P (X 1, ·, DISPLAYFORM0 Since P and sgn here satisfies our assumptions 1, 2, for t = Ω( √ log d) (see Lemma 11) we can apply Theorem 11 to recover the vectors w * i approximately. Subsequently, refining to arbitrarily close using Theorem 5 is possible due to the monotonicity. Thus we can recover the vectors to arbitrary closeness in polynomial time. Observe that for sigmoid activation, Assumption 2 is satisfied for ρ(t, σ) = e −t+σ 2 /2. Thus to satisfy Assumption 3, we need t = Ω(η log d).Note that for such value of t, the probability of the threshold being crossed is small. To avoid this we further assume that f is non-negative and we have access to an oracle that biases the samples towards larger values of f; that after x is drawn from the Gaussian distribution, it retains the sample (x, f (x)) with probability proportional to f (x) -so P r [x] in the new distribution. This enables us to compute correlations even if E xÑ (0,I [f (x)] is small. In particular by computing E[h(x)] from this distribution, we are obtaining E[f (x)h(x)]/E[f (x)] in the original distribution. Thus we can compute correlations that are scaled. We get our approximate theorem: Theorem 13. For t = Ω(log d), columns of (TW *) −1 can be recovered within error 1/poly(d) using the algorithm in polynomial time. In the main section we assumed that the polynomial has degree at most 1 in each variable. Let us give a high level overview of how to extend this to the case where each variable is allowed a large degree. P now has the following structure, DISPLAYFORM0 If P has a higher degree in X i then Assumption 2 changes to a more complex (stronger) condition. Let q i (x) = r∈Z d + |∀j =i,rj =0 c r x ri, that is q i is obtained by setting all X j for j = i to 0. 
DISPLAYFORM1 The last assumption holds for the case when the degree is a constant and each coefficient is upper bounded by a constant. It can hold for decaying coefficients. Let us collect the univariate terms DISPLAYFORM2 Corresponding to the same we get f uni. This will correspond to the f lin we had before. Note that the difference now is that instead of being the same activation for each weight vector, now we have different ones q i for each. Using H 4 correlation as before, now we get that: DISPLAYFORM3 where q i • u t are hermite coefficients for q i • u t. Now the assumption guarantees that these are positive which is what we had in the degree 1 case. Second we need to show that even with higher degree, E[|f (x) − f uni (x)|] is small. Observe that Lemma 17. For r such that ||r|| 0 > 1, under Assumption 4 we have, DISPLAYFORM4 The proof essentially uses the same idea, except that now the dependence is not on ||r|| 1 but only the number of non-zero entries (number of different weight vectors). With this bound, we can now bound the deviation in expectation. DISPLAYFORM5 Proof. We have, DISPLAYFORM6 |c r |ρ(t, 1) (ρ(t, ||W * ||)) DISPLAYFORM7 Thus as before, if we choose t appropriately, we get the required . Similar ideas can be used to extend to non-constant degree under stronger conditions on the coefficients. Proof of Lemma 3. DISPLAYFORM0 dx Let x 0 be the component of x along z and y be the component along z ⊥. So x = x 0ẑ + yz ⊥. Interpreting x as a function of x 0 and y: DISPLAYFORM1 where the second equality follows from DISPLAYFORM2 Proof of Claim 4. Let x 0 be the component of x along z and y be the component of x in the space orthogonal to z. Letẑ denote a unit vector along z. We have x = x 0ẑ + y and ∂x ∂x0 =ẑ. So, correlation can be computed as follows: DISPLAYFORM3 ](x = a) this implies: DISPLAYFORM4 If u is the sign function then u (x) = δ(x). So focusing on one summand in the sum we get DISPLAYFORM5 Again let y = y 0 (w * i) + z where z is perpendicular to w * i and z. And (w * i) is perpendicular component of w * i to z. Interpreting x = tẑ + y 0 (w * 1) + z as a function of y 0, z we get: DISPLAYFORM6 Note that by substituting v = ax we get DISPLAYFORM7. So this becomes: DISPLAYFORM8 Under review as a conference paper at ICLR 2019 DISPLAYFORM9 Let α i be the angle between z and w * i. Then this is DISPLAYFORM10 Thus, overall correlation DISPLAYFORM11 Proof of Claim 5. Note that for small α, DISPLAYFORM12 which is a decreasing function of α i in the range [0, π] So if all α i are upper bounded by 2 then by above corollary, DISPLAYFORM13 Observe that the above proof does not really depend on P and holds for for any polynomial of u((w * i) T x) as long as the polynomial is bounded and the w * i are far off from z. DISPLAYFORM14 Since u((w * i) T x) = 0 for z T x = t − and 1 for z T x = t +, and using the Cauchy mean value theorem for the second term this is DISPLAYFORM15 The last step follows from Claim 5 applied on Q i and R i as all the directions of w * j are well separated from z = w * i and w * i is absent from both Q i and R i. Also the corresponding Q i and R i are bounded. If u is the RELU activation, the high level idea is to use correlation with the second derivative δ of the Dirac delta function instead of δ. More precisely we will compute DISPLAYFORM0 Although we show the analysis only for the RELU activation, the same idea works for any activation that has non-zero derivative at 0.Note that now u = sgn and u = δ. 
For ReLU activation, Lemma 8 gets replaced by the following Lemma. The rest of the argument is as for the sgn activation. We will need to assume that P has constant degree and sum of absolute value of all coefficients is poly(d) Lemma 19. Assuming polynomial P has constant degree, and sum of the magnitude of all coefficients is at most DISPLAYFORM1. Otherwise if all angles α i between z and w i are at least 2 it is at most DISPLAYFORM2 We will prove the above lemma in the rest of this section. First we will show that z is far from any of the w * DISPLAYFORM3 Lemma 20. If the sum of the absolute value of the coefficients of P is bounded by poly(d), its degree is at most constant, DISPLAYFORM4 Proof. Let x 0 be the component of x along z and y be the component of x in the space orthogonal to z as before. We have x = x 0ẑ + y and ∂x ∂x0 =ẑ. We will look at monomials M l in P = l M l. As before since x δ (x − a)f (x)dx = To construct this correlation graph, we will run the following Algorithm 3 Denote T i:= {j : w ij = 1}. Let us compute E[f (x)x i x j ]: DISPLAYFORM5 c S e −ρt|S| E e ρ p∈S x T w * p x i x j Lemma 22. At the local maxima, for all i ∈ [n], z is such that for all j, k ∈ S i, z j = z k at local maxima. Proof. We prove by contradiction. Suppose there exists j, k such that z j < z k. Consider the following perturbation: z + (z k − z j)(e j − e k) for 1 ≤ > 0. Observe that g(z) depends on only r∈Si z r and since that remains constant by this update g(z) does not change. Also note that ||z|| 1 does not change. However ||z|| 2 2 decreases by 2 (1 −)(z k − z j) 2 implying that overall h(z) increases. Thus there is a direction of improvement and thus it can not be a local maxima. Lemma 23. At the local maxima, ||z|| 1 ≥ α for λ < S c S |∪ i∈S Si| n e |∪ i∈S S i | 2 − γ(2α + 1).Proof. We prove by contradiction. Suppose ||z|| 1 < α, consider the following perturbation, z + 1.Then we have h(z + 1) − h(z) = c S e |∪ i∈S S i | 2 + p∈∪ i∈S S izp (e |∪ i∈S Si| − 1) − nλ − nγ (2||z|| 1 +) > c S e |∪ i∈S S i | 2 | ∪ i∈S S i | − nλ − nγ (2α + 1)For given λ there is a direction of improvement giving a contradiction that this is the local maxima. Combining the above, we have that we can choose λ, γ = poly(n, 1/, s) where s is a paramater that depends on structure of f such that at any local maxima there exists i such that for all j ∈ S i, z j ≥ 1 and for all k ∈ ∪ j∈Si, z k ≤. Let us consider correlation with h 4 (z T x). This above can be further simplified by observing that when we correlate with h 4, i∈S u (x i)h 4 (z T x) = 0 for |S | ≥ 2. Observe that h 4 (z T x) = d1,...,dn∈: di≤4 c(d 1, . . ., d n) h di (x i) for some coefficients c which are functions of z. Thus when we correlate i∈S u (x i)h 4 (z T x) for |S | ≥ 3 then we can only get a non-zero term if we have at least h 2k (x i) with k ≥ 1 for all i ∈ S. This is not possible for |S | ≥ 3, hence, these terms are 0. Thus, DISPLAYFORM6 Lets compute these correlations. DISPLAYFORM7
We provably recover the lowest layer in a deep neural network assuming that the lowest layer uses a "high threshold" activation and the above network is a "well-behaved" polynomial.
750
scitldr
Federated learning involves training and effectively combining machine learning models from distributed partitions of data (i.e., tasks) on edge devices, and be naturally viewed as a multi- task learning problem. While Federated Averaging (FedAvg) is the leading optimization method for training non-convex models in this setting, its behavior is not well understood in realistic federated settings when the devices/tasks are statistically heterogeneous, i.e., where each device collects data in a non-identical fashion. In this work, we introduce a framework, called FedProx, to tackle statistical heterogeneity. FedProx encompasses FedAvg as a special case. We provide convergence guarantees for FedProx through a device dissimilarity assumption. Our empirical evaluation validates our theoretical analysis and demonstrates the improved robustness and stability of FedProx for learning in heterogeneous networks. Large networks of remote devices, such as phones, vehicles, and wearable sensors, generate a wealth of data each day. Federated learning has emerged as an attractive paradigm to push the training of models in such networks to the edge . In such settings, the goal is to jointly learn over distributed partitions of data/tasks, where statistical heterogeneity and systems constraints present significant challenges. Optimization methods that allow for local updating and low participation have become the de facto solvers for federated learning . These methods perform a variable number of local updates on a subset of devices to enable flexible and efficient communication. Of current federated optimization methods, FedAvg has become state-of-the-art for non-convex federated learning. However, FedAvg was not designed to tackle the statistical heterogeneity which is inherent in federated settings; namely, that data may be non-identically distributed across devices. In realistic statistically heterogeneous settings, FedAvg has been shown to diverge empirically (, Sec 3), and it also lacks theoretical convergence guarantees. Indeed, recent works exploring convergence guarantees are limited to unrealistic scenarios, where the data is either shared across devices or distributed in an IID (identically and independently distributed) manner, or all devices are active at each communication round (; ; ; ; ;).Due to the statistical heterogeneity of the data in federated networks, one can think of federated learning as a prime example of distributed multi-task learning, where each device corresponds to a task. However, the more common goal of federated learning-and the focus of this work-involves training a single global model on distributed data collected for these various tasks. We introduce and study a novel optimization framework in the federated setting. Our focus on its convergence behavior in the face of statistically heterogeneous data is closely related to the classical multi-task setting which involves jointly learning task-specific models from statistically heterogeneous data. Contributions. We propose a federated optimization framework for heterogeneous networks, FedProx, which encompasses FedAvg. In order to characterize the convergence behavior of FedProx, we invoke a device dissimilarity assumption in the network. Under this assumption, we provide the first convergence guarantees for FedProx. 
Finally, we demonstrate that our theoretical assumptions reflect empirical performance, and that FedProx can improve the robustness and stability of convergence over FedAvg when data is heterogeneous across devices. Large-scale distributed machine learning has motivated the development of numerous distributed optimization meth-ods in the past decade (see, e.g., BID5 ; a; ; ; ; Richtárik & Takáč, 2016; BID3 . However, it is increasingly attractive to learn statistical models directly over networks of distributed devices. This problem, known as federated learning, requires tackling novel challenges with privacy, heterogeneous data, and massively distributed networks. Recent optimization methods have been proposed that are tailored to the specific challenges in the federated setting. These methods have shown significant improvements over traditional distributed approaches like ADMM BID2 by allowing both for inexact local updating in order to balance communication vs. computation in large networks, and for a small subset of devices to be active at any communication round (; ;). For example, proposes a communication-efficient primal-dual optimization method that learns separate but related models for each device through a multi-task learning framework. However, such an approach does not generalize to non-convex problems, e.g. deep learning, due to lack of strong duality. In the non-convex setting, Federated Averaging (FedAvg), a heuristic method based on averaging local Stochastic Gradient Descent (SGD) updates, has instead been shown to work well empirically .Unfortunately, FedAvg is quite challenging to analyze due to its local updating scheme, the fact that few devices are active at each round, and the issue that data is heterogeneous. Recent works have made steps towards analyzing FedAvg in simpler settings. For instance, parallel SGD and related variants (; ; ; ;), which make local updates similar to FedAvg, have been studied in the IID setting. Although some works (; ;) have recently explored convergence guarantees in heterogeneous settings, they make the limiting assumptions such as full participation of all devices, convexity , or uniformly bounded gradients . There are also several heuristic approaches that aim to tackle statistical heterogeneity, either by sharing the local device data or some server-side proxy data (; ;), which may be unrealistic in practical federated settings. In this section, we introduce the key ingredients behind recent methods for federated learning, including FedAvg, and then outline our proposed framework, FedProx. Federated learning methods (e.g., ; ;) are designed to handle multiple devices collecting data and a central server coordinating the global learning objective across the network. The aim is to minimize: DISPLAYFORM0 where N is the number of devices, p k ≥ 0, ∀k, and k p k =1. In general, the local objectives measure the local empirical risk over possibly differing data distributions DISPLAYFORM1, with n k samples available at each device k. Hence, we can set p k = n k n, where n= k n k is the total number of data points. To reduce communication and handle systems constraints, federated optimization methods commonly allow for low participation and local updating. At each round, a subset of the devices are selected and use local solvers to optimize the local objectives. Then the local updates are aggregated via a central server. Each of the local objectives can be solved inexactly, as formally defined below. Definition 1 (γ-inexact solution). 
For a function h(w; w 0) = F (w) + µ 2 w − w 0 2, and γ ∈, we say w * is a γ-inexact solution of min w h(w; w 0), if ∇h(w * ; w 0) ≤ γ ∇h(w 0 ; w 0), where ∇h(w; w 0) = ∇F (w) + µ(w − w 0). Note that a smaller γ corresponds to higher accuracy. We use γ-inexactness in our analysis (Section 4) to measure the amount of local computation from each local solver. In experiments (Section 5), we simply run an iterative local solver for some number of local epochs, which can be seen as a proxy for γ-inexactness. In Federated Averaging (FedAvg) , at each round, a subset K N of devices are selected and run SGD locally for E number of epochs to optimize the local objective F k on device k, and then the ing model updates are averaged. shows empirically that it is crucial to tune the number of local epochs for FedAvg to converge, as additional local epochs allow local models to move further away from the initial global model, potentially causing divergence. Thus, it is beneficial to restrict the amount of local deviation through a more principled tool than heuristically limiting the number of local epochs of some iterative solver. This serves as our inspiration for FedProx, introduced below. Instead of just minimizing the local function F k, in FedProx, device k uses its local solver to approximately minimize the following surrogate objective h k: DISPLAYFORM0 The proximal term in the above expression effectively limits the impact of local updates by restricting them to be close to the current model w t. We note that proximal terms such as the one above are a popular tool utilized throughout the optimization literature (see Appendix C). An important distinction of the proposed usage is that we suggest, explore, and analyze such a term for the purpose of tackling statistical heterogeneity in federated settings. DISPLAYFORM1 Server selects a subset S t of K devices at random (each device k is chosen with probability p k); Server sends w t to all chosen devices; Each chosen device k ∈ S t finds a w t+1 k which is a γ-inexact minimizer of: DISPLAYFORM2 Each chosen device k sends w t+1 k back to the server; Server aggregates the w's as w DISPLAYFORM3 In Section 4, we see that the usage of the proximal term makes FedProx more amenable to theoretical analysis. In Section 5, we also see the modified local subproblem in FedProx in more robust and stable convergence compared to FedAvg for heterogeneous datasets. Note that FedAvg is a special case of FedProx with µ = 0. In this section we first introduce a metric that specifically measures the dissimilarity among local functions. We call this metric local dissimilarity. We then analyze FedProx under an assumption on bounded local dissimilarity. Definition 2 (B-local dissimilarity). The local functions DISPLAYFORM0 for ∇f (w) = 0.Here E k [·] denotes the expectation over devices with masses p k =n k /n and N k=1 p k =1. Note that B(w)≥ 1 and the larger the value of B(w), the larger is the dissimilarity among the local functions. Moreover, if F k (·)'s are associated with empirical risk objectives and the samples on all the devices are homogeneous, then B(w) → 1 for every w as all the local functions converge to the same expected risk function. Interestingly, similar assumptions (e.g., ;) have been explored elsewhere for differing purposes; see more in Appendix C. Using Definition 2, we now state our formal dissimilarity assumption, which we use in our convergence analysis. Assumption 1 (Bounded dissimilarity). 
For some > 0, there exists a B such that for all the points w ∈ S c = {w | ∇f (w) 2 > }, B(w) ≤ B.Using Assumption 1, we analyze the amount of expected objective decrease if one step of FedProx is performed. Theorem 3 (Non-convex FedProx Convergence: B-local dissimilarity). Let Assumption 1 hold. Assume the functions F k are non-convex, L-Lipschitz smooth, and there exists DISPLAYFORM1 t is not a stationary solution and the local functions F k are B-dissimilar, i.e. B(w t) ≤ B. If µ, K, and γ in Algorithm 1 are chosen such that DISPLAYFORM2 then at iteration t of Algorithm 1, we have the following expected decrease in the global objective: DISPLAYFORM3 where S t is the set of K devices chosen at iteration t. We direct the reader to Appendix A.1 for a detailed proof. Theorem 3 uses the dissimilarity in Definition 2 to identify sufficient decrease at each iteration for FedProx. In Appendix A.2, we provide a corollary characterizing the performance with a more common (though slightly more restrictive) bounded variance assumption. Remark 4. In order for ρ in Theorem 3 to be positive, we need γB < 1. Moreover, we also need DISPLAYFORM4 These conditions help to quantify the trade-off between dissimilarity bound (B) and the algorithm parameters (γ, K).Finally, we can use the above sufficient decrease to characterize the rate of convergence under Assumption 1. Note that these hold for general non-convex F k (·).Theorem 5 (Convergence rate: FedProx). Given some > 0, assume that for B ≥ B, µ, γ and K the assumptions of Theorem 3 hold at each iteration of FedProx. DISPLAYFORM5 While the thus far hold for non-convex F k (·), we prove the convergence for convex loss in Appendix A.3. To help provide context for the rate in Theorem 5, we compare it with SGD in the convex case in Appendix A.4, Remark 9. We now present empirical for FedProx. We study the effect of statistical heterogeneity on the convergence of FedAvg and FedProx, explore properties of the FedProx framework, and show how empirical convergence relates to the bounded dissimilarity assumption. We show a subset of our experiments here due to space constraints; for full we defer the reader to Appendix B. All code, data, and experiments are publicly available at github.com/litian96/FedProx. Experimental Details. We evaluate FedProx on diverse tasks, models, and both synthetic and real-world datasets. The real datasets are curated from prior work in federated learning (; BID3 . In particular, We study convex models on partitioned MNIST , Federated Extended MNIST BID4 BID3 ) (FEM-NIST), and FMNIST*, and non-convex models on Sentiment140 BID7 ) FORMULA0 Effect of Statistical Heterogeneity. In Figure 1, we study how statistical heterogeneity affects convergence using four synthetic datasets. From left to right, as data become more heterogeneous, convergence becomes worse for FedProx with µ=0 (FedAvg). Setting µ > 0 is particularly useful in heterogeneous settings although that may slow convergence for IID data. Properties of FedProx Framework. The key parameters of FedProx that affect performance are the number of local epochs, E, and the proximal term scaled by µ. We study FedProx under different values of E and µ using the federated datasets described in TAB3 in Appendix B.1. We report the on Shakespeare dataset here and provide similar on all datasets in Appendix B.3. Dependence on E. We explore the effect of E in Figure 2 (left) and show the convergence in terms of the training loss. We see that large E leads to divergence on Shakespeare. 
In Appendix B.3, we further show that large E leads to similar instability on other heterogeneous datasets. We note here that a large E may be particularly useful in practice when communication is expensive (which is common in federated networks) where small E is prohibitive. In Figure 3, e.g., we show that FedProx with a large E (E=50) and an appropriate µ (µ=0.2) leads to faster and more stable convergence compared with E=1, µ=0 (slow convergence) and E=50, µ=0 (unstable convergence). Figure 1. Effect of data heterogeneity on convergence. We show training loss (see testing accuracy and dissimilarity metric in Appendix B.3, FIG4) on four synthetic datasets whose heterogeneity increases from left to right. The method with µ = 0 corresponds to FedAvg. Increasing heterogeneity leads to worse convergence, but setting µ > 0 can help to combat this. Dependence on µ. We consider the effect of µ on convergence in Figure 2 (middle). We observe that the appropriate µ can force divergent methods to converge or increase the stability for unstable methods (Figure 5, Appendix B.3), thus making the performance of FedProx less dependent on E. In practice, µ can be adaptively chosen based on the current performance of the models. For example, one simple heuristic is to increase µ when seeing the loss increasing and decreasing µ when seeing the loss decreasing. We provide additional experiments demonstrating the effectiveness of this approach in Appendix B.5.Dissimilarity Measurement and Divergence. Finally, in Figure 2 (right), we track the variance of gradients on each device, DISPLAYFORM0, which is lower bounded by B (see Bounded Variance Equivalence Corollary 6). We observe that the dissimilarity metric in Definition 2 is consistent with the training loss. Therefore, smaller dissimilarity indicates better convergence, which can be enforced by setting µ appropriately. Proof. Using our notion of γ-inexactness for each local solver (Definition 1), we can define e t+1 k such that: DISPLAYFORM1 Now let us definew DISPLAYFORM2. Based on this definition, we know DISPLAYFORM3 Let us defineμ = µ − L − > 0 andŵ t+1 k = arg min w h k (w; w t). Then, due to theμ-strong convexity of h k, we have DISPLAYFORM4 Note that once again, due to theμ-strong convexity of h k, we know that DISPLAYFORM5. Now we can use the triangle inequality to get DISPLAYFORM6 Therefore, DISPLAYFORM7 where the last inequality is due to the bounded dissimilarity assumption. Now let us define M t+1 such thatw DISPLAYFORM8 where the last inequality is also due to bounded dissimilarity assumption. Based on the L-Lipschitz smoothness of f and Taylor expansion, we have DISPLAYFORM9 From the above inequality it follows that if we set the penalty parameter µ large enough, we can get a decrease in the objective value of f (w t+1) − f (w t) which is proportional to ∇f (w t) 2. However, this is not the way that the algorithm works. In the algorithm, we only use K devices that are chosen randomly to approximatew t. So, in order to find the E f (w t+1), we use local Lipschitz continuity of the function f. DISPLAYFORM10 where L 0 is the local Lipschitz continuity constant of function f and we have DISPLAYFORM11 Therefore, if we take expectation with respect to the choice of devices in round t we need to bound DISPLAYFORM12 where Q t = E St L 0 w t+1 −w t+1. Note that the expectation is taken over the random choice of devices to update. 
DISPLAYFORM13 From FORMULA19, we have that DISPLAYFORM14 and DISPLAYFORM15 where the first inequality is a of K devices being chosen randomly to get w t and the last inequality is due to bounded dissimilarity assumption. If we replace these bounds in we get DISPLAYFORM16 Combining,, and and using the notation α = 1 µ we get DISPLAYFORM17 Theorem 3 uses the dissimilarity in Definition 2 to identify sufficient decrease at each iteration for FedProx. Here we provide a corollary characterizing the performance with a more common (though slightly more restrictive) bounded variance assumption. This assumption is commonly employed, e.g., when analyzing methods such as SGD. Corollary 6 (Bounded Variance Equivalence). Let Assumption 1 hold. Then, in the case of bounded variance, i.e., DISPLAYFORM0 Proof. We have, DISPLAYFORM1 With Corollary 6 in place, we can restate the main in Theorem 3 in terms of the bounded variance assumption. Theorem 7 (Non-Convex FedProx Convergence: Bounded Variance). Let the assertions of Theorem 3 hold. In addition, let the iterate w t be such that ∇f (w t) 2 ≥, and let E k ∇F k (w) − ∇f (w) 2 ≤ σ 2 hold instead of the dissimilarity condition. If µ, K and γ in Algorithm 1 are chosen such that DISPLAYFORM2 then at iteration t of Algorithm 1, we have the following expected decrease in the global objective: DISPLAYFORM3 where S t is the set of K devices chosen at iteration t. The proof of Theorem 7 follows from the proof of Theorem 3 by noting the relationship between the bounded variance assumption and the dissimilarity assumption as portrayed by Corollary 6. Corollary 8 (Convergence: Convex Case). Let the assertions of Theorem 3 hold. In addition, let F k (·) be convex and γ = 0, i.e., all the local problems are solved exactly. If 1 B ≤ 0.5 √ K, then we can choose µ ≈ 6LB 2 from which it follows that ρ ≈ 1 24LB 2.Proof. In the convex case, where L − = 0 andμ = µ, if γ = 0, i.e., all subproblems are solved accurately, we can get a decrease proportional to ∇f (w t) 2 if B < √ K. In such a case if we assume 1 << B ≤ 0.5 √ K, then we can write DISPLAYFORM0 In this case, if we choose µ ≈ 6LB 2 we get DISPLAYFORM1 Note that the expectation in FORMULA0 is a conditional expectation conditioned on the previous iterate. Taking expectation of both sides, and telescoping, we have that the number of iterations to at least generate one solution with squared norm of gradient less than is O(DISPLAYFORM2 Remark 9 (Comparison with SGD). Note that FedProx achieves the same asymptotic convergence guarantee as SGD. In other words, under the bounded variance assumption, for small, if we replace B with its upper-bound in Corollary 6 and choose µ large enough, then the iteration complexity of FedProx when the subproblems are solved exactly and DISPLAYFORM0 ), which is the same as SGD BID6. Synthetic data. To generate synthetic data, we follow a similar setup to that described in , additionally imposing heterogeneity among devices. Full details are given in Appendix B.1. In particular, for each device k, we generate synthetic samples DISPLAYFORM0, where the covariance matrix Σ is diagonal with Σ j,j = j −1.2. Each element in the mean vector v k is drawn from N (B k, 1), B k ∼ N (0, β). Therefore, α controls how much local models differ from each other and β controls how much the local data at each device differs from that of other devices. We vary α, β to generate three heterogeneous distributed datasets, Synthetic (α, β), as shown in Figure 1. 
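For concreteness, the per-device sampling just described might look like the following Python sketch. The exact model form elided above and the dimensions used here (60 input features, 10 classes, with W_k and b_k drawn around a device-specific mean u_k ~ N(0, alpha), and alpha, beta treated as variances) are assumptions based on the standard y = argmax(softmax(Wx + b)) setup referenced in Section 5, not a verbatim reproduction of the generation script.

    import numpy as np

    def synthetic_device(alpha, beta, n_k, d=60, n_classes=10,
                         rng=np.random.default_rng()):
        # Device-specific model parameters, drawn around u_k ~ N(0, alpha).
        u_k = rng.normal(0.0, np.sqrt(alpha))
        W_k = rng.normal(u_k, 1.0, size=(n_classes, d))
        b_k = rng.normal(u_k, 1.0, size=n_classes)
        # Device-specific inputs: x ~ N(v_k, Sigma) with Sigma_jj = j^(-1.2),
        # and each element of v_k drawn from N(B_k, 1), B_k ~ N(0, beta).
        B_k = rng.normal(0.0, np.sqrt(beta))
        v_k = rng.normal(B_k, 1.0, size=d)
        sigma = np.array([j ** -1.2 for j in range(1, d + 1)])
        X = rng.normal(v_k, np.sqrt(sigma), size=(n_k, d))
        # Labels from y = argmax(softmax(W_k x + b_k)); taking the argmax of the
        # logits is equivalent since softmax is monotone.
        y = (X @ W_k.T + b_k).argmax(axis=1)
        return X, y

Larger alpha spreads the per-device parameters further apart and larger beta spreads the per-device input distributions, which is how the heterogeneity of the Synthetic(alpha, beta) datasets is controlled.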
We also generate one IID dataset by setting the same W, b on all devices and setting X k to follow the same distribution. Our goal is to learn a global W and b. Real data. We also explore five real datasets, their statistics summarized in TAB3 in Appendix B.1. These datasets are curated from prior work in federated learning as well as recent federated learning-related benchmarks (; BID3 . We study two convex models on partitioned MNIST , Federated Extended MNIST BID4 BID3 ) (FEMNIST), and FMNIST*. We study two non-convex models on Sentiment140 BID7 ) FORMULA0 Implementation. We implement FedAvg and FedProx in Tensorflow BID0. See details in Appendix B.2.Setup. For each experiment, we tune the learning rate and ratio of active devices per round on FedAvg. We randomly split the data on each local device into 80% training set and 20% testing set. For each comparison, the devices selected and data read at each round are the same across all runs. We report all metrics based on the global objective f (w). Note that FedAvg (µ = 0) and FedProx (µ ≥ 0) perform the same amount of work at each round when the number of local epochs, E, is the same; we therefore report in terms of rounds rather than FLOPs or wall-clock time. Here we provide full details on the datasets and models used in our experiments. We curate a diverse set of non-synthetic datasets, including those used in prior work on federated learning , and some proposed in LEAF, a benchmark for federated settings BID3. We also create synthetic data to directly test the effect of heterogeneity on convergence, as in Section 5.• Synthetic: We set (α, β)=, (0.5,0.5) and respectively to generate three non-identical distributed datasets (Figure 1). In the IID data, we set the same W, b ∼ N on all devices and X k to follow the same distribution N (v, Σ) where each element in the mean vector v is drawn from N and Σ is diagonal with Σ j,j = j −1.2. For all synthetic datasets, there are 30 devices in total and the number of samples on each device follows a power law.• MNIST: We study image classification of handwritten digits 0-9 in MNIST using multinomial logistic regression. To simulate a heterogeneous setting, we distribute the data among 1000 devices such that each device has samples of only 2 digits and the number of samples per device follows a power law. The input of the model is a flattened 784-dimensional (28 × 28) image, and the output is a class label between 0 and 9.• FEMNIST: We study an image classification problem on the 62-class EMNIST dataset BID4 using multinomial logistic regression. Each device corresponds to a writer of the digits/characters in EMNIST. We call this federated version of EMNIST FEMNIST. The input of the model is a flattened 784-dimensional (28 × 28) image, and the output is a class label between 0 and 61.• Shakespeare: This is a dataset built from The Complete Works of William Shakespeare . Each speaking role in a play represents a different device. We use a two layer LSTM classifier containing 100 hidden units with a 8D embedding layer. The task is next character prediction and there are 80 classes of characters in total. 
The model takes as input a sequence of 80 characters, embeds each of the character into a learned 8 dimensional space and outputs one character per training sample after 2 LSTM layers and a densely-connected layer.• Sent140: In non-convex settings, we consider a text sentiment analysis task on tweets from Sentiment140 BID7 ) (Sent140) with a two layer LSTM binary classifier containing 256 hidden units with pretrained 300D GloVe embedding . Each twitter account corresponds to a device. The model takes as input a sequence of 25 characters, embeds each of the character into a 300 dimensional space by looking up Glove and outputs one character per training sample after 2 LSTM layers and a densely-connected layer. • FEMNIST*: We generate FEMNIST* by subsampling 26 lower case characters from FEMNIST and distributing only 20 classes to each device. There are 200 devices in total. The model is the same as the one used on FEMNIST. We report the total number of devices, samples, and the mean and standard deviation of samples per device of real federated datasets in TAB3. (Implementation) In order to draw a fair comparison with FedAvg, we use SGD as a local solver for FedProx, and adopt a slightly different device sampling scheme than that in Algorithms FedAvg and 1: sampling devices uniformly and averaging updates with weights proportional to the number of local data points (as originally proposed in ). While this sampling scheme is not supported by our analysis, we observe similar relative behavior of FedProx vs. FedAvg whether or not it is employed. Interestingly, we also observe that the sampling scheme proposed herein in more stable performance for both methods (see Appendix B.4, Figure 10). This suggests an added benefit of the proposed framework.(Machines) We simulate the federated learning setup (1 server and N devices) on a commodity machine with 2 Intel R Xeon R E5-2650 v4 CPUs and 8 NVidia R 1080Ti GPUs.(Hyperparameters) For each dataset, we tune the ratio of active clients per round from {0.01, 0.05, 0.1} on FedAvg. For synthetic datasets, roughly 10% of the devices are active at each round. For MNIST, FEMNIST, Shakespeare, Sent140 and FEMNIST*, the number of active devices (K) are 1%, 5%, 10%, 1% and 5% respectively. We also do a grid search on the learning rate based on FedAvg. We do not decay the learning rate through all rounds. For all synthetic data experiments, the learning rate is 0.01. For MNIST, FEMNIST, Shakespeare, Sent140 and FEMNIST*, we use the learning rates of 0.03, 0.003, 0.8, 0.3 and 0.003. We use a batch size of 10 for all experiments.(Libraries) All code is implemented in Tensorflow BID0 Version 1.10.1. Please see github.com/litian96/FedProx for full details. We explore the effect of E in Figure 4. For each dataset, we set E to be 1, 20, and 50 while keeping µ = 0 (FedProx reduces to FedAvg in this case) and show the convergence in terms of the training loss. We see that large E leads to divergence or instability on MNIST and Shakespeare. On FEMNIST and Sent140, nevertheless, larger E speeds up the convergence. Based on drawn from Figure 1, we hypothesize this is due to the fact that the data distributed across devices after partitioning FEMNIST and Sent140 lack significant heterogeneity. We validate this hypothesis by observing instability on FEMNIST*, which is a skewed variant of the FEMNIST dataset. We consider the effect of µ on convergence in Figure 5. For each experiment, in the case of E = 50, we compare the between µ = 0 and the best µ. 
For three out of the four datasets (all but Sent140) we observe that the appropriate µ can increase the stability for unstable methods and can force divergent methods to converge. Finally, in Figure 6, we demonstrate that our B-local dissimilarity measurement in Definition 2 captures the heterogeneity of datasets and is therefore an appropriate proxy of performance. In particular, we track the variance of gradients on each device, DISPLAYFORM0, which is lower bounded by B (see Bounded Variance Equivalence Corollary 6). We observe that the dissimilarity metric is consistent with the training loss. Therefore, smaller dissimilarity indicates better convergence, which can be enforced by setting µ appropriately. Full tracking B (for all experiments performed) are provided in Appendix B.3.We present testing accuracy, training loss and dissimilarity measurements of all the experiments in FIG4, Figure 8 and Figure 9. We show the training loss, testing accuracy and dissimilarity measurement of FedProx using two different device sampling schemes in Figure 10. We show a simple adaptive heuristic of setting µ on four synthetic datasets in Figure 11. Two aspects of the proposed work: our framework, FedProx, and analysis tool, the bounded dissimilarity assumption, have been utilized throughout the optimization literature-though often with very different motivations. For completeness, we provide a discussion below on our relation to these prior works. Figure 9. Training loss, testing accuracy and dissimilarity measurement for experiments in Figure 5 Proximal term. We note here a connection to elastic averaging SGD (EASGD) , which was proposed as a way to train deep networks in the data center setting, and uses a similar proximal term in its objective. While the intuition is similar to EASGD (this term helps to prevent large deviations on each device/machine), EASGD employs a more complex moving average to update parameters, is limited to using SGD as a local solver, and has only been analyzed Figure 10. Differences between two sampling schemes in terms of training loss, testing accuracy and dissimilarity measurement. Sampling devices with a probability proportional to the number of local data points and then simply averaging local models performs slightly better than uniformly sampling devices and averaging the local models with weights proportional to the number of local data points. Under either sampling scheme, the settings with µ = 1 demonstrate more stable performance than settings with µ = 0.for simple quadratic problems. The proximal term we introduce has also been explored in previous optimization literature with very different purposes, such as , to speed up (mini-batch) SGD training on a single machine. Li et al. (2014b) also employs a similar proximal term for efficient SGD training both in a single machine and distributed settings, but their analysis is limited to a single machine setting with different assumptions (e.g., IID data and solving the subproblem exactly at each round). DANE also includes a proximal term in the local objective function. However, due to the inexact estimation of full gradients (i.e., ∇φ(w (t−1) ) in (, Eq )) with device subsampling schemes and the staleness of the gradient correction term (, Eq ) in local updating methods, it is not directly applicable to our setting and performs worse on heterogeneous datasets (see Figure 12).Bounded dissimilarity assumption. The bounded dissimilarity assumption has appeared in different forms, for example in . 
In , the bounded similarity assumption is used in the context of asserting gradient diversity and quantifying the benefit in terms of scaling of the mean square error for mini-batch SGD on data which is i.i.d. In , the authors use a similar assumption, called the strong growth condition, which is a stronger version of Assumption 1 with ε = 0. They prove that some interesting practical problems satisfy such a condition. They also use this assumption to prove better convergence rates for SGD with constant step-size. Note that this is different from our approach, as the algorithm that we are analyzing is not SGD, and our analysis is different in spite of the similarity in the assumptions.
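To make the method above concrete, the following is a minimal Python sketch of one FedProx round (Algorithm 1) and of the empirical dissimilarity proxy tracked in Figure 2 (right). It is an illustration under the stated assumptions rather than the authors' TensorFlow implementation: grad_Fk stands for a per-device gradient oracle, the local solver is a few plain gradient steps standing in for a gamma-inexact solve, and the step size and step count are arbitrary.

    import numpy as np

    def fedprox_local_solve(w_t, grad_Fk, mu, lr=0.01, steps=10):
        # Approximately minimize h_k(w; w_t) = F_k(w) + (mu/2) * ||w - w_t||^2,
        # the proximal surrogate of the local objective.
        w = w_t.copy()
        for _ in range(steps):
            w -= lr * (grad_Fk(w) + mu * (w - w_t))
        return w

    def fedprox_round(w_t, device_grads, p, K, mu, rng=np.random.default_rng()):
        # Select K devices with probabilities p_k, solve the proximal subproblem
        # on each, and average the returned models on the server.
        chosen = rng.choice(len(device_grads), size=K, p=p)
        local_models = [fedprox_local_solve(w_t, device_grads[k], mu) for k in chosen]
        return np.mean(local_models, axis=0)

    def local_dissimilarity(w, device_grads, p):
        # Empirical proxy for B(w): sqrt(E_k ||grad F_k(w)||^2 / ||grad f(w)||^2),
        # which is at least 1 and grows with statistical heterogeneity.
        grads = np.stack([g(w) for g in device_grads])
        full_grad = np.average(grads, axis=0, weights=p)
        mean_sq_norm = np.average(np.sum(grads ** 2, axis=1), weights=p)
        return np.sqrt(mean_sq_norm / np.sum(full_grad ** 2))

Setting mu = 0 in fedprox_local_solve recovers plain local gradient steps on F_k, i.e. the FedAvg special case discussed above.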
We introduce FedProx, a framework to tackle statistical heterogeneity in federated settings with convergence guarantees and improved robustness and stability.
751
scitldr
Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate. However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners and thus has to overcome a transmission bottleneck. In this work, we insert such a bottleneck in a referential game, by introducing a changing population of agents in which new agents learn by playing with more experienced agents. We show that mere cultural transmission results in a substantial improvement in language efficiency and communicative success, measured in convergence speed, degree of structure in the emerged languages and within-population consistency of the language. However, as our core contribution, we show that the optimal situation is to co-evolve language and agents. When we allow the agent population to evolve through genotypical evolution, we achieve across-the-board improvements on all considered metrics. These results stress that for language emergence studies cultural evolution is important, but also that the suitability of the architecture itself should be considered. Human languages show a remarkable degree of structure and complexity, and how such a complex system can have emerged is still an open question. One concept frequently named in the context of language evolution is cultural evolution. Unlike animal languages, which are taken to be mostly innate, human languages must be re-acquired by each individual BID29 BID10. This pressures them to fit two constraints that govern their cross-generational transmission: they must be learnable by new language users, and they must allow effective communication between proficient language users (see, e.g., BID31). In the recent past, computational studies of language emergence using referential games (see Section 2.1 for a review) have received a new wave of attention. These studies are motivated by the second constraint, that language is used to communicate. The first constraint, on the other hand, is not considered in this framework: language is not transmitted from agent to agent and there is thus no need for agents to develop languages that would survive a transmission bottleneck. In this work, we introduce a transmission bottleneck in a population of agents playing referential games, implicitly modelling cultural evolution. However, merely adding a transmission bottleneck is not enough. The types of language that may emerge through passing this bottleneck are not just dependent on the existence of a bottleneck, but also on the shape of the bottleneck, which is determined by the biases of the architecture of the agents playing the game (their genotypical design). If the genotypical design of those agents is not suitable to solve this task through communication, they will, at best, converge to a language that doesn't allow for effective communication or is difficult to learn for every new agent, or, at worst, not converge to an appropriate culturally transmittable language at all. In this work, we therefore study the co-evolution of language and architecture in referential games. To this end, we introduce the Language Transmission Engine, which allows us to model both cultural and genetic evolution in a population of agents.
We demonstrate that the emerging languages ben-efit from including cultural transmission as well as genetic evolution, but the best are achieved when both types of evolution are included and languages and agents can co-evolve.2 Related Work Much work has been done on the emergence of language in artificial agents and investigating its subsequent structure, compositionality and morphosyntax BID15 BID17. The original computer simulations dealt with logic and symbolic representations BID15 BID3, but with the advent of modern deep learning methods and sequence-to-sequence models BID33, there has been a renewed interest in simulating the emergence of language through neural network agents (i.a. BID21 BID8 . In the exploration of language emergence, different training approaches and tasks have been proposed to encourage agents to learn and develop communication. These tasks are commonly set up in an end-to-end setting where reinforcement learning can be applied. This is often a two-player referential game where one agent must communicate the information it has access to (typically an image), while the other must guess it out of a lineup BID6 BID21. BID25 and BID2 find that structure and compositionalility can arise in emerged languages in such setups; BID19 show that'natural' language does not arise naturally and has to be incentivised by imposing specific restrictions on games and agents. The evolution of human language is a well-studied but still poorly understood topic. One particular open question concerns the relation between two different evolutionary processes: genetic evolution of the agents in the population and cultural evolution of the language itself BID7. BID3 assert that the question of genetic versus cultural evolution ultimately arises from three distinct but interacting adaptive systems: individual learning, cultural transmission, and genetic evolution. Cultural transmission is thought to enforce structure and compression to languages, since a language must be used and learned by all individuals of the culture in which it resides and at the same time be suitable for a variety of tasks. BID18 define those two pressures as compressibility and expressivity and find that structure arises from the trade-off between these pressures in generated languages. The importance of cultural evolution for the emergence of structure is supported by a number of artificial language learning studies (e.g. BID30 and computational studies using the Iterated Learning paradigm, in which agents learn a language by observing the output produced by another agent from the previous 'generation' (e.g. BID13 BID16 BID18 . An alternative way of imposing cultural pressures on agents, is by simulating a large population of them and pairing agents randomly to solve a communicative game BID4 . This approach is more naturally aligned with cultural pressures in humans (see e.g. BID34 and is the one we use in this paper. While there is much controversy about the selection pressures under which the fundamental traits underlying the human ability to learn and use language evolved in other humans, that genetic evolution played an essential role in endowing humans with the capabilities to learn and use language is generally undebated. Pre-modern humans, for instance, did not have the ability to speak or understand complex structures BID7 .There are several approaches to simulate genetic evolution of neural network agents. 
Neural Architectural Search (NAS) focuses on searching the architecture space of the networks, unlike many traditional evolutionary techniques which often include parameter weights in their search space. Some of the earlier techniques such as NEAT gained considerable traction as a sound way of doing topology search using biologically inspired concepts BID32. NAS methods however have mostly reverted to optimising solely the neural architecture and using gradient based methods such as SGD for weight optimisation due to the large parameter space of modern architectures (see, e.g., BID5 for a survey).More recently, state-of-the-art one-shot search techniques such as ENAS (Efficient Neural Architecture Search) and DARTS (Differentiable Architecture Search) have allowed to bring a gradientbased approach to NAS through the use of intelligent weight-sharing schemes BID24 ).In this work, we use the DARTS search space, which is constrained but still obtained state-of-the-art performance on benchmark natural language tasks BID23. We study language emergence in a referential game inspired by the signalling games proposed by BID22. In this game, one agent (called the sender) observes an image and generates a discrete message. The other agent, the receiver of the message, uses the message to select the right image from a set of images containing both the sender image and several distractor images. Since the information shown to the sender agent is crucial to the receivers success, this setup urges the two agents to come up with a communication protocol that conveys the right information. Formally, our referential game is similar to BID8:We use z = 512, and n = 3 and train agents with Gumbel-Softmax BID11 ) based on task-success. We introduce both cultural and genetic evolution to this game through a process that we call the Language Transmission Engine (LTE), which is depicted in FIG0. 2 Similar to , we create a population of communicating agents. In every training iteration, two random agents are sampled to play the game. This forces the agents to adopt a simpler language naturally: to succeed they must be able to communicate or understand all opposing agents. In our setup, agents are either sender or receiver, they do not switch roles during their lifetime. To model cultural evolution in the LTE, we periodically replace agents in the population with newly initialised agents. Cultural evolution is implicitly modelled in this setup, as new agents have to learn to communicate with agents that already master the task. Following BID4, we experiment with three different methods to select the agents that are replaced: randomly (no selection pressure), replacing the oldest agents or replacing the agents with the lowest fitness (as defined in Section 3.3). We call these setups cu-random, cu-age and cu-best, respectively. To model genetic evolution, rather than periodically replacing agents with randomly initialised new agents, we instead mutate the most successful agents and replace the worst agents with variations of the best agents, as outlined in Section 3.2.2. Note that cultural evolution is still implicitly modelled in this setup, as new agents still have to learn to communicate with older agents. Therefore, we call this setup with the term co-evolution. Culling We refer to the selection process and subsequent mutation or re-initialisation step as culling. 
In biology, culling is the process of artificially removing organisms from a group to promote certain characteristics, so, in this case, culling consists of removing a subset of the worst agents and replacing them with variations of the best architecture. The proportion of agents from each population selected to be mutated is determined by the culling rate ↵, where ↵ 2. The culling interval l defines the number of iterations between culling steps. A formalisation of the LTE can be found in appendix A.1. We base potential mutations on the RNN cell search space DARTS, defined by BID24. This space includes recurrent cells with up to N nodes, where each node n 1, n 2,..., n N can take the output of any preceding nodes including n 0, which represents the cell's input. All potential connections are modulated by an activation function, which can be the identity function, Tanh, Sigmoid or ReLU. Following BID24 and , we enhance each operation with a highway bypass BID35 and the average of all intermediate nodes is treated as the cell output. To sample the initial model, we sample a random cell with a single node (N = 1). As this node must necessarily be connected to the input, the only variation stems from the possible activation functions applied to the output of n 1, ing in four possible starting configurations. We set a node cap of N = 8. We mutate cells by randomly sampling an architecture which is one edit step away from the previous architecture. Edit steps are uniformly sampled from i) changing an incoming connection, ii) changing an output operation or iii) adding a new node; the mutation location is uniformly sample from all possible mutations. 3 Note that while we use the DARTS search space to define potential mutations, contrary to BID24, we do not use differentation to sample new architectures based on a selection criterion. The fitness criterion that we use in both the cu-best and co-evolution setup is based on task performance. However, rather than considering agents' performance right before the culling step, we consider the age of the youngest agent in the population (defined in terms of number of batches that it was trained) and for every agent compute their performance up until when they had DISPLAYFORM0 where T A = min a2A T (a) is the age T (a) of the youngest agent in the population, and L(a t j) is the loss of agent a j at time step t. This fitness criterion is not biased towards older agents, that have seem already more data and have simply converged more. It is thus not only considering task performance but also the speed at which this performance is reached. We test the LTE framework on a compositionally defined image dataset, using a range of different selection mechanisms. In all our experiments, we use a modified version of the Shapes dataset BID0, which consists of 30 by 30 pixel images of 2D objects, characterised by shape (circle, square, triangle), colour (red, green, blue), and size (small, big). While every image has a unique symbolic description -consisting of the shape, colour and size of the object and its horizontal and vertical position in a 3x3 grid -one symbolic representation maps to multiple images, that differ in terms of exact pixels and object location. We use 80k, 8k, 40k images for train, validation and test sets, respectively. Some example images are depicted in FIG1.We pre-train a CNN feature extractor for the images in a two-agent setting of the task (see Appendix A.4 for more details). 
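Returning to the culling procedure and the fitness criterion defined above, one possible sketch of a single culling step is given below. The agent objects, their loss_history and age attributes, and mutate_architecture are hypothetical placeholders; the sketch assumes that a lower mean loss over the first T_A training batches counts as better fitness, which is our reading of the (elided) formula above.

    def fitness(agent, T_A):
        # Mean loss over the agent's first T_A training batches (lower is better),
        # so older agents are not favoured merely for having seen more data.
        return sum(agent.loss_history[:T_A]) / T_A

    def culling_step(population, culling_rate, mutate_architecture):
        T_A = min(agent.age for agent in population)  # age of the youngest agent
        ranked = sorted(population, key=lambda a: fitness(a, T_A))
        n_cull = int(culling_rate * len(population))
        best = ranked[0]
        for worst in ranked[-n_cull:]:
            population.remove(worst)
            # Replace each culled agent with a freshly initialised mutation of the
            # best architecture (one random edit in the DARTS-style search space).
            population.append(mutate_architecture(best))
        return population

In the cultural-evolution-only setups, mutate_architecture would simply be replaced by a re-initialisation of the fixed LSTM architecture, and the selection key by random choice or age, as appropriate for cu-random and cu-age.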
For our co-evolution experiments, we use the DARTS search space as described above. For all cultural evolution approaches, we use an LSTM BID9 for both the sender and receiver architectures (see Appendix A.3 for more details). Unless otherwise specified, we use the same sizes and hyper-parameters for all models. The sender and receiver models have a hidden size of 64 for the recurrent layer and an embedding layer of size 64. Further, we use a vocabulary size V of 4, with an additional bound token serving as the indicator for beginning and end- of-sequence. We limit the maximum length of a sentence L to 5.We back-propagate gradients through the discrete step outputs (message) of the sender by using the Straight-Through (ST) Gumbel-Softmax Estimator BID12. We run all experiments with a fixed temperature ⌧ = 1.2. We use the default Pytorch BID26 Adam optimiser with a learning rate of 0.001 and a batch-size of 1024. Note that the optimiser is reset for every batch. For all multi-agent experiments we use a population size of 16 senders and 16 receivers. The culling rate ↵ is set to 0.25 or four agents, and we cull (re-initialise or mutate) every l = 5k iterations. We run the experiments for a total of I = 500k iterations, and evaluate the populations before each culling step. We use an range of metrics to evaluate both the population of agents and the emerging languages. Jaccard Similarity We measure the consistency of the emerged languages throughout the population using Jaccard Similarity, which is defined as the ratio between the size of the intersection and the union of two sets. We sample 200 messages per input image for each possible sender-receiver pair and average the Jaccard Similarity of the samples over the population. A high Jaccard Similarity between two messages is an indication that the same tokens are used in both messages. We compute how similar the messages that different agents emit for the same inputs by looking at all possible (sender, message) pairs for one input and assess whether they are the same. This metric is 1 when all agents always emit the same messages for the same inputs. We compute the average number of unique messages generated by each sender in the population. An intuitive reference point for this metric is the number of images with distinct symbolic representations. If agents generate more messages than expected by this reference point, this demonstrates that they use multiple messages for the images that are -from a task perspective -identical. A smaller number of unique messages, on the other hand, indicates that the agent is using a simpler language which is underspecified compared to the symbolic description of the image. Topographic Similarity Topographic similarity, used in a similar context by, represents the similarity between the meaning space (defined by the symbolic representations) and the signal space (the messages send by an agent). It is defined as the correlation between the distances between pairs in meaning space and the distances between the corresponding messages in the signal space. We compute the topographic similarity for an agent by sampling 5,000 pairs of symbolic inputs and corresponding messages and compute the Pearson's ⇢ correlation between the cosine similarity of the one-hot encoded symbolic input pairs and the cosine similarity of the one-hot encoded message pairs. Average Population Convergence To estimate the speed of learning of the agents in the population, estimate the average population convergence. 
For each agent, at each point in time, this is defined as the agents average performance from the time it was born until it had the age of the current youngest agent in the population (analogous to the fitness criterion defined in Section 3.3). To get the average population convergence, we average those values for all agents in the population. Average Agent Entropy We compute the average certainty of sender agents in their generation process by computing and averaging their entropy during generation. We now present a detailed comparison of our cultural and co-evolution setups. For each approach, we averaged over four random seeds, the error bars in all plots represent the standard deviation across these four runs. To analyse the evolution of both agents and languages, we consider the development of all previously outlined metrics over time. We then test the best converged languages and architectures in a single sender-receiver setup, to assess the impact of cultural and genetic evolution more independently. In these experiments, we compare also directly to a single sender-receiver baseline, which is impossible for most of the metrics we consider in this paper. Finally, we briefly consider the emerged architectures from a qualitative perspective. We first confirm that all setups in fact converge to a solution to the task. As can be seen in FIG2, all populations converge to a (close to perfect) solution to the game. The cu-age approach slightly outperforms the other approaches, with a accuracy that surpasses the 95% accuracy mark. Note that, due to the ever changing population, the accuracy at any point in time is an average of both'children' and'adults', that communicate with different members of the population. To assess the behaviour of the agents over time, we monitor their average message entropy convergence speed. As can be seen in FIG3, the co-evolution setup in the lowest average entropy scores, the messages that they assign to one particular image will thus have lower variation than in the other setups. Of the cultural evolution setups, the lowest entropy score is achieved in the cu-best setup. FIG4 shows the average population convergence over time. Also in this case, we observe a clear difference between cultural evolution only and co-evolution, with an immediately much lower convergence time for co-evolution and a slightly downward trending curve. To check the consistencies of languages within a population, we compare the Jaccard Similarity and the Average Proportion of Unique Matches, which we plot in Figure 6. This shows that, compared to cultural evolution only, not only are the messages in co-evolution more similar across agents (higher Jaccard Similarity), but also that agents are considerably more aligned with respect to the same inputs (less unique matches).To assess the level of structure of the emerged languages, we plot the average Topographic Similarity and the Average Number of Unique Messages generated by all senders (Figure 7). The co-evolution condition again outperforms all cultural only conditions, with a simpler language (the number of the unique messages closer to the symbolic reference point) that is structurally more sim- Figure 6: Average Jaccard Similarity and proportion of message matches for all cultural transmission modes and evolution ilar to the symbolic representation of the input (higher Topographical Similarity). In Figure 8 we show the co-evolution of an agent and a sample of its language during three selected iterations in the co-evolution setup. 
Strikingly, the best sender architecture does not evolve from its original form, which could point towards the limitations of of our search strategy and space. On the contrary, the receiver goes through quite some evolution steps and converges into a significantly more complex architecture than its original form. We observe a unification of language throughout evolution in Figure 8, which is also supported by Figure 7. The population of senders starts out 11 different unique messages and ends with only two to describe the same input image. We will leave more detailed analysis of the evolved architectures for future work. With a series of experiments we test the a priori suitability of the evolved languages and agents for the task at hand, by monitoring the accuracy of new agents that are paired with converged agents and train them from scratch. We focus, in particular, on training receivers with a frozen sender from different setups, which allows us to assess 1) whether cultural evolution made languages evolve to be more easily picked up by new agents 2) whether the genetic evolution made architectures converge more quickly when faced with this task. We compare the accuracy development of: Figure 7: Average Number of Unique Messages and Topographic Similarity for all cultural evolution modes and co-evolution. For comparison, we also plot the number of unique messages for a symbolic solution that fully encodes all relevant features of the image (since we have three possible shapes and colours, two possible sizes, and a 3 ⇥ 3 grid of possible positions, this symbolic reference solution has 3 ⇥ 3 ⇥ 2 ⇥ 9 = 162 distinct messages.• An LSTM receiver trained with a frozen sender taken from cu-best;• An evolved receiver trained with a frozen evolved sender. For both these experiments, we compare with two baselines:• The performance of a receiver agent trained from scratch along with a receiver agent that has either the cu architecture or the evolved co architecture (cu-baseline and co-baseline, respectively);• The performance of an agent trained with an agent that is pretrained in the single agent setup, with either the cu architecture or an evolved architecture (cu-baseline-pretrained and co-baseline-pretrained).Each experiment is run 10 times, keeping the same frozen agent. The confirm cultural evolution contributes to the learnability and suitability of emerging languages: the cu-best accuracy (green line) converges substantially quicker and is substantially higher than the cu-baseline-pretrained accuracy (orange line). Selective pressure on the Figure 8: Evolution of the best sender and receiver architecture according to convergence, and the evolution of the population's message description of the same input through iterations. The bold messages represent the message outputted by the best sender whose architecture is pictured above. The count of each message represents the number of agents in the population which uttered this exact sequence. language appears to be important: the ing languages are only easier to learn in the cu-best setup. 4 In addition, they show that the agents benefit also from the genetic evolution: the best accuracies are achieved in the co-evolution setup (red line). The difference between the cu-baseline (blue) and the co-baseline (brown) further shows that even if the evolved architectures are trained from scratch, they perform much better than a baseline model trained from scratch. 
The difference between the co-baseline-pretrained (only genetic evolution, purple line) and the co-evolution of agents and language line (red line) illustrates that genetic evolution alone is not enough: while a new evolved receiver certainly benefits from learning from a (from scratch) pretrained evolved sender, without the cultural transmission pressure, its performance is still substantially below a receiver that learns from an evolved sender whose language was evolved as well. In this paper, we introduced a language transmission bottleneck in a referential game, where new agents have to learn the language by playing with more experienced agents. To overcome such a bottleneck, we enabled both the cultural evolution of language and the genetic evolution of agents, using a new Language Transmission Engine. Using a battery of metrics, we monitored their respective impact on communication efficiency, degree of linguistic structure and intra-population language homogeneity. While we could find important differences between cultural evolution strategies, it is when we included genetic evolution that agents scored best. In a second experiment, we paired new agents with evolved languages and agents and again confirmed that, while cultural evolution makes a language easier to learn, co-evolution leads to the best communication. In future research, we would like to apply the Language Transmission Engine to new, more complex tasks and further increase our understanding of the properties of the emerged languages and architectures. Additionally, we would like to investigate other neuro-evolution techniques and apply them to different search spaces.
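As a compact recap of the Language Transmission Engine, one possible outline of the population training loop is sketched below. All callables (sample_batch, play_game, cull) are placeholders for the referential game, the Gumbel-Softmax training step and the culling step of Section 3; the sketch is illustrative rather than our exact implementation.

    import random

    def language_transmission_engine(senders, receivers, sample_batch, play_game,
                                     cull, iterations, culling_interval):
        # senders / receivers: lists of agents; play_game plays one referential
        # game between a sampled pair and applies a gradient update to both.
        for it in range(1, iterations + 1):
            sender, receiver = random.choice(senders), random.choice(receivers)
            play_game(sender, receiver, sample_batch())
            # Periodically cull: re-initialise agents (cultural evolution) or
            # replace the worst agents with mutations of the best (co-evolution).
            if it % culling_interval == 0:
                senders, receivers = cull(senders), cull(receivers)
        return senders, receivers

Because newly inserted agents must learn to play with the already-trained remainder of the population, cultural transmission is modelled implicitly by this loop.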
We enable both the cultural evolution of language and the genetic evolution of agents in a referential game, using a new Language Transmission Engine.
752
scitldr
The need for large amounts of training image data with clearly defined features is a major obstacle to applying generative adversarial networks(GAN) on image generation where training data is limited but diverse, since insufficient latent feature representation in the already scarce data often leads to instability and mode collapse during GAN training. To overcome the hurdle of limited data when applying GAN to limited datasets, we propose in this paper the strategy of \textit{parallel recurrent data augmentation}, where the GAN model progressively enriches its training set with sample images constructed from GANs trained in parallel at consecutive training epochs. Experiments on a variety of small yet diverse datasets demonstrate that our method, with little model-specific considerations, produces images of better quality as compared to the images generated without such strategy. The source code and generated images of this paper will be made public after review. Generative Adversarial Networks(GAN) BID5 ) are powerful unsupervised learning models that have recently achieved great success in learning high-dimensional distributions in various types of problems and on different datasets. In the context of image generation, the basic framework of a GAN model consists of two parts: a generator G that generates images by translating random input z into an image, and a discriminator D which determines the authenticity of a generated image x as compared to the real data. These two components are alternatively optimized against each other during the training process, with the goal of minimizing the difference between the distribution of generated image data and target distribution of real image data. A notable challenge in GAN training, however, lies in the need for large amounts of clearly labeled data to capture the diversity features across various types of images into the model. Such requirement makes it difficult or even impossible to utilize GAN in applications where the amount of available training data is small but diverse. Moreover, recent deep learning models BID6 ) have demonstrated tendencies of misrepresentation in classification tasks when influenced by adversarial noise. Such vulnerability may also translate to unsatisfactory image generation as most generative models are implemented with deep networks. Thus, given these considerations, we propose in this paper the strategy of parallel recurrent sample augmentation agnostic to specific model details. Our contributions can be summarized as follows:• We proposed a general black-box method using recurrent image addition to diversify training data and enhance its quality over a large class of GANs without model specifications.• We also includes in our model a novel K-fold parallel framework, which better augments training data by stabilizing model output and preventing overfitting.• Experiments across various datasets and GAN objectives demonstrate the effectiveness of our method using authenticity measures such as Inception Score and Frechet Inception Distance. Building reliable deep generative adversarial models on limited amounts of training data has been a persistent challenge within the research community. Previous efforts to address the issue of labeled data scarcity generally fall into two groups: optimizing the structures of GANs to allow for better feature representation of data, and augmenting the training data through techniques. 
Along the first line of research, prior research optimized the GAN in BID5 by considering stronger mathematical objectives for more powerful latent space representation in general BID14,, BID7, BID13 ). In addition, recent research on GANs BID8, BID14 ) reparametrized the input noise using variational inference by assuming that the latent space could be modeled by a tractable prior, but noise reparametrization has severe mathematical limitation that prevents applicability to more general models. Furthermore, distributed multi-discriminator models BID10, ) also enhance the performances, with great potential room for further optimization. For the second line of research, data augmentation has already enjoyed considerable success in the problem of image classification BID12 ). Traditional data augmentation methods such as crop, mirror, rotation and distortion BID12, BID17 ) generally require domain-specific expert knowledge and manual operations, and only produce limited variation in augmented images. Recent developments centered on automatic augmentation of training data using controlled RNNs after transforming the problem into policy search in reinforcement learning , but the space complexity the search algorithm requires still does not apply to quick augmentation with limited data. In this section we describe the details of Parallel Recurrent Data Augmentation(PRDA), which consists of recurrent image data construction via noise addition on images and parallel generation with fold division. Recent research suggests that adversarial noise to deep learning models greatly perturbs the performances of deep neural networks. In CNNs for classification, for instance, the neural network may assign irrelevant labels to semantically unambiguous images after the addition of noise BID6 ). This phenomeon is due to the omission of possible latent features in new data generation caused by over-dependency on our limited training data. Since virtually all GANs are implemented with deep networks, a similar deficiency in representation of latent feature space may translate to lower qualities of the generated images. To counter this effect, we consider the strategy of recurrent data augmentation, which constructs varied images given the limited training set by repeatedly generating and modifying these samples on training set for subsequent training sample generation. Running the original generative model for a fixed number of times, we extract sampled images using standard procedures of sample image generation as described in BID15, BID7. Random noise is then added to these samples to produce new images, which are then used for subsequent training. This procedure is repeated for a fixed number of times or until convergence. FIG0 is a flow-chart of our procedure. Notice that the addition of high dimensional normal random noise allows the additional images to retain the original latent features to be learned by GAN. Compared with traditional methods such as rotation, cropping and mirroring BID17, BID12 ) which may lead to information loss, random noise addition doesn't reduce the information about the latent features in training set while making the model more robust, because the expectation of noise is invariant at 0. Additionally, noise addition is agnostic to the type of generative model, since the procedure is independent from the specific choice of neural network or objective functions. 
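A minimal sketch of the recurrent augmentation loop described above is given below. It is framework-agnostic: train_gan and sample_images are placeholders for whatever GAN is being used, and the noise scale is an illustrative assumption (the method specifies zero-mean noise but not its magnitude).

    import numpy as np

    def recurrent_augmentation(train_images, train_gan, sample_images,
                               rounds=4, epochs_per_round=100, n_new=8,
                               noise_std=0.05, rng=np.random.default_rng()):
        gan = None
        for _ in range(rounds):
            # Train (or continue training) the GAN on the current, growing set.
            gan = train_gan(train_images, epochs=epochs_per_round, warm_start=gan)
            # Sample images from the generator and perturb them with zero-mean
            # Gaussian noise, so the augmented samples retain the learned latent
            # features in expectation while adding variation.
            samples = sample_images(gan, n_new)
            noised = samples + rng.normal(0.0, noise_std, size=samples.shape)
            train_images = np.concatenate([train_images, noised], axis=0)
        return gan, train_images

The defaults mirror the experimental protocol of Section 4 (8 noised images added every 100 epochs, repeated a handful of times).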
Additionally, we introduce a parallel data generation strategy inspired by K-fold cross validation in machine learning BID2 ). Dividing the training data into K folds at the beginning, we run in parallel K independent generators on K data groups, each consisting of K − 1 folds of the training set. When data is generated in each generator, the sample images produced by each generator at the given epochs are then added with random noise. These noised images, in turn, are fed back into the respective training data sets. To allow for maximal usage of each generated image, we insert the images such that the image generated by one generator goes to the augmented training set of all other K − 1 generators. This is to insure that the different generators in parallel have access to as many varied data pieces as possible in subsequent steps of training, so as to prevent overfitting and bolster the robustness of our model. Figure 2 demonstrates the mechanism of our algorithm. Notice that our K-fold division goes hand in hand with the recurrent data generation with no need for model specific considerations. As demonstrated by our experiments Section 4, training different GANs in parallel from different folds of data substantially boosts the quality of the training set and that of the generated images. For a comparative analysis, we have conducted experiments on previous GAN architectures over various datasets with/without data augmentation. The GANs we have tested on include DC-GAN BID15 ), BEGAN BID1 ), and WGAN-GP BID7 ). Additionally, to simulate limited data, we randomly select 5000 images from the datasets CIFAR-10, CelebA and Places to create their corresponding reduced datasets named reduced-CIFAR, reducedCelebA, and reduced-Places, and conduct our experiments on these limited datasets. All of our experiments are conducted with CPU Intel(R) Core(R) CPU 8700-K (3.7GHz) and GPU GTX 1080. In our experiments, we augment the training set with 8 noised images every 100 training epoches, and repeat the procedure 3-5 times. By comparison, the unaugmented GAN is run over the same initial training data, with the number of epochs the same as the product of 100 and augmentation times. FIG1,4,5 are some sample images that our method produces with the state-of-the-art GAN WGAN-GP as compared to the ones produced by GAN without data augmentation. We observe that GANs with parallel recurrent image augmentation produce semantically coherent and visually diverse images earlier than the unaugmented GANs, while able to avoid fluctuations seen in unaugmented GANs during training. To evaluate the quality of the images generated by our augmentation method as compared with those generated without augmentation, we use the Inception Score(IS) BID16 ) and Frechet BID9 ). IS measures the entropy of generated images, with higher scores indicating greater diversity. On the other hand, FID measures the distance between the generated data and real data with two respective means and variances. Thus, the larger the IS and the smaller the FID, the better the performances of the model. Table 2: IS/FID scores of GANs on Reduced-CIFAR with/without Augmentation, given K = 5 Table 1 lists the combinations of GAN and dataset we tested our strategy on, as well as the Inception Score and Frechet Inception Distance of the images that are generated with and without our method using the state-of-the-art GAN WGAN-GP. Table 2 lists IS and FID of different GAN models on the Reduced-CIFAR with/without data augmentation. 
Clearly, on a variety of GAN structures and Datasets, recurrent sample augmentation produces better images as measured quantitatively. In sum, our paper shows that parallel recurrent sample augmentation can significantly improve the quality of synthetic images for a large class of GAN models. Our strategy is not only simple to implement, but also agnostic to the specific type of GAN to be improved on. As a further step, we are investigating the relationship between our proposed approach and other established methods. One possible pathway, for instance, lies in reinforcement learning as described in BID3 that gives more control to image generation via reward designation. We also hope to apply our idea to other generative models such as the VAE BID11 ) and further optimize our strategy using recent theoretical advances.
We introduced a novel, simple, and efficient data augmentation method that boosts the performance of existing GANs when training data is limited and diverse.
753
scitldr
We develop a stochastic whole-brain and body simulator of the nematode roundworm Caenorhabditis elegans (C. elegans) and show that it is sufficiently regularizing to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations. This is the first attempt we know of to ``complete the circle,'' where an anatomically grounded whole-connectome simulator is used to impute a time-varying ``brain'' state at single-cell fidelity from covariates that are measurable in practice. Using state of the art Bayesian machine learning methods to condition on readily obtainable data, our method paves the way for neuroscientists to recover interpretable connectome-wide state representations, automatically estimate physiologically relevant parameter values from data, and perform simulations investigating intelligent lifeforms in silico. One of the goals of artificial intelligence, neuroscience and connectomics is to understand how sentience emerges from the interactions of the atomic units of the brain, to be able to probe these mechanisms on the deepest level in living organisms, and to be able to simulate this interaction ad infinitum. In this work, we assemble an anatomically grounded, interpretable probabilistic brainbody simulator for the widely studied nematode roundworm Caenorhabditis elegans (C. elegans). We then present methods for performing posterior inference in the time evolution of the state of the worm and estimate the global simulator parameter values from readily obtainable non-invasive calcium fluorescence data. We refer to using an anatomically grounded model to infer latent states and parameters, conditioned on partial data, as a "virtual patch clamp" (VPC). The VPC also facilitates in silico experimentation on "digital" C. elegans specimens, by programmatically modifying the simulator and observing the ing simulations; enabling rapid, wide-reaching, fully observable and perfectly repeatable exploration of hypotheses into the how the fundamental units of the neural circuit of C. elegans combine to create intelligent behaviour. Due to the simplicity and regularity of its anatomy, and its predictable yet sophisticated behavioural repertoire, C. elegans is used as a "model organism" across biology and neuroscience research. Notably, its connectome is regular across wild-type specimens and has been mapped at synapse and. (e) gap junction fidelity using electron microscopy. Because of this fixed architecture, neural circuit simulators, imbued with anatomically correct structure, have been developed to produce feasible whole C. elegans connectome simulators by leveraging highly accurate neural dynamics models. Likewise, its simple anatomy has allowed body and locomotion simulators to be developed. The first contribution of this paper is a new C. elegans simulator that integrates existing simulators and models developed by the C. elegans community. At a high level, our simulator is comprised of three components: a simulator for the time-evolution of the membrane potential and intracellular calcium ion concentration in all 302 C. elegans neurons, a simulator for the physical form of the worm and the associated neural stimuli and proprioceptive feedback, and a model relating the intracellular calcium to the observable florescence data. The first component of our model is a simulator of connectome-scale, single-neuron fidelity neural dynamics. We modify the simulator presented by Marblestone, which builds on Wicks et al., called'simple C. elegans' (SCE). 
SCE is designed to be an easily interpretable simulator of C. elegans neural membrane potential dynamics via single-compartment neuron models connected by chemical synapses and electrical gap junctions. Exemplar voltage traces generated by our simulator are shown as black dashed lines in Figure 1 (b). We add to SCE a model for intracellular calcium ion concentration. We also integrate a simulator of the body shape of the worm, WormSim. WormSim models the body shape in two dimensions as a series of rods, contractile units and springs driven by impulses generated by a simplified neural network. We integrate the anatomically correct representation used by SCE to drive WormSim and receive proprioceptive feedback. A typical evolution of body state is shown in black in Figure 1 (e). Finally we incorporate a model of the fluorescence signals observed through calcium imaging. This dependence is described by a saturating Hill-type conditioned on intracellular calcium concentration, where only M of the 302 neurons are observed and identified (here M = 49, see Kato et al. ). To summarize our model, the neuron states, body state, and proprioceptive feedback define the latent "brain" and "body" state of the worm, denoted at time t as x t ∈ R 994. The observed data, y t ∈ R M +, is the calcium florescence signal. We now demonstrate how the tools of Bayesian inference can be employed to condition simulations on partial observations, make predictions conditionally or unconditionally, and perform marginal maximum a posteriori parameter estimation. The second contribution of this paper is the adoption and scaling of a method to impute the entire latent state, x t, conditioned on observable calcium imaging florescences. We wish to quantify the distribution over the latent states conditioned on the observed data, referred to as the posterior distribution p(x 0:T |y 1:T, θ). To relate how this achieves to our outlined objectives, this represents, under the model, the distribution over all latent neural and physiological states, x t, conditioned on the observed data, providing the imputation element of the VPC. Forward simulation of the particles initialized from the posterior distribution at T provides posterior predictive inference over state evolution, where, for instance, physiological variables can be programmatically clamped (inspiring the name VPC). Finally the posterior, p(θ|y 1:T) = p(y 1:T |θ)p(θ), allows us to objectively compare models and hypotheses, which will be used later for parameter estimation. Due to the non-invertible, non-differentiable nature of the simulator, we use sequential Monte Carlo (SMC) for estimating the posterior as a weighted discrete measure approximating the target distribution, as well as providing an estimation of the model evidence, p(y 1:T |θ). In our first experiment we first generate a synthetic state trajectory by sampling from the model, and then recover the known ground-truth trajectory from observed fluorescence traces using a fixed model. Specifically we condition on the same 49 neurons identified in the calcium imaging data released by Kato et al.. Results for this are shown in Figure 1(b), where the true state is shown in black, while the filtering distribution recovered by SMC is shown in blue. The blue reconstructions are congruent with the black trace, indicating that the latent behaviour of the complete system is being well-reconstructed despite partial observability. 
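Two pieces of the pipeline described above lend themselves to a short sketch: the saturating Hill-type map from intracellular calcium to fluorescence, and the bootstrap SMC sweep used to approximate the filtering distribution and the log evidence. The Hill constants, the particle count, and the `sample_x0`/`transition`/`log_lik` callables are illustrative assumptions standing in for the simulator components, not the exact quantities used in the paper.

```python
import numpy as np

def hill_fluorescence(ca, f_max=1.0, k_d=0.3, n=2.0):
    """Saturating Hill-type map from intracellular calcium concentration to an
    observable fluorescence level (constants are placeholders)."""
    ca = np.asarray(ca, dtype=float)
    return f_max * ca ** n / (ca ** n + k_d ** n)

def bootstrap_smc(y, sample_x0, transition, log_lik, n_particles=500):
    """Bootstrap SMC sweep: approximates the filtering distributions
    p(x_t | y_{1:t}) and the log evidence log p(y_{1:T} | theta).
    `transition` propagates particles through the simulator dynamics and
    `log_lik(y_t, particles)` scores them, e.g. under a noise model around
    hill_fluorescence of the particles' calcium states."""
    particles = sample_x0(n_particles)                 # (n_particles, state_dim)
    log_evidence, filtered = 0.0, []
    for y_t in y:
        particles = transition(particles)              # propose from the prior dynamics
        logw = log_lik(y_t, particles)                 # observation weights, shape (n_particles,)
        m = logw.max()
        w = np.exp(logw - m)
        log_evidence += m + np.log(w.mean())           # incremental evidence estimate
        w /= w.sum()
        idx = np.random.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                     # multinomial resampling
        filtered.append(particles.copy())
    return filtered, log_evidence
```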
Critically, neurons not directly connected to observed neurons (for instance VD6) are correctly reconstructed, indicating that the regularizing capacity of the model is sufficient to constrain these variables. Further confirmation of the power of this method can be seen in the leftmost column of Figure 1(e), showing the predicted body shape closely matches the true state, despite not being explicitly conditioned upon body shape. This experiment shows that the VPC is tractable and is capable of yielding high-fidelity reconstructions of pertinent latent states given partial calcium imaging observations via the application of Bayesian inference to time series models of C. elegans. The posterior inference and evidence approximation presented in the previous section is useful for imputing values and performing in silico experimentation. In the previous section we fixed the model to demonstrate the viability of SMC for latent state imputation. We now allow the parameters of the simulator, collectively denoted θ, such as the non-directly observable electrical and chemical characteristics of individual synapses in the C. elegans connectome, as well as parameters of the body model, the calcium fluorescence model, etc, to be unknown and hence must be learned. We conclude this paper by taking concrete steps towards performing such parameter estimation, as defined by the simulator-structured hypothesis class defined by the chosen model. Our goal is to estimate the best simulator parameters θ * given observed data, i.e. θ * = argmax θ p(θ|y) = argmax θ p(y|θ)p(θ). The method we employ for performing parameter estimation is a novel combination of variational optimization (VO) and SMC evidence approximation. This in a stochastic gradient for parameter estimation that does not require a differentiable simulator and can deal with a large number of latent variables. VO starts with the following bound The gradient of U (φ) with respect to φ can then be computed as where Monte Carlo integration is used to evaluate this expectation. The objective function is the joint density f (θ) = −p(y, θ) = −p(y|θ)p(θ), where the likelihood term is approximated via SMC. To our knowledge, this is the first time that pseudo-marginal methods have been paired with variational optimization methods. We refer to this procedure as particle marginal variational optimization (PMVO). We implement a framework for embarrassingly parallel evaluation of multiple SMC sweeps on large, distributed high performance compute clusters, where each SMC sweep is executed on a single node, eliminating network overheads. We conclude by demonstrating the utility of our PMVO technique by recovering known simulator parameters on synthetic data generated by the model. For this work, we optimize the two parameters we introduced by integrating SCE and WormSim, namely the strength of motor stimulation, w m, and proprioceptive feedback, w s. The of this experiment are shown in Figure 1. Figures 1(c) and 1(e) show the imputed voltage traces and body poses when using the true parameters (blue), initial parameters (red) and optimized parameters (green), conditioned on just 49 neurons. Recovery of "good" parameter values facilitates good imputation of latent states, especially for body position which is not explicitly conditioned on after initialization. Figure 1(d) shows the distribution of convergence of the two parameters towards the true value. This experiment shows that parameter inference in C. elegans models using PMVO is viable. 
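One way to realize the PMVO gradient step described above is a score-function estimator around a fixed-variance Gaussian proposal, as in the sketch below; the proposal variance, sample count, and learning rate are assumptions, and `log_joint` is expected to wrap an SMC evidence estimate such as the one sketched earlier.

```python
import numpy as np

def pmvo_step(phi, log_joint, sigma=0.1, n_samples=16, lr=1e-2):
    """One particle marginal variational optimization step (sketch).
    `log_joint(theta)` returns log p(y | theta) + log p(theta), with the
    likelihood term estimated by an SMC sweep (e.g. bootstrap_smc above).
    The proposal q_phi(theta) is a Gaussian with mean phi."""
    thetas = phi + sigma * np.random.randn(n_samples, phi.size)
    f = np.array([log_joint(t) for t in thetas])       # pseudo-marginal objective values
    f = f - f.mean()                                   # baseline for variance reduction
    grad = (f[:, None] * (thetas - phi) / sigma ** 2).mean(axis=0)  # score-function gradient
    return phi + lr * grad                             # ascend the evidence bound
```

Because the gradient is taken with respect to the proposal parameters rather than the simulator state, no differentiable simulator is required, which is the point made in the text.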
Increasing the number of particles used in the SMC sweeps, the number of samples drawn from the proposal and observing more neurons (although currently logistically infeasible) improves the quality of the reconstructions and recovery of parameters. In this work we have explored performing Bayesian inference in whole-connectome neural and whole-body C. elegans simulations. We describe the model-based Bayesian inference aspect of this as a "virtual patch clamp," whereby unobserved latent membrane potentials can be inferred from partial observations gathered non-invasively. Our choice of inference method facilitates estimation of the model evidence, a measure of how well the model explains the observed data. We presented a method for maximizing this evidence without requiring differentiable simulation components. In the past year several articles discussing open research issues pertaining to C. elegans simulation have been produced by the C. elegans community. Figure 1 (a) outlines the community planned development pipeline for C. elegans simulation. Our work addresses the implementation of the box simply labelled "optimization." We show on representative synthetic data that our method is capable of performing such an optimization. This approach promises to allow neuroscientists to peer deeper into the neural function of a living organism, testing hypothesis on neural function that were previously unreachable. It is widely touted that convolutional neural networks were developed by wide-scale study of the V1 cortex. We believe connectome-level optimization and simulation, as demonstrated here, is the next step in neuroscience to understanding the very root of intelligence, but also discovering and developing techniques building towards artificial general intelligence.
We develop a whole-connectome and body simulator for C. elegans and demonstrate joint state-space and parameter inference in the simulator.
754
scitldr
The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance. However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, ing in sub-optimal graph embeddings. Inspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weakness in existing GNN-based graph embeddings algorithms. By extracting node features in the form of capsules, routing mechanism can be utilized to capture important information at the graph level. As a , our model generates multiple embeddings for each graph to capture graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs with various sizes which also enables the model to focus on critical parts of the graphs. Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that operates to capture macroscopic properties of the whole graph by data-driven. It outperforms other SOTA techniques on several graph classification tasks, by virtue of the new instrument. GNN is a general type of deep-learning architectures that can be directly applied to structured data. These architectures are mainly generalized from other well-established deep-learning models like CNN BID9 and RNN BID12. In this paper, we mainly focus on Convolution-based Graph Neural Networks which attract increasing interest recently. Convolution operation can be embedded into Graph Neural Networks from spectral or spatial perspective. BID1 defines the convolution operation in the Fourier domain which needs to calculate the eigendecomposition of the graph Laplacian. This method is computationally expensive and the filters they defined are non-spatially localized. Later, BID4 introduces Chebyshev expansion of the graph Laplacian to avoid computing eigenvectors and BID8 proposes to do convolution within 1-step neighbor nodes to reduce the complexity. From the spatial perspective, BID3 and propose to define a node receptive-field and do convolution within this field during which the information of each node as well as their neighbor nodes is gathered and new representation of each node is generated through an activation function. Both of these two perspectives perform well in node representation learning and a number of variants BID20 are developed based on the convolution idea and some of them have proven to achieve SOTA in various tasks. The success of GNN in node representation learning has inspired many deep-learning-based approaches to leverage on node embeddings extracted from GNN to generate graph embeddings for graph-based applications. However, during this procedure, the learned representation of each node will be considered as multiple individual scalar features instead of one vector. For example, applies element-wise max-pooling to nodes embeddings when generating graph embeddings, BID22 generates graph embeddings by computing the element-wise covariance of all nodes. These operations indicate that the authors capture node features in the form of scalar when they generate graph embeddings which may not suffice to preserve the node/graph properties efficiently. 
To build high-quality graph embeddings, it is important to not only detect the presence of different structures around each node but also preserve their detailed properties such as position, direction, connection, etc. However, encoding these properties information in the form of scalar means activating elements in a vector one-by-one which is exponentially less efficient than encoding them with distributed representations. This has been identified discussed in BID16. Inspired by CapsNet, we propose to extend scalar to vector during the procedure of applying GNN to graph representation learning. Compared with scalar-based neural network, vector-based neural network preserves the information of node/graph properties more efficiently. The technique for extracting features in the form of vectors is proposed in BID5 and improved in BID16 and BID6. This technique is mainly devised for image processing. In their work, the extracted vector is referred to as capsule (a group of neurons in neural network), so we follow the same notation in our work. Introducing capsules allows us to use routing mechanism to generate high-level features which we believe is a more efficient way for features encoding. Compared with max-pooling in CNN in which all information will be dropped except for the most active one, routing preserves all the information from low-level capsules and routes them to the closest high-level capsules. Besides, this allows to model each graph with multiple embeddings and each embedding reflects different properties of the graph. This is more representative than only one embedding used in other scalar-based approaches. In this paper, we propose Capsule Graph Neural Network (CapsGNN), a novel deep learning architecture, which is inspired by CapsNet and uses node features extracted from GNN to generate high-quality graph embeddings. In this architecture, each graph is represented as multiple embeddings and each embedding reflects the graph properties from different aspects. More specifically, basic node features are extracted in the form of capsules through GNN and routing mechanism is applied to generate high-level graph capsules as well as class capsules. In the procedure of generating graph capsules, an Attention Module can be applied to tackle graphs in various sizes. It also assigns different weights to each capsule of each node so that this model focuses on critical parts of the graph. We validate the performance of generated graph embeddings on classification task over 5 biological datasets and 5 social datasets. CapsGNN achieves SOTA performance on 6 out of 10 benchmark datasets and comparable on the rest. T-SNE BID11 ) is used to visualize the learned graph embeddings and the show that different graph capsules indeed capture different information of the graphs. Here, we provide a brief introduction to Graph Convolutional Networks (GCNs) BID8, routing mechanism in CapsNet and Attention mechanism which is used in CapsGNN. By definition, a weighted directed graph can be represented by G = (V, X, A) where V = {v 1, v 2, ...v N} is the set of nodes and A ∈ {0, 1} N ×N is the adjacency matrix. If there is an edge from v i to v j, then A ij = 1 otherwise A ij = 0. X ∈ R N ×d represents the features of each node. d is the number of feature channels and N is the number of nodes. GCN, a widely used GNN architecture, is chosen as one of the key building blocks in our work. 
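For reference, a single GCN layer of the kind this architecture builds on can be sketched as below; the layer-wise propagation rule it implements is spelled out in the passage that follows, and the symmetric renormalization with self-loops is one common choice of information transform matrix T, not the only one compatible with the text.

```python
import numpy as np

def gcn_layer(A, Z, W, f=np.tanh):
    """One GCN layer Z_{l+1} = f(T Z_l W_l), with T derived from the adjacency
    matrix A. Here T is the common symmetric renormalization
    D^{-1/2}(A + I)D^{-1/2}; other transform matrices fit the same template."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    T = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return f(T @ Z @ W)
```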
At each layer of the GCN, the convolution operation is applied to each node as well as its neighbors and the new representation of each node is computed through an activation function. This procedure can be written as: DISPLAYFORM0 where Z l ∈ R N ×d represents nodes features at the layer l, d represents the number of feature channels and Z 0 = X, W l ∈ R d×d is a trainable weights matrix which serves as a channel filter, f is a nonlinear activation function, T ∈ R N ×N is the information transform matrix and it is usually calculated from the adjacency matrix A for guiding the information flowing between nodes. A complete GNN usually stacks L layers to generate final nodes embeddings Z L. In the architecture proposed by BID8, at the lth layer of GCN, the extracted features of each node actually take all its adjacent nodes within l steps into consideration. So l can be considered as the size of the node receptive-field at this layer. This special property inspired us to use nodes features extracted from different layers to generate the graph capsules. The concept of capsules is invented by Hinton's team BID5 and used recently in BID16 and BID6. CapsNet is designed for image features extraction and it is developed based on CNN. However, unlike traditional CNN in which the presence of feature is represented with scalar value in feature maps, the features in CapsNet are represented with capsules (vectors). In BID16, the direction of capsules reflects the detailed properties of the features and the length of capsules reflects the probability of the presence of different features. The transmission of information between layers follows Dynamic Routing mechanism. The specific procedure of Dynamic Routing can be found in Appendix A for the completeness. Inspired by CapsNet, the capsule mechanism is adopted and fused with GNN in our proposed Caps-GNN to generate graph capsules and class capsules on the basis of node capsules which are extracted from GNN. Dynamic Routing is applied to update weights between capsules from one layer to the next layer so that the properties captured by node capsules can be propagated to suitable graph capsules. Thus, each graph is modeled as multiple graph capsules, and then modeled as multiple class capsules. Different graph capsules reflect the properties of the graph from different aspects. Attention mechanism is widely applied in image BID26 and natural language processing domain BID2 where it is used to find the relevant parts of the input data to the task target. The main procedure of Attention mechanism is: 1) defining an attention measure which is used to measure the relevance of each part of the input data to the task target. 2) normalizing the generated attention value. 3) scaling each part with the normalized attention value. In CapsGNN, we apply Attention mechanism for two purposes: 1) scaling each node capsule so that the graph capsules that are generated from different graphs are still comparable even though these graphs are vastly different in sizes. 2) guiding the model to focus on more relevant parts of graphs. In this section, we outline CapsGNN and show how it is used to generate high-quality graph capsules which then can be applied to graph classification task. Figure 1 shows a simplified version of CapsGNN. It consists of three key blocks: 1) Basic node capsules extraction block: GNN is applied to extract local vertices features with different receptive-field and then primary node capsules are built in this block. 
2) High level graph capsules extraction block: Attention Module and Dynamic Routing are fused to generate multiple capsules for graphs. 3) Graph classification block: Dynamic Routing is applied again to generate class capsules for graph classification. The details of each block is explained in the following. Firstly, the basic node features are extracted with GNN. Node degrees can be used as node attributes if nodes do not have attributes. We use the architecture improved by BID8 (GCN) as the node features extractor. The difference is that we extract multi-scale node features from different layers and the extracted features are represented in the form of capsules. The procedure can be written as: DISPLAYFORM0 Figure 1: Framework of CapsGNN. At first, GNN is used to extract node embeddings and form primary capsules. Attention module is used to scale node embeddings which is followed by Dynamic Routing to generate graph capsules. At the last stage, Dynamic Routing is applied again to perform graph classification.where W l ij ∈ R d×d is the trainable weights matrix. It serves as the channel filters from the ith channel at the lth layer to the jth channel at the (l + 1)th layer. Here, we choose f (·) = tanh(·) as the activation function. DISPLAYFORM1 To preserve features of sub-components with different sizes, we use nodes features extracted from all GNN layers to generate high-level capsules. After getting local node capsules, global routing mechanism is applied to generate graph capsules. The input of this block contains N sets of node capsules, each set is S n = {s 11, .., DISPLAYFORM0, where C l is the number of channels at the lth layer of GNN, d is the dimension of each capsule. The output of this block is a set of graph capsules H ∈ R P ×d . Each of the capsules reflects the properties of the graph from different aspects. The length of these capsules reflects the probability of the presence of these properties and the angle reflects the details of the graph properties. Before generating graph capsules with node capsules, an Attention Module is introduced to scale node capsules. Attention Module. In CapsGNN, primary capsules are extracted based on each node which means the number of primary capsules depends on the size of input graphs. In this case, if the routing mechanism is directly applied, the value of the generated high-level capsules will highly depend on the number of primary capsules (graph size) which is not the ideal case. Hence, an Attention Module is introduced to combat this issue. The attention measure we choose is a two-layer fully connected neural network F attn (·). The number of input units of F attn (·) is d × C all where C all = l C l and the number of output units equals to C all. We apply node-based normalization to generate attention value in each channel and then scale the original node capsules. The details of Attention Module is shown in FIG0 and the procedure can be written as: DISPLAYFORM1 wheres n ∈ R 1×C all d is obtained by concatenating all capsules of the node n. DISPLAYFORM2 represents the ith capsule of the node n and F attn (s n) ∈ R 1×C all is the generated attention value. In this way, the generated graph capsules can be independent to the size of graphs and the architecture will focus on more important parts of the input graph. The structure of Attention Module. We first flatten primary capsules and apply two layer fully-connected neural network to generate attention value for each capsule. 
Node-based normalization (normalize each row here) is applied to generate final attention value. Scaled capsules are calculated by multiplying the normalized value with primary capsules. After Attention Module, coordinate addition module can be used to preserve the position information of each node during the procedure of generating node capsule votes. Here, we introduce coordinate addition as an additional module and more details can be found in Appendix C.The procedure of generating multiple graph capsules is summarized as follows: 1) Scale primary capsules: Apply Attention Module to scale primary capsules. The of this module should be S ∈ R N ×C all ×d.2) Calculate votes: When calculating votes, capsules of different nodes from the same channel share the transform matrix. The of this step is a set of votes V ∈ R N ×C all ×P ×d where C all denotes the number of channels. P denotes the defined number of graph capsules.3) Dynamic Routing Mechanism: High-level graph capsules are computed with the procedure introduced in Section 2.3 based on votes produced in previous steps. This block is designed for graph classification using the graph capsules. Classification Loss. Dynamic Routing is applied again over graph capsules to generate final class capsules C ∈ R K×d, where K is the number of graph classes. Here, we use margin loss function proposed in BID16 to calculate the classification loss and it is computed as: DISPLAYFORM0 where m + = 0.9, m − = 0.1 and T k = 1 iff the input graph belongs to class k. λ is used to stop initial learning from reducing the length of all class capsules especially when K is large. Reconstruction Loss. Following BID16, we use reconstruction loss as regularization method. Here, all class capsules are masked except the correct one and it is decoded with two fullyconnected layer to reconstruct the input information. The information we reconstruct here is the histogram of input nodes. The procedure can be written as: DISPLAYFORM1 where m i represents the number of nodes with the attribute i appear in the input graph, d i is the corresponding decoded value. M P i = 1 iff input graph contains nodes with attribute i. Equation 5 is used to prevent reducing reconstruction loss from setting all decoded value as 0 especially when most of the elements of the ground truth are 0.The architecture details presented in section 3 describe the key design idea of CapsGNN which is based on the fusing of GNN and CapsNet. We also present a general comparison between CapsGNN with existing approaches in Appendix D. We verify the performance of the graph embeddings extracted from CapsGNN against a number of SOTA approaches and some classical approaches on classification task with 10 benchmark datasets. Besides, we conduct experimental study to assess the impact of capsules in efficiency of encoding features of graphs. We also conduct brief analysis on the generated graph/class capsules. The experimental and analysis is shown in the following. In addition to the analysis of the whole framework, we also provide a comparison experiment to evaluate the contribution of each module of CapsGNN with classification task. More details can be found in Appendix F. The goal of graph classification is to predict the classes these graphs belong to by analyzing the structure and nodes labels information of graphs. More specifically, given a set of labeled graphs DISPLAYFORM0 The objective of graph classification is to find a mapping f such that f: G → Y. 
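The two mechanisms described above, routing-by-agreement between capsule layers and the margin loss on class-capsule lengths, can be sketched as follows; array shapes and the squashing non-linearity follow the CapsNet convention and are assumptions where the text leaves them implicit.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Keep capsule orientation, map length into [0, 1)."""
    norm2 = (s ** 2).sum(axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(votes, n_iter=3):
    """Routing-by-agreement. `votes` has shape (n_children, n_parents, d) and
    holds the prediction vectors obtained by multiplying children capsules
    with the trainable transform matrices."""
    n_children, n_parents, _ = votes.shape
    logits = np.zeros((n_children, n_parents))         # routing logits r_ij
    for _ in range(n_iter):
        c = np.exp(logits)
        c /= c.sum(axis=1, keepdims=True)               # softmax over parent capsules
        parents = squash((c[..., None] * votes).sum(axis=0))
        logits += (votes * parents[None]).sum(axis=-1)  # agreement update
    return parents                                      # (n_parents, d)

def margin_loss(class_caps, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss on class-capsule lengths; `targets` is one-hot with shape
    (batch, n_classes) and class_caps has shape (batch, n_classes, d)."""
    lengths = np.linalg.norm(class_caps, axis=-1)
    loss = (targets * np.maximum(0.0, m_pos - lengths) ** 2
            + lam * (1 - targets) * np.maximum(0.0, lengths - m_neg) ** 2)
    return loss.sum(axis=-1).mean()
```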
We compare CapsGNN with both kernel-based and deep-learning-based algorithms. The details are given as follows:Kernel-based Methods: Three kernel-based algorithms, namely the Weisfeiler-Lehman subtree kernel (WL) BID18, the graphlet count kernel(GK) BID17, and the Random Walk (RW) BID23. Typically, kernel-based algorithms first decompose graphs into sub-components based on the kernel definition, then build graph embeddings in a feature-based manner. Lastly, some machine learning algorithms (i.e., SVM) are applied to perform graph classification. Deep-Learning-based Methods: Three types of deep-learning-based algorithms are selected: 1) Graph2vec BID14, Deep Graph Kernel (DGK) BID24 and AWE BID7. Graph2vec, DGK and AWE require extracting substructures in advance while Graph2vec and AWE learn the representations of graphs in the manner of Doc2vec BID10, DGK applies Word2vec BID13 to learn the similarity between each pair of sub-structures which will be used to build the graph kernel. Then kernel-based machine learning methods (i.e., SVM) are applied to perform graph classification. These three algorithms as well as kernel-based methods are all sub-components based and they all require two stages to do graph classification. So although Graph2vec, DGK and AWE apply learning approaches to learn the embeddings, we still consider them and other kernel-based algorithms as the same type in our experiments and we mainly compare our proposed architecture with the other remained methods which are all end-to-end and totally data-driven architectures.2) PATCHY-SAN (PSCN) BID15. This method first sorts all nodes, then defines a receptive-field size for each node. These receptive-field are then filled with sorted neighbor nodes. Lastly, 1-D CNN is applied to perform graph classification.3) GCAPS-CNN BID22, Dynamic Edge CNN (ECC) BID19 and Deep Graph CNN (DGCNN). These methods are all GNN-based algorithms. GCAPS-CNN first extract FGSD BID21 features for nodes that do not have attributes and then generate capsules for each node with higher-order statistical moment value of its neighbor nodes. At the last layer, they calculate covariance between all nodes to generate graph embeddings. ECC extracts node features on the condition of edge labels in GNN and then apply multi-scale pyramid structure to coarsen the graph. It uses average pooling at the last layer to generate graph embeddings. DGCNN generates nodes embeddings through a multi-layer GNN and combine features extracted from all layers. Then they order the nodes based on the embeddings extracted from the last layer which is followed by 1-D CNN. Five biological graph datasets: MUTAG, ENZYMES, NCI1, PROTEINS, D&D and five social network datasets: COLLAB, IMDB-B, IMDB-M, RE-M5K, RE-M12K BID24 are used for our experimental study. Details of these datasets can be found in Appendix B.We applied 10-fold cross validation to evaluate the performance objectively. Each time we use 1 training fold as validation fold to adjust hyper-parameters, 8 training fold to train the architecture and the remained 1 testing fold to test the performance. We stop training when the performance on the validation fold reaches to the highest. Then we use the accuracy on the test fold as our test . The final is the average of these 10 test accuracy. By default, we use the reported in the original work for baseline comparison. However, in cases where the are not available, we use the best testing reported in BID22, and BID7. 
More details about experimental setting can be found in Appendix E. CapsGNN achieves the SOTA performance on social datasets. More specifically, we are able to improve the classification accuracy by a margin of 2.78% and 5.30% on RE-M5K and RE-M12K respectively. This demonstrates that learning features in the form of capsules and modeling a graph to multiple embeddings is beneficial to capture macroscopic properties of graphs which are more important in classifying social networks. These also consistent with the property of CapsNet, as it focuses more on extracting important information from children capsules by voting. However, applying routing to the whole graph leads to preserve all the information at a graph level and this property is not suitable to give prominence to individual fine structures which might be more important to biological datasets analysis. This in less robust of CapsGNN on biological datasets. Despite this, the performance of CapsGNN in graph classification task still demonstrates its capability of graph representation especially its high potential of large graph dataset analysis. The main objective of this experiment is to examine the efficiency of capsules in encoding graph features. More efficient in feature encoding here means representing more information with the similar number of neurons. We construct a scalar-based neural network for each CapsGNN and then compare the CapsGNN with its related scalar-based architecture by comparing their training and testing accuracy on graph classification task to demonstrate the efficiency in feature representation. More specifically,these scalar-based architectures are designed by replacing the graph capsules block and the class capsules block in CapsGNN with fully-connected layers (FC). In this case, the only difference between each pair of CapsGNN and its corresponding scalar-based architecture is that CapsGNN represents features with vectors and uses routing to propagate information between layers while the scalar-based architecture encodes features with scalar values. In this experiment, the number of layers of GNN is set as L = 3, the number of channels at each layer is all set as C l = 2. We construct different CapsGNNs by adjusting the dimension of nodes (d n) and graphs (d g) capsules and the number of graph capsules (P). The size of FC in scalar-based architectures is adjusted based on the size of CapsGNNs so that they have comparable number of trainable weights. Other hyper-parameters are the same as Appendix E. The details of the tested architectures are shown in TAB3. Besides, NCI1 dataset, which has more than 4000 graphs, is used for the test. The accuracy of NCI1 on various architectures can be found in FIG2. In TAB3 and FIG2, the setting of different architectures is represented as d n -d g -P. Here, we choose the simplest setting as an example: 2-4-2 means that the dimension of nodes capsules is d n = 2, the dimension of graph and class capsules is d g = 4 and the number of graph capsules P equals to 2. Besides, we set the dimension of FC of its corresponding scalar-based architecture as 12 so that they have comparable number of trainable weights. In this case, each graph is modeled as 2 4-dimensional graph embeddings in the CapsGNN or 1 12-dimensional graph embedding in its corresponding scalar-based architecture. Both architectures are sub-optimal to represent the whole dataset while CapsGNN can still reach higher accuracy compared with the scalar-based architecture. 
As we can see from Figure dimension of FC is slightly higher than the dimension of graph embeddings in CapsGNN, CapsGNN can still reach higher accuracy which indicates that CapsGNN is more powerful in representing the whole dataset. When we keep increasing the number of graph capsules in CapsGNN and enlarging the dimension of FC in scalar-based architectures, the difference between the dimension of graph embeddings and the size of FC becomes larger, their training accuracy will be closer. It is noted that the training accuracy of scalar-based architectures is slightly higher than the CapsGNNs when the dimension of FC is about 20% larger than the dimension of graph capsules. In this experiment, we use extremely simple architectures on purpose to simulate the situation where we need to model complex datasets with relatively simple architectures. Since each pair of CapsGNN and its corresponding scalar-based architecture have similar structure and comparable number of trainable weights, the higher training accuracy and testing accuracy of CapsGNN demonstrate its efficiency in feature encoding and its strong capability of generalization. CapsGNN leverages on capsules idea to get multiple embeddings for each graph so that complex information underlying graphs can be captured more effectively. To explore the properties of the extracted graph/class capsules, we plot the graph distribution based on capsules extracted from different channels with t-SNE. Due to space constrain, we only take REDDIT-M12K as an example. We choose to depict the distribution of graphs which are generated from 3 categories, namely atheism, IAmA and mildlyinteresting with capsules extracted from the 1st, 2nd, 11th, 14th channel of graph capsules. As we can see from TAB4, different channels of capsules represent different aspects of graph properties. atheism and IAmA can be discriminated obviously with capsules extracted from the 11th and the 14th channels while they are hard to be separated with capsules extracted from the 1st and the 2nd channels. However, atheism and mildlyinteresting can be discriminated with the capsules extracted from the 1st and the 2nd channels while they are mixed in the 11th and the 14th channels which is opposite to the case of atheism and IAmA. This phenomenon can also be observed in other multi-class datasets. It is still hard to figure out the specific aspects these capsules focus on. However, compared with scalar-based neural networks, modeling an object with multiple embeddings makes it possible to explore the meaning of each channel which may lead the model to learn more interpretable embeddings in the future. DISPLAYFORM0 As we can see from TAB5, different class capsules focus on different classification-related graph properties. For example, the capsules that represent athesism (first column) can well discriminate athesism (red) from the other two types of graphs while IAmA (green) and mildlyinteresting (blue) are mixed in this channel. The similar phenomenon can also be found in other class capsules. Besides, when we concatenate the capsules of these three classes together, three types of graphs can be well discriminated with the concatenated capsules which also directly reflect the classification performance. This property is quite different from standard scalar-based architectures where each graph is modeled with only one graph embedding 2. 
By introducing the concept of capsules, the graph and class capsules can not only preserve classification-related properties of each graph (reflected with the length of class capsules) but also other properties information (reflected with the angle of class capsules). The generated class capsules can also be useful in other follow-up work and we leave this to be explored in the future. We have proposed CapsGNN, a novel framework that fuses capsules theory into GNN for more efficient graph representation learning. Inspired by CapsNet, the concepts of capsules are introduced in this architecture to extract features in the form of vectors on the basis of nodes features extracted from GNN. As a , one graph is represented as multiple embeddings and each embedding captures different aspects of the graph properties. The generated graph and class capsules can preserve not only the classification-related information but also other information with respect to graph properties which might be useful in the follow-up work and we leave this to be explored in the future. We believe this is a novel, efficient and powerful data-driven method to represent high-dimensional data such as graphs. Our model has successfully achieved better or comparable performance when compared with other SOTA algorithms on 6 out of 10 graph classification tasks especially on social datasets. Compared with similar scalar-based architectures, CapsGNN is more efficient in encoding features and this would be very beneficial for processing large datasets. The specific procedure of routing is shown in Algorithm 1.Algorithm 1 Dynamic routing mechanism returns parent capsules H given children capsules S, a set of trainable transform matrices W and the number of iterations t.1: procedure DYNAMIC ROUTING(t, S, W)2:for all children capsule i: DISPLAYFORM0 for all children capsule i to all parent capsule j: r ij ← 0 4: DISPLAYFORM1 for all children capsule i:r i ← sof tmax(r i)6:for all parent capsule j: DISPLAYFORM2 for all parent capsule j: DISPLAYFORM3 for all children capsule i to all parent capsule j: r ij ← r ij +h DISPLAYFORM4 end for 10:returnh j 11: end procedure The details of benchmark datasets we use in our experiment is shown in TAB6. FIG3. This module is not necessary in some datasets. Here, we propose this module as a selective optimization. When the GNN goes deeper, the extracted nodes features contain more specific position information of each node. Inspired by where the node embeddings learned from the last layer of GNN are taken to order all nodes, we also take the capsules extracted from the last layer of GNN as the position indicators of corresponding nodes by concatenating it with each capsule of the node. The procedure of calculating votes with node position indicators can be written as: DISPLAYFORM0 where v (n,i)j ∈ R 1×(dn+dp) represents the node capsule vote from the ith channel of the nth node to the jth channel of graph capsules. W n ij ∈ R d×dn and W p j ∈ R d×dp are the transform matrices. s (n,i) is the same as introduced in Section 3.2 and represents concatenate operation. Here, we present a general comparison between CapsGNN with existing approaches. BID0, BID19 and (GNN-based graph representation learning architectures), CapsGNN represents node features in the form of capsules. This is helpful to preserve the properties information contained in nodes more efficiently when generating graph embeddings. 
Besides, each graph is modeled as multiple embeddings in CapsGNN instead of only one embedding used in other approaches. This allows us to capture information of graphs from different aspects. The second difference is that, in these approaches, each part of the graph is given equal importance. However, the attention mechanism used in CapsGNN allows it to assign various weights to different nodes. This leads the model to focus on critical parts of input graphs. Lastly, different from BID0 and BID19, uses node features extracted from multiple layers of GNN so that different size of receptive-fields are applied to preserve more information. 2) GCAPS-CNN proposed by BID22 also introduced capsule-related concept into graph representation learning. However, they generate capsules in a feature-based manner instead of learning capsules as distributed embeddings. More specifically, when they extend a scalar feature to a capsule for the node n, P higher-order statistical moment value is calculated based on its neighbor nodes and these P value is concatenated to a P -dimensional capsule. Between layers, GCAPS-CNN performs dimension reduction to compress capsules back to scalar features, which defeats the purpose of having capsules in the first place. CapsGNN learns each dimension of capsules in a data-driven manner and apply routing mechanism between layers to preserve the learned meaning of each capsule. This also allows us to preserve multiple properties information contained in nodes more efficiently especially when generating graph embeddings.3) Compared with CapsNet proposed by BID16 which works well in image processing domain, CapsGNN needs to handle more complex situations when handling graphs. In image processing domain, the size of the input images can be standardized by resizing the images. However, it is not possible to simply resize the graphs. So, we introduced an additional Attention Module to tackle graphs that are vastly different in sizes and preserve important parts of graphs. We also propose to use features extracted from all layers of GNN since it is hard to define a suitable receptive-field size for graphs. Furthermore, compared with the architecture of CapsNet, CapsGNN has one additional graph capsules layer which is used to learn multiple graph embeddings and these embeddings reflect different aspects of graph properties which is valuable in future research. To the best of our knowledge, we are the first one to model a graph as multiple embeddings in the form of distributed capsules and we believe this approach of learning representations has a high potential for other complex data analysis which is not limited to graphs. Besides, CapsGNN has different explanation of linear transformation. In CapsNet, by applying a linear trainable transformation to pose vectors, the spatial relationship between object parts and the whole object can be well modeled. However, by applying linear trainable transformation, CapsGNN is simply computing the prediction vectors from nodes-level representations to graph-level representations. This transform matrix is not trying to model the change of viewpoint or capture viewpoint invariant knowledge but to model the relationship between the properties of nodes and the properties of the whole graph. The same architecture settings are used in CapsGNN for all datasets to show its robust performance. For the node capsules extraction, the GCN has 5 layers (L = 5), the number of channels at each layer is set as the same which is 2 (C l = 2). 
The number of graph capsules is fixed as 16 (P = 16). The dimension of all capsules are set as 8 (d = 8). The number of units in the hidden layer of Attention Module is set as 1 16 of the number of input units. The number of iterations in routing is set as 3. During training stage, we simultaneously reduce Loss c and Loss r and we scale Loss r with 0.1 so that the model focuses on classification task. λ is set as 0.5 and 1.0 for multi-class classification and binary classification respectively. As for the node attributes in different datasets, considering that REDDIT-M5K and REDDIT-M12K are large-scale datasets with widely distributed nodes degree, we set the attributes of all nodes in these two datasets as the same which means we consider the initial node embeddings for all the nodes as the same to avoid over-fitting. For the remained relatively small datasets, both of the node degree and other node attributes are sent to CapsGNN as node features to speed up the training and we apply dropout(dropput rate is 0.3) to the input node features to improve the learning performance. The settings are summarized in the 3) CapsGNN-Avg (GCN + Average + Routing + Reconstruction): The Attention module in basic CapsGNN is replaced with the Average module.4) CapsGNN-noRout (GCN + Attention + Reconstruction): In this architecture, we will fix the similarity coefficients between all the capsules from one layer to the next layer as the same so that each children capsule will be equally routed to all the parent capsules.5) CapsGNN-noRecon (GCN + Attention + Routing): In this architecture, we directly remove the Reconstruction loss module.6) CapsGNN-Avg-noRout (GCN + Routing): In this architecture, we replace the Attention module in basic CapsGNN with the Average module and fix the similarity coefficients between all the capsules from one layer to the next layer as the same. The validation accuracy of each architecture is shown in TAB9 where we highlight the highest and lowest accuracy respectively. As we can see from the Table, IMDB-B and D&D reach better performance with CapsGNN-Coord which indicates the effectiveness of Coordinate Addition Module. However, the performance of social datasets is still comparable across all types of architectures. On the other hand, the performance of biological datasets(NCI1, PROTEINS, D&D) is more sensitive to the introduced modules in each architecture. The highest accuracy of NCI1, PROTEINS is achieved on CapsGNN which indicates the little effectiveness of Coordinate Addition Module in these two datasets. More specifically, the comparison between CapsGNN-Avg-noRout and CapsGNN-Avg on NCI1, PROTEINS and D&D indicates the effectiveness of Routing mechanism which improves the accuracy by 2.1%, 0.6% and 0.51% respectively. Besides, the comparison between CapsGNNAvg-noRout and CapsGNN-noRout on NCI1, PROTEINS and D&D indicates the effectiveness of Attention Module which improves the accuracy by 1.37%, 1.24% and 1.35% respectively. The accuracy of NCI1, PROTEINS and D&D can be improved by as much as 2.98%, 1.33% and 2.23% when Attention Module and Routing mechanism are combined in the architecture. Overall, CapsGNN is a general framework that fuses capsule theory to GNN for more efficient graph representation learning. In this framework, we also provide multiple possible modules to improve the quality of learned graph embeddings while we do not target to find the best combination of modules for each dataset. 
Since each possible module plays a different role in different datasets, it would be better to adjust the architecture and hyper-parameters according to the practical situation at hand.
Inspired by CapsNet, we propose a novel architecture for graph embeddings on the basis of node features extracted from a GNN.
755
scitldr
We introduce a novel framework for generative models based on Restricted Kernel Machines (RKMs) with multi-view generation and uncorrelated feature learning capabilities, called Gen-RKM. To incorporate multi-view generation, this mechanism uses a shared representation of data from various views. The mechanism is flexible to incorporate both kernel-based, (deep) neural network and convolutional based models within the same setting. To update the parameters of the network, we propose a novel training procedure which jointly learns the features and shared representation. Experiments demonstrate the potential of the framework through qualitative evaluation of generated samples. In the past decade, interest in generative models has grown tremendously, finding applications in multiple fields such as, generated art, on-demand video, image denoising , exploration in reinforcement learning , collaborative filtering , inpainting and many more. Some examples of graphical models based on a probabilistic framework with latent variables are Variational Auto-Encoders and Restricted Boltzmann Machines (RBMs) . More recently proposed models are based on adversarial training such as Generative Adversarial Networks (GANs) and its many variants. Furthermore, auto-regressive models such as Pixel Recurrent Neural Networks (PixelRNNs) model the conditional distribution of every individual pixel given previous pixels. All these approaches have their own advantages and disadvantages. For example, RBMs perform both learning and Bayesian inference in graphical models with latent variables. However, such probabilistic models must be properly normalized, which requires evaluating intractable integrals over the space of all possible variable configurations . Currently GANs are considered as the state-of-the-art for generative modeling tasks, producing high-quality images but are more difficult to train due to unstable training dynamics, unless more sophisticated variants are applied. Many datasets are comprised of different representations of the data, or views. Views can correspond to different modalities such as sounds, images, videos, sequences of previous frames, etc. Although each view could individually be used for learning tasks, exploiting information from all views together could improve the learning quality (; ;). Also, it is among the goals of the latent variable modelling to model the description of data in terms of uncorrelated or independent components. Some classical examples are Independent Component Analysis; Hidden Markov models ; Probabilistic Principal Component Analysis (PCA) ; Gaussian-Process Latent variable model and factor analysis. Hence, when learning a latent space in generative models, it becomes interesting to find a disentangled representation. Disentangled variables are generally considered to contain interpretable information and reflect separate factors of variation in the data for e.g. lighting conditions, style, colors, etc. The definition of disentanglement in the literature is not precise, however many believe that a representation with statistically independent variables is a good starting point . Such representations extract information into a compact form which makes it possible to generate samples with specific characteristics (; ; ;). Additionally, these representations have been found to generalize better and be more robust against adversarial attacks . 
In this work, we propose an alternative generative mechanism based on the framework of Restricted Kernel Machines (RKMs) , called Generative RKM (Gen-RKM). RKMs yield a representation of kernel methods with visible and hidden units establishing links between Kernel PCA, Least-Squares Support Vector Machines (LS-SVM) and RBMs. This framework has a similar energy form as RBMs, though there is a non-probabilistic training procedure where the eigenvalue decomposition plays the role of normalization. used this framework to develop tensor-based multi-view classification models and showed how kernel PCA fits into this framework. Contributions. 1) A novel multi-view generative model based on the RKM framework where multiple views of the data can be generated simultaneously. 2) Two methods are proposed for computing the pre-image of the feature vectors: with the feature map explicitly known or unknown. We show that the mechanism is flexible to incorporate both kernel-based, (deep) convolutional neural network based models within the same setting. 3) When using explicit feature maps, we propose a training algorithm that jointly performs the feature-selection and learns the common-subspace representation in the same procedure. 4) Qualitative and quantitative experiments demonstrate that the model is capable of generating good quality images of natural objects. Further experiments on multi-view datasets exhibit the potential of the model. Thanks to the orthogonality of eigenvectors of the kernel matrix, the learned latent variables are uncorrelated. This resembles a disentangled representation, which makes it possible to generate data with specific characteristics. This paper is organized as follows. In Section 2, we discuss the Gen-RKM training and generation mechanism when multiple data sources are available. In Section 3, we explain how the model incorporates both kernel methods and neural networks through the use of implicit and explicit feature maps respectively. When the feature maps are defined by neural networks, the Gen-RKM algorithm is explained in Section 4. In Section 5, we show experimental of our model applied on various public datasets. Section 6 concludes the paper along with directions towards the future work. Additional supplementary materials are given in the Appendix A. The proposed Gen-RKM framework consists of two phases: a training phase and a generation phase which occurs one after another. Similar to Energy-Based Models (EBMs, see for details), the RKM objective function captures dependencies between variables by associating a scalar energy to each configuration of the variables. Learning consists of finding an energy function in which the observed configurations of the variables are given lower energies than unobserved ones. Note that the schematic representation, as shown in Figure 1 is similar to Discriminative RBMs and the objective function J t (defined below) has an energy form similar to RBMs with additional regularization terms. The latent space dimension in the RKM setting has a similar interpretation as the number of hidden units in a restricted Boltzmann machine, where in the specific case of the RKM these hidden units are uncorrelated. We assume a dataset, with x i ∈ R d, y i ∈ R p comprising of N data points. Here y i may represent an additional view of x i, e.g., an additional image from a different angle, the caption of an image or a class label. 
Figure 1: Gen-RKM schematic representation modeling a common subspace H between two data sources X and Y. The φ 1, φ 2 are the feature maps (F x and F y represent the feature spaces) corresponding to the two data sources, while ψ 1, ψ 2 represent the pre-image maps. The interconnection matrices U, V model dependencies between latent variables and the mapped data sources. Starting from the RKM interpretation of Kernel PCA, which gives an upper bound on the equality-constrained Least-Squares Kernel PCA objective function, and applying the feature maps φ 1: X → F x and φ 2: Y → F y to the two data sources, the training objective function J t for the generative RKM is given by Eq. 1, where U ∈ R d f ×s and V ∈ R p f ×s are the unknown interconnection matrices, and h i ∈ R s are the latent variables modeling a common subspace H between the two input spaces X and Y (see Figure 1). The derivation of this objective function is given in Appendix A.1. Given η 1 > 0 and η 2 > 0 as regularization parameters, the stationary points of J t are given by Eq. 2. Substituting U and V into the first stationarity condition and denoting Λ = diag{λ 1, . . ., λ s} ∈ R s×s with s ≤ N yields the eigenvalue problem in Eq. 3, where H = [h 1,..., h N] ∈ R s×N, s ≤ N is the number of selected principal components, and K 1, K 2 ∈ R N ×N are the kernel matrices corresponding to the two data sources. Based on Mercer's theorem, positive-definite kernel functions k 1 (x i, x j) = φ 1 (x i) ⊤ φ 1 (x j) and k 2 (y i, y j) = φ 2 (y i) ⊤ φ 2 (y j), for i, j = 1,..., N, form the elements of the corresponding kernel matrices. The feature maps φ 1 and φ 2, mapping the input data to the (possibly infinite-dimensional) feature space, are implicitly defined by the kernel functions. Typical examples of such kernels are the Gaussian RBF kernel and polynomial kernels, just to name a few. However, one can also define explicit feature maps, still preserving the positive-definiteness of the kernel function by construction. (If needed, a centered kernel matrix can be obtained using Eq. 17 in Appendix A.4. Moreover, while we assume here that only two data sources, namely X and Y, are available for learning, the above procedure can be extended to multiple data sources; for M views this yields an analogous training problem.) In this section, we derive the equations for the generative mechanism. Since RKMs resemble energy-based models, inference consists in clamping the values of the observed variables and finding configurations of the remaining variables that minimize the energy. Given the learned interconnection matrices U and V and a given latent variable h, consider the following objective function J g, which contains an additional regularization term on the data sources. Here J g denotes the objective function for generation. The given latent variable h can be the corresponding latent code of a training point, a newly sampled hidden unit, or a specifically determined one. These cases correspond to generating the reconstructed visible unit, generating a random new visible unit, or exploring the latent space by carefully selecting hidden units, respectively. The stationary points of J g are characterized by setting its derivatives to zero. Using U and V from Eq. 2, we obtain the generated feature vectors in Eq. 6. To obtain the generated data, one now needs to compute the inverse images of the feature maps φ 1 (·) and φ 2 (·) in the respective input spaces, i.e., solve the pre-image problem. We seek pre-image maps ψ 1 and ψ 2 such that x ≈ ψ 1 (φ 1 (x)) and y ≈ ψ 2 (φ 2 (y)), where φ 1 (x) and φ 2 (y) are calculated using Eq. 6. When using kernel methods, explicit feature maps are not necessarily known.
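For the implicit (kernel) case, the training step above reduces to a single eigendecomposition. Below is a minimal NumPy sketch of that step, assuming a Gaussian RBF kernel and double-centered kernel matrices; the function and parameter names (gen_rkm_train, sigma, eta1, eta2, s) are illustrative choices, not from the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel between the rows of A and the rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def center_kernel(K):
    # Double-centering of an N x N kernel matrix (cf. the centering step in Appendix A.4).
    N = K.shape[0]
    one = np.ones((N, N)) / N
    return K - one @ K - K @ one + one @ K @ one

def gen_rkm_train(X, Y, s=10, eta1=1.0, eta2=1.0, sigma=1.0):
    """Solve the two-view eigenvalue problem (K1/eta1 + K2/eta2) H^T = H^T Lambda of Eq. 3."""
    K1 = center_kernel(rbf_kernel(X, X, sigma))
    K2 = center_kernel(rbf_kernel(Y, Y, sigma))
    evals, evecs = np.linalg.eigh(K1 / eta1 + K2 / eta2)   # ascending eigenvalues
    idx = np.argsort(evals)[::-1][:s]                      # keep the s largest components
    H = evecs[:, idx].T                                    # latent variables, shape (s, N)
    return H, evals[idx], K1, K2
```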
Commonly used kernels such as the radial-basis function and polynomial kernels map the input data to a very high dimensional feature space. Hence finding the pre-image, in general, is known to be an ill-conditioned problem. However, various approximation techniques have been proposed which could be used to obtain the approximate pre-image x̂ of φ 1 (x). In section 3.1, we employ one such technique to demonstrate its applicability in our model, and consequently generate the multi-view data. One could also define explicit pre-image maps. In section 3.2, we define parametric pre-image maps and learn the parameters by minimizing an appropriately defined objective function. The next section describes the above two pre-image methods for both cases, i.e., when the feature map is explicitly known or unknown, in greater detail. As noted in the previous section, since x may not exist, we find an approximation x̂. One possible technique is the following. Left-multiplying Eq. 6 by φ 1 (x i) and φ 2 (y i), ∀i = 1,..., N, we obtain Eq. 7, where k x* represents the similarities between φ 1 (x) and the training data points in the feature space, and K 1 ∈ R N ×N represents the centered kernel matrix of X. Similar conventions follow for Y. Using the kernel-smoother method, the pre-images are given by Eq. 8, where k̃ 1 (x i, x) and k̃ 2 (y i, y) are the similarities (see Eq. 7) scaled between 0 and 1, and n r is the number of closest points based on the similarity defined by the kernels k̃ 1 and k̃ 2. While using an explicit feature map, Mercer's theorem is still applicable due to the positive semi-definiteness of the kernel function by construction, thereby allowing the derivation of Eq. 3. In the experiments, we use a set of (convolutional) neural networks as the feature maps φ θ (·). Another (transposed convolutional) neural network is used for the pre-image map ψ ζ (·). The network parameters {θ, ζ} are learned by minimizing the reconstruction errors defined by L 1 (x, ψ 1 ζ 1 (φ 1 θ 1 (x))) and L 2 (y, ψ 2 ζ 2 (φ 2 θ 2 (y))). In our experiments, we use the mean-squared error; however, in principle, one can use any other loss appropriate to the dataset. Here φ 1 θ 1 (x i) and φ 2 θ 2 (y i) are computed from Eq. 6, i.e., they are the generated points in feature space obtained from the subspace H. Adding the loss function directly into the objective function J t is not suitable for minimization. Instead, we use the stabilized objective function J̄ t = J t + (c stab /2) J t 2, where c stab is the regularization constant. This tends to push the objective function J t towards zero, which is also the case when substituting the solutions λ i, h i back into J t (see Appendix A.3 for details). The combined training objective is given by Eq. 9, where c acc ∈ R + is a regularization constant that trades off stability against reconstruction accuracy. In this way, we combine feature-selection and subspace learning within the same training procedure. There is also an intuitive connection between Gen-RKM and autoencoders. Namely, the properties of kernel PCA resemble the objectives of three variants of the autoencoder: the standard autoencoder, the VAE and the β-VAE. 1) Similar to an autoencoder, Gen-RKM minimizes the reconstruction error in the loss function (see Eq. 9), while kernel PCA acts as a denoiser (the information is compressed in the principal components). 2) By interpreting kernel PCA within the LS-SVM setting, the PCA analysis can be interpreted as a one-class modeling problem with zero target value around which one maximizes the variance.
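A minimal sketch of the kernel-smoother pre-image (Eqs. 7-8) for the first view is given below. It assumes that the similarities are rescaled to [0, 1] by min-max normalization (the text only states that they are scaled between 0 and 1), and the names are illustrative.

```python
import numpy as np

def kernel_smoother_preimage(h, H, K1, X, n_r=20):
    """Approximate pre-image x_hat of a latent code h for view X (implicit feature maps).

    h:  (s,)   latent code; H: (s, N) training latent variables;
    K1: (N, N) centered kernel matrix of X; X: (N, d) training inputs.
    """
    sims = K1 @ (H.T @ h)                                   # similarities k_x* to training points (Eq. 7), up to a constant factor
    sims = (sims - sims.min()) / (sims.max() - sims.min() + 1e-12)  # scale to [0, 1] (assumed min-max scaling)
    nearest = np.argsort(sims)[::-1][:n_r]                  # n_r closest points under this similarity
    w = sims[nearest]
    return (w[:, None] * X[nearest]).sum(axis=0) / (w.sum() + 1e-12)  # kernel-smoother average (Eq. 8)
```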
When choosing a good feature map, one expects the latent variables to be normally distributed around zero. This property resembles the added regularization term in the objective of the VAE , which is expressed as the Kullback-Leibler divergence between the encoder's distribution and a unit Gaussian as a prior on the latent variables. 3) Kernel PCA gives uncorrelated components in feature space. While it was already shown that PCA does not give a good disentangled representation for images (; . Hence by designing a good kernel (through appropriate feature-maps) and doing kernel PCA, it is possible to get a disentangled representation for images as we show on the example in Figure 5. The uncorrelated components enhances the interpretation of the model. Based on the previous analysis, we propose a novel algorithm, called the Gen-RKM algorithm, combining kernel learning and generative models. We show that this procedure is efficient to train and evaluate. It is also scalable to large datasets when using explicit feature maps. The training procedure simultaneously involves feature selection, common-subspace learning and pre-image map learning. This is achieved via an optimization procedure where one iteration involves an eigendecomposition of the kernel matrix which is composed of the features from various views (see Eq. 3). The latent variables are given by the eigenvectors, which are then passed via a pre-image map to reconstruct the sample. Figure 1 shows a schematic representation of the algorithm when two data sources are available. Thanks to training in m mini-batches, this procedure is scalable to large datasets (sample size N) with training time scaling super-linearly with T m = c. While using neural networks as feature maps, d f and p f correspond to the number of neurons in the output layer, which are chosen as hyperparameters by the practitioner. Eigendecomposition of this smaller covariance matrix would yield U and V as eigenvectors (see Eq. 10 and Appendix A.2 for detailed derivation), where computing the h i involves only matrix-multiplication which is readily parallelizable on modern GPUs: Algorithm 1 Gen-RKM, η1, η2, feature map φj(·) -explicit or implicit via kernels kj(·, ·), for j ∈ {1, 2} Output: Generated data x, y 1: procedure TRAIN 2: if φj(·) = Implicit then 3: Hyperparameters: kernel specific 4: Solve Eq. 3 5: Select s principal components 6: else if φj(·) = Explicit then 7: while not converged do 8: {x, y} ← {Get mini-batch} 9: φ1(x) ← x; φ2(y) ← y 10: do steps 4-5 11: {φ1(x), φ2(y)} ← h (Eq. 6) 12: {x, y} ← {ψ1(φ1(x)), ψ2(φ2(y))} 13: ∆θ1 ∝ −∇ θ 1 Jc; ∆θ2 ∝ −∇ θ 2 Jc 14: if φj(·) = Implicit then 4: Hyperparameter: nr 5: Compute kx *, ky * (Eq. 7) 6: Getx,ŷ (Eq. 8) 7: else if φj(·) = Explicit then 8: do steps 11-12 9: end if 10: end procedure To demonstrate the applicability of the proposed framework and algorithm, we trained the Gen-RKM model on a variety of datasets commonly used to evaluate generative models: MNIST , Fashion-MNIST , CIFAR-10 , CelebA , Dsprites and Teapot . The experiments were performed using both the implicit feature map defined by a Gaussian kernel and parametric explicit feature maps defined by deep neural networks, either convolutional or fully connected. As explained in Section 2, in case of kernel methods, training only involves constructing the kernel matrix and solving the eigenvalue problem in Eq. 3. 
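When the feature maps are neural networks, steps 7-13 of Algorithm 1 can be sketched roughly as below in PyTorch. This is a simplified sketch: the stabilized RKM term of J c is omitted and only the per-batch eigendecomposition plus the reconstruction losses are shown; the optimizer is assumed to cover the parameters of all four networks, and the c_acc weighting and all names are illustrative assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def gen_rkm_step(x, y, enc1, enc2, dec1, dec2, opt, s=10, eta1=1.0, eta2=1.0, c_acc=1.0):
    """One mini-batch update of a Gen-RKM-style training loop with explicit feature maps."""
    f1, f2 = enc1(x), enc2(y)                       # feature maps phi_1(x), phi_2(y), shape (m, d_f)
    f1 = f1 - f1.mean(0, keepdim=True)              # center features within the mini-batch
    f2 = f2 - f2.mean(0, keepdim=True)
    K = f1 @ f1.T / eta1 + f2 @ f2.T / eta2         # mini-batch analogue of (1/eta1)K1 + (1/eta2)K2
    evals, evecs = torch.linalg.eigh(K)             # ascending eigenvalues; requires s <= batch size
    H = evecs[:, -s:]                               # latent variables for the batch, shape (m, s)
    U = f1.T @ H / eta1                             # interconnection matrices from the stationarity conditions
    V = f2.T @ H / eta2
    x_hat = dec1(H @ U.T)                           # generated feature vectors (Eq. 6) passed through the pre-image maps
    y_hat = dec2(H @ V.T)
    loss = c_acc * (F.mse_loss(x_hat, x) + F.mse_loss(y_hat, y))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```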
In our experiments, we fit a Gaussian mixture model (GMM) with l components to the latent variables of the training set, and randomly sample a new point h from which views are generated using a kernel smoother. In the case of explicit feature maps, we define φ 1 θ 1 and ψ 1 ζ 1 as convolution and transposed-convolution neural networks, respectively; and φ 2 θ 2 and ψ 2 ζ 2 as fully-connected networks. The particular architecture details are outlined in Table 3 in the Appendix. The training procedure in the case of explicitly defined maps consists of minimizing J c using the Adam optimizer to update the weights and biases. To speed up learning, we subdivided the datasets into m mini-batches, and within each iteration of the optimizer, Eq. 3 is solved to update the value of H. Information on the datasets and hyperparameters used for the experiments is given in Table 4 in the Appendix. Qualitative examples: Figure 2 shows the generated images using a convolutional neural network and a transposed-convolutional neural network as the feature map and pre-image map, respectively. The first column in yellow boxes shows the training samples and the second column on the right shows the reconstructed samples. The other images shown are generated by random sampling from a GMM over the learned latent variables. Notice that the reconstructed samples are visually of better quality than the other images generated by random sampling. To show that the model has not merely memorized the training examples, we show the generated images obtained via bilinear interpolation in the latent space in Figures 2e and 2f. Comparison: We compare the proposed model with the standard VAE. For a fair comparison, the models have the same encoder/decoder architecture and optimization parameters and are trained until convergence, with the details given in Table 3. We evaluate the performance qualitatively by comparing reconstruction and random sampling; the results are shown in Figure 8 in the Appendix. In order to quantitatively assess the quality of the randomly generated samples, we use the Fréchet Inception Distance (FID). The results are reported in Table 1. Experiments were repeated for different latent-space dimensions (h dim), and we observe empirically that FID scores are better for the Gen-RKM. This is confirmed by the qualitative evaluation in Figure 8, where the VAE generates smoother images. An interesting trend is that, as the dimension of the latent space is increased, the VAE gets better at generating images whereas the performance of Gen-RKM decreases slightly. This is attributed to the eigendecomposition of the kernel matrix, whose eigenvalue spectrum decays rapidly, indicating that most information is captured in a few principal components while the rest is noise. The presence of noise hinders the convergence of the model. It is therefore important to select the number of latent variables proportionally to the size of the mini-batch and the corresponding spectrum of the kernel matrix (the diversity within a mini-batch affects the eigenvalue spectrum of the kernel matrix). Multi-view Generation: Figures 3 & 4 demonstrate the multi-view generative capabilities of the model. In these datasets, labels or attributes are seen as another view of the image that provides extra information. One-hot encoding of the labels was used to train the model. Figure 4a shows the generated images and labels when the feature maps are only implicitly known, i.e., through a Gaussian kernel.
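A small sketch of the sampling step used for random generation: fit a GMM to the learned latent variables and draw new codes h, which are then passed to the pre-image map (kernel smoother or decoder network). The number of mixture components and other settings below are placeholders.

```python
from sklearn.mixture import GaussianMixture

def sample_latents(H, n_samples=16, n_components=10, seed=0):
    """Fit a GMM to the training latent variables H (shape (s, N)) and sample new latent codes."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(H.T)
    h_new, _ = gmm.sample(n_samples)      # (n_samples, s) new latent codes
    return h_new
```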
Figures 4b and 4c show the same when using fully-connected networks as parametric functions to encode and decode the labels. We can see that the generated image and the generated label match in most cases, albeit not all. Qualitative examples: The latent variables are uncorrelated, which gives an indication that the model could resemble a disentangled representation. This is confirmed by the empirical evidence in Figure 5, where we explore the uncorrelated features learned by the models on the Dsprites and CelebA datasets. In our experiments, the Dsprites training dataset comprises 32 × 32 positions of oval and heart-shaped objects. The number of principal components chosen was 2, and the goal was to find out whether traversing along the eigenvectors corresponds to traversing the generated image in one particular direction while preserving the shape of the object. Rows 1 and 2 of Figure 5 show the reconstructed images of an oval while moving along the first and second principal components, respectively. Notice that the first and second components correspond to the y and x positions, respectively. Rows 3 and 4 show the same for hearts. On the CelebA dataset, we train the Gen-RKM with 15 components. Rows 5 and 6 show the reconstructed images while traversing along the principal components. When moving along the first component from left to right, the hair color of the woman changes while the face structure is preserved, whereas traversal along the second component transforms a man into a woman while preserving the orientation. When the number of principal components was 2 during training, the brightness and light source correspond to the two largest variances in the dataset. Also notice that the reconstructed images are blurrier due to the smaller number of components used to model H. Comparison: To quantitatively assess disentanglement performance, we compare Gen-RKM with the VAE and β-VAE on the Dsprites and Teapot datasets. The models have the same encoder/decoder architecture and optimization parameters and are trained until convergence, with the details given in Table 3. The performance is measured using an existing evaluation framework, which gives three measures: disentanglement, completeness and informativeness. The results are depicted in Table 2. Gen-RKM has good performance on the Dsprites dataset when the latent space dimension is equal to 2. This is expected, as the number of disentangled generating factors in the dataset is also equal to 2, hence there are no noisy components in the kernel PCA hindering the convergence. The opposite happens in the case h dim = 10, where noisy components are present. The above is confirmed by the Relative Importance Matrix in Figure 6 in the Appendix, where the 2 generating factors are well separated in the latent space of the Gen-RKM. For the Teapot dataset, Gen-RKM has good performance when h dim = 10. More components are needed to capture all variations in the dataset, where the number of generating factors is now equal to 5. In the other cases, Gen-RKM has a performance comparable to the others. The paper proposes a novel framework, called Gen-RKM, for generative models based on RKMs with extensions to multi-view generation and learning uncorrelated representations. This allows for a mechanism where the feature map can be implicitly defined using kernel functions or explicitly by (deep) neural network based methods. When using kernel functions, the training consists only of solving an eigenvalue problem.
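The latent traversals of Figure 5 amount to varying one principal component of a latent code while holding the others fixed and decoding each point with the pre-image map. A short sketch, with an arbitrary traversal range:

```python
import numpy as np

def traverse_component(h0, dim, decode, span=3.0, steps=8):
    """Decode a sweep along one latent dimension, keeping the other (uncorrelated) components fixed."""
    codes = np.repeat(h0[None, :], steps, axis=0)
    codes[:, dim] = np.linspace(-span, span, steps)
    return [decode(h) for h in codes]
```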
In the case of a (convolutional) neural network based explicit feature map, we used (transposed) networks as the pre-image functions. Consequently, a training procedure was proposed which involves joint feature-selection and subspace learning. Thanks to training in mini-batches and capability of working with covariance matrices, the training is scalable to large datasets. Experiments on benchmark datasets illustrate the merit of the proposed framework for generation quality as well as disentanglement. Extensions of this work consists of adapting the model to more advanced multi-view datatsets involving speech, images and texts; further analysis on other feature maps, pre-image methods, loss-functions and uncorrelated feature learning. Finally, this paper has demonstrated the applicability of the Gen-RKM framework, suggesting new research directions to be worth exploring., where ) for the two data sources can be written as: where U ∈ R d×s and V ∈ R p×s are the interconnection matrices. Using the notion of conjugate feature duality introduced in , the error variables e i are conjugated to latent variables h i using: which is also known as the Fenchel-Young inequality for the case of quadratic functions . By eliminating the variables e i from Eq. 11 and using Eq. 12, we obtain the Gen-RKM training objective function: A.2 KERNEL PCA IN THE PRIMAL From Eq. 2, eliminating the variables h i yields the following: Denote.., λ s } ∈ R s×s with s ≤ N. Now, composing the above equations in matrix form, we get the following eigen-decomposition problem: Here the size of the covariance matrix is The latent variables h i can be computed using Eq. 2, which simply involves matrix multiplications. A.3 STABILIZING THE OBJECTIVE FUNCTION Proposition 1. All stationary solutions for H,Λ in Eq. 3 of J t lead to J t = 0. Proof. Let λ i, h i are given by Eq. 3. Using Eq. 2 to substitute V and U in Eq. 1 yields: From Eq. 3, we get: Proposition 2. Let J(x): R N − → R be a smooth function, for all x ∈ R N and for c ∈ R >0, definē 2. Assuming (1 + cJ(x)) = 0, then x is the stationary points ofJ(x) iff x is the stationary point for J(x). Proof. Let x be a stationary point of J(x), meaning that ∇J(x) = 0. The stationary points for J(x) can be obtained from: It is easy to see from Eq. 2 that if x = x *, ∇J(x *) = 0, we have that dJ dx x * = 0, meaning that all the stationary points of J(x) are stationary points ofJ(x). To show the other way, let x be stationary point ofJ(x) i.e. ∇J(x) = 0. Assuming (1 + cJ(x)) = 0, then from Eq. 16 for all c ∈ R >0, we have Based on the above propositions, we stabilize our original objective function Eq. 1 to keep it bounded and hence is suitable for minimization with Gradient-descent methods. Without the reconstruction errors, the stabilized objective function is Since the derivatives of J t are given by Eq. 2, the stationary points of J are: assuming 1 + c stab J t = 0. Elimination of V and U yields 1 η1 K 1 + 1 η2 K 2 H = H Λ, which is indeed the same solution for c stab = 0 in Eq. 1 and Eq. 3. Centering of the kernel matrix is done by the following equation: where 1 denotes an N -dimensional vector of ones and K is either K 1 or K 2. See Table 3 and 4 for details on model architectures, datasets and hyperparameters used in this paper. The PyTorch library in Python was used as the programming language with a 8GB NVIDIA QUADRO P4000 GPU. Random Generation CelebA Figure 8: Comparing Gen-RKM and standard VAE for reconstruction and generation quality. 
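The centering equation referred to as Eq. 17 in Appendix A.4 above is elided in this excerpt; the standard double-centering formula, which appears to be what is meant given the description that 1 denotes an N-dimensional vector of ones, reads:

```latex
K_c \;=\; K \;-\; \tfrac{1}{N}\,\mathbf{1}\mathbf{1}^{\top}K \;-\; \tfrac{1}{N}\,K\,\mathbf{1}\mathbf{1}^{\top} \;+\; \tfrac{1}{N^{2}}\,\mathbf{1}\mathbf{1}^{\top}K\,\mathbf{1}\mathbf{1}^{\top}
```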
In the MNIST and CelebA reconstructions, odd columns correspond to the original image and even columns to the reconstructed image.
Gen-RKM: a novel framework for generative models using Restricted Kernel Machines with multi-view generation and uncorrelated feature learning.
756
scitldr
Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo. This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives. The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models. However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work. Figure 1: Our common model design: During pretraining, we train the shared encoder and the task-specific model for each pretraining task. We then freeze the shared encoder and train the task-specific model anew for each target evaluation task. Tasks may involve more than one sentence. State-of-the-art models for natural language processing (NLP) tasks like translation, question answering, and parsing include components intended to extract representations for the meaning and contents of each input sentence. These sentence encoder components are typically trained directly for the target task at hand. This approach can be effective on data-rich tasks and yields human performance on some narrowly-defined benchmarks BID35 BID13, but it is tenable only for the few NLP tasks with millions of examples of training data. This has prompted interest in pretraining for sentence encoding: There is good reason to believe it should be possible to exploit outside data and training signals to effectively pretrain these encoders, both because they are intended to primarily capture sentence meaning rather than any task-specific skill, and because we have seen dramatic successes with pretraining in the related domains of word embeddings and image encoders BID46. More concretely, four recent papers show that pretrained sentence encoders can yield very strong performance on NLP tasks. One shows that a BiLSTM encoder from a neural machine translation (MT) system can be effectively reused elsewhere. BID16, BID33, and others show that various kinds of encoders pretrained in an unsupervised fashion through generative language modeling (LM) are effective as well. Each paper uses its own evaluation methods, though, making it unclear which pretraining task is most effective or whether multiple pretraining tasks can be productively combined; in the related setting of sentence-to-vector encoding, multitask learning with multiple labeled datasets has yielded a robust state of the art BID39. This paper attempts to systematically address these questions. We train reusable sentence encoders on 17 different pretraining tasks, several simple baselines, and several combinations of these tasks, all using a single model architecture and procedure for pretraining and transfer, inspired by ELMo. We then evaluate each of these encoders on the nine target language understanding tasks in the GLUE benchmark BID41, yielding a total of 40 sentence encoders and 360 total trained models. We then measure correlation in performance across target tasks and plot learning curves evaluating the effect of training data volume on each pretraining and target task.
Looking to the of this experiment, we find that language modeling is the most effective single pretraining task we study, and that multitask learning during pretraining can offer further gains and a new state-of-the-art among fixed sentence encoders. We also, however, find reasons to worry that ELMo-style pretraining, in which we pretrain a model and use it on target tasks with no further fine-tuning, is brittle and seriously limiting: (i) Trivial baseline representations do nearly as well as the best pretrained encoders, and the margins between substantially different pretraining tasks can be extremely small. (ii) Different target tasks differ dramatically on what kinds of pretraining they benefit most from, and multitask pretraining is not sufficient to circumvent this problem and offer general-purpose pretrained encoders. Work toward learning reusable sentence encoders can be traced back at least as far as the multitask model of BID7, but has seen a recent surge in progress with the successes of CoVe BID25, ULMFit BID16, ELMo, and the Transformer LM BID33. However, each uses a different model and dataset from the others, so while these works serve as existence proofs that effective reusable sentence encoders are possible, they do not address the question of what task or tasks should be used to create them. The revival of interest in sentence encoder pretraining is recent enough that relatively little has been done to understand the relative merits of these models, though two exceptions stand out. In unpublished work, offer an analysis of the relative strengths of translation and language modeling using a single architecture and training dataset. They find that encoders trained as language models reliably uncover the most syntactic structure, even when they are trained on a strict subset of the data used for a comparable translation model. Peters et al. offer a deeper investigation of model design issues for ELMo, showing that all of the standard architectures for sentence encoding can be effectively pretrained with broadly similar performance, and that all learn reasonably good representations of the morphological and syntactic properties of sentences. There has been a great deal of work on sentence-to-vector encoding, a setting in which the pretrained encoder produces a fixed-size vector representation for each input sentence BID10 BID20 BID14 BID8 BID45. These vectors are potentially useful for tasks that require fast similarity-based matching of sentences, but using them to replace sentence encoders trained in the conventional way on a given target text classification task does not reliably yield state-of-the art performance on that task BID39.Multitask representation learning in NLP in general has been well studied, and again can be traced back at least as far as BID7. For example, BID23 show promising from the combination of translation and parsing, BID39 show the benefits of multitask learning in sentence-to-vector encoding, and BID0 and BID4 offer studies of when multitask learning is helpful for lower-level NLP tasks. Our main experiment compares encoders pretrained on a large number of tasks and task combinations, where a task is a dataset-objective function pair. This section lists these tasks, which we select either to serve as baselines or because they have shown promise in outside prior work, especially prior work on sentence-to-vector encoding. Appendix A includes additional details on how we implemented some of these tasks, and names tasks we evaluated but left out. 
Random Encoder Our primary baseline is equivalent to pretraining on a task with zero examples. Here, we randomly initialize a sentence encoder and use it directly with no further training. This baseline works well, yielding scores far above those of a bag-of-words encoder.1 This surprising matches seen recently with ELMo-like models by and earlier work on Reservoir Computing. This baseline is especially strong because our model contains a skip connection from the input of the shared encoder to its output, allowing the task-specific model to directly see our word representations, or, in experiments where we use a pretrained ELMo model as our input layer, ELMo's contextual word representations. We use the nine tasks included with GLUE as pretraining tasks: acceptability classification with CoLA BID42; binary sentiment classification with SST BID38; semantic similarity with the MSR Paraphrase Corpus (MRPC; BID11, the Quora Question Pairs 2 (QQP), and STS-Benchmark (STS; BID3 ; and textual entailment with the Multi-Genre NLI Corpus (MNLI BID44, RTE 1, 2, 3, and 5 (RTE; , et seq.), and data from SQuAD (QNLI, BID34 and the Winograd Schema Challenge (WNLI, BID21 recast as entailment in the style of BID43 . MNLI is the only task with substantial prior work in this area, as it was found to be highly effective as a pretraining strategy by BID8 and BID39 . Other tasks are included to represent a broad sample of labeling schemes commonly used in NLP. We train language models on two datasets: WikiText-103 (WP, BID26 and 1 Billion Word Language Model Benchmark (BWB, BID5, which are used by ULMFit BID16 and ELMo respectively. Translation We train MT models on two datasets: WMT14 English-German BID1 and WMT17 English-Russian BID2 . SkipThought Our SkipThought model BID20 BID40) is a sequence-tosequence model that reads a sentence from WikiText-103 running text and attempts to decode the following sentence from that text. We train our DisSent model BID17 BID29 to read two separate clauses that appear in WikiText-103 connected by a discourse marker such as and, but, or so and predict the identity of the discourse marker. Reddit These models reconstruct comment threads from reddit.com using a dataset of about 18M comment-response pairs collected from 2008-2011 by BID45. We consider two settings: A classification task in which the model makes a binary prediction about whether a candidate response is the actual response to a given comment, and a sequence-to-sequence task in the model attempts to generate the true response to a comment. We implement our models using the AllenNLP toolkit BID12, aiming to build the simplest architecture that could be reasonably expected to perform well on the target tasks under study. 3 The design of the models roughly follows that used in the GLUE baselines and ELMo. The core of our model is a two-layer 1024D bidirectional LSTM. We feed the word representations to the biLSTM and take the sequence of hidden states from the top-level LSTM as the contextual representation. The downstream task-specific model sees both the top-layer hidden states of this model and, through a skip connection, the input representations for each word. All of our models use the pretrained character-level convolutional neural network (CNN) word encoder from ELMo. This encoder acts as a standard input layer which uses no information beyond the word, and allows us to avoid potentially the difficult issues surrounding unknown word handling in transfer learning. 
In some experiments, we use the full pretrained ELMo model as an input handler, yielding a form of multitask learning in which the lower layers of the overall model (ELMo) are pretrained on language modeling, and the higher layers (our shared encoder) are pretrained on some additional task or tasks. We choose to use this pretrained model because it represents a larger model with more extensive tuning than we have the resources to produce ourselves. We compare pretraining tasks in this setting to understand how well they complement large-scale language model pretraining, and we additionally train our own language models to directly compare between language modeling and other pretraining methods. We follow the standard practice of training a set of scalar weights of ELMo's three layers. We use one set of weights to supply input to the shared encoder, and an additional set for each target task to use in the skip connection. We use only ELMo and not the similarly-situated CoVe, as BID41 showed CoVe to be less effective on the GLUE tasks. Evaluation and Per-Task Models The GLUE benchmark BID41 ) is an open-ended shared task competition and evaluation toolkit for reusable sentence encoders, and we use it as our primary vehicle for evaluation. GLUE is a set of nine classification or regression tasks over sentences and sentence pairs spanning a range of dataset sizes, paired with private test data and an online leaderboard. GLUE offers a larger set of tasks than evaluated by ELMo or CoVe while omitting more expensive paragraph-level tasks, allowing us to evaluate a substantially larger number of experiments with available compute resources. To evaluate the shared encoder, we use the following procedure: We freeze the pretrained encoder and, for each of the nine tasks in the GLUE benchmark, separately train a target-task model on the representations produced by the encoder. We then evaluate each of these models on the validation or test set of the corresponding task using the standard metric(s) for that task, and report the ing scores and the overall average GLUE scores, which weight each task equally. For single-sentence target tasks (CoLA, SST) and sentence-pair tasks with smaller training datasets (MRPC, RTE, WNLI) we train a linear projection over the output states of the shared encoder, max-pool over those projected states, and feed the to a one-hidden-layer classifier MLP. For smaller sentence pair-tasks, we perform these steps on both sentences and use the heuristic matching feature vector [h 1 ; h 2 ; h 1 · h 2 ; h 1 − h 2] in the MLP, following BID28.For the remaining sentence-pair tasks (MNLI, QNLI, QQP, STS), we use an attention mechanism between all pairs of words, followed by a 512D ×2 BiLSTM with max-pooling over time, following the basic mechanism used in BiDAF BID37. This is followed by heuristic matching and a final MLP, as above. Appendices A and B present additional details on the task specific models. Pretraining Task Models For pretraining on GLUE tasks, we use the architecture described above, except that we do not use an attention mechanism, as early indicated that this hurt cross-task transfer performance. For consistency with other experiments when pretraining on a GLUE task, we reinitialize the task-specific parameters between pretraining and target-task training. Several of the outside (non-GLUE) pretraining tasks involve sentence pair classification. For these, we use the same non-attentive architecture as for the larger GLUE tasks. 
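The scalar weighting of ELMo's three layers mentioned above follows the usual ELMo scalar-mix formulation. A minimal PyTorch sketch is given below (one instance feeds the shared encoder; a separate instance per target task feeds the skip connection); the class name is ours.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Learned softmax-normalized mixing of ELMo's layer representations, plus a global scale."""
    def __init__(self, num_layers=3):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layers):
        # layers: list of (batch, seq_len, dim) tensors, one per ELMo layer.
        w = torch.softmax(self.weights, dim=0)
        return self.gamma * sum(wi * li for wi, li in zip(w, layers))
```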
For LM, to prevent information leakage across directions and LSTM layers, we follow the broad strategy used by ELMo: We train separate forward and backward two-layer LSTM language models, and concatenate the outputs during target task training. For sequence-to-sequence pretraining tasks (MT, SkipThought, Reddit), we use an LSTM decoder with a single layer. We also investigate three sets of tasks for multitask pretraining: all GLUE tasks, all outside (non-GLUE) pretraining tasks, and all pretraining tasks. Because ELMo representations are computed with the full context and so cannot be used as the input to downstream unidirectional language models, we exclude language modeling from multitask runs that use ELMo. At each update during multitask learning, we randomly sample a single task with probability proportional to its training data size raised to the power of 0.75. This sampling rate is meant to balance the risks of overfitting small-data tasks and underfitting large ones, and performed best in early exper-iments. More extensive experiments with methods like this are shown in Appendix C. We perform early stopping based on an unweighted average of the pretraining tasks' validation metrics. For validation metrics like perplexity that decrease from high starting values during training, we include the transformed metric 1 − m 250 in our average, tuning the constant 250 in early experiments. Optimization We train our models with the AMSGrad optimizer -a variant of Adam BID19. We perform early stopping at pretraining time and target task training time using the respective dev set performances. Typical experiments, including pretraining one encoder and training the nine associated target-task models, take 1-5 days to complete on an NVIDIA P100 GPU. See Appendix B for more details. Hyperparameter Tuning Appendix B describes our chosen hyperparameter values. As our primary experiment required more than 100 GPU-days on NVIDIA P100 GPUs to run-not counting debugging or learning curves-we did not have the resources for extensive hyperparameter tuning. Instead of carefully tuning our shared and task-specific models on a single pretraining task in a way that might bias toward that task, we simply chose commonly-used values for most hyperparameters. The choice not to tune limits our ability to diagnose the causes of poor performance when it occurs, and we invite readers to further refine our models using the public code.5 TAB0 shows on the GLUE dev set for all our pretrained encoders, each with and without the pretrained ELMo BiLSTM layers (E). The N/A baselines are untrained encoders with random intialization. The Single-Task baselines are aggregations of from nine GLUE runs: The in this row for a given GLUE task uses the encoder pretrained on only that task. For consistency with other runs, we treat the pretraining task and the target task as two separate tasks in all cases (including here) and give them separate task-specific parameters, despite the fact that they use identical data. We use S and C to distinguish the sequence-to-sequence and classification versions of the Reddit task, respectively. To comply with GLUE's limits on test set access, we evaluated only three of our pretrained encoders on test data. These reflect our best models with and without the use of the pretrained ELMo encoder, and with and without the use of GLUE data during pretraining. For discussion of our limited hyperparameter tuning, see above. 
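The task-sampling rule described above (sampling probability proportional to training-set size raised to the power 0.75) can be sketched as follows; the helper name and seed handling are illustrative.

```python
import numpy as np

def make_task_sampler(train_sizes, alpha=0.75, seed=0):
    """Return a function that samples a task index with p_i proportional to N_i**alpha."""
    sizes = np.asarray(train_sizes, dtype=float)
    probs = sizes**alpha / (sizes**alpha).sum()
    rng = np.random.default_rng(seed)
    return lambda: int(rng.choice(len(sizes), p=probs))

# Example: at each update, draw the task to train on for this step.
# sample_task = make_task_sampler([8500, 67000, 393000, 364000])
# task_id = sample_task()
```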
For roughly comparable GLUE results in prior work, see BID41 or https://www.gluebenchmark.com; we omit them here in the interest of space, as the limited size of a US Letter page prevents us from including these baselines in this table. As of writing, the best test result using a comparable frozen pretrained encoder is 68.9 from BID41 for a model similar to our GLUE E multitask model, and the best overall result is 72.8 from BID33 with a model that is fine-tuned in its entirety for each target task. While it is not feasible to run each setting multiple times, we estimate the variance of the GLUE score by re-running the random encoder and MNLI pretraining setups with and without ELMo with different random seeds. Across five runs, we recorded σ = 0.4 for the random encoder (N/A in the table), and σ = 0.2 for MNLI E. This variation is substantial but not so high as to render the results meaningless. For the explicitly adversarial WNLI dataset (based on the Winograd Schema Challenge; BID21), only one of our models reached even the most frequent class performance of 56.3. In computing average and test set performances, we replace model predictions with the most frequent label to simulate the better performance achievable by choosing not to model that task. Looking to other target tasks, the grammar-related CoLA task benefits dramatically from ELMo pretraining: The best result without language model pretraining is less than half the result achieved with such pretraining. In contrast, the meaning-oriented textual similarity benchmark STS sees good results with several kinds of pretraining, but does not benefit substantially from the use of ELMo. Comparing pretraining tasks in isolation without ELMo, language modeling performs best, followed by MNLI. The remaining pretraining tasks yield performance near that of the random baseline. Even when training directly on each target task (Single-Task in the table), we get less than a one point gain over this simple baseline. Adding ELMo yielded improvements in performance across all pretraining tasks. MNLI and English-German translation perform best in this setting, with SkipThought, Reddit classification, and DisSent also outperforming the ELMo-augmented random baseline. With ELMo, a multitask model performs best, but without it, all three multitask models are tied or outperformed by models trained on one of their constituent tasks, suggesting that our approach to multitask learning is not reliably able to produce models that productively use the knowledge taught by each training task. However, of the two non-ELMo models that perform best on the development data, the multitask model generalizes better than the single-task model on test data for tasks like STS where the test set contains new out-of-domain data. TAB1 presents an alternative view of the results of the main experiment in TAB0: The table shows the correlations between pairs of tasks over the space of pretrained encoders. These reflect the degree to which knowing the performance of one target task with some encoder allows us to predict the performance of the other target task with that same encoder. Many correlations are low, suggesting that different tasks benefit from different forms of pretraining to a substantial degree, and mirroring the observation that no one pretraining task yields good performance on all target tasks. As noted above, the models that tended to perform best overall also overfit the WNLI training set most, leading to a negative correlation between WNLI and the overall GLUE score.
STS also shows a negative correlation, likely due to the observation that it does not benefit from ELMo pretraining. In contrast, CoLA shows a strong 0.93 correlation with the overall GLUE scores, but has weak or negative correlations with many tasks: the use of ELMo or LM pretraining dramatically improves CoLA performance, but most other forms of pretraining have little effect. FIG1 shows two types of learning curves. The first set measures performance on the overall GLUE metric for encoders trained to convergence on each pretraining task with varying amounts of data. The second set focuses on three pretrained encoders and measures performance on each GLUE target task separately with varying amounts of target task data. Looking at pretraining tasks in isolation (top left), most tasks improve slightly as the amount of pretraining data increases, with the LM and MT tasks showing the most promising combination of slope and maximum performance. Combining these pretraining tasks with ELMo (top right) yields a less interpretable result: the relationship between training data volume and performance becomes weaker, and some of the best results reported in this paper are achieved by models that combine pretrained ELMo with restricted-data versions of other pretraining tasks like MNLI and QQP. Looking at target task performance as target task training data volume varies, we see that all tasks benefit from increasing data quantities, with no obvious diminishing returns, and that most tasks see a constant improvement in performance across data volumes from the use of pretraining, either with ELMo (center) or with multitask learning (right). Results on the GLUE Diagnostic Set From GLUE's auxiliary diagnostic analysis dataset, we find that ELMo and other forms of unsupervised pretraining help on examples that involve world knowledge and lexical-semantic knowledge, and less so on examples that highlight complex sentence structures. See TAB5 in Appendix D for more details. This paper presents a systematic comparison of tasks and task combinations for the pretraining of sentence-level BiLSTM encoders like those seen in ELMo and CoVe. With 40 pretraining tasks and task combinations (not counting many more ruled out early) and nine target tasks, this represents a far more comprehensive study than any seen on this problem to date. Our chief positive results are perhaps unsurprising: Language modeling works well as a pretraining task, and no other single task is consistently better. Multitask pretraining can produce better results than any single task can, and sets a new state of the art among comparable models. Target task performance continues to improve with the addition of more language model data, even at large scales, suggesting that further work scaling up language model pretraining is warranted. However, a closer look at our results suggests that the pretrain-and-freeze paradigm that underlies ELMo and CoVe might not be a sound platform for future work: Some trivial baselines do strikingly well, the margins between pretraining tasks are small, and some pretraining configurations (such as MNLI E) yield better performance with less data. This suggests that we may be nearing an upper bound on the performance that can be reached with methods like these. In addition, different tasks benefit from different forms of pretraining to a striking degree, with correlations between target tasks often low or negative, and multitask pretraining fails to reliably produce models better than their best individual components.
This suggests that if truly generalpurpose sentence encoders are possible, our current methods cannot produce them. While further work on language modeling seems straightforward and worthwhile, the author(s) of this paper believe that the future of this line of work will require a better understanding of the ways in which neural network target task models can benefit from outside knowledge and data, and new methods for pretraining and transfer learning to allow them to do so. DisSent To extract discourse model examples from the WikiText-103 corpus BID26, we follow the procedure described in BID29 by extracting clause-pairs that follow specific dependency relationships within the corpus (see Figure 4 in BID29 . We use the Stanford Parser BID6 distributed in Stanford CoreNLP version 3.9.1 to identify the relevant dependency arcs. Reddit Response Prediction The Reddit classification task requires a model to select which of two candidate replies to a comment is correct. Since the dataset from BID45 contains only real comment-reply pairs, we select an incorrect distractor reply for each correct reply by permuting each minibatch. Alternative Tasks Any large-scale comparison like the one attempted in this paper is inevitably incomplete. Among the thousands of publicly available NLP datasets, we also performed initial trial experiments on several datasets for which we were not able to reach development-set performance above that of the random encoder baseline in any setting. These include image-caption matching with MSCOCO BID22, following BID18 ; the small-to-medium-data textunderstanding tasks collected in NLI format by BID32 ; ordinal common sense inference ; POS tagging on the Penn Treebank BID24; and supertagging on CCGBank BID15. See Section 4 for general comments on hyperparameter tuning. Validation We evaluate on the validation set for the current training task or tasks every 1,000 steps, except where noted otherwise for small-data target tasks. During multitask learning, we multiply this interval by the number of tasks, evaluating every 9,000 steps during GLUE multitask training, for example. Optimizer We use AMSGrad BID36. During pretraining, we use a learning rate of 1e-4 for classification and regression tasks, and 1e-3 for text generation tasks. During target-task training, we use a learning rate of 3e-4 for all tasks. Learning Rate Decay We multiply the learning rate by 0.5 whenever validation performance fails to improve for more than 4 validation checks. We stop training if the learning rate falls below 1e-6.Early Stopping We maintain a saved checkpoint reflecting the best validation seen so far. We stop training if we see no improvement after more than 20 validation checks. After training, we use the last saved checkpoint. Regularization We apply dropout with a drop rate of 0.2 after the input layer (the character CNN or ELMo), after each LSTM layer, and after each MLP layer in the task-specific classifier or regressor. For small-data target tasks, we increase MLP dropout to 0.4 during target-task training. Preprocessing We use Moses tokenizer for encoder inputs, and set a maximum sequence length of 40 tokens. There is no input vocabulary, as we use ELMo's character-based input layer. For English text generation tasks, we use the Moses tokenizer to tokenize our data, but use a wordlevel output vocabulary of 20,000 types for tasks that require text generation. For translation tasks, we use BPE tokenization with a vocabulary of 20,000 types. 
For all sequence-to-sequence tasks we train word embeddings on the decoder side. Target-Task-Specific Parameters To ensure that baseline performance for each target task is competitive, we find it necessary to use slightly different models and training regimes for larger and smaller target tasks. We used partially-heuristic tuning to separate GLUE tasks into big-, mediumand small-data groups, giving each group its own heuristically chosen task-specific model specifications. Exact values are shown in Table 3. Table 3: Hyperparameter settings for target-task models and target-task training. Attention is always disabled when pretraining on GLUE tasks. STS has a relatively small training set, but consistently patterns with the larger tasks in its behavior. Sequence-to-Sequence Models We found attention to be helpful for the SkipThought and Reddit pretraining tasks but not for machine translation, and report for these configurations. We use the max-pooled output of the encoder to initialize the hidden state of the decoder, and the size of this hidden state is equal to the size of the output of our shared encoder. We reduce the dimension of the output of the decoder by half via a linear projection before the output softmax layer. Our multitask learning experiments have three somewhat distinctive properties: (i) We mix tasks with very different amounts of training data-at the extreme, under 1,000 examples for WNLI, and over 1,000,000,000 examples from LM BWB. (ii) Our goal is to optimize the quality of the shared encoder, not the performance of any one of the tasks in the multitask mix. (iii) We mix a relatively large number of tasks, up to eighteen at once in some conditions. These conditions make it challenging but important to avoid overfitting or underfitting any of our tasks. Relatively little work has been done on this problem, so we conduct a small experiment here. All our experiments use the basic paradigm of randomly sampling a new task to train on at each step, and we experiment with two hyperparameters that can be used to control over-and underfitting: The probability with which we sample each task and the weight with which we scale the loss for each task. Our experiments follow the setup in Appendix B, and do not use the ELMo BiLSTM.Task Sampling We consider several approaches to determine the probability with which to sample a task during training, generally making this probability a function of the amount of data available for the task. For task i with training set size N i, the probability is DISPLAYFORM0 where a is a constant. Loss Scaling At each update, we scale the loss of a task with weight DISPLAYFORM1 Experiments For task sampling, we run experiments with multitask learning on the full set of nine GLUE tasks, as well as three subsets: single sentence tasks (S1: SST, CoLA), similarity and paraphrase tasks (S2: MRPC, STS, QQP), and inference tasks (S3: WNLI, QNLI, MNLI, RTE). The are shown in TAB3.We also experiment with several combinations of task sampling and loss scaling methods, using only the full set of GLUE tasks. The are shown in TAB4.While no combination of methods consistently offers dramatically better performance than any other, we observe that it is generally better to apply only one of non-uniform sampling and nonuniform loss scaling at a time rather than apply both simultaneously, as they provide roughly the same effect. 
Following encouraging results from earlier pilot experiments, we use power-0.75 task sampling and uniform loss scaling in the multitask learning experiments shown in TAB0. TAB5, below, shows results on the four coarse-grained categories of the GLUE diagnostic set for all our pretraining experiments. This set consists of about 1000 expert-constructed examples in NLI format. While no model achieves near-human performance, the use of ELMo and other forms of unsupervised pretraining appears to be helpful on examples that highlight world knowledge and lexical-semantic knowledge, and less so on examples that highlight complex logical reasoning patterns or alternations in sentence structure. This relative weakness on sentence structure is somewhat surprising given the finding in prior work that language model pretraining is helpful for tasks involving sentence structure.
We compare many tasks and task combinations for pretraining sentence-level BiLSTMs for NLP tasks. Language modeling is the best single pretraining task, but simple baselines also do well.
757
scitldr
In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks. We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components. Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective in defending against many existing adversarial attacking methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation (less than 2%) on the original clean images. Although deep neural networks (DNNs) have been shown to be powerful in many machine learning tasks, it has been found that they are vulnerable to adversarial samples. Adversarial samples are subtly altered inputs that can fool the trained model into producing erroneous outputs. They are more commonly seen in image classification tasks, and typically the perturbations to the original images are so small that they are imperceptible to the human eye. Research in adversarial attacks and defences has been highly active in recent years. On the attack side, many attacking methods have been proposed, with various ways to generate effective adversarial samples that circumvent newly proposed defence methods. However, since different attacks usually are effective against different defences or datasets, there is no consensus on which attack is the strongest. Hence, for the sake of simplicity, in this work we will evaluate our proposed defence approach against four popular attacks for empirical analysis. On the defence side, various defence mechanisms have also been proposed, including adversarial training (Tramèr et al., 2017), network distillation, gradient masking, adversarial detection and adding modifications to neural networks. Nonetheless, many of them were quickly defeated by new types of attacks. There have also been attempts to provide a theoretical security guarantee for adversarial training through a min-max loss formulation, but the difficulties in non-convex optimization and in finding the ultimate adversarial samples for training may loosen this robustness guarantee. As a result, so far there is no defence that is universally robust to all adversarial attacks. Along this line of research, there have also been investigations into the properties and existence of adversarial samples. Early work first observed the transferability of adversarial samples across models trained with different hyper-parameters and across different training sets, and attributed the adversarial samples to low-probability blind spots in the manifold. Adversarial samples have also been explained as "a result of models being too linear, rather than too nonlinear." It has further been shown that the transferability occurs across models with different structures and even different machine learning techniques in addition to neural networks.
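The post-averaging technique is only named in this excerpt; its exact sampling scheme is not specified here. Purely as an illustrative sketch of output-averaging over input perturbations (not necessarily the paper's procedure), a post-processing defence of this kind might look as follows; the noise model and radius are assumptions.

```python
import torch

def post_average_predict(model, x, num_samples=16, radius=0.02):
    """Average the classifier's softmax outputs over several randomly perturbed copies of x.

    This is one plausible instantiation of a post-averaging defence; it requires no re-training,
    only a wrapper around the forward pass at inference time.
    """
    outs = []
    for _ in range(num_samples):
        noise = (torch.rand_like(x) * 2.0 - 1.0) * radius   # assumed uniform noise of small radius
        outs.append(torch.softmax(model(x + noise), dim=-1))
    return torch.stack(outs).mean(dim=0)
```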
In summary, the general existence and transferability of adversarial samples are well known but the reason of adversarial vulnerability still needs further investigation. Generally speaking, when we view neural network as a multivariate function f (x) of input x, if a small imperceptible perturbation ∆x leads to a huge fluctuation ∆f (x), the large quantity ∆f (x)/∆x essentially corresponds to high frequency components in the Fourier spectrum of f (x). In this paper, we will start with the Fourier analysis of neural networks and elucidate why there always exist some decaying but nonzero high frequency response components in neural networks. Based on this analysis, we show that neural networks are inherently vulnerable to adversarial samples due to the underlying model structure. Next, we propose a simple post-averaging method to tackle this problem. Our proposed method is fairly simple since it works as a post-processing stage of any given neural network models and it does not require re-training the networks at all. Furthermore, we have evaluated the post-averaging method against four popular adversarial attacking methods and our method is shown to be universally effective in defending all examined attacks. Experimental on the ImageNet and the CIFAR-10 datasets have shown that our simple post-averaging method can successfully defend over 80-96% of the adversarial samples generated by these attacks with little performance degradation (less than 2%) on the original clean images. In order to understand the behaviour of adversarial samples, it is essential to find the Fourier transform of neural networks. Fortunately, for some widely used neural networks, namely fully-connected neural networks using ReLU activation functions, we may explicitly derive their Fourier transform under some minor conditions. As we will show, these theoretical will shed light on how adversarial samples happen in neural networks. As we know, any fully-connected ReLU neural networks (prior to the softmax layer) essentially form piece-wise linear functions in input space. Due to space limit, we will only present the main in this section and the proofs and more details may be found in Appendix. Definition 2.1. A piece-wise linear function is a continuous function f: R n − → R such that there are some hyperplanes passing through origin and dividing R n into M pairwise disjoint regions R m, (m = 1, 2, ..., M), on each of which f is linear: Composition of a piece-wise linear function with a ReLU activation function is also a piece-wise linear function. Theorem 2.3. The output of any hidden unit in an unbiased fully-connected ReLU neural network is a piece-wise linear function. This is straightforward because the input to any hidden node is a linear combination of piece-wise linear functions and this input is composed with the ReLU activation function to yield the output, which is also piece-wise linear. However, each region R m is the intersection of a different number of half-spaces, enclosed by various hyperplanes in R n. In general, these regions R m (m = 1, · · ·, M) do not have simple shapes. For the purpose of mathematical analysis, we need to decompose each region into a union of some well-defined shapes having a uniform form, which is called infinite simplex. Definition 2.4. Let V = {v 1, v 2, ..., v n} be a set of n linearly independent vectors in R n. An infinite simplex, R + V, is defined as the region linearly spanned by V using only positive weights: Theorem 2.5. 
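A quick numerical check of the piecewise-linearity claim above: for a bias-free ReLU network, scaling the input by a positive factor keeps it inside the same region, so the output scales by the same factor, and the input gradient is constant within a region. The toy network below is an assumption for illustration only.

import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(4, 16, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(16, 16, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(16, 1, bias=False),
)

x = torch.randn(4)
# Positive homogeneity of unbiased ReLU nets: f(a*x) = a*f(x) for a > 0.
print(torch.allclose(net(2.0 * x), 2.0 * net(x), atol=1e-5))

def input_grad(z):
    z = z.clone().requires_grad_(True)
    net(z).sum().backward()
    return z.grad

# Within one region the map is linear, so the gradient is constant; a tiny step
# stays in the same region with high probability, giving identical gradients.
print(torch.allclose(input_grad(x), input_grad(x + 1e-4 * torch.randn(4)), atol=1e-3))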
Each piece-wise linear function f (x) can be formulated as a summation of some simpler functions:, each of which is linear and non-zero only in an infinite simplex as follows: where V l is a set of n linearly independent vectors, and w l is a weight vector. In practice, we can always assume that the input to neural networks, x, is bounded. As a , for computational convenience, we may normalize all inputs x into the unit hyper-cube, U n = n. Obviously, this assumption can be easily incorporated into the above analysis by multiplying each is the Heaviside step function. Alternatively, we may simplify this term by adding n 2 additional hyperplanes to further split the input space to ensure all the elements of x do not change signs within each region R + Vq. In this case, within each region R + Vq, the largest absolute value among all elements of x is always achieved by a specific element, which is denoted as r q. In other words, the dimension x rq achieves the largest absolute value inside R + Vq. Similarly, the normalized piece-wise linear function may be represented as a summation of some functions: f (x) = Q q=1 g q (x), where each g q (x) (q = 1, 2, · · ·, Q) has the following form: For every V q, there exists an n × n invertible matrix A q to linearly transform all vectors of V q into standard basis vectors e i in R n. As a , each function g q (x) may be represented in terms of standard bases V * = {e 1, · · ·, e n} as follows: q. Lemma 2.6. Fourier transform of the following function: 0 otherwise may be presented as: where ω r is the r-th component of frequency vector ω (r = 1, · · ·, n), and ω 0 = 0. Finally we derive the Fourier transform of fully-connected ReLU neural networks as follows. Theorem 2.7. The Fourier transform of the output of any hidden node in a fully-connected unbiased 1 ReLU neural network may be represented as Obviously, neural networks are the so-called approximated bandlimited models as defined in , which have decaying high frequency components in Fourier spectrum. Theorem 2.7 further suggests that the matrices A −1 q may contribute to the high frequency components when the corresponding region R + Vq are too small. This is clear because the determinant of A q is proportional to the volume of R + Vq in R n. In summary, the high frequency components of neural networks are mostly attributed to these tiny regions in the input space. As we will show later, these small regions may be explicitly exploited to generate adversarial samples for neural networks. As shown in Theorem 2.3, neural network may be viewed as a sequential division of the input space into many small regions, as illustrated in Figure 1. Each layer is a further division of the existing regions from the previous layers, with each region being divided differently. Hence a neural network with multiple layers would in a tremendous amount of sub-regions in the input space. For example, when cutting an n-dimensional space using N hyperplanes, the maximum number of regions may be computed as For a hidden layer of N = 1000 nodes and input dimension is n = 200, the maximum number of regions is roughly equal to 10 200. In other words, even a middle-sized neural network can partition input space into a huge number of sub-regions, which can easily exceed the total number of atoms in the universe. When we learn a neural network, we can not expect there is at least one training sample inside each region. 
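The region-count expression referenced just above appears to have been lost in extraction. Assuming the standard bound for N hyperplanes in general position in R^n, the maximum number of regions is the sum of C(N, k) for k = 0..n; the short script below evaluates it for the quoted N = 1000, n = 200 setting and confirms that the count is astronomically large.

from math import comb

def max_regions(num_hyperplanes: int, dim: int) -> int:
    # Classical bound for hyperplanes in general position in R^dim (an assumption;
    # the exact formula differs slightly if all hyperplanes pass through the origin).
    return sum(comb(num_hyperplanes, k) for k in range(dim + 1))

print(len(str(max_regions(1000, 200))))   # hundreds of decimal digits of regions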
For those regions that do not have any training sample, the ant linear functions in them may be arbitrary since they do not contribute to the training objective function at all. Of course, most of these regions are extremely small in size. When we measure the expected loss function over the entire space, their contributions are negligible since the chance for a randomly sampled point to fall into these tiny regions is extremely small. However, adversarial attack is imposing a new challenge since adversarial samples are not naturally sampled. Given that the total number of regions is huge, those tiny regions are almost everywhere in the input space. For any data point in the input space, we almost surely can find such a tiny region in proximity where the linear function is arbitrary. If a point inside this tiny region is selected, the output of the neural network may be unexpected. We believe that these tiny unlearned regions may be a major reason why neural networks are vulnerable to adversarial samples. In layered deep neural networks, the linear functions in all regions are not totally independent. If we use v (l) to denote the weight matrix in layer l, the ant linear weight w k in eq. is actually the sum of all concatenated v (l) along all active paths. When we make a small perturbation ∆x to any input x, the fluctuation in the output of any hidden node can be approximated represented as: where N denotes the total number of hyperplanes to be crossed when moving x to x + ∆x. In any practical neural network, we normally have at least tens of thousands of hyperplanes crossing the hypercube U n = n. In other words, for any input x in a high-dimensional space, a small perturbation can always easily cross a large number of hyperplanes to enter a tiny unlearned region. When N is fairly large, the above equation indicates that the output of a neural network can still fluctuate dramatically even after all weight vectors are regularized by L 1 or L 2 norm. As a reference, we have verified this on some ImageNet data using a VGG16 model. When PGD is used to generate adversarial samples with average perturbation ||∆x|| 2 ≤ 0.35, which is extremely small perturbation since x has over a hundred thousand dimensions on ImageNet, we have observed that in average about N = 5278 hyperplanes are crossed per layer even after such a small perturbation is added. At last, since the ubiquitous existence of unlearned tiny regions is an intrinsic property of neural networks given its current model structure, we believe that adversarial training strategies will not be sufficient to completely get rid of adversarial samples. In principle, neural networks must be strictly bandlimited to filter out those decaying high frequency components in order to completely eliminate all adversarial samples. We definitely need more research efforts to figure out how to do this effectively and efficiently for neural networks. 3 THE PROPOSED DEFENCE APPROACH: POST-AVERAGING 3.1 POST-AVERAGING In this paper, we propose a simple post-processing method to smooth out those high frequency components as much as possible, which relies on a simple idea similar to moving-average in onedimensional sequential data. Instead of generating prediction merely from one data point, we use the averaged value within a small neighborhood around the data point, which is called post-averaging here. 
Mathematically, the post-averaging is computed as an integral over a small neighborhood centered at the input: where x is the input and f (x) represents the output of the neural network, and C denotes a small neighborhood centered at the origin and V C denotes its volume. When we choose C to be an n-sphere in R n of radius r, we may simply derive the Fourier transform of f C (x) as follows: where J n 2 (·) is the first kind Bessel function of order n/2. Since the Bessel functions, J ν (ω), decay with rate 1/ √ ω as |ω| → ∞ , we have as |ω| → ∞. Therefore, if r is chosen properly, the post-averaging operation can significantly bandlimit neural networks by smoothing out high frequency components. Note that the similar ideas have been used in to improve robustness in speech recognition. However, it is intractable to compute the above integral for any meaningful neural network used in practical applications. In this work, we propose to use a simple numerical method to approximate it. For any input x, we select K points in the neighborhood C centered at x, i.e. {x 1, x 2, · · ·, x K}, to approximately compute the integral as Obviously, in order to defend against adversarial samples, it is important to have samples outside the current unlearned tiny region. In the following, we use a simple sampling method based on directional vectors. To generate a relatively even set of samples for eq., we first determine some directional vectorsv, and then move the input x along these directions using several step sizes within the sphere of radius r:, ±r], andv is a selected unit-length directional vector. For each selected direction, we generate six samples within C along both the positive and the negative directions to ensure efficiency and even sampling. We use this implementation for the convenience to extend with different types of sampling strategies. We tried several direction sampling strategies, including using the directions towards the closest region boundaries, and found that the simple random direction sampling gives the best performance. In this sampling method, we fill the directional vectors with random numbers generated from a standard normal distribution, and then normalize them to have unit length. In this section, we evaluate the above post-averaging method on defending against several popular adversarial attacking methods. • Dataset: We evaluated our method on both the ImageNet and CIFAR-10 datasets. Since our proposed post-averaging method does not need to re-train neural networks, we do not need to use any training data in our experiments. For evaluation purpose, we use the validation set of the ImageNet dataset. The validation set consists of 50000 images labelled into 1000 categories. For computational efficiency, we randomly choose 5000 images from the ImageNet validation set and evaluate our model on these 5000 images. For the CIFAR-10 dataset, we use the full test set, which consists of 10000 images labelled into 10 categories. • Target model: For model on ImageNet, we use a pre-trained ResNet-152 network that is available from PyTorch, while for CIFAR-10, we use a pre-trained ResNet-110 network from Yerlan Idelbayev 2. In our experiments, we directly use these pre-trained models without any modification. • Source of adversarial attacking methods: We use Foolbox, an open source tool box to generate adversarial samples using different adversarial attacking methods. 
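A minimal sketch of the sampling scheme just described: K random unit-length directions, three radii in both the positive and negative direction, and averaged predictions over all samples plus the input itself. The model handle and the choice to average softmax outputs are assumptions for illustration.

import torch
import torch.nn.functional as F

def post_average_predict(model, x, K=15, r=30.0):
    """x: a single input of shape (C, H, W); returns averaged class probabilities."""
    samples = [x]
    for _ in range(K):
        v = torch.randn_like(x)
        v = v / v.norm()                       # random unit-length direction
        for step in (r / 3, 2 * r / 3, r):     # three step sizes, both directions
            samples.append(x + step * v)
            samples.append(x - step * v)
    batch = torch.stack(samples)               # 6*K + 1 samples in one mini-batch
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1)
    return probs.mean(dim=0)                   # numerical approximation of the integral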
In this work, we tested our method against four popular attacking methods in the literature: Fast Gradient Sign method (FGSM) , Projected Gradient Descent (PGD) method , DeepFool (DF) attack method and Carlini & Wagner (C&W) L2 attack method (a). We used these attack methods in their default settings. • Threat model: In our experiments, we use an l ∞ norm to constrain the allowed perturbation distance. For each experiment, we define: • Clean set: The dataset that consists of the original images from ImageNet or CIFAR-10. • Attacked set: For every correctly classified image in the Clean set, if an adversarial sample is successfully generated under the attacking criteria, the original sample is replaced with the adversarial sample; if no adversarial sample is found, the original sample is kept in the dataset. Meanwhile, all the misclassified images are kept in the dataset without any change. Therefore the Attacked set has the same number of images as the clean set. In our experiments, we evaluate the original network and the network defended by post-averaging on both the Clean and the Attacked sets. The performance is measured in terms of: • Accuracy: number of correctly classified images over the whole dataset. • Defence rate: number of successfully defended adversarial samples over the total number of adversarial samples in the Attacked set. By "successfully defended", it refers to the case where an adversarial sample is correctly classified after the original model is defended by the post-averaging approach. Table 1 shows the performance of our defence approach against different attacking methods. In this table, the samples for post-averaging are selected within an n-sphere of radius r as in eq., with K = 15 different directions. Thus in a total of 15 × 2 × 3 + 1 = 91 samples (including the input) for each input image to be used in eq.. Moreover, all the adversarial samples generated are restricted to be within the perturbation range = 8 /255. We show the top-1 accuracy of the original model and the defended model on both the Clean and the Attacked set respectively, as well as the defence rate of the defended model. Besides, we also show the number of adversarial samples successfully generated by each attacking method in the last column. From Table 1, we can see that our proposed defence approach is universally robust to all of the attacking methods we have examined. It has achieved above 80-96% defence rates in all the experiments with only a minor performance degradation in the Clean set (less than 2%). Especially on the ImageNet dataset, our method is able to defend about 95% of the adversarial samples. However, an interesting observation from the experimental is that the defence rate in the CIFAR-10 dataset is lower than the usually more challenging ImageNet dataset. We think this may be because data points are sparser in the ImageNet space than in the CIFAR-10 space, as ImageNet has a much larger dimensionality. Generally, using a larger sampling radius r can increase the chance of moving out of the unlearned regions as we desired, but it will also introduce more noise that can harm the prediction accuracy; On the other hand, using a smaller sampling radius r can reduce the performance degradation but it may not be sufficient to defend against adversarial samples. The optimal value for r varies with different datasets due to their dimensionality and data sparsity. In experiments, we found that r = 30 for ImageNet and r = 6 for CIFAR-10 achieved relatively better performance. 
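As a hedged sketch of the evaluation protocol defined above (building the Attacked set and measuring the defence rate), the callables below are placeholders rather than a specific library API.

def defence_rate(model_predict, defended_predict, attack, dataset):
    """dataset: iterable of (x, y); `attack` returns an adversarial x or None."""
    n_adv, n_defended = 0, 0
    for x, y in dataset:
        if model_predict(x) != y:
            continue                        # misclassified images stay unchanged
        x_adv = attack(x, y)
        if x_adv is None:
            continue                        # no adversarial sample found
        n_adv += 1
        if defended_predict(x_adv) == y:    # "successfully defended"
            n_defended += 1
    return n_defended / max(n_adv, 1)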
Figure 2 shows how the model defence rate on ImageNet varies with different r. As shown in the figure, the optimal value for r also varies in different attacking methods, but the performance variations are small. In general, our model retains high defence rate throughout the r range. We also tested the effect of K, the number of sampling directions used, on the model performance. From Table 2, we can see that our model performance is not very sensitive to K. It is able to achieve a good defence rate with only K = 6, that is, 37 samples used for each input image. In implementation, these samples can be easily packed into a mini-batch for fast computation in GPUs. When running on the same machine, we measured the averaged inference time for a single input image on the original network as 0.04 seconds, while the inference time for our models with different K are shown in Table 2. By comparison, we can know that the inference time after adding post-averaging is roughly 2 3 K of the original inference time. At last, we evaluated our post-averaging defence approach against attacks with different allowed perturbation ranges. The are shown in Figure 3. As we can see, our model retains very good attack defence rate up to = 32 /255. Note that the defence rate against PGD and C&W doesn't change much along the variation of, this is because PGD and C&W have already successfully generated adversarial samples for most of the correctly classified inputs when is small. Hence their generated adversarial samples will not change much when using larger. For FGSM, our method yields lower defending performance. The possible reason is that FGSM tends to generate much larger perturbations than other three stronger attacking methods under the same setting. A large perturbation is more likely to move samples across class-specific decision boundaries to generate much more confusing samples. In our opinion, this is a general phenomenon in pattern classification, not particular to adversarial attacks. In this paper, we have presented some theoretical by Fourier analysis of ReLU neural networks. These are useful for us to understand why neural networks are vulnerable to adversarial samples. Based on the , we hypothesize that the inevitable and ubiquitous existence of tiny unlearned regions in the model function mapping may be a major reason for adversarial vulnerability. As a possible defence strategy, we have proposed a simple post-averaging method. Experimental on the ImageNet and the CIFAR-10 datasets have demonstrated that our simple defence technique turns out to be very effective against many popular attack methods in the literature. Finally, it will be interesting to see whether our post-averaging method will be still robust against any new attack methods in the future. Definition B.1. A piece-wise linear function is a continuous function f: R n − → R such that there are some hyperplanes passing through origin and dividing R n into M pairwise disjoint regions R m, (m = 1, 2, ..., M), on each of which f is linear: Proof. This proposition immediately follows lemma B.2. Definition B.4. Let V = {v 1, v 2, ..., v n} be a set of n independent vectors in R n. An infinite simplex, R + V, is defined as the region linearly spanned by V using only positive weights: Theorem B.5. Each piece-wise linear function f (x) can be formulated as a summation of some functions:, each of which is linear and non-zero only in an infinite simplex as follows: where V k is a set of n independent vectors, and w k is a weight vector. Proof. 
Each region R p of a piece-wise linear function, f (x), which describes the behavior of a ReLU node if intersects with an affine hyper-plane in a convex polytope. This convex polytope can be triangulated into some simplices. Define V k, (k = 1, 2, ..., K), sets of vertexes of these simplices. The infinite simplexes created by these vector sets will have the desired property and f (x) can be written as: As explained earlier in the original article by adding n 2 hyper-planes to those defining the piece-wise linear function, the output of a ReLU node may be represented as f (x) = Q q=1 g q (x). These hyper-planes are those perpendicular to standard basis vectors and subtraction of one of these vectors from another one. That is, e i (i = 1, . . ., n) and e i − e j (1 ≤ i < j ≤ n). Given this representation, the final step to achieve the Fourier transform is the following lemma: Lemma B.6. Fourier transform of the following function: 0 otherwise may be presented as: where ω r is the rth component of frequency vector ω (r = 1, · · ·, n), and ω 0 = 0. Proof. Alternatively, s(x) may be represented as: Therefore, we need to compute Fourier transform of h(x)h(1 − x): By taking the inverse Fourier transform of the function: where δ n is n-dimensional Dirac Delta function, it can be shown that it is the Fourier transform of Now we can find the Fourier transform of s(x) where Ω = {ω 0, ..., ω n}, σ B is the summation over elements of B and A r = If B does not contain ω r and have at least 2 elements then the terms for B and B ∪ {ω r} will cancel each other out. Also, sign(|B| − 1) will vanish if B has only one element. Therefore, there only remains empty set and sets with two elements one of them being ω r. Given the fact that A r = 0, the of the integral will be: or equivalently: Therefore: whereω q = ωA −1 q. As for the Fourier transform computed in section 3.1, it should be mentioned that the integral in equation 6 is the Fourier transform of: which can be derived utilizing the property of the Fourier transforms for radially symmetric functions : Given this transform:
An insight into the reason of adversarial vulnerability, an effective defense method against adversarial attacks.
758
scitldr
Reinforcement learning methods that continuously learn neural networks by episode generation with game tree search have been successful in two-person complete information deterministic games such as chess, shogi, and Go. However, there are only reports of practical cases and there are little evidence to guarantee the stability and the final performance of learning process. In this research, the coordination of episode generation was focused on. By means of regarding the entire system as game tree search, the new method can handle the trade-off between exploitation and exploration during episode generation. The experiments with a small problem showed that it had robust performance compared to the existing method, Alpha Zero. The that computer programs beat professional human players on chess, shogi and Go was a huge achievement in computer science. In particular, the development of highly general methods totally changed our perspective about two-person complete information deterministic games. Then, has this field already finished? My answer is no. To deal with many games, more robust methods are required to free humans from hyperparameter tuning. Moreover, the challenge to the god of games won't be finished and we want algorithms that can achieve better final performance. This study attempts to bring suggestions for recent achievements in two-player complete information deterministic games from classical game tree search context. More specifically, this is a new approach in which the reinforcement learning system using game tree search itself is handled as game tree search. Turn-based complete information environments this report deals with is called Markov decision process. When the state s t is observed at the time t and the action a t is performed, the environment returns immediate reward r t and the next state s t+1. In general, reinforcement learning is a framework which improves agent's policy p and value estimation v in a given environment. While p and v can be stored in a table for all states in a small environment, function approximation is commonly used for larger problems. Model-free reinforcement learning using a deep neural network (DNN) has been achieving great resutls after the success on several video game environments. This study is aimed at two-person complete information deterministic games. While various modelfree reinforcement learning algorithms can be applied to these games, we can also apply modelbased reinforcement learning because agents can keep the state transition model of the environment. In particular, we can apply forward planning to the two-person complete information deterministic game using the fact that the progress of the game is represented as a tree structure. Ideally, the winning player and the optimal action in any two-player complete information deterministic game in which a finite number of states can appear can be determined by performing minimax search on the game tree within finite time. Actually, it is not necessary to search all states in order to determine the winner at the initial state. We can use alpha-beta search which reduces useless search. However, if the game is not very easy, it is not realistic to complete alpha-beta search that takes exponential time against the size of the game. For this reason, various methods have been developed so far that can be executed in a realistic time and achieve sufficient performance. 
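The PUCB formula referred to as Eq. 1 is not reproduced in the text. As a hedged sketch, the standard AlphaZero-style selection score combines the estimated action value, the prior policy, and the visit counts as below; the exact constant and the "+1" terms are common implementation choices, not necessarily the paper's.

import math

def select_action(q, p, n, c_puct=2.0):
    """q, p, n: per-action value estimates, prior policy, and visit counts at a node."""
    total = sum(n)
    scores = [
        q[a] + c_puct * p[a] * math.sqrt(total + 1) / (1 + n[a])
        for a in range(len(p))
    ]
    return max(range(len(p)), key=lambda a: scores[a])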
Roughly, both the algorithmic improvement of the search and the reduction of the search amount by the function approximation have been much effective. For example, Monte Carlo tree search (MCTS), which is the base of this research, performs effective forward search using function approximations. With the good property that the performance improves monotonously the the time being spent is, MCTS is widely used in this field. There are a number of variants of Monte Carlo tree search, but the algorithm PUCT in algorithm 1 2 is described in this report. In that pseudo code, it is assumed that the turn player is changed every turn, and the discount factor is 1, and the immediate reward is obtained only at the terminal state. The Monte Carlo tree search starts from an empty game-tree, and the new node which represents one state is added to expand the game tree in each simulation. After expanding tree, the new state value which approximation function (e.g. neural nets) has returned is fed back to the upper nodes to update the estimated action value of each node. The core of the Monte Carlo tree search is how to decide action response on the game tree, where the bandit algorithm is used. The bandit algorithm is an algorithm for maximizing the cumulative reward in a problem setting where we select one of multiple slot machines that we do not know the true average reward in one turn. It is required for bandit algorithms to handle the trade-off between exploration and exploitation. In the Monte Carlo tree search, by using this bandit algorithm for selection phase at each node, the best action is asymptotically proven and selected at each node, and as a , it can asymptotically find the optimal action. PUCT uses the following PUCB formula 1 as the bandit algorithm. In previous research, the output of Monte Carlo tree search usually only consists the best action or the new policy probability distribution at the root node. However, the new estimated value at the root node (v s in algorithm 2) is also returned in algorithm 2 in order to use it in this research. Although it is a detailed point, there is important point that is heavily related to the experimental of this report, which is that some noise is added to the policy by applying AddNoise function only at the root node shown in the algorithm 1. This is because the root node is recognized not only as bandits but also as best arm identification problems. Therefore the action selection should be more exploratory to find good actions which have been evaluated poorly. In previous method, noises generated by the Dirichlet distribution are added as described in 2. This study follows that. Also from the viewpoint of reinforcement learning, it is useful for agents to be able to perform the game tree search using the true environment model. This is because agents can usually obtain a Algorithm 2: PUCT Data: state s 0, neural net net, total simulation count N Result: posterior estimation of policy and value at s 0 begin while better strategy and a more accurate value estimation by performing the game tree search. Therefore, the game tree search can be seen as a kind of operator to improve there estimates. Various reinforcement learning studies using this property have been done. In particular Alpha Zero and EXpert ITeration are the great success, which enables us to handle various games. These algorithms continuously update approximation function by making training targets by using Monte Carlo tree search as such operator. 
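A small sketch of the AddNoise step applied only at the root node; the mixing form follows the AlphaZero convention, with the Dirichlet parameters taken from the experimental settings listed later (alpha = 0.1, epsilon = 0.25).

import numpy as np

def add_root_noise(p, alpha=0.1, eps=0.25):
    noise = np.random.default_rng().dirichlet([alpha] * len(p))
    return (1.0 - eps) * np.asarray(p) + eps * noise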
The procedure of Alpha Zero is followed in this research. Algorithm 3 describes the single-thread version of Alpha Zero. Algorithm 3: Alpha Zero (single-thread version). Data: total epochs E, games per epoch G, simulation count per move N. Result: trained neural net. In the previous method, the variety of generated episodes depends on random action selection after the Monte Carlo tree search. However, if there is a state where the policy computed by the neural network is too sharp, it might be difficult to reverse the action evaluation within a limited number of simulations. Therefore, several weak states may remain even after a long training process. The experimental results in this study suggest that such phenomena can actually be observed. A previous work proposed preprocessing and post-processing the simulation counts in order to improve explorability at the root node, and its results suggested that this can accelerate the learning speed when learning Go. In this report, from a different perspective, a method that applies Monte Carlo tree search to the whole episode-generation procedure is proposed. The output of a Monte Carlo tree search is a new policy distribution and a new state-value estimate. Therefore, the Monte Carlo tree search itself can also be regarded as a function that produces policy and value estimates. This fact implies that we can run a Monte Carlo tree search which uses this function. In the proposed method, a Monte Carlo tree search is performed on the master side that organizes episode generation; it is hereinafter referred to as the master game tree. The values in Table 1 are stored in each node of the master game tree. Thus, the values stored in each node of the master game tree have a one-to-one correspondence with the values stored in an ordinary Monte Carlo tree search node, so the master game tree can be implemented in the same way as the ordinary one. It is desired that the master game tree converge to the optimal strategy asymptotically by assigning episode generation through this master game tree. Theoretical convergence would presumably be guaranteed if the same classic bandit algorithm with no prior information were used for each action selection on the master game tree. However, motivated by the success of research on Monte Carlo tree search so far, function approximation is also used on the master-game-tree side in the proposed implementation; that is, PUCT is also applied to the master game tree. Since the proposed method uses one Monte Carlo tree search (the master game tree) to refine the performance of another Monte Carlo tree search, this report names it MbM (MCTS-by-MCTS). The difference between the master game tree search and an ordinary Monte Carlo tree search is that the accuracy of the policy and the value estimates returned by the neural network increases as training proceeds. Therefore, by updating p and v, which are fixed at node-creation time in ordinary PUCT, the bandits on nodes that were created earlier can behave as if they had been created recently. Instead of replacing p or v with the new values, the proposed method computes a weighted average of the old and new values after each episode, which helps absorb the fluctuation of the result of each individual Monte Carlo tree search. This weighting ratio can be increased as the generation of the neural network progresses, to emphasize newer estimates.
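One possible reading of the node update described above, as a hedged sketch: each master-tree node keeps running averages of the policy and value returned by successive episode-level searches. The exact bookkeeping of Algorithm 4 may differ.

class MasterNode:
    def __init__(self, num_actions):
        self.count = 0
        self.p = [1.0 / num_actions] * num_actions   # averaged policy estimate
        self.v = 0.0                                  # averaged value estimate

    def update(self, new_p, new_v):
        self.count += 1
        w = 1.0 / self.count                          # simple average; a schedule could
        self.p = [(1 - w) * a + w * b for a, b in zip(self.p, new_p)]
        self.v = (1 - w) * self.v + w * new_v         # up-weight newer networks instead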
In this report, however, a simple average is used. Specifically, the master game tree is updated as in the algorithm 4. The state value estimate of the newly added node and the difference between the old and the new state value estimation inside the tree are summed and propagated to the upper nodes. Using this update rule, the entire proposed method is discribed in the algorithm 5. The NextPathByMasterTree function, which generates the opening action sequence by descending the master game tree, is almost the same as the operation of descending the tree in the algorithm 1. When going down the master game tree, the noise is always added to p s in the algorithm 1 in the regular version of the proposed method. This point will be discussed at the later part of this report. In the experiment, in the tic-tac-toe where the solution is already sought for the small size, the existing method AlphaZero and the proposed method learned for 20,000 battles, The following points were compared by the obtained neural network policy. • Ratio of defeating against random players • Ratio of defeating to perfect players Tic-tac-toe is obviously very simple task, and it is easy to obtain a perfect strategy by minimax search. Only by function approximation using neural network, however, it may be necessary to generate a certain number of episodes and train from them to obtain good policy. Still, it is so easy problem that we desire that neural nets can output the optimal strategy. The experiments were performed with the following settings. In the proposed method, in the bandit on the master game tree side, the same as the root node on the episode generation side. We tried two ways of adding Dirichlet noise to the policy and not adding it. Especially when adding Dirichlet noise to the policy, if the other parameters are the same in Alpha Zero and the MbM, episode generation become more exploratory in the proposed method than Alpha Zero. The performance can vary greatly depending on the effect (actually shown in the experimental in this paper). Therefore, the initial values of episode generation temperature parameters were varied as 0.3, 0.6, 1.2, and 2.4, respectively into both the previous method and the proposed method. Other experimental settings and hyperparameters are shown below. For details, please refer to the online experimental code. 1. • attenuation rate of generation temperature: 0.8 • C in PUCB: 2.0 • noise to the policy α = 0.1, ϵ = 0.25 • the number of epochs: 200 • the number of episodes per epoch: 100 • input feature: 2 planes (the position mine, the position of opponents) • detail of neural nets: (3x3 convolution with 32 filters + BN), (Wide Resnet x 4 layers), (1x1 convolution with 2 filters + BN + fully connected + softmax in policy head), (1x1 convolution with 2filters + BN + fully connected + tanh) • detail of training data: (batch size 32 x the number of episodes before x 50 loop) per epoch, • detail of optimization: SGD (learning rate=1e-3, momentum=0.75 weight decay=1e-4) The experimental are plotted in the figure 1 2, comparing the temperature conditions for each opponent (random player, perfect player) in the previous method AlphaZero and the proposed method MbM. From the of AlphaZero in the FIg. 1 2, Alpha Zero has not reached the optimal strategy within 200 epochs when the temperature of episode generation is low. In comparison, the of MbM in Fig. 1 2 shows that almost optimal strategies seem to be obtained under all conditions. 
Compared across those results, the performance of the proposed method is equal to or better than the best result of AlphaZero. In the previous experiment, the proposed method MbM showed good performance. However, compared with the results of Alpha Zero, there remains the possibility that the performance difference is caused only by the extra explorability of episode generation introduced by adding noise to the policy when going down the master game tree. An additional experiment was therefore performed with the following two modified versions of the proposed method under low temperatures (t = 0.3 or t = 0.6). • MbM-NoNoise: no noise is added to the policy. • MbM-Relaxation: a conversion is applied to the policy in order to soften its sharpness; concretely, the policy p is modified as p ← (p + 0.1)/(∑(p + 0.1)). The figure shows learning up to 100 epochs under each condition, compared with the results of the previous experiment. The results in Fig. 3 show that MbM-NoNoise, which does not add noise, is even worse than Alpha Zero. MbM-Relaxation, in which a fixed amount of diversity is added to the policy, was not as good as MbM but clearly better than Alpha Zero. This point is discussed in the final section. In this study, we examined a very simple task, Tic-tac-toe. First of all, it was shown that obtaining the optimal strategy is sometimes difficult depending on the parameters. The results suggest that reinforcement learning methods like Alpha Zero often suffer from naive exploration. In the proposed method, it is possible to vary the beginning of the game during episode generation via the master game tree, and the results suggest that the proposed method can control the exploration of game openings by adding proper noise. On the other hand, when PUCT is applied to the master game tree using the policy as it is (MbM-NoNoise), the performance was lower than the baseline. The reason is that the policy converges before the exploration in the master game tree can take effect, so it was not effective. In this report, PUCT is applied to the master game tree in the same way as to an ordinary game tree; however, it is necessary to examine a mechanism that makes it more exploratory. Lastly, this study verified only one of the simplest games, Tic-tac-toe. From the experimental results in this paper, it is expected that the proposed method can produce results that are robust with respect to the temperature parameters even for larger games. It will also be necessary to verify whether the speed of improvement in wall-clock time is better than that of previous methods. I hope that the combination of tree search and reinforcement learning will be applied to a wider range of domains once a method that offers both stability and speed is established.
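The uniform relaxation used by the MbM-Relaxation variant above can be written in a couple of lines.

import numpy as np

def relax_policy(p, delta=0.1):
    p = np.asarray(p) + delta      # add a fixed amount of probability mass everywhere
    return p / p.sum()             # renormalize: p <- (p + 0.1) / sum(p + 0.1)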
Apply Monte Carlo Tree Search to episode generation in Alpha Zero
759
scitldr
Message-passing neural networks (MPNNs) have been successfully applied in a wide variety of applications in the real world. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs. Few studies have noticed the weaknesses from different perspectives. From the observations on classical neural network and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses. The behind basic idea is the aggregation on a graph can benefit from a continuous space underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation. We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs. Experimental show the proposed Geom-GCN achieved state-of-the-art performance on a wide range of open datasets of graphs. Message-passing neural networks (MPNNs), such as GNN , ChebNet , GG-NN , GCN , are powerful for learning on graphs with various applications ranging from brain networks to online social network . In a layer of MPNNs, each node sends its feature representation, a "message", to the nodes in its neighborhood; and then updates its feature representation by aggregating all "messages" received from the neighborhood. The neighborhood is often defined as the set of adjacent nodes in graph. By adopting permutation-invariant aggregation functions (e.g., summation, maximum, and mean), MPNNs are able to learn representations which are invariant to isomorphic graphs, i.e., graphs that are topologically identical. Although existing MPNNs have been successfully applied in a wide variety of scenarios, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data. Firstly, the aggregators lose the structural information of nodes in neighborhoods. Permutation invariance is an essential requirement for any graph learning method. To meet it, existing MPNNs adopt permutation-invariant aggregation functions which treat all "messages" from neighborhood as a set. For instance, GCN simply sums the normalized "messages" from all one-hop neighbors . Such aggregation loses the structural information of nodes in neighborhood because it does not distinguish the "messages" from different nodes. Therefore, after such aggregation, we cannot know which node contributes what to the final aggregated output. Without modeling such structural information, as shown in and, the existing MPNNs cannot discriminate between certain non-isomorphic graphs. In those cases, MPNNs may map non-isomorphic graphs to the same feature representations, which is obviously not desirable for graph representation learning. Unlike MPNNs, classical convolutional neural networks (CNNs) avoid this problem by using aggregators (i.e., convolutional filters) with a structural receiving filed defined on grids, i.e., a Euclidean space, and are hence able to distinguish each input unit. As shown by our experiments, such structural information often contains clues regarding topology patterns in graph (e.g., hierarchy), and should be extracted and used to learn more discriminating representations for graph-structured data. 
Secondly, the aggregators lack the ability to capture long-range dependencies in disassortative graphs. In MPNNs, the neighborhood is defined as the set of all neighbors one hop away (e.g., GCN), or all neighbors up to r hops away (e.g., ChebNet). In other words, only messages from nearby nodes are aggregated. The MPNNs with such aggregation are inclined to learn similar representations for proximal nodes in a graph. This implies that they are probably desirable methods for assortative graphs (e.g., citation networks and community networks ) where node homophily holds (i.e., similar nodes are more likely to be proximal, and vice versa), but may be inappropriate to the disassortative graphs where node homophily does not hold. For example, shows disassortative graphs where nodes of the same class exhibit high structural similarity but are far apart from each other. In such cases, the representation ability of MPNNs may be limited significantly, since they cannot capture the important features from distant but informative nodes. A straightforward strategy to address this limitation is to use a multi-layered architecture so as to receive "messages" from distant nodes. For instance, due to the localized nature of convolutional filters in classical CNNs, a single convolutional layer is similarly limited in its representational ability. CNNs typically use multiple layers connected in a hierarchical manner to learn complex and global representations. However, unlike CNNs, it is difficult for multi-layer MPNNs to learn good representations for disassortative graphs because of two reasons. On one hand, relevant messages from distant nodes are mixed indistinguishably with a large number of irrelevant messages from proximal nodes in multi-layer MPNNs, which implies that the relevant information will be "washed out" and cannot be extracted effectively. On the other hand, the representations of different nodes would become very similar in multi-layer MPNNs, and every node's representation actually carries the information about the entire graph. In this paper, we overcome the aforementioned weaknesses of graph neural networks starting from two basic observations: i) Classical neural networks effectively address the similar limitations thanks to the stationarity, locality, and compositionality in a continuous space; ii) The notion of network geometry bridges the gap between continuous space and graph . Network geometry aims to understand networks by revealing the latent continuous space underlying them, which assumes that nodes are sampled discretely from a latent continuous space and edges are established according to their distance. In the latent space, complicated topology patterns in graphs can be preserved and presented as intuitive geometry, such as subgraph , community (, and hierarchy (; . Inspired by those two observations, we raise an enlightening question about the aggregation scheme in graph neural network. • Can the aggregation on a graph benefit from a continuous latent space, such as using geometry in the space to build structural neighborhoods and capture long-range dependencies in the graph? To answer the above question, we propose a novel aggregation scheme for graph neural networks, termed the geometric aggregation scheme. In the scheme, we map a graph to a continuous latent space via node embedding, and then use the geometric relationships defined in the latent space to build structural neighborhoods for aggregation. 
Also, we design a bi-level aggregator operating on the structural neighborhoods to update the feature representations of nodes in graph neural networks, which are able to guarantee permutation invariance for graph-structured data. Compared with exist-ing MPNNs, the scheme extracts more structural information of the graph and can aggregate feature representations from distant nodes via mapping them to neighborhoods defined in the latent space. We then present an implementation of the geometric aggregation scheme in graph convolutional networks, which we call Geom-GCN, to perform transductive learning, node classification, on graphs. We design particular geometric relationships to build the structural neighborhood in Euclidean and hyperbolic embedding space respectively. We choose different embedding methods to map the graph to a suitable latent space for different applications, where suitable topology patterns of graph are preserved. Finally, we empirically validate and analyze Geom-GCN on a wide range of open datasets of graphs, and Geom-GCN achieved the state-of-the-art . In summary, the contribution of this paper is three-fold: i) We propose a novel geometric aggregation scheme for graph neural network, which operates in both graph and latent space, to overcome the aforementioned two weaknesses; ii) We present an implementation of the scheme, Geom-GCN, for transductive learning in graph; iii) We validate and analyze Geom-GCN via extensive comparisons with state-of-the-art methods on several challenging benchmarks. In this section, we start by presenting the geometric aggregation scheme, and then outline its advantages and limitations compared to existing works. As shown in Fig. 1, the aggregation scheme consists of three modules, node embedding (panel A1 and A2), structural neighborhood (panel B1 and B2), and bi-level aggregation (panel C). We will elaborate on them in the following. Figure 1: An illustration of the geometric aggregation scheme. A1-A2 The original graph is mapped to a latent continuous space. B1-B2 The structural neighborhood. All adjacent nodes lie in a small region around a center node in B1 for visualization. In B2, the neighborhood in the graph contains all adjacent nodes in graph; the neighborhood in the latent space contains the nodes within the dashed circle whose radius is ρ. The relational operator τ is illustrated by a colorful 3 × 3 grid where each unit is corresponding to a geometric relationship to the red target node. C Bi-level aggregation on the structural neighborhood. Dashed and solid arrows denote the low-level and high-level aggregation, respectively. Blue and green arrows denote the aggregation on the neighborhood in the graph and the latent space, respectively. A. Node embedding. This is a fundamental module which maps the nodes in a graph to a latent continuous space. Let G = (V, E) be a graph, where each node v ∈ V has a feature vector x v and each edge e ∈ E connects two nodes. Let f: v → z v be a mapping function from a node in graph to a representation vector. Here, z v ∈ R d can also be considered as the position of node v in a latent continuous space, and d is the number of dimensions of the space. During the mapping, the structure and properties of graph are preserved and presented as the geometry in the latent space. For instance, hierarchical pattern in graph is presented as the distance to the original in embedding hyperbolic space . One can employ various embedding methods to infer the latent space (; . B. Structural neighborhood. 
Based on the graph and the latent space, we then build a structural neighborhood, N (v) = ({N g (v), N s (v)}, τ ), for the next aggregation. The structural neighborhood consists of a set of neighborhood {N g (v), N s (v)}, and a relational operator on neighborhoods τ. The neighborhood in the graph, N g (v) = {u|u ∈ V, (u, v) ∈ E}, is the set of adjacent nodes of v. The neighborhood in the latent space, N s (v) = {u|u ∈ V, d(z u, z v) < ρ}, is the set of nodes from which the distance to v is less than a pre-given parameter ρ. The distance function d(·, ·) depends on the particular metric in the space. Compared with N g (v), N s (v) may contain nodes which are far from v in the graph, but have a certain similarity with v, and hence are mapped together with v in the latent space though preserving the similarity. By aggregating on such neighborhood N s (v), the long-range dependencies in disassortative graphs can be captured. The relational operator τ is a function defined in the latent space. It inputs an ordered position pair (z v, z u) of nodes v and u, and outputs a discrete variable r which indicates the geometric relationship from v to u in the latent space. where R is the set of the geometric relationships. According to the particular latent space and application, r can be specified as an arbitrary geometric relationship of interest. A requirement on τ is that it should guarantee that each ordered position pair has only one geometric relationship. For example, τ is illustrated in Fig. 1B by a colorful 3 × 3 grid in a 2-dimensional Euclidean space, in which each unit is corresponding to a geometric relationship to node v. C. Bi-level aggregation. With the structural neighborhood N (v), we propose a novel bi-level aggregation scheme for graph neural network to update the hidden features of nodes. The bi-level aggregation consists of two aggregation functions and operates in a neural network layer. It can extract effectively structural information of nodes in neighborhoods as well as guarantee permutation invariance for graph. Let h In the low-level, the hidden features of nodes that are in the same neighborhood i and have the same geometric relationship r are aggregated to a virtual node via the aggregation function p. The features of the virtual node are e v,l+1 (i,r), and the virtual node is indexed by (i, r) which is corresponding to the combination of a neighborhood i and a relationship r. It is required to adopt a permutation-invariant function for p, such as an L p -norm (the choice of p = 1, 2, or ∞ in average, energy, or max pooling). The low level aggregation is illustrated by dashed arrows in Fig. 1C. In the high-level, the features of virtual nodes are further aggregated by function q. The inputs of function q contain both the features of virtual nodes e v,l+1 (i,r) and the identity of virtual nodes (i, r). That is, q can be a function that take an ordered object as input, e.g., concatenation, to distinguish the features of different virtual nodes, thereby extracting the structural information in the neighborhoods explicitly. The output of high-level aggregation is a vector m, are given by a non-linear transform, wherein W l is a learnable weight matrix on the l-th layer shared by all nodes, and σ(·) is a non-linear activation function, e.g., a ReLU. Permutation invariance is an essential requirement for aggregators in graph neural networks. Thus, we then prove that the proposed bi-level aggregation, Eq. 1, is able to guarantee invariance for any permutation of nodes. 
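A minimal sketch of constructing the structural neighbourhood {N_g(v), N_s(v)} from an adjacency matrix and node embeddings; the distance metric and the radius rho are inputs whose choice depends on the latent space (the implementation section later picks rho so that the two neighbourhood sizes match on average).

import numpy as np

def structural_neighborhood(adj, z, rho):
    """adj: (n, n) adjacency matrix; z: (n, d) node embeddings; rho: distance threshold."""
    n = adj.shape[0]
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)   # pairwise distances
    neigh_graph = [np.flatnonzero(adj[v]) for v in range(n)]        # N_g(v): adjacent nodes
    neigh_space = [                                                  # N_s(v): latent-space neighbours
        np.flatnonzero((dist[v] < rho) & (np.arange(n) != v)) for v in range(n)
    ]
    return neigh_graph, neigh_space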
We firstly give a definition for permutation-invariant mapping of graph. Definition 1. Let a bijective function ψ: V → V be a permutation for nodes, which renames v ∈ V as ψ(v) ∈ V. Let V and E be the node and edge set after a permutation ψ, respectively. A mapping of graph, φ(G), is permutation-invariant if, given any permutation ψ, we have Proof. Let G be an isomorphic graph of G after a permutation ψ, as defined in Definition 1. If φ 2 (G) is permutation-invariant, we have φ 2 (G) = φ 2 (G). Therefore, the entire composite function Theorem 1. Given a graph G = (V, E) and its structural neighborhood N (v), ∀v ∈ V, the bi-level aggregation, Eq. 1, is a permutation-invariant mapping of graph. Proof. The bi-level aggregation, Eq. 1, is a composite function, where the low-level aggregation is the input of the high-level aggregation. Thus, Eq. 1 is permutation-invariant if the low-level aggregation is permutation-invariant according to Lemma 1. We then prove that the low-level aggregation is permutation-invariant. The low-level aggregation consists of 2×|R| sub-aggregations, each of which is corresponding to the nodes in a neighborhood i and with a relationship r to v. Firstly, the input of each sub-aggregations is permutation-invariant because both i ∈ {g, s} and r ∈ R are determined by the given structural neighborhood N (v), ∀v ∈ V, which is constant for any permutation. Secondly, Eq. 1 adopts a permutation-invariant aggregation function p for the sub-aggregations. Thus the low-level aggregation is permutation-invariant. We now discuss how the proposed geometric aggregation scheme overcomes the two aforementioned weaknesses, i.e., how it effectively models the structural information and captures the long-range dependencies, in comparison to some closely related works. To overcome the first weakness of MPNNs, i.e., losing the structural information of nodes in neighborhoods, the proposed scheme explicitly models the structural information by exploiting the geometric relationship between nodes in latent space and then extracting the information effectively by using the bi-level aggregations. In contrast, several existing works attempt to learn some implicit structure-like information to distinguish different neighbors when aggregating features. For example, GAT , LGCL and GG-NN learn weights on "messages" from different neighbors by using attention mechanisms and node and/or edge attributes. CCN utilizes a covariance architecture to learn structure-aware representations. The major difference between these works and ours is that we offer an explicit and interpretable way to model the structural information of nodes in neighborhood, with the assistance of the geometry in a latent space. We note that our work is orthogonal with existing methods and thus can be readily incorporated to further improve their performance. In particular, we exploit geometric relationships from the aspect of graph topology, while other methods focus on that of feature representation-the two aspects are complementary. For the second weakness of MPNNs, i.e., lacking the ability to capture long-range dependencies, the proposed scheme models the long-range dependencies in disassortative graphs in two different ways. First of all, the distant (but similar) nodes in the graph can be mapped into a latent-spacebased neighborhood of the target node, and then their useful feature representations can be used for aggregations. 
This way depends on an appropriate embedding method, which is able to preserve the similarities between the distant nodes and the target node. On the other hand, the structural information enables the method to distinguish different nodes in a graph-based neighborhood (as mentioned above). The informative nodes may have some special geometric relationships to the target node (e.g., a particular angle or distance), whose relevant features hence will be passed to the target node with much higher weights, compared to the uninformative nodes. As a , the long-range dependencies are captured indirectly through the whole message propagation process in all graph-based neighborhoods. In literature, a recent method JK-Nets captures the long-range dependencies by skipping connections during feature aggregations. In literature, and construct several non-isomorphic example graphs that cannot be distinguished by the aggregators (e.g., mean and maximum) in existing MPNNs. We present a case study to illustrate how to distinguish the non-isomorphic example graphs once the structural neighborhood is applied. We take two non-isomorphic graphs in as an example, where each node has the same feature a and after any mapping f (a) remains the same across all nodes, as shown in Fig. 2 (left). Then the aggregator, e.g., mean or maximum, over f (a) remains f (a), and hence the final representations of the nodes are the same. That is, mean and maximum aggregators fail to distinguish the two different graphs. In contrast, the two graphs become distinguishable once we apply a structural neighborhood in aggregation. With the structural neighborhood, the nodes have different geometric relationships to the center node V 1 in the structural neighborhood, as shown in Fig. 2 (right). Taking aggregation for V 1 as an example, we can adopt different mapping function f r, r ∈ R to the neighbors with different geometric relationship r to V 1. Then, the aggregator in two graph have different inputs, {f 2 (a), f 8 (a)} in the left graph and {f 2 (a), f 7 (a), f 9 (a)} in the right graph. Finally, the aggregator (mean or maximum) will output different representations for the node V 1 in the two graphs, thereby distinguishing the topological difference between the two graphs. In this section, we present Geom-GCN, a specific implementation of the geometric aggregation scheme in graph convolutional networks, to perform transductive learning in graphs. To implement the general aggregation scheme, one needs to specify its three modules: node embedding, structural neighborhood, and bi-level aggregation function. Node embedding is the fundamental. As shown in our experiments, a common embedding method which only preserves the connection and distance pattern in a graph can already benefit the aggregation. For particular applications, one can specify embedding methods to create suitable latent spaces where particular topology patterns (e.g., hierarchy) are preserved. We employ three embedding methods, Isomap , Poincare embedding , and struc2vec , which in three Geom-GCN variants: Geom-GCN-I, Geom-GCN-P, and Geom-GCN-S. Isomap is a widely used isometry embedding method, by which distance patterns (lengths of shortest paths) are preserved explicitly in the latent space. Poincare embedding and struc2vec can create particular latent spaces that preserve hierarchies and local structures in a graph, respectively. We use an embedding space of dimension 2 for ease of explanation. 
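As a sketch of how such a 2-D latent space can be built from graph topology alone, the following applies classical multidimensional scaling to the shortest-path distance matrix, which is a distance-preserving, Isomap-style construction in the spirit of the embeddings described above. It is an illustrative approximation rather than the exact embedding pipeline used for Geom-GCN-I, and the handling of disconnected components is an added assumption.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap_like_embedding(adj, dim=2):
    """Classical-MDS embedding of the shortest-path distance matrix, preserving
    the distance pattern of the graph in a low-dimensional Euclidean space."""
    d = shortest_path(adj, method="D", unweighted=True)   # all-pairs shortest-path lengths
    d[np.isinf(d)] = d[np.isfinite(d)].max() + 1          # cap distances across components (assumption)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:dim]                    # keep the top-`dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```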
The structural neighborhood N (v) = ({N g (v), N s (v)}, τ ) of node v includes its neighborhoods in both the graph and latent space. The neighborhood-in-graph N g (v) consists of the set of v's adjacent nodes in the graph, and the neighborhood-in-latent-space N s (v) those nodes whose distances to v are less than a parameter ρ in the latent space. We determine ρ by increasing ρ from zero until the average cardinality of N s (v) equals to that of N g (v), ∀v ∈ V -i.e., when the average neighborhood sizes in the graph and latent spaces are the same. We use Euclidean distance in the Euclidean space. In the hyperbolic space, we approximate the geodesic distance between two nodes via their Euclidean distance in the local tangent plane. Here we simply implement the geometric operator τ as four relationships of the relative positions between two nodes in a 2-D Euclidean or hyperbolic space. Particularly, the relationship set R = {upper left, upper right, lower left, lower right}, and a τ (z v, z u) is given by Table 1. Note that, we adopt the rectangular coordinate system in the Euclidean space and angular coordinate in the hyperbolic space. By this way, the relationship "upper" indicates the node nearer to the origin and thus lie in a higher level in a hierarchical graph. One can design a more sophisticated operator τ, such as borrowing the structure of descriptors in manifold geometry , thereby preserving more and richer structural information in neighborhood. lower left lower right Finally, to implement the bi-level aggregation, we adopt the same summation of normalized hidden features as GCN as the aggregation function p in the low-level aggregation, where deg(v) is the degree of node v in graph, and δ(·, ·) is a Kronecker delta function that only allows the nodes with relationship r to v to be included. The features of all virtual nodes e v,l+1 (i,r) are further aggregated in the high-level aggregation. The aggregation function q is a concatenation || for all layers except the final layer, which uses mean for its aggregation function. Then, the overall bi-level aggregation of Geom-GCN is given by where we use ReLU as the non-linear activation function σ(·) and W l is the weight matrix to estimate by backpropagation. We validate Geom-GCN by comparing Geom-GCN's performance with the performance of Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT) . Two state-of-the-art graph neural networks, on transductive node-label classification tasks on a wide variety of open graph datasets. We utilize nine open graph datasets to validate the proposed Geom-GCN. An overview summary of characteristics of the datasets is given in Table 2. Citation networks. Cora, Citeseer, and Pubmed are standard citation network benchmark datasets . In these networks, nodes represent papers, and edges denote citations of one paper by another. Node features are the bag-of-words representation of papers, and node label is the academic topic of a paper. WebKB. WebKB 1 is a webpage dataset collected from computer science departments of various universities by Carnegie Mellon University. We use the three subdatasets of it, Cornell, Texas, and Wisconsin, where nodes represent web pages, and edges are hyperlinks between them. Node features are the bag-of-words representation of web pages. The web pages are manually classified into the five categories, student, project, course, staff, and faculty. Actor co-occurrence network. 
This dataset is the actor-only induced subgraph of the film-directoractor-writer network . Each nodes correspond to an actor, and the edge between two nodes denotes co-occurrence on the same Wikipedia page. Node features correspond to some keywords in the Wikipedia pages. We classify the nodes into five categories in term of words of actor's Wikipedia. Wikipedia network. Chameleon and squirrel are two page-page networks on specific topics in Wikipedia . In those datasets, nodes represent web pages and edges are mutual links between pages. And node features correspond to several informative nouns in the Wikipedia pages. We classify the nodes into five categories in term of the number of the average monthly traffic of the web page. As mentioned in Section 3, we construct three Geom-GCN variants by using three embedding methods, Isomap (Geom-GCN-I), Poincare (Geom-GCN-P), and struc2vec (Geom-GCN-S). We specify the dimension of embedding space as two, and use the relationship operator τ defined in Table 1, and apply mean and concatenation as the low-and high-level aggregation function, respectively. With the structural neighborhood, we perform a hyper-parameter search for all models on validation set. For fairness, the size of search space for each method is the same. The searching hyperparameters include number of hidden unit, initial learning rate, weight decay, and dropout. We fix the number of layer to 2 and use Adam optimizer for all models. We use ReLU as the activation function for Geom-GCN and GCN, and ELU for GAT. The final hyper-parameter setting is dropout of p = 0.5, initial learning rate of 0.05, patience of 100 epochs, weight decay of 5E-6 (WebKB datasets) or 5E-5 (the other all datasets). In GCN, the number of hidden unit is 16 (Cora), 16 (Citeseer), 64 (Pubmed), 32 (WebKB), 48 (Wikipedia), and 32 (Actor). In Geom-GCN, the number of hidden unit is 8 times as many as the number in GCN since Geom-GCN has 8 virtual nodes. For each attention head in GAT, the number of hidden unit is 8 (Citation networks), 32 (WebKB), 48 (Wikipedia), and 32 (Actor). GAT has 8 attention heads in layer one and 8 (Pubmed) or 1 (the all other datasets) attention heads in layer two. For all graph datasets, we randomly split nodes of each class into 60%, 20%, and 20% for training, validation and testing. With the hyper-parameter setting, we report the average performance of all models on the test sets over 10 random splits. Results are summarized in Table 3. The reported numbers denote the mean classification accuracy in percent. In general, Geom-GCN achieves state-of-the-art performance. The best performing method is highlighted. From the , Isomap embedding (Geom-GCN-I) which only preserves the connection and distance pattern in graph can already benefit the aggregation. We can also specify an embedding method to create a suitable latent space for a particular application (e.g., disassortative graph or hierarchical graph), by doing which a significant performance improvement is achieved (e.g., Geom-GCN-P). The proposed Geom-GCN aggregates "message" from two neighborhoods which are defined in graph and latent space respectively. In this section, we present an ablation study to evaluate the contribution from each neighborhood though constructing new Geom-GCN variants with only one neighborhood. For the variants with only neighborhood in graph, we use "g" as a suffix of their name (e.g., Geom-GCN-I-g), and use suffix "s" to denote the variants with only neighborhood in latent space (e.g., Geom-GCN-I-s). 
Here we set GCN as a baseline so that the contribution can be measured via the performance improvement comparing with GCN. The are summarized in Table 4, where positive improvement is denoted by an up arrow ↑ and negative improvement by a down arrow ↓. The best performing method is highlighted. We also design an index denoted by β to measure the homophily in a graph, Number of v's neighbors who have the same label as v Number of v's neighbors. A large β value implies that the homophily, in term of node label, is strong in a graph, i.e., similar nodes tend to connect together. From Table 4, one can see that assortative graphs (e.g., citation networks) have a much larger β than disassortative graphs (e.g., WebKB networks). Table 4 exhibits three interesting patterns: i) Neighborhoods in graph and latent space both benefit the aggregation in most cases; ii) Neighborhoods in latent space have larger contributions in disassortative graphs (with a small β) than assortative ones, which implies relevant information from disconnected nodes is captured effectively by the neighborhoods in latent space; iii) To our surprise, several variants with only one neighborhood (in Table 4) achieve better performances than the variants with two neighborhoods (in Tabel 3). We think the reason is that Geom-GCN with two neighborhoods aggregate more irrelevant "messages" than Geom-GCN with only one neighborhood, and the irrelevant "messages" adversely affect the performance. Thus, we believe an attention mechanism can alleviate this issue-which we will study as future work. The structural neighborhood in Geom-GCN is very flexible, where one can combine arbitrary embedding space. To study which combination of embedding spaces is desirable, we construct new Geom-GCN variants by adopting neighborhoods built by different embedding space. For the variants adopted Isomap and poincare embedding space to build neighborhood in graph and in latent space respectively, we use Geom-GCN-IP to denote it. The naming rule is the same for other combinations. The performances of all variants are summarized in Table 5. One can observe that several combinations achieve better performance than Geom-GCN with neighborhoods built by only one embedding space (in Table 3); and there are also many combinations that have bad performance. Thus, we think it's significant future work to design an end-to-end framework that can automatically determine the right embedding spaces for Geom-GCN. for each virtual node (i.e., (i, r)), and 2|R| is the number of virtual nodes. Geom-GCN has 2|R| times complexity than GCN whose time complexity is O(n × m). We also compare the real running time (500 epochs) of GCN, GAT, and Geom-GCN on all datasets with the hyper-parameters described in Section 4.2. Results are shown in Fig. 3 (a). One can see that GCN is the fastest, and GAT and Geom-GCN are on the same level. An important future work is to develop accelerating technology so as to solve the scalability of Geom-GCN. (a) (b) Figure 3: (a) Running time comparison. GCN, GAT, and Geom-GCN both run 500 epochs, and y axis is the log seconds. GCN is the fastest, and GAT and Geom-GCN are on the same level. (b) A visualization for the feature representations of Cora obtained from Geom-GCN-P in a 2-D space. Node colors denote node labels. There are two obvious patterns, nodes with the same label exhibit a spatial clustering and all nodes distribute radially. The radial pattern indicates graph's hierarchy learned by Poincare embedding. 
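For completeness, the homophily index β used in the ablation above can be computed directly from the adjacency matrix and node labels. The sketch below assumes the graph-level value is the average of the per-node ratios, since the exact averaging is not spelled out in the text, and it skips isolated nodes.

```python
import numpy as np

def homophily_beta(adj, labels):
    """Average fraction of a node's graph neighbors sharing its label (the beta index)."""
    n = adj.shape[0]
    betas = []
    for v in range(n):
        neigh = np.nonzero(adj[v])[0]
        if len(neigh) == 0:
            continue                                  # skip isolated nodes (assumption)
        betas.append(np.mean(labels[neigh] == labels[v]))
    return float(np.mean(betas))
```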
To study what patterns are learned in the feature representations of node by Geom-GCN, we visualize the feature representations extracted by the last layer of Geom-GCN-P on Cora dataset by mapping it into a 2-D space though t-SNE , as shown in Fig. 3 (b). In the figure, the nodes with the same label exhibit spatial clustering, which could shows the discriminative power of Geom-GCN. That all nodes distribute radially in the figure indicates the proposed model learn graph's hierarchy by Poincare embedding. We tackle the two major weaknesses of existing message-passing neural networks over graphslosses of discriminative structures and long-range dependencies. As our key insight, we bridge a discrete graph to a continuous geometric space via graph embedding. That is, we exploit the principle of convolution: spatial aggregation over a meaningful space-and our approach thus extracts or "recovers" the lost information (discriminative structures and long-range dependencies) in an embedding space from a graph. We proposed a general geometric aggregation scheme and instantiated it with several specific Geom-GCN implementations, and our experiments validated clear advantages over the state-of-the-art. As future work, we will explore techniques for choosing a right embedding method-depending not only on input graphs but also on target applications, such as epidemic dynamic prediction on social contact network .
For graph neural networks, the aggregation on a graph can benefit from a continuous space underlying the graph.
We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We introduce a novel iterative optimization scheme, where we train an RNN on a dataset of K best programs from a priority queue of the generated programs so far. Then, we synthesize new programs and add them to the priority queue by sampling from the RNN. We benchmark our algorithm called priority queue training (PQT) against genetic algorithm and reinforcement learning baselines on a simple but expressive Turing complete programming language called BF. Our experimental show that our deceptively simple PQT algorithm significantly outperforms the baselines. By adding a program length penalty to the reward function, we are able to synthesize short, human readable programs. Automatic program synthesis is an important task with many potential applications. Traditional approaches (e.g., BID29 ; BID1) typically do not make use of machine learning and therefore require domain specific knowledge about the programming languages and hand-crafted heuristics to speed up the underlying combinatorial search. To create more generic programming tools without much domain specific knowledge, there has been a surge of recent interest in developing neural models that facilitate some form of memory access and symbolic reasoning (e.g., BID37 ; BID33 ; BID21 ; BID49 ;). Despite several appealing contributions, none of these approaches is able to synthesize source code in an expressive programming language. More recently, there have been several successful attempts at using neural networks to explicitly induce programs from input-output examples BID8 BID2 BID35 and even from unstructured text BID35, but often using restrictive programming syntax and requiring supervisory signal in the form of ground-truth programs or correct outputs. By contrast, we advocate the use of an expressive programming language called BF 1, which has a simple syntax, but is Turing complete. Moreover, we aim to synthesize programs under the reinforcement learning (RL) paradigm, where only a solution checker is required to compute a reward signal. Furthermore, one can include a notion of code length penalty or execution speed into the reward signal to search for short and efficient programs. Hence, the problem of program synthesis based on reward is more flexible than other formulations in which the desired programs or correct outputs are required during training. To address program synthesis based on a reward signal, we study two different approaches. The first approach is a policy gradient (PG) algorithm BID44, where we train a recurrent neural network (RNN) to generate programs one token at a time. Then, the program is executed and scored, and a reward feedback is sent back to the RNN to update its parameters such that over time better programs are produced. The second approach is a deceptively simple optimization algorithm called priority queue training (PQT). We keep a priority queue of K best programs seen during training and train an RNN with a log-likelihood objective on the top K programs in the queue. We then sample new programs from the RNN, update the queue, and iterate. We also compare against a genetic algorithm (GA) baseline which has been shown to generate BF programs BID3. Surprisingly, we find that the PQT approach significantly outperforms the GA and PG methods. We assess the effectiveness of our method on the BF programming language. 
The BF language is Turing complete, while comprising only 8 operations. The minimalist syntax of the BF language makes it easier to generate a syntactically correct program, as opposed to more higher level languages. We consider various string manipulation, numerical, and algorithmic tasks. Our demonstrate that all of the search algorithms we consider are capable of finding correct programs for most of the tasks, and that our method is the most reliable in that it finds solutions on most random seeds and most tasks. The key contributions of the paper include,• We propose a learning framework for program synthesis where only a reward function is required during training (the ground-truth programs or correct outputs are not needed).Further, we advocate to use a simple and expressive programming language, BF, as a benchmark environment for program synthesis (see also BID3).• We propose an effective search algorithm using a priority queue and an RNN.• We propose an experimental methodology to compare program synthesis methods including genetic algorithm and policy gradient. Our methodology measures the success rates of each synthesis method on average and provides a standard way to tune the hyper-parameters. With this methodology, we find that a recurrent network trained with priority queue training outperforms the baselines. Our method shares the same goal with traditional techniques in program synthesis and inductive programming BID40 BID7 BID29 BID1. These techniques have found many important applications in practice, ranging from education to programming assistance BID14. In machine learning, probabilistic program induction has been used successfully in many settings, such as learning to solve simple Q&A BID26, and learning with very few examples BID24.There has been a surge of recent interest in using neural networks to induce and execute programs either implicitly or explicitly BID12 BID48 BID20 BID21 BID23 BID33 BID37 BID0 BID2 BID6 BID11 BID22 BID34 BID39 BID49 BID27 BID5 BID9 BID10 BID17 BID15 BID30 BID35 BID36 BID42 BID47. For example, there have been promising on the task of binary search BID31, sorting an array of numbers BID37 BID9, solving simple Q&A from tables BID33 BID5, visual Q&A BID0 BID17, filling missing values in tables BID10, and querying tables. There are several key components that highlight our problem formulation in the context of previous work. First, our approach uses a Turing complete language instead of a potentially restricted domain-specific language. Second, it does not need existing programs or even the stack-trace of existing programs. Third, it only assumes the presence of a verifier that scores the outputs of hypothesis programs, but does not need access to correct outputs. This is important for domains where finding the correct outputs is hard but scoring the outputs is easy. Finally, our formulation does not need to modify the internal workings of the programming language to obtain a differentiable error signal. The PG approach adopted in this paper for program synthesis is closely related to neural architecture search BID50 and neural combinatorial optimization BID4, where variants of PG are used to train an RNN and a pointer network BID43 to perform combinatorial search. BID31 applies PG to program synthesis, but they differ from us in that they train RNNs that implicitly model a program by consuming inputs and emitting machine instructions as opposed to explicit programs. Our PG baseline resembles such previous techniques. 
The PQT algorithm presented here is partly inspired by, where they use a priority queue of top-K programs to augment PG with off-policy training. PQT also bears resemblance to the cross-entropy method (CEM), a reinforcement learning technique which has been used to play games such as Tetris BID41.Our use of BF programming language enables a comparison between our technique and a concurrent work by BID3 on the use of genetic algorithms for program synthesis in the BF language. However, our method for program synthesis has important benefits over BID3 including better performance and the potential for transfer learning, which is possible with neural networks BID19. We also make the observation that PQT alone is stable and effective, without needing to use PG. We implement a generative model of programs as an RNN that emits a strings of BF language one character at a time. Figure 2 depicts the RNN model, which enables sampling a sequence of BF characters in an autoregressive fashion, where one feeds the previous prediction as an input to the next time step. The input to the first time step is a special START symbol. The RNN stops when it generates a special EOS symbol, indicating the end of sequence, or if the length of the program exceeds a pre-specified maximum length. The predictions at each timestep are sampled from a multinomial distribution (a softmax layer with shared weights across timesteps). The joint probability of the program sequence is the product of the probabilities of all of the tokens. We study two training algorithms, which are also compatible and can be combined. These are policy gradient, and priority queue training. We treat the RNN program synthesizer as a policy π(a 1:T ; θ) parametrized by θ, where a 1:T ≡ (a 1, . . ., a T) denotes a sequence of T actions, each of which represents a symbol in the BF langauge (and optionally an EOS symbol). The policy is factored using the chain rule as π(a 1:T ; θ) = t π(a t | a 1:t−1 ; θ) where each term in the product is given by the RNN as depicted in Figure 2. Typically an RL agent receives a reward after each action. However in our setup, we cannot score the code until completion of the program (either by emitting the EOS symbol or by hitting the maximum episode length), and accordingly, only a terminal reward of r(a 1:T) is provided. The goal is to learn a policy that assigns a high probability to plausible programs. We use the well known policy gradient approach: the REINFORCE algorithm BID44.As suggested by REINFORCE, we optimize the parameters θ to maximize the following expected reward objective, DISPLAYFORM0 We perform stochastic gradient descent in O ER to iteratively refine θ. To estimate the gradient of, we draw N Monte Carlo samples from π θ denoted {a DISPLAYFORM1 and compute the gradient of the policy as, DISPLAYFORM2 where N is the number of episodes sampled from the policy in one mini-batch, and T i denotes the number of actions in the i th episode. The gradient in Equation FORMULA2 is an unbiased estimate of the policy's true gradient, but it suffers from high variance in practice. The term b, known as a baseline, subtracted from the rewards serves as a control variate to reduce the variance of this estimator BID44 . We use an exponential moving average over rewards as the baseline. Our key technical contribution in the paper involves training an RNN with a continually updated buffer of top-K best programs (i.e., a priority queue of maximum size K). 
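Before turning to the details of priority queue training, the policy-gradient update above can be summarized by a surrogate loss whose gradient matches the REINFORCE estimator. This is a minimal PyTorch sketch with illustrative function names; the baseline decay of 0.99 matches the value reported in the experimental setup.

```python
import torch

def reinforce_loss(log_probs, rewards, baseline):
    """Surrogate loss whose gradient matches the REINFORCE estimate of the expected-reward gradient.

    log_probs : (N,) tensor, each entry is sum_t log pi(a_t | a_1:t-1; theta) for one sampled program
    rewards   : (N,) tensor of terminal rewards r(a_1:T)
    baseline  : scalar exponential moving average of past rewards, treated as a constant
    """
    advantages = (rewards - baseline).detach()     # control variate; no gradient flows through it
    return -(advantages * log_probs).mean()        # minimizing this ascends the expected reward

def update_baseline(baseline, rewards, decay=0.99):
    """Exponential-moving-average baseline update; the decay of 0.99 matches the reported setting."""
    return decay * baseline + (1.0 - decay) * rewards.mean().item()
```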
The queue is initialized empty, and after each gradient update, it is provided with new sampled programs, keeping only the programs that fall within the K highest rewarded programs. We use supervised learning to maximize the probability of the programs in the top-K buffer denoted {ã DISPLAYFORM0 under the current policy. In this way, the RNN and priority queue bootstrap off each other, with the RNN finding better programs through exploration, and the priority queue providing better training targets. The objective for PQT is simply log-likelihood in the form, DISPLAYFORM1 When PG and PQT objectives are combined, their respective gradients can simply be added together to arrive at the joint gradient. In the joint setting the priority queue component has a stabilizing affect and helps reduce catastrophic forgetting in the policy. This approach bears some similarity to the RL approaches adopted by Google's Neural Machine Translation and Neural Symbolic Machines .Entropy exploration. We also regularize the policy by adding an entropy term which aims to increase the uncertainty of the model and encourage exploration. This prevents the policy from assigning too much probability mass to any particular sequence, thereby encouraging more diversity among sampled programs. This regularizer has been prescribed initially by BID45 and more recently adopted by BID28 ; BID32 . We use the entropy regularizer for both PG and PQT.The most general form of the objective can be expressed as the sum of all of these components into one quantity. We assign different scalar weights to the PG, PQT, and entropy terms, and the gradient of the overall objective is expressed as, DISPLAYFORM2 where entropy DISPLAYFORM3 The optimization goal is the maximize. Any specific term can be remove by settings its corresponding λ to 0. When we train vanilla PG, λ TOPK = 0, and when we train with PQT, λ ER = 0.Distributed training. To speed up the training of the program synthesizers, we also make use of an asynchronous distributed setup, where a parameter server stores the shared model parameters for a number of synthesizer replicas. Each synthesizer replica samples a batch of episodes from its local copy of the policy and computes the gradients. Then, the gradients are sent to the parameter server, which asynchronously updates the shared parameters BID28. The replicas periodically update their local policy with up-to-date parameters from the parameter server. Also, to make the implementation of distributed PQT simple, each replica has its own priority queue of size K. We use 32 replicas for all of the experiments. We assess the effectiveness of our program synthesis setup by trying to discover BF programs. In what follows, we first describe the BF programming language. Then we discuss the tasks that were considered, and we present our experimental protocol and . BF is a minimalist Turing complete language consisting of only 8 low-level operations, each represented by a char from +-<>[].,. See TAB0 for operation descriptions. Operations are executed from left to right and square brackets enable looping, which is the only control flow available in the language. BF programs operate on a memory tape and internally manipulate a data pointer. The data pointer is unbounded in the positive direction but is not permitted to be negative. Memory values are not accessed by address, but relatively by shifting the data pointer left or right, akin to a Turing machine. 
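The following is a minimal single-replica sketch of one PQT update (λ_ER = 0), combining the queue refresh, the top-K log-likelihood objective, and the entropy bonus. The policy interface is hypothetical: sample() is assumed to return program strings, their log-probabilities, and the mean policy entropy, and log_prob() is assumed to return per-program log-likelihoods. The hyperparameter values mirror those reported in the experiments (K = 10, batch size 64, λ_TOPK = 200, entropy weight 0.01).

```python
import heapq
import torch

def pqt_step(policy, optimizer, queue, reward_fn, batch_size=64, k=10,
             lambda_topk=200.0, lambda_ent=0.01):
    """One priority-queue-training update for a single replica (lambda_ER = 0).

    policy    : hypothetical autoregressive model exposing sample() and log_prob()
    queue     : Python list used as a min-heap of (reward, program) pairs, capped at size k
    reward_fn : maps a BF program string to its terminal reward
    """
    programs, _, entropy = policy.sample(batch_size)      # newly sampled programs and policy entropy
    for p in programs:                                    # refresh the top-k priority queue
        heapq.heappush(queue, (reward_fn(p), p))
        if len(queue) > k:
            heapq.heappop(queue)                          # drop the lowest-reward entry
    top_k_programs = [p for _, p in queue]
    log_lik = policy.log_prob(top_k_programs).mean()      # log-likelihood of the current top-k
    loss = -(lambda_topk * log_lik + lambda_ent * entropy)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Adding the policy-gradient term back in (λ_ER > 0) amounts to adding the surrogate loss from the previous sketch to `loss` before the backward pass, so the two gradients are summed as in the joint objective.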
Likewise, to change a value in memory, only increment and decrement operations are available (overflow and underflow is allowed). We include an execution demo in Appendix 6.1 to further aid understanding of the language. BF programs are able to read from an input stream and write to an output stream (one int at a time). This is how inputs and outputs are passed into and out of a BF program in our tasks. In our implementation, will write zeros once the end of the input stream is reached, and many synthesized programs make use of this feature to zero out memory. Memory values are integers, typically 1 byte and so they are often interpreted as chars for string tasks. In our BF implementation the interpreter is given a task-dependent base B so that each int is in Z B, i.e., the set of integers modulo base B. By default B = 256, unless otherwise specified. In our main experiments programs are fixed length, and most characters end up being useless no-ops. There are many ways to make no-ops in BF, and so it is very easy to pad out programs. For example, the move left operation < when the data pointer is at the leftmost position is a no-op. Unmatched braces when strict mode is off are also no-ops. Putting opposite operations together, like +-or <>, work as no-op pairs. Notice that there is only one type of syntax error in BF: unmatched braces. BF's extremely simple syntax is another advantage for program synthesis. For languages with more complex syntax, synthesizing at the character level would be very difficult due to the fact that most programs will not run or compile, thus the reward landscape is even sparser. We gave our BF interpreter a flag which turns off "strict mode" so that unmatched braces are just ignored. We found that turning off strict mode makes synthesis easier. To evaluate a given program under a task, the code is executed on one or many test cases sampled from the task (separate execution for each test input). Each test case is scored based on the program's output and the scores are summed to compute the final reward for the program. More formally, let T be the task we want to synthesize code for, and let P be a candidate program. We treat P as a function of input I so that output Q = P (I). Inputs and outputs are lists of integers (in base B). In principle for any NP task a polynomial time reward function can be computed. A trivial reward function would assign 1.0 to correct outputs and 0.0 to everything else. However, such 0/1 reward functions are extremely sparse and non-smooth (i.e., a code string with reward 1.0 may be completely surrounded by strings of reward 0.0 in edit-distance space). In practice a somewhat graded reward function must be used in order for learning to take place. The following formulation presents a possible way to compute rewards for these tasks or other similar tasks, but our method does not depend on any particular form of the reward (as long as it is not too sparse). Because all of the tasks we chose in our experiments are in the polynomial time class, the easiest way for us to score program outputs is just to directly compare to the correct output Q * for a given input I. We use a continuous comparison metric between Q and Q * to reduce reward sparsity. To evaluate a given program P under task T, a set of test cases is sampled {(I 1, Q * 1),..., (I n, Q * n)} from T. We leave open the option for T to produce static test cases, i.e., the same test cases are provided each time, or stochastic test cases drawn from a distribution each time. 
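Returning to the language itself, the execution semantics described above can be captured by a short reference interpreter. The sketch below assumes brace-matched code (the non-strict handling of unmatched braces is omitted), reads return zero once the input stream is exhausted, cell values wrap modulo the base, the data pointer is clamped at zero, and execution is cut off after a step limit.

```python
def run_bf(code, inputs, base=256, max_steps=5000):
    """Minimal BF interpreter: tape of ints mod `base`, clamped data pointer,
    zero-filled reads after the input ends, and a hard step limit."""
    jumps, stack = {}, []
    for i, c in enumerate(code):                 # pre-compute matching bracket positions
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, in_ptr, out, steps = [0], 0, 0, 0, [], 0
    while pc < len(code) and steps < max_steps:
        c = code[pc]
        if c == '>':
            ptr += 1
            if ptr == len(tape):
                tape.append(0)
        elif c == '<':
            ptr = max(ptr - 1, 0)                # the data pointer may not go negative
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % base
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % base
        elif c == '.':
            out.append(tape[ptr])
        elif c == ',':
            tape[ptr] = inputs[in_ptr] if in_ptr < len(inputs) else 0
            in_ptr += 1
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
        steps += 1
    return out

# A hand-written reverse program (works for nonzero inputs): read everything, then print backwards.
print(run_bf('>,[>,]<[.<]', [1, 2, 3]))   # -> [3, 2, 1]
```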
In practice, we find that static test cases are helpful for learning most tasks (see Section 4.3).For simplicity we define a standardized scoring function S(Q, Q *) for all our tasks and test cases (see Appendix 6.2 for details). Total reward for the program is DISPLAYFORM0 where ζ is a constant scaling factor, which can differ across tasks. We use ζ to keep rewards approximately in the range [−1, 1]. R tot is given to the agent as terminal reward. When generating variable length programs we give preference to shorter programs by adding a program length bonus to the total reward: R tot + 1 − |P | /MaxProgramLength. If the BF interpreter is running in strict mode and there is a syntax error (i.e., unmatched braces) we assign the program a small negative reward. We also assign negative reward to programs which exceed 5000 execution steps to prevent infinite loops. Note that R tot is the terminal reward for the agent. We assess the effectiveness of our priority queue training method against the following baselines:• Genetic algorithm (GA) implemented based on BID3.2 See Appendix 6.3 for more details regarding our implementation.• Policy gradient (PG) as described in Section where λ TOPK = 0.• Policy gradient (PG) combined with priority queue training (PQT), where λ TOPK > 0.We are operating under the RL paradigm, and so there is no test/evaluation phase. We use a set of benchmark coding tasks (listed in Appendix 6.4) to compare these methods, where different models are trained on each task. Simply the best program found during training is used as the final program for that task. In order to tune each method, we propose to carry out experiments in two phases. First, the hyperparameters will be tuned on a subset of the tasks (reverse and remove-char). Next, the best hyperparameters will be fixed, and each synthesis method will be trained on all tasks. To compare performance between the synthesis methods, we measure the success rate of each method after a predetermined number of executed programs. More details will be described in Section 4.4.We tune the hyperparameters of the synthesis methods on reverse and remove-char tasks. We use grid search to find the best hyperparameters in a given set of possible values. The tuning space is as follows. For PG, learning rate ∈ {10 −5, 10 −4, 10 −3} and entropy regularizer ∈ {0.005, 0.01, 0.05, 0.10}. For PQT, learning rate and entropy regularizer are searched in the same spaces, and we also allow the entropy regularizer to be 0; PQT loss multiplier (λ TOPK from Equation 4) is searched in {1.0, 10.0, 50.0, 200.0}. For GA, population size ∈ {10, 25, 50, 100, 500}, crossover rate ∈ {0.2, 0.5, 0.7, 0.9, 0.95} and mutation rate ∈ {0.01, 0.03, 0.05, 0.1, 0.15}.For PQT we set K = 10 (maximum size of the priority queue) in all experiments. In early experiments we found 10 is a nice compromise between a very small queue which is too easy for the RNN to memorize, and a large queue which can dilute the training pool with bad programs. • PG+PQT: entropy regularizer = 0.01, learning rate = 10 −4, λ TOPK = 50.0.• PQT: entropy regularizer = 0.01, learning rate = 10 −4, λ TOPK = 200.0.• GA: population size = 100, crossover rate = 0.95, mutation rate = 0.15.Other model and environment choices. For PG and PQT methods we use the following architecture: a 2-layer LSTM RNN BID16 with 35 units in each layer. We jointly train embeddings for the program symbols of size 10. 
The outputs of the top LSTM layer are passed through a linear layer with 8 or 9 outputs (8 BF ops plus an optional EOS token) which are used as logits for the softmax policy. We train on minibatches of size 64, and use 32 asynchronous training replicas. Additional hyperparameter values: gradient norm clipping threshold is set at 50, parameter initialization factor 3 is set at 0.5, RMSProp is used as the optimizer, and decay for the exponential moving average baseline is set at 0.99.We explored a number of strategies for making test cases. For the reverse task we tried:1. One random test case of random length.2. Five random test cases of random length.3. Five random test cases of lengths 1 through 5. However, solutions were only found when we used static test cases (option 4).In the experiments below, all programs in each task are evaluated on the same test cases. The test inputs are randomly generated with a fixed seed before any training happens. By default each task has 16 test cases, with a few exceptions noted in Appendix 6.4. For the two tuning tasks we continue to use a small number of hand crafted test cases. A potential problem with using the test cases for code synthesis is that the synthesized code can overfit, i.e., the code can contain hard-coded solutions for test inputs. In the experiments below we also run synthesized code on a large set of held-out eval test cases. These eval test cases are also randomly generated with a fixed seed, and the total number of test cases (train and eval) for each task is 1000. Success rates on training test cases and all test cases are reported in TAB2. We do manual inspection of code solutions in Table 4 to identify overfitting programs, which are highlighted in red. In the following, we show success rates on tuning tasks and held-out tasks for all algorithms. Again, our metric to compare these methods is the success rate of each method at finding the correct program after a predetermined maximum number of executed programs. For all tasks, we ran training 25 times independently to estimate success rates. A training run is successful if a program is found which solves all the test cases for the task. Training is stopped when the maximum number of programs executed (NPE) is reached, and the run is considered a failure. For tuning, we use maximum NPE of 5M. For evaluation we use a maximum NPE of 20M.Our genetic algorithm is best suited for generating programs of fixed length, meaning all code strings considered are of some fixed length preset by the experimenter. In all the experiments presented below, the EOS token is disabled for all algorithms, so that there are 8 possible code characters. Program length is 100 characters for all experiments, and the search space size is 8 100 ≈ 10 90.In TAB1, we report the success rates from tuning of the genetic algorithm, policy gradient, priority queue training, and policy gradient with priority queue training. The for these tuning tasks are different from the same tasks in TAB2, due to the smaller NPE used in tuning, and the fact that we tune on a different set of hand-selected test cases. There is also sensitivity of each synthesis method to initialization, sampling noise, and asynchronous weight updates which accounts for differences between multiple runs of the same tasks. In TAB2, we report the success rates of the same algorithms plus uniform random search. We include success rates for training and eval test cases. 
We also do an aggregate comparison between columns by taking the average at the bottom. As can be seen from the table, PQT is clearly better than PG and GA according to training and eval averages. PG+PQT is on par with PQT alone. The eval success rates are lower than the training success rates in many cases due to overfitting programs. DISPLAYFORM0 2.5 / 1.0 8.6 / 4.3 3.3 / 1.5 13.0 / 5.7 12.9 / 5.4 In this section we have our method generate shortest possible code string. Code shortening, sometimes called code golf, is a common competitive exercise among human BF programmers. We use PG+PQT to generate programs of variable length (RNN must output EOS token) with a length bonus in the reward to encourage code simplification (see 4.2). We train each task just once, but with a much larger maximum NPE of 500M. We do not stop training early, so that the agent can iterate on known solutions. We find that alternative hyperparameters work better for code shortening, with λ ENT = 0.05 and λ TOPK = 0.5.In Table 4 we show simplified programs for coding tasks where a solution was found. Table 4: Synthesized BF programs for solved tasks. All programs were discovered by the agent as-is, and no code characters were removed or altered in anyway. Notice that some programs overfit their tasks. For example, cascade is correct only for up to length 6 (provided test cases are no longer than this). We highlighted in red all the tasks with code solutions that overfit, which we determined by manual inspection. In this paper, we considered the task of learning to synthesize programs for problems where a reward function is defined. We use an RNN trained with our priority queue training method. We experimented with BF, a simple Turing-complete programming language, and compared our method against a genetic algorithm baseline. Our experimental showed that our method is more stable than vanilla policy gradient or a genetic algorithm. That PQT works as a standalone search algorithm is surprising, and future work is needed in order to better explain it. We can speculate that it is implementing a simple hill climbing algorithm where the buffer stores the best known samples, thereby saving progress, while the RNN functions as an exploration mechanism. Even more surprising is that this algorithm is able to bootstrap itself to a solution starting from an empty buffer and a randomly initialized RNN. We believe that our coding environment complements the PQT algorithm, since finding code with non-zero reward through purely random search is feasible. 6.1 BF EXECUTION DEMO Figure 2: In the following figure we step through a BF program that reverses a given list. The target list is loaded into the input buffer, and the programs output will be written to the output buffer. Each row depicts the state of the program and memory before executing that step. Purple indicates that some action will be taken when the current step is executed. We skip some steps which are easy to infer. Vertical ellipses indicate the continuation of a loop until its completion. Our implementation of S(Q, Q *) is computed from a non-symmetric distance function d(Q, Q *) which is an extension of Hamming distance H(list 1, list 2) for non-equal length lists (note that arguments to H must be the same length). Hamming distance provides additional information, rather than just saying values in Q and Q * are equal or not equal. 
Further since BF operates on values only through increment and decrement operations this notation of distance is very useful for conveying to the agent information about how many inc or dec ops to use in various places. This serves to make the reward space smoother and less sparse. We want a distance of 0 to in the maximum score and a large distance to in a small or negative score. Thus we define: DISPLAYFORM0 where ∅ is the empty list, l = |Q|, l * = |Q * |, and B is the integer base (number of possible ints at each position). We define our distance function: DISPLAYFORM1 Essentially d(Q, Q *) adds maximum char distance to the Hamming distance for each missing position or each extra position, depending on whether Q is shorter or longer than Q * respectively. S(Q, Q *) subtracts the distance d(Q, Q *) from the distance of an empty list to Q *, which is equal to B |Q * |. A genetic algorithm (GA) simulates sexual reproduction in a population of genomes in order to optimize a fitness function. To synthesize BF programs, we let a genome be one code string. The GA is initialized with a population of randomly chosen code strings. For each iteration a new population of children is sampled from the existing population. Each new population is called a generation. GA samples a new population from the current population with 3 steps: 1) parent selection, 2) mating, and 3) mutation. Many algorithms have been developed for each of these steps which have varying effects on the GA as a whole. We describe our algorithm for each step:1. Parent selection: Randomly sample a set of parents from the population. We use roulette selection (a.k.a. fitness proportionate selection) where parents are chosen with probability proportional to their fitness.2. Mating: Choose pairs of parents and perform an operation ing in two children to replace the parents. We use single point crossover where a position in the genome (code string) for the first parent is sampled uniformly and the two parents' genetic material is swapped after that point to create two new children. Crossover is performed with probability p crossover.3. Mutation: With some probability make small random modifications to each child. We use the primaryobjects mutation function. This function iterates through each code token and with probability p mutate chooses among 4 possible mutation operations to apply: insert random token, replace with random token, delete token, shift and rotate either left or right. When inserting the last token is removed, and when deleting a random token is added at the end so that none of the mutation operations change the number of tokens in the list. We tune p mutate, p crossover, and the population size (see Experimental Setup). For each task, the genome size is set to the maximum code length, since GA operates on fixed length genomes. For input of length N, return the (n − 1) th input, then the 0 th, then (n − 2), 1, (n − 3), 2,... unriffle Inverse of the riffle function. middle-char For input of length N, return the value at position f loor(N/2). remove-last Remove the last character from the list. remove-last-two Remove the last two characters from the list. echo-alternating Return every even indexed value, followed by every odd indexed value. echo-half Return the first half of the input. lengthReturn the length of the list. echo-nth-seq For M input sequences each seperated by a 0, return the n th sequence, where n is given as the first value in the input. 
substringReturn a sub-range of the input list, given a starting index i and length l. divide-2 Return input value divided by two (integer division). dedup Return input list, in which all duplicate adjacent values removed.
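As a further illustration of the scoring described in the appendix above, the sketch below spells out one plausible reading of the elided equations: the per-position distance is taken to be the circular increment/decrement distance (which matches the motivation given for extending Hamming distance), every missing or extra position contributes the maximum char distance B, and S(Q, Q*) subtracts d(Q, Q*) from B·|Q*|. The exact published formulas may differ in detail.

```python
def char_dist(a, b, base):
    """Increment/decrement ops needed to turn value a into value b (wraparound allowed)."""
    d = abs(a - b) % base
    return min(d, base - d)

def list_dist(q, q_star, base):
    """Per-position char distance over the overlapping prefix, plus the maximum
    char distance `base` for every missing or extra position."""
    d = sum(char_dist(a, b, base) for a, b in zip(q, q_star))
    return d + base * abs(len(q) - len(q_star))

def score(q, q_star, base=256):
    """S(Q, Q*): distance from the empty list to Q* minus the distance from Q to Q*."""
    return base * len(q_star) - list_dist(q, q_star, base)

print(score([3, 2, 1], [3, 2, 1]))   # 768: a perfect output gets the maximum score
print(score([], [3, 2, 1]))          # 0: an empty output scores zero
print(score([3, 2, 0], [3, 2, 1]))   # 767: one value off by a single decrement
```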
We use a simple search algorithm involving an RNN and priority queue to find solutions to coding tasks.
We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform. Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost. Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution. The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed. Convolutional neural networks (CNNs) BID15 have been successfully used in many machine learning problems, such as image classification BID10 and speech recognition BID11, where there is an underlying Euclidean structure. The success of CNNs lies in their ability to leverage the statistical properties of Euclidean data, e.g., translation invariance. However, in many research areas, data are naturally located in a non-Euclidean space, with graph or network being one typical case. The non-Euclidean nature of graph is the main obstacle or challenge when we attempt to generalize CNNs to graph. For example, convolution is not well defined in graph, due to that the size of neighborhood for each node varies dramatically.Existing methods attempting to generalize CNNs to graph data fall into two categories, spatial methods and spectral methods, according to the way that convolution is defined. Spatial methods define convolution directly on the vertex domain, following the practice of the conventional CNN. For each vertex, convolution is defined as a weighted average function over all vertices located in its neighborhood, with the weighting function characterizing the influence exerting to the target vertex by its neighbors. The main challenge is to define a convolution operator that can handle neighborhood with different sizes and maintain the weight sharing property of CNN. Although spatial methods gain some initial success and offer us a flexible framework to generalize CNNs to graph, it is still elusive to determine appropriate neighborhood. Spectral methods define convolution via graph Fourier transform and convolution theorem. Spectral methods leverage graph Fourier transform to convert signals defined in vertex domain into spectral domain, e.g., the space spanned by the eigenvectors of the graph Laplacian matrix, and then filter is defined in spectral domain, maintaining the weight sharing property of CNN. As the pioneering work of spectral methods, spectral CNN BID3 exploited graph data with the graph Fourier transform to implement convolution operator using convolution theorem. Some subsequent works make spectral methods spectrum-free BID4 BID14 BID12, achieving locality in spatial domain and avoiding high computational cost of the eigendecomposition of Laplacian matrix. In this paper, we present graph wavelet neural network to implement efficient convolution on graph data. We take graph wavelets instead of the eigenvectors of graph Laplacian as a set of bases, and define the convolution operator via wavelet transform and convolution theorem. 
Graph wavelet neural network distinguishes itself from spectral CNN by its three desirable properties: Graph wavelets can be obtained via a fast algorithm without requiring the eigendecomposition of Laplacian matrix, and thus is efficient; Graph wavelets are sparse, while eigenvectors of Laplacian matrix are dense. As a , graph wavelet transform is much more efficient than graph Fourier transform; Graph wavelets are localized in vertex domain, reflecting the information diffusion centered at each node BID27. This property eases the understanding of graph convolution defined by graph wavelets. We develop an efficient implementation of the proposed graph wavelet neural network. Convolution in conventional CNN learns an individual convolution kernel for each pair of input feature and output feature, causing a huge number of parameters especially when the number of features is high. We detach the feature transformation from convolution and learn a sole convolution kernel among all features, substantially reducing the number of parameters. Finally, we validate the effectiveness of the proposed graph wavelet neural network by applying it to graph-based semi-supervised classification. Experimental demonstrate that our method consistently outperforms previous spectral CNNs on three benchmark datasets, i.e., Cora, Citeseer, and Pubmed.2 OUR METHOD 2.1 PRELIMINARY Let G = {V, E, A} be an undirected graph, where V is the set of nodes with |V| = n, E is the set of edges, and A is adjacency matrix with A i,j = A j,i to define the connection between node i and node j. The graph Laplacian matrix L is defined as L = D −A where D is a diagonal degree matrix with D i,i = j A i,j, and the normalized Laplacian matrix is L = I n − D −1/2 AD −1/2 where I n is the identity matrix. Since L is a real symmetric matrix, it has a complete set of orthonormal eigenvectors U = (u 1, u 2, ..., u n), known as Laplacian eigenvectors. These eigenvectors have associated real, non-negative eigenvalues {λ l} n l=1, identified as the frequencies of graph. Eigenvectors associated with smaller eigenvalues carry slow varying signals, indicating that connected nodes share similar values. In contrast, eigenvectors associated with larger eigenvalues carry faster varying signals across connected nodes. Taking the eigenvectors of normalized Laplacian matrix as a set of bases, graph Fourier transform of a signal x ∈ R n on graph G is defined asx = U x, and the inverse graph Fourier transform is x = Ux BID24. Graph Fourier transform, according to convolution theorem, offers us a way to define the graph convolution operator, denoted as * G. Denoting with y the convolution kernel, * G is defined as DISPLAYFORM0 where is the element-wise Hadamard product. Replacing the vector U y by a diagonal matrix g θ, then Hadamard product can be written in the form of matrix multiplication. Filtering the signal x by the filter g θ, we can write Equation as U g θ U x. However, there are some limitations when using Fourier transform to implement graph convolution: Eigendecomposition of Laplacian matrix to obtain Fourier basis U is of high computational cost with O(n 3); Graph Fourier transform is inefficient, since it involves the multiplication between a dense matrix U and the signal x; Graph convolution defined through Fourier transform is not localized in vertex domain, i.e., the influence to the signal on one node is not localized in its neighborhood. 
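The construction just described can be written out directly. The short sketch below builds the normalized Laplacian, performs the O(n^3) eigendecomposition, and applies the spectral filter U g_θ U^T x, which makes the cost and density issues listed above explicit; the function names are illustrative.

```python
import numpy as np

def normalized_laplacian(adj):
    """L = I_n - D^{-1/2} A D^{-1/2} for an undirected graph with adjacency matrix `adj`."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt

def fourier_convolution(adj, x, g_theta):
    """Spectral graph convolution U g_theta U^T x with a diagonal filter g_theta."""
    lam, U = np.linalg.eigh(normalized_laplacian(adj))   # O(n^3) eigendecomposition of L
    return U @ np.diag(g_theta) @ U.T @ x                # U is dense, so the transform itself is costly
```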
To address these limitations, ChebyNet BID4 restricts convolution kernel g θ to a polynomial expansion DISPLAYFORM1 where K is a hyper-parameter to determine the range of node neighborhoods via the shortest path distance, θ ∈ R K is a vector of polynomial coefficients, and Λ =diag({λ l} n l=1 ). However, such a polynomial approximation limits the flexibility to define appropriate convolution on graph, i.e., with a smaller K, it's hard to approximate the diagonal matrix g θ with n free parameters. While with a larger K, locality is no longer guaranteed. Different from ChebyNet, we address the aforementioned three limitations through replacing graph Fourier transform with graph wavelet transform. Similar to graph Fourier transform, graph wavelet transform projects graph signal from vertex domain into spectral domain. Graph wavelet transform employs a set of wavelets as bases, defined as ψ s = (ψ s1, ψ s2, ..., ψ sn), where each wavelet ψ si corresponds to a signal on graph diffused away from node i and s is a scaling parameter. Mathematically, ψ si can be written as DISPLAYFORM0 where U is Laplacian eigenvectors, G s =diag g(sλ 1),..., g(sλ n) is a scaling matrix and g(sλ i) = e λis.Using graph wavelets as bases, graph wavelet transform of a signal x on graph is defined asx = ψ −1 s x and the inverse graph wavelet transform is x = ψ sx. Note that ψ −1 s can be obtained by simply replacing the g(sλ i) in ψ s with g(−sλ i) corresponding to a heat kernel BID6. Replacing the graph Fourier transform in Equation FORMULA0 with graph wavelet transform, we obtain the graph convolution as DISPLAYFORM1 Compared to graph Fourier transform, graph wavelet transform has the following benefits when being used to define graph convolution: 2. High spareness: the matrix ψ s and ψ −1 s are both sparse for real world networks, given that these networks are usually sparse. Therefore, graph wavelet transform is much more computationally efficient than graph Fourier transform. For example, in the Cora dataset, more than 97% elements in ψ −1 s are zero while only less than 1% elements in U are zero TAB3. 3. Localized convolution: each wavelet corresponds to a signal on graph diffused away from a centered node, highly localized in vertex domain. As a , the graph convolution defined in Equation FORMULA3 is localized in vertex domain. We show the localization property of graph convolution in Appendix A. It is the localization property that explains why graph wavelet transform outperforms Fourier transform in defining graph convolution and the associated tasks like graph-based semisupervised learning. 4. Flexible neighborhood: graph wavelets are more flexible to adjust node's neighborhoods. Different from previous methods which constrain neighborhoods by the discrete shortest path distance, our method leverages a continuous manner, i.e., varying the scaling parameter s. A small value of s generally corresponds to a smaller neighborhood. FIG0 shows two wavelet bases at different scale on an example network, depicted using GSP toolbox BID22. Replacing Fourier transform with wavelet transform, graph wavelet neural network (GWNN) is a multi-layer convolutional neural network. 
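A minimal sketch of how such wavelet bases can be computed and sparsified is given below. It assumes the forward transform uses the heat kernel e^{-sλ} and the inverse uses e^{sλ} (the exponent in the text above is garbled, so this sign convention is an assumption), and it uses the exact eigendecomposition rather than the fast Chebyshev approximation mentioned above; entries below a threshold are zeroed, as done in the experiments.

```python
import numpy as np

def wavelet_bases(laplacian, s, threshold=1e-4):
    """Graph wavelet bases psi_s = U diag(g(s*lambda)) U^T and their inverse, sparsified
    by zeroing entries below `threshold`; assumes g(s*lambda) = exp(-s*lambda) for psi_s
    and exp(s*lambda) for psi_s^{-1}."""
    lam, U = np.linalg.eigh(laplacian)                   # exact version; a fast approximation
                                                         # can avoid this eigendecomposition
    psi = U @ np.diag(np.exp(-s * lam)) @ U.T
    psi_inv = U @ np.diag(np.exp(s * lam)) @ U.T
    psi[np.abs(psi) < threshold] = 0.0
    psi_inv[np.abs(psi_inv) < threshold] = 0.0
    return psi, psi_inv

def wavelet_convolution(psi, psi_inv, x, g_theta):
    """Graph convolution in the wavelet domain: psi_s g_theta psi_s^{-1} x."""
    return psi @ np.diag(g_theta) @ psi_inv @ x
```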
The structure of the m-th layer is X^{m+1}_{[:,j]} = h(ψ_s Σ_{i=1}^{p} F^m_{i,j} ψ_s^{-1} X^m_{[:,i]}), j = 1, …, q, where ψ_s is the matrix of wavelet bases, ψ_s^{-1} is the graph wavelet transform matrix at scale s which projects the signal in vertex domain into spectral domain, X^m_{[:,i]} with dimensions n × 1 is the i-th column of X^m, F^m_{i,j} is a diagonal filter matrix learned in spectral domain, and h is a non-linear activation function. This layer transforms an input tensor X^m with dimensions n × p into an output tensor X^{m+1} with dimensions n × q. In this paper, we consider a two-layer GWNN for semi-supervised node classification on graph. The formulation of our model is first layer: X^2_{[:,j]} = ReLU(ψ_s Σ_{i=1}^{p} F^1_{i,j} ψ_s^{-1} X^1_{[:,i]}), j = 1, …, q; second layer: Z_{[:,j]} = softmax(ψ_s Σ_{i=1}^{q} F^2_{i,j} ψ_s^{-1} X^2_{[:,i]}), j = 1, …, c, where c is the number of classes in node classification and Z of dimensions n × c is the prediction result. The loss function is the cross-entropy error over all labeled examples: Loss = −Σ_{l∈y_L} Σ_{i=1}^{c} Y_{li} ln Z_{li}, where y_L is the labeled node set, Y_{li} = 1 if the label of node l is i, and Y_{li} = 0 otherwise. The weights F are trained using gradient descent. In the layer formulation above, the parameter complexity of each layer is O(n × p × q), where n is the number of nodes, p is the number of features of each vertex in the current layer, and q is the number of features of each vertex in the next layer. Conventional CNN methods learn a convolution kernel for each pair of input feature and output feature. This results in a huge number of parameters and generally requires huge training data for parameter learning. This is prohibitive for graph-based semi-supervised learning. To combat this issue, we detach the feature transformation from graph convolution. Each layer in GWNN is divided into two components: feature transformation and graph convolution. Specifically, we have feature transformation: X^{m'} = X^m W; graph convolution: X^{m+1} = h(ψ_s F^m ψ_s^{-1} X^{m'}), where W ∈ R^{p×q} is the parameter matrix for feature transformation, X^{m'} with dimensions n × q is the feature matrix after feature transformation, F^m is the diagonal matrix for the graph convolution kernel, and h is a non-linear activation function. After detaching feature transformation from graph convolution, the parameter complexity is reduced from O(n × p × q) to O(n + p × q). The reduction of parameters is particularly valuable for graph-based semi-supervised learning where labels are quite limited. Graph convolutional neural networks on graphs. The success of CNNs when dealing with images, videos, and speeches motivates researchers to design graph convolutional neural networks on graphs. The key of generalizing CNNs to graphs is defining the convolution operator on graphs. Existing methods are classified into two categories, i.e., spectral methods and spatial methods. Spectral methods define convolution via the convolution theorem. Spectral CNN BID3 is the first attempt at implementing CNNs on graphs, leveraging graph Fourier transform and defining the convolution kernel in spectral domain. BID1 developed a local spectral CNN approach based on the graph Windowed Fourier Transform. BID4 introduced a Chebyshev polynomial parametrization for the spectral filter, offering us a fast localized spectral filtering method. BID14 provided a simplified version of ChebyNet, gaining success in the graph-based semi-supervised learning task. BID12 represented images as signals on graph and learned their transformation-invariant representations. They used Chebyshev approximations to implement graph convolution, avoiding matrix eigendecomposition. BID16 used rational functions instead of polynomials and created anisotropic spectral filters on manifolds.
Spatial methods define convolution as a weighted average function over neighborhood of target vertex. GraphSAGE takes one-hop neighbors as neighborhoods and defines the weighting function as various aggregators over neighborhood BID8. Graph attention network (GAT) proposes to learn the weighting function via self-attention mechanism BID28. MoNet offers us a general framework for design spatial methods, taking convolution as the weighted average of multiple weighting functions defined over neighborhood. Some works devote to making graph convolutional networks more powerful. BID20 alternated convolutions on vertices and edges, generalizing GAT and leading to better performance. GraphsGAN BID5 generalizes GANs to graph, and generates fake samples in low-density areas between subgraphs to improve the performance on graph-based semi-supervised learning. Graph wavelets. presented a lifting scheme, a simple construction of wavelets that can be adapted to graphs without learning process. BID9 proposed a method to construct wavelet transform on graphs. Moreover, they designed an efficient way to bypass the eigendecomposition of the Laplacian and approximated wavelets with Chebyshev polynomials. BID27 leveraged graph wavelets for multi-scale community mining by modulating a scaling parameter. Owing to the property of describing information diffusion, BID6 learned structural node embeddings via wavelets. All these works prove that graph wavelets are not only local and sparse but also valuable for signal processiong on graph. To evaluate the proposed GWNN, we apply GWNN on semi-supervised node classification, and conduct experiments on three benchmark datasets, namely, Cora, Citeseer and Pubmed BID23. In the three citation network datasets, nodes represent documents and edges are citation links. Details of these datasets are demonstrated in TAB0. Here, the label rate denotes the proportion of labeled nodes used for training. Following the experimental setup of GCN , we fetch 20 labeled nodes per class in each dataset to train the model. We compare with several traditional semi-supervised learning methods, including label propagation (LP) BID31, semi-supervised embedding (SemiEmb) BID29, manifold regularization (ManiReg) BID0, graph embeddings (DeepWalk) BID21, iterative classification algorithm (ICA) BID17 and Planetoid BID30.Furthermore, along with the development of deep learning on graph, graph convolutional networks are proved to be effective in semi-supervised learning. Since our method is a spectral method based on convolution theorem, we compare it with the Spectral CNN BID3. ChebyNet BID4 and GCN , two variants of the Spectral CNN, are also included as our baselines. Considering spatial methods, we take MoNet as our baseline, which also depends on Laplacian matrix. We train a two-layer graph wavelet neural network with 16 hidden units, and prediction accuracy is evaluated on a test set of 1000 labeled samples. The partition of datasets is the same as GCN BID14 with an additional validation set of 500 labeled samples to determine hyper-parameters. Weights are initialized following BID7. We adopt the Adam optimizer for parameter optimization with an initial learning rate lr = 0.01. For computational efficiency, we set the elements of ψ s and ψ −1 s smaller than a threshold t to 0. We find the optimal hyper-parameters s and t through grid search, and the detailed discussion about the two hyperparameters is introduced in Appendix B. For Cora, s = 1.0 and t = 1e − 4. 
For Citeseer, s = 0.7 and t = 1e − 5. For Pubmed, s = 0.5 and t = 1e − 7. To avoid overfitting, dropout BID25 is applied. Meanwhile, we terminate the training if the validation loss does not decrease for 100 consecutive epochs. Since the number of parameters for the undetached version of GWNN is O(n × p × q), we can hardly implement this version in the case of networks with a large number n of nodes and a huge number p of input features. Here, we validate the effectiveness of detaching feature transformation form convolution on ChebyNet (introduced in Section 2.2), whose parameter complexity is O(K × p × q). For ChebyNet of detaching feature transformation from graph convolution, the number of parameters is reduced to O(K + p × q). TAB1 shows the performance and the number of parameters on three datasets. Here, the reported performance is the optimal performance varying the order K = 2, 3, 4. As demonstrated in TAB1, with fewer parameters, we improve the accuracy on Pubmed by a large margin. This is due to that the label rate of Pubmed is only 0.003. By detaching feature transformation from convolution, the parameter complexity is significantly reduced, alleviating overfitting in semi-supervised learning and thus remarkably improving prediction accuracy. On Citeseer, there is a little drop on the accuracy. One possible explanation is that reducing the number of parameters may restrict the modeling capacity to some degree. We now validate the effectiveness of GWNN with detaching technique on node classification. Experimental are reported in TAB2. GWNN improves the classification accuracy on all the three datasets. In particular, replacing Fourier transform with wavelet transform, the proposed GWNN is comfortably ahead of Spectral CNN, achieving 10% improvement on Cora and Citeseer, and 5% improvement on Pubmed. The large improvement could be explained from two perspectives: Convolution in Spectral CNN is non-local in vertex domain, and thus the range of feature diffusion is not restricted to neighboring nodes; The scaling parameter s of wavelet transform is flexible to adjust the diffusion range to suit different applications and different networks. GWNN consistently outperforms ChebyNet, since it has enough degree of freedom to learn the convolution kernel, while ChebyNet is a kind of approximation with limited degree of freedom. Furthermore, our GWNN also performs better than GCN and MoNet, reflecting that it is promising to design appropriate bases for spectral methods to achieve good performance. Besides the improvement on prediction accuracy, wavelet transform with localized and sparse transform matrix holds sparsity in both spatial domain and spectral domain. Here, we take Cora as an example to illustrate the sparsity of graph wavelet transform. The sparsity of transform matrix. There are 2,708 nodes in Cora. Thus, the wavelet transform matrix ψ −1 s and the Fourier transform matrix U both belong to R 2,708×2,708. The first two rows in TAB3 demonstrate that ψ −1 s is much sparser than U. Sparse wavelets not only accelerate the computation, but also well capture the neighboring topology centered at each node. The sparsity of projected signal. As mentioned above, each node in Cora represents a document and has a sparse bag-of-words feature. The input feature X ∈ R n×p is a binary matrix, and X [i,j] = 1 when the i-th document contains the j-th word in the bag of words, it equals 0 otherwise. 
Here, X [:,j] denotes the j-th column of X, and each column represents the feature vector of a word. Considering a specific signal X [:,984], we project the spatial signal into spectral domain, and get its projected vector. Here, p = ψ −1 s X [:,984] denotes the projected vector via wavelet transform, q = U X [:,984] denotes the projected vector via Fourier transform, and p, q ∈ R 2,708. The last row in TAB3 lists the numbers of non-zero elements in p and q. As shown in TAB3, with wavelet transform, the projected signal is much sparser. Compare with graph convolution network using Fourier transform, GWNN provides good interpretability. Here, we show the interpretability with specific examples in Cora. Each feature, i.e. word in the bag of words, has a projected vector, and each element in this vector is associated with a spectral wavelet basis. Here, each basis is centered at a node, corresponding to a document. The value can be regarded as the relation between the word and the document. Thus, each value in p can be interpreted as the relation between W ord 984 and a document. In order to elaborate the interpretability of wavelet transform, we analyze the projected values of different feature as following. Considering two features W ord 984 and W ord 1177, we select the top-10 active bases, which have the 10 largest projected values of each feature. As illustrated in Figure 2, for clarity, we magnify the local structure of corresponding nodes and marked them with bold rims. The central network in each subgraph denotes the dataset Cora, each node represents a document, and 7 different colors represent 7 classes. These nodes are clustered by OpenOrd BID18 based on the adjacency matrix. Figure 2a shows the top-10 active bases of W ord 984. In Cora, this word only appears 8 times, and all the documents containing W ord 984 belong to the class " Case-Based ". Consistently, all top-10 nodes activated by W ord 984 are concentrated and belong to the class " Case-Based ". And, the frequencies of W ord 1177 appearing in different classes are similar, indicating that W ord 1177 is a universal word. In concordance with our expectation, the top-10 active bases of W ord 1177 are discrete and belong to different classes in Figure 2b. DISPLAYFORM0 Figure 2: Top-10 active bases of two words in Cora. The central network of each subgraph represents the dataset Cora, which is split into 7 classes. Each node represents a document, and its color indicates its label. The nodes that represent the top-10 active bases are marked with bold rims. (a) W ord 984 only appears in documents of the class " Case-Based " in Cora. Consistently, all its 10 active bases also belong to the class " Case-Based ". (b) The frequencies of W ord 1177 appearing in different classes are similar in Cora. As expected, the top-10 active bases of W ord 1177 also belong to different classes. Owing to the properties of graph wavelets, which describe the neighboring topology centered at each node, the projected values of wavelet transform can be explained as the correlation between features and nodes. These properties provide an interpretable domain transformation and ease the understanding of graph convolution. Replacing graph Fourier transform with graph wavelet transform, we proposed GWNN. Graph wavelet transform has three desirable properties: Graph wavelets are local and sparse; Graph wavelet transform is computationally efficient; Convolution is localized in vertex domain. 
These advantages make the whole learning process interpretable and efficient. Moreover, to reduce the number of parameters and the dependence on huge training data, we detached the feature transformation from convolution. This practice makes GWNN applicable to large graphs, with remarkable performance improvement on graph-based semi-supervised learning. We use a diagonal matrix Θ to represent the learned kernel transformed by wavelets, ψ_s^{-1} y, and replace the Hadamard product with matrix multiplication. The wavelet-domain convolution then becomes x *_G y = ψ_s Θ ψ_s^{-1} x. We set ψ_s = (ψ_{s1}, ψ_{s2}, ..., ψ_{sn}) and ψ_s^{-1} = (ψ*_{s1}, ψ*_{s2}, ..., ψ*_{sn})^T, so the convolution becomes x *_G y = Σ_{k=1}^{n} Θ_{kk} M_k x, where M_k = ψ_{sk} (ψ*_{sk})^T. Since each M_k is local, for any convolution kernel Θ, ψ_s Θ ψ_s^{-1} is local, which means that the convolution is localized in vertex domain. By replacing Θ with an identity matrix, we get x *_G y = Σ_{k=1}^{n} M_k x. We define H = Σ_{k=1}^{n} M_k, and Figure 4 shows H_{[1,:]} at different scalings, i.e., the correlation between the first node and other nodes during convolution. The locality of H suggests that graph convolution is localized in vertex domain. Moreover, as the scaling parameter s becomes larger, the range of feature diffusion becomes larger. GWNN leverages graph wavelets to implement graph convolution, where s is used to modulate the range of neighborhoods. From Figure 5, as s becomes larger starting from 0, the range of neighboring nodes becomes larger, resulting in an increase of accuracy on Cora. However, when s becomes too large, some irrelevant nodes are included, leading to a decrease of accuracy. The hyperparameter t, used only for computational efficiency, has only a slight influence on performance. For experiments on a specific dataset, s and t are chosen via grid search using the validation set. Generally, an appropriate s is in the range of [0.5, 1], which can not only capture the graph structure but also guarantee the locality of convolution, and t is relatively insensitive to the dataset. We show the parameter complexity of node classification in TAB4. The high parameter complexity O(n × p × q) of Spectral CNN makes it difficult to generalize to real-world networks. ChebyNet approximates the convolution kernel via a polynomial function of the diagonal matrix of Laplacian eigenvalues, reducing the parameter complexity to O(K × p × q) with K being the order of the polynomial function. GCN simplifies ChebyNet via setting K=1. We detach feature transformation from graph convolution to implement GWNN and Spectral CNN in our experiments, which can reduce the parameter complexity to O(n + p × q). In Cora and Citeseer, with smaller parameter complexity, GWNN achieves better performance than ChebyNet, reflecting that it is promising to implement convolution via graph wavelet transform. As Pubmed has a large number of nodes, the parameter complexity of GWNN is larger than that of ChebyNet. As future work, it is an interesting direction to select wavelets associated with a subset of nodes, further reducing the parameter complexity with a potential loss of performance. With the stable recurrence relation T_k(y) = 2yT_{k−1}(y) − T_{k−2}(y), we can generate the Chebyshev polynomials T_k(y). Here T_0 = 1 and T_1 = y. For y sampled between -1 and 1, the trigonometric expression is T_k(y) = cos(k arccos(y)), and the Chebyshev coefficients of the kernel are given by integrals of the form ∫ cos(kθ) g(s(a(cos(θ) + 1))) dθ. We truncate the Chebyshev expansion to m terms and achieve a polynomial approximation. Here we give the example of ψ_s^{-1} with g(sx) = e^{−sx}, where the graph signal is f ∈ R^n.
Then we can give the fast approximation of the wavelet transform ψ_s^{-1} f via this truncated Chebyshev expansion, without requiring the eigendecomposition of the Laplacian. The sparsity of the graph wavelets depends on the sparsity of the Laplacian matrix and the hyperparameter s. We show the sparsity of the spectral transform matrix and the Laplacian matrix in TAB5. The Laplacian matrix is sparser than the graph wavelets, and this property limits our method, i.e., it has higher time complexity than some methods that depend only on the Laplacian matrix and the identity matrix, e.g., GCN. Specifically, both our method and GCN aim to improve Spectral CNN via designing localized graph convolution. GCN, as a simplified version of ChebyNet, leverages the Laplacian matrix as a weighting matrix and expresses the spectral graph convolution in spatial domain, acting as a spatial-like method. However, our method resorts to using graph wavelets as a new set of bases, directly designing a localized spectral graph convolution. GWNN offers a localized graph convolution via replacing graph Fourier transform with graph wavelet transform, finding a good spectral basis with the localization property and good interpretability. This distinguishes GWNN from ChebyNet and GCN, which express the graph convolution defined via graph Fourier transform in vertex domain.
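To tie the model description together, here is a hedged PyTorch-style sketch of one GWNN layer with feature transformation detached from graph convolution; the module interface, the initialization, and the use of dense (already thresholded) ψ_s matrices are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DetachedGWNNLayer(nn.Module):
    """One GWNN layer: X' = X W (feature transformation), then
    X_out = h(psi_s diag(f) psi_s^{-1} X') (graph convolution).
    Parameter count is O(n + p*q), as discussed above."""

    def __init__(self, psi_s, psi_s_inv, in_features, out_features):
        super().__init__()
        self.register_buffer("psi_s", psi_s)          # n x n, precomputed and sparsified
        self.register_buffer("psi_s_inv", psi_s_inv)  # n x n
        self.W = nn.Parameter(torch.empty(in_features, out_features))
        self.filter_diag = nn.Parameter(torch.ones(psi_s.shape[0]))  # diagonal kernel F
        nn.init.xavier_uniform_(self.W)

    def forward(self, X):
        X = X @ self.W                                            # feature transformation
        X = self.psi_s @ (self.filter_diag.unsqueeze(1) * (self.psi_s_inv @ X))
        return torch.relu(X)                                      # non-linearity h
```

Stacking two such layers, with a softmax over the second layer's output and a cross-entropy loss on the labeled nodes, recovers the two-layer model used above for semi-supervised node classification.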
We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcoming of previous spectral graph CNN methods that depend on graph Fourier transform.
762
scitldr
Learning rate decay (lrDecay) is a \emph{de facto} technique for training modern neural networks. It starts with a large learning rate and then decays it multiple times. It is empirically observed to help both optimization and generalization. Common beliefs in how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation. Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex. We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns. The proposed explanation is validated on a carefully-constructed dataset with tractable pattern complexity. And its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified in real-world datasets. We believe that this alternative explanation will shed light into the design of better training strategies for modern neural networks. Modern neural networks are deep, wide, and nonconvex. They are powerful tools for representation learning and serve as core components of deep learning systems. They are top-performing models in language translation , visual recognition (, and decision making . However, the understanding of modern neural networks is way behind their broad applications. A series of pioneering works (; ;) reveal the difficulty of applying conventional machine learning wisdom to deep learning. A better understanding of deep learning is a major mission in the AI field. One obstacle in the way of understanding deep learning is the existence of magic modules in modern neural networks and magic tricks to train them. Take batch normalization module for example, its pervasiveness in both academia and industry is undoubted. The exact reason why it expedites training and helps generalization, however, remains mysterious and is actively studied in recent years (; ;). Only when we clearly understand these magical practices can we promote the theoretical understanding of modern neural networks. Learning rate is "the single most important hyper-parameter" in training neural networks. Learning rate decay (lrDecay) is a de facto technique for training modern neural networks, where we adopt an initially large learning rate and then decay it by a certain factor after pre-defined epochs. Popular deep networks such as ResNet , DenseNet (b) are all trained by Stochastic Gradient Descent (SGD) with lrDecay. Figure 1 (a) is an example of lrDecay, with the learning rate decayed by 10 every 30 epochs. The training is divided into several stages by the moments of decay. These stages can be easily identified in learning curves (such as Figure 1(b) ), where the performance boosts sharply shortly after the learning rate is decayed. The lrDecay enjoys great popularity due to its simplicity and general effectiveness. Common beliefs in how lrDecay works are derived from the optimization analysis in (Stochastic) Gradient Descent . 
They attribute the effect of an initially large learning rate to escaping spurious local minima or accelerating training, and attribute the effect of decaying the learning rate to avoiding oscillation around local minima. However, these common beliefs are insufficient to explain our empirical observations from a series of carefully-designed experiments in Section 4. In this paper, we provide an alternative view: the magnitude of the learning rate is closely related to the complexity of learnable patterns. From this perspective, we propose a novel explanation for the efficacy of lrDecay: an initially large learning rate suppresses the memorization of noisy data while decaying the learning rate improves the learning of complex patterns. This is validated on a carefully-constructed dataset with tractable pattern complexity. The pattern complexity in real-world datasets is often intractable. We thus validate the explanation by testing its implication on real-world datasets. The implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable across different datasets, is also justified empirically. A comparison between the proposed explanation and the common beliefs is summarized in Table 1.

Table 1: Comparison of explanations on why lrDecay helps training neural networks. The column "supported" means whether the explanation is supported by the empirical experiments in this paper.
Explanation | effect of a large initial learning rate | effect of decaying the learning rate | supported
optimization | escapes bad local minima | converges to local minimum | no
proposed (pattern complexity) | avoids fitting noisy data | learns more complex patterns | yes

Our explanation is supported by carefully-designed experiments and provides a new perspective on analyzing learning rate decay. The contribution of this paper is two-fold: • We demonstrate by experiments that existing explanations of how lrDecay works are insufficient in explaining the training behaviors in modern neural networks. • We propose a novel explanation based on pattern complexity, which is validated on a dataset with tractable pattern complexity, and its implication is validated on real-world datasets. The explanation also suggests that complex patterns are only learnable after learning rate decay. Thus, when the model has learned all simple patterns but the epoch to decay has not been reached, immediately decaying the learning rate will not hurt the performance. This implication is validated in Section A.1. Recently, researchers have revealed the behavior of SGD from multiple perspectives. They respect the difference among data items rather than treating them as identical samples from a distribution, and they study the behavior of SGD on a given dataset. For example, they show that deep models first learn easy examples classifiable by shallow methods. The mutual information between deep models and linear models has also been measured, which suggests deep models first learn data explainable by linear models. Note that these works are not concerned with learning rates. A recent work analyzes a toy problem to uncover the regularization effect of an initially large learning rate. Its theoretical explanation is, however, based on a specific two-layer neural network it designs. Different from these works, Section 5 studies the behavior of SGD induced by lrDecay in a modern WideResNet, finding that learning rate decay improves learning of complex patterns. We formally define pattern complexity by expected class conditional entropy, while the measures of pattern complexity in prior works rely on an auxiliary model.
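As a concrete reading of the expected class conditional entropy mentioned above, the following sketch computes C({(x_i, y_i)}) = E_y[H(P(x|y))] for discrete patterns; the toy data and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np
from collections import Counter

def pattern_complexity(xs, ys):
    """Expected class conditional entropy C = E_y[ H(P(x | y)) ] for discrete patterns.
    xs: hashable pattern identifiers; ys: class labels."""
    ys = np.asarray(ys)
    total = 0.0
    for y in np.unique(ys):
        p_y = np.mean(ys == y)
        counts = np.array(list(Counter(x for x, yy in zip(xs, ys) if yy == y).values()),
                          dtype=float)
        p_x_given_y = counts / counts.sum()
        total += p_y * -(p_x_given_y * np.log(p_x_given_y)).sum()
    return total

# A class realized by 2 equally likely patterns vs. one realized by 4:
print(pattern_complexity(["a", "b", "a", "b"], [0, 0, 0, 0]))  # log 2 ~ 0.69
print(pattern_complexity(["a", "b", "c", "d"], [1, 1, 1, 1]))  # log 4 ~ 1.39
```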
Adaptive learning rate methods such as AdaGrad, AdaDelta, and ADAM are sophisticated optimization algorithms for training modern neural networks. It remains an active research field to study their behaviors and underlying mechanisms. However, we focus on learning rate decay in SGD rather than on the adaptive learning rate methods. On the one hand, SGD is the de facto training algorithm for popular models while lrDecay is not common in the adaptive methods; on the other hand, many adaptive methods are not as simple as SGD and even degenerate in convergence in some scenarios. We choose to study SGD with lrDecay, without introducing adaptive learning rates, to keep away from their confounding factors. Besides the commonly used lrDecay, there are other learning rate strategies. One line of work proposes a cyclic strategy, claiming to dismiss the need for tuning learning rates. Warm restarts of the learning rate have also been explored, achieving better results when combined with Snapshot Ensemble. These learning rate strategies often yield better results at the cost of additional hyperparameters that are not intuitive. Consequently, it is still the de facto practice to decay the learning rate after pre-defined epochs as in Figure 1(a). We stick our analysis to lrDecay rather than to other fancy ones because of its simplicity and general effectiveness. Training a model on one dataset that can be transferred to other datasets has long been a goal of AI research. The exploration of model transferability has attracted extensive attention. In early work, deep features trained for classification were transferred to improve object detection successfully. Later work studies the transferability of different modules in pre-trained networks, indicating that higher layers are more task-specific and less transferable across datasets. By varying network architectures, other work shows that architectures with a better ImageNet accuracy generally transfer better. Further work explores transfer learning in the field of medical imaging to address domain-specific difficulties. Different from these works that only consider the transferability of models after training, we investigate another dimension of model transferability in Section 6: the evolution of transferability during training with lrDecay. The practice of lrDecay in training neural networks dates back to early work on neural network training. The most popular belief in the effect of lrDecay comes from the optimization analysis of Gradient Descent. Although SGD is more practical in deep learning, researchers are usually satisfied with the analysis of GD, considering that SGD is a stochastic variant of GD. Figure 2 illustrates the three regimes, from left to right: 1) the learning rate is small enough to converge around a minimum; 2) moderate, so that it bounces among minima; 3) too large to converge. The standard analysis considers a quadratic loss surface, which can be seen as a second-order approximation around a local minimum in nonconvex optimization. Learning rates are characterized by their relationship with eigenvalues of the Hessian at a local minimum. Denote by η the learning rate, H the Hessian, λ an eigenvalue of H, and v an eigenvector of λ. The behavior of the network along the direction v can be characterized as (1 − ηλ)^k v, with k the iteration number. Convergence in the direction of v requires 0 < η < 2/λ, while η > 2/λ leads to divergence in the direction of v. If 0 < η < 2/λ holds for every eigenvalue of the Hessian, the network will converge quickly (Figure 2, left).
If it holds for some directions but not for all directions, the network will diverge in some directions and thus jump into the neighborhood of another local minimum (Figure 2, middle). If the learning rate is too large, the network will not converge (Figure 2, right). In particular, when oscillation happens, it means the learning rate is too large and should be decayed. The effect of lrDecay, according to this analysis, is hence to avoid oscillation and to obtain faster convergence. However, this analysis only considers a simple one-layer network, and it may not hold for modern neural networks (see Section 4.1). Another common belief is the Stochastic Gradient Descent explanation, arguing that "with a high learning rate, the system is unable to settle down into deeper, but narrower parts of the loss function." Although it is common, this argument had not been formally analyzed until very recently. Under some assumptions, recent work proves that SGD is equivalent to convolving the loss surface, with the learning rate serving as the conceptual kernel size of the convolution. With an appropriate learning rate, spurious local minima can be smoothed out, thus helping neural networks escape bad local minima. The decay of the learning rate later helps the network converge around the minimum. Figure 3 is an intuitive one-dimensional example: the first plot shows that a large learning rate helps escape bad local minima on both sides, and the lrDecay in the subsequent plots increases the probability of reaching the global minimum after more rounds of decay. Although intuitive, the explanation requires some assumptions that may not hold for modern neural networks (see Section 4.2).
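To make the quadratic-loss argument above concrete, here is a small numeric sketch of the per-direction factor (1 − ηλ)^k; the Hessian eigenvalues and learning rates are made up for illustration only.

```python
import numpy as np

# Hypothetical Hessian eigenvalues at a local minimum, and two candidate learning rates.
eigenvalues = np.array([0.5, 5.0, 50.0])
for eta in (0.01, 0.3):
    print(f"eta = {eta}")
    for lam in eigenvalues:
        factor = 1.0 - eta * lam                     # per-step scaling along eigenvector v
        status = "converges" if abs(factor) < 1 else "diverges (eta > 2/lambda)"
        # After k steps the component along v scales by factor**k.
        print(f"  lambda = {lam:5.1f}  1 - eta*lambda = {factor:+6.2f}  -> {status}")
```

With a spectrum this wide there is no benign oscillating regime: a given learning rate either converges along every direction or blows up along the largest eigenvalue, which is the behavior examined empirically in Section 4.1.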
It is hardly possible to observe the oscillation in learning curves (Figure 2 middle), and diverging learning curves (Figure 2 right) can be discarded during hyperparameter tuning. Therefore, only stable solutions are observable where η is small enough (Figure 2 left), leaving no necessity for learning rate decay. Indeed, when the learning rate is increased mildly, we immediately observe diverging learning curves (Section A.2). In short, the GD explanation cannot explain the effect of lrDecay in training modern neural networks. We follow the experiment setups in Section 4.1, but replace GD with SGD in Figure 7. According to the SGD explanation in Section 3.2, the effect of learning rate decay is to increase the probability of reaching a good minimum. If it is true, the model trained before decay can also reach minima, only by a smaller probability compared to the model after decay. In other words, the SGD explanation indicates the best performances before and after decay are the same. It predicts learning curves like Figure 6. However, Figure 7 does not comply with the SGD explanation: the best performances before and after lrDecay are different by a noticeable margin. Without lrDecay (the rightmost column in Figure 7), the performance plateaus and oscillates, with no chance reaching the performance of the other columns after decay. The performance boost after learning rate decay is widely observed (Figure 1(b) for example). However, possibly due to the violation of its assumptions , the SGD explanation cannot explain the underlying effect of lrDecay. Section 4 uncovers the insufficiency of common beliefs in explaining lrDecay. We thus set off to find a better explanation.; reveal that SGD (without learning rate decay) learns from easy to complex. As learning rates often change from large to small in typical learning rate strategies, we hypothesize that the complexity of learned patterns is related to the magnitude of learning rates. Based on this, we provide a novel explanation from the view of pattern complexity: the effect of learning rate decay is to improve the learning of complex patterns while the effect of an initially large learning rate is to avoid memorization of noisy data. To justify this explanation, we carefully construct a dataset with tractable pattern complexity, and record model accuracies in simple and complex patterns separately with and without lrDecay. The explanation we propose involves pattern complexity, which is generally conceptual and sometimes measured with the help of a simple auxiliary model as in;. Here we try to formalize the idea of pattern complexity: the complexity of a dataset is defined as the expected class conditional entropy: C({(x i, y i)} n i=1 ) = E y H(P (x|y)), where H denotes the entropy functional. The complexity of patterns depends on the complexity of the dataset they belong to. Higher C means larger complexity because there are averagely more patterns in each class to be recognized (consider an animal dataset with 10 subspecies in each species vs. an animal dataset with 100 subspecies in each species). Equipped with the formal definition of complexity, we construct a Pattern Separation 10 (PS10) dataset with ten categories and explicitly separated simple patterns and complex patterns. We first generate a simple sub-dataset together with a complex sub-dataset in R 3. As shown in Figure 8 (a) and Figure 8 (b), patterns are visualized as colors because they lie in R 3. 
The category label can be identified by either simple patterns or complex patterns. We then merge the two sub-datasets into one dataset. The merging method in Figure 8 (c) is specifically designed such that the simple subset and complex subset are fed into different channels of the WideResNet. This mimics the intuition of patterns as the eye pattern and the nose pattern have different locations in an image of human face. To be compatible with the sliding window fashion of convolutional computation, we make patterns the same across spatial dimensions of height and weight to have the same image size as CIFAR10. To reveal the effect of decaying the learning rate, we compare experiments with and without lrDecay. For those without lrDecay, we set the learning rates equal to the learning rate of each stage in lrDecay. We measure not only the total accuracy but also the accuracies on simple and complex patterns separately. These accuracies are plotted in Figure 9. The first plot in Figure 9 clearly shows the model first learns simple patterns quickly. The boost in total accuracy mainly comes from the accuracy gain on complex patterns when the learning rate is decayed. Plots 2, 3, and 4 show the network learns more complex patterns with a smaller learning rate, leading to the that learning rate decay helps the network learn complex patterns. Figure 9 seems to indicate that an initially large learning rate does nothing more than accelerating training: in Plot 4, a small constant learning rate can achieve roughly the same accuracy compared with lrDecay. However, by adding 10% noisy data to mimic real-world datasets, we observe something interesting. Figure 10 shows the accuracies on simple patterns, complex patterns, and noise data when we add noise into the dataset. Plot 2 in Figure 10 shows an initially large learning rate helps the accuracy on complex patterns. Plot 3 in Figure 10 further shows the accuracy gain on complex patterns comes from the suppression of fitting noisy data. (Note that a larger accuracy on noisy data implies overfitting the noisy data, which is undesirable.) In short, the memorizing noisy data hurts the learning of complex patterns but can be suppressed by an initially large learning rate. report that an initially large learning rate with decay outperforms a small and constant learning rate. They suspect that the network starting with an initially small learning rate will be stuck at some spurious local minima. Our experiments provide an alternative view that spurious local minima may stem from noisy data. And the regularization effect of an initially large learning rate is to suppress the memorization of noisy data. Section 5 examines the proposed explanation on the PS10 dataset. Now we further validate the explanation on real-world datasets. Because there are no clearly separated simple and complex patterns in real-world datasets, it is difficult to directly validate the explanation. The proposed explanation suggests that SGD with lrDecay learns patterns of increasing complexity. Intuitively, more complex patterns are less transferable, harder to generalize across datasets. Thus an immediate implication is that SGD with lrDecay learns patterns of decreasing transferability. We validate it by transfer-learning experiments on real-world datasets, to implicitly support the proposed explanation. The transferability is measured by transferring a model from ImageNet to different target datasets. 
To get models in different training stages, we train a ResNet-50 on ImageNet from scratch and save checkpoints of models in different stages. The learning rate is decayed twice, leading to three stages. Target datasets for transferring are: Caltech256 with 256 general object classes; CUB-200 with 200 bird classes; MITIndoors with 67 indoor scenes; Sketch250 with sketch painting in 250 general classes. Sketch250 is the most dissimilar to ImageNet because it contains sketch paintings. We study two widely-used strategies of transfer learning: "fix" (ImageNet snapshot models are only used as fixed feature extractors) and "finetune" (feature extractors are jointly trained together with task-specific layers). Let acc i denotes the accuracy of stage i snapshot model on ImageNet and tacc i denotes the accuracy of transferring the snapshot to the target dataset, then the transferability of additional patterns learned in stage i is defined as tacci−tacci−1 acci−acci−1, i = 2, 3. By definition, the transferability of patterns from ImageNet to ImageNet is 1.0, complying with common sense. The transferability is plotted in Figure 11. Table 2 contains the accuracies used to compute it. In all experiments, we find that the transferability of additional patterns learned in stage 3 is less than that in stage 2. Besides, in Sketch250 dataset, the transferability of additional patterns learned in stage 3 is negative. These findings support our claim that additional patterns learned in later stages of lrDecay are more complex and thus less transferable. They also suggest deep model-zoo developer provide pre-trained model snapshots in different stages so that downstream users can select the most transferable snapshot model according to their tasks. In this paper, we dive into how learning rate decay (lrDecay) helps modern neural networks. We uncover the insufficiency of common beliefs and propose a novel explanation: the effect of decaying learning rate is to improve the learning of complex patterns, and the effect of an initially large learning rate is to avoid memorization of noisy data. It is supported by experiments on a dataset with tractable pattern complexity as well as on real-world datasets. It would be interesting to further bridge the proposed explanation and the formal analysis of optimization procedure. A.1 AUTODECAY Experiments in Section 5.2 implies that not all complex patterns are learnable under a constant learning rate. The training under a certain learning rate has no effect when the loss plateaus. This indicates we can expedite the training process by killing the over-training of each stage (decay the learning rate when the loss plateaus) with little influence on the performance. To validate the implication, we propose AutoDecay to shorten the useless training and check if the performance of the model can be untouched. In Figure 7, it appears obvious to decide the optimal moment to decay when we have a big picture of the training process. The problem is, however, how can we make a decision to decay depending on the current and past observations. It is a non-trivial problem given that the statistics exhibit noticeable noise. We formalize the observed training loss into two parts:ˆ (t) = (t) + (t), with (t) the ground truth loss (unobservable) and (t) the noise introduced by SGD. Here t indicates the training process (typically the epoch number) and takes value in N = {1, 2, 3, . . .}. To simplify the problem, we assume (t) is independent with t and (t) is independent of (t)(t = t) in SGD. 
The nature of noise gives rise to the zero-expectation property E (t) = 0. Denote σ 2 = Var (t) the variance of the noise. Due to the noise of SGD, the observed training loss usually vibrates in a short time window but decreases in a long time window. Our task is to find out whether the loss value is stable in the presence of noise. Exponential Decay Moving Average (EDMA) with Bias Correction. Observations with lower variance are more trustworthy. However, there is nothing we can do about the variance ofˆ (t). We consider computing a low-variance statistic aboutˆ (t). We adopt moving average with bias correction . Let g(t) be the moving average of (t) andĝ(t) be the moving average ofˆ (t). The explicit form is in Equation 1, where β ∈ is the decay factor in EDMA. The recursive (and thus implicit) form is in Equation 2. It enables us to compute the statisticĝ online (without storing all the previous {ˆ (i)|i < t}) at the cost of maintainingf (t). Asĝ(t) is a linear combination of {ˆ (i)|i ≤ t}, it is easy to showĝ(t) is unbiased: The variance ofĝ(t) is The fact that β ∈ indicates Varĝ(t) is monotonically decreasing. Typically β = 0.9 (Figure 13), and the variance can rapidly converge to 0.05σ 2, much smaller than the variance of the noise.ĝ(t) well represents the unobservable g(t). If (t) gets stable, we shall observe thatĝ(t) is stable, too. Criterion of Being Stable. We only want to decay the learning rate when the loss plateaus, i.e. when the loss is stable. For observed values of G = {ĝ(i)|i − W + 1 ≤ i ≤ t} within the window size of W, we call them stable if where is a small constant that prevents zero-division error, and η indicates the tolerance of variation. Criterion of Significant Drop. When we keep decaying the learning rate, there comes a time when the learning rate is too small and the network cannot make any progress. When it happens, we should terminate the training. Termination is adopted when their is no significant drop between the stable value and the original valueĝ. To be specific, the criterion of significant drop isĝ where is a small constant that prevents zero-division error, and ζ indicates the degree of drop. The entire procedure of AutoDecay is described in Figure 12. We try AutoDecay on ImageNet to test whether it can expedite the training without hurting the performance. We are not trying to set up a new state-of-the-art record. We train a ResNet-50 model on ImageNet following the official code of PyTorch. The only change is we replace the StepDecay strategy with the proposed AutoDecay strategy. Each experiment costs roughly two days with 8 TITAN X GPUs. The in Table 14 show that AutoDecay can shorten the training time by 10% without hurting the performance (even bringing a slight improvement), successfully vaidates the proposed explanation in this paper. When we increase the learning rate mildly for Gradient Descent, we immediately observe diverging learning curves (Figure 15), which echos with the reason mentioned in Section 4.1 why the Gradient Descent explanation fails to work in modern neural networks: modern neural networks have a very large spectrum norm at a local minimum, and even a small growth of learning rate can lead to divergence. In other words, training modern neural networks with GD must use a small enough learning rate, dismissing the value of learning rate decay.
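The following is a minimal sketch of the AutoDecay bookkeeping described in Appendix A.1 above: a bias-corrected exponential moving average of the observed loss plus a windowed stability test. The class interface, default constants, and the exact form of the stability check are assumptions for illustration, not the authors' implementation.

```python
from collections import deque

class AutoDecayMonitor:
    """Track a bias-corrected EMA of the observed training loss and report when it
    has plateaued, i.e. when its relative variation over a window W is below eta."""

    def __init__(self, beta=0.9, window=10, eta=0.01, eps=1e-8):
        self.beta, self.window, self.eta, self.eps = beta, window, eta, eps
        self.f = 0.0              # uncorrected moving average
        self.t = 0                # number of observations so far
        self.recent = deque(maxlen=window)

    def update(self, observed_loss):
        self.t += 1
        self.f = self.beta * self.f + (1.0 - self.beta) * observed_loss
        g_hat = self.f / (1.0 - self.beta ** self.t)      # bias correction
        self.recent.append(g_hat)
        return g_hat

    def is_stable(self):
        if len(self.recent) < self.window:
            return False
        lo, hi = min(self.recent), max(self.recent)
        return (hi - lo) / (abs(lo) + self.eps) < self.eta
```

A training loop would decay the learning rate whenever is_stable() returns True and terminate once decaying no longer yields a significant drop in the averaged loss, mirroring the two criteria above.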
We provide another novel explanation of learning rate decay: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns.
763
scitldr
We show how an ensemble of $Q^*$-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the $Q$-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark. Deep reinforcement learning seeks to learn mappings from high-dimensional observations to actions. Deep Q-learning BID15 ) is a leading technique that has been used successfully, especially for video game benchmarks. However, fundamental challenges remain, for example, improving sample efficiency and ensuring convergence to high quality solutions. Provably optimal solutions exist in the bandit setting and for small MDPs, and at the core of these solutions are exploration schemes. However these provably optimal exploration techniques do not extend to deep RL in a straightforward way. Bootstrapped DQN BID16 ) is a previous attempt at adapting a theoretically verified approach to deep RL. In particular, it draws inspiration from posterior sampling for reinforcement learning (PSRL, BID17 ; BID16), which has near-optimal regret bounds. PSRL samples an MDP from its posterior each episode and exactly solves Q *, its optimal Q-function. However, in high-dimensional settings, both approximating the posterior over MDPs and solving the sampled MDP are intractable. Bootstrapped DQN avoids having to establish and sample from the posterior over MDPs by instead approximating the posterior over Q *. In addition, bootstrapped DQN uses a multi-headed neural network to represent the Q-ensemble. While the authors proposed bootstrapping to estimate the posterior distribution, their empirical findings show best performance is attained by simply relying on different initializations for the different heads, not requiring the sampling-with-replacement process that is prescribed by bootstrapping. In this paper, we design new algorithms that build on the Q-ensemble approach from BID16. However, instead of using posterior sampling for exploration, we construct uncertainty estimates from the Q-ensemble. Specifically, we first propose the Ensemble Voting algorithm where the agent takes action by a majority vote from the Q-ensemble. Next, we propose the UCB exploration strategy. This strategy is inspired by established UCB algorithms in the bandit setting and constructs uncertainty estimates of the Q-values. In this strategy, agents are optimistic and take actions with the highest UCB. We demonstrate that our algorithms significantly improve performance on the Atari benchmark. We model reinforcement learning as a Markov decision process (MDP). We define an MDP as (S, A, T, R, p 0, γ), in which both the state space S and action space A are discrete, T: S × A × S → R + is the transition distribution, R: S × A → R is the reward function, assumed deterministic given the state and action, and γ ∈ is a discount factor, and p 0 is the initial state distribution. We denote a transition experience as τ = (s, a, r, s) where s ∼ T (s |s, a) and r = R(s, a). A policy π: S → A specifies the action taken after observing a state. We denote the Q-function for policy π as Q π (s, a):= E π ∞ t=0 γ t r t |s 0 = s, a 0 = a where r t = R(s t, a t). The optimal Q * -function corresponds to taking the optimal policy Q * (s, a):= sup π Q π (s, a)and satisfies the Bellman equation Q * (s, a) = E s ∼T (·|s,a) r + γ · max a Q * (s, a). 
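As a small illustration of the Bellman equation above, here is a sketch of Q-value iteration on a made-up tabular MDP; the transition and reward arrays are illustrative assumptions.

```python
import numpy as np

def q_value_iteration(T, R, gamma=0.99, iters=500):
    """Iterate Q(s,a) <- R(s,a) + gamma * sum_s' T(s'|s,a) max_a' Q(s',a') toward Q*.
    T has shape (S, A, S) with rows summing to 1; R has shape (S, A)."""
    S, A, _ = T.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        Q = R + gamma * np.einsum("sap,p->sa", T, Q.max(axis=1))
    return Q

# Toy 2-state, 2-action MDP.
T = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(q_value_iteration(T, R))     # the fixed point approximates Q*
```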
A notable early optimality result in reinforcement learning was the proof that an online Q-learning algorithm is guaranteed to converge to the optimal policy, provided that every state is visited an infinite number of times. However, the convergence of Watkins' Q-learning can be prohibitively slow in MDPs where ε-greedy action selection explores the state space randomly. Later work developed reinforcement learning algorithms with provably fast (polynomial-time) convergence (BID10; BID4). At the core of these provably-optimal learning methods is some exploration strategy, which actively encourages the agent to visit novel state-action pairs. For example, R-MAX optimistically assumes that infrequently-visited states provide maximal reward, and delayed Q-learning initializes the Q-function with high values to ensure that each state-action pair is chosen enough times to drive the value down. Since the theoretically sound RL algorithms are not computationally practical in the deep RL setting, deep RL implementations often use simple exploration methods such as ε-greedy and Boltzmann exploration, which are often sample-inefficient and fail to find good policies. One common approach to exploration in deep RL is to construct an exploration bonus, which adds a reward for visiting state-action pairs that are deemed to be novel or informative. In particular, several prior methods define an exploration bonus based on a density model or a dynamics model. Examples include VIME by BID9, which uses variational inference on the forward-dynamics model, as well as Tang et al., BID2, and BID8. While these methods yield successful exploration in some problems, a major drawback is that this exploration bonus does not depend on the rewards, so the exploration may focus on irrelevant aspects of the environment, which are unrelated to reward. Earlier works on Bayesian reinforcement learning include BID6 and BID7. BID6 studied Bayesian Q-learning in the model-free setting and learned the distribution of Q*-values through Bayesian updates. The prior and posterior specification relied on several simplifying assumptions, some of which are not compatible with the MDP setting. BID7 took a model-based approach that updates the posterior distribution of the MDP. The algorithm samples from the MDP posterior multiple times and solves the Q*-values at every step. This approach is only feasible for RL problems with very small state and action spaces. Posterior sampling for reinforcement learning (PSRL) instead takes a single sample of the MDP from the posterior in each episode and solves the Q*-values. Recent works including BID17 and BID16 established near-optimal Bayesian regret bounds for episodic RL. Another line of work models the environment and constructs an exploration bonus from the variance of model parameters. These methods are evaluated on low-dimensional problems only, because their computational cost is intractable for high-dimensional RL. Inspired by PSRL, but wanting to reduce the computational cost, prior work developed approximate methods, including randomized least-squares value iteration for linearly-parameterized value functions and its extension to Q-functions parameterized by deep neural networks. Bootstrapped DQN (BID16) maintains a Q-ensemble, represented by a multi-head neural net structure, to parameterize K ∈ N+ Q-functions. This multi-head structure shares the convolution layers but includes multiple "heads", each of which defines a Q-function Q_k. Bootstrapped DQN diversifies the Q-ensemble through two mechanisms.
The first mechanism is independent initialization. The second mechanism applies different samples to train each Q-function. These Q-functions can be trained simultaneously by combining their loss functions with the help of a random mask DISPLAYFORM0 where y Q k τ is the target of the kth Q-function. Thus, the transition τ updates Q k only if m k τ is nonzero. To avoid the overestimation issue in DQN, bootstrapped DQN calculates the target value y Q k τ using the approach of Double DQN , such that the current Q k (·; θ t) network determines the optimal action and the target network Q k (·; θ −) estimates the value DISPLAYFORM1 In their experiments on Atari games, BID16 set the mask m τ = (1, . . ., 1) such that all {Q k} are trained with the same samples and their only difference is initialization. Bootstrapped DQN picks one Q k uniformly at random at the start of an episode and follows the greedy action a t = argmax a Q k (s t, a) for the whole episode. Ignoring computational costs, the ideal Bayesian approach to reinforcement learning is to maintain a posterior over the MDP. However, with limited computation and model capacity, it is more tractable to maintain a posterior of the Q * -function. This motivates using a Q-ensemble as a particle filter-based approach to approximate the posterior over Q * -function and we display our first proposed method, Ensemble Voting, in Algorithm 1. DISPLAYFORM0 is parametrized with a deep neural network whose parameters are initialized independently at the start of training. Each Q k proposes an action that maximizes the Q-value according to Q k at every time step and the agent chooses the action by a majority vote DISPLAYFORM1 At each learning interval, a minibatch of transitions is sampled from the replay buffer and each Q k takes a Bellman update based on this minibatch. For stability, Algorithm 1 also uses a target network for each Q k as in Double DQN in the batched update. We point out that the difference among the parameters of the Q-ensemble {Q k} comes only from the independent random initialization. The deep neural network parametrization of the Q-ensemble introduces nonconvexity into the objective function of Bellman update, so the Q-ensemble {Q k} do not converge to the same Q-function during training even though they are trained with the same minibatches at every update. We also experimented with bagging by updating each Q k using an independently drawn minibatch. However, bagging led to inferior learning performance. This phenomenon that that bagging deteriorates the performance of deep ensembles is also observed in supervised learning settings. BID13 observed that supervised learning trained with deep ensembles with random initializations perform better than bagging for deep ensembles. BID12 used deep ensembles for uncertainty estimates and also observed that bagging deteriorated performance in their experiments. develop ensemble sampling for bandit problems with deep neural network parametrized policies and the theoretical justification. We derive a posterior update rule for the Q * function and approximations to the posterior update using ensembles in Appendix C. We note that in bootstrapped DQN, ensemble voting is applied for evaluation while Algorithm 1 uses ensemble voting during learning. In the experiments (Sec. 5), we demonstrate that Algorithm 1 is superior to bootstrapped DQN. The action choice of Algorithm 1 is exploitation only. In the next section, we propose our UCB exploration strategy. 
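Below is a hedged NumPy sketch (not the authors' implementation) of the ingredients just described: the majority-vote action choice of Algorithm 1, the per-head Double-DQN-style target with a bootstrap mask, and, previewing the next section, the UCB action rule of Algorithm 2. Array shapes and function names are assumptions for illustration.

```python
import numpy as np

def vote_action(q_values):
    """Majority vote over the ensemble.  q_values: (K, num_actions) array of
    Q_k(s_t, a); ties are broken by the lowest action index."""
    greedy = q_values.argmax(axis=1)                      # each head's proposed action
    return int(np.bincount(greedy, minlength=q_values.shape[1]).argmax())

def ensemble_targets(r, q_next_online, q_next_target, gamma=0.99):
    """Double-DQN-style target per head: the online head picks argmax_a', the
    target head evaluates it.  Inputs: scalars r, gamma and (K, num_actions) arrays."""
    a_star = q_next_online.argmax(axis=1)                 # (K,)
    K = q_next_online.shape[0]
    return r + gamma * q_next_target[np.arange(K), a_star]

def masked_loss(targets, q_sa, mask):
    """Combined loss sum_k m_k (y_k - Q_k(s,a))^2; a mask of all ones trains
    every head on the same sample, as done in the experiments."""
    return float(np.sum(mask * (targets - q_sa) ** 2))

def ucb_action(q_values, lam=0.1):
    """UCB rule of Algorithm 2: argmax_a  mean_k Q_k(s_t,a) + lam * std_k Q_k(s_t,a)."""
    return int(np.argmax(q_values.mean(axis=0) + lam * q_values.std(axis=0)))
```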
Algorithm 1 (Ensemble Voting), remaining loop body: pick an action a t by majority vote over {argmax a Q k (s t, a)} K k=1; execute a t; receive state s t+1 and reward r t from the environment; add (s t, a t, r t, s t+1) to replay buffer B; at each learning interval, sample a random minibatch and update {Q k}. In this section, we propose optimism-based exploration by adapting the UCB algorithms (BID1; BID0) from the bandit setting. The UCB algorithms maintain an upper-confidence bound for each arm, such that the expected reward from pulling each arm is smaller than this bound with high probability. At every time step, the agent optimistically chooses the arm with the highest UCB. BID1 constructed the UCB based on the empirical reward and the number of times each arm is chosen. BID0 incorporated the empirical variance of each arm's reward into the UCB, such that at time step t, an arm A t is pulled according to A t ∈ argmax i { r̂ i,t + sqrt(c 1 V̂ i,t log t / n i,t) + c 2 log t / n i,t }, where r̂ i,t and V̂ i,t are the empirical reward and variance of arm i at time t, n i,t is the number of times arm i has been pulled up to time t, and c 1, c 2 are positive constants. We extend the intuition of UCB algorithms to the RL setting. Using the outputs of the {Q k} functions, we construct a UCB by adding the empirical standard deviation σ̃(s t, a) of {Q k (s t, a)} K k=1 to the empirical mean μ̃(s t, a) of {Q k (s t, a)} K k=1. The agent chooses the action that maximizes this UCB, a t ∈ argmax a { μ̃(s t, a) + λ · σ̃(s t, a) }, where λ ∈ R + is a hyperparameter. We present Algorithm 2, which incorporates the UCB exploration. The hyperparameter λ controls the degree of exploration. In Section 5, we compare the performance of our algorithms on Atari games using a consistent set of parameters. Algorithm 2 (UCB exploration), loop body: pick an action according to a t ∈ argmax a { μ̃(s t, a) + λ · σ̃(s t, a) }; receive state s t+1 and reward r t from the environment, having taken action a t; add (s t, a t, r t, s t+1) to replay buffer B; at each learning interval, sample a random minibatch and update {Q k}. We evaluate the algorithms on each Atari game of the Arcade Learning Environment (BID3). We use the multi-head neural net architecture of BID16. We fix the common hyperparameters of all algorithms based on a well-tuned double DQN implementation, which uses the Adam optimizer (BID11) and different learning rate and exploration schedules compared to BID15. Appendix A tabulates the hyperparameters. The number of {Q k} functions is K = 10. Experiments are conducted on the OpenAI Gym platform (BID5) and trained with 40 million frames and 2 trials on each game. We take the following directions to evaluate the performance of our algorithms: (1) we compare Algorithm 1 against Double DQN and bootstrapped DQN; (2) we isolate the impact of UCB exploration by comparing Algorithm 2 with λ = 0.1, denoted as ucb exploration, against Algorithm 1, Double DQN, and bootstrapped DQN; (3) we compare Algorithm 1 and Algorithm 2 with the count-based exploration method of BID2; (4) we aggregate the comparison according to different categories of games, to understand when our methods are superior. In Appendix B, we tabulate detailed results that compare our algorithms, Ensemble Voting and ucb exploration, against prior methods. In TAB5, we tabulate the maximal mean reward in 100 consecutive episodes for Ensemble Voting, ucb exploration, bootstrapped DQN and Double DQN. Without exploration, Ensemble Voting already achieves higher maximal mean reward than both Double DQN and bootstrapped DQN in a majority of Atari games. Ensemble Voting performs better than Double DQN in 37 games out of the total 49 games evaluated, and better than bootstrapped DQN in 41 games.
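As a concrete reference for the action rule of Algorithm 2 above, the following is a minimal sketch of the UCB action choice over a Q-ensemble; the function name and interface are illustrative, and the default λ = 0.1 matches the setting used in the comparisons.

```python
import numpy as np

def ucb_action(q_ensemble, state, lam=0.1):
    """Optimistic action: empirical mean of {Q_k(s, a)} plus lam times its empirical
    standard deviation across the ensemble, maximized over actions."""
    q_values = np.stack([q(state) for q in q_ensemble])  # shape (K, num_actions)
    mu = q_values.mean(axis=0)
    sigma = q_values.std(axis=0)
    return int(np.argmax(mu + lam * sigma))
```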
ucb exploration achieves the highest maximal mean reward among these four algorithms in 30 games out of the total 49 games evaluated. Specifically, ucb exploration performs better than Double DQN in 38 out of 49 games evaluated, better than bootstrapped DQN in 45 games, and better than Ensemble Voting in 35 games. Figure 2 displays the learning curves of these five algorithms on a set of six Atari games. Ensemble Voting outperforms Double DQN and bootstrapped DQN. ucb exploration outperforms Ensemble Voting. In TAB6, we compare our proposed methods with the count-based exploration method A3C+ of BID2, based on their published results for A3C+ trained with 200 million frames. We point out that even though our methods were trained with only 40 million frames, much less than A3C+'s 200 million frames, UCB exploration achieves the highest average reward in 28 games, Ensemble Voting in 10 games, and A3C+ in 10 games. Our approach outperforms A3C+. Finally, to understand why and when the proposed methods are superior, we aggregate the comparison according to four categories: Human Optimal, Score Explicit, Dense Reward, and Sparse Reward. These categories follow the taxonomy in prior work. Table 4: Comparison of each method across different game categories. The Atari games are separated into four categories: human optimal, score explicit, dense reward, and sparse reward. In each row, we present the number of games in this category and the total number of games where each algorithm achieves the optimal performance according to TAB5; the game categories follow the taxonomy in TAB0. C APPROXIMATING BAYESIAN Q-LEARNING WITH Q-ENSEMBLES. In this section, we first derive a posterior update formula for the Q*-function under a full exploration assumption, and this formula turns out to depend on the transition Markov chain. Next, we approximate the posterior update with Q-ensembles {Q k} and demonstrate that the Bellman equation emerges as the approximate update rule for each Q k. An MDP is specified by the transition probability T and the reward function R. Unlike prior works outlined in Section 2.3, which learned the posterior of the MDP, we will consider the joint distribution over (Q*, T). Note that R can be recovered from Q* given T, so (Q*, T) determines a unique MDP. In this section, we assume that the agent samples (s, a) according to a fixed distribution. The corresponding reward r and next state s' given by the MDP append to (s, a) to form a transition τ = (s, a, r, s'), for updating the posterior of (Q*, T). Recall that the Q*-function satisfies the Bellman equation Q*(s, a) = r(s, a) + γ E s'∼T(·|s,a) [max a' Q*(s', a')]. Denote the joint prior distribution as p(Q*, T) and the posterior as p̃. We apply Bayes' formula to expand the posterior: DISPLAYFORM1 where Z(τ) is a normalizing constant and the second equality is because s and a are sampled randomly from S and A. Next, we calculate the two conditional probabilities in this expansion. First, DISPLAYFORM2 where the first equality is because, given T, Q* does not influence the transition. Second, DISPLAYFORM3 where 1{·} is the indicator function, and in the last equation we abbreviate it as 1(Q*, T).
Substituting FORMULA9 and FORMULA10 into FORMULA8, we obtain the joint posterior of Q* and T after observing an additional randomly sampled transition τ: DISPLAYFORM4 The exact Q*-posterior update is intractable in high-dimensional RL due to the large space of (Q*, T). Thus, we make several approximations to the Q*-posterior update. First, we approximate the prior of Q* by sampling K ∈ N + independently initialized Q*-functions {Q k} K k=1. Next, we update them as more transitions are sampled. The resulting {Q k} approximate samples drawn from the posterior. The agent chooses the action by taking a majority vote over the actions determined by each Q k. We derive the update rule for {Q k} after observing a new transition τ = (s, a, r, s'). At iteration i, given Q* = Q k,i (·; θ k) parametrized by θ k, the joint probability of (Q*, T) factors into DISPLAYFORM0 Substituting FORMULA12 into FORMULA11, we obtain the corresponding posterior for each Q k,i+1 at iteration i + 1 as DISPLAYFORM1 We update Q k,i to Q k,i+1 according to DISPLAYFORM2 We first derive a lower bound of the posterior p̃(Q k,i+1 |τ): DISPLAYFORM3 where we apply a limit representation of the indicator function in the third equation. The fourth equation is due to the bounded convergence theorem. The inequality is Jensen's inequality. The last equation replaces the limit with an indicator function. A sufficient condition for maximizing this lower bound of the posterior distribution is to ensure that the indicator function holds, so we can replace the exact update with the following update: DISPLAYFORM4 However, this update is not tractable because the expectation is taken with respect to the posterior p(T |Q k,i, τ) of the transition T. To overcome this challenge, we approximate the posterior update by reusing the single sampled next state s' from τ. Solving the exact minimum for each Q k,i+1 is impractical, so we take a gradient step on Q k,i+1 according to the following gradient: DISPLAYFORM5 where η is the step size. Instead of updating Q k after each transition, we use an experience replay buffer B to store observed transitions and sample a minibatch B mini of transitions (s, a, r, s') for each update. In this case, the batched update of each Q k,i to Q k,i+1 becomes a standard Bellman update: DISPLAYFORM6 In this section, we also study an "InfoGain" exploration bonus, which encourages agents to gain information about the Q*-function, and examine its effectiveness. We found it had some benefits on top of Ensemble Voting, but no uniform additional benefit once already using Q-ensembles on top of Double DQN. We describe the approach and our experimental findings. Similar to prior work, we define the information gain from observing an additional transition τ n as DISPLAYFORM0 where p̃(Q* |τ 1, . . ., τ n) is the posterior distribution of Q* after observing a sequence of transitions (τ 1, . . ., τ n). The total information gain is DISPLAYFORM1 Our Ensemble Voting, Algorithm 1, does not maintain the posterior p̃, so we cannot calculate the information gain explicitly. Instead, inspired by BID12, we define an InfoGain exploration bonus that measures the disagreement among {Q k}. Note that DISPLAYFORM2 where H(·) is the entropy. If H τ1,...,τ N is small, then the posterior distribution has high entropy and high residual information. Since {Q k} are approximate samples from the posterior, high entropy of the posterior leads to a large discrepancy among {Q k}.
Thus, the exploration bonus is monotonic with respect to the residual information in the posterior H(p̃(Q* |τ 1, . . ., τ N)). We first compute the Boltzmann distribution for each Q k, P T,k (a|s) = exp(Q k (s, a)/T) / Σ a' exp(Q k (s, a')/T), where T > 0 is a temperature parameter. Next, we calculate the average Boltzmann distribution P̄ T (a|s) = (1/K) Σ k P T,k (a|s). The InfoGain exploration bonus b T (s) is the average KL-divergence between each P T,k (·|s) and the average distribution P̄ T (·|s). The modified reward is r̃(s, a, s') = r(s, a) + ρ · b T (s'), where ρ ∈ R + is a hyperparameter that controls the degree of exploration. The exploration bonus b T (s t) encourages the agent to explore where the {Q k} disagree. The temperature parameter T controls the sensitivity to discrepancies among {Q k}. When T → +∞, the {P T,k} converge to the uniform distribution on the action space and b T (s) → 0. When T is small, the differences among {Q k} are magnified and b T (s) is large. We display Algorithm 3, which incorporates our InfoGain exploration bonus into Algorithm 2. The hyperparameters λ, T and ρ vary for each game. We demonstrate the performance of the combined UCB+InfoGain exploration in FIG4. We augment the previous figures in Section 5 with the performance of ucb+infogain exploration, where we set λ = 0.1, ρ = 1, and T = 1 in Algorithm 3. FIG4 shows that combining UCB and InfoGain exploration does not lead to uniform improvement in the normalized learning curve. At the individual game level, FIG4 shows that the impact of InfoGain exploration varies. UCB exploration achieves sufficient exploration in games including Demon Attack, Kangaroo, and Riverraid, while InfoGain exploration further improves learning on Enduro, Seaquest, and Up N Down. The effect of InfoGain exploration depends on the choice of the temperature T. The optimal temperature parameter varies across games. In FIG5, we display the behavior of ucb+infogain exploration with different temperature values. Thus, we see that the InfoGain exploration bonus, tuned with an appropriate temperature parameter, can lead to improved learning for games that require extra exploration, such as ChopperCommand, KungFuMaster, Seaquest, and UpNDown.
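For reference, the InfoGain bonus described above can be computed per state from the ensemble's Q-values as sketched below. Since the direction of the KL-divergence is not reproduced in the text, the sketch fixes one convention as an assumption; either choice vanishes as the heads come to agree.

```python
import numpy as np

def infogain_bonus(q_values, temperature=1.0, eps=1e-8):
    """Disagreement bonus b_T(s) for a single state s.

    q_values: array of shape (K, num_actions) holding Q_k(s, a) for each head k.
    Returns the average KL divergence from each head's Boltzmann distribution
    P_{T,k}(.|s) to the ensemble-average distribution (the KL direction is an assumption).
    """
    logits = np.asarray(q_values, dtype=float) / temperature
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)           # P_{T,k}
    avg = probs.mean(axis=0, keepdims=True)             # average Boltzmann distribution
    kl = (probs * (np.log(probs + eps) - np.log(avg + eps))).sum(axis=1)
    return float(kl.mean())
```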
Adapting UCB exploration to ensemble Q-learning improves over prior methods such as Double DQN and A3C+ on the Atari benchmark
764
scitldr
We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience-efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA) summarizing our results. A broad challenge in machine learning for control and robotics is to produce policies capable of general, flexible, and adaptive behavior of complex, physical bodies. To build policies that can effectively control simulated humanoid bodies, researchers must simultaneously overcome foundational challenges related to high-dimensional control, body balance, and locomotion. Recent progress in deep reinforcement learning has raised hopes that such behaviors can be learned end-to-end with minimal manual intervention. Yet, even though significant progress has been made thanks to better algorithms, training regimes, and computational infrastructure, the resulting behaviors still tend to exhibit significant idiosyncrasies (e.g., BID2). One advantage of working with humanoids in this context is that motion capture data is widely available and can serve to help design controllers that produce apparently humanlike movement. Indeed, recent developments are now allowing for the production of highly specialized expert policies which robustly, albeit narrowly, reproduce single motion capture clips (e.g., BID18; BID30). A remaining challenge on the way to truly flexible and general purpose control is to be able to sequence and generalize individual movements or "skills" in a task-directed manner. Achieving this goal requires not just the ability to acquire individual skills in the first place, but also an architecture and associated training procedure that supports representation, recruitment, and composition of a large number of skills. This paper presents a step in this direction. Specifically, the setting we focus on is one in which we have a large number of robust experts that perform single skills well, and we wish to transfer these skills into a shared policy that can do what each expert does as well as the expert, while also generalizing to unseen behaviors within the distribution of skills. To this end we design a system that performs one-shot imitation as well as permits straightforward reuse (or transfer) of skills. We require our approach to scale to a very large number of individual skills while also keeping manual intervention and oversight to a minimum. Our primary contribution is the development of a neural network architecture that can represent and generate many motor behaviors, which we refer to as neural probabilistic motor primitives.
This architecture is designed to perform one-shot imitation, while learning a dense embedding space of a large number of individual motor skills. Once trained, this module does not just reproduce individual behaviors in the training data, but can sequence and compose these behaviors in a controlled fashion as well as synthesize novel movements consistent with the training data distribution. Empirically, we also find that training controllers to reuse this learned motor primitive module for new tasks generates surprisingly human-like movement and the behavior generated seems to interpolate the space of behaviors well. In order to facilitate transfer and compression of expert skills at the scale of thousands of behaviors, we wish to avoid closed-loop RL training. We call the general, offline, functional transfer of policy content policy transfer or policy cloning and consider two approaches. The natural baseline approach involves the application of behavioral cloning to data gathered by executing experts many times, with noise, and logging intended expert actions, resembling the approach of BID16. This works well, as it ensures the student behaves like the expert not only along nominal expert rollouts but also at points arrived at by perturbing the expert. However, this approach may require many rollouts, which can be costly to obtain in many settings. As a more efficient alternative we therefore consider a second solution that operates by comprehensively transferring the functional properties of an expert to a student policy by matching the local noise-feedback properties along one or a small number of representative expert reference trajectories. We call this specific proposal linear feedback policy cloning (LFPC), and we demonstrate that it is competitive with behavioral cloning from many more rollouts in our setting. Recent efforts in RL for humanoid control build on a large body of research in robotics and animation. While contemporary for learning from scratch BID35 can be impressive the behaviors are not consistently human-like. Learning from motion capture (mocap) can provide strong constraints, especially for running BID29. Several recent approaches have demonstrated that it is possible to acquire specific behavioral skills, possibly jointly with external RL objectives BID30 BID17. At present, the policies produced tend to be restricted to single skills/behaviors and can require very large quantities of environment interactions, motivating us to seek methods which reuse existing single-skill expert policies. Knowledge transfer refers to the broad class of approaches which transfer the input-output functional mapping, to some extent or another, from a teacher (or expert) to a student BID12 BID36 BID8. Distillation connotes the transfer of function from one or more expert systems into a single student system often with the goal of compression or of combining multiple experts qualities BID12 BID28 BID33 BID40. Imitation learning is the control-specific term for the production of a student policy from either an expert policy or the behavioral demonstrations of an expert. One basic algorithm is behavioral cloning, which refers to supervised training of the policy from state-action pairs. In the most simple case it only requires examples from the expert. A broader setting is that in which more liberal queries to the expert are permitted; e.g. for the online-imitation setting as in DAGGER BID32. This setting is often satisfied e.g. if we wish to combine behavior from multiple experts. 
One-shot imitation is a concept which means that a trained system, at test time, can watch an example behavior and imitate it, as, for instance, in BID7. More similar to our work is the setting examined by, in which full-body humanoid movements were studied. Compared with this latter work, we will employ an architecture here that encourages imitation of motor details, rather than overall movement type, and we scale our approach to more expert demonstrations. The most similar work also demonstrates large-scale one-shot humanoid tracking and was contemporaneously published BID4; the approach they described involves direct tracking as well as failure recovery, but relative to our work the authors do not consider skill reuse. The notion of motor primitives is widespread in neuroscience, where there is evidence that lower dimensional control signals can selectively coordinate and blend behaviors produced by spinal circuits BID3, and that the cortex organizes the space of primitive motor behaviors BID9. In our setting, motor primitives refer to the reusable embedding space learned from many related behaviors and the associated context-modulable policy capable of generating sensory-feedback-stabilized motor behavior when executed in an environment. The particular architecture we consider is inspired by the formalization presented in BID41, which places a probabilistic latent bottleneck on the sensory-motor mapping. In the robotics literature, there is a rich line of research into various parameterizations of motion trajectories used for robot control. A class of these are referred to as "movement primitives" (e.g. BID34, including the "probabilistic movement primitives" of BID27 (see also e.g. BID26 . These approaches can be seen as specific implementation choices for a certain notion of motor primitive, which emphasize the parameterization and learning of movement trajectories from repeated demonstrations BID27 BID20, rather than learning the actuation/stabilization element, which is often handled by a prespecified PID controller. It has previously been recognized that linear-feedback policies can work well around optimal trajectories or limit cycles even for high DoF bodies. These can be obtained by sample-based optimization (e.g. BID6) or by differential dynamic programming BID24. For linear-quadratic-Gaussian control BID1 or differential dynamic programming BID19 BID13, we obtain feedback policies where the feedback terms are computed from the value function, amounting effectively to feedbackstabilized plans. Work by BID23 has shown that linear-feedback policies ing from trajectory optimization can be used to train neural networks. We employ a similar idea to transfer optimal behavior from an existing policy, observing that an optimal policy implicitly reflects the structure of the (local) value landscape and appropriately functions as a feedback controller. Figure 1: Examples of representative experts learned from motion capture. From top to bottom, these are "run and dodge", "cartwheel", "backflip", and "twist". See accompanying video. Note that these four behaviors will be used as representative examples for validation in single-skill transfer experiments. In this section, we will first briefly describe the expert policies used in this work (Sec. 2.1). We then describe the Neural Probabilistic Motor Primitive architecture and objective (Sec. 2.2). We then describe two approaches for training the module offline (Sec. 2.3). 
In order to study how to transfer and consolidate experts, we must be able to generate adequate quantities of expert data. For this work, we use expert policies trained to reproduce motion capture clips. The approach we use for producing experts is detailed more fully in BID22 and largely follows BID30. It yields time-indexed neural network policies that are robust to moderate amounts of action noise (see appendix A for additional details on the training procedure). Some examples of the ing single-skill time-indexed policies that are obtained from this procedure are depicted in FIG4. All our experts were trained in MuJoCo environments. Data We use the CMU Mocap database 1, which contains more than 2000 clips of varying lengths from more than 100 subjects. The motions in this dataset are quite varied, including many clips of walking, turning, running, jumping, dancing, various hand movements, and many more idiosyncratic behaviors. From this, we selected various clips of generic whole-body movements -any clips longer than 6 seconds were cut into smaller pieces yielding approximately 3000, roughly 2-6 second snippets. Just over half of these are generic locomotion such as walking, running, jumping and turning. The rest of the clips mostly contained diverse hand movements while standing. We trained one expert policy per selected snippet, yielding 2707 expert policies in our training set. Our goal is to obtain a motor primitive module that can flexibly and robustly deploy, sequence, and interpolate a diverse set of skills from a large database of reference trajectories without any manual alignment or other processing of the raw experts. This requires a representation that does not just reliably encode all behavioral modes but also allows effective indexing of behaviors for recall. To ensure plausible and reliable transitions it is further desirable that the encoding of similar behaviors should be close in some sense in the representation space. Compression of many expert skills via a latent variable inverse model We achieve this goal by training an autoregressive latent variable model of the state-conditional action sequence which, at training time, is conditioned on short look-ahead snippets of the nominal/reference trajectory (see FIG0). This architecture has the general structure of an inverse model, which produces actions based on the current state and a target. The architecture and training scheme are designed for the embedding space to reflect short-term motor behavior. As we demonstrate below, this allows for the selective execution of particular behavioral modes and also admits one-shot imitation via the trajectory encoder. We use a model with a latent variable z t at each time step, modelling the state conditional action distribution. The encoder and decoder are distributions q(z t |z t−1, x t) and π(a t |z t, s t) where s t is the state as in preceding sections and x t is concatenation of a small number of future states x t = [s t, ..., s t+K]. The encoder and decoder are MLPs with two and three layers, respectively. For architecture and experimental details see appendix B. The generative part of the model is given by: DISPLAYFORM0 Temporally nearby trajectory snippets should have a similar representation in the latent space. To implement this intuition, we choose an AR process as a weak prior: DISPLAYFORM1 where σ = √ 1 − α 2, ensuring that marginally z t ∼ N (0, I), and set α = 0.95 in experiments unless otherwise stated. 
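To make the role of the prior concrete, the sketch below samples a latent trajectory from the AR(1) prior described above with α = 0.95. It is a minimal sketch with illustrative names, not the training code.

```python
import numpy as np

def sample_latent_prior(T, dim, alpha=0.95, seed=0):
    """Sample z_1..z_T from the autoregressive prior z_t = alpha * z_{t-1} + sigma * eps_t,
    with sigma = sqrt(1 - alpha**2), so that each z_t is marginally N(0, I)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(1.0 - alpha ** 2)
    z = np.zeros((T, dim))
    z[0] = rng.standard_normal(dim)
    for t in range(1, T):
        z[t] = alpha * z[t - 1] + sigma * rng.standard_normal(dim)
    return z
```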
In subsequent efforts, it may be interesting to investigate different values of α and learnable priors. In order to train this model, we consider the evidence lower bound (ELBO): DISPLAYFORM2 with a β parameter to tune the weight of the prior. For β = 1 this objective forms the well-known variational lower bound to log p(a 1:T |s 1:T). This objective can be optimized using supervised learning (i.e. behavioral cloning from noisy rollouts) offline. Note we chose not to condition the encoder on actions, since we are interested in one-shot imitation in settings where actions are unobserved. We experimented with different values of K and obtained similar performance. All the reported in this paper use K = 5. Our architecture effectively implements a conditional information bottleneck between the desired future trajectory x t and the action a t given the past latent state z t−1 (similar to Alemi et al. FORMULA0). As discussed above the auto-correlated prior encourages an encoding in which temporally nearby latent states from the same trajectory tend to be close in the latent space, and the information bottleneck more generally encourages a limited dependence on x t with z t forming a compressed representation of the future trajectory as required for the action choice. When transferring knowledge from an expert policy to a student we would like the student to replicate the expert's behavior in the full set of states plausibly visited by the expert. In our case, experts trained to reproduce single clips can be conceptualized as nonlinear feedback controllers around a nominal trajectory, and the manifold of states visited by experts can be thought of as a tube around that reference. We require the student to be able to operate successfully in and remain close to this tube even in the face of small perturbations. Formally, to ensure that the student retains expert robustness, we would like expert actions µ E (s) and student actions µ θ (s) to be close under a plausible (noisy) expert state distribution ρ E. A surrogate loss used in imitation learning as well as knowledge transfer is the quadratic loss between actions (or activations BID36). DISPLAYFORM0 Behavioral cloning can refer to optimization of this objective, where ρ E is replaced with an empirical distribution of a set of state-action pairs S. This works well if S adequately covers the state distribution later experienced by the student. Anticipating and generating an appropriate set of states on which to train the student typically requires many rollouts and can thus be expensive. Since we are aiming to compress the behavior of thousands of experts we desire a computationally efficient method. We investigate two schemes that allow us to record the experts' state-action mappings on a small-sample estimate of the experts' state distributions and to then train the student via supervised learning. Both schemes are convenient to implement in a regular supervised learning pipeline and require neither querying many experts simultaneously (which limits scalability when dealing with thousands of experts) nor execution of the student at training time. Behavioral cloning from noisy rollouts The first approach amounts to simply gathering a number of noisy trajectories from the expert (either under a stochastic policy or with noise injection) while logging the optimal/mean action of the expert instead of the noisy action actually executed. A version of this is equivalent to the DART algorithm of BID16. We then perform behavioral cloning from that data. 
Specifically, given an expert policy π E, let µ E (s) be the mean action of the expert in state s. To obtain noisy rollouts, we run π η E, the expert with moderate action noise (η) to obtain a set of data {s DISPLAYFORM1 And we optimize the policy according to Eqn. 4, with the expectation over s ∼ ρ E being approximated by a sum over the set of state and expert-actions collected. While we expect this approach can work well, we do not expect it to be particularly efficient insofar as the expert may need to be executed for many rollouts. Linear-feedback policy cloning (LFPC) The second approach, which we refer to as linearfeedback policy cloning (LFPC), logs the action-state Jacobian as well as the expert action along a single nominal trajectory. The Jacobian can be used to construct a linear feedback controller which gives target actions in nearby perturbed states during training (described below). This approach is not intended to outperform behavioral cloning, as this should not be possible for arbitrary quantities of expert rollout data. Instead the motivation for LFPC is to do as well as behavioral cloning while using considerably fewer expert rollouts. As pointed out above, experts trained to reproduce single clips robustly can be thought of as nonlinear feedback controllers around this nominal trajectory. The nominal trajectory refers to the sequence of nominal state-action pairs {s t, a t} 1...T obtained by executing µ E (s) recursively from an initial point s 0. Since expert behavior in our setting is well characterized by single nominal trajectories, we expect we can capture the relevant behavior of the expert by a linearization around the nominal trajectory 3.Let δs be a small perturbation of the state and let J = dµ E (s) ds | s=s be the Jacobian. Then DISPLAYFORM2 This linearization induces a linear-feedback-stabilized policy that at each time-step has a nominal action a t, but also expects to be in state s t, and correspondingly adjusts the nominal action with a linear correction based on discrepancy between the nominal and actual state at time t: DISPLAYFORM3 We empirically validated that a linear feedback policy about the nominal trajectory of the expert can approximate the expert behavior reasonably well for clips we examine (see Fig. 3).Above we presented the expert as a feedback controller operating in a tube around some nominal trajectory with states s 1,..., s T, actions a 1,..., a T, and Jacobians J 1,..., J T. We approximate ρ E with the distribution of states introduced by state perturbations around this nominal trajectory: DISPLAYFORM4 However, this objective still requires expert evaluations at the perturbed states. Using the linearization described above we can replace the expert action µ E (s + δs) with the Jacobian-based linearfeedback policy µ F B (s + δs), which is available offline. This yields the LFPC objective: DISPLAYFORM5 One potentially important choice is the perturbation distribution ∆(s). Ideally, we would like ∆(s) to be the state-dependent distribution induced by physically plausible transitions, but estimating this distribution may require potentially expensive rollouts which we are trying to avoid. A cheaper object to estimate is the stationary transition noise distribution induced by noisy actions, which can be efficiently approximated from a small number of trajectories. Empirically, we found the objective 8 to be relatively robust to some variations in ∆, and we use a fixed marginal distribution for all clips. 
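A minimal sketch of the LFPC data-generation step just described is given below: it pairs perturbed states with linear-feedback-corrected target actions along the nominal trajectory, after which the student can be trained by ordinary supervised regression. The function names and the perturbation sampler are illustrative assumptions.

```python
import numpy as np

def lfpc_training_pairs(nominal_states, nominal_actions, jacobians, sample_delta, n_perturb=5):
    """Build (perturbed state, feedback-corrected action) pairs for supervised cloning.

    jacobians[t] is d mu_E / ds evaluated at the nominal state s_t (act_dim x state_dim);
    sample_delta() draws one state perturbation from the chosen distribution Delta.
    """
    inputs, targets = [], []
    for s_t, a_t, J_t in zip(nominal_states, nominal_actions, jacobians):
        for _ in range(n_perturb):
            ds = sample_delta()
            inputs.append(s_t + ds)
            targets.append(a_t + J_t @ ds)   # mu_FB(s_t + ds) = a_t + J_t ds
    return np.asarray(inputs), np.asarray(targets)
```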
Objective 8 bears interesting similarities to approaches such as denoising autoencoders BID43, where networks can learn to ignore local noise perturbations on inputs sampled from a high-dimensional noise distribution. Further, BID23 successfully distill feedback policies obtained from a planner. One question left open by this latter work is that of how much data might be required. Empirically we show in the experiments below that the augmented objective 8 can produce the desired robustness even from a very limited set of states. Figure 3: Comparisons of trajectory rollouts for 4 reference behaviors for the nominal trajectory and at varying noise levels. Note that the score is determined by similarity to motion-capture reference and the expert may be slightly suboptimal so slight improvements on the expert may arise by chance. There are multiple, relevant perspectives on LFPC. From one perspective, LFPC amounts to a data augmentation method. From another vantage, the approach attempts to match the mean action as well as the Jacobian at the set of relevant behavioral states, here sampled along the nominal trajectory. In settings where expert behavior is more diverse or multimodal, LFPC should be applied to states which representatively cover relevant behavioral modes or perhaps are expanded backwards from goal states (roughly similar to the procedure used to expand LQR-trees by Tedrake 2009). Explicit Jacobian matching has been proposed elsewhere, for example in. See appendix C for further disambiguation relative to other approaches. To train our Neural Probabilistic Motor Primitive architecture using LFPC we can adapt the objective in Eqn. 3 as follows: DISPLAYFORM6 log π(a t + J t δs t |s t + δs t, z t) + β log p z (z t |z t−1) − log q(z t |z t−1, x t + δx t), where δs t are i.i.d. perturbations drawn from suitable perturbation distribution ∆ and δx t is the concatenation of [δs t, δs t+1, ..., δs t+K]. To ground our in a simple setting, we begin with transfer of a single-skill, time-indexed policy from one network to another. We compare the performance of various time-indexed policies for each of the experts depicted in FIG4 Fig. 3 ). For additional validation, see appendix D.Figure 4: Performance relative to expert policies for trained neural probabilistic motor primitive models. Performance of model variations are compared on training and testing data. We compare models trained using cloning with 100 trajectories per expert for different levels of regularization, using a smaller latent space of dimension 20 rather than 60 in all other experiments, as well as LFPC. Having validated that single skills can be transferred, we next consider how well we can compress behaviors of the 2707 experts in our training set into the neural probabilistic motor primitive architecture. Assessing the models using the action-reconstruction loss is not very intuitive since it does not capture model behavior in the environment. Instead we report a more relevant measure based on expert imitation. Here we encode an expert trajectory into a sequence of latent variables and then execute the policy in the environment conditioned on this sequence. Note that this approach is openloop with respect to the latents while being closed-loop with respect to state. We can then compare the performance of the trained system against experts on training and held-out clips according to the tracking reward used to train the experts originally. 
To account for different expert reward scales we report performance relative to the expert policy. Importantly, that this approach works is itself a partial validation of the premise of this work, insofar as open-loop execution of action sequences usually trivially fails with minor perturbations. The trained neural probabilistic motor primitive system can execute behaviors conditioned on an open-loop noisy latent variable trajectory, implying that the decoder has learned to stabilize the body during latent-conditioned behavior. There are a few key takeaways from the comparisons we have run (see Fig. 4). Most saliently cloning based on 100 trajectories from each expert with a medium regularization value (β = 0.1) works best. LFPC with comparable parameters works less well here, but has qualitatively fairly similar performance. Our ablations show that regularization and a large latent space are important for good . We also set the autoregressive parameter α = 0 (.95 in other runs), making the latent variables i.i.d.. This hurts performance, validating our choice of prior. We have no expectation that trajectories well outside the training distribution are likely to be either representable by the encoder or executable by the decoder. Nevertheless, when one-shot imitation of a trajectory fails, a natural question is whether the decoder is incapable of expressing the desired actions, or the encoder fails to encode the trajectory in such a way that the decoder will produce it. Figure 5: These panels consist of visualizations of the PCA latent space with comparisons in this space between one-shot latent-variable sequences and optimized latent variable sequences for various behaviors: A. Run B. Backwards walking C. Jumping. Running executes well based on the one-shot trajectory so serves as a reference for which optimization is not noticeably different. Walking backwards and jumping one-shot imitations fail, but are noticeably improved by optimization. We propose an analysis to distinguish this for borderline cases. For held out trajectories that yield unsatisfying performance on one-shot imitation, we can simply optimize directly: DISPLAYFORM0 where µ θ is the decoder mean. Empirically we see that this optimization meaningfully improves the executed behavior, and we visualize the shift in a three-dimensional space given by the first three principal components in Fig. 5.We exhibit three examples where we visualize the original latent trajectory as well as the optimized latent trajectory. Performance is significantly improved (see supplementary video), showing the latent space can represent behaviors for which one-shot imitation fails. However execution remains imperfect suggesting that while much of the fault may lie with the encoder, the decoder still may be slightly undertrained on these relatively rare behavior categories. Quantitatively, among a larger set of clips with less than 50% relative expert performance for one-shot imitation we found that optimization as described above improved median relative expert performance from 43% to 78%.Other exploratory probes of the module suggest that it is possible in certain cases to obtain seamless transitioning between behaviors by concatenating latent-variable trajectories and running the policy conditioned on this sequence (e.g. in order to perform a sequence of turns). See additional supplementary video. Reuse of motor primitive module Finally, we experimented with reuse of the decoder as a motor primitive module. 
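Before describing the reuse experiments in detail, the direct latent optimization referred to above can be sketched as follows. Since the exact objective is not reproduced in the text, this sketch assumes a squared-error objective between the decoder's mean actions and the expert's actions along the clip; the names are illustrative.

```python
import torch

def refine_latents(decoder_mean, states, expert_actions, z_init, steps=200, lr=1e-2):
    """Optimize a latent sequence for a clip on which one-shot imitation fails.

    decoder_mean(z, states) returns the decoder's mean actions mu_theta for the whole clip.
    """
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((decoder_mean(z, states) - expert_actions) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()
```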
We treat the latent space as a new custom action space and train a new high-level (HL) policy to operate in this space. At each time-step the high-level policy outputs a latent variable z t. The actual action is then given by the motor primitive module p(a t |s t, z t). For training we used SVG (BID10) with the Retrace off-policy correction for learning the Q-function (BID25). A natural locomotion task that can challenge the motor module is one which requires abrupt, frequently redirected movement with sharp turns and changes of speed. To implement this we provide the higher-level controller with a target that is constant until the humanoid is near it for a few timesteps, at which point it randomly moves to another nearby location. While no single task will comprehensively probe the module, performing well in this task demands a wide range of quick locomotion behavior. With only a sparse task reward, the HL-controller can learn to control the body through the learned primitive space, and it produces rather humanlike task-directed movement. We observed that more regularized motor primitive modules had more stable initial behavior when connected to the untrained high-level controller (i.e., they were less likely to fall at the beginning of training). Compared to a very weakly regularized module (β = 0.001), more regularized motor primitive modules both trained faster and achieved higher final performance (see FIG2). FIG2: (a) Median return value across 10 seeds for the go-to-target task vs. learner steps; compared to a very weakly regularized module (β = 0.001), more regularized motor primitive modules both trained faster and achieved higher final performance. (b) Our model is able to track the target speed accurately; shown are the target speed and the actual speed in the egocentric forward direction for three episodes; the reward function is a Gaussian centered at the target speed, and the shaded region corresponds to ± one standard deviation. We also investigated a go-to-target task with bumpy terrain that is unobserved by the agent. The fact that our model can learn to solve this task demonstrates its robustness to unseen perturbations for which the motor primitive module was not explicitly trained. In another experiment we investigated a task in which the agent has to move at a random, changing target speed. This requires transitions between qualitatively different locomotion behaviors such as walking, jogging, and running (see FIG2). See an extended video of these experiments. In a final reuse experiment, we consider an obstacle course requiring the agent to jump across gaps (as in BID22). We were able to solve this challenging task with a high-level controller that operated using egocentric visual inputs (see the main supplementary video). We emphasize a few points about these results to underscore their importance: using a pretrained neural probabilistic motor primitives module, new controllers can be trained effectively from scratch on sparse-reward tasks; the resulting movements are visually rather humanlike without additional constraints, implying that the learned embedding space is well structured; and the module enables fairly comprehensive and smooth coverage for the purposes of physics-based control. In this paper we have described approaches for transfer and compression of control policies. We have exhibited a motor primitive module that learns to represent and execute motor behaviors for control of a simulated humanoid body.
Using either a variant of behavioral cloning or linear feedback policy cloning, we can train the neural probabilistic motor primitive system to perform robust one-shot imitation, and with the latter we can use relatively restricted data consisting of only single rollouts from each expert. While LFPC did not work quite as well in the full-scale model as cloning from noisy rollouts, we consider it remarkable that it is possible in our setting to transfer expert behavior using a single rollout. We believe LFPC holds promise insofar as it may be useful in settings where rollouts are costly to obtain (e.g., adaptation to real-world robotic applications), and there is room for further improvement, as we did not carefully tune certain parameters, most saliently the marginal noise distribution ∆. The resulting neural probabilistic motor primitive module is interpretable and reusable. We are optimistic that this kind of architecture could serve as a basis for further continual learning of motor skills. This work has been restricted to motor behaviors which do not involve interactions with objects and where a full set of behaviors is available in advance. Meaningful extensions of this work may attempt to greatly enrich the space of behaviors or demonstrate how to perform continual learning and reuse of new skills. The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217. We used the reparametrization trick (BID15; BID31) to train the model and used stochastic gradient descent with ADAM (BID14) with a learning rate of 0.0001. In the case of models trained on 100 trajectories per expert we used minibatches of 512 subsequences of length 30. For LFPC we sampled 32 subsequences of length 30 and produced 5 perturbed state sequences per subsequence. In preliminary experiments the length of the subsequences did not have a major impact on model performance. Firstly, we note that the emphasis of the proposal in this work is to match the responsivity of the expert policy in a neighborhood around each state. This is distinct from activation matching or KL matching, where the emphasis is on matching the action/activation distribution for a particular state (BID33; BID40). Secondly, we emphasize that the kind of robust knowledge transfer we discuss here is distinct from that which is seen to be important in other settings. For example, BID36 provide a line of reasoning that involves training a student system to match the exact activations of a teacher in the presence of perturbations on the student inputs. This logic is sound in the setting of large-scale vision systems. However, in the context of control policies, this would look like ℓ(θ) = E s∼ρ E, δs∼∆ ||µ E (s) − µ θ (s + δs)||². This essentially means that the student policy is learning to "blindly" reproduce the action of the expert exactly, despite input perturbations. While this is well motivated if the noise is thought to be orthogonal to the proper functioning of the system, it is a very bad idea for control, where one needs to pay close attention to small input perturbations. Technically, this amounts to setting the local feedback to zero and behaving in a sort of open-loop fashion. Locomotion behavior is, at least in the simplest case, roughly a limit cycle. In an additional experiment to test LFPC, we gathered three gait cycles of running behavior and performed LFPC. Note that here the student policy need not be time-indexed even when the demonstrations were time-indexed.
This restricted case shows striking generalization in the presence of noise (see the corresponding figure): the limited reference data originating from a time-indexed policy has been projected into the same space (green), and although the rollouts are considerably noisier and consistently deviate from the reference trajectory, the cloned-policy trajectories nevertheless return to the limit cycle.
Neural Probabilistic Motor Primitives compress motion capture tracking policies into one flexible model capable of one-shot imitation and reuse as a low-level controller.
765
scitldr
Data augmentation is a useful technique to enlarge the size of the training set and prevent overfitting for different machine learning tasks when training data is scarce. However, current data augmentation techniques rely heavily on human design and domain knowledge, and existing automated approaches are yet to fully exploit the latent features in the training dataset. In this paper we propose Parallel Adaptive GAN Data Augmentation (PAGANDA), where the training set adaptively enriches itself with sample images automatically constructed from Generative Adversarial Networks (GANs) trained in parallel. We demonstrate by experiments that our data augmentation strategy, with few model-specific considerations, can be easily adapted to cross-domain deep learning/machine learning tasks such as image classification and image inpainting, while significantly improving model performance in both tasks. Our source code and experimental details are available at https://github.com/miaojiang1987/k-folder-data-augmentation-gan/. Deep learning and machine learning models produce highly successful results when given sufficient training data. However, when training data is scarce, overfitting will occur and the resulting model will generalize poorly. Data augmentation (DA) ameliorates such issues by enlarging the original data set and making more effective use of the information in existing data. Much prior work has centered on data augmentation strategies based on human design, including heuristic data augmentation strategies such as crop, mirror, rotation and distortion (BID15; BID21), interpolating through labeled data points in feature spaces (BID5), and adversarial data augmentation strategies based on BID22 and BID8. These methods have greatly aided many deep learning tasks across several domains such as classification (BID15), image segmentation (BID24) and image reconstruction/inpainting (BID0). Despite their success, these DA methods generally require domain-specific expert knowledge, manual operations and extensive amounts of tuning depending on actual contexts (BID3; BID6). In particular, the need to directly operate on existing data with domain knowledge prevents many previous data augmentation strategies from being applicable to more general settings. To circumvent the need for specific domain knowledge in data augmentation, more recent work (BID1) utilizes generative adversarial networks (GANs) (BID10) to produce images that better encode features in the latent space of training data. By alternately optimizing the generator G and the discriminator D in the GAN, the GAN is able to produce images similar to the original data and effectively complement the training set. It has been shown in experiments (BID1) that GAN-based methods have indeed significantly boosted the performance of classifiers under limited data through automatic augmentation, but applications to other tasks are yet to be explored. Furthermore, given the computational complexity of GANs, a natural way to reduce runtime is to consider parallelism (BID13; BID7). In view of these considerations, we propose in this paper Parallel Adaptive Generative Adversarial Network Data Augmentation (PAGANDA), where the training set adaptively enriches itself with sample images automatically constructed from Generative Adversarial Networks (GANs) trained in parallel.
Our contributions can be summarized as follows: • We propose a general adaptive black-box data augmentation strategy to diversify and enhance training data, with no task-specific requirements. • We include in our model a novel K-fold parallel framework, which helps make the most of the existing data. • Experiments over various datasets and tasks demonstrate the effectiveness of our method in different contexts. Data Augmentation (DA). Previous work on data augmentation can be classified into several groups. Traditional heuristic DA strategies such as crop, mirror, rotation and distortion (BID15; BID21) have found their way into many deep classification tasks, but these methods generally require domain-specific expert knowledge, manual operations and extensive amounts of tuning depending on actual contexts (BID3; BID6). Other DA methods used interpolation through labeled data points in feature spaces (BID5), but their dependence on class labels makes them inapplicable for tasks with weak or no supervision. Adversarial data augmentation strategies (BID22; BID8) choose from a select number of transformation operations to maximize the loss function of the end classification model involved in the task. While they provide good motivation for our method, these approaches make strong assumptions about the types of augmentation and are difficult to generalize. BID18 and BID4 transform the problem of choosing data augmentation strategies into a reinforcement learning policy search problem, but the choice of augmentation methods is still limited and the reinforcement learning algorithms have non-trivial computational overhead in addition to the main task. ML problems with limited data. For classification with limited samples, BID19 proposed a convolutional neural network (CNN) to classify environmental sounds with limited samples. Other algorithms have been proposed (BID9; BID25), yet many of them have assumptions/constraints that hurt their capacity for generalization. For unsupervised learning models, recent research on sample complexity reduction in GAN training seeks to reparametrize the input noise using variational inference (BID12; BID16), but this method has severe mathematical limitations that prevent further generalization. BID23 adopts transfer learning techniques to train a new GAN for limited data from a pre-trained GAN network. While effective, this approach requires a pre-trained network in the first place and does not apply to cases where data is scarce. Parallel/Distributed GANs. BID13 and BID7 proposed the first distributed multi-discriminator generative adversarial models, yet these models require large datasets to train and have great computational complexity. In this section we describe the details of Parallel Adaptive Generative Adversarial Network Data Augmentation (PAGANDA). Our method consists of three interrelated components: generative data augmentation, parallel image generation with fold division, and adaptive weight adjustment. To make full use of the information contained in the existing images, the first part of our method involves generative data augmentation, which constructs varied images given the training set by repeatedly generating samples from, and adding samples to, the training set using a generative adversarial net. We start off with a limited training set, and consecutively run the generative adversarial net using the set.
After running a fixed number t of regular training epochs, we proceed to the augmentation epoch where the augmentation is conducted. During the augmentation epoch, we extract a number of sample images from the generator G using standard procedures of sample image generation as described in BID17 BID11. For this batch of samples, we calculate the Inception Score(IS) as defined by BID20 to measure the authenticity of the images generated, which we denote as w. Here the Inception Score provides a metric of the power of generator to produce realistic images: the higher the value of w, the more power the corresponding generator G. This batch of images are then added back into the original training set for subsequent augmentation epochs. We alternate running t regular training epochs and the augmentation epoch for a fixed number of times or until convergence. FIG0 is a flow-chart of our procedure. Notice that our procedure is agnostic to the specific architecture of generative adversarial net used to augment the training data. Since GANs capture the information in the latent feature space of the images and translate such information into generated images, our method has the capacity to reveal the potential features that are possibly not visually evident in the original training images. Moreover, compared with many other data augmentation strategies which require one to pre-define the operations to be carried on the images, our method automatically enriches the training set and does not require human intervention. The second part of our method consists of a parallel data generation strategy, inspired by K-fold cross validation in machine learning BID2. Dividing the training data into K folds at the beginning, we run in parallel K independent generators DISPLAYFORM0. Each generator G i is trained on one of data groups, and each data group i consists of K − 1 folds of the training set, except for the i-th fold. After images are generated in each generator G i in the training epochs, the sample images produced by each generator during the augmentation epoch are fed back into the respective training data groups. To allow for maximal usage of each generated image, we insert the images in a way such that the images generated by one generator G i are sent to the training data groups corresponding to all other K − 1 generators except for that corresponding to G i. This is to insure that the different generators in parallel have access to as many varied data pieces as possible in subsequent steps of training, so as to prevent overfitting and bolster the robustness of our strategy. FIG1 demonstrates our algorithm. Furthermore, to determine which generators are the most effective in generating authentic images, we introduce adaptive generator weighting at each augmentation epoch. At the initial stage, all the generators are treated equally. Before the batch of sample images generated by one generator G i are sent to the data group corresponding to other K − 1 generators, we collect the inception scores {w i} K i=1 computed in section 3.1. Since higher inception scores imply better performance of the generator, we define the generator weight p i of a generator G i as DISPLAYFORM0 and use this weight to determine how many images should be sampled from generator G i to be sent to other data groups for subsequent training in the very next augmentation epoch. 
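A minimal sketch of the alternation between regular training epochs and augmentation epochs described above is shown below. Here gan.train_epoch, gan.sample, and score_fn (which computes the Inception Score of a batch) are illustrative placeholders for whatever GAN implementation and IS estimator are in use, not names from our code.

```python
def train_with_augmentation(gan, train_set, score_fn, rounds=10, t_regular=5, n_samples=256):
    """Alternate t_regular regular GAN epochs with one augmentation epoch.

    score_fn(batch) should return the Inception Score w of a batch of generated images.
    """
    scores = []
    for _ in range(rounds):
        for _ in range(t_regular):
            gan.train_epoch(train_set)       # regular training epochs
        samples = gan.sample(n_samples)      # augmentation epoch: draw generated images
        w = score_fn(samples)                # authenticity weight w of this batch
        train_set.extend(samples)            # feed the generated images back into the training set
        scores.append(w)
    return scores
```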
When the total number of samples to be collected from generators are fixed, this method enables generators with better realistic image generation power to contribute more to the future training data groups. More realistic training sets thus augmented, in turn, exert more positive influence on the images to be generated. Note that all three strategies introduced go hand in hand, with no need for model specific considerations. As demonstrated by our experiments Section 4, training different GANs in parallel from different folds of data substantially boosts the quality of the training set and that of the generated images. To illustrate the effectiveness of PAGANDA for multiple machine learning tasks, we have applied our data augmentation method to two tasks: image classification and image inpainting. For image classification we constructed our dataset from Imagenet and Cifar-10 by randomly drawing 5000 images from each dataset respectively and applied PAGANDA on these reduced datasets. The augmented datasets are then used to train an AlexNet CNN classifier, and the classification are compared with the obtained from an AlexNet trained on the corresponding original unaugmented datasets. For image inpainting, we constructed our datasets from Places dataset. We chose images from the Ocean subset from Places to obtain the reduced Places Dataset. To ensure the parallelism of the experiments, we trained our model in a multi-threaded environment to make simultaneously training. Under such a setting, all the data groups are trained at the same time, and each GAN model corresponding to each data group is trained in a separate thread. All of our experiments are conducted on a server with Tesla-V GPU (32GB RAM, 7.8 TeraFLOPS) and Intel Xeon Processor E5 (2.00 GHz). For our experiments on classification, we first augment the reduced Cifar-10 and reduced Imagenet datasets, and then train the CNN classifier with the augmented dataset. The classifier accuracies with and without augmentation are listed in TAB0 below. For the task of inpainting, we augment the reduced dataset constructed in the experiment. Without loss of generality, we train a WGAN-GP model for inpainting from the augmented dataset. We then select testing images that are not selected in the training set, and add to them gray masks covering the center part of these images. We then applied our trained WGAN-GP to generate patches that cover the masked portion of the inpainting image. Figure 4 lists a couple of generated images with and without augmentation. Visual comparisons demonstrate the effectiveness of our method. In sum, our paper shows that PAGANDA effectively improves the performances for different machine learning tasks with little task-specific considerations. Our strategy is not only simple to implement, but also demonstrates capability to generate onto different settings since it does not require specific information about the task being analyzed. As a further step, we are investigating the relationship between our proposed approach and other established methods. We hope to apply our idea to other generative models such as VAE BID14 and further optimize our strategy using recent theoretical advances, and wish to investigate the scenarios where the tasks involved are interrelated. Application wise, we are aiming to apply our parallel GAN model to multi-modal image synthesis/generation where training data is limited.
We present an automated, adaptive data augmentation strategy that works for multiple different tasks.
766
scitldr
Deep neural networks with millions of parameters may suffer from poor generalizations due to overfitting. To mitigate the issue, we propose a new regularization method that penalizes the predictive distribution between similar samples. In particular, we distill the predictive distribution between different samples of the same label and augmented samples of the same source during training. In other words, we regularize the dark knowledge (i.e., the knowledge on wrong predictions) of a single network, i.e., a self-knowledge distillation technique, to force it output more meaningful predictions. We demonstrate the effectiveness of the proposed method via experiments on various image classification tasks: it improves not only the generalization ability, but also the calibration accuracy of modern neural networks. Deep neural networks (DNNs) have achieved state-of-the-art performance on many machine learning applications, e.g., computer vision , natural language processing , and reinforcement learning . As the scale of training dataset increases, the size of DNNs (i.e., the number of parameters) also scales up to handle such a large dataset efficiently. However, networks with millions of parameters may incur overfitting and suffer from poor generalizations (; . To address the issue, many regularization strategies have been investigated in the literature: early stopping, L 1 /L 2 -regularization , dropout , batch normalization and data augmentation Regularizing the predictive or output distribution of DNNs can be effective because it contains the most succinct knowledge of the model. On this line, several strategies such as entropy maximization and angular-margin based methods have been proposed in the literature. They can be also influential to solve related problems, e.g., network calibration , detection of out-of-distribution samples and exploration of the agent in reinforcement learning . In this paper, we focus on developing a new output regularizer for deep models utilizing the concept of dark knowledge , i.e., the knowledge on wrong predictions made by DNN. Its importance has been first evidenced by the so-called knowledge distillation and investigated in many following works (; ; ;). While the related works use the knowledge distillation (KD; Hinton et al. 2015) to transfer the dark knowledge learned by a teacher network to a student network, we regularize the dark knowledge itself during training a single network, i.e., self-knowledge distillation. Specifically, we propose a new regularization technique, coined class-wise self-knowledge distillation (CS-KD) that matches or distills the predictive distribution of DNNs between different samples of the same label (class-wise regularization) and augmented samples of the same source (sample-wise regularization) as shown in Figure 1. One can expect that the proposed regularization method forces DNNs to produce similar wrong predictions if samples are of the same class, while the conventional cross-entropy loss does not consider such consistency on the wrong predictions. We demonstrate the effectiveness of our regularization method using deep convolutional neural networks, such as ResNet and DenseNet trained for image classification tasks on various datasets including CIFAR-100 , TinyImageNet 1, CUB-200-2011 , Stanford Dogs , and MIT67 datasets. We compare or combine our method with prior regularizers. 
In our experiments, the top-1 error rates of our method are consistently smaller than those of prior output regularization methods such as angular-margin based methods and entropy regularization . In particular, the gain tends to be larger in overall for the top-5 error rates and the expected calibration errors , which confirms that our method indeed makes predictive distributions more meaningful. Moreover, we investigate a variant of our method by combining it with other types of regularization method for boosting performance, such as the mixup regularization and the original KD method. We improve the top-1 error rate of mixup from 37.09% to 31.95% and that of KD from 39.32% to 35.36% under ResNet trained by the CUB-200-2011 dataset. Our method is very simple to use, and would enjoy a broader usage in the future. In this section, we introduce a new regularization technique, named class-wise self-knowledge distillation (CS-KD). Throughout this paper, we focus on fully-supervised or classification tasks, and denote x ∈ X as an input and y ∈ Y = {1, ..., C} as its ground-truth label. Suppose that a softmax classifier is used to model a posterior distribution, i.e., given the input x, the predictive distribution is as follows:, where f = [f i] denotes the logit-vector of DNN, parameterized by θ and T > 0 is the temperature scaling parameter. We first consider matching the predictive distributions on samples of the same class, which distills their dark knowledge into the model itself. To this end, we propose a class-wise regularization loss that enforces consistent predictive distributions in the same class. Formally, given input x and another randomly sampled input x having the same label y, it is defined as follows: where KL denotes the Kullback-Leibler (KL) divergence and θ is a fixed copy of the parameters θ. As suggested by , the gradient is not propagated through θ to avoid Algorithm 1 Class-wise self-knowledge distillation (CS-KD) Initialize parameters θ. while θ has not converged do for (x, y) in a sampled batch do g θ ← 0 Get another sample x randomly which has the same label y from the training set. Generate x aug, x aug using data augmentation methods. Compute gradient: Update parameters θ using gradients g θ. end while the model collapsing issue. Similar to the knowledge distillation method (KD) by , L cls matches two predictions. While the original KD matches predictions of a sample from two networks, we do predictions of different samples from a single network. Namely, our method performs self-knowledge distillation. In addition to enforcing the intra-class consistency of predictive distributions, we apply this idea to the single-sample scenario by augmenting the input data. For a given training sample x, the proposed sample-wise regularization loss L sam is defined as follows: where x aug is an augmented input that is modified by some data augmentation methods, e.g., resizing, rotating, random cropping (; ;, cutout , and auto-augmentation . In our experiments, we use standard augmentation methods for ImageNet (i.e., flipping and random sized cropping) because they make training more stable. In summary, the total training loss L tot is defined as a weighted sum of the two regularization terms with cross-entropy loss as follows: where λ cls and λ sam are balancing weights for each regularization, respectively. Note that the first term is the cross-entropy loss of softmax outputs with temperature T = 1. 
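As an illustration of the training objective above, the following is a minimal PyTorch-style sketch of the class-wise loss L_cls, the sample-wise loss L_sam, and the total loss L_tot. The function names, the way the paired sample x' and the augmented view x_aug are passed in, and the use of .detach() to play the role of the fixed copy of the parameters are our own assumptions; any temperature-dependent rescaling of the gradient (e.g., a T^2 factor) is omitted since it is not specified in the text above.

import torch
import torch.nn.functional as F

def kd_kl(student_logits, target_logits, T):
    """KL( softened target || softened student ); the target branch is detached
    so no gradient flows through the fixed copy of the weights."""
    p_t = F.softmax(target_logits.detach() / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")

def cs_kd_loss(model, x, y, x_same_class, x_aug, T=4.0, lam_cls=1.0, lam_sam=1.0):
    """Total loss: cross-entropy (T = 1) plus the class-wise and sample-wise
    self-distillation terms."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)                 # standard softmax loss on the true label
    l_cls = kd_kl(logits, model(x_same_class), T)   # match another sample of the same label
    l_sam = kd_kl(logits, model(x_aug), T)          # match an augmented view of the same source
    return ce + lam_cls * l_cls + lam_sam * l_sam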
In other words, we not only train the true label, but also regularize the wrong labels. The full training procedure with the proposed loss L tot is summarized in Algorithm 1. Datasets. To demonstrate our method under general situations of data diversity, we consider various image classification tasks including conventional classification and fine-grained classification tasks. We use CIFAR-100 and TinyImageNet 2 datasets for conventional classification tasks, and CUB-200-2011 , Stanford Dogs , and MIT67 datasets for fine-grained classification tasks. Note that fine-grained image classification tasks have visually similar classes and consist of fewer training samples per class compared to conventional classification tasks. We sample 10% of the training dataset randomly as a validation set for CIFAR-100 and TinyImageNet and report the test accuracy based on the validation accuracy. For the fine-grained datasets, we report the best validation accuracy. Network architecture. We consider two state-of-the-art convolutional neural network architectures: ResNet and DenseNet . We use standard ResNet-18 with 64 filters and DenseNet-121 with growth rate of 32 for image size 224 × 224. For CIFAR-100 and TinyImageNet, we modify the first convolutional layer 3 with kernel size 3 × 3, strides 1 and padding 1, instead of the kernel size 7 × 7, strides 2 and padding 3, for image size 32 × 32. Evaluation metric. For evaluation, we measure the following metrics: • Top-1 / 5 error rate. Top-k error rate is the fraction of test samples for which the correct label is amongst the top-k confidences. We measured top-1 and top-5 error rates to evaluate the generalization performance of the models. • Expected Calibration Error (ECE). ECE approximates the difference in expectation between confidence and accuracy, by partitioning predictions into M equally-spaced bins and taking a weighted average of bins' difference of confidence and accuracy, i.e., ECE = • Recall at k (R@k). Recall at k is the percentage of test samples that have at least one example from the same class in k nearest neighbors on the feature space. To measure the distance between two samples, we use L 2 -distance between their average-pooled features in the penultimate layer. We compare the recall at 1 scores to evaluate intra-class variations of learned features. Hyper-parameters. All networks are trained from scratch and optimized by stochastic gradient descent (SGD) with momentum 0.9, weight decay 0.0001 and an initial learning rate of 0.1. The learning rate is divided by 10 after epochs 100 and 150 for all datasets and total epochs are 200. We set batch size as 128 for conventional, and 32 for fine-grained classification tasks. We use standard flips, random resized crops, 32 for conventional and 224 for fine-grained classification tasks, overall experiments. Furthermore, we set T = 4, λ cls = 1 for all experiments and λ sam = 1 for experiments on fine-grained classification tasks, and λ sam = 0 on conventional classification tasks. To compute expected calibration error (ECE), we set the number of bins M as 20. Baselines. We compare our method with prior regularization methods such as the state-of-the-art angular-margin based methods and entropy regularization . They also regularize predictive distributions as like ours. • AdaCos . 4 AdaCos dynamically scales the cosine similarities between training samples and corresponding class center vectors to maximize angular-margin. • Virtual-softmax . 
Virtual-softmax injects an additional virtual class to maximize angular-margin. • Maximum-entropy . Maximum-entropy is a typical entropy regularization, which maximizes the entropy of the predictive distribution. Note that AdaCos and Virtual-softmax regularize the predictive or output distribution of DNN to learn feature representation by reducing intra-class variations and enlarging inter-class margins. Comparison with output regularization methods. We measure the top-1 error rates of the proposed method (denoted by CS-KD) by comparing with Virtual-softmax, AdaCos, and Maximumentropy on various image classification tasks. Table 1 shows that CS-KD outperforms other baselines consistently. In particular, CS-KD improves the top-1 error rate of cross-entropy loss from 46.00% to 33.50% in the CUB-200-2011 dataset, while the top-1 error rates of other baselines are even worse than the cross-entropy loss (e.g., AdaCos in the CIFAR-100, Virtual-softmax in the MIT67, and Maximum-entropy in the TinyImageNet and the MIT67 under DenseNet). The imply that our method is more effective and stable than other baselines. Compatibility with other types of regularization methods. We investigate orthogonal usage with other types of regularization methods such as mixup and knowledge distillation (KD). Mixup utilizes convex combinations of input pairs and corresponding label pairs for training. We combine our method with mixup regularization by applying the class-wise regularization loss L cls to mixed inputs and mixed labels, instead of standard inputs and labels. Table 2 shows the effectiveness of our method combined with mixup regularization. Interestingly, this simple idea significantly improves the performances of fine-grained classification tasks. In particular, our method improves the top-1 error rate of mixup regularization from 37.09% to 31.95%, where the top-1 error rate of cross-entropy loss is 46.00% in the CUB-200-2011. KD regularizes predictive distributions of student network to learn the dark knowledge of a teacher network. We combine our method with KD to learn dark knowledge from the teacher and itself simultaneously. Table 3 shows that the top-1 error rate under using our method solely is close to that of KD, although ours do not use additional teacher networks. Besides, learning knowledge from a teacher network improves the top-1 error rate of our method from 39.32% to 35.36% in the CUB-200-2011 dataset. The show a wide applicability of our method, compatible to use with other regularization methods. One can expect that our method forces DNNs to produce meaningful predictions by reducing the intra-class variations. To verify this, we analyze feature embedding and various evaluation metrics, including the top-1, top-5 error, expected calibration error and R@1. In Figure 2, we visualize feature embedding of the penultimate layer from ResNet-18 trained with various regularization techniques by t-SNE in the CIFAR-100 dataset. One can note that intra-class variations are significantly decreased by our method (Figure 2f), while Virtualsoftmax (Figure 2b) and AdaCos (Figure 2c) only reduce the angular-margin. We also provide quantitative analysis on the feature embedding by measuring the R@1 values, which are related to intra-class variations. Note that the larger value of R@1 means the more reduced intra-class variations on the feature embedding . As shown in Table 4, R@1 values can be significantly improved when ResNet-18 is trained with our methods. 
In particular, R@1 of our method is 59.22% in the CUB-200-2011 dataset, while R@1 of Virtual-softmax and Adacos are 55.56% and 54.86%, respectively. Moreover, Table 4 shows the top-5 error rates of our method significantly outperform other regularization methods. Figure 3 and Table 4 show that our method enhances model calibration significantly, which also confirm that ours forces DNNs to produce more meaningful predictions. Table 4: Top-1 / 5 error, ECE, and Recall at 1 rates (%) of ResNet-18. The arrow on the right side of the evaluation metric indicates ascending or descending order of the value. We reported the mean and standard deviation over 3 runs with different random seed, and the best are indicated in bold. &; ) which show accuracy as a function of confidence, for ResNet-18 trianed on CIFAR-100 using (a) Cross-entropy, (b) Virtual-softmax, (c) AdaCos, and (d) Maximum-entropy. All methods are compared with our proposed method, CS-KD. Regularization techniques. Numerous techniques have been introduced to prevent overfitting of neural networks, including early stopping, weight decay, dropout , and batch normalization . Alternatively, regularization methods for the output distribution also have been explored: showed that label-smoothing, which is a mixture of the ground-truth and the uniform distribution, improves generalization of neural networks. proposed penalizing low entropy output distributions, which improves exploration in reinforcement learning and supervised learning. proposed a powerful data augmentation method called mixup, which works as a regularizer that can be utilized with smaller weight decay. We remark that our method enjoys orthogonal usage with the prior methods, i.e., our methods can be combined with prior methods to further improve the generalization performance. Knowledge distillation. Knowledge distillation is an effective learning method to transfer the knowledge from a powerful teacher model to a student. This pioneering work showed that one can use softmax with temperature scaling to match soft targets for transferring dark knowledge, which contains the information of non-target labels. There are numerous follow-up studies to distill knowledge in the aforementioned teacher-student framework. FitNets tried to learn features of a thin deep network using a shallow one with linear transform. introduced a transfer method that matches attention maps of the intermediate features, and tried to maximize the mutual information between intermediate layers of teacher and student for enhanced performance. proposed a loss function for matching Jacobian of the networks output instead of the feature itself. We remark that our method and knowledge distillation have a similar component, i.e., using a soft target distribution, but our method utilizes the soft target distribution from itself. We also remark that joint usage of our method and the prior knowledge distillation methods is effective. Margin-based softmax losses. There have been recent efforts toward boosting the recognition performances via enlarging inter-class margins and reducing intra-class variation. Several approaches utilized metric-based methods that measure similarities between features using Euclidean distances, such as triplet and contrastive loss . To make the model extract discriminative features, center loss and range loss were proposed to minimize distances between samples belong to the same class. 
COCO loss (b) and NormFace optimized cosine similarities, by utilizing reformulations of softmax loss and metric learning with feature normalization. applied ring loss for soft normalization which uses a convex norm constraint. More recently, angular-margin based losses were proposed for further improvement. Lsoftmax and A-softmax (a) combined angular margin constraints with softmax loss to encourage the model to generate more discriminative features. CosFace, AM-softmax and ArcFace introduced angular margins for a similar purpose, by reformulating softmax loss. Different from L-Softmax and A-Softmax, Virtual-softmax encourages a large margin among classes via injecting additional virtual negative class. In this paper, we discover a simple regularization method to enhance generalization performance of deep neural networks. We propose two regularization terms which penalizes the predictive distribution between different samples of the same label and augmented samples of the same source by minimizing the Kullback-Leibler divergence. We remark that our ideas regularize the dark knowledge (i.e., the knowledge on wrong predictions) itself and encourage the model to produce more meaningful predictions. Moreover, we demonstrate that our proposed method can be useful for the generalization and calibration of neural networks. We think that the proposed regularization techniques would enjoy a broader range of applications, e.g., deep reinforcement learning and detection of out-of-distribution samples .
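As a supplementary note on the evaluation above, the expected calibration error (ECE) used throughout can be computed with a short NumPy routine. This is only a sketch under the stated setting of M = 20 equally spaced confidence bins, and the function name is ours.

import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=20):
    """ECE = sum_m (|B_m| / n) * |acc(B_m) - conf(B_m)| over M equally spaced confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()     # accuracy within the bin
            conf = confidences[in_bin].mean()  # average confidence within the bin
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece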
We propose a new regularization technique based on self-knowledge distillation.
767
scitldr
In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the is open sourced and is available at https://github.com/salesforce/awd-lstm-lm. Effective regularization techniques for deep learning have been the subject of much research in recent years. Given the over-parameterization of neural networks, generalization performance crucially relies on the ability to regularize the models sufficiently. Strategies such as dropout BID33 and batch normalization BID13 have found great success and are now ubiquitous in feed-forward and convolutional neural networks. Naïvely applying these approaches to the case of recurrent neural networks (RNNs) has not been highly successful however. Many recent works have hence been focused on the extension of these regularization strategies to RNNs; we briefly discuss some of them below. A naïve application of dropout BID33 to an RNN's hidden state is ineffective as it disrupts the RNN's ability to retain long term dependencies BID40. BID7 propose overcoming this problem by retaining the same dropout mask across multiple time steps as opposed to sampling a new binary mask at each timestep. Another approach is to regularize the network through limiting updates to the RNN's hidden state. One such approach is taken by BID31 wherein the authors drop updates to network units, specifically the input gates of the LSTM, in lieu of the units themselves. This is reminiscent of zoneout BID20 where updates to the hidden state may fail to occur for randomly selected neurons. Instead of operating on the RNN's hidden states, one can regularize the network through restrictions on the recurrent matrices as well. This can be done either through restricting the capacity of the matrix BID0 BID39 BID14 or through element-wise interactions (; BID32 .Other forms of regularization explicitly act upon activations such as batch normalization BID13, recurrent batch normalization BID4, and layer normalization BID1 . These all introduce additional training parameters and can complicate the training process while increasing the sensitivity of the model. In this work, we investigate a set of regularization strategies that are not only highly effective but which can also be used with no modification to existing LSTM implementations. The weightdropped LSTM applies recurrent regularization through a DropConnect mask on the hidden-tohidden recurrent weights. 
Other strategies include the use of randomized-length backpropagation through time (BPTT), embedding dropout, activation regularization (AR), and temporal activation regularization (TAR).As no modifications are required of the LSTM implementation these regularization strategies are compatible with black box libraries, such as NVIDIA cuDNN, which can be many times faster than naïve LSTM implementations. Effective methods for training deep recurrent networks have also been a topic of renewed interest. Once a model has been defined, the training algorithm used is required to not only find a good minimizer of the loss function but also converge to such a minimizer rapidly. The choice of the optimizer is even more important in the context of regularized models since such strategies, especially the use of dropout, can impede the training process. Stochastic gradient descent (SGD), and its variants such as Adam BID18 and RMSprop BID36 are amongst the most popular training methods. These methods iteratively reduce the training loss through scaled (stochastic) gradient steps. In particular, Adam has been found to be widely applicable despite requiring less tuning of its hyperparameters. In the context of word-level language modeling, past work has empirically found that SGD outperforms other methods in not only the final loss but also in the rate of convergence. This is in agreement with recent evidence pointing to the insufficiency of adaptive gradient methods BID38.Given the success of SGD, especially within the language modeling domain, we investigate the use of averaged SGD (AvSGD) BID29 which is known to have superior theoretical guarantees. AvSGD carries out iterations similar to SGD, but instead of returning the last iterate as the solution, returns an average of the iterates past a certain, tuned, threshold T. This threshold T is typically tuned and has a direct impact on the performance of the method. We propose a variant of AvSGD where T is determined on the fly through a non-monotonic criterion and show that it achieves better training outcomes compared to SGD. We refer to the mathematical formulation of the LSTM, DISPLAYFORM0 where DISPLAYFORM1 are weight matrices, x t is the vector input to the timestep t, h t is the current exposed hidden state, c t is the memory cell state, and is element-wise multiplication. Preventing overfitting within the recurrent connections of an RNN has been an area of extensive research in language modeling. The majority of previous recurrent regularization techniques have acted on the hidden state vector h t−1, most frequently introducing a dropout operation between timesteps, or performing dropout on the update to the memory state c t. These modifications to a standard LSTM prevent the use of black box RNN implementations that may be many times faster due to low-level hardware-specific optimizations. We propose the use of DropConnect BID37 on the recurrent hidden to hidden weight matrices which does not require any modifications to an RNN's formulation. As the dropout operation is applied once to the weight matrices, before the forward and backward pass, the impact on training speed is minimal and any standard RNN implementation can be used, including inflexible but highly optimized black box LSTM implementations such as NVIDIA's cuDNN LSTM.By performing DropConnect on the hidden-to-hidden weight matrices DISPLAYFORM2 within the LSTM, we can prevent overfitting from occurring on the recurrent connections of the LSTM. 
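To illustrate the mechanics, the following is a minimal PyTorch sketch of an LSTM whose hidden-to-hidden matrix receives a single DropConnect mask per forward and backward pass. Note that this unrolls the recurrence in Python purely for clarity; the approach described above instead applies the mask to the recurrent weights of an optimized black-box LSTM implementation before calling it.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    """Single-layer LSTM whose hidden-to-hidden matrix receives one DropConnect
    mask per forward/backward pass (shared across all timesteps)."""
    def __init__(self, input_size, hidden_size, weight_dropout=0.5):
        super().__init__()
        self.hidden_size = hidden_size
        self.weight_dropout = weight_dropout
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x):  # x: (seq_len, batch, input_size)
        # one DropConnect mask on the recurrent weights for the whole sequence
        w_hh = F.dropout(self.w_hh, p=self.weight_dropout, training=self.training)
        h = x.new_zeros(x.size(1), self.hidden_size)
        c = x.new_zeros(x.size(1), self.hidden_size)
        outputs = []
        for x_t in x:
            gates = x_t @ self.w_ih.t() + h @ w_hh.t() + self.bias
            i, f, g, o = gates.chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs), (h, c)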
This regularization technique would also be applicable to preventing overfitting on the recurrent weight matrices of other RNN cells. As the same weights are reused over multiple timesteps, the same individual dropped weights remain dropped for the entirety of the forward and backward pass. The is similar to variational dropout, which applies the same dropout mask to recurrent connections within the LSTM by performing dropout on h t−1, except that the dropout is applied to the recurrent weights. DropConnect could also be used on the non-recurrent weights of the LSTM [W i, W f, W o] though our focus was on preventing overfitting on the recurrent connection. SGD is among the most popular methods for training deep learning models across various modalities including computer vision, natural language processing, and reinforcement learning. The training of deep networks can be posed as a non-convex empirical risk minimization problem DISPLAYFORM0 where f i is the loss function for the i th data point, w are the weights of the network, and the expectation is taken over the data. In this context, given a sequence of learning rates, γ k, SGD iteratively takes steps of the form DISPLAYFORM1 where the subscript denotes the iteration number and the∇ denotes a stochastic gradient that may be computed on a minibatch of data points. SGD demonstrably performs well in practice and also possesses several attractive theoretical properties such as linear convergence BID2, saddle point avoidance BID28 and better generalization performance BID10. For the specific task of neural language modeling, traditionally SGD without momentum has been found to outperform other algorithms such as momentum SGD BID34, Adam BID18, Adagrad BID5 and RMSProp BID36 ) by a statistically significant margin. Motivated by this observation, we investigate averaged SGD (AvSGD) to further improve the training process. AvSGD has been analyzed in depth theoretically and many surprising have been shown including its asymptotic second-order convergence BID29 BID22. AvSGD takes steps identical to equation FORMULA4 but instead of returning the last iterate as the solution, returns DISPLAYFORM2 where K is the total number of iterations and T < K is a user-specified averaging trigger. Despite its theoretical appeal, AvSGD has found limited practical use in training of deep networks. This may be in part due to unclear tuning guidelines for the learning-rate schedule γ k and averaging trigger T. If the averaging is triggered too soon, the efficacy of the method is impacted, and if it is triggered too late, many additional iterations may be needed to converge to the solution. In this section, we describe a non-monotonically triggered variant of AvSGD (NT-AvSGD), which obviates the need for tuning T. Further, the algorithm uses a constant learning rate throughout the experiment and hence no further tuning is necessary for the decay scheduling. Ideally, averaging needs to be triggered when the SGD iterates converge to a steady-state distribution BID22. This is roughly equivalent to the convergence of SGD to a neighborhood around a solution. In the case of SGD, certain learning-rate reduction strategies such as the stepwise strategy analogously reduce the learning rate by a fixed quantity at such a point. A common strategy employed in language modeling is to reduce the learning rates by a fixed proportion when the performance of the model's primary metric (such as perplexity) worsens or stagnates. 
Along the same lines, one could make a triggering decision based on the performance of the model on the Algorithm 1 Non-monotonically Triggered AvSGD (NT-AvSGD) Inputs: Initial point w 0, learning rate γ, logging interval L, non-monotone interval n. DISPLAYFORM3 while stopping criterion not met do 3:Compute stochastic gradient∇f (w k) and take SGD step. if mod(k, L) = 0 and T = 0 then 5:Compute validation perplexity v. if t > n and v > min l∈{0,···,t−n−1} DISPLAYFORM0 Append v to logs 10: DISPLAYFORM1 end if 12: DISPLAYFORM2 validation set. However, instead of averaging immediately after the validation metric worsens, we propose a non-monotonic criterion that conservatively triggers the averaging when the validation metric fails to improve for multiple cycles; see Algorithm 1. Given that the choice of triggering is irreversible, this conservatism ensures that the randomness of training does not play a major role in the decision. Analogous strategies have also been proposed for learning-rate reduction in SGD BID16.While the algorithm introduces two additional hyperparameters, the logging interval L and nonmonotone interval n, we found that setting L to be the number of iterations in an epoch and n = 5 worked well across various models and data sets. As such, we use this setting in all of our NTAvSGD experiments in the following section and demonstrate that it achieves better training outcomes as compared to SGD. In addition to the regularization and optimization techniques above, we explored additional regularization techniques that aimed to improve data efficiency during training and to prevent overfitting of the RNN model. Given a fixed sequence length that is used to break a data set into fixed length batches, the data set is not efficiently used. To illustrate this, imagine being given 100 elements to perform backpropagation through with a fixed backpropagation through time (BPTT) window of 10. Any element divisible by 10 will never have any elements to backprop into, no matter how many times you may traverse the data set. Indeed, the backpropagation window that each element receives is equal to i mod 10 where i is the element's index. This is data inefficient, preventing 1 10 of the data set from ever being able to improve itself in a recurrent fashion, and ing in 8 10 of the remaining elements receiving only a partial backpropagation window compared to the full possible backpropagation window of length 10.To prevent such inefficient data usage, we randomly select the sequence length for the forward and backward pass in two steps. First, we select the base sequence length to be seq with probability p and seq 2 with probability 1 − p, where p is a high value approaching 1. This spreads the starting point for the BPTT window beyond the base sequence length. We then select the sequence length according to N (seq, s), where seq is the base sequence length and s is the standard deviation. This jitters the starting point such that it doesn't always fall on a specific word divisible by seq or seq 2. From these, the sequence length more efficiently uses the data set, ensuring that when given enough epochs all the elements in the data set experience a full BPTT window, while ensuring the average sequence length remains around the base sequence length for computational efficiency. During training, we rescale the learning rate depending on the length of the ing sequence compared to the original specified sequence length. 
The rescaling step is necessary as sampling arbitrary sequence lengths with a fixed learning rate favors short sequences over longer ones. This linear scaling rule has been noted as important for training large scale minibatch SGD without loss of accuracy BID8 and is a component of unbiased truncated backpropagation through time BID35. In standard dropout, a new binary dropout mask is sampled each and every time the dropout function is called. New dropout masks are sampled even if the given connection is repeated, such as the input x 0 to an LSTM at timestep t = 0 receiving a different dropout mask than the input x 1 fed to the same LSTM at t = 1. A variant of this, variational dropout BID7, samples a binary dropout mask only once upon the first call and then to repeatedly use that locked dropout mask for all repeated connections within the forward and backward pass. While we propose using DropConnect rather than variational dropout to regularize the hidden-tohidden transition within an RNN, we use variational dropout for all other dropout operations, specifically using the same dropout mask for all inputs and outputs of the LSTM within a given forward and backward pass. Each example within the minibatch uses a unique dropout mask, rather than a single dropout mask being used over all examples, ensuring diversity in the elements dropped out. Following BID7, we employ embedding dropout. This is equivalent to performing dropout on the embedding matrix at a word level, where the dropout is broadcast across all the word vector's embedding. The remaining non-dropped-out word embeddings are scaled by 1 1−pe where p e is the probability of embedding dropout. As the dropout occurs on the embedding matrix that is used for a full forward and backward pass, this means that all occurrences of a specific word will disappear within that pass, equivalent to performing variational dropout on the connection between the one-hot embedding and the embedding lookup. Weight tying BID12 BID30 shares the weights between the embedding and softmax layer, substantially reducing the total parameter count in the model. The technique has theoretical motivation BID12 ) and prevents the model from having to learn a one-to-one correspondence between the input and output, ing in substantial improvements to the standard LSTM language model. In most natural language processing tasks, both pre-trained and trained word vectors are of relatively low dimensionality-frequently between 100 and 400 dimensions in size. Most previous LSTM language models tie the dimensionality of the word vectors to the dimensionality of the LSTM's hidden state. Even if reducing the word embedding size was not beneficial in preventing overfitting, the easiest reduction in total parameters for a language model is reducing the word vector size. To achieve this, the first and last LSTM layers are modified such that their input and output dimensionality respectively are equal to the reduced embedding size. L 2 -regularization is often used on the weights of the network to control the norm of the ing model and reduce overfitting. In addition, L 2 decay can be used on the individual unit activations and on the difference in outputs of an RNN at different time steps; these strategies labeled as activation regularization (AR) and temporal activation regularization (TAR) respectively BID25. AR penalizes activations that are significantly larger than 0 as a means of regularizing the network. 
Concretely, AR is defined as DISPLAYFORM0 where m is the dropout mask, L 2 (·) = · 2, h t is the output of the RNN at timestep t, and α is a scaling coefficient. TAR falls under the broad category of slowness regularizers BID11 BID6 BID21 BID15 which penalize the model from producing large changes in the hidden state. Using the notation from AR, TAR is defined as DISPLAYFORM1 where β is a scaling coefficient. As in BID25, the AR and TAR loss are only applied to the output of the final RNN layer as opposed to being applied to all layers. For evaluating the impact of these approaches, we perform language modeling over a preprocessed version of the Penn Treebank (PTB) BID27 ) and the WikiText-2 (WT2) data set. The Penn Treebank data set has long been a central data set for experimenting with language modeling. The data set is heavily preprocessed and does not contain capital letters, numbers, or punctuation. The vocabulary is also capped at 10,000 unique words, quite small in comparison to most modern datasets, which in a large number of out of vocabulary (OoV) tokens. WikiText-2 is sourced from curated Wikipedia articles and is approximately twice the size of the PTB data set. The text is tokenized and processed using the Moses tokenizer BID19, frequently used for machine translation, and features a vocabulary of over 30,000 words. Capitalization, punctuation, and numbers are retained in this data set. All experiments use a three-layer LSTM model with 1150 units in the hidden layer and an embedding of size 400. The loss was averaged over all examples and timesteps. All embedding weights were uniformly initialized in the interval [−0.1, 0.1] and all other weights were initialized between DISPLAYFORM0, where H is the hidden size. For training the models, we use the NT-AvSGD algorithm discussed in the previous section for 750 epochs with L equivalent to one epoch and n = 5. We use a batch size of 80 for WT2 and 40 for PTB. Empirically, we found relatively large batch sizes (e.g., 40-80) performed better than smaller sizes (e.g., 10-20) for NT-AvSGD. After completion, we run AvSGD with T = 0 and hot-started w 0 as a fine-tuning step to further improve the solution. For this fine-tuning step, we terminate the run using the same non-monotonic criterion detailed in Algorithm 1.We carry out gradient clipping with maximum norm 0.25 and use an initial learning rate of 30 for all experiments. We use a random BPTT length which is N with probability 0.95 and N with probability 0.05. The values used for dropout on the word vectors, the output between LSTM layers, the output of the final LSTM layer, and embedding dropout where (0.4, 0.3, 0.4, 0.1) respectively. For the weight-dropped LSTM, a dropout of 0.5 was applied to the recurrent weight matrices. For WT2, we increase the input dropout to 0.65 to account for the increased vocabulary size. For all experiments, we use AR and TAR values of 2 and 1 respectively, and tie the embedding and softmax weights. These hyperparameters were chosen through trial and error and we expect further improvements may be possible if a fine-grained hyperparameter search were to be conducted. In the , we abbreviate our approach as AWD-LSTM for AvSGD Weight-Dropped LSTM. We present the single-model perplexity for both our models (AWD-LSTM) and other competitive models in Table 1 and 2 for PTB and WT2 respectively 1. 
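Before turning to those results, a minimal sketch of the AR and TAR terms described above may be helpful: with m the dropout mask and h_t the output of the final RNN layer, AR penalizes α L2(m ⊙ h_t) and TAR penalizes β L2(h_t − h_{t+1}). The PyTorch snippet below follows a common implementation choice of using the mean of squared values for the L2 penalty; the tensor shapes and the function name are our own assumptions.

import torch

def ar_tar_loss(output, dropped_output, alpha=2.0, beta=1.0):
    """AR  = alpha * L2(m * h_t)        on the dropped output of the last RNN layer
       TAR = beta  * L2(h_t - h_{t+1})  on the undropped output"""
    # output, dropped_output: (seq_len, batch, hidden)
    ar = alpha * dropped_output.pow(2).mean()
    tar = beta * (output[1:] - output[:-1]).pow(2).mean()
    return ar + tar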
On both data sets we improve the state-of-the-art, with our vanilla LSTM model beating the state of the art by approximately 1 unit on PTB and 0.1 units on WT2.In comparison to other recent state-of-the-art models, our model uses a vanilla LSTM. BID41 propose the recurrent highway network, which extends the LSTM to allow multiple hidden state updates per timestep. BID42 use a reinforcement learning agent to generate an RNN cell tailored to the specific task of language modeling, with the cell far more complex than the LSTM.Independently of our work, BID23 apply extensive hyperparameter search to an LSTM based language modeling implementation, analyzing the sensitivity of RNN based language models to hyperparameters. Unlike our work, they use a modified LSTM, which caps the input gate i t to be min(1 − f t, i t), use Adam with β 1 = 0 rather than SGD or AvSGD, use skip connections between LSTM layers, and use a black box hyperparameter tuner for exploring models and settings. Of particular interest is that their hyperparameters were tuned individually for each data set compared to our work which shared almost all hyperparameters between PTB and WT2, including the embedding and hidden size for both data sets. Due to this, they used less model parameters than our model and found shallow LSTMs of one or two layers worked best for WT2.Like our work, BID23 find that the underlying LSTM architecture can be highly effective compared to complex custom architectures when well tuned hyperparameters are used. The approaches used in our work and BID23 may be complementary and would be worth exploration. In past work, pointer based attention models have been shown to be highly effective in improving language modeling BID9. Given such substantial improvements to the underlying neural language model, it remained an open question as to how effective pointer augmentation may be, especially when improvements such as weight tying may act in mutually exclusive ways. The neural cache model BID9 can be added on top of a pre-trained language model at negligible cost. The neural cache stores the previous hidden states in memory cells and then uses a simple convex combination of the probability distributions suggested by the cache and the language model for prediction. The cache model has three hyperparameters: the memory size (window) for the cache, the coefficient of the combination (which determines how the two distributions are mixed), and the flatness of the cache distribution. All of these are tuned on the validation set once a trained language model has been obtained and require no training by themselves, making it quite inexpensive to use. The tuned values for these hyperparameters were (2000, 0.1, 1.0) for PTB and (3785, 0.1279, 0.662) for WT2 respectively. In TAB1, we show that the model further improves the perplexity of the language model by as much as 6 perplexity points for PTB and 11 points for WT2. While this is smaller than the gains reported in BID9, which used an LSTM without weight tying, this is still a substantial drop. Given the simplicity of the neural cache model, and the lack of any trained components, these suggest that existing neural language models remain fundamentally lacking, failing to capture long term dependencies or remember recently seen words effectively. To understand the impact the pointer had on the model, specifically the validation set perplexity, we detail the contribution that each word has on the cache model's overall perplexity in TAB3. 
We compute the sum of the total difference in the loss function value (i.e., log perplexity) between the LSTM-only and LSTM-with-cache models for the target words in the validation portion of the WikiText-2 data set. We present for the sum of the difference as opposed to the mean since the latter undesirably overemphasizes infrequently occurring words for which the cache helps significantly and ignores frequently occurring words for which the cache provides modest improvements that cumulatively make a strong contribution. The largest cumulative gain is in improving the handling of <unk> tokens, though this is over 11540 instances. The second best improvement, approximately one fifth the gain given by the <unk> tokens, is for Meridian, yet this word only occurs 161 times. This indicates the cache still helps significantly even for relatively rare words, further demonstrated by Churchill, Blythe, or Sonic. The cache is not beneficial when handling frequent word categories, such as punctuation or stop words, for which the language model is likely well suited. These observations motivate the design of a cache framework that is more aware of the relative strengths of the two models. Several architectures for learning sequential data based on convolutions, instead of recurrences, have been proposed recently. We briefly mention experiments on the same language modeling us- ing quasi-recurrent neural networks (QRNNs) instead of LSTMs; we label this setup the AWD-QRNN. As in the case of AWD-LSTM, we regularize the network through weight, embedding and variational dropouts along with variable sequence lengths, weight tying, AR and TAR. The networks were designed such that they had the same number of parameters as their LSTM counterparts and were trained using NT-AvSGD. Despite the same size of the network, QRNNs were 2 − 4× faster per epoch as compared to their LSTM counterparts and required fewer epochs to converge. We report the in TAB4. As is evident from the table, the QRNN model achieves comparable to the LSTM suggesting the generality of the proposed regularization techniques. Interestingly, the hyperparameter values for the various regularization components, including the optimization procedure, needed minimal changes from the LSTM to the QRNN models for competitive performance. For full details and hyperparameters, refer to the released code. In The first two variants deal with the optimization of the language models while the rest deal with the regularization. For the model using SGD with learning rate reduced by 2 using the same nonmonotonic fashion, there is a significant degradation in performance. This stands as empirical evidence regarding the benefit of averaging of the iterates. Using a monotonic criterion instead also hampered performance. Similarly, the removal of the fine-tuning step expectedly also degrades the performance. This step helps improve the estimate of the minimizer by resetting the memory of the previous experiment. While this process of fine-tuning can be repeated multiple times, we found little benefit in repeating it more than once. The removal of regularization strategies paints a similar picture; the inclusion of all of the proposed strategies was pivotal in ensuring state-of-the-art performance. The most extreme perplexity jump was in removing the hidden-to-hidden LSTM regularization provided by the weight-dropped LSTM. Without such hidden-to-hidden regularization, perplexity rises substantially, up to 11 points. 
This is in line with previous work showing the necessity of recurrent regularization in state-of-the-art models BID7 BID12.We also experiment with static sequence lengths which we had hypothesized would lead to inefficient data usage. This also worsens the performance by approximately one perplexity unit. Next, we experiment with reverting to matching the sizes of the embedding vectors and the hidden states. This significantly increases the number of parameters in the network (to 43M in the case of PTB and 70M for WT2) and leads to degradation by almost 8 perplexity points, which we attribute to overfitting in the word embeddings. While this could potentially be improved with more aggressive regularization, the computational overhead involved with substantially larger embeddings likely outweighs any advantages. Finally, we experiment with the removal of embedding dropout, AR/TAR and weight decay. In all of the cases, the model suffers a perplexity increase of 2-6 points which we hypothesize is due to insufficient regularization in the network. In this work, we discuss regularization and optimization strategies for neural language models. We propose the weight-dropped LSTM, a strategy that uses a DropConnect mask on the hidden-tohidden weight matrices, as a means to prevent overfitting across the recurrent connections. Further, we investigate the use of averaged SGD with a non-monontonic trigger for training language models and show that it outperforms SGD by a significant margin. We investigate other regularization strategies including the use of variable BPTT length and achieve a new state-of-the-art perplexity on the PTB and WikiText-2 data sets. Our models outperform custom-built RNN cells and complex regularization strategies that preclude the possibility of using optimized libraries such as the NVIDIA cuDNN LSTM. We explore the use of a neural cache in conjunction with our proposed model and show that this further improves the performance, thus attaining an even lower state-of-the-art perplexity. We also explore the viability of using the proposed regularization and optimization strategies in the context of a quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the LSTM counterpart. While the regularization and optimization strategies proposed are demonstrated on the task of language modeling, we anticipate that they would be generally applicable across other sequence learning tasks.
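As a complement to the discussion of the optimizer, the non-monotonic averaging trigger of Algorithm 1 can be sketched in a few lines of Python. The interface below (a list of previously logged validation perplexities plus the current value) is our own framing; actually switching to averaged SGD once triggered is left to the training loop, e.g., by replacing the SGD optimizer with torch.optim.ASGD(params, lr, t0=0) as one common way to start averaging immediately.

def nt_asgd_should_trigger(logged_perplexities, current_perplexity, n=5):
    """Non-monotonic trigger of Algorithm 1: start averaging when the current
    validation perplexity is worse than the best perplexity recorded more than
    n logging intervals ago."""
    t = len(logged_perplexities)
    if t <= n:
        return False
    return current_perplexity > min(logged_perplexities[: t - n])

# Usage inside the training loop, once per logging interval L:
#   if not averaging and nt_asgd_should_trigger(logs, v, n=5):
#       averaging = True  # e.g., switch from SGD to averaged SGD here
#   logs.append(v)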
Effective regularization and optimization strategies for LSTM-based language models achieve SOTA on PTB and WT2.
768
scitldr
Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the'grandmother cell' units reported in recurrent neural networks. In order to generalize these , we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification. There have been recent attempts to understand how neural networks (NNs) work by analyzing hidden units one-at-a-time using various measures such as localist selectivity , class-conditional mean activity selectivity (CCMAS) , precision , network dissection (a), and activation maximization (AM) . These measures are all taken to provide evidence that some units respond highly selectively to categories of objects under some conditions. Not only are these findings surprising given the widespread assumption that NNs only learn highly distributed and entangled representations, they raise a host of questions, including the functional importance of these selective representations (b), the conditions in which they are learned (e.g.,), and the relation between these representations and the selective neurons observed in cortex . To answer these question, it is necessary to have a better understanding of what these metrics actually measure, and how they relate to one another. Accordingly, we directly compare these measures of selectivity on the same set of units as well as adopt standard signal-detection measures in an attempt to provide better measures of single-unit selectivity to object category. In addition, to provide a more intuitive assessment of selectivity, we report jitterplots for a few of the most selective units that visually display how the unit responds to the different image categories. We focus on AlexNet trained on ImageNet because many authors have studied the selectivity of single hidden units in this model using a range of quantitative (a; and qualitative (; ;) methods. But we also compare different selectivity measures on specific units in VGG-16 and GoogLeNet trained on the the ImageNet and Places-365 datasets that were characterized by Zhou et al. (2018a) as "object detectors" based on their Network Dissection method (a). Our main findings are: 1. The precision and CCMAS measures are misleading with near-maximum selectivity scores associated with units that strongly respond to many different image categories. 
By contrast, the signal-detection measures more closely capture the level of selectivity displayed in the jitterplots (Sec. 3.1). 2. Units with interpretable AM images do not correspond to highly selective representations (Sec. 3.2). 3. The Network Dissection method also provides a misleading measure for "object detectors" (Sec. 3.3). In one line of research, Bowers et al. (2014; assessed the selectivity of single hidden units in recurrent neural networks (RNNs) designed to model human short-term memory. They reported many'localist' or'grandmother cell' units that were 100% selective for specific letters or words, where all members of the selective category were more active than and disjoint from all non-members, as can be shown in jitterplots (see Fig. 1 for a unit selective to the letter 'j'). The authors argued that the network learned these representations in order to co-activate multiple letters or words at the same time in short-term memory without producing ambiguous blends of overlapping distributed patterns (the so-called 'superposition catastrophe'). Consistent with this hypothesis, localist units did not emerge when the model was trained on letters or words one-at-a-time (see Fig. 1 for an example of a non-selective unit). In parallel, researchers have reported selective units in the hidden layers of various CNNs trained to classify images into one of multiple categories (; ; ;), for a review see. For example, assessed the selectivity of units in the pool5 layer of two CNNs trained to classify images into 1000 objects and 205 scene categories, respectively. They reported many highly selective units that they characterized as'object detectors' in both networks. reported that CNNs trained on CIFAR-10 and ImageNet learned many highly selective hidden units, with CCMAS scores approaching the maximum of 1.0. These later findings appear to be inconsistent with who failed to observe selective representations in fully connected NNs trained on stimuli one-at-a-time (see Fig. 1), but the measures of selectivity that have been applied across studies are different, and accordingly, it is difficult to directly compare . A better understanding of the relation between selectivity measures is vital given that different measures are frequently used to address similar issues. For example, both the human interpretability of generated images and localist selectivity have been used to make claims about'grandmother cells', but it is not clear whether they provide similar insights into unit selectivity. Similarly, based on their precision metric, claim that the object detectors learned in CNNs play an important role in identifying specific objects, whereas challenge this based on their finding that units with high CCMAS measures were not especially important in the performance of their CNNs and concluded: "...it implies that methods for understanding neural networks based on analyzing highly selective single units, or finding optimal inputs for single units, such as activation maximization may be misleading". This makes a direct comparison between selectivity measures all the more important. 
In order to directly compare and have a better understanding of the different selectivity measures we assessed localist, precision, and CCMAS selectivity of units in the conv5, fc6, and fc7 layers of AlexNet trained on ImageNet, and in addition we employed a range of signal-detection methods on these units, namely recall with 100% and 95% precision, maximum informedness, specificity at maximum informedness, recall (also called sensitivity) at maximum informedness, and false alarm rates at maximum informedness (described in Sec. 2). We also assessed the selectivity of a few units in VGG-16 and GoogLeNet models trained on the ImageNet and Places-365 datasets that were highly selective according to the Network Dissection method (Zhou et al., 2018a). We show that the precision and CCMAS measures often provide misleadingly high estimates of object selectivity compared to other measures, and we do not find any units that can be reasonably described as 'object detectors' given that the most selective units show a low hit-rate or a high false-alarm rate (or both) when classifying images. At best, the most selective units in CNNs are sensitive to some unknown feature that is weakly associated with the class in question.
Figure 1 (caption, partially recovered): Top middle: jitterplot of a non-selective unit 160 found in an RNN trained on words one-at-a-time. Top right: activation maximization image of AlexNet unit conv5 9 that resembles a lighthouse. Bottom: highest-activation images for a 'lamp' detector with 84% precision in layer conv5 of AlexNet.
In addition to these quantitative measures and jitterplots we assessed selectivity with a common qualitative measure, namely, human interpretation of images generated by a state-of-the-art activation maximization (AM) method. AM images are generated to strongly activate individual units, and some of them are interpretable by humans (e.g., a generated image that looks like a lighthouse, see Fig. 1). For the first time, we systematically evaluated the interpretability of the AM images and compared these ratings with the selectivity measures for the corresponding units. We show that the few hidden units with interpretable AM images are not highly selective. Network and Dataset All ∼1.3M photos from the ImageNet ILSVRC 2012 dataset were cropped to 277 × 277 pixels and classified by the pre-trained AlexNet CNN shipped with Caffe, resulting in 721,536 correctly classified images. Once classified, the images are not re-cropped nor subject to any changes. We analyzed the fully connected (fc) layers: fc6 and fc7 (4096 units), and the top convolutional layer conv5 which has 256 filters. We only recorded the activations of correctly classified images. The activation files are stored in .h5 format and will be available at http://anonymizedForReview. We randomly selected 233 conv5, 2738 fc6, and 2239 fc7 units for analysis. Localist selectivity Following Bowers et al. (2014), we define a unit to be localist for class A if the set of activations for class A was higher than and disjoint from those of ¬A. Localist selectivity is easily depicted with jitterplots in which a scatter plot for each unit is generated (see Figs. 1 and 3). Each point in a plot corresponds to a unit's activation in response to a single image, and only correctly classified images are plotted. The level of activation is coded along the x-axis, and an arbitrary value is assigned to each point on the y-axis. Precision Precision refers to the proportion of items above some threshold from a given class.
The precision method of finding object detectors involves identifying a small subset of images that most strongly activate a unit and then identifying the critical part of these images that is responsible for driving the unit. In the original study, the 60 images that activated a unit the most strongly were selected and independent raters were asked to interpret the critical image patches (e.g., if 50 of the 60 images were labeled as 'lamp', the unit would have a precision index of 50/60 or 83%; see Fig. 1). Object detectors were defined as units with a precision score > 75%, and multiple such detectors were reported. Here, we approximate this approach by considering the 60 images that most strongly activate a given unit and assessing the highest percentage of images from a given output class. CCMAS A second line of work introduced a selectivity index called the Class-conditional Mean Activation Selectivity (CCMAS). The CCMAS for class A compares the mean activation of all images in class A, µ_A, with the mean activation of all images not in class A, µ_¬A, and is given by: CCMAS = (µ_A − µ_¬A) / (µ_A + µ_¬A). Here, we assessed class selectivity for the highest mean activation class. Activation maximization (AM) We harnessed an activation maximization method called Plug & Play Generative Networks, in which an image generator network was used to generate images (AM images) that highly activate a unit in a target network. We used the publicly released code and its default hyperparameters. We generated 100 separate images that maximally activated each unit in the conv5, fc6, and fc8 layers of AlexNet and asked participants to judge whether they could identify any repeating objects, animals, or places in the images in a behavioral experiment (Sec. 3.2). Readers can test themselves at: https://research.sc/participant/login/dynamic/63907FB2-3CB9-45A9-B4AC-EFFD4C4A95D5 Recall with perfect and 95% precision Recall with perfect and 95% precision are related to localist selectivity except that they provide a continuous rather than discrete measure. For recall with perfect precision we identified the image that activated a given unit the most and counted the number of images from the same class that were more active than all images from all other classes. We then divided this by the total number of correctly identified images from this class. A recall with perfect precision score of 1 is equivalent to a localist representation. Recall with 95% precision allows 5% false alarms. Maximum informedness Maximum informedness identifies the class and threshold where the highest proportion of images above the threshold and the lowest proportion of images below the threshold are from that class. The informedness is computed for each class at each threshold, with the highest value selected. Informedness summarises the diagnostic performance of a unit for a given class at a certain threshold based on the recall [True Positives / (True Positives + False Negatives)] and specificity [True Negatives / (True Negatives + False Positives)] in the formula [informedness = recall + specificity − 1]. Sensitivity or Recall at Maximum Informedness For the threshold and class selected by Maximum Informedness, recall (or hit-rate) is the proportion of items from the given class that are above the threshold. Also known as the true positive rate. Specificity at Maximum Informedness For the threshold and class selected by Maximum Informedness, the proportion of items that are not from the given class that are below the threshold. Also known as the true negative rate.
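To make the relationship between these measures concrete, the sketch below computes precision, CCMAS, localist selectivity, recall at perfect precision, and maximum informedness (with its associated recall, specificity, and false-alarm rate) for a single unit, given that unit's activations and the image labels. It is a minimal illustration of the definitions above written for clarity rather than speed; the function and variable names are our own and do not correspond to the original analysis code.

```python
import numpy as np

def unit_selectivity(acts, labels, top_k=60):
    """Compute several single-unit selectivity measures.

    acts:   1-D array of activations, one per (correctly classified) image.
    labels: 1-D array of class labels aligned with acts.
    """
    acts = np.asarray(acts, dtype=float)
    labels = np.asarray(labels)

    # Precision: proportion of the top_k most strongly activating images that
    # come from the single most common class among them.
    top = labels[np.argsort(acts)[::-1][:top_k]]
    _, counts = np.unique(top, return_counts=True)
    precision = counts.max() / top_k

    # CCMAS for the class with the highest mean activation.
    classes = np.unique(labels)
    means = np.array([acts[labels == c].mean() for c in classes])
    best = classes[means.argmax()]
    mu_a = acts[labels == best].mean()
    mu_not_a = acts[labels != best].mean()
    ccmas = (mu_a - mu_not_a) / (mu_a + mu_not_a)

    # Localist selectivity and recall with perfect precision, assessed for the
    # class of the single most active image.
    top_class = labels[acts.argmax()]
    in_class = acts[labels == top_class]
    other_max = acts[labels != top_class].max()
    localist = bool(in_class.min() > other_max)       # disjoint and higher than all non-members
    recall_perfect = (in_class > other_max).mean()

    # Maximum informedness: recall + specificity - 1, maximised over classes
    # and thresholds (brute-force scan for clarity).
    best_inf, best_stats = -np.inf, None
    for c in classes:
        pos = labels == c
        for thr in np.unique(acts):
            recall = (acts[pos] > thr).mean()
            specificity = (acts[~pos] <= thr).mean()
            inf = recall + specificity - 1
            if inf > best_inf:
                best_inf = inf
                best_stats = dict(cls=c, threshold=thr, recall=recall,
                                  specificity=specificity,
                                  false_alarm=1 - specificity)
    return dict(precision=precision, ccmas=ccmas, localist=localist,
                recall_at_perfect_precision=recall_perfect,
                max_informedness=best_inf, at_max_informedness=best_stats)
```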
False Alarm Rate at Maximum Informedness For the threshold and class selected by Maximum Informedness, the proportion of items that are not from the given class that are above the threshold. Network Dissection To assess the selectivity of a unit with the Network Dissection technique, Zhou et al. (2018a) compute the Intersection over Union (IoU) of an annotated input image L_c, for the set of all 'concepts' c, and a spatial activation map, M_k, of where a unit k is active. A unit k is taken as a detector for concept c if its IoU_k,c exceeds a pre-defined threshold T. See Zhou et al. (2018a) for more details. The results from the various selectivity measures applied to the conv5, fc6, and fc7 layers of AlexNet are displayed in Fig. 2a-i. We did not plot the localist selectivity as there were no localist 'grandmother units'. The first point to note is that multiple units in the fc6 and fc7 layers had near 100% precision scores and multiple units had CCMAS scores approaching 1. For example, in layer fc7, we found 14 units with a precision > 0.9, and 1487 units with a CCMAS > 0.9. The second point is that other measures provided much reduced estimates of selectivity. For example, the unit with the highest recall with a perfect precision score was only .08 (unit 255 responding to images of Monarch butterflies), and the unit with the top maximum informedness score (unit 3290, also responding to images of Monarch butterflies with a score of 0.91) had a false alarm rate above its optimal threshold of > 99% (indeed the minimum false alarm rate was 0.96). To illustrate the contrasting measures of selectivity, consider unit fc6 1199 depicted in Fig. 3.
Figure 2 (caption): Different selectivity measures across the conv5, fc6, and fc7 layers of AlexNet. Red line: median of the data; the top and bottom box edges are the 25th and 75th percentiles; whiskers extend to the extreme edges of the distribution not considered outliers, and red crosses are outliers. Green points and dashed lines are the means of the distributions with standard errors. The high levels of selectivity observed with the precision and CCMAS measures are in stark contrast with the low levels of selectivity observed with the recall with perfect precision and the high false-alarm rates at maximum informedness.
The signal-detection scores show that describing this unit as a Monarch butterfly detector is a mischaracterisation, given that the false alarm rate at maximum informedness was greater than 99% and the modal response to Monarch butterflies was zero. What level of selectivity is required before a unit can be considered an 'object detector' for a given category? In the end, this is a terminological point. On an extreme view, one might limit the term to the 'grandmother units' that categorize objects with perfect recall and specificity, or alternatively, it might seem reasonable to describe a unit as a detector for a specific object category if there is some threshold of activation that supports more hits than misses (the unit is strongly activated by the majority of images from a given category), and at the same time, supports more hits than false alarms (the unit is strongly activated by items from the given category more often than by items from other categories). Or perhaps a lower standard could be defended, but in our view, the term "object detector" suggests a higher level of selectivity than 8% recall at perfect precision. That said, our results show that some units respond strongly to some (unknown) features that are weakly correlated with an object category.
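For comparison with the measures above, the following sketch shows the core of the Network Dissection criterion as it is described here: a unit counts as a detector for a concept when the IoU between its binarized activation map and the concept's annotation mask exceeds a threshold T. Zhou et al. (2018a) additionally upsample the activation map and derive the binarization threshold from the unit's activation distribution; those details are omitted, so this is a simplified sketch with our own function names. The default IoU threshold of 0.04 is the value quoted later in the appendix for the 'bus detector' units.

```python
import numpy as np

def iou_score(act_map, concept_mask, act_threshold):
    """IoU between a unit's binarized activation map M_k and a concept mask L_c."""
    m_k = np.asarray(act_map) > act_threshold      # binarize where the unit is active
    l_c = np.asarray(concept_mask).astype(bool)
    intersection = np.logical_and(m_k, l_c).sum()
    union = np.logical_or(m_k, l_c).sum()
    return intersection / union if union > 0 else 0.0

def is_detector(act_map, concept_mask, act_threshold, iou_threshold=0.04):
    """A unit is taken as a detector for the concept if IoU_k,c exceeds the threshold T."""
    return iou_score(act_map, concept_mask, act_threshold) > iou_threshold
```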
For instance, unit fc6 1199 is responding to features that occur more frequently in Monarch Butterflies than in other categories. This can also be seen in a recent ablation study in which removing the most selective units tended to impair the CNN's performance in identifying the corresponding object categories more than other categories. But again, the pattern of performance is not consistent with the units being labeled 'object detectors'. Although the high precision score suggests that this unit is a butterfly detector, this is misleading given there are butterfly images over the entire activation range (including 0). Activation Maximization is one of the most commonly used interpretability methods for explaining what a single unit has learned in many artificial CNNs and even biological neural networks. Our behavioral experiment provides the first quantitative assessment of AM images and compares AM interpretability to other selectivity measures. We generated 100 AM images for every unit in the layers conv5, fc6, and fc8 in AlexNet and displayed them as 10 × 10 image panels. A total of 3,299 image panels were used in the experiment (995 fc8, 256 conv5, and 2048 randomly selected fc6 image panels) and were divided into 64 counterbalanced lists for testing. To assess the interpretability of these units as object detectors, 333 paid volunteers were asked to look at image panels and asked if the images had an object, animal, or place in common. If the answer was 'yes', they were asked to write down a generic name for that object (e.g. "fish" rather than "goldfish"). Analysis of common responses was done for any units where over 80% of humans agreed there was an object present. The results are summarized in Table 1. Not surprisingly, the AM images for output fc8 units are the most human-recognizable as objects across the AlexNet layers (71.2%; Table 1a). In addition, when they were given a consistent interpretation, they almost always (95.4%; Table 1d) matched the corresponding ImageNet category. By contrast, less than 5% of units in conv5 or fc6 were associated with consistently interpretable images (Table 1b), and the interpretations only weakly matched the category associated with the highest-activation images or CCMAS selectivity (Table 1d-e). Apart from showing that there are few interpretable units in the hidden layers of AlexNet, our findings show that the interpretability of images does not imply a high level of selectivity given the signal-detection scores (Fig. 2d-h). See Fig. 4 for an example of the types of images that participants rated as objects or non-objects.
Figure 4 (caption): Example AM images that were either judged by all participants to contain objects (a-c) or to be uninterpretable as objects (d-f). The human label for unit conv5 183 (a) was 'dogs'; the most active image was of a 'flat-coated retriever'; the CCMAS class was 'monitor'. For fc6 319 (b), subjects reported 'green peppers' or 'apples' (all classified as the same broad class in our analysis); both the most active item and the CCMAS class were 'Granny Smith apples'. For fc8 969 (c), humans suggested 'beverage' or 'drink'; both the most active item and the CCMAS class were 'eggnog'.
Thus far we have assessed the selectivity of hidden units in AlexNet and shown that no units can reasonably be characterized as object detectors despite the high precision and CCMAS scores of some units. This raises the question as to whether more recent CNNs learn object detector units.
In order to address this we display jitterplots for three units that have the highest IoU scores according to Network Dissection for the category BUS in (a) GoogLeNet trained on ImageNet, (b) GoogLeNet trained on Places-365, and (c) VGG-16 trained on Places-365, respectively (Zhou et al., 2018a). Models trained on the Places-365 dataset learn to categorize images into scenes (e.g., bedrooms, kitchens, etc.) rather than into object categories, and nevertheless, Zhou et al. (2018a) reported more object detectors in these Places-trained models. We illustrate the selectivity of the BUS category because it is an output category in ImageNet, so we can easily plot the jitterplots for these units. As was the case with AlexNet, the jitterplots show that the most selective units show some degree of selectivity, with the BUS images more active on average than non-bus images, and the percentage of nonzero activations for BUS higher than for the non-BUS categories (see Tables A3-A5 in the appendix for a summary of more units). But the units are no more selective than the units we observed in AlexNet. Indeed, the precision measure of selectivity for the first of these units is 0.0, none of the units reaches the precision of .75 that was the criterion for object detectors, and the CCMAS scores for the first two units were roughly similar to the mean CCMAS score for AlexNet units in conv5 (and much lower than the mean in fc6 and fc7). The most selective VGG-16 unit trained on Places-365 has lower precision and CCMAS scores than the Monarch Butterfly unit depicted in Figure 3. So again, different measures of selectivity support different conclusions, and even the most selective units are far from the selective units observed in recurrent networks as reported in Figure 1a. See Tables A3-A5 in the appendix for more details about these three units. Our central finding is that different measures of single-unit selectivity for objects support very different conclusions when applied to the same units in AlexNet. In contrast with the precision and CCMAS measures that suggest some highly selective units for objects in layers conv5, fc6, and fc7, the recall with perfect precision and false alarm rates at maximum informedness show low levels of selectivity. Indeed, the most selective units have a poor hit-rate or a high false-alarm rate (or both) for identifying an object class. The same outcome was observed with units in VGG-16 and GoogLeNet trained on either ImageNet or the Places-365 dataset. Not only do the different measures provide very different assessments of selectivity, the precision, CCMAS, and Network Dissection measures provide highly misleading estimates of selectivity that have led to mistaken conclusions. For example, unit fc6 1199 in AlexNet trained on ImageNet is considered a Monarch Butterfly detector, with a precision score of 98% (and a CCMAS score of .93). But the jitterplot in Fig. 3 and the signal-detection scores (e.g., the high false alarm rate at maximum informedness) show this is a mischaracterisation of this unit. In the same way, the Network Dissection method identified many object detectors in VGG-16 and GoogLeNet CNNs, but the jitterplots in Fig. 5 (and precision scores) show that this is unjustified. For additional problems with the CCMAS score see Figure A5 in Appendix C. Similarly, the images generated by Activation Maximization also provided a misleading estimate of selectivity given that interpretable images were associated with very low selectivity scores. This has led to confusions that have delayed theoretical progress.
For example, describing single units in CNNs as "object detectors" in response to high precision measures (Zhou et al.) suggests similar types of representations are learned in CNNs and RNNs. Indeed, we are not aware of anyone in the machine learning community who has even considered the hypothesis that selectivity is reduced in CNNs compared to RNNs. Our findings highlight the contrasting results. What should be made of the finding that localist representations are sometimes learned in RNNs (units with perfect specificity and recall), but not in AlexNet and related CNNs? The failure to observe localist units in the hidden layers of these CNNs is consistent with Bowers et al.'s claim that these units emerge in order to support the co-activation of multiple items at the same time in short-term memory. That is, localist representations may be the solution to the superposition catastrophe, and these CNNs only have to identify one image at a time. The pressure to learn highly selective representations in response to the superposition constraint may help explain the reports of highly selective neurons in cortex given that the cortex needs to co-activate multiple items at the same time in order to support short-term memory. Note, the RNNs that learned localist units were very small in scale compared to the CNNs we have studied here, and accordingly, it is possible that the contrasting results reflect the size of the networks rather than the superposition catastrophe per se. Relevant to this issue, a number of authors have reported the existence of selective units in larger RNNs with long short-term memory (LSTM) units, and some of them use the term 'grandmother cell' to describe the units they observed. It will be interesting to apply our measures of selectivity to these larger RNNs and see whether these units are indeed 'grandmother units'. It should also be noted that there are recent reports of impressively selective representations in Generative Adversarial Networks and Variational Autoencoders where the superposition catastrophe is not an issue. Again, it will be interesting to assess the selectivity of these units according to signal-detection measures in order to see whether there are additional computational pressures to learn highly selective representations or even grandmother cells. We will be exploring these issues in future work. One hundred generated images were made for every unit in layers conv5, fc6 and fc8 in AlexNet and displayed as 10 × 10 image panels (Figures A2, A3 and A4). A total of 3,299 image panels were used in the experiment (995 fc8, 256 conv5, and 2048 randomly selected fc6 image panels) and were divided into 64 counterbalanced lists of 51 or 52 panels (4 conv5, 15 or 16 fc8 and 32 fc6). Fifty-one of the lists were assigned to 5 participants and 13 lists were assigned to 6 participants. To test the interpretability of these units, paid volunteers were asked to look at image panels and asked if the images had an object, animal, or place in common. If the answer was 'yes', they were asked to name that object simply (i.e. fish rather than goldfish). Analysis of common responses was carried out for any units where over 80% of humans agreed there was an object present, by reading the human responses and comparing them both to each other and to the output classes. Agreement was recorded if the object was the same rough class.
For example, 'beer', 'glass', and 'drink' were all considered to be in agreement on the general object of 'drink', and in agreement with both the classes of 'wine glass' and 'beer' as these classes were also general drink classes (this is an actual example; most responses were more obvious and required far less interpretation than that). Participants were given six practice trials, each with panels of 20 images, before starting the main experiment. Practice trials included images that varied in their interpretability. Some examples of the 10 × 10 grids of activation maximisation images that were presented to participants are shown in Figures A2, A3 and A4. Figure A2 shows an example from conv5 that human participants agreed had no obvious object in common (although there are repeated shape motifs, the participants were specifically asked for objects, and not abstract concepts like shape or color). Figure A3 is also from conv5 and was judged by participants as containing some interpretable images, in this case of 'dogs'. Figure A4 shows the AM images for the supposed 'butterfly detector' unit discussed in the paper. Figure A2: Example activation maximisation images for unit conv5.65. These images were judged by humans to not contain any interpretable objects in common (although the reader may agree that there are some shape and colour similarities in the images). Figure A3: Example activation maximisation images for unit conv5.183. These images were judged by humans to contain some interpretable images, in this case of the type 'dogs'. Figure A4: Example activation maximisation images for unit fc6.1199. Whilst there are some butterfly wing shapes in these images, there are not obvious butterflies. N.B. the second highest activating class for this unit is ladybirds, and there are some orange round shapes that could conceivably be ladybug-alikes. B FURTHER DATA ON THE SELECTIVITY MEASURES ACROSS ALEXNET Table A1 gives the highest values of CCMAS and precision for each layer in AlexNet. It is worth noting that the highest CCMAS score of all hidden units was .94 (fc7.31), which at first glance suggests that this unit is close to 'perfect' selectivity. However, this unit only has a low precision score of 11%. In other words, although the mean activation for the given class is very high relative to the mean of all other activations (high CCMAS), the proportion of items from that class in the 100 most active items is low (low precision). See Appendix Sec. C for discussion of how this occurs and Fig. A5(a) for an illustrative example. Table A2 shows positive correlations between four of the selectivity measures used. There are moderate positive correlations between precision and CCMAS, and between precision and recall at 95% precision. The other pairs of selectivity measures have weak positive correlations. All four selectivity measures are negatively correlated with the number of classes present in the 100 most active items, that is, the more selective the unit, the fewer classes will be represented in the most active 100 items. The CCMAS measure is based on comparing the mean activation of a category with the mean activation for all other items, and this is problematic for a few reasons. First, in many units a large proportion of images do not activate a unit at all. For instance, our butterfly 'detector' unit fc6.1199 has a high proportion of images with an activation of 0.0 (see Figure 3).
Indeed, the inset on the middle figure shows that the distribution can be better described by exponential-derived fits than by a Gaussian. This means that the CCMAS selectivity is heavily influenced by the proportion of images that have an activation value of zero (or close to zero). This can lead to very different estimates of selectivity for CCMAS and precision or localist selectivity, which are driven by the most highly activated items. In Fig. A5 we generate example data to highlight ways in which the CCMAS score may be non-intuitive. In subplot (a) we demonstrate that a unit can have a CCMAS score of 1.0 despite only a single item activating the unit. The point that we wish to emphasise is that a high CCMAS score does not necessarily imply selectivity for a given class, but might in fact relate to selectivity for a small subset of items from a given class, and this is especially true when a unit's activation is sparse (many items do not activate the unit). However, the reverse can also be true. In subplot (c) we demonstrate that a unit can have a very low CCMAS score of .06 despite all of the most active items being from the same class. In addition, if the CCMAS provided a good measure of a unit's class selectivity, then one should expect that a high measure of selectivity for one class would imply that the unit is not highly selective for other classes. However, the CCMAS score for the most selective category and the second most selective category (CCMAS2) were similar across the conv5, fc6 and fc7 layers, with mean CCMAS scores of .491, .844, and .848, and CCMAS2 scores of .464, .821, and .831. For example, unit fc7.0 has a CCMAS of .813 for the class 'maypole', and a CCMAS2 score of .808 for 'chainsaw' (with neither of these categories corresponding to 'orangutan', which had the highest precision score of 14%). To investigate units claimed by Zhou et al. (2018a) to be object detectors, we focus on units from a single layer that are reported to be 'bus detectors', that is, units with an IoU ≥ .04. We used the first 100 images per class from the ImageNet 2012 dataset as our test data. There are three classes of bus in this dataset: 'n04146614 school bus', 'n04487081 trolleybus, trolley coach, trackless trolley', and 'n03769881 minibus', and this corresponded to 300 items out of 100,000 images. Data for all bus detector units for VGG trained on Places-365 are shown in Table A3, for GoogLeNet trained on Places-365 in Table A4, and for GoogLeNet trained on ImageNet in Table A5. Note that for all units there are very few busses with an activation of zero and that the mean activation for busses is higher than the mean activation for non-busses. However, all precision scores are below .6, meaning that of the 100 items that most strongly activated the unit, at least 40 of them were not busses. Together these results suggest that whilst these units demonstrate some sensitivity to busses, they show poor specificity for busses (e.g., a high false-alarm rate). Table A5: Selectivity measures for GoogLeNet, trained on ImageNet, layer inception4e units identified by Zhou et al. (2018a) as object detectors. Standard errors are not shown for space, but were below ±2. A unit is marked as correct if there was a single bus in the 4 example pictures on the website (http://netdissect.csail.mit.edu/dissect/googlenet_imagenet/), and false if not. This might suggest that the units were responding to 'bus-like' features in non-bus objects.
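The sparsity issue discussed above is easy to reproduce with synthetic data. The short sketch below constructs one unit for which a single active image yields a CCMAS of 1.0, and another unit whose most active images all come from one class (so precision is at its maximum) yet whose CCMAS is only about .06, mirroring subplots (a) and (c). The numbers are illustrative only and do not correspond to any real unit.

```python
import numpy as np

def ccmas(acts, labels, cls):
    mu_a = acts[labels == cls].mean()
    mu_not_a = acts[labels != cls].mean()
    return (mu_a - mu_not_a) / (mu_a + mu_not_a)

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)             # 10 classes, 100 images each

# Case (a): only a single image (from class 0) activates the unit at all -> CCMAS = 1.0.
acts = np.zeros(labels.size)
acts[0] = 5.0
print(ccmas(acts, labels, cls=0))                  # prints 1.0

# Case (c): every class-0 image is more active than every other image, so the
# 100 most active items are all class 0 (precision = 1.0), yet the class-conditional
# means are close and CCMAS is only about 0.06.
acts = rng.uniform(1.7, 2.0, size=labels.size)     # broad response to every class
acts[labels == 0] = rng.uniform(2.0, 2.2, size=100)
print(ccmas(acts, labels, cls=0))                  # roughly 0.06
```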
Looking for object detectors using many different selectivity measures; CNNs are slightly selective, but not enough to be termed object detectors.
769
scitldr
The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA. The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure. The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant. Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time. DNA is perceived as a sequence over the letters {A,C,G,T}, the alphabet of nucleotides. This sequence constitutes the code that acts as a blueprint for all processes taking place in a cell. But beyond merely reflecting primary sequence, DNA is a molecule, which implies that DNA assumes spatial structure and shape. The spatial organization of DNA is achieved by integrating ("recruiting") other molecules, the histone proteins, that help to assume the correct spatial configuration. The combination of DNA and helper molecules is called chromatin; the spatial configuration of the chromatin, finally, defines the functional properties of local areas of the DNA BID9. Chromatin can assume several function-defining epigenetic states, where states vary along the genome BID12. The key determinant for spatial configuration is the underlying primary DNA sequence: sequential patterns are responsible for recruiting histone proteins and their chemical modifications, which in turn give rise to or even define the chromatin states. The exact configuration of the chromatin and its interplay with the underlying raw DNA sequence are under active research. Despite many enlightening recent findings (e.g. BID6; BID11), comprehensive understanding has not yet been reached. Methods that predict chromatin-related states from primary DNA sequence are thus of utmost interest. In machine learning, many prediction methods are available, of which deep neural networks have recently been shown to be promising in many applications BID17. Also in biology deep neural networks have been shown to be valuable (see BID3 for a review). Although DNA is primarily viewed as a sequence, treating genome sequence data as just a sequence neglects its inherent and biologically relevant spatial configuration and the resulting interaction between distal sequence elements. We hypothesize that a deep neural network designed to account for long-term interactions can improve performance. Additionally, the molecular spatial configuration of DNA suggests the relevance of a higher-dimensional spatial representation of DNA. However, due to the lack of comprehensive understanding with respect to the structure of the chromatin, sensible suggestions for such higher-dimensional representations of DNA do not exist. One way to enable a neural net to identify long-term interactions is the use of fully connected layers. However, when the number of input nodes to the fully connected layer is large, this comes with a large number of parameters. We therefore use three other techniques to detect long-term interactions. First, most convolutional neural networks (CNNs) use small convolution filters.
Using larger filters already at an early stage in the network allows for early detection of long-term interactions without the need of fully connected layers with a large input. Second, a deep network similar to the ResNet BID14 or Inception BID27 network design prevents features found in early layers from vanishing. Also, they reduce the size of the layers such that the final fully connected layers have a smaller input and don't require a huge number of parameters. Third, we propose a novel kind of DNA representation by mapping DNA sequences to higher-dimensional images using space-filling curves. Space-filling curves map a 1-dimensional line to a 2-dimensional space by mapping each element of the sequence to a pixel in the 2D image. By doing so, proximal elements of the sequence will stay in close proximity to one another, while the distance between distal elements is reduced. The space-filling curve that will be used in this work is the Hilbert curve, which has several advantages. (i): [Continuity] Hilbert curves optimally ensure that the pixels representing two sequence elements that are close within the sequence are also close within the image BID4 BID1. (ii): [Clustering property] Cutting out rectangular subsets of pixels (which is what convolutional filters do) yields a minimum amount of disconnected subsequences BID20. (iii): If a rectangular subimage cuts out two subsequences that are disconnected in the original sequence, chances are maximal that the two different subsequences are relatively far apart (see our analysis in Appendix A). The combination of these points arguably renders Hilbert curves an interesting choice for representing DNA sequence as two-dimensional images. (i) is a basic requirement for mapping short-term sequential relationships, which are ubiquitous in DNA (such as codons, motifs or intron-exon structure). (ii) relates to the structure of the chromatin, which - without all details being fully understood - is tightly packaged and organized in general. Results from BID10 indicate that when arranging DNA sequence based on Hilbert curves, contiguous areas belonging to identical chromatin states cover rectangular areas. In particular, the combination of (i) and (ii) motivates the application of convolutional layers on Hilbert curves derived from DNA sequence: rectangular subspaces, in other words submatrices encoding the convolution operations, contain a minimum amount of disconnected pieces of DNA. (iii) finally is beneficial insofar as long-term interactions affecting DNA can also be mapped. This in particular applies to so-called enhancers and silencers, which exert positive (enhancer) or negative (silencer) effects on the activity of regions harboring genes, even though they may be far apart from those regions in terms of sequential distance. Since Watson and Crick first discovered the double-helix model of DNA structure in 1953 BID31, researchers have attempted to interpret biological characteristics from DNA. DNA sequence classification is the task of determining whether a sequence S belongs to an existing class C, and this is one of the fundamental tasks in bio-informatics research for biometric data analysis. Many methods have been used, ranging from statistical learning BID30 to machine learning methods BID19. Deep neural networks BID17 form the most recent class of methods used for DNA sequence classification (BID25; BID26; BID33; BID3). Both BID22 and BID15 use support vector machines (SVM) to predict chromatin state from DNA sequence features.
While BID22 use the entire set of features as input to the SVM, BID15 use random forests to pre-select a subset of features that are expected to be highly relevant for prediction of chromatin state to use as input to the SVM. Only BID21 use a CNN as we do. There are two major differences between their approach and ours. First and foremost, the model architecture is different: the network in BID21 consists of two convolution layers followed by pooling layers, a fully connected layer and a sigmoid layer, while our model architecture is deeper, uses residual connections to reuse the learned features, has larger convolution filters and has small layers preceding the fully connected layers (see Methods). Second, while we use a space-filling curve to transform the sequence data into an image-like tensor, BID21 keep the sequential form of the input data. Apart from BID10, the only example we are aware of where Hilbert curves were used to map DNA sequence into two-dimensional space is from BID2, who demonstrated the power of Hilbert curves for visualizing DNA. Beyond our theoretical considerations, these last two studies suggest there are practical benefits of mapping DNA using Hilbert curves. Our contributions are twofold. First, we predict chromatin state using a CNN that, in terms of architecture, resembles conventional CNNs for image classification and is designed for detecting distal relations. Second, we propose a method to transform DNA sequence patches into two-dimensional image-like arrays to enhance the strengths of CNNs using space-filling curves, in particular the Hilbert curve. Our experiments demonstrate the benefits of our approach: the developed CNN decisively outperforms all existing approaches for predicting the chromatin state in terms of prediction performance measures as well as runtime, an improvement which is further enhanced by the conversion of DNA sequence to a 2D image. In summary, we present a novel, powerful way to harness the power of CNNs in image classification for predicting biologically relevant features from primary DNA sequence. We transform DNA sequences into images through three steps. First, we represent a sequence as a list of k-mers. Next, we transform each k-mer into a one-hot vector, which results in the sequence being represented as a list of one-hot vectors. Finally, we create an image-like tensor by assigning each element of the list of k-mers to a pixel in the image using Hilbert curves. Each of the steps is explained in further detail below. From a molecular biology point of view, the nucleotides that constitute a DNA sequence do not mean much individually. Instead, nucleotide motifs play a key role in protein synthesis. In bioinformatics it is common to consider a sequence's k-mers, defined as the k-letter words from the alphabet {A,C,G,T} that together make up the sequence. In computer science the term q-gram is more frequently used, and is often applied in text mining BID29. As an example, the sequence TGACGAC can be transformed into the list of 3-mers {TGA, GAC, ACG, CGA, GAC} (note that these are overlapping). The first step in our approach is thus to transform the DNA sequence into a list of k-mers. Previous work has shown that 3-mers and 4-mers are useful for predicting epigenetic state BID22 BID15. Through preliminary experiments, we found that k = 4 yields the best performance: lower values for k resulted in reduced accuracy, while higher values yield a high risk of overfitting.
Only for the Splice dataset (see experiments) did we use k = 1 to prevent overfitting, as this is a small dataset. In natural language processing, it is common to use word embeddings such as GloVe or word2vec, or one-hot vectors BID13. The latter approach is most suitable for our method. Each element of such a vector corresponds to a word, and a vector of length N can thus be used to represent N different words. A one-hot vector has a one in the position that corresponds to the word the position is representing, and a zero in all other positions. In order to represent all k-mers in a DNA sequence, we need a vector of length 4^k, as this is the number of words of length k that can be constructed from the alphabet {A,C,G,T}. For example, if we wish to represent all 1-mers, we can do so using a one-hot vector of length 4, where A corresponds to the first position, C to the second, G to the third, and T to the fourth. Our next step is to transform the list of one-hot vectors into an image. For this purpose, we aim to assign each one-hot vector to a pixel. This gives us a 3-dimensional tensor, which is similar in shape to the tensor that serves as an input to image classification networks: the color of a pixel in an RGB-colored image is represented by a vector of length 3, while in our approach each pixel is represented by a one-hot vector of length 256. What remains now is to assign each of the one-hot vectors in the list to a pixel in the image. For this purpose, we can make use of space-filling curves, as they can map 1-dimensional sequences to a 2-dimensional surface preserving continuity of the sequence BID4 BID1. Various types of space-filling curves are available. We have compared the performance of several such curves, and concluded that Hilbert curves yield the best performance (Appendix A). This corresponds with our intuition: the Hilbert curve has several properties that are advantageous in the case of DNA sequences, as discussed in the introduction section. The Hilbert curve is a well-known space-filling curve that is constructed in a recursive manner: in the first iteration, the curve is divided into four parts, which are mapped to the four quadrants of a square FIG1. In the next iteration, each quadrant is divided into four sub-quadrants, which, in a similar way, each hold 1/16th of the curve FIG1. The quadrants of these sub-quadrants each hold 1/64th of the curve, etc. FIG1. By construction, the Hilbert curve yields a square image of size 2^n × 2^n, where n is the order of the curve (see FIG1). However, a DNA sequence does not necessarily have 2^n × 2^n k-mers. In order to fit all k-mers into the image, we need to choose n such that 2^n × 2^n is at least the number of k-mers in the sequence, and since we do not wish to make the image too large, we pick the smallest such n. In many cases, a large fraction of the pixels then remains unused, as there are fewer k-mers than pixels in the image. By construction, the used pixels are located in the upper half of the image. Cropping the picture by removing the unused part of the image yields rectangular images, and increases the fraction of the image that is used (FIG1). In most of our experiments we used sequences with a length of 500 base pairs, which we convert to a sequence of 500 − 4 + 1 = 497 4-mers. We thus need a Hilbert curve of order 5, resulting in an image of dimensions 2^5 × 2^5 × 256 = 32 × 32 × 256 (recall that each pixel is assigned a one-hot vector of length 256). Almost half of the resulting 1024 pixels are filled, leaving the other half of the image empty, which nevertheless requires memory.
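A minimal sketch of this sequence-to-image transform is given below: it extracts overlapping 4-mers, one-hot encodes them into vectors of length 256, and places them along an order-5 Hilbert curve, giving a 32 × 32 × 256 tensor from which the unused half can be cropped as described. The index-to-coordinate routine is the standard iterative Hilbert-curve algorithm; function names and details are our own and are not taken from the authors' released code.

```python
import numpy as np
from itertools import product

BASES = "ACGT"
K = 4
KMER_INDEX = {"".join(p): i for i, p in enumerate(product(BASES, repeat=K))}  # 4**4 = 256 k-mers

def hilbert_d2xy(order, d):
    """Map position d along a Hilbert curve of the given order to (x, y).
    Standard iterative algorithm; the curve fills a 2**order x 2**order grid."""
    n = 1 << order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                                   # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def sequence_to_image(seq, order=5):
    """Turn a DNA string into a (2**order, 2**order, 256) one-hot image."""
    n = 1 << order
    img = np.zeros((n, n, len(KMER_INDEX)), dtype=np.float32)
    kmers = [seq[i:i + K] for i in range(len(seq) - K + 1)]   # overlapping k-mers
    for d, kmer in enumerate(kmers):
        if kmer in KMER_INDEX:                                # skip k-mers containing e.g. 'N'
            x, y = hilbert_d2xy(order, d)
            img[x, y, KMER_INDEX[kmer]] = 1.0
    return img

# A 500-bp sequence gives 497 4-mers, which fit on an order-5 curve (32 x 32 pixels);
# with this orientation the used pixels fall in the first 16 rows, so the empty half
# can be dropped by simple slicing.
example = "ACGT" * 125                     # 500 bases
image = sequence_to_image(example)         # shape (32, 32, 256)
cropped = image[:16]                       # shape (16, 32, 256)
```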
We therefore remove the empty half of the image and end up with an image of size 16 × 32 × 256. The data now has the appropriate form to input into our model. Modern CNNs or other image classification systems mainly focus on gray-scale images and standard RGB images, resulting in channels of length 1 or 3, respectively, for each pixel. In our approach, each pixel in the generated image is assigned a one-hot vector representing a k-mer. For increasing k, the length of the vector and thus the image dimension increases. Here, we use k = 4, resulting in 256 channels, which implies that each channel contains very sparse information. Due to the curse of dimensionality, standard network architectures applied to such images are prone to severe overfitting. Here, we design a specific CNN for the kind of high-dimensional image that is generated from a DNA sequence. The architecture is inspired by ResNet BID14 and Inception BID27. The network has L layers and each layer implements a non-linear function F_l(x_l), where l is the index of the hidden layer with output x_{l+1}. The function F_l(x_l) consists of various layers such as convolution (denoted by c), batch normalization (bn), pooling (p) and non-linear activation function (af). The first part of the network has the objective to reduce the sparseness of the input image FIG2, and consists of the consecutive layers [c, bn, c, bn, af, p]. The main body of the network enhances the ability of the CNN to extract the relevant features from DNA space-filling curves. For this purpose, we designed a specific Computational Block inspired by the ResNet residual blocks BID14. The last part of the network consists of 3 fully-connected layers, and softmax is used to obtain the output classification label. The complete model is presented in TAB1, and code is available on Github (https://github.com/Bojian/Hilbert-CNN/tree/master). A simplified version of our network with two Computational Blocks is illustrated in FIG2. Computation Block. In the Computation Block, first the outputs of two Residual Blocks and one identity mapping are summed, followed by a bn and an af layer (FIG2). In total, the computational block has 4 convolutional layers, two in each Residual Block (see FIG3). The Residual Block first computes the composite function of five consecutive layers, namely [c, bn, af, c, bn], followed by the concatenation of the output with the input tensor. The residual block concludes with an af. Implementation details. Most convolutional layers use small squared filters of size 2, 3 and 4, except for the layers in the first part of the network, where large filters are applied to capture long-range features. We use Exponential Linear Units (ELU, BID8) as our activation function af to reduce the effect of gradient vanishing: preliminary experiments showed that ELU performed significantly better than other activation functions (data not shown). For the pooling layers p we used Average pooling. Average pooling outperformed Max pooling in terms of prediction accuracy by more than 2% in general, as it reduces the high variance of the sparse generated images. Cross entropy was used as the loss function. We test the performance of our approach using ten publicly available datasets from BID23. The datasets contain DNA subsequences with a length of 500 base pairs. Each sequence is labeled either as "positive" or "negative", indicating whether or not the subsequence contains regions that are wrapped around a histone protein.
The ten datasets each consider a different type of histone protein, indicated by the name of the dataset. Details can be found in TAB2. A randomly chosen 90% of the dataset is used for training the network, 5% is used for validation and early stopping, and the remaining 5% is used for evaluation. We train the network using the AdamOptimizer BID16. The learning rate is set to 0.003, the batch size was set to 300 samples and the maximum number of epochs is 10. After each epoch the level of generalization is measured as the accuracy obtained on the validation set. We use early stopping to prevent overfitting. To ensure the model stops at the correct time, we combine the GL_α measurement BID24 of generalization capability and the No-Improvement-In-N-Steps (Nii-N) method BID24. For instance, Nii-2 means that the training process is terminated when generalization capability is not improved in two consecutive epochs. We compare the performance of our approach, referred to as HCNN, to existing models. One of these is the support vector machine (SVM) model by BID15, for which results are available in their paper. Second, in tight communication with the authors, we reconstructed the Seq-CNN model presented in BID21 (the original software was no longer available); see Appendix C for detailed settings. Third, we constructed the commonly used LSTM, where the so-called 4-mer profile of the sequence is used as input. A 4-mer profile is a list containing the number of occurrences of all 256 4-mers of the alphabet {A,C,G,T} in a sequence. Preliminary tests showed that using all 256 4-mers resulted in overfitting, and including only the 100 most frequent 4-mers is sufficient. Details of the LSTM architecture can be found in TAB6 in Appendix C. In order to assess the effect of using a 2D representation of the DNA sequence in isolation, we compare HCNN to a neural network using a sequential representation as input. We refer to this model as seq-HCNN. As in HCNN, the DNA sequence is converted into a list of one-hot vectors representing k-mers, though the mapping of the sequence into a 2D image is omitted. The network architecture is a "flattened" version of the one used in HCNN: for example, a 7×7 convolution filter in HCNN is transformed to a 49×1 convolution filter in the 1D-sequence model. As a summary of model size, the Seq-CNN model contains 1.1M parameters, while both HCNN and seq-HCNN have 961K parameters, and the LSTM has 455K parameters. In order to test whether our method is also applicable to DNA sequence classification tasks other than chromatin state prediction only, we performed additional tests on the splice-junction gene sequences dataset from BID18. Most of the DNA sequence is unused, and splice-junctions refer to positions in the genetic sequence where the transition from an unused subsequence (intron) to a used subsequence (exon) or vice versa takes place. The dataset consists of DNA subsequences of length 61, and each of the sequences is known to be an intron-to-exon splice-junction, an exon-to-intron splice-junction or neither. As the dataset is relatively small, we used 1-mers instead of 4-mers. Note that the sequences are much shorter than for the other datasets, resulting in smaller images (dimensions 8 × 8 × 4). The results show that SVM and Seq-CNN were both outperformed by HCNN and seq-HCNN; LSTM shows poor performance.
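The Nii-N stopping rule used in the training setup described earlier in this section can be written in a few lines. The sketch below is a generic, stand-alone version with our own function names; the train_step and validate callables are placeholders for one epoch of Adam updates and evaluation on the 5% validation split, and the GL_α criterion that the paper combines with Nii-N is not included here.

```python
def train_with_nii_n(model, train_step, validate, max_epochs=10, n=2):
    """Early stopping: halt when validation accuracy has not improved for n consecutive epochs."""
    best_acc = float("-inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(model)                     # one epoch of optimizer updates
        acc = validate(model)                 # accuracy on the validation split
        if acc > best_acc:
            best_acc = acc
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= n:   # e.g. Nii-2 for n = 2
                break
    return model, best_acc
```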
HCNN and seq-HCNN show similar performance in terms of prediction accuracy, though HCNN shows more consistent results over the ten folds, indicating that using a 2D representation of the sequence improves robustness. Furthermore, HCNN yields better performance than seq-HCNN in terms of precision, recall, AP and AUC (Table 5). It thus makes it possible to reliably vary the tradeoff between recall and false discoveries. HCNN outperforms all methods in terms of training time (Table 4). The good performance of HCNN observed above may either be attributable to the conversion from DNA sequence to image, or to the use of the Hilbert curve. In order to address this question, we adapted our approach by replacing the Hilbert curve with other space-filling curves and compared their prediction accuracy. Besides the Hilbert curve, other space-filling curves exist (BID20) (see Appendix A). In Figure 4, we compare the performance of our model with different mapping strategies on the various datasets. We find that the images generated by the space-filling Hilbert curve provide the best accuracy on most datasets and the 1-d sequence performs worst. In this paper we developed a CNN that outperforms the state-of-the-art for prediction of epigenetic states from primary DNA sequence. Indeed, our methods show improved prediction accuracy and training time compared to the currently available chromatin state prediction methods from Pham et al. and BID15 (TAB1). In the splice dataset, Seq-CNN performed best when using 4-mers, while for HCNN and seq-HCNN 1-mers yielded the best performance.
Figure 4 (caption): HCNN with different mapping strategies.
Second, the input to the fully connected layer is very large in the model by BID21 and thus yields a huge number of parameters in the fully connected layer. Third, the use of a two-dimensional input further enhances the model's capabilities of incorporating long-term interactions. We showed that seq-HCNN and HCNN are not only capable of predicting chromatin state, but can also predict the presence or absence of splice-junctions in DNA subsequences. This suggests that our approach could be useful for DNA sequence classification problems in general. Hilbert curves have several properties that are desirable for DNA sequence classification. The intuitive motivation for the use of Hilbert curves is supported by good results when comparing Hilbert curves to other space-filling curves. Additionally, Hilbert curves have previously been shown to be useful for visualization of DNA sequences BID2. The main limitation of Hilbert curves is their fixed length, which implies that the generated image contains some empty spaces. These spaces consume computation resources; nevertheless, the 2D representation still yields reduced training times compared to the 1D-sequence representation, presumably due to the high degree of optimization for 2D inputs present in standard CNN frameworks. Given that a substantial part of the improvements in performance rates is due to our novel architecture, we plan on investigating the details of how components of the architecture are intertwined with improvements in prediction performance in more detail. We also plan to further investigate why Hilbert curves yield the particular advantages in terms of robustness and false discovery control we have observed here. As noted before, long-term interactions are highly relevant in DNA sequences.
In this section we consider these long-term interactions in four space-filling curves: the reshape curve, the snake curve, the Hilbert curve and the diag-snake curve. See FIG4 for an illustration. As can be seen in FIG4, mapping a sequence to an image reduces the distance between two elements that are far from one another in the sequence, while the distance between nearby elements does not increase. Each of the curves does have a different effect on the distance between faraway elements. In order to assess these differences, we use a measure that is based on the distance between two sequence elements as observed in the image. We denote this distance by L_C(x, y), where x, y ∈ S, with S the sequence and C the curve under consideration. Then for the sequence S = {A, B, C, D, E, ..., P} we obtain:
• L_seq(A, P) = 15 for the sequence;
• L_reshape(A, P) = 3√2 for the reshape curve;
• L_snake(A, P) = 3 for the snake curve;
• L_diag-snake(A, P) = 3√2 for the diagonal snake curve;
• L_hilbert(A, P) = 3 for the Hilbert curve.
We now introduce the following measure: Γ(C) = mean(∆(C)) / max(∆(C)), where ∆(C) is the set of the weighted distances between all pairs of the elements in the sequence. Here, ∆(C) is a set containing the distance between any two sequence elements, weighted by their distance in the sequence: ∆(C) = {L_seq(x, y) · L_C(x, y) | x, y ∈ S, x ≠ y}. Note that a low max(∆(C)) relative to mean(∆(C)) implies that long-term interactions are strongly accounted for, so a high Γ(C) is desirable. Γ(C) is evaluated for the four space-filling curves as well as the sequence representation for sequences of varying lengths. The results show that the Hilbert curve yields the highest values for Γ(C) (Table 6) and thus performs best in terms of retaining long-distance interactions. Table 6: Γ(C) for four space-filling curves, evaluated in sequences of varying lengths. B FROM SEQUENCE TO IMAGE FIG5 shows the conversion from DNA sequence to image. Accuracy is one of the most intuitive performance measurements in deep learning and machine learning. We therefore optimized the hyperparameters such as the network architecture and learning rate based on maximum accuracy. The hyperparameters are optimized through random search BID5 and using general principles from successful deep learning strategies and the following intuition. First, as our main goal was to capture long-term interactions, we chose a large kernel size for the first layer, for which various values were attempted and 7x7 gave the best performance. As is common practice in deep learning, we then opted for a smaller kernel size in the following layer. Second, in order to limit the number of parameters, we made use of residual blocks inspired by ResNet BID14 and Inception BID27. Finally, we applied batch normalization to prevent overfitting.
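Under the reading of Γ(C) given above (each image-space distance multiplied by the corresponding sequence distance, and the mean of these products compared to their maximum), the measure can be estimated with a few lines of code. This is our own sketch of the computation under that assumption, not the authors' implementation; for long sequences one would subsample pairs rather than enumerate all of them, and the coordinates produced by the Hilbert routine from the earlier sketch (or any other curve's index-to-coordinate mapping) can be substituted for the simple layouts shown here.

```python
import numpy as np
from itertools import combinations

def gamma(coords):
    """Gamma(C) for a curve, given the 2-D pixel coordinates of each sequence element."""
    coords = np.asarray(coords, dtype=float)
    weighted = []
    for i, j in combinations(range(len(coords)), 2):
        l_seq = j - i                                  # distance along the sequence
        l_c = np.linalg.norm(coords[i] - coords[j])    # distance in the image
        weighted.append(l_seq * l_c)                   # image distance weighted by sequence distance
    weighted = np.asarray(weighted)
    return weighted.mean() / weighted.max()            # high value = long-range pairs stay close

# 16 elements laid out as a plain 1-D sequence versus a 4 x 4 row-major ("reshape") grid.
seq_coords = [(d, 0) for d in range(16)]
reshape_coords = [(d % 4, d // 4) for d in range(16)]
print(gamma(seq_coords), gamma(reshape_coords))
```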
A method to transform DNA sequences into 2D images using space-filling Hilbert Curves to enhance the strengths of CNNs
770
scitldr
This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only consider whether two instances belong to the same class or not (pairwise semantic similarity). This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine it with a domain adaptation loss, it shows further improvement. Supervised learning has made significant strides in the past decade, with substantial advancements arising from the use of deep neural networks. However, a large part of this success has come from the existence of extensive labeled datasets. In many situations, it is not practical to obtain such data due to the amount of effort required or when the task or data distributions change dynamically. To deal with these situations, the fields of transfer learning and domain adaptation have explored how to transfer learned knowledge across tasks or domains. Many approaches have focused on cases where the distributions of the features and labels have changed, but the task is the same (e.g., classification across datasets with the same categories). Cross-task transfer learning strategies, on the other hand, have been widely adopted especially in the computer vision community, where features learned by a deep neural network on a large classification task have been applied to a wide variety of other tasks. Most of the prior cross-task transfer learning works, however, require labeled target data to learn classifiers for the new task. If labels of the target data are absent, there is little choice other than to apply unsupervised approaches such as clustering on the target data with pre-trained feature representations. In this paper, we focus on the question of what can be transferred (besides features) to support both cross-domain and cross-task transfer learning. We address it with a learned similarity function as the fundamental component of clustering.
Clustering can then be realized using a neural network trained using the output of the similarity function, which can be successfully used to achieve both cross-task and cross-domain transfer. The key idea is to formulate the clustering objective to use a learnable (and transferable) term, which in our proposed work is a similarity prediction function. Our proposed objective function can be easily combined with deep neural networks and optimized end-to-end. The features and clustering are optimized jointly, hence taking advantage of such side information in a robust way. Using this method, we show that unsupervised learning can benefit from learning performed on a distinct task, and demonstrate the flexibility of further combining it with a classification loss and domain discrepancy loss. In summary, we make several contributions. First, we propose to use predictive pairwise similarity as the knowledge that is transferred and formulate a learnable objective function to utilize the pairwise information in a fashion similar to constrained clustering. We then provide the methodologies to deploy the objective function in both cross-task and cross-domain scenarios with deep neural networks. The experimental for cross-task learning on Omniglot and ImageNet show that we can achieve state of the art clustering with predicted similarities. On the standard domain adaptation benchmark Office-31 dataset, we demonstrate improvements over state-of-art even when not performing any explicit domain adaptation, and further improvements if we do. Finally, on another domain adaptation task, SVHN-to-MNIST, our approach using Omniglot as the auxiliary dataset achieves top performance with a large margin. Transfer Learning: Transfer learning aims to leverage knowledge from the source domain to help learn in the target domain, while only focusing on the performance on the target domain. The type of transferred knowledge includes training instances, features, model parameters and relational knowledge BID23. Pairwise similarity is the meta-knowledge we propose to transfer, which falls in between the last two types. The similarity prediction function is a neural network with learned parameters, while the output is the most simplified form of relational knowledge, i.e., only considering pairwise semantic similarity. Cross-task Transfer Learning: Features learned when trained for ImageNet classification BID25 have boosted the performance of a variety of vision tasks in a supervised setting. For example, new classification tasks (; BID38, object detection , semantic segmentation BID17, and image captioning BID33. Translated Learning BID8 has an unsupervised setting similar to ours, but it again focuses only on transferring features across tasks. Our work explores how learning could benefit from transferring pairwise similarity in an unsupervised setting. Cross-domain Transfer Learning: Also known as domain adaptation BID23, there has recently been a large body of work dealing with domain shift between image datasets by minimizing domain discrepancy BID31 BID18; BID19 BID28 BID6 BID20 BID40 BID4. We address the problem in a complementary way that transfers extra information from the auxiliary dataset and show a larger performance boost with further gains using an additional domain discrepancy loss. Constrained Clustering: Constrained clustering algorithms can be categorized by how they utilize constraints (e.g. pairwise information). The first set of work use the constraints to learn a distance metric. 
For example, DML BID36, ITML , SKMS BID1, SKKm BID1 BID0, and SKLR BID0. This group of approaches closely relates to metric learning and needs a clustering algorithm such as K-means in a separate stage to obtain cluster assignments. The second group of work use constraints to formulate the clustering loss. For example, CSP BID35 and COSC BID24. The third group uses constraints for both metric learning and the clustering objective, such as MPCKMeans BID3 and CECM BID2. The fourth group does not use constraints at all. A generic clustering algorithms such as K-means BID21, LSC BID7, and LPNMF BID5 ) all belong to this category. There is a long list of associated works and they are summarized in survey papers, e.g. Davidson & Basu Figure 1: Overview of the transfer scheme with a learnable clustering objective (LCO). The LCO and pairwise similarity are the two key components of our approach and are described in section 4. The dashed rectangles and light gray arrows are only available in cross-domain transfer. Details are described in section 3. and. Our proposed clustering strategy belongs to the third group. The constrained clustering methods above are applied to the semi-supervised setting where the groundtruth constraints are sparsely available. In our unsupervised setting, the ground-truth is unavailable but predicted constraints are densely available. We include all four groups of algorithms in our comparison and show the advantages of the third group. To define the transfer learning problem addressed in this work, we follow the notations used by BID23. The goal is to transfer knowledge from source data S = (X S, Y S), where X S is the set of data instances and Y S is the corresponding categorical labels, to a target data noted as T = (X T, Y T). The learning is unsupervised since Y T is unknown. The scenario is divided into two cases. One is {Y T} = {Y S}, which means the set of categories are not the same and hence the transfer is across tasks. The second case is {Y T} = {Y S}, but with a domain shift. In other words, the marginal probability distributions of the input data are different, i.e., P (X T) = P (X S). The latter is a cross-domain learning problem also called transductive learning. The domain adaptation approaches which have gained significant attention recently belong to the second scenario. To align with common benchmarks for evaluating transfer learning performance, we further use the notion of an auxiliary dataset and split the source data into S = S ∪ A. S = (X S, Y S) which is only present in the cross-domain transfer scheme and has {Y S} = {Y T}. A = (X A, Y A) is the auxiliary dataset which has a large amount of labeled data and potentially categories as well, and may or may not contain the categories of Y T. For the cross task scenario, only A and unlabeled T are included, while cross-domain transfer involves A, S, and T. In the following sections we use the above notations and describe the two transfer learning tasks in detail. Figure 1 illustrates how our approach relates to both tasks. Transfer across tasks: If the target task has different categories, we cannot directly transfer the classifier from source to target, and there is no labeled target data to use for fine-tuning transferred features. Here we propose to first reduce the categorization problem to a surrogate same-task problem. We can directly apply transductive transfer learning BID23 to the transformed task. 
The cluster structure in the target data is then reconstructed using the predictions in the transformed task. See FIG1 for an illustration. The source involves a labeled auxiliary dataset A and an unlabeled target dataset T. Y T is the target that must be inferred. In this scenario the set of categories {Y A} = {Y T}, and Y T is unknown. We first transform the problem of categorization into a pairwise similarity prediction problem. In other words, we specify a transformation function R such that R(A) = (X R A, Y R), and X R A = {(x A,i, x A,j)} ∀i,j contains all pairs of data, where {Y R} = {dissimilar, similar}. The transformation on the labeled auxiliary data is straightforward. It will be similar if two data instances are from the same category, and vice versa. We then use it to learn a pairwise similarity prediction function G(x i, x j) = y i,j. By applying G on T, we can obtain G(x T,i, x T,j) = y T,i,j. The last step is to infer Y T from Y R T = {y T,i,j} ∀i,j, which can be solved using constrained clustering algorithms. Note that since the actual Y T is unknown, the algorithm only infers the indices of categories, which could be in arbitrary order. The ing clusters are expected to contain coherent semantic categories. The problem setting we consider here is the same as unsupervised domain adaptation. Following the standard evaluation procedure BID20 ), the labeled datasets A is ImageNet and S is one domain in the Office-31 dataset. The unlabeled T is another domain in Office-31. The goal is to enhance classification performance on T by utilizing A, S, and T together. The key to our approach is the design of a learning objective that can use (noisy) predicted pairwise similarities, and is inspired from constrained clustering which involves using pairwise information in the loss function. The pairwise information is called must-link/cannot-link constraints or similar/dissimilar pairs (we use the latter). Note that the information is binarized to one and zero for similar and dissimilar pairs, accordingly. Although many constrained clustering algorithms have been developed, few of them are scalable with respect to the number of pairwise relationships. Further, none of them can be easily integrated into deep neural networks. Inspired by the work of BID12, we construct a contrastive loss for clustering on the probability outputs of a softmax classifier. However, each output node does not have to map to a fixed category but instead each output node represents a probabilistic assignment of a data point to a cluster. The assignment between output nodes and clusters are formed stochastically during the optimization and is guided by the pairwise similarity. If there is a similar pair, their output distribution should be similar, and vice-versa. Specifically, we use the pair-wise KL-divergence to evaluate the distance between k cluster assignment distributions of two data instances, and use predicted similarity to construct the contrastive loss. Given a pair of data x p, x q, their corresponding output distributions are defined as P = f (x p) and Q = f (x q), while f is the neural network. The cost of a similar pair is described as: The cost L(x p, x q) + is symmetric w.r.t. x p, x q, in which P and Q are alternatively assumed to be constant. Each KL-divergence factor D KL (P ||Q) becomes a unary function whose gradient is simply ∂D KL (P ||Q)/∂Q. 
If x_p, x_q comes from a pair which is dissimilar, their output distributions are expected to be different, which can be enforced with a hinge-loss function L_h(e, σ) = max(0, σ − e) applied to each divergence factor, giving the dissimilar-pair cost L(x_p, x_q)^− = L_h(D_KL(P||Q), σ) + L_h(D_KL(Q||P), σ), where σ is the margin and P and Q are again alternately treated as constant. Given a pair of data with similarity prediction function G(x_p, x_q) ∈ {0, 1}, which is introduced in section 3, the total loss can be defined as a contrastive loss (where we use integer 1 to represent a similar pair): L(x_p, x_q) = G(x_p, x_q) · L(x_p, x_q)^+ + (1 − G(x_p, x_q)) · L(x_p, x_q)^−. (5) We refer to equation 5 as LCO. Function G is the learnable part that utilizes prior knowledge and is trained with the auxiliary dataset A before optimizing LCO. Two particular characteristics of the clustering criterion are worth mentioning: (1) there is no need to define cluster centers; (2) there is no predefined metric applied on the feature representation. Instead, the divergence is calculated directly on the cluster assignments; therefore, both the feature representation and the clustering are jointly optimized using back-propagation through the deep neural networks. Although there is no restriction on how G should be constructed, we choose deep convolutional neural networks due to their efficiency in vision tasks. We design a network architecture inspired by BID39. While they use it to predict image patch similarity, we use it to predict image-level semantic similarity. However, the Siamese architecture used in BID39 is not efficient in either training or inference, especially when pairwise information is dense. Therefore, instead of using a Siamese architecture, we keep a single-column backbone but add a pair-enumeration layer on top of the feature extraction network. The pair-enumeration layer enumerates all pairs of feature vectors within a mini-batch and concatenates the features. Suppose the input of the layer is 10 × 512, with mini-batch size 10 and feature dimension 512; then the output of the pair-enumeration layer will be 100 × 1024 (self-pairs included). The architecture is illustrated in FIG2. We add one hidden fully connected layer on top of the enumeration layer and a binary classifier at the end. We use the standard cross-entropy loss and train it end-to-end. The supervision for training was obtained by converting the ground-truth category labels into binary similarity, i.e., if two samples are from the same class then their label will be similar, otherwise dissimilar. The inference is also end-to-end, and it outputs predictions among all similarity pairs in a mini-batch. The output probability g ∈ [0, 1], with 1 meaning more similar. We binarize g at 0.5 to obtain discrete similarity predictions. In the following sections, we simplify the notation of the pairwise similarity prediction network to G. Once G is learned, it works as a static function in our experiments. Since the pairwise predictions of a mini-batch can be densely obtained from G, to efficiently utilize the pairwise information without forwarding each data point multiple times, we also combine the pair-enumeration layer described in section 4.1 with equation 5: the outputs of the softmax are enumerated in pairs and the contrastive loss is accumulated over all of them, giving the dense loss L_d = Σ_{(i,j)} L(x_i, x_j) over all enumerated pairs (i, j) in the mini-batch. Figure 4: The constrained clustering network (CCN) for transfer learning across tasks. The input is unlabeled target data T. The cluster assignment block contains two fully connected layers and has the number of output nodes equal to k. The f described in section 4 is the backbone network plus the cluster assignment block. To optimize LCO, the full pipeline in the diagram is used. After the optimization, it uses another forward propagation with only f to obtain the final cluster assignment.
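The pair-enumeration layer and the LCO above can be illustrated with a short PyTorch-style sketch; this is a didactic reconstruction under the definitions stated in the text, not the authors' code, and the helper names are ours (σ = 2 follows the value reported later for the experiments).

```python
import torch
import torch.nn.functional as F

def labels_to_pairwise_similarity(labels):
    """Convert ground-truth category labels of a mini-batch into the dense
    binary similarity targets used to train G (1 = same class, 0 = not)."""
    labels = labels.view(-1, 1)
    return (labels == labels.t()).float().view(-1)

def pair_enumerate(features):
    """Pair-enumeration layer: enumerate all ordered pairs of feature vectors
    in a mini-batch and concatenate them, self-pairs included.
    A (10, 512) input yields a (100, 1024) output, as in the text."""
    n, d = features.shape
    left = features.unsqueeze(1).expand(n, n, d)
    right = features.unsqueeze(0).expand(n, n, d)
    return torch.cat([left, right], dim=-1).reshape(n * n, 2 * d)

def kl(p, q, eps=1e-8):
    # Row-wise D_KL(P || Q) between cluster-assignment distributions.
    return (p * (torch.log(p + eps) - torch.log(q + eps))).sum(dim=-1)

def lco_pair_loss(p, q, sim, sigma=2.0):
    """Learnable Clustering Objective for a batch of pairs.
    p, q: softmax cluster assignments of the two members of each pair.
    sim:  similarity predicted by G, 1 for similar and 0 for dissimilar.
    Similar pairs minimize the symmetric KL divergence; dissimilar pairs
    are pushed apart through a hinge with margin sigma."""
    p_c, q_c = p.detach(), q.detach()   # alternately hold one side constant
    cost_similar = kl(p_c, q) + kl(q_c, p)
    cost_dissimilar = F.relu(sigma - kl(p_c, q)) + F.relu(sigma - kl(q_c, p))
    return sim * cost_similar + (1.0 - sim) * cost_dissimilar

# The dense loss L_d over a mini-batch is obtained by enumerating the softmax
# outputs in pairs and averaging lco_pair_loss over the (n*n,) predictions of G.
```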
We use L_d standalone with deep neural networks to reconstruct semantic clusters and for transfer learning across tasks (figure 4). We call the architecture the constrained clustering network (CCN). In the cross-domain case, we additionally have labeled source data. This enables us to use LCO for T while using a classification loss for S. The overall training procedure is similar to previous domain adaptation approaches in that both source and target data are mixed in a mini-batch, but different losses are applied. We denote the source domain images in a mini-batch b as X^b_S and the target domain images as X^b_T with their set of dense pairs D^b_T. The loss function L_cd for cross-domain transfer in a mini-batch can then be formulated as L_cd = L_cls(X^b_S) + L_cluster(D^b_T), where L_cls is the standard cross-entropy classification loss on the labeled source images and L_cluster is the dense LCO loss L_d applied to the target pairs with similarities predicted by G. The L_cluster and L_cls terms share the same outputs from network f. Although L_cluster does not force the mapping between clusters and categories, L_cls does. Therefore the learned classifier can be applied on target data directly and picks the maximum probability for the predicted class. Note that in our loss function, there is no term to explicitly match the feature distribution between source and target; it merely transfers more knowledge in the form of constraints to regularize the learning of the classifier. There is also no requirement for the architecture to utilize hidden layers. Therefore our approach has great flexibility to be combined with other domain adaptation strategies. FIG4 illustrates the architectures CCN+ and CCN++ used in our cross-domain experiments. This section contains evaluations with four image datasets and covers both cross-task and cross-domain schemes. The details are described below, and the differences between experimental settings are illustrated in appendix A. The Omniglot dataset contains 1623 different handwritten characters and each of them has 20 images drawn by different people. The characters are from 50 different alphabets and were separated by the author into 30 background alphabets (Omniglot_bg) and 20 evaluation alphabets (Omniglot_eval). We use Omniglot_bg as the auxiliary dataset (A) and Omniglot_eval as the target data (T). The total number of characters in Omniglot_bg is 964, which can be regarded as the number of categories available to learn the semantic similarity. The goal is to cluster Omniglot_eval to reconstruct its semantic categories, without ever having any labels. The G function has a backbone neural network with four 3x3 convolution layers followed by 2x2 max-pooling with stride 2. All hidden layers are followed by batch normalization BID13 and a rectified linear unit. To prepare the inputs for training, the images from Omniglot_bg were resized to 32x32 and normalized to zero mean with unit standard deviation. Each mini-batch has a size of 100 and is sampled from a random 20 characters to make sure the amount of similar pairs is reasonable. After pair enumeration, there are 10000 pairs subject to the loss function, which is a two-class cross-entropy loss. The ground-truth similarity is obtained by converting the categorical labels. The loss of G is optimized by stochastic gradient descent and is the only part trained in a supervised manner in this section. The Constrained Clustering Network (CCN) is used to reconstruct the semantic clusters in the target dataset using outputs from G. The network has four 3x3 convolution layers followed by 2x2 max-pooling with stride 2, one hidden fully-connected layer, and one cluster assignment layer which is also fully connected.
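A minimal sketch of the cross-domain objective L_cd described above, assuming the lco_pair_loss helper from the previous sketch and a frozen similarity network G that returns dense {0, 1} predictions for a target batch; all names here are illustrative.

```python
import torch
import torch.nn.functional as F
# lco_pair_loss as defined in the previous sketch.

def cross_domain_loss(f, G, x_src, y_src, x_tgt, sigma=2.0):
    """One mixed mini-batch of L_cd: cross-entropy on labeled source images
    plus the dense LCO clustering loss on unlabeled target images, with
    pairwise similarity labels predicted by a frozen G."""
    # Classification term on the source domain.
    l_cls = F.cross_entropy(f(x_src), y_src)

    # Clustering term on the dense pairs of the target domain.
    p = F.softmax(f(x_tgt), dim=-1)                      # (n, k) cluster assignments
    n, k = p.shape
    p_i = p.unsqueeze(1).expand(n, n, k).reshape(n * n, k)
    p_j = p.unsqueeze(0).expand(n, n, k).reshape(n * n, k)
    with torch.no_grad():
        sim = G(x_tgt).float()                           # (n*n,) predicted {0,1} similarities
    l_cluster = lco_pair_loss(p_i, p_j, sim, sigma).mean()

    return l_cls + l_cluster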
The number of output nodes in the last layer is equal to the number of potential clusters in the target data. The output distribution is enumerated in pairs before sending to LCO. The network is randomly initialized, trained end-to-end, and optimized by stochastic gradient descent with randomly sampled 100 images per mini-batch. Note the G function used by LCO is fixed during the optimization. The input data preparation is the same as above, except now the data is from Omniglot eval. Specifically, during the training, the same mini-batch is given to both G and CCN. The dense pairwise similarity predictions from G are sent to LCO and are then fully utilized. The only hyper-parameter in LCO is σ, and we set it to 2 for all our experiments. Omniglot eval contains 20 alphabets and each one can be used as a standalone dataset. The 20 target datasets contain a varied number (20 to 47) of characters. Therefore we can evaluate the reconstruction performance under a varied number of ground-truth clusters. We tested the situations when the number of character (K) in an alphabet is known and unknown. When K is known, we set the target number of clusters in the clustering algorithm equal to the true number of characters. If K is unknown, the common practice is to set it to a large number so that the data from different categories will not be forced to be in the same cluster. In the experiment, we merely set K to 100, which is much larger than the largest dataset.All constrained clustering algorithms can be used to reconstruct the semantic clusters for our problem. Since there are mis-predictions in G, robustness to noise is the most important factor. Here we include all four types of constrained clustering algorithms introduced in section 2 as the baselines. We provide the full set of pairwise constraints to all algorithms including ours. In other words, given an alphabet of 20 characters, it contains 400 images and 160000 predicted similarities from G (note that G makes predictions for both orders of a pair and for self-pairs). The full pairwise similarities were presented to all algorithms in random order, while we empirically found it has no noticeable effect on . We pick the baseline approaches based on code availability and scalability concerning the number of pairwise constraints. Therefore we have shown for K-means BID21, LPNMF BID5 ), LSC BID7 ), ITML (, SKKm BID1, SKLR BID0, SKMS BID1, CSP BID35, and MPCK-means BID3 as our baselines. We use the default parameters for each algorithm provided by The evaluation uses two clustering metrics. The first is normalized-mutual information (NMI) BID29 which is widely used for clustering. The second one is the clustering accuracy (ACC). The ACC metric first finds the one-to-one matching between predicted clusters and ground-truth labels, and then calculates the classification accuracy based on the mapping. All data outside the matched clusters will be regarded as mis-predictions. To get high ACC, the algorithm has to generate coherent clusters where each cluster includes most of the data in a category; otherwise the score drops quickly. Therefore ACC provides better discrimination to evaluate whether the semantic clusters have been reconstructed well. We report the average performance over the 20 alphabets in table 1. Our approach achieved the top performance on both metrics. The CCN demonstrates strong robustness on the challenging scenario of unknown K. It achieved 78.1% average accuracy. 
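Both metrics can be computed with standard tooling; the following sketch (using scipy's Hungarian solver) is our illustration of the matching-based ACC described above, not the evaluation code used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: find the best one-to-one matching between predicted clusters and
    ground-truth classes with the Hungarian algorithm, then report
    classification accuracy under that mapping; everything outside the
    matched clusters counts as a mis-prediction."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = max(y_pred.max(), y_true.max()) + 1
    counts = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    rows, cols = linear_sum_assignment(-counts)      # maximize total matched count
    return counts[rows, cols].sum() / float(y_true.size)

# NMI, the other reported metric:
nmi = normalized_mutual_info_score
```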
Compared with 82.4% when K is known, CCN has a relatively small drop. Compared to the second best algorithm, CSP, which is 65.4%, CCN outperforms it with a large gap. The classical approach MPCK-means works surprisingly well when the number of clusters is known, but its performance dropped dramatically from 81.9% to 53.9% when K = 100. In the performance breakdown for the 20 individual alphabets, CCN achieved 94% clustering accuracy on Old Church Slavonic Cyrillic, which has 45 characters (appendix TAB4). Therefore the show the feasibility of reconstructing semantic clusters using only noisy similarity predictions. When to use the semantic similarity? The experiments in table 1 show a clear trend that utilizing the pairwise constraints jointly for both metric learning and minimizing the clustering loss achieves the best performance, including both MPCK-means and CCN. In the case of unknown number of clusters, where we set K = 100, the algorithms that use constraints to optimize clustering loss have better robustness, for example, CSP and CCN. The group that only use constraints for metric learning (ITML, SKMS, SKKm, and SKLR) significantly outperform the group that does not use it (K-means, LPNMF, LSC). However, their performance are still far behind CCN. Our confirm the importance of jointly optimizing the metric and clustering. The robustness against noisy similarity prediction is the key factor to enable the cross-task transfer framework. To the best of our knowledge, table 1 is the first comprehensive robustness comparisons using predicted constraints learned from real data instead of converting from ground-truth labels. The accuracy of G in our experiment is shown in appendix table 7 and demonstrates the reasonable performance of G which is on par with Matching-Net BID34. After binarizing the prediction at 0.5 probability, the similar pair precision, similar pair recall, dissimilar pair precision, and dissimilar pair recall among the 659 characters are (0.392, 0.927, 0.999, 0.995), accordingly. The binarized predictions are better than uniform random guess (0.002, 0.500, 0.998, 0.500), but are still noisy. Therefore it is very challenging for constrained clustering. The visualization of the robustness range of CCN are provided in appendix D, and shows that the robustness is related to the density of pairs involved in a mini-batch. We hypothesize that during the optimization, the gradients from wrongly predicted pairs are canceled out by each other or by the correctly predicted pairs. Therefore the overall gradient still moves the solution towards a better clustering . How to predict K? Inferring the number of clusters (N C) is a hard problem, but with the pairwise similarity information, it becomes feasible. For evaluation, we compute the difference between the number of dominant clusters (N DC) and the true number of categories (N C gt) in a dataset. We use a naive definition for N DC, which is the number of clusters that have a size larger than expected size when data is sampled uniformly. In other words, TAB5 ). We compare this with the baseline approach SKMS BID1, which does not require a given K and supports a pipeline to estimate K automatically (therefore we only put it into the column K = 100 in table 1.). SKMS gets 16.3. Furthermore, 10 out of 20 datasets from CCN's prediction have a difference between N DC d and N C gt d smaller or equal to 3, which shows the feasibility of estimating K with predicted similarity. 
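A sketch of the naive dominant-cluster count used above; reading the "expected size when data is sampled uniformly" as |T|/K over the K = 100 output clusters is our interpretation of that definition.

```python
import numpy as np

def num_dominant_clusters(cluster_assignments, k_outputs=100):
    """N_DC: number of clusters whose size exceeds the size expected if the
    |T| target samples were spread uniformly over the K output clusters."""
    cluster_assignments = np.asarray(cluster_assignments)
    expected_size = cluster_assignments.size / float(k_outputs)
    sizes = np.bincount(cluster_assignments, minlength=k_outputs)
    return int((sizes > expected_size).sum())
```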
To demonstrate the scalability of our approach, we applied the same scheme on the ImageNet dataset. The 1000-class dataset is separated into an 882-class subset (ImageNet_882) and a 118-class subset (ImageNet_118) following the random split in BID34. We use ImageNet_882 for A, and 30 classes (∼39k images) are randomly sampled from ImageNet_118 for T. The difference from section 5.1.1 is that here we use Resnet-18 for both G and CCN, and the weights of the backbone are pre-trained with ImageNet_882. Since the number of pairs is high and it is not feasible to feed them into other constrained clustering algorithms, we compare CCN with K-means, LSC BID7, and LPNMF BID5. We use the output from the average pooling layer of Resnet-18 as the input to these clustering algorithms. CCN gives the top performance with average ACC 73.8% when K is known, and 65.2% when the number of clusters is unknown, which outperforms the second best (34.5% by K-means) by a large margin. The full comparison is in appendix table 8, and the performance of G is provided in appendix table 9. Office-31 BID26 has images from 31 categories of office objects. The 4652 images are obtained from three domains: Amazon (a), DSLR (d), and Webcam (w). The dataset is the standard benchmark for evaluating domain adaptation performance in computer vision. We experiment with all six combinations (source S → target T): a → w, a → d, d → a, d → w, w → a, w → d, and report the average accuracy based on five random experiments for each setting. The G function learns the semantic similarity function from the auxiliary dataset A, which is ImageNet with all 1000 categories. The backbone network of G is Resnet-18 and has its weights initialized by ImageNet classification. The training process is the same as in section 5.1.1 except the images are resized to 224. We follow the standard protocols using deep neural networks BID20 for unsupervised domain adaptation. The backbone network of CCN+ is pre-trained with ImageNet. Appendix figure 7 illustrates the scheme. During the training, all source data and target data are used. Each mini-batch is constructed from 32 labeled samples from the source and 96 unlabeled samples from the target. Since the target dataset has no labels and can only be randomly sampled, it is crucial to have a sufficient mini-batch size to ensure that similar pairs are sampled. The loss function used in our approach is the cross-domain loss L_cd and is optimized by stochastic gradient descent. The CCN+/++ and DANN (RevGrad) with ResNet backbone are implemented with Torch. We use the code from the original authors for JAN. Both DANN and JAN use a 256-dimension bottleneck feature layer. The results are summarized in table 2. Our approach (CCN+) demonstrates a strong performance boost for the unsupervised cross-domain transfer problem. It reaches 77.5% average accuracy, which gained 6.2 points from the 71.3% source-only baseline. Although our approach merely transfers more information from the auxiliary dataset, it outperforms the strong approach DANN (75.7%) and the state-of-the-art JAN (76.9%). When combining ours with DANN (CCN++), the performance is further boosted. This indicates that LCO helps mitigate the transfer problem in a certain way that is orthogonal to minimizing the domain discrepancy. We observe the same trend when using a deeper backbone network, i.e., ResNet-34. In such a case the average accuracy achieved is 77.9%, 81.1% and 82% for source-only, CCN+ and CCN++, respectively, though we used exactly the same G as before (with ResNet-18 backbone for G).
This indicates that the information carried in the similarity predictions is not equivalent to transferring features with deeper networks. More discussions are in appendix C and the performance of G is provided in appendix table 11 to show that although the prediction has low precision for similar pairs (∼ 0.2), our approach still benefits from the dense similarity predictions. We also evaluated the CCN + on another widely compared scenario, which uses color Street View House Numbers images (SVHN) BID22 as S, the gray-scale hand-written digits (MNIST) as T. To learn G, we use the Omniglot bg as A. We train all the networks in this section from scratch. Our experimental setting is similar to BID28. We achieve the top performance with 89.1% accuracy. The performance gain from source-only in our approach is +37.1%, which wins by a large margin compared to the +23.9% of LTR BID28. The full comparison is presented in appendix table 12. In this paper, we demonstrated the usefulness of transferring information in the form of pairwise similarity predictions. Such information can be transferred as a function and utilized by a loss formulation inspired from constrained clustering, but implemented more robustly within a neural network that can jointly optimize both features and clustering outputs based on these noisy predictions. The experiments for both cross-task and cross-domain transfer learning show strong benefits of using the semantic similarity predictions ing in new state of the art across several datasets. This is true even without explicit domain adaptation for the cross-domain task, and if we add a domain discrepancy loss the benefits increase further. There are two key factors that determine the performance of the proposed framework. The first is the robustness of the constrained clustering and second is the performance of the similarity prediction function. We show robustness of CCN empirically, but we do not explore situations where learning the similarity function is harder. For example, such cases arise when there are a small number of categories in source or a large domain discrepancy between source and target. One idea to deal with such a situation is learning G with domain adaptation strategies. We leave these aspects for future work. This work was supported by the National Science Foundation and National Robotics Initiative (grant # IIS-1426998). The performance of the similarity prediction function used in section 5.1. We leverage the N-way test which is commonly used in one-shot learning evaluation. The similarity is learned with Omniglot bg and has N-way test with Omniglot eval and MNIST. The experimental settings follow BID34. The raw probability output (without binarization) from our G is used to find the nearest exemplar in the N-way test. Omniglot-eval MNIST Method 5-way 20-way 10-way Siamese-Nets BID14 TAB0 shows that simply using deeper pre-trained networks produces a significant performance boost. Specifically, using Resnet-50 increases the performance 8.4 points from Alexnet, which surpasses the 6.2 points gained by using one of the state-of-the-art domain adaptation algorithms (JAN BID20). If we regard pre-training with deeper networks as transferring more information, our approach is similar in this aspect since both transfer more information from the auxiliary dataset. In this case, memory limitations precluded the application of LCO to such models, and multi-GPU implementations for this problem is an area of future work. 
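The N-way test mentioned above can be sketched as follows, assuming G is the trained pairwise similarity network returning a raw probability for a pair of images; the helper is illustrative rather than the paper's evaluation code.

```python
import torch

def n_way_accuracy(G, queries, query_labels, exemplars):
    """N-way evaluation: for each query, score it against one exemplar per
    class with the raw (un-binarized) similarity probability from G and
    pick the highest-scoring exemplar as the prediction."""
    correct = 0
    for x, y in zip(queries, query_labels):
        scores = torch.tensor([float(G(x, e)) for e in exemplars])  # one score per class
        correct += int(scores.argmax().item() == y)
    return correct / float(len(queries))
```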
To quickly explore the large combination of factors that may affect the clustering, we use a small dataset (MNIST) and a small network which has two convolution layers followed by two fully connected layers. The MNIST dataset is a dataset of handwritten digits that contains 60k training and 10k testing images with size 28x28. Only the training set is used in this section and the raw pixels, which were normalized to zero mean and unit standard deviation, were fed into networks directly. The networks were randomly initialized and the clustering training was run five times under each combination of factors; we show the best final , as is usual in the random restart regime. The mini-batch size was set to 256, thus up to 65536 pairs were presented to the LCO per mini-batch if using full density (D=1). There were 235 mini-batches in an epoch and the optimization proceeded for 15 epochs. The clustering loss was minimized by stochastic gradient descent with learning rate 0.1 and momentum 0.9. The predicted cluster was assigned at the end by forwarding samples through the clustering networks. The best in the five runs was reported. To simulate different performance of the similarity prediction, the label of pairs were flipped according to the designated recall. For example, to simulate a 90% recall of similar pair, 10% of the ground truth similar pair in a mini-batch were flipped. The precision of similar/dissimilar pairs is a function of the recall of both type of pairs, thus controlling the recall is sufficient for the evaluation. The recalls for both similar and dissimilar pairs were gradually reduced from one to zero at intervals of 0.1. The ing performance w.r.t different values of recall, density, and number of clusters is visualized in FIG7. A bright color means high NMI score and is desired. The larger the bright region, the more robust the clustering is against the noise of similarity prediction. The ACC score shows almost the same trend and is thus not shown here. How does similarity prediction affect clustering? Looking at the top-left heat map in FIG7, which has D = 1 and 10 clusters, it can be observed that the NMI score is very robust to low similar pair recall, even lower than 0.5. For recall of dissimilar pairs, the effect of recall is divided at the 0.5 value: the clustering performance can be very robust to noise in dissimilar pairs if the recall is greater than 0.5; however, it can completely fail if recall is below 0.5. For similar pairs, the clustering works on a wide range of recalls when the recall of dissimilar pairs is high. In practical terms, robustness to the recall of similar pairs is desirable because it is much easier to predict dissimilar pairs than similar pairs in real scenarios. In a dataset with 10 categories e.g. Cifar-10, we can easily get 90% recall for dissimilar pairs with purely random guess if the number of classes is known, while the recall for similar pairs will be 10%.How does the density of the constraints affect clustering? We argue that the density of pairwise relationships is the key factor to improving the robustness of clustering. The density D = 1 means that every pair in a mini-batch is utilized by the clustering loss. For density D = 0.1, it means only 1 out of 10 possible constraints is used. We could regard the higher density as better utilization of the pairwise information in a mini-batch, thus more learning instances contribute to the gradients at once. 
Consider a scenario where there is one sample associated with 5 true similar pairs and 3 false similar pairs. In such a case, the gradients introduced by the false similar pairs have a higher chance to be overridden by true similar pairs within the mini-batch, thus the loss can converge faster and is less affected by errors. In FIG7, we could see when density decreases, the size of the bright region shrinks significantly. In our implementation, enumerating the full pairwise relationships introduces negligible overhead in computation time using GPU. Although there is overhead for memory consumption, it is limited because only the vector of predicted distributions has to be enumerated for calculating the clustering loss. The effect of varying the number of Clusters In the MNIST experiments, the number of categories is 10. We augment the softmax output number up to 100. The rows of FIG7 show that even when the number of output categories is significant larger than the number of true object categories, e.g. 100 > 10, the clustering performance NMI score only degrades slightly.
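The robustness protocol of this appendix, flipping pair labels to a designated recall and keeping only a fraction D of all pairs, can be simulated as in the following sketch; the helper names are ours.

```python
import numpy as np

def simulate_noisy_pairs(pair_labels, recall_similar, recall_dissimilar,
                         density=1.0, seed=0):
    """Corrupt ground-truth pairwise labels to the designated recalls used in
    the heat maps above: flip (1 - recall) of the pairs of each type, then
    keep only a `density` fraction of all pairs for the clustering loss."""
    rng = np.random.default_rng(seed)
    original = np.asarray(pair_labels)
    labels = original.copy()
    for value, recall in ((1, recall_similar), (0, recall_dissimilar)):
        idx = np.flatnonzero(original == value)
        n_flip = int(round((1.0 - recall) * idx.size))
        flipped = rng.choice(idx, size=n_flip, replace=False)
        labels[flipped] = 1 - labels[flipped]
    keep_mask = rng.random(labels.size) < density   # which pairs feed the loss
    return labels, keep_mask
```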
A learnable clustering objective to facilitate transfer learning across domains and tasks
771
scitldr
Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation. However, these approaches assume word embedding spaces are isomorphic between different languages, which has been shown not to hold in practice (Søgaard et al., 2018), and fundamentally limits their performance. This motivates investigating joint learning methods which can overcome this impediment, by simultaneously learning embeddings across languages via a cross-lingual term in the training objective. Given the abundance of parallel data available , we propose a bilingual extension of the CBOW method which leverages sentence-aligned corpora to obtain robust cross-lingual word and sentence representations. Our approach significantly improves cross-lingual sentence retrieval performance over all other approaches, as well as convincingly outscores mapping methods while maintaining parity with jointly trained methods on word-translation. It also achieves parity with a deep RNN method on a zero-shot cross-lingual document classification task, requiring far fewer computational resources for training and inference. As an additional advantage, our bilingual method also improves the quality of monolingual word vectors despite training on much smaller datasets. We make our code and models publicly available. Cross-lingual representations-such as embeddings of words and phrases into a single comparable feature space-have become a key technique in multilingual natural language processing. They offer strong promise towards the goal of a joint understanding of concepts across languages, as well as for enabling the transfer of knowledge and machine learning models between different languages. Therefore, cross-lingual embeddings can serve a variety of downstream tasks such as bilingual lexicon induction, cross-lingual information retrieval, machine translation and many applications of zero-shot transfer learning, which is particularly impactful from resource-rich to low-resource languages. Existing methods can be broadly classified into two groups : mapping methods leverage existing monolingual embeddings which are treated as independent, and apply a postprocess step to map the embeddings of each language into a shared space, through a linear transformation (b; ;). On the other hand, joint methods learn representations concurrently for multiple languages, by combining monolingual and cross-lingual training tasks (; ; ; ; ;). While recent work on word embeddings has focused almost exclusively on mapping methods, which require little to no cross-lingual supervision, (Søgaard et al., 2018) establish that their performance is hindered by linguistic and domain divergences in general, and for distant language pairs in particular. Principally, their analysis shows that cross-lingual hubness, where a few words (hubs) in the source language are nearest cross-lingual neighbours of many words in the target language, and structural non-isometry between embeddings do impose a fundamental barrier to the performance of linear mapping methods. propose using joint learning as a means of mitigating these issues. Given parallel data, such as sentences, a joint model learns to predict either the word or context in both source and target languages. 
As we will demonstrate with from our algorithm, joint methods yield compatible embeddings which are closer to isomorphic, less sensitive to hubness, and perform better on cross-lingual benchmarks. Contributions. We propose the BI-SENT2VEC algorithm, which extends the SENT2VEC algorithm to the cross-lingual setting. We also revisit , another joint learning method, to assess the effectiveness of joint learning over mapping-based methods. Our contributions are • On cross-lingual sentence-retrieval and monolingual word representation quality evaluations, BI-SENT2VEC significantly outperforms competing methods, both jointly trained as well as mapping-based ones while preserving state-of-the-art performance on cross-lingual word retrieval tasks. For dis-similar language pairs, BI-SENT2VEC outperform their competitors by an even larger margin on all the tasks hinting towards the robustness of our method. • BI-SENT2VEC performs on par with a multilingual RNN based sentence encoder, LASER , on MLDoc , a zero-shot crosslingual transfer task on documents in multiple languages. Compared to LASER, our method improves computational efficiency by an order of magnitude for both training and inference, making it suitable for resource or latency-constrained on-device cross-lingual NLP applications. • We verify that joint learning methods consistently dominate state-of-the-art mapping methods on standard benchmarks, i.e., cross-lingual word and sentence retrieval. • Training on parallel data additionally enriches monolingual representation quality, evident by the superior performance of BI-SENT2VEC over FASTTEXT embeddings trained on a 100× larger corpus. We make our models and code publicly available. The literature on cross-lingual representation learning is extensive. Most recent advances in the field pursue unsupervised (; ; ; ; b) or supervised mapping or alignment-based algorithms. All these methods use existing monolingual word embeddings, followed by a cross-lingual alignment procedure as a post-processing step-that is to learn a simple (typically linear) mapping from the source language embedding space to the target language embedding space. Supervised learning of a linear map from a source embedding space to another target embedding space (b) based on a bilingual dictionary was one of the first approaches towards cross-lingual word embeddings. Additionally enforcing orthogonality constraints on the linear map in rotations, and can be formulated as an orthogonal Procrustes problem . However, the authors found the translated embeddings to suffer from hubness, which they mitigate by introducing the inverted softmax as a corrective search metric at inference time. align embedding spaces starting from a parallel seed lexicon such as digits and iteratively build a larger bilingual dictionary during training. In their seminal work, propose an adversarial training method to learn a linear orthogonal map, avoiding bilingual supervision altogether. They further refine the learnt mapping by applying the Procrustes procedure iteratively with a synthetic dictionary generated through adversarial training. They also introduce the'Cross-Domain Similarity Local Scaling' (CSLS) retrieval criterion for translating between spaces, which further improves on the word translation accuracy over nearest-neighbour and inverted softmax metrics. They refer to their work as Multilingual Unsupervised and Supervised Embeddings (MUSE). 
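The CSLS criterion referenced above discounts hub points by their local neighbourhood similarity; a small numpy sketch follows, with k = 10 neighbours as a typical choice rather than a value taken from this paper, and unit-normalized embeddings assumed.

```python
import numpy as np

def csls_retrieve(src_emb, tgt_emb, k=10):
    """Cross-Domain Similarity Local Scaling retrieval: score each
    (source, target) pair by twice their cosine similarity minus the mean
    similarity of each point to its k nearest cross-lingual neighbours,
    which penalizes hubs. Rows are assumed unit-normalized."""
    cos = src_emb @ tgt_emb.T                              # (n_src, n_tgt)
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)      # neighbourhood term per source word
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)      # neighbourhood term per target word
    csls = 2.0 * cos - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)                             # index of the retrieved translation
```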
In this paper, we will use MUSE to denote the unsupervised embeddings introduced by them, and "Procrustes + refine" to denote the supervised embeddings obtained by them. A similar approach uses "multilingual adversarial training" followed by "pseudo-supervised refinement" to obtain unsupervised multilingual word embeddings (UMWE), as opposed to bilingual word embeddings. Another unsupervised approach aligns the second moments of the two word embedding distributions, followed by a further refinement. Building on the success of CSLS in reducing retrieval sensitivity to hubness, RCSLS directly optimizes a convex relaxation of the CSLS function to align existing monolingual embeddings using a bilingual dictionary. While none of the methods described above require parallel corpora, all assume structural isomorphism between existing embeddings for each language (b), i.e., there exists a simple (typically linear) mapping function which aligns all existing embeddings. However, this is not always a realistic assumption (Søgaard et al., 2018); even in small toy examples it is clear that many geometric configurations of points cannot be linearly mapped to their targets. Joint learning algorithms such as TRANSGRAM and Cr5 circumvent this restriction by simultaneously learning embeddings as well as their alignment. TRANSGRAM, for example, extends the Skipgram (a) method to jointly train bilingual embeddings in the same space, on a corpus composed of parallel sentences. In addition to the monolingual Skipgram loss for both languages, they introduce a similar cross-lingual loss where a word from a sentence in one language is trained to predict the word-contents of the sentence in the other. Cr5, on the other hand, uses document-aligned corpora to achieve state-of-the-art results for cross-lingual document retrieval while staying competitive at cross-lingual sentence and word retrieval. TRANSGRAM embeddings have been absent from the discussion in most of the recent work. However, the growing abundance of sentence-aligned parallel data merits a reappraisal of their performance. Recent work uses BIVEC, another bilingual extension of Skipgram, which uses a bilingual dictionary in addition to parallel sentences to obtain word alignments, and compares it with the unsupervised version of VECMAP (b), another mapping-based method. Our experiments show this extra level of supervision in the case of BIVEC is redundant in obtaining state-of-the-art performance. Proposed Model. Our BI-SENT2VEC model is a cross-lingual extension of SENT2VEC, which in turn is an extension of the C-BOW embedding method (a). SENT2VEC is trained on sentence contexts, with the word and higher-order word n-gram embeddings specifically optimized toward obtaining robust sentence embeddings using additive composition. Formally, SENT2VEC obtains the representation v_S of a sentence S by averaging the word n-gram embeddings (including unigrams) as v_S := (1/|R(S)|) Σ_{w∈R(S)} v_w, where R(S) is the set of word n-grams in the sentence S. The SENT2VEC training objective aims to predict a masked word token w_t in the sentence S using the rest of the sentence representation v_{S\{w_t}}. To formulate the training objective, we use the logistic loss ℓ: x → log(1 + e^{−x}) in conjunction with negative sampling. More precisely, for a raw text corpus C, the monolingual training objective for SENT2VEC is given by min_{U,V} Σ_{S∈C} Σ_{w_t∈S} [ ℓ(u_{w_t}^⊤ v_{S\{w_t}}) + Σ_{w'∈N_{w_t}} ℓ(−u_{w'}^⊤ v_{S\{w_t}}) ], where w_t is the target word, N_{w_t} is the set of sampled negative words, and V and U are the source n-gram and target word embedding matrices respectively.
Here, the set of negative words N_{w_t} is sampled from a multinomial distribution where the probability of picking a word is directly proportional to the square root of its frequency in the corpus. Each target word w_t is sampled with probability min{1, √(t/f_{w_t}) + t/f_{w_t}}, where f_{w_t} is the frequency of the word in the corpus. We adapt the SENT2VEC model to bilingual corpora by introducing a cross-lingual loss in addition to the monolingual loss above. Given a sentence pair S = (S_{l1}, S_{l2}), where S_{l1} and S_{l2} are translations of each other in languages l_1 and l_2, the cross-lingual loss for a target word w_t in l_1 is given by ℓ(u_{w_t}^⊤ v_{S_{l2}}) + Σ_{w'∈N_{w_t}} ℓ(−u_{w'}^⊤ v_{S_{l2}}), i.e., the target word is predicted from the representation of the translated sentence. Thus, we use the sentence S_{l1} to predict the constituent words of S_{l2} and vice-versa, in a similar fashion to the monolingual SENT2VEC, as shown in Figure 1. This ensures that the word and n-gram embeddings of both languages lie in the same space. Figure 1: An illustration of the BI-SENT2VEC training process. A word from a sentence pair is chosen as a target and the algorithm learns to predict it using the rest of the sentence (monolingual training component) and the translation of the sentence (cross-lingual component). Assuming C to be a sentence-aligned bilingual corpus and combining the monolingual and cross-lingual losses above, the BI-SENT2VEC objective sums, over all sentence pairs in C and all target words in both sentences of each pair, the monolingual loss within each language and the cross-lingual loss in both directions. Implementation Details. We build our C++ implementation on top of the FASTTEXT library. Model parameters are updated by asynchronous SGD with a linearly decaying learning rate. Our model is trained on the ParaCrawl (Esplà-Gomis et al.) v4.0 datasets for the English-Italian, English-German, English-French, English-Spanish, English-Hungarian and English-Finnish language pairs. For the English-Russian language pair, we concatenate the OpenSubtitles corpus and the Tanzil project (Quran translations) corpus. The number of parallel sentence pairs in the corpora, except for those of English-Finnish and English-Hungarian, ranges from 17 to 32 million. The number of parallel sentence pairs for the dis-similar language pairs (English-Hungarian and English-Finnish) is approximately 2 million. Exact statistics regarding the different corpora can be found in Table 7 in the Appendix. All the sentences were tokenized using Spacy tokenizers for their respective languages. For each dataset, we trained two different models: one with unigram embeddings only, and the other additionally augmented with bigrams. The earlier TRANSGRAM models were trained on a small amount of data (the Europarl corpus). To facilitate a fair comparison, we train new TRANSGRAM embeddings on the same data used for BI-SENT2VEC. Given that TRANSGRAM and BI-SENT2VEC are cross-lingual extensions of Skipgram and SENT2VEC respectively, we use the same parameters as the original works, except increasing the number of epochs for TRANSGRAM to 8 and decreasing the same for BI-SENT2VEC to 5. Additionally, a preliminary hyperparameter search (beyond changing the number of epochs) on BI-SENT2VEC and TRANSGRAM did not improve the results. All parameters for training the TRANSGRAM and BI-SENT2VEC models can be found in Table 6 in the Appendix. In order to make the comparison more extensive, we also train VECMAP (mapping-based) and BIVEC (joint-training) methods on the same corpora using the exact same pipeline.
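A didactic sketch of the per-target-word loss terms just described, using the V and U matrices from the text; this is not the authors' C++/FastText implementation, and target-word and negative sampling are assumed to happen outside the function.

```python
import torch
import torch.nn.functional as F

def logistic_loss(x):
    # l(x) = log(1 + exp(-x)), the binary logistic loss used throughout.
    return F.softplus(-x)

def word_loss(V, U, context_ngram_ids, target_word_id, negative_word_ids):
    """Contribution of one target word. For the monolingual term,
    context_ngram_ids are the n-grams of its own sentence with the target
    word removed; for the cross-lingual term, they are the n-grams of the
    aligned translation. V: n-gram embeddings, U: target word embeddings."""
    v_ctx = V[context_ngram_ids].mean(dim=0)            # additive sentence representation
    loss = logistic_loss(U[target_word_id] @ v_ctx)      # pull the true target word closer
    loss = loss + logistic_loss(-(U[negative_word_ids] @ v_ctx)).sum()  # push negatives away
    return loss
```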
To assess the quality of the word and sentence embeddings obtained as well as their cross-lingual alignment quality, we compare our using the following four benchmarks • Cross-lingual word retrieval • Monolingual word representation quality • Cross-lingual sentence retrieval • Zero-shot cross-lingual transfer of document classifiers where benchmarks are presented in order of increasing linguistic granularity, i.e. word, sentence, and document level. We also analyze the effect of training data by studying the relationship between representation quality and corpus size. We use the code available in the MUSE library 4 for all evaluations except the zero-shot classifier transfer, which is tested on the MLDoc task 5. The task involves retrieving correct translation(s) of a word in a source language from a target language. To evaluate translation accuracy, we use the bilingual dictionaries constructed by . We consider 1500 source-test queries and 200k target words for each language pair and report P@1 scores for the supervised and unsupervised baselines as well as our models in Table 1. Table 1: Word translation retrieval P@1 for various language pairs of MUSE evaluation dictionary . NN: nearest neighbours. CSLS: Cross-Domain Similarity Local Scaling. ('en' is English, 'fr' is French, 'de' is German, 'ru' is Russian, 'it' is Italian) ('uni. ' and 'bi.' denote unigrams and bigrams respectively) (→ denotes translation from the first language to the second and ← the other way around.) We assess the monolingual quality improvement of our proposed cross-lingual training by evaluating performance on monolingual word similarity tasks. To disentangle the specific contribution of the cross-lingual loss, we train the monolingual counterpart of BI-SENT2VEC, SENT2VEC on the same corpora as our method. Performance on monolingual word-similarity tasks is evaluated using the English SimLex-999 and its Italian and German translations, English WS-353 and its German, Italian and Spanish translations. For French, we use a translation of the RG-65 dataset. Pearson scores are used to measure the correlation between human-annotated word similarities and predicted cosine similarities. We also include FASTTEXT monolingual vectors trained on CommonCrawl data (a) which is comprised of 600 billion, 68 billion, 66 billion, 72 billion and 36 billion words of English, French, German, Spanish and Italian respectively and is at least 100× larger than the corpora on which we trained BI-SENT2VEC. We report Pearson correlation scores on different word-similarity datasets for En-It pair in Table 2. Evaluation on other language pairs are similar and can be found in the appendix in Tables 8, 9, and 10. Table 2: Monolingual word similarity task performance of our methods when trained on en-it ParaCrawl data. We report Pearson correlation scores. The primary contribution of our work is to deliver improved cross-lingual sentence representations. We test sentence embeddings for each method obtained by bag-of-words composition for sentence retrieval across different languages on the Europarl corpus. In particular, the tf-idf weighted average is used to construct sentence embeddings from word embeddings. We consider 2000 sentences in the source language dataset and retrieve their translation among 200K sentences in the target language dataset. The other 300K sentences in the Europarl corpus are used to calculate tf-idf weights. 
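A sketch of the tf-idf weighted bag-of-words sentence embedding used in this retrieval setup, assuming precomputed idf weights and cross-lingually aligned word vectors; the names and normalization are ours.

```python
import numpy as np

def tfidf_sentence_embedding(tokens, word_vectors, idf):
    """tf-idf weighted average of word vectors; unknown words are skipped.
    Queries and candidates embedded this way can be ranked by cosine
    similarity (or CSLS) for cross-lingual sentence retrieval."""
    vecs, weights = [], []
    for w in set(tokens):
        if w in word_vectors and w in idf:
            vecs.append(word_vectors[w])
            weights.append(tokens.count(w) * idf[w])     # tf * idf
    if not vecs:
        return None
    vecs = np.asarray(vecs)
    weights = np.asarray(weights, dtype=np.float64)
    emb = (weights[:, None] * vecs).sum(axis=0) / weights.sum()
    return emb / (np.linalg.norm(emb) + 1e-12)
```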
Results for P@1 of unsupervised and supervised benchmarks versus our models are included in Table 3 (cross-lingual sentence retrieval), where we report P@1 scores for 2000 source queries searching over 200 000 target sentences; the reduction in error is calculated between BI-SENT2VEC uni. + bi. with CSLS and the best non-BI-SENT2VEC method. We report a substantial improvement over previous models on cross-lingual word and sentence retrieval for the dis-similar language pairs (English-Finnish and English-Hungarian), using the same evaluation scheme as in Subsections 4.1 and 4.3; results for these pairs are included in Table 4 (cross-lingual word and sentence retrieval for dis-similar language pairs, P@1 scores; 'en' is English, 'fi' is Finnish, 'hu' is Hungarian). The MLDoc multilingual document classification task uses a zero-shot setting, i.e., we train a classifier on embeddings in the source language and report the accuracy of the same classifier applied to the target language. As the classifier, we use a simple feed-forward neural network with two hidden layers of size 10 and 8 respectively, optimized using the Adam optimizer. Each document is represented using the sum of its sentence embeddings. A document classifier is thus trained on one language and tested on another without additional training or fine-tuning, and we report % accuracy. We compare the performance of BI-SENT2VEC with the LASER sentence embeddings in Table 5. The LASER sentence embedding model is a multilingual sentence embedding model composed of a biLSTM encoder and an LSTM decoder. It uses a shared byte-pair-encoding based vocabulary of 50k words. The LASER model which we compare to was trained on 223M sentences for 93 languages and requires 5 days to train on 16 V100 GPUs, compared to our model which takes 1-2.5 hours for each language pair on 30 CPU threads. We conduct an ablation study on how the performance of BI-SENT2VEC embeddings depends on the size of the training corpus. We uniformly sample smaller subsets of the En-Fr ParaCrawl dataset and train a BI-SENT2VEC model on them. We test word/sentence translation performance with the CSLS retrieval criterion, and monolingual embedding quality for En-Fr, with increasing ParaCrawl corpus size. The results are illustrated in Figures 2 and 3. In the following section, we discuss the results on monolingual and cross-lingual benchmarks, presented in Tables 1-5, and a data ablation study of how the model behaves with increasing parallel corpus size (Figures 2-3). The most impressive outcome of our experiments is the improved cross-lingual sentence retrieval performance, which we elaborate on along with word translation in the next subsection. For cross-lingual tasks, we observe in Table 1 that jointly trained embeddings produce much better results on cross-lingual word and sentence retrieval tasks. BI-SENT2VEC's performance on word-retrieval tasks is uniformly superior to mapping methods, achieving up to 11.5% more in P@1 than RCSLS for the English-to-German language pair, consistent with the results reported in prior work. It is also on par with, or better than, competing joint methods except on translation from Russian to English, where TRANSGRAM receives a significantly better score. For word retrieval tasks, there is no discernible difference between the CSLS and NN criteria for BI-SENT2VEC, suggesting the relative absence of the hubness phenomenon which significantly hinders the performance of cross-lingual word embedding methods. Our principal contribution is in improving cross-lingual sentence retrieval.
Table 3 shows BI-SENT2VEC decisively outperforms all other methods by a wide margin, reducing the relative P@1 error anywhere from 31.5% to 55.1%. Our model displays considerably less variance than others in quality across language pairs, with at most a ≈ 5% deficit between best and worst, and nearly symmetric accuracy within a language pair. TRANSGRAM also outperforms the mapping-based methods, but still falls significantly short of BI-SENT2VEC's. These can be attributed to the fact that BI-SENT2VEC directly optimizes for obtaining robust sentence embeddings using additive composition of its word embeddings. Since BI-SENT2VEC's learning objective is closest to a sentence retrieval task amongst current state-ofthe-art methods, it can surpass them without sacrificing performance on other tasks. Cross-lingual evaluations on dis-similar language pairs Unlike other language pairs in the evaluation, English-Finnish and English-Hungarian pairs are composed of languages from two different language families(English being an Indo-European language and the other language being a Finno-Ugric language). In Table 4, we see that the performance boost achieved by BI-SENT2VEC on competing methods methods is more pronounced in the case of dis-similar language pairs as compared to paris of languages close to each other. This observation affirms the suitaibility of BI-SENT2VEC for learning joint representations on languages from different families. For the monolingual word similarity tasks, we observe large gains over existing methods. SENT2VEC is trained on the same corpora as us, and FASTTEXT vectors are trained on the CommonCrawl corpora which are more than 100 times larger than ParaCrawl v4.0. In Table 2, we see that BI-SENT2VEC outperforms them by a significant margin on SimLex-999 and WS-353, two important monolingual word quality benchmarks. This observation is in accordance with the fact that bilingual contexts can be surprisingly effective for learning monolingual word representations. However, amongst the joint-training methods, BI-SENT2VEC also outperforms TRANSGRAM and BIVEC trained on the same corpora by a significant margin, again hinting at the superiority of the sentence level loss function over a fixed context window loss. Effect of n-grams report improved on monolingual word representation evaluation tasks for SENT2VEC and FASTTEXT word vectors by training them alongside word n-grams. Our method incorporates their based on the observation that unigram vectors trained alongside with bigrams significantly outperform unigrams alone on the majority of the evaluation tasks. We can see from Tables 1 -3 that this holds for the bilingual case as well. However, in case of dis-similar language pairs (Table 4), we observe that using n-grams degrades the cross-lingual performance of the embeddings. This observation suggests that use of higher order n-grams may not be helpful for language pairs where the grammatical structures are contrasting. Considering the cross-lingual performance curve exhibited by BI-SENT2VEC in Figure 2, increasing corpus size for the English-French datasets up to 1-3.1M lines appears to saturate the performance of the model on cross-lingual word/sentence retrieval, after which it either plateaus or degrades slightly. This is an encouraging , indicating that joint methods can use significantly less data to obtain promising performance. 
This implies that joint methods may not necessarily be constrained to high-resource language pairs as previously assumed, though further experimentation is needed to verify this claim. It should be noted from Figure 3 that the monolingual quality does keep improving with an increase in the size of the corpus. A potential way to overcome this issue of plateauing cross-lingual performance is to give different weights to the monolingual and cross-lingual component of the loss with the weights possibly being dependent on other factors such as training progress. Comparison with a cross-lingual sentence embedding model and performance on document level task On the MLDoc classifier transfer task where we evaluate a classifier learned on documents in one language on documents in another, Table 5 shows we achieve parity with the performance of the LASER model for language pairs involving English, where BI-SENT2VEC's average accuracy of 77.8% is slightly higher than LASER's 77.3%. While the comparison is not completely justified as LASER is multilingual in nature and is trained on a different dataset, one must emphasize that BI-SENT2VEC is a bag-of-words method as compared to LASER which uses a multi-layered biLSTM sentence encoder. Our method only requires to average a set of vectors to encode sentences reducing its computational footprint significantly. This makes BI-SENT2VEC an ideal candidate for on-device computationally efficient cross-lingual NLP, unlike LASER which has a huge computational overhead and specialized hardware requirement for encoding sentences. We introduce a cross-lingual extension of an existing monolingual word and sentence embedding method. The proposed model is tested at three levels of linguistic granularity: words, sentences and documents. The model outperforms all other methods by a wide margin on the cross-lingual sentence retrieval task while maintaining parity with the best-performing methods on word translation tasks. Our method achieves parity with LASER on zero-shot document classification, despite being a much simpler model. We also demonstrate that training on parallel data yields a significant improvement in the monolingual word representation quality. The success of our model on the bilingual level calls for its extension to the multilingual level especially for pairs which have little or no parallel corpora. While the amount of bilingual/multilingual parallel data has grown in abundance, the amount of monolingual data available is practically limitless. Consequently, we would like to explore training cross-lingual embeddings with a large amount of raw text combined with a smaller amount of parallel data. We used ParaCrawl v4.0 corpora for training BI-SENT2VEC, SENT2VEC,BIVEC,VECMAP and TRANSGRAM embeddings except for En-Ru pair for which we used OpenSubtitles and Tanzil corpora combined. MUSE and RCSLS vectors were trained from FASTTEXT vectors obtained from Wikipedia dumps (a
Joint method for learning cross-lingual embeddings with state-of-the-art performance on cross-lingual tasks and monolingual quality
772
scitldr
To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty on the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning. Generative models of sequential data have received a lot of attention, due to their wide applicability in domains such as speech synthesis BID18, neural translation BID3, image captioning BID22, and many others. Different application domains will often have different requirements (e.g. long term coherence, sample quality, abstraction learning, etc.), which in turn will drive the choice of the architecture and training algorithm. Of particular interest to this paper is the problem of reinforcement learning in partially observed environments, where, in order to act and explore optimally, agents need to build a representation of the uncertainty about the world, computed from the information they have gathered so far. While an agent endowed with memory could in principle learn such a representation implicitly through model-free reinforcement learning, in many situations the reinforcement signal may be too weak to quickly learn such a representation in a way which would generalize to a collection of tasks. Furthermore, in order to plan in a model-based fashion, an agent needs to be able to imagine distant futures which are consistent with the agent's past. In many situations however, planning step-by-step is not a cognitively or computationally realistic approach. To successfully address an application such as the above, we argue that a model of the agent's experience should exhibit the following properties:• The model should learn an abstract state representation of the data and be capable of making predictions at the state level, not just the observation level.• The model should learn a belief state, i.e. a deterministic, coded representation of the filtering posterior of the state given all the observations up to a given time. A belief state contains all the information an agent has about the state of the world and thus about how to act optimally.• The model should exhibit temporal abstraction, both by making'jumpy' predictions (predictions several time steps into the future), and by being able to learn from temporally separated time points without backpropagating through the entire time interval. To our knowledge, no model in the literature meets these requirements. In this paper, we develop a new model and associated training algorithm, called Temporal Difference Variational Auto-Encoder (TD-VAE), which meets all of the above requirements. We first develop TD-VAE in the sequential, non-jumpy case, by using a modified evidence lower bound (ELBO) for stochastic state space models (; BID12 BID8 which relies on jointly training a filtering posterior and a local smoothing posterior. 
We demonstrate that on a simple task, this new inference network and associated lower bound lead to improved likelihood compared to methods classically used to train deep state-space models. Following the intuition given by the sequential TD-VAE, we develop the full TD-VAE model, which learns from temporally extended data by making jumpy predictions into the future. We show it can be used to train consistent jumpy simulators of complex 3D environments. Finally, we illustrate how training a filtering a posterior leads to the computation of a neural belief state with good representation of the uncertainty on the state of the environment.2 MODEL DESIDERATA Autoregressive models. One of the simplest way to model sequential data (x 1, . . ., x T) is to use the chain rule to decompose the joint sequence likelihood as a product of conditional probabilities, i.e. log p(x 1, . . ., x T) = t log p(x t | x 1, . . ., x t−1). This formula can be used to train an autoregressive model of data, by combining an RNN which aggregates information from the past (recursively computing an internal state h t = f (h t−1, x t)) with a conditional generative model which can score the data x t given the context h t. This idea is used in handwriting synthesis BID15, density estimation , image synthesis (van den BID19, audio synthesis (van den BID20, video synthesis , generative recall tasks BID13, and environment modeling (; BID9 .While these models are conceptually simple and easy to train, one potential weakness is that they only make predictions in the original observation space, and don't learn a compressed representation of data. As a , these models tend to be computationally heavy (for video prediction, they constantly decode and re-encode single video frames). Furthermore, the model can be computationally unstable at test time since it is trained as a next step model (the RNN encoding real data), but at test time it feeds back its prediction into the RNN. Various methods have been used to alleviate this issue; BID14 BID0.State-space models. An alternative to autoregressive models are models which operate on a higher level of abstraction, and use latent variables to model stochastic transitions between states (grounded by observation-level predictions). This enables to sample state-to-state transitions only, without needing to render the observations, which can be faster and more conceptually appealing. They generally consist of decoder or prior networks, which detail the generative process of states and observations, and encoder or posterior networks, which estimate the distribution of latents given the observed data. There is a large amount of recent work on these type of models, which differ in the precise wiring of model components BID4 BID10; BID1 BID12;; BID8; BID17.Let z = (z 1, . . ., z T) be a state sequence and x = (x 1, . . ., x T) an observation sequence. We assume a general form of state-space model, where the joint state and observation likelihood can be written as p(x, z) = t p(z t | z t−1)p(x t | z t).1 These models are commonly trained with a VAEinspired bound, by computing a posterior q(z | x) over the states given the observations. Often, the posterior is decomposed autoregressively: q(z | x) = t q(z t | z t−1, φ t (x)), where φ t is a function of (x 1, . . ., x t) for filtering posteriors or the entire sequence x for smoothing posteriors. 
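To illustrate why sampling state-to-state transitions without rendering observations is appealing, here is a minimal sketch of the generative side of such a state-space model, with a diagonal-Gaussian transition p(z_t | z_{t-1}) and a decoder p(x_t | z_t). The module shapes and names are illustrative assumptions, not the architecture used in any particular paper.

import torch
import torch.nn as nn

class StateSpaceModel(nn.Module):
    def __init__(self, z_dim=8, x_dim=32, hidden=64):
        super().__init__()
        # transition p(z_t | z_{t-1}) parameterised as a diagonal Gaussian
        self.trans = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2 * z_dim))
        # decoder p(x_t | z_t); here only the mean over observations is produced
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def rollout(self, z0, steps):
        # Roll forward in latent space only; decode frames at the end.
        z, zs = z0, []
        for _ in range(steps):
            mu, logvar = self.trans(z).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # sample z_t
            zs.append(z)
        return torch.stack([self.dec(z) for z in zs], dim=1)       # decoded observations

model = StateSpaceModel()
frames = model.rollout(torch.zeros(1, 8), steps=20)   # shape (1, 20, 32)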
This leads to the following lower bound: DISPLAYFORM0 A key feature of sequential models of data is that they allow to reason about the conditional distribution of the future given the past: p(x t+1, . . ., x T | x 1, . . ., x t). For reinforcement learning in partially observed environments, this distribution governs the distribution of returns given past observations, and as such, it is sufficient to derive the optimal policy. For generative sequence modeling, it enables conditional generation of data given a context sequence. For this reason, it is desirable to compute sufficient statistics b t = b t (x 1, . . ., x t) of the future given the past, which allow to rewrite the conditional distribution as p(x t+1, . . ., x T | x 1, . . ., x t) ≈ p(x t+1, . . ., x T | b t). For an autoregressive model as described in section 2.1, the internal RNN state h t can immediately be identified as the desired sufficient statistics b t. However, for the reasons mentioned in the previous section, we would like to identify an equivalent quantity for a state-space model. For a state-space model, the filtering distribution p(z t | x 1, . . ., x t), also known as the belief state in reinforcement learning, is sufficient to compute the conditional future distribution, due to the Markov assumption underlying the state-space model and the following derivation: DISPLAYFORM1 Thus, if we train a network that extracts a code DISPLAYFORM2, b t would contain all the information about the state of the world the agent has, and would effectively form a neural belief state, i.e. a code fully characterizing the filtering distribution. Classical training of state-space model does not compute a belief state: by computing a joint, autoregressive posterior q(z | x) = t q(z t | z t−1, x), some of the uncertainty about the marginal posterior of z t may be'leaked' in the sample z t−1. Since that sample is stochastic, to obtain all information from (x 1, . . ., x t) about z t, we would need to re-sample z t−1, which would in turn require re-sampling z t−2 all the way to z 1.While the notion of a belief state itself and its connection to optimal policies in POMDPs is well known BID2; ), it has often been restricted to the tabular case (Markov chain), and little work investigates computing belief states for learned deep models. A notable exception is , which uses a neural form of particle filtering, and represents the belief state more explicitly as a weighted collection of particles. Related to our definition of belief states as sufficient statistics is the notion of predictive state representations (PSRs) ; see also BID21 for a model that learns PSRs which, combined with a decoder, can predict future observations. Our last requirement for the model is that of temporal abstraction. We postpone the discussion of this aspect until section 4. In this section, we develop a sequential model that satisfies the requirements given in the previous section, namely (a) it constructs a latent state-space, and (b) it creates a online belief state. We consider an arbitrary state space model with joint latent and observable likelihood given by DISPLAYFORM0, and we aim to optimize the data likelihood log p(x). We begin by autoregressively decomposing the data likelihood as: log p(x) = t log p(x t | x <t). 
For a given t, we evaluate the conditional likelihood p(x t | x <t) by inferring over two latent states only: z t−1 and z t, as they will naturally make belief states appear for times t − 1 and t: DISPLAYFORM1 Because of the Markov assumptions underlying the state-space model, we can simplify DISPLAYFORM2. Next, we choose to decompose q(z t−1, z t | x ≤t) as a belief over z t and a one-step smoothing distribution DISPLAYFORM3. We obtain the following belief-based ELBO for state-space models: DISPLAYFORM4 Both quantities p(z t−1 | x ≤t−1) and q(z t | x ≤t) represent the belief state of the model at different times, so at this stage we approximate them with the same distribution DISPLAYFORM5 representing the belief state code for z t. Similarly, we represent the smoothing posterior over z t−1 as q(z t−1 | z t, b t−1, b t). We obtain the following loss: DISPLAYFORM6 We provide an intuition on the different terms of the ELBO in the next section. The model derived in the previous section expresses a state model p(z t | z t−1) that describes how the state of the world evolves from one time step to the next. However, in many applications, the relevant timescale for planning may not be the one at which we receive observations and execute simple actions. Imagine for example planning for a trip abroad; the different steps involved (discussing travel options, choosing a destination, buying a ticket, packing a suitcase, going to the airport, and so on), all occur at vastly different time scales (potentially months in the future at the beginning of the trip, and days during the trip). Certainly, making a plan for this situation does not involve making second-by-second decisions. This suggests that we should look for models that can imagine future states directly, without going through all intermediate states. Beyond planning, there are several other reasons that motivate modeling the future directly. First, training signal coming from the future can be stronger than small changes happening between time steps. Second, the behavior of the model should ideally be independent from the underlying temporal sub-sampling of the data, if the latter is an arbitrary choice. Third, jumpy predictions can be computationally efficient; when predicting several steps into the future, there may be some intervals where the prediction is either easy (e.g. a ball moving straight), or the prediction is complex but does not affect later time steps -which Neitz et al. FORMULA1 call inconsequential chaos. There is a number of research directions that consider temporal jumps. Koutnik et al. FORMULA1 and BID11 consider recurrent neural network with skip connections, making it easier to bridge distant timesteps. BID8 temporally sub-sample the data and build a jumpy model (for fixed jump size) of this data; but by doing so they also drop the information contained in the skipped observations. Neitz et al. FORMULA1 and Jayaraman et al. FORMULA1 predict sequences with variable time-skips, by choosing as target the most predictable future frames. They predict the observations directly without learning appropriate states, and only focus on nearly fully observed problems (and therefore do not need to learn a notion of belief state). For more general problems, this is a fundamental limitation, as even if one could in principle learn a jumpy observation model p(x t+δ |x ≤t), it cannot be used recursively (feeding x t+δ back to the RNN and predicting x t+δ+δ). 
This is because x t+δ does not capture the full state of the system and so we would be missing information from t to t + δ to fully characterize what happens after time t + δ. In addition, x t+δ might not be appropriate even as target, because some important information can only be extracted from a number of frames (potentially arbitrarily separated), such as a behavior of an agent. Motivated by the model derived in section 3, we extend sequential TD-VAE to exhibit time abstraction. We start from the same assumptions and architectural form: there exists a sequence of states z 1,..., z T from which we can predict the observations x 1,..., x T. A forward RNN encodes a belief state b t from past observations x ≤t. The main difference is that, instead of relating information known at times t and t + 1 through the states z t and z t+1, we relate two distant time steps t 1 and t 2 through their respective states z t1 and z t2, and we learn a jumpy, state-to-state model p(z t2 | z t1) between z t1 and z t2. Following equation 5, the negative loss for the TD-VAE model is: DISPLAYFORM0 To train this model, one should choose the distribution of times t 1, t 2; for instance, t 1 can be chosen uniformly from the sequence, and t 2 − t 1 uniformly over some finite range [1, D]; other approaches could be investigated. FIG2 describes in detail the computation flow of the model. Finally, it would be desirable to model the world with different hierarchies of state, the higher-level states predicting the same-level or lower-level states, and ideally representing more invariant or abstract information. For this reason, we also develop stacked (hierarchical) version of TD-VAE, which uses several layers of latent states. Hierarchical TD-VAE is detailed in the appendix. In this section, we provide a more intuitive explanation behind the computation and loss of the model. Assume we want to predict a future time step t 2 from all the information we have up until time t 1. All relevant information up until time t 1 (respectively t 2) has been compressed into a code b t1 (respectively b t2). We make an observation x t of the world 2 at every time step t, but posit the existence of a state z t which fully captures the full condition of the world at time t. Consider an agent at the current time t 2. At that time, the agent can make a guess of what the state of the world is by sampling from its belief model p B (z t2 | b t2). Because the state z t2 should entail the corresponding observation x t2, the agent aims to maximize p(x t2 | z t2) (first term of the loss), with a variational bottleneck penalty − log p(z t2 | b t2) (second term of the loss) to prevent too much information from the current observation x t2 from being encoded into z t2. Then follows the question'could the state of the world at time t 2 have been predicted from the state of the world at time t 1?'. In order to ascertain this, the agent must estimate the state of the world at time t 1. By time t 2, the agent has aggregated observations between t 1 and t 2 that are informative about the state of the world at time t 1, which, together with the current guess of the state of the world z t2, can be used to form an ex post guess of the state of the world. This is done by computing a smoothing distribution q(z t1 |z t2, b t1, b t2) and drawing a corresponding sample z t1. Having guessed states of the world z t1 and z t2, the agent optimizes its predictive jumpy model of the world state p(z t2 | z t1) (third term of the loss). 
Finally, it should attempt to see how predictable the revealed information was, or in other words, to assess whether the smoothing distribution q(z t1 | z t2, b t2) could have been predicted from information only available at time t 1 (this is indirectly predicting z t2 from the state of knowledge b t1 at time t 1 -the problem we started with). The agent can do so by minimizing the KL between the smoothing distribution and the belief distribution at time DISPLAYFORM0 (fourth term of the loss). Summing all the losses described so far, we obtain the TD-VAE loss. In reinforcement learning, the state of an agent represents a belief about the sum of discounted rewards R t = τ r t+τ γ τ. In the classic setting, the agent only models the mean of this distribution represented by the value function V t or action dependent Q-function Q a t . Recently in , a full distribution over R t has been considered. To estimate V t1 or Q a t1 at time t 1, one does not usually wait to get all the rewards to compute R t1. Instead, one uses an estimate at some future time t 2 as a bootstrap to estimate V t1 or Q a t1 (temporal difference). In our case, the model expresses a belief p B (z t | b t) about possible future states instead of the sum of discounted rewards. The model trains the belief p B (z t1 | b t1) at time t 1 using belief p B (z t2 | b t2) at some time t 2 in the future. It accomplishes this by (variationally) auto-encoding a sample z t2 of the future state into a sample z t1, using the approximate posterior distribution q(z t1 | z t2, b t1, b t2) and the decoding distribution p(z t2 | z t1). This auto-encoding mapping translates between states at t 1 and t 2, forcing beliefs at the two time steps to be consistent. Sample z t1 forms the target for training the belief p B (z t1 | b t1), which appears as a prior distribution over z t1. The first experiment using sequential TD-VAE, which enables a direct comparison to related algorithms for training state-space models. Subsequent experiments use the full TD-VAE model. We use a partially observed version of the MiniPacman environment (Racanière et al., 2017), shown in FIG0. The agent (Pacman) navigates a maze, and tries to eat all the food while avoiding being eaten by a ghost. Pacman sees only a 5 × 5 window around itself. To achieve a high score, the agent needs to form a belief state that captures memory of past experience (e.g. which parts of the maze have been visited) and uncertainty on the environment (e.g. where the ghost might be).We evaluate the performance of sequential (non-jumpy) TD-VAE on the task of modeling a sequence of the agent's observations. We compare it with two state-space models trained using the standard ELBO of equation 1:• A filtering model with encoder q(z | x) = t q(z t | z t−1, b t), where b t = RNN(b t−1, x t).• A mean-field model with encoder q(z | x) = t q(z t | b t), where b t = RNN(b t−1, x t). FIG0 shows the ELBO and estimated negative log probability on a test set of MiniPacman sequences for each model. TD-VAE outperforms both baselines, whereas the mean-field model is the least well-performing. We note that b t is a belief state for the mean-field model, but not for the filtering model; the encoder of the latter explicitly depends on the previous latent state z t−1, hence b t ELBO and estimated negative log probability on a test set of MiniPacman sequences. Lower is better. Log probability is estimated using importance sampling with the encoder as proposal. is not its sufficient statistics. 
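As an aside to the experimental comparison above, the sketch below spells out one way to compute the four TD-VAE loss terms just described, with diagonal-Gaussian belief, smoothing, transition, and decoder heads. It follows the structure of the loss as stated in the text; the helper names, the Gaussian parameterisations, and the convention that each head returns a (mean, std) pair are simplifying assumptions for illustration.

import torch
from torch.distributions import Normal, kl_divergence

def td_vae_loss(b_t1, b_t2, x_t2, belief_net, smooth_net, trans_net, decoder):
    # One-sample estimate of the TD-VAE loss for a time pair t1 < t2.
    # b_t1, b_t2 are belief codes from the forward aggregation RNN.
    p_b2 = Normal(*belief_net(b_t2))
    z_t2 = p_b2.rsample()                                # guess of the state at t2
    q_s = Normal(*smooth_net(torch.cat([z_t2, b_t1, b_t2], dim=-1)))
    z_t1 = q_s.rsample()                                 # ex post guess of the state at t1
    p_b1 = Normal(*belief_net(b_t1))                     # belief at t1, prior for z_t1
    p_tr = Normal(*trans_net(z_t1))                      # jumpy transition p(z_t2 | z_t1)
    recon = Normal(*decoder(z_t2)).log_prob(x_t2).sum(-1)   # first term: log p(x_t2 | z_t2)
    bottleneck = p_b2.log_prob(z_t2).sum(-1)                # second term: log p_B(z_t2 | b_t2)
    transition = p_tr.log_prob(z_t2).sum(-1)                # third term: log p(z_t2 | z_t1)
    kl_smooth = kl_divergence(q_s, p_b1).sum(-1)            # fourth term: KL(q || p_B at t1)
    return -(recon - bottleneck + transition - kl_smooth).mean()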
This comparison shows that naively restricting the encoder in order to obtain a belief state hurts the performance significantly; TD-VAE overcomes this difficulty. In this experiment, we show that the model is able to learn the state and roll forward in jumps. We consider sequences of length 20 of images of MNIST digits. For each sequence, a random digit from the dataset is chosen, as well as the direction of movement (left or right). At each time step, the digit moves by one pixel in the chosen direction, as shown in FIG1. We train the model with t 1 and t 2 separated by a random amount t 2 − t 1 from the interval. We would like to see whether the model at a given time can roll out a simulated experience in time steps t 1 = t + δ 1, t 2 = t 1 + δ 2,... with δ 1, δ 2,... > 1, without considering the inputs in between these time points. Note that it is not sufficient to predict the future inputs x t1,... as they do not contain information about whether the digit moves left or right. We need to sample a state that contains this information. We roll out a sequence from the model as follows: (a) b t is computed by the aggregation recurrent network from observations up to time t; (b) a state z t is sampled from p B (z t | b t); (c) a sequence of states is rolled out by repeatedly sampling z ← z ∼ p(z | z) starting with z = z t; (d) each z is decoded by p(x | z), producing a sequence of frames. The ing sequences are shown in FIG1. We see that indeed the model can roll forward the samples in steps of more than one elementary time step (the sampled digits move by more than one pixel) and that it preserves the direction of motion, demonstrating that it rolls forward a state. We would like to demonstrate that the model can build a state even when little information is present in each observation, and that it can sample states far into the future. For this we consider a 1D sequence obtained from a noisy harmonic oscillator, as shown in Figure 4 (first and fourth rows). The frequencies, initial positions and initial velocities are chosen at random from some range. At every update, noise is added to the position and the velocity of the oscillator, but the energy is approximately preserved. The model observes a noisy version of the current position. Attempting to predict the input, which consists of one value, 100 time steps in the future would be uninformative; such a Figure 4: Skip-state prediction for 1D signal. The input is generated by a noisy harmonic oscillator. Rollouts consist of (a) a jumpy state transition with either dt = 20 or dt = 100, followed by 20 state transitions with dt = 1. The model is able to create a state and predict it into the future, correctly predicting frequency and magnitude of the signal.prediction wouldn't reveal what the frequency or the magnitude of the signal is, and because the oscillator updates are noisy, the phase information would be nearly lost. Instead, we should try to predict as much as possible about the state, which consists of frequency, magnitude and position, and it is only the position that cannot be accurately predicted. The aggregation RNN is an LSTM; we use a hierarchical TD-VAE with two layers, where the latent variables in the higher layer are sampled first, and their are passed to the lower layer. The belief, smoothing and state-transition distributions are feed-forward networks, and the decoder simply extracts the first component from the z of the first layer. 
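The rollout procedure used in these experiments (aggregate observations into b_t, sample z_t from the belief, repeatedly sample the next state from the transition model, decode each state) can be sketched as follows. Conditioning the transition on the skip size dt mirrors the setup described for this experiment; all helper names and the (mean, std) output convention are illustrative assumptions.

import torch
from torch.distributions import Normal

@torch.no_grad()
def jumpy_rollout(x_context, agg_rnn, belief_net, trans_net, decoder, jumps):
    # Roll out in latent space with the time skips in `jumps` (e.g. [20, 1, 1, ...]),
    # decoding one frame after every jump.
    b = agg_rnn(x_context)                      # belief code from observations up to t
    z = Normal(*belief_net(b)).sample()         # z_t ~ p_B(z_t | b_t)
    frames = []
    for dt in jumps:
        dt_feat = torch.full(z.shape[:-1] + (1,), float(dt))
        z = Normal(*trans_net(torch.cat([z, dt_feat], dim=-1))).sample()
        frames.append(Normal(*decoder(z)).mean)    # decoded frame for this state
    return torch.stack(frames, dim=1)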
We also feed the time interval t 2 − t 1 into the smoothing and state-transition distributions. We train on sequences of length 200, with t 2 − t 1 taking values chosen at random from with probability 0.8 and from We analyze what the model has learned as follows. We pick time t 1 = 60 and sample z t1 ∼ p B (z t1 | b t1). Then, we choose a time interval δ t ∈ {20, 100} to skip, sample from the forward model p(z 2 | z 1, δ t) to obtain z t2 at t 2 = t 1 + δ t. To see the content of this state, we roll forward 20 times with time step δ = 1 and plot the , shown in Figure 4. We see that indeed the state z t2 is predicted correctly, containing the correct frequency and magnitude of the signal. We also see that the position (phase) is predicted well for dt = 20 and less accurately for dt = 100 (at which point the noisiness of the system makes it unpredictable).Finally, we show that TD-VAE training can improve the quality of the belief state. For this experiment, the harmonic oscillator has a different frequency in each interval,,,. The first three frequencies f 1, f 2, f 3 are chosen at random. The final frequency f 4 is chosen to be one fixed value f a if f 1 > f 2 and another fixed value f b otherwise (f a and f b are constants). In order to correctly model the signal in the final time interval, the model needs to learn the relation between f 1 and f 2, store it over length of 100 steps, and apply it over a number of time steps (due to the noise) in the final interval. To test whether the belief state contains the information about this relationship, we train a binary classifier from the belief state to the final frequency f 4 at points just before the final interval. We compare two models with the same recurrent architecture (an LSTM), but trained with different objective: next-step prediction vs TD-VAE loss. The figure on the right shows the classification accuracy for the two methods, averaged over 20 runs. We found that the longer the separating time interval (containing frequency f 3) and the smaller the size of the LSTM, the better TD-VAE is compared to next-step predictor. In the final experiment, we analyze the model on a more visually complex domain. We use sequences of frames seen by an agent solving tasks in the DeepMind Lab environment BID5. We aim to demonstrate that the model holds explicit beliefs about various possible futures, and that it can roll out in jumps. We suggest functional forms inspired by convolutional DRAW: we use convolutional LSTMs for all the circles in FIG5 and make the model 16 layers deep (except for the forward updating LSTMs which are fully connected with depth 4).We use time skips t 2 − t 1 sampled uniformly from and analyze the content of the belief state b. We take three samples z 1, z 2, z 3 from p B (z | b), which should represent three instances of possible futures. FIG3 (left) shows that they decode to roughly the same frame. To see what they represent about the future, we draw 5 samples z k i ∼ p(ẑ | z), k = 1,..., 5 and decode them, as shown in FIG3 (right). We see that for a given i, the predicted samples decode to similar frames (images in the same row). However z's for different i's decode to different frames. This means b represented a belief about several different possible futures, while different z i each represent a single possible future. Finally, we show what rollouts look like. We train on time separations t 2 − t 1 chosen uniformly from on a task where the agent tends to move forward and rotate. FIG4 shows 4 rollouts from the model. 
We see that the motion appears to go forward and into corridors and that it skips several time steps (real single step motion is slower). In this paper, we argued that an agent needs a model that is different from an accurate step-by-step environment simulator. We discussed the requirements for such a model, and presented TD-VAE, a sequence model that satisfies all requirements. TD-VAE builds states from observations by bridging time points separated by random intervals. This allows the states to relate to each other directly over longer time stretches and explicitly encode the future. Further, it allows rolling out in state-space and in time steps larger than, and potentially independent of, the underlying temporal environment/data step size. In the future, we aim to apply TD-VAE to more complex settings, and investigate a number of possible uses in reinforcement learning such are representation learning and planning. In section 3, we derive an approximate ELBO which forms the basis of the training loss of the one-step TD-VAE. One may wonder whether a similar idea may underpin the training loss of the jumpy TD-VAE. Here we show how to modify the derivation to provide an approximate ELBO for a slightly different training regime. Assume a sequence (x 1, . . ., x T), and an arbitrary distribution S over subsequences x s = (x t1, . . ., x tn) of x. For each time index t i, we suppose a state z ti, and model the subsequence x s with a jumpy state-space model p(x s) = i p(z ti |z ti−1)p(x ti |z ti); denote z s = (z t1, . . ., z tn) the state subsequence. We use the exact same machinery as the next-step ELBO, except that we enrich the posterior distribution over z s by making it depend not only on observation subsequence x s, but on the entire sequence x. This is possible because posterior distributions can have arbitrary contexts; the observations which are part of x but not x s effectively serve as auxiliary variable for a stronger posterior. We use the full sequence x to form a sequence of belief states b t at all time steps. We use in particular the ones computed at the subsampled times t i. By following the same derivation as the one-step TD-VAE, we obtain: DISPLAYFORM0 which, using the same belief approximations as the next step TD-VAE, becomes: DISPLAYFORM1 which is the same loss as the TD-VAE for a particular choice of the sampling scheme S (only sampling pairs). In this section we start with a general recurrent variational auto-encoder and consider how the desired properties detailed in sections 1 and 2 constrain the architecture. We will find that these constraints in fact naturally lead to the TD-VAE model. Let us first consider a relatively general form of temporal variational auto-encoder. We consider recurrent models where the same module is applied at every step, and where outputs are sampled one at a time (so that arbitrarily long sequences can be generated). A very general form of such an architecture consist of forward-backward encoder RNNs and a forward decoder RNN (Figure 7) but otherwise allowing for all the connections. Several works BID10; BID1 BID12; BID14 BID8 ) fall into this framework. Now let us consider our desired properties. In order to sample forward in latent space, the encoder must not feed into the decoder or the prior of the latent variables, since observations are required to compute the encoded state, and we would therefore require the sampled observations to compute the distribution over future states and observations. 
We next consider the constraint of computing a belief state b t. The belief state b t represents the state of knowledge up to time t, and therefore cannot receive an input from the backwards decoder. Figure 7: Recurrent variational auto-encoder. General recurrent variational auto-encoder, obtained by imposing recurrent structure, forward sampling and allowing all potential connections. Note that the encoder can have several alternating layers of forward and backward RNNs. Also note that the connection 1 has to be absent if the backwards encoder is used. Possible skip connections are not shown as they can directly be implemented in the RNN weights. If connections 2 are absent, the model is capable of forward sampling in latent space without going back to observations. Furthermore, b t should have an unrestricted access to information; it should ideally not be disturbed by sampling (two identical agents with the same information should compute the same information; this will not be the case if the computation involves sampling), nor go through information bottlenecks. This suggests using the forward encoder for computing the belief state. This prevents running the backwards inference from the end of the sequence. However if we assume that p B represents our best belief about the future, we can take a sample from it as an instance of the future: z t2 ∼ p B (z t2 |b t2). It forms a type of bootstrap information. Then we can go backwards and infer what would the world have looked like given this future (e.g. the object B was still in the box even if we don't see it). Using VAE training, we sample z 1 from its posterior q(z t1 |z t2, b t2, b t1) (the conditioning variables are the ones we have available locally), using p B (z t1 |b t1) as prior. Conversely, for t 2, we sample from p B (z t2 |b t2) as posterior, but with p(z t2 |z t1) as prior. We therefore obtain the VAE losses log q(z 1 |z 2, s 1, s 2) − log p B (z 1 |s 1) at t 1 and log p B (z 2 |s 2) − log p P (z 2 |z 1) at t 2. In addition we have the reconstruction term p D (x 2 |z 2) that grounds the latent in the input. The whole algorithm is presented in the FIG2 In the main paper we detailed a framework for learning models by bridging two temporally separated time points. It would be desirable to model the world with different hierarchies of state, the higherlevel states predicting the same-level or lower-level states, and ideally representing more invariant or abstract information. In this section we describe a stacked (hierarchical) version of the model. The first part to extend to L layers is the RNN that aggregates observations to produce the belief state b. Here we simply use a deep LSTM, but with layer l receiving inputs also from layer l + 1 from the previous time step. This is so that the higher layers can influence the lower ones (and vice versa). and setting b 0 = b L and b L+1 = ∅.We create a deep version of the belief part of the model by stacking the shallow one, as shown in FIG5. In the usual spirit of deep directed models, the model samples downwards, generating higher level representations before the lower level ones (closer to pixels). The model implements deep inference, that is, the posterior distribution of one layer depends on the samples from the posterior distribution in previously sampled layers. The order of inference is a design choice, and we use the same direction as that of generation, from higher to lower layers, as done for example by BID16;;. 
We implement the dependence of various distributions on latent variables sampled so far using a recurrent neural network that summarizes all such variables (in a given group of distributions). We don't share the weights between different layers. Given these choices, we can allow all connections consistent with the model. Next we describe the functional forms used in our model.= KL(q DISPLAYFORM0 The hidden layer of the D maps is 50; the size of each z l t is 8. Belief states have size 50. We use the Adam optimizer with learning rate 0.0005.The same network works for the MNIST experiment with the following modifications. Observations are pre-processed by a two hidden layer MLP with ReLU nonlinearity. The decoder p D also have a two layer MLP, which outputs the logits of a Bernoulli distribution. δ t was not passed as input to any network.
Generative model of temporal data that builds an online belief state, operates in latent space, and makes jumpy predictions and rollouts of states.
773
scitldr
This paper introduces an information theoretic co-training objective for unsupervised learning. We consider the problem of predicting the future. Rather than predict future sensations (image pixels or sound waves) we predict ``hypotheses'' to be confirmed by future sensations. More formally, we assume a population distribution on pairs $(x,y)$ where we can think of $x$ as a past sensation and $y$ as a future sensation. We train both a predictor model $P_\Phi(z|x)$ and a confirmation model $P_\Psi(z|y)$ where we view $z$ as hypotheses (when predicted) or facts (when confirmed). For a population distribution on pairs $(x,y)$ we focus on the problem of measuring the mutual information between $x$ and $y$. By the data processing inequality this mutual information is at least as large as the mutual information between $x$ and $z$ under the distribution on triples $(x,z,y)$ defined by the confirmation model $P_\Psi(z|y)$. The information theoretic training objective for $P_\Phi(z|x)$ and $P_\Psi(z|y)$ can be viewed as a form of co-training where we want the prediction from $x$ to match the confirmation from $y$. We give experiments on applications to learning phonetics on the TIMIT dataset.
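As a rough illustration of the "prediction must match confirmation" idea, the sketch below trains the predictor P_Phi(z|x) to agree with hypotheses produced by the confirmation model P_Psi(z|y), while an entropy term on the batch marginal of z discourages the degenerate solution in which both models collapse onto a single label. This is a simplified surrogate written only for illustration; it is not claimed to be the exact mutual-information estimator used in the paper.

import torch
import torch.nn.functional as F

def cotrain_step(x, y, predictor, confirmer):
    # predictor(x) and confirmer(y) both return logits over a discrete latent z.
    q = F.softmax(confirmer(y), dim=-1)                  # P_Psi(z | y), shape (batch, n_z)
    logp = F.log_softmax(predictor(x), dim=-1)           # log P_Phi(z | x)
    # expected cross-entropy of the predictor on hypotheses confirmed from y,
    # an estimate of H(z | x) under the confirmation model
    cond_ent = -(q * logp).sum(dim=-1).mean()
    # entropy of the batch marginal of z (anti-collapse term, keeps H(z) high)
    marginal = q.mean(dim=0)
    marg_ent = -(marginal * (marginal + 1e-8).log()).sum()
    # minimising this pushes H(z) - H(z | x), a proxy for I(x; z), upward
    return cond_ent - marg_ent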
Presents an information theoretic training objective for co-training and demonstrates its power in unsupervised learning of phonetics.
774
scitldr
Generative models for singing voice have been mostly concerned with the task of "singing voice synthesis," i.e., to produce singing voice waveforms given musical scores and text lyrics. In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time. In particular, we experiment with three different schemes: 1) free singer, where the model generates singing voices without taking any conditions; 2) accompanied singer, where the model generates singing voices over a waveform of instrumental music; and 3) solo singer, where the model improvises a chord sequence first and then uses that to generate voices. We outline the associated challenges and propose a pipeline to tackle these new tasks. This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation. The task of computationally producing singing voices is usually referred to as singing voice synthesis (SVS) in the literature . Most researchers assume that the note sequence and the lyrics of the waveform to be generated are given as the model input, and aim to build synthesis engines that sound as natural and expressive as a real singer (; ; ; a;). As such, the content of the produced singing voice is largely determined by the given model input, which is usually assigned by human. And, accordingly, progress in SVS has followed closely with that in text-to-speech (TTS) synthesis (; ;). However, we argue that singing according to a pre-assigned musical score and lyrics is only a part of the human singing activities. For human beings, singing can also be a spontaneous activity. We learn to spontaneously sing when we were children . We do not need a score to sing when we are humming on the road or in the bathroom. The voices sung do not have to be intelligible. Jazz vocalists can improvise according to a chord progression, an accompaniment, or even nothing. We aim to explore such a new task in this paper: teaching a machine to sing with a training collection of singing voices, but without the corresponding musical scores and lyrics of the training data. Moreover, the machine has to sing without pre-assigned score and lyrics as well even in the inference (generation) time. This task is challenging in that, as the machine sees no lyrics at all, it hardly has any knowledge of the human language to pronounce or articulate either voiced or unvoiced sounds. And, as the machine sees no musical scores at all, it has to find its own way learning the language of music in creating plausible vocal melodies. It also makes the task different from TTS. Specifically, we consider three types of such score-and lyrics-free singing voice generation tasks, as shown in Figures 1(b) - (d). A free singer sings with only random noises as the input. An accompanied singer learns to sing over a piece of instrumental music, which is given as an audio waveform (again without score information). Finally, a solo singer also sings with only noises as the input, but it uses the noises to firstly generate some kind of'inner ideas' of what to sing. From a technical point of view, we can consider SVS as a strongly conditioned task for generating singing voices, as the target output is well specified by the input. In contrast, the proposed tasks are either unconditioned or weakly conditioned. 
This work therefore contributes to expanding the "spectrum" (in terms of the strength of conditional signals) of singing voice generation. Doing so has at least two implications. First, while our models are more difficult to train than SVS models, they enjoy more freedom in the generation output. Such freedom may be desirable considering the artistic nature of singing. Second, we can more easily use a larger training set to train our model- due to the difficulty in preparing time-aligned scores and lyrics, the training set employed in existing work on SVS usually consists of tens of songs only (a); in contrast, in our case we do not need labeled and aligned data and can therefore use more than hundreds of songs for training. This may help establish a universal model based on which extensions can be made. The proposed accompanied singer also represents one of the first attempts to produce singing voice given an accompaniment. One intuitive approach to achieve this is to first generate a score according to an accompaniment in the symbolic domain and then synthesize the singing voices according to the score. The second step of synthesis is relatively well-established, but the first step of generating a score given an accompaniment is not explored yet. Extensive researches have been done in generating scores of one or several instruments (; ;). However, to the best of our knowledge, very few, if any, researches have been done on generating scores of singing voices given an accompaniment. Our approach bypasses the step of generating scores by directly generating the mel-spectrogram representation. We outline below the challenges associated with the proposed tasks and the solutions we investigate. First, the tasks are unsupervised as we do not provide any labels (e.g., annotations of phonemes, pitches, or onset times) for the training singing files. The machine has to learn the complex structure of music directly from audio signals. We explore the use of generative adversarial network (GAN) to address this issue, for its demonstrated effectiveness for SVS and pitch-conditioned instrument note synthesis . Specifically, we design a novel GAN-based architecture to learn to generate the mel-spectrogram of singing voice, and then use WaveRNN , a single-layer recurrent neural network, as the vocoder to generate the audio waveform. Rather than considering the mel-spectrograms as a fixedsize image as done in recent work on audio generation , we use gated recurrent units (GRUs) and dilated convolutions (van den) in both the generator and discriminator, to model both the local and sequential patterns in music and to facilitate the generation of variable-length waveforms. Second, for training the free singer, unaccompanied vocal tracks are needed. As for the accompanied singer, we need additionally an accompaniment track for each vocal track. However, public-domain multi-track music data is hard to find. We choose to implement a vocal source separation model with state-of-the-art separation quality for data preparation. The proposed pipeline for training and evaluating an accompanied singer is illustrated in Figure 2. The advantage of having a vocal separation model is that we can use as many audio files as we have to compile the training data. The downside is that the singing voice generation models may suffer from the artifacts of the source separation model, which is moderate but not negligible. 
Third, for the accompanied singer, there is no single "ground truth" and the relationship between the model input and output may be one-to-many. This is because there are plenty of valid ways to Figure 2: A pipeline for building the accompanied singer. We use source separation to get separated singing voice and accompaniment from professionally recorded audio files. Then, we use the separated tracks to train the generators and discriminators in the GAN. In inference time, we feed an unseen accompaniment to the trained singer model and let it "sing." sing over an accompaniment track. For diversity and artistic freedom, we cannot ask the machine to generate any specific singing voice in response to an accompaniment track, even if we have paired data of vocal and accompaniment tracks. We investigate using conditional GAN to retain the possibility of generating singing voices with multiple modes. Fourth, as the proposed tasks are new, there are no established ways for performance evaluation. According to our setting, we desire our machine to generate audio waveforms with high quality and diversity, vocal-like timbre, plausible pitch contour, emotion expression, and, for the accompanied singer, that are in harmony with the given accompaniment track. But, the singing does not have to be intelligible. We propose customized objective and subjective metrics to evaluate our models in these aspects. For example, we adapt the melody harmonization model proposed by to measure the matchness between the generated vocal track and the given accompaniment track. Finally, reproducibility is a major issue, especially for a subjective task. We intend to use publiclyavailable copyright-free instrumental music as the conditional signals for testing the accompanied singer, so that other researchers can use the same testing conditions for model comparison in the future. We will also release the testing conditions for the solo singer, the generated singing voices for all our models, as well as open source our code through a public git repository [URL removed]. We focus on Jazz music in this work. Samples of the generated singing voices can be found at https://bit.ly/2mIvoIc. Our models have many possible use cases. For example, we can use the accompanied singer as a backing vocalist. In addition, we can use the free singer as a sound source-to demonstrate this, we make a song by hand in the style of Jazz Hiphop by sampling the output of our free singer. This song can be listened to at https://bit.ly/2QkUJoJ. A free singer takes no conditions at all as the input. We want it to sing freely. The singing voices from a free singer may not even sound good, but they should sound like singing voice. A free singer is like we are freely humming or singing on the road walking or in the bathroom taking a shower. We may not even know what we are singing and likely there is no underlying musical score. From the viewpoint of a generative model, training a free singer amounts to modeling a distribution P (Y), where Y ∈ R K×T is a matrix representing a sequence of K-dimensional features and T is the number of time frames. A free singing is sampled from this distribution without conditions. An accompanied singer takes as the input a sequence of accompaniment-derived features. An accompanied singer tries to generate singing voices that match the accompaniment track in some way. 
It is similar to the case of Karaoke, where a backing accompaniment track is played from a speaker, the lyrics and a video are displayed on a screen, and a user tries to sing according to the lyrics and the backing track. The difference is that, this time the user is a trained model and we do not ask it to follow the lyrics or the exact pitch contour of the accompaniment. The note sequence found in the singing has to be in harmony with, but not a duplicate of, that in the backing track. Training an accompanied singer amounts to modeling a distribution P (Y|A), where Y ∈ R K×T represents a feature sequence of the vocal track, and A ∈ R H×T is a feature sequence of the given accompaniment track. In our implementation, we use the mel-spectrograms for Y in compliance with the need of the vocoder. For A, different features can be tried, and we investigate using a transcription model to extract pitch features. See Section 4.1 for details. A solo singer is similar to a free singer in that both takes no conditions as the input. However, a solo singer would generate an'inner idea' first, and then sing according to that. In other words, it learns a joint distribution P (Y|I)Q(I), where I ∈ R J×T is a matrix representing the idea sequence. The singer first samples I from a distribution Q, and then uses that to conditionally sample Y from P. The inner idea I can take several forms. In this work, we instantiate this scheme with I being a chord progression (namely a sequence of chord labels). The distribution Q is modeled by an autoregressive recurrent network we build for chord progression generation (with a network architecture adapted from that in ), as described in Section 3.3. Alternatively, we can think of a solo singer as a combination of an idea generator and an an accompanied singer. For an accompanied singer, the information extracted from the given accompaniment track can take several forms such as transcribed pitches and chord progressions. A solo singer learns to generate such information on its own, without reference to an actual accompaniment track. To account for the absence of supervised data and the highly complicated spatio-temporal patterns in audio spectrograms, we propose a new adversarial net that features heavy use of ), dilated convolutions (van den ;), and feature grouping to build our singer models. We provide the algorithmic details below. Network architectures with stacked blocks of GRUs and dilated convolutions have been used to attain state-of-the-art performance in blind musical source separation . In a source separation task, a model learns to decompose, or unmix, different sources (e.g., vocal, piano, bass, drum) from a mixture signal . This requires the abilities to model the relationships between different sources as well as the relationships between neighboring time frames. The output spectrograms are also expected to be distortion-less and of high audio quality. For it has demonstrated its capability in source separation, we adopt it as a building block of the singer models. Especially, we want the singer models to also consider accompaniment information. Specifically, one such block we adopted in our models is a stack of GRU, dilated convolution with feature grouping, and group normalization . The input to the GRU, the output of the GRU, and the output of the group normalization are summed to form the output of the block. We note that the original'D2 block' used in uses dilated GRU and uses weight normalization for the dilated convolution layers. 
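Before the architectural details, a compact way to see how the three schemes differ only in their conditioning signal is the pseudo-interface below: a frame-wise noise sequence is sampled and, depending on the scheme, concatenated with an accompaniment feature sequence or a generated "inner idea" sequence before calling the generator. The tensor layout and helper names are illustrative assumptions rather than the actual implementation.

import torch

def sample_singer(generator, T, noise_dim=20, condition=None, idea_model=None):
    # Free singer: condition is None and no idea model.
    # Accompanied singer: condition is an (H, T) accompaniment feature sequence.
    # Solo singer: idea_model generates a (J, T) inner-idea (e.g. chord) sequence.
    z = torch.randn(noise_dim, T)                       # one noise vector per frame
    if idea_model is not None:
        condition = idea_model.sample(T)                # I ~ Q(I), then sing over it
    inp = z if condition is None else torch.cat([z, condition], dim=0)
    return generator(inp)                               # (K, T) mel-spectrogram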
However, empirically we find that it is easier for the singer models to converge by replacing weight normalization with group normalization, and using plain GRUs is as good as using dilated GRUs. We refer to our blocks as GRU-grouped dilated convolution-group normalization block ('G3 block'). The accompanied singers and solo singers have to take conditions as part of their input. One desirable property of the models is the ability to generate voices with arbitrary length, as the conditional signal can be of variable length. Besides, the model has to deal with the one-to-many issue mentioned in Section 1, and the absence of supervisory signals. With these issues in mind, we design a GAN architecture for score and lyrics-free voice generation. In particular, we pay special attention to the following three components: 1) the network architecture, 2) the input noises for GAN, and 3) the loss function of the discriminator. Let us first take a look at two existing GAN models for audio generation: and. Their generators and discriminators are both based on 2D convolutions, transposed 2D convolutions and dense (linear) layers. The generators take a vector z ∈ R U as the input noise and use transposed convolutions to expand z so that a temporal dimension emerges in the expanded intermediate matrices. The number of temporal frames in the final output depends on the total strides used in all the transposed convolutions. The discriminators take the output of the generators or the real signal as the input, and compress the input matrix with convolution layers until the output becomes a single value represents the prediction of true (real) or false (generated) data. A main reason why existing models cannot generate variable-length output is the need to expand z by transposed convolution layers. We remedy this by using an architecture consisting of the proposed G3 blocks, and convolutions without strides, for both the generators G(·) and discriminators D(·). Moreover, instead of using a single noise vector, our models take as input a sequence of noise vectors, denoted as Z ∈ R U ×T, that has the same temporal length as the desired output Y. Each column of Z is sampled independently from a Gaussian distribution N ormal. At the first glance, it might feel unnatural to have one noise vector per frame as that may in fast oscillations in the noises. However, we note that the output of G(·) for the t-th frame depends not only on the t-th column of Z (and C or I), but the entire Z (and the condition matrices), due to the recurrent GRUs in the model. We expect that the GRUs in the discriminator D(·) would force G(·) to generate consistent consecutive frames. Therefore, the effect of the frame-wise noises might be introducing variations to the generation (e.g., by adjusting the modes of the generated frame-wise features). As for the loss function of D(·), we experiment with the following three options: the vanilla GAN, the LSGAN that adopts the least squares loss function for the discriminator, and the boundary equilibrium GAN (BEGAN) that adopts an "auto-encoder style" discriminator loss. The D(·) in either GAN or LSGAN is implemented as a classifier aiming to distinguish between real and generated samples, whereas the D(·) in BEGAN is an autoencoder aiming to reconstruct its input. 
Specifically, in BEGAN, the loss functions l D and l G for the discriminator and generator, as in the case of the accompanied singer, are respectively: where X ∈ R K×T is the feature sequence of a real vocal track sampled from the training data, G(Z, C) ∈ R K×T is the feature sequence for the generated vocal track, and L(·) is a function that measures how well the discriminator D(·), implemented as an auto-encoder, reconstructs its input: where we use M w,t to denote the (w, t)-th element of a matrix M (and similarly for D(M, C) w,t ). Moreover, the variable τ s in Eq. is introduced by BEGAN to balance the power of D(·) and G(·) during the learning process. It is dynamically set to be for each training step s, with τ s ∈. λ and γ are manually-set hyperparameters. Empirical comparison of the performance of GAN, LSGAN and BEGAN can be found in Appendix D. It turns out that the BEGAN-based one, referred to as G3BEGAN hereafter, works the best. We also use auto-regressive RNN to build a chord progression generator for implementing the solo singer. Our chord generator is trained on the Wikifonia dataset (http://www.wikifonia. org/), a set of 6,670 songs in the leadsheet format (i.e., with separated symbolic melody and chord tracks). Its chord vocabulary covers 612 different chords. We set the harmonic rhythm of the chord generator such that a chord change may occur every beat. We desire the chord generator to freely generate chord progressions across different tempo values, time signatures, and keys. Moreover, the generated chord progression has to be rhythmically correct. In achieving so, we encode the tempo, time signatures, and key information (which are available in the Wikifonia dataset) as the initial hidden state of the RNN, and concatenate the chord vector from last time step with the beat position (e.g., 1, 2, 3, 4) of that time step as the input to the RNN. For data augmentation, we transpose the chord progressions found in Wikifonia to 12 possible keys. Once trained, we use the chord generator to create chord progressions for testing the solo singer. With a one-hot encoding, each column of the condition matrix I would have dimension J = 660. More details of G3BEGAN and the chord generator can be found in Appendices A.1 and A.2. In our implementation, we use 80-dimensional mel-spectrograms as the acoustic features modeled and generated by the singer models (i.e., K = 80). We use the python package librosa , with default settings, to compute the mel-spectrograms from audio. A mel-spectrogram is passed to a WaveRNN vocoder to generate an audio signal from melspectrograms. Our implementation of the WaveRNN vocoder is based on the code from Fatchord. Instead of using off-the-shelf pre-trained vocoders, which are typically trained for TTS, we train our vocoder from scratch with a set of 3,500 vocal tracks we separate (by a separation model) from an in-house collection of music that covers diverse musical genres. One main difficulty in conducting this research is to get vocal tracks for training our singer models. Existing multitrack datasets that contain clean vocal tracks are usually diverse in musical genre and the singing timbre, making it hard for our models to learn the relationship between singing and accompaniment. As we set out to focus on Jazz music in this work, we opt for collecting our own vocal tracks from Jazz music only. 
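(Returning briefly to the BEGAN losses described earlier in this section: the display equations for l_D, l_G, the reconstruction measure L(·), and the update rule for τ_s are missing from the extracted text. A reconstruction that follows the original BEGAN formulation, adapted to the conditional notation used here, is sketched below; the L1 form of the reconstruction error and the exact τ_s update are assumptions carried over from the BEGAN paper rather than details confirmed by this text.)

```latex
% Plausible reconstruction of the missing BEGAN objectives (assumed, not verbatim):
l_D = \mathcal{L}(\mathbf{X}) - \tau_s\, \mathcal{L}\bigl(G(\mathbf{Z}, \mathbf{C})\bigr),
\qquad
l_G = \mathcal{L}\bigl(G(\mathbf{Z}, \mathbf{C})\bigr),
\]
\[
\mathcal{L}(\mathbf{M}) = \frac{1}{KT} \sum_{w=1}^{K} \sum_{t=1}^{T}
    \bigl| D(\mathbf{M}, \mathbf{C})_{w,t} - \mathbf{M}_{w,t} \bigr|,
\]
\[
\tau_{s+1} = \tau_s + \lambda \bigl( \gamma\, \mathcal{L}(\mathbf{X})
    - \mathcal{L}(G(\mathbf{Z}, \mathbf{C})) \bigr), \qquad \tau_s \in [0, 1].
```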
We therefore implement our source separation model following the architecture proposed by , which represents the state-of-the-art as evaluated on the MUSDB benchmark . We use the whole MUSDB dataset to train our separation model. This dataset contains clean vocal and accompaniment tracks. However, for constructing the condition matrix A of our accompanied singer, we desire to have separated piano tracks as well (we will explain why shortly). We therefore collect additionally 4.5 hours of Jazz piano solo audio and use them to augment MUSDB for training our source separation model, which can as a isolate out not only the vocal track but also the piano track from an arbitrary song. We collect 17.4 hours of Jazz songs containing female voices and 7.6 hours of Jazz songs with male voices. We use the aforementioned separation model to get the vocal tracks. For batched training, we divide the tracks into 10-second sub-clips. Sub-clips that contain less than 40% vocals, as measured from energy, are removed. This leads to 9.9-hour and 5.0-hour training data for female and male Jazz vocals, respectively. 200 and 100 sub-clips are reserved from the training set as the validation set for female singing and male singing, respectively. Each model is trained for 500 epochs. For GAN and LSGAN, we use the models at the 500th epoch for evaluation. For BEGAN, the parameters of the epoch with the best convergence rate are used for evaluation. For the accompanied singer, we experiment with extracting pitch-related information from the accompaniment track to form the matrix A that conditions the generation of the vocal track. The assumption here is that whether the generated vocal track is in harmony with the accompaniment track can be largely determined by pitch-related information. For this purpose, we implement a piano transcription model to transcribe the separated piano track, leading to 88-dimensional transcribed frame-wise pitch as the accompaniment condition (i.e., H = 88, as there are 88 piano notes). We implement a piano transcription model with the G3 blocks introduced in Section 3.1, following the training procedure of . We also implement the model of. Under the same training setting, we find that ours is slightly worse than theirs in note F1 score (0.779 vs 0.794) but slightly better in the note precision score (0.834 vs 0.823). We decide to use our model for the better precision, but the difference of using either of them should be small. The clips in the training set of our singer models may not contain piano playing (see Table 5 for a summary of the datasets). Even if a clip contains piano playing, the piano may not play across the entire clip. Hence, the models have to learn to sing either with or without the piano accompaniment. For performance evaluation, we collect 5.3 hours of Jazz music from Jamendo (https://www. jamendo.com), an online platform for sharing copyright-free music. As said in Section 1, this test set is meant to be public. We apply source separation to the audios, divide each track into 20-second sub-clips, 4 and remove those that do not contain piano. Piano transcription is also applied to the separated piano track, yielding 402 20-second sub-clips for evaluation. 402 chord progressions are generated by the chord generator to evaluate the solo singer and to compute the matchness. As this is a new task, there is no previous work that we can compare with. 
Therefore, we establish the baselines by 1) computing the baseline objective metrics (see Section 4.3) from the training data of the singing models, and 2) using existing SVS systems for synthesizing singing voices. For the SVS baselines, we employ Sinsy and Synthesizer V , the two well-known SVS systems that are publicly accessible. For Sinsy, we use the publicly available repository 5 to query the Sinsy API (http://sinsy.jp/); we use the HMM version instead of the deep learning version as the latter cannot generate male voices. For Synthesizer V, we use their software (https://synthesizerv.com/). We use Sinsy for both objective and subjective tests, but Synthesizer V for subjective test only, for the latter does not provide a functionality to batch process a collection of MIDI files and lyrics. SVS systems have to take lyrics and a melody to synthesize singing voices. For the lyrics, we choose to use multiple'la,' the default lyrics for Synthesizer V. 6 For the melodies, we consider two methods: 1. Vocal transcription from singer training data. We use CREPE to transcribe the separated vocals from the singer training data, and convert it to MIDI format. 2. Piano transcription from the Jamendo testing data. As described in Section 4.1, we have separated and transcribed the piano part of the Jamendo data. Yet, the piano transcription often contains multiple notes at the same time. We use the skyline algorithm to the transcription to get a melody line comprising the highest notes. The best way to evaluate the performance of the singer models is perhaps by listening to the generated . Therefore, we encourage our readers to listen to the audio files provided in the supplementary material. However, objective evaluation remains desirable, either for model development or for gaining insights into the generation . We propose the following metrics for our tasks. • Vocalness measures whether an audio clip contains singing voices. There are different publicly available tools for detecting singing voices in an audio mixture (e.g., ). We choose the JDC model for it represents the state-of-theart. In this model, the pitch contour is also predicted in addition to the vocal activation. If the pitch at a frame is outside a reasonable human pitch range (73-988 Hz defined by JDC), the pitch is set to 0 at that frame. We consider a frame as being vocal if it has a vocal activation ≥ 0.5 AND has a pitch > 0. Moreover, we define the vocalness of an audio clip as the proportion of its frames that are vocal. The tool is applied to the non-silence part of an audio 7 of the generated singing voices only, excluding the accompaniment. • Average pitch: We estimate the pitch (in Hz) for each frame with two pitch detection models: the state-of-the-art monophonic pitch tracker CREPE (a), and JDC. The average pitch is computed by averaging the pitches across the frames with confidence higher than 0.5 for CREPE, and across the frames that are estimated to be vocal for JDC. 4 Please note that this is longer than the 10-second sub-clips we used to train the singer models. This is okay as our model can generate variable-length output. 5 https://github.com/mathigatti/midi2voice 6 As our models do not contain meaningful lyrics, to be fair the baselines should not contain meaningful lyrics either. We choose'la' because people do sometimes sing with'la' and it has no semantic meaning. An alternative way to get the lyrics is by randomly sampling a number of characters. 
However, randomly sampling a reasonable sequence of characters is not a trivial task either. 7 The non-silence frames are derived by using the librosa function 'effects._signal_to_frame_nonsilent'.

Table 1: Result of objective evaluation for our singer models and a few baseline methods (only the reference rows are recoverable here). Singer train data (vocals, female): average pitch 312 ± 70 Hz (CREPE) / 310 ± 56 Hz (JDC), vocalness 0.60 ± 0.14, matchness -9.24 ± 3.09. Singer train data (vocals, male): 263 ± 93 / 258 ± 75, vocalness 0.64 ± 0.16, matchness -9.09 ± 3.22. Singer train data (accomp., female): vocalness 0.05 ± 0.09. Singer train data (accomp., male): vocalness 0.12 ± 0.15. MUSDB clean vocals: 271 ± 81 / 283 ± 75, vocalness 0.59 ± 0.14.

• Singing-accompaniment matchness: As detailed in the appendix, to objectively measure matchness, we build a melody harmonization RNN by adapting the chord generator described in Section 3.3. Given a pair of melody and chord sequences, the model computes the likelihood of observing that chord sequence as the output when taking the melody sequence as the model input. We use the average of the log likelihood across time frames as the matchness score. As the harmonization model considers symbolic sequences, we use CREPE to transcribe the generated voices, and Madmom (Böck et al., 2016) to recognize the chord sequence from the accompaniment track.

Several observations can be made from the results shown in Table 1. In terms of the average pitch, we can see that the result of our model is fairly close to that of the singing voices in the training data. Moreover, the average pitch of the generated female voices is higher than that of the generated male voices, as expected. We can also see that the Sinsy singing voices tend to have overly high pitches when the melody line is derived from a piano playing (denoted as 'testing piano skyline'). In terms of vocalness, our models score in general lower than Sinsy and the singing voices in the training data. However, the difference is not that far. As a reference, we also compute the vocalness of the accompaniments in the training set (denoted as 'accomp.') and it is indeed quite low. As for matchness, we show in Table 1 the score computed from the real melody-chord pairs of Wikifonia (-7.04) and that from random pairs of Wikifonia (-13.16). We can see that the accompanied singers score higher than the random baseline and the free singer, as expected. 9 Moreover, the matchness scores of the accompanied singers are close to that of the singer training data. Examples of the generated spectrograms of our models can be found in the appendix. From visually inspecting the spectrograms and listening to the results, the models seem to learn the characteristics of the singing melody contour (e.g., the F0 is not stable over time). Moreover, the female singer models learn better than the male counterparts, possibly because of the larger training set.

Table 2: Mean opinion scores (MOS) and standard deviations with respect to four evaluation criteria (Sound quality, Vocalness, Expression, Matchness), collected from the user study for three different versions of the accompanied singer (female). The scores are on a 5-point Likert scale from 1 to 5; the higher the better. G3BEGAN (20 epochs): 1.59 ± 0.82, 1.93 ± 0.99, 1.98 ± 0.88, 2.18 ± 1.08. G3BEGAN (240 epochs): 2.24 ± 0.93, 2.66 ± 1.01, 2.60 ± 1.01, 2.58 ± 1.05. G3BEGAN (final): 2.38 ± 0.96, 2.98 ± 1.02, 2.85 ± 1.00, 2.74 ± 1.04 (criteria in the order listed above).
Table 3: MOS from the second user study, comparing our model and two existing SVS systems (criteria: Sound quality, Vocalness, Expression, Matchness). G3BEGAN (final): 1.71 ± 0.70, 2.39 ± 1.11, 2.27 ± 1.06, 2.34 ± 1.16. Sinsy: 3.19 ± 1.07, 2.90 ± 1.01, 2.40 ± 0.98, 2.10 ± 0.90. Synthesizer V: 3.57 ± 1.07, 3.30 ± 1.24, 3.25 ± 1.10, 3.35 ± 1.15.

We conduct two online, non-paid user studies to evaluate the accompanied singer (the female one). In the first user study, we compare the 'final' model (with the number of epochs selected according to a validation set) against two early versions of the model trained with fewer epochs. In the second one, we compare the proposed accompanied singer with Sinsy and Synthesizer V. In the first study, we recruit 39 participants to each rate the generated singing for three different accompaniment tracks (each 20 seconds), one accompaniment track per page. The subjects are informed of the purpose of our research (i.e., score and lyrics-free singing voice generation) and the user study (to compare three computer models), and are asked to listen in a quiet environment with proper headphone volume. No post-processing (e.g., noise removal, EQ adjustment) is applied to the audio. The ordering of the results of the three models is randomized. The process of the second study is similar to the first one, but it includes five different accompaniments (randomly chosen from those used in the first user study) and the respective generated/synthesized singing voices. The melodies used for synthesis are those from the piano skyline of the test data, so that our model can be compared with the synthesis methods with the same accompaniment. A separate set of 21 subjects participate in this study. The audio files used in this user study can be downloaded from https://bit.ly/2qNrekv.

Tables 2 and 3 show the results of the two studies. We can see that the model indeed learns better with more epochs. Among the four evaluation criteria, the Sound quality is rated lower than the other three in both studies, suggesting room for improvement. By comparing the proposed model with the two SVS systems, we see that Synthesizer V performs the best for all the evaluation criteria. Our model achieves better Matchness than Sinsy, and achieves a rating close to Sinsy in Expression. In general, we consider the results promising, considering that our models are trained from scratch with little knowledge of human language.

10 We note that Sinsy and Synthesizer V have an unfair advantage on matchness because their singing voices are basically synthesized according to the melody lines of the accompaniment. From Table 3, we see that Synthesizer V does exhibit this advantage, while Sinsy does not. We observe that the Sinsy singing voices do not always align with the provided scores. The fact that Synthesizer V has higher audio quality seems to promote its score in the other criteria. The presence of the results of Synthesizer V seems to also make the subjects in the second study rate the proposed model lower than the subjects do in the first study.

While early work on SVS is mainly based on digital signal processing (DSP) techniques such as sampling concatenation, machine learning approaches offer greater flexibility and have been more widely studied in recent years. Hidden Markov models (HMMs), in particular, have been shown to work well for the task. The Sinsy system, a baseline model in Section 4, is also based on HMMs. Subsequent work reports improved naturalness by using deep neural nets instead of HMMs. Since then, many neural network models have been proposed.
The model presented by uses simple fully-connected layers to map symbolic features extracted from the user-provided scores and lyrics, to a vector of acoustic features for synthesis. The input and output features are time-aligned frame-by-frame beforehand by well-trained HMMs. The input features consist of score-related features (e.g., the key of the current bar and the pitch of the current musical note), and lyrics-related ones (the current phoneme identify, the number of phonemes in the current syllable, and the duration of the current phoneme). The output features consist of spectral and excitation parameters and their dynamic features , which altogether can then be turned into audio with a DSP technique called the MLSA filter . The aforementioned model has been extended in many aspects. For instance, using convolutional layers and recurrent layers in replacement of the fully-connected layers for learning the mapping between input and output features has been respectively investigated by and Kim et al. (2018b). Using neural vocoders such as the WaveNet (van den) instead of the MLSA filter has been shown to improve naturalness by. Rather than using hand-crafted features for the input and output, Lee et al. (2019a) train a model to predict the mel-spectrogram directly from time-aligned lyrics and pitch labels, and then use the Griffin-Lim algorithm to synthesize the audio. Modern techniques such as adversarial loss and attention module have also been employed (a). A follow-up work adds a speaker encoder to the network to achieve multi-singer SVS (b). Synthesizer V , the other baseline model we employ in Section 4, is based on a hybrid structure that uses both deep learning and sample-based concatenation. While exciting progress has been made to SVS, the case of score and lyrics-free singing voice generation, to our best knowledge, has not been tackled thus far. Similar to (a), we do not use hand-crafted features and we train our model to predict the mel-spectrograms. Using neural nets for score-conditioned instrumental audio generation have also been investigated in recent years. However, existing work is mostly concerned with the generation of single notes of, for example, 4-second long (Défossez et al., 2018;). A deep neural network that is capable of generating variable-length audio (e.g., a "recurrent generator") as the proposed singer models do, to our knowledge, has not been much studied. In this paper, we have introduced a novel task of singing voice generation that does not use musical scores and lyrics. Specifically, we proposed three singing schemes with different input conditions: free singer, accompanied singer, and solo singer. We have also proposed a BEGAN based architecture that uses GRUs and grouped dilated convolutions to learn to generate singing voices in an adversarial way. For evaluating such models, we proposed several objective metrics and implemented a model to measure the compatibility between a given accompaniment track and the generated vocal track. The evaluation shows that the audio quality of the generated voices still leave much room for improvement, but in terms of humanness and emotion expression our models work fine. Score and lyrics-free singing voice generation is a new task, and this work represents only a first step tackling it. There are many interesting ideas to pursue. 
For example, we have chosen to extract pitch-related information only from the accompaniment track for the accompanied singer, but a more interesting way is to let the model learns to extract relevant information itself. In the near future, we plan to investigate advanced settings that allow for timbre and expression control, and experiment with other network architectures, such as coupling a fine-grained auto-regressive model with a multiscale generation procedure as done in MelNet , using a discriminator that examines different chunks of the generated audio as done in PatchGAN for the vision domain , or using multiple discriminators that evaluate the generated audio based on multi-frequency random windows as done in GAN-TTS (Bińkowski et al., 2019) The generator in G3BEGAN is implemented with a stack of two G3 blocks. Please see Table 4 for details of the network architecture. The chord generator is aimed to generate chord progressions freely under some given conditions. It supports 12 major and 12 minor keys, 10 tempo options from 60 to 240 BPM, 6 time signature options, and 51 chord qualities (612 chords in total). The conditions, key, tempo, and time signatures, are encoded into one-hot representation and concatenated together as a 40-dimension vector. The model mainly consists with 3 stacked GRU layers, each with 512 hidden variables. The input of each time step is a 524-dimensional vector consisting of a chord embedding and a beat-related one-hot positional encoding (to encourage the model to follow certain rhythmical pattern. This input array passes through a fully-connected layer to 512-dimension and is used as the input of the GRUs. The training data are the leadsheets from the Wikifonia dataset. We augmented the data by rotating the keys, leading to in total 80,040 leadsheets for training. Table 5 : A summary of the datasets employed in this work. The first three datasets and the 4.5-hour Piano solo part of the last dataset are Jazz music, whereas the others may not. The 'Processing' column indicates the music processing models that have been applied to process the respective dataset. (Notation-Tr.: training. Ev.: evaluating. SS: source separation. PT: piano transcription) The melody harmonization model is modified from the chord generator described in Appendix A.2, using additionally the melody tracks found in the Wikifonia dataset. Specifically, the model intends to generate a chord sequence given a melody sequence. Such a model can be learned by using the pairs of melody and chord tracks in Wikifonia. We add the chroma representation of the melody with window size of a quarter-note to the input vector. The matchness of the real pairs of melody and chord progression in Wikifonia is -7.04±2.91. If we pair a melody with a randomly selected chord progression and calculate the matchness, the score becomes -13.16±3.72. B APPENDIX: DATASETS C APPENDIX: EXAMPLES OF THE SPECTROGRAM OF THE GENERATED SINGING VOICES Examples of the spectrograms of the generated singing voices can be found in Figures 3 and 4. (a) The given instrumental, accompaniment track (b) The voices generated by the accompanied singer (female) (c) The voices generated by the accompanied singer (male) Figure 3: Samples of spectrograms generated by our accompanied singers: (left) the given accompaniment tracks, the voices generated by (middle) the female singer and (right) the male singer. 
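Pulling together the chord-generator details given in Appendix A.2 above, a minimal PyTorch sketch could look like the following. The layer count, hidden size, 40-dimensional condition vector, and 612-chord vocabulary follow the text; the split of the 524-dimensional input into a 512-dimensional chord embedding plus a 12-dimensional beat one-hot, and the way the condition vector is mapped to the initial hidden states, are assumptions.

```python
import torch
import torch.nn as nn

class ChordGenerator(nn.Module):
    def __init__(self, n_chords=612, cond_dim=40, chord_emb=512, beat_dim=12,
                 hidden=512, layers=3):
        super().__init__()
        self.hidden, self.layers = hidden, layers
        self.embed = nn.Embedding(n_chords, chord_emb)
        self.init_state = nn.Linear(cond_dim, layers * hidden)   # condition -> h0
        self.proj_in = nn.Linear(chord_emb + beat_dim, hidden)   # 524 -> 512
        self.rnn = nn.GRU(hidden, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, n_chords)

    def forward(self, prev_chords, beat_onehot, condition):
        # prev_chords: (batch, steps) chord ids from previous time steps
        # beat_onehot: (batch, steps, beat_dim) beat-position encoding
        # condition:   (batch, cond_dim) one-hot key / tempo / time-signature vector
        h0 = self.init_state(condition).view(-1, self.layers, self.hidden)
        h0 = h0.transpose(0, 1).contiguous()
        x = torch.cat([self.embed(prev_chords), beat_onehot], dim=-1)
        y, _ = self.rnn(self.proj_in(x), h0)
        return self.out(y)            # per-step logits over the chord vocabulary

# At generation time, chords are sampled autoregressively, one beat at a time.
gen = ChordGenerator()
cond = torch.zeros(1, 40); cond[0, 0] = cond[0, 25] = cond[0, 35] = 1.0  # toy condition
beats = torch.eye(12)[torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])].unsqueeze(0)
print(gen(torch.zeros(1, 8, dtype=torch.long), beats, cond).shape)  # (1, 8, 612)
```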
Table 6: Result of using different GAN losses, for the free singer (female). BEGAN (female): average pitch 288 ± 28 Hz (CREPE) / 292 ± 28 Hz (JDC), vocalness 0.48 ± 0.09. GAN (female): 307 ± 7 / 371 ± 25, vocalness 0.22 ± 0.17. LSGAN (female): 1130 ± 9 / 453 ± 14, vocalness 0.28 ± 0.03.

We experiment with different GANs, including BEGAN, vanilla GAN, and LSGAN, for the case of building the free singer. The output of the discriminators in the vanilla GAN and LSGAN is a single real/fake value. To compare the three GANs as fairly as possible, the discriminators used in the vanilla GAN and LSGAN are almost the same as the one used in the BEGAN. The only difference is that the discriminators used in the vanilla GAN and LSGAN have an extra average-pooling layer at the output. The validation losses at different training epochs are shown in Figure 5, which can be used to examine how well the model converges, and the metrics are shown in Table 6. In Figure 5, we can see that, according to our implementation, only BEGAN converges. In Table 6, we can see that the BEGAN model has a much higher vocalness than the other models. By listening to the generated singing voices of the GAN and LSGAN models, we find that they are basically noise.

Figure 4: Samples of spectrograms generated by our (left) free singers and (right) solo singers. We can see salient pitch contours in the spectrograms. Moreover, the pitches sung by the male singers seem on average lower than those sung by the female singers.
Our models generate singing voices without lyrics and scores. They take accompaniment as input and output singing voices.
775
scitldr
The carbon footprint of natural language processing (NLP) research has been increasing in recent years due to its reliance on large and inefficient neural network implementations. Distillation is a network compression technique which attempts to impart knowledge from a large model to a smaller one. We use teacher-student distillation to improve the efficiency of the Biaffine dependency parser which obtains state-of-the-art performance with respect to accuracy and parsing speed . When distilling to 20% of the original model’s trainable parameters, we only observe an average decrease of ∼1 point for both UAS and LAS across a number of diverse Universal Dependency treebanks while being 2.26x (1.21x) faster than the baseline model on CPU (GPU) at inference time. We also observe a small increase in performance when compressing to 80% for some treebanks. Finally, through distillation we attain a parser which is not only faster but also more accurate than the fastest modern parser on the Penn Treebank. Ethical NLP research has recently gained attention . For example, the environmental cost of AI research has become a focus of the community, especially with regards to the development of deep neural networks . Beyond developing systems to be greener, increasing the efficiency of models makes them more cost-effective, which is a compelling argument even for people who might downplay the extent of anthropogenic climate change. In conjunction with this push for greener AI, NLP practitioners have turned to the problem of developing models that are not only accurate but also efficient, so as to make them more readily deployable across different machines with varying computational capabilities (; ;). This is in contrast with the recently popular principle of make it bigger, make it better . Here we explore teacher-student distillation as a means of increasing the efficiency of neural network systems used to undertake a core task in NLP, dependency parsing. To do so, we take a state-of-theart (SoTA) Biaffine parser from. The Biaffine parser is not only one of the most accurate parsers, it is the fastest implementation by almost an order of magnitude among state-of-the-art performing parsers. Contribution We utilise teacher-student distillation to compress Biaffine parsers trained on a diverse subset of Universal Dependency (UD) treebanks. We find that distillation maintains accuracy performance close to that of the full model and obtains far better accuracy than simply implementing equivalent model size reductions by changing the parser's network size and training regularly. Furthermore, we can compress a parser to 20% of its trainable parameters with minimal loss in accuracy and with a speed 2.26x (1.21x) faster than that of the original model on CPU (GPU). Dependency parsing is a core NLP task where the syntactic relations of words in a sentence are encoded as a well-formed tree with each word attached to a head via a labelled arc. Figure 1 shows an example of such a tree. The syntactic information attained from parsers has been shown to benefit a number of other NLP tasks such as relation extraction, machine translation , and sentiment analysis . The son of the cat hunts the rat. Table 1 shows performance details of current SoTA dependency parsers on the English Penn Treebank (PTB) with predicted POS tags from the Stanford POS tagger . 
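To make the target structure concrete, a dependency tree like the one in Figure 1 (for the example sentence above) can be stored as one head index and one relation label per word, with index 0 reserved for the artificial root. The specific analysis below is a plausible UD-style parse written out for illustration; the paper's Figure 1 is not reproduced in this text.

```python
# One plausible UD-style dependency analysis of the example sentence,
# written as (token, head_index, relation) triples with 0 = root.
sentence = ["The", "son", "of", "the", "cat", "hunts", "the", "rat", "."]
tree = [
    ("The",   2, "det"),    # 1: determiner of "son"
    ("son",   6, "nsubj"),  # 2: subject of "hunts"
    ("of",    5, "case"),   # 3: case marker of "cat"
    ("the",   5, "det"),    # 4: determiner of "cat"
    ("cat",   2, "nmod"),   # 5: nominal modifier of "son"
    ("hunts", 0, "root"),   # 6: root of the tree
    ("the",   8, "det"),    # 7: determiner of "rat"
    ("rat",   6, "obj"),    # 8: object of "hunts"
    (".",     6, "punct"),  # 9: punctuation attached to the root
]
assert len(tree) == len(sentence)
```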
The Biaffine parser of offers the best trade-off between accuracy and parsing speed with the HPSG parser of achieving the absolute best reported accuracy but with a reported parsing speed of roughly one third of the Biaffine's parsing speed. It is important to note that direct comparisons between systems with respect to parsing speed are wrought with compounding variables, e.g. different GPUs or CPUs used, different number of CPU cores, different batch sizes, and often hardware is not even reported. Biaffine Table 1: Speed and accuracy performance for SoTA parsers and parsers from our distillation method, Biaffine-Dπ compressing to π% of the original model, for the English PTB with POS tags predicted from the Stanford POS tagger. In the first table block, † denotes values taken from the original paper, ‡ from. Values with no superscript (second and third blocks) are from running the models on our system locally with a single CPU core for both CPU and GPU speeds (averaged over 5 runs) and with a batch size of 4096 with GloVe 100 dimension embeddings. We therefore run a subset of parsers locally to achieve speed measurements in a controlled environment, also shown in Table 1: we compare a PyTorch implentation of the Biaffine parser (which runs more than twice as fast as the reported speed of the original implementation); the UUParser from which is one of the leading parsers for Universal Dependency (UD) parsing; a sequence-labelling dependency parser from which has the fastest reported parsing speed amongst modern parsers; and also distilled Biaffine parsers from our implementation described below. All speeds measured here are with the system run with a single CPU core for both GPU and CPU runs. Biaffine parser is a graph-based parser extended from the graph-based BIST parser to use a deep self-attention mechanism. This in a fast and accurate parser, as described above, and is used as the parser architecture for our experiments. More details of the system can be found in. Model compression has been under consideration for almost as long as neural networks have been utilised, e.g. introduced a pruning technique which removed weights based on a locally predicted contribution from each weight so as to minimise the perturbation to the error function. More recently, introduced a means of pruning a network up to 40 times smaller with minimal affect on performance. and utilised magnitude-based pruning to increase network generalisation. More specific to used absolute-magnitude pruning to compress neural machine translation systems by 40% with minimal loss in performance. However, pruning networks leaves them in an irregularly sparse state which cannot be trivially re-cast into less sparse architectures. Sparse tensors could be used for network layers to obtain real-life decreases in computational complexity, however, current deep learning libraries lack this feature. introduced structured pruning to account for this, but this kernel-based technique is restricted to convolutional networks. More recently pruned the heads of the attention mechanism in their neural machine translation system and found that the remaining heads were linguistically salient with respect to syntax, suggesting that pruning could also be used to undertake more interesting analyses beyond merely compressing models and helping generalisation. and developed distillation as a means of network compression from the work of , who compressed a large ensemble of networks into one smaller network. 
Teacher-student distillation is the process of taking a large network, the teacher, and transferring its knowledge to a smaller network, the student. Teacher-student distillation has successfully been exploited in NLP for machine translation, language modelling, and speech recognition (; ;). Latterly, it has also been used to distill task-specific knowledge from BERT. Other compression techniques have been used such as low-rank approximation decomposition , vector quantisation , and Huffman coding . For a more thorough survey of current neural network compression methods see. The essence of model distillation is to train a model and subsequently use the patterns it learnt to influence the training of a smaller model. For teacher-student distillation, the smaller model, the student, explicitly uses the information learnt by the larger original model, the teacher, by comparing the distribution of each model's output layer. We use the Kullback-Leibler divergence to calculate the loss between the teacher and the student: where P is the probability distribution from the teacher's softmax layer, Q is the probability distribution from the student's, and x is the input to the target layer for token w x in a given tree, t. For our implementation, there are two probability distributions for each model, one for the arc prediction and one for the label prediction. By using the distributions of the teacher rather than just using the predicted arc and label, the student can learn more comprehensively about which arcs and labels are very unlikely in a given context, i.e. if the teacher makes a mistake in its prediction, the distribution might still carry useful information such as having a similar probability for y g and y p which can help guide the student better rather than just learning to copy the teacher's predictions. In addition to the loss with respect to the teacher's distributions, the student model is also trained using the loss on the gold labels in the training data. We use cross entropy to calculate the loss on the student's predicted head classifications: where t is a tree in the treebank T, h is a head position for the set of heads H for a given tree, and h is the head position predicted by the student model. Similarly, cross entropy is used to calculate the loss on the predicted arc labels for the student model. The total loss for the student model is therefore: where L CE (h) is the loss for the student's predicted head positions, L CE (lab) is the loss for the student's predicted arc label, L KL (T h, S h) is the loss between the teacher's probability distribution for arc predictions and that of the student, and L KL (T lab, S lab) is the loss between label distributions. We train a Biaffine parser for a number of Universal Treebanks v2.4 (UD) and apply the teacher-student distillation method to compress these models into a number of different sizes. We use the hyperparameters from , but use a PyTorch implementation for our experiments which obtains the same parsing and runs faster than the reported speed of the original (see Table 1). 2 The hyperparameter values can be seen in Table 4. During distillation dropout is not used. Beyond lexical features, the model only utilises universal part-ofspeech (UPOS) tags. Gold UPOS tags were used for training and at runtime. Also, we used gold sentence segmentation and tokenisation. We opted to use these settings to compare models under homogeneous settings, so as to make reproducibility of and comparability with our easier. 
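The individual equations referenced above are missing from the extracted text, but the total objective is spelled out as the sum L = L_CE(h) + L_CE(lab) + L_KL(T_h, S_h) + L_KL(T_lab, S_lab). A minimal PyTorch sketch of that combined loss follows; how the arc and label logits are batched over trees, and the unweighted summation over tokens, are implementation assumptions consistent with the description.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_arc_logits, student_lab_logits,
                 teacher_arc_logits, teacher_lab_logits,
                 gold_heads, gold_labels):
    # *_arc_logits: (n_tokens, n_head_positions); *_lab_logits: (n_tokens, n_relations)
    # Cross-entropy against the gold heads and labels.
    ce = (F.cross_entropy(student_arc_logits, gold_heads) +
          F.cross_entropy(student_lab_logits, gold_labels))
    # KL divergence between the teacher's and student's full distributions,
    # so the student also learns which heads/labels are unlikely in context.
    kl = (F.kl_div(F.log_softmax(student_arc_logits, dim=-1),
                   F.softmax(teacher_arc_logits, dim=-1),
                   reduction="batchmean") +
          F.kl_div(F.log_softmax(student_lab_logits, dim=-1),
                   F.softmax(teacher_lab_logits, dim=-1),
                   reduction="batchmean"))
    return ce + kl   # L = L_CE(h) + L_CE(lab) + L_KL(T_h, S_h) + L_KL(T_lab, S_lab)
```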
Data We use the subset of UD treebanks suggested by de from v2.4, so as to cover a wide range of linguistic features, linguistic typologies, and different dataset sizes. We make some changes as this set of treebanks was chosen from a previous UD version. We exchange Kazakh with Uyghur because the Kazakh data does not include a development set and Uyghur is a closely related language. We also exchange Ancient-Greek-Proiel for Ancient-Greek-Perseus because it contains more non-projective arcs (the number of arcs which cross another arc in a given tree) as this was the original justification for including Ancient Greek. We also included Wolof as African languages were wholly unrepresented in the original collection of suggested treebanks. Details of the treebanks pertinent to parsing can be seen in Table 2. We use pretrained word embeddings from FastText for all but Ancient Greek, for which we used embeddings from , and Wolof, for which we used embeddings from. When necessary, we used the algorithm of to reduce the embeddings to 100 dimensions. For each treebank we then acquired the following models: i Baseline 1: Full-sized model is trained as normal and undergoes no compression technique. ii Baseline 2: Model is trained as normal but with equivalent sizes of the distilled models (20%, 40%, 60%, and 80% of the original size) and undergoes no compression technique. These models have the same overall structure of baseline 1, with just the number of dimensions of each layer changed to in a specific percentage of trainable parameters of the full model. iii Distilled: Model is distilled using the teacher-student method. We have four models were the first is distilled into a smaller network with 20% of the parameters of the original, the second 40%, the third 60%, and the last 80%. The network structure and parameters of the distilled models are the exact same as those of the baseline 2 models. Table 2: Statistics for salient features with respect to parsing difficulty for each UD treebank used: number of trees, the number of data instances; average tree length, the length of each data instance on average; average arc length, the mean distance between heads and dependents; non.proj. arc pct, the percentage of non-projective arcs in a treebank. Base E, the baseline models of equivalent size to the distilled models; Distill, the distilled models; Base, the performance of the original full-sized model. Hardware For evaluating the speed of each model when parsing the test sets of each treebank we set the number of CPU cores to be one and either ran the parser using that solitary core or using a GPU (using a single CPU core too). The CPU used was an Intel Core i7-7700 and the GPU was an Nvidia GeForce GTX 1080. Experiment We compare the performance of each model on the aforementioned UD treebanks with respect to the unlabelled attachment score (UAS) which evaluates the accuracy of the arcs, and the labelled attachment score (LAS) which also includes the accuracy of the arc labels. We also evaluate the differences in inference time for each model on CPU and GPU with respect to sentences per second and tokens per second. We report sentences per second as this has been the measurement traditionally used in most of the literature, but we also use tokens per second as this more readily captures the difference in speed across parsers for different treebanks where the sentence length varies considerably. 
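A small sketch of the two attachment-score metrics just described: UAS is the fraction of tokens whose predicted head is correct, while LAS additionally requires the predicted arc label to be correct.

```python
def attachment_scores(gold, pred):
    # gold, pred: lists of (head_index, label) pairs, one per token
    assert len(gold) == len(pred)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)   # heads only
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)         # heads + labels
    return uas, las

gold = [(2, "det"), (0, "root"), (2, "obj")]
pred = [(2, "det"), (0, "root"), (1, "obj")]
print(attachment_scores(gold, pred))  # (0.666..., 0.666...)
```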
We also report the number of trainable parameters of each distilled model and how they compare to the baseline, as this is considered a good measure of how green a model is in lieu of the number of floating point operations (FPO) . 6 AND DISCUSSION Figure 2a shows the average attachment scores across all treebanks for the distilled models and the equivalent-sized base models against the size of the model relative to the original full model. There is a clear gap in performance between these two sets of models with roughly 2 points of UAS and LAS more for the distilled models. This shows that the distilled models do actually manage to leverage the information from the original full model. The full model's scores are also shown and it is clear that on average the model can be distilled to 60% with no loss in performance. When compressing to 20% of the full model, the performance only decreases by about 1 point for both UAS and LAS. Figures 3a and 3b show the differences in UAS and LAS for the models distilled to 20% and 80% respectively for each treebank when compared to the equivalent sized baseline model and the full baseline model. The distilled models far outperform the equivalent-sized baselines for all treebanks. It is clear that for the smaller model that some treebanks suffer more when compressed to 20% than others when compared to the full baseline model, e.g. Finnish-TDT and Ancient-Greek-Perseus. These two treebanks have the largest percentage of non-projective arcs (as can be seen in Table 2) which could account for the decrease in performance, with a more powerful model required to account for this added syntactic complexity. However, the two smallest treebanks, Tamil-TTB and Wolof-WTB, actually increase in accuracy when using distillation, especially Tamil-TTB, which is by far the smallest treebank, with an increase in UAS and LAS of about 4 points over the full base model. This is likely the of over-fitting when using the larger, more powerful model, so that reducing the model size actually helps with generalisation. These observations are echoed in the for the model distilled to 80%, where most treebanks lose less than a point for UAS and LAS against the full baseline, but have a smaller increase in performance over the equivalent-sized baseline. This makes sense as the model is still close in size to the full baseline and still similarly powerful. The increase in performance for Tamil-TTB and Wolof-WTB are greater for this distilled model, which suggests the full model doesn't need to be compressed to such a small model to help with generalisation. The full set of attachment scores from our experiments can be seen in Table 5 in the Appendix. With respect to how green our distilled models are, Table 3 shows the number of trainable parameters for each distilled model for each treebank alongside its corresponding full-scale baseline. We report these in lieu of FPO as, to our knowledge, no packages exist to calculate the FPO for neural network layers like LSTMs which are used in our network. These numbers do not depend on the hardware used and strongly correlate with the amount of memory a model consumes. Different algorithms do utilise parameters differently, however, the models compared here are of the same structure and use the same algorithm, so comparisons of the number of trainable model parameters do relate to how much work each respective model does compared to another. 
Figures 4 and 5 show the parsing speeds on CPU and GPU for the distilled models and for the full baseline model for sentence per second and token per second, respectively. The speeds are reported for different batch sizes as this obviously affects the speed at which a neural network can make predictions, but the maximum batch size that can be used on different systems varies significantly. As can be seen in Figures 4a and 5a, the limiting factor in parsing speed is the bottleneck of loading the data onto the GPU when using a batch size less than ∼1000 sentences. However, with a batch size of 4096 sentences, we achieve an increase in parsing speed of 21% over the full baseline model when considering tokens per second. As expected, a much smaller batch size is required to achieve increases in parsing speed when using a CPU. Even with a batch size of 32 sentences, the smallest model more than doubles the speed of the baseline. For a batch size of 4096, the distilled model compressed to 20% increases the speed of the baseline by 126% when considering tokens per second. A full breakdown of the parsing speeds for each treebank and each model when using a batch size of 4096 sentences is given in Table 6 in the Appendix. Figure 6 shows the attachment scores and the corresponding parsing speed against model size for the distilled model and the full baseline model. These plots clearly show that the cost in accuracy is neglible when compared to the large increase in parsing speed. So not only does this teacher-student distillation technique maintain the accuracy of the baseline model, but it achieves real compression and with it practical increases in parsing speed and with a greener implementation. In absolute terms, our distilled models are faster than the previously fastest parser using sequence labelling, as can be seen explicitly in Table 1 for PTB, and outperforms it by over 1 point with respect to UAS and LAS when compressing to 40%. Distilling to 20% in a speed 4x that of the sequence labelling model on CPU but comes at a cost of 0.62 points for UAS and 0.76 for LAS compared to the sequence labelling accuracies. Furthermore, the increase in parsing accuracy for the smaller treebanks suggests that distillation could be used as a more efficient way of finding optimal hyperparameters depending on the available data, rather than training numerous models with varying hyperparameter settings. There are numerous ways in which this distillation technique could be augmented to potentially retain more performance and even outperform the large baseline models, such as using teacher annealing introduced by where the distillation process gradually secedes to standard training. Beyond this, the structure of the distilled models can be altered, e.g. student models which are more shallow than the teacher models . This technique could further improve the efficiency of models and make them more environmentally friendly by reducing the depth of the models and therefore the total number of trainable parameters. Distillation techniques can also be easily expanded to other NLP tasks. Already attempts have been made to make BERT more wieldy by compressing the information it contains into task-specific models. But this can be extended to other tasks more specifically and potentially reduce the environmental impact of NLP research and deployable NLP systems. We have shown the efficacy of using the teacher-student distillation technique for dependency parsing by distilling a state-of-the-art parser implementation. 
The parser used for our experiments was not only accurate but already fast, meaning it was a strong baseline from which to see improvements. We obtained parsing speeds up to 2.26x (1.21x) faster on CPU (GPU) while only losing ∼1 point for both UAS and LAS when compared to the original sized model. Furthermore, the smallest model which obtains these only has 20% of the original model's trainable parameters, vastly reducing its environmental impact. A APPENDIX
We increase the efficiency of neural network dependency parsers with teacher-student distillation.
776
scitldr
While autoencoders are a key technique in representation learning for continuous structures, such as images or wave forms, developing general-purpose autoencoders for discrete structures, such as text sequence or discretized images, has proven to be more challenging. In particular, discrete inputs make it more difficult to learn a smooth encoder that preserves the complex local relationships in the input space. In this work, we propose an adversarially regularized autoencoder (ARAE) with the goal of learning more robust discrete-space representations. ARAE jointly trains both a rich discrete-space encoder, such as an RNN, and a simpler continuous space generator function, while using generative adversarial network (GAN) training to constrain the distributions to be similar. This method yields a smoother contracted code space that maps similar inputs to nearby codes, and also an implicit latent variable GAN model for generation. Experiments on text and discretized images demonstrate that the GAN model produces clean interpolations and captures the multimodality of the original space, and that the autoencoder produces improvements in semi-supervised learning as well as state-of-the-art in unaligned text style transfer task using only a shared continuous-space representation. Recent work on regularized autoencoders, such as variational BID15 BID29 and denoising BID37 variants, has shown significant progress in learning smooth representations of complex, high-dimensional continuous data such as images. These codespace representations facilitate the ability to apply smoother transformations in latent space in order to produce complex modifications of generated outputs, while still remaining on the data manifold. Unfortunately, learning similar latent representations of discrete structures, such as text sequences or discretized images, remains a challenging problem. Initial work on VAEs for text has shown that optimization is difficult, as the decoder can easily degenerate into a unconditional language model BID2. Recent work on generative adversarial networks (GANs) for text has mostly focused on getting around the use of discrete structures either through policy gradient methods BID40 or with the Gumbel-Softmax distribution BID17. However, neither approach can yet produce robust representations directly. A major difficulty of discrete autoencoders is mapping a discrete structure to a continuous code vector while also smoothly capturing the complex local relationships of the input space. Inspired by recent work combining pretrained autoencoders with deep latent variable models, we propose to target this issue with an adversarially regularized autoencoder (ARAE). Specifically we jointly train a discrete structure encoder and continuous space generator, while constraining the two models with a discriminator to agree in distribution. This approach allows us to utilize a complex encoder model, such as an RNN, and still constrain it with a very flexible, but more limited generator distribution. The full model can be then used as a smoother discrete structure autoencoder or as a latent variable GAN model where a sample can be decoded, with the same decoder, to a discrete output. Since the system produces a single continuous coded representation-in contrast to methods that act on each RNN state-it can easily be further regularized with problem-specific invariants, for instance to learn to ignore style, sentiment or other attributes for transfer tasks. 
Experiments apply ARAE to discretized images and sentences, and demonstrate that the key properties of the model. Using the latent variable model (ARAE-GAN), the model is able to generate varied samples that can be quantitatively shown to cover the input spaces and to generate consistent image and sentence manipulations by moving around in the latent space via interpolation and offset vector arithmetic. Using the discrete encoder, the model can be used in a semi-supervised setting to give improvement in a sentence inference task. When the ARAE model is trained with task-specific adversarial regularization, the model improves the current best on sentiment transfer reported in BID33 and produces compelling outputs on a topic transfer task using only a single shared code space. All outputs are listed in the Appendix 9 and code is available at (removed for review). In practice unregularized autoencoders often learn a degenerate identity mapping where the latent code space is free of any structure, so it is necessary to apply some method of regularization. A popular approach is to regularize through an explicit prior on the code space and use a variational approximation to the posterior, leading to a family of models called variational autoencoders (VAE) BID15 BID29. Unfortunately VAEs for discrete text sequences can be challenging to train-for example, if the training procedure is not carefully tuned with techniques like word dropout and KL annealing BID2, the decoder simply becomes a language model and ignores the latent code (although there has been some recent successes with convolutional models BID32 BID39). One possible reason for the difficulty in training VAEs is due to the strictness of the prior (usually a spherical Gaussian) and/or the parameterization of the posterior. There has been some work on making the prior/posterior more flexible through explicit parameterization BID28 BID16 BID4. A notable technique is adversarial autoencoders (AAE) BID23 which attempt to imbue the model with a more flexible prior implicitly through adversarial training. In AAE framework, the discriminator is trained to distinguish between samples from a fixed prior distribution and the input encoding, thereby pushing the code distribution to match the prior. While this adds more flexibility, it has similar issues for modeling text sequences and suffers from mode-collapse in our experiments. Our approach has similar motivation, but notably we do not sample from a fixed prior distribution-our'prior' is instead parameterized through a flexible generator. Nonetheless, this view (which has been observed by various researchers BID35 BID24 BID22) provides an interesting connection between VAEs and GANs. The success of GANs on images have led many researchers to consider applying GANs to discrete data such as text. Policy gradient methods are a natural way to deal with the ing non-differentiable generator objective when training directly in discrete space BID7 BID38. When trained on text data however, such methods often require pre-training/co-training with a maximum likelihood (i.e. language modeling) objective BID40. This precludes there being a latent encoding of the sentence, and is also a potential disadvantage of existing language models (which can otherwise generate locally-coherent samples). 
Another direction of work has been through reparameterizing the categorical distribution with the Gumbel-Softmax trick BID13 BID21 )-while initial experiments were encouraging on a synthetic task BID17, scaling them to work on natural language is a challenging open problem. There has also been a flurry of recent, related approaches that work directly with the soft outputs from a generator BID9; BID33 BID26. For example, Shen et al. BID33 exploits adversarial loss for unaligned style transfer between text by having the discriminator act on the RNN hidden states and using the soft outputs at each step as input to an RNN generator, utilizing the Professor-forcing framework BID18. Our approach instead works entirely in code space and does not require utilizing RNN hidden states directly. Discrete Structure Autoencoders Define X = V n to be a set of discrete structures where V is a vocabulary of symbols and P x to be a distribution over this space. For instance, for binarized images V = {0, 1} and n is the number of pixels, while for sentences V is the vocabulary and n is the sentence length. A discrete autoencoder consists of two parameterized functions: a deterministic encoder function enc φ: X → C with parameters φ that maps from input to code space and a conditional decoder distribution p ψ (x | c) over structures X with parameters ψ. The parameters are trained on a cross-entropy reconstruction loss: DISPLAYFORM0 The choice of the encoder and decoder parameterization is specific to the structure of interest, for example we use RNNs for sequences. We use the notation,x = arg max x p ψ (x | enc φ (x)) for the (approximate) decoder mode. When x =x the autoencoder is said to perfectly reconstruct x. Generative Adversarial Networks GANs are a class of parameterized implicit generative models BID8. The method approximates drawing samples from a true distribution c ∼ P r by instead employing a latent variable z and a parameterized deterministic generator functioñ c = g θ (z) to produce samplesc ∼ P g. Initial work on GANs minimizes the Jensen-Shannon divergence between the distributions. Recent work on Wasserstein GAN (WGAN), replaces this with the Earth-Mover (Wasserstein-1) distance. GAN training utilizes two separate models: a generator g θ (z) maps a latent vector from some easy-to-sample source distribution to a sample and a critic/discriminator f w (c) aims to distinguish real data and generated samples from g θ. Informally, the generator is trained to fool the critic, and the critic to tell real from generated. WGAN training uses the following min-max optimization over generator parameters θ and critic parameters w, DISPLAYFORM1 where f w: C → R denotes the critic function,c is obtained from the generator,c = g θ (z), and P r and P g are real and generated distributions. If the critic parameters w are restricted to an 1-Lipschitz function set W, this term correspond to minimizing Wasserstein-1 distance W (P r, P g). We use a naive approximation to enforce this property by weight-clipping, i.e.. DISPLAYFORM2 Ideally, a discrete autoencoder should be able to reconstruct x from c, but also smoothly assign similar codes c and c to similar x and x. For continuous autoencoders, this property can be enforced directly through explicit regularization. For instance, contractive autoencoders BID30 regularize their loss by the functional smoothness of enc φ. However, this criteria does not apply when inputs are discrete and we lack even a metric on the input space. 
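Before turning to how ARAE combines them, the following minimal PyTorch-style sketch spells out the two building blocks just defined: the cross-entropy reconstruction loss and the clipped WGAN critic objective. The module names (encoder, decoder, critic) and tensor shapes are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def reconstruction_loss(encoder, decoder, x):
    # L_rec(phi, psi) = E_x[ -log p_psi(x | enc_phi(x)) ]
    c = encoder(x)                         # code vector, shape (batch, code_dim)
    logits = decoder(c)                    # per-position logits, (batch, n, vocab)
    return F.cross_entropy(logits.transpose(1, 2), x)   # x: (batch, n) token ids

def critic_loss(critic, real_code, fake_code):
    # Wasserstein-1 surrogate: max_w E_{c~P_r}[f_w(c)] - E_{c~P_g}[f_w(c~)],
    # so the critic minimizes the negation of this difference.
    return -(critic(real_code).mean() - critic(fake_code).mean())

def clip_critic_weights(critic, eps=0.01):
    # naive 1-Lipschitz enforcement by weight clipping, w <- clip(w, -eps, eps)
    for p in critic.parameters():
        p.data.clamp_(-eps, eps)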
How can we enforce that similar discrete structures map to nearby codes?Adversarially regularized autoencoders target this issue by learning a parallel continuous-space generator with a restricted functional form to act as a smoother reference encoding. The joint objective regularizes the autoencoder to constrain the discrete encoder to agree in distribution with its continuous counterpart: DISPLAYFORM0 Above W is the Wasserstein-1 distance between P r the distribution of codes from the discrete encoder model (enc φ (x) where x ∼ P(x)) and P g is the distribution of codes from the continuous generator model (g θ (z) for some z, e.g. z ∼ N (0, I)). To approximate Wasserstein-1 term, the W function includes an embedded critic function which is optimized adversarially to the encoder and generator as described in the . The full model is shown in Figure 1.To train the model, we use a block coordinate descent to alternate between optimizing different parts of the model: the encoder and decoder to minimize reconstruction loss, the WGAN critic function to approximate the W term, the encoder and generator to adversarially fool the critic to minimize W: DISPLAYFORM1 The full training algorithm is shown in Algorithm 1. discrete struct. encoder code (P r) decoder reconstruction loss DISPLAYFORM2 Figure 1: ARAE architecture. The model can be used as an autoencoder, where a structure x is encoded and decoded to producex, and as a GAN (ARAE-GAN), where a sample z is passed though a generator g θ to produce a code vector, which is similarly decoded tox. The critic function fw is only used at training to help approximate W. for number of training iterations do Train the autoencoder for reconstruction DISPLAYFORM0 Backpropagate reconstruction loss, DISPLAYFORM1, and update. Sample DISPLAYFORM2 DISPLAYFORM3 Backpropagate adversarial loss DISPLAYFORM4 ) and update. Extension: Code Space Transfer One benefit of the ARAE framework is that it compresses the input to a single code vector. This framework makes it ideal for manipulating discrete objects while in continuous code space. For example, consider the problem of unaligned transfer, where we want to change an attribute of a discrete input without supervised examples, e.g. to change the topic or sentiment of a sentence. First, we extend the decoder to condition on a transfer variable denoting this attribute y which is known during training, to learn p ψ (x | c, y). Next, we train the code space to be invariant to this attribute, to force it to be learned fully by the decoder. Specifically, we further regularize the code space to map similar x with different attribute labels y near enough to fool a code space attribute classifier, i.e.: DISPLAYFORM5 where L class (φ, u) is the loss of a classifier p u (y | c) from code space to labels (in our experiments we always set λ = 1). To incorporate this additional regularization, we simply add two more gradient update steps: (2b) training a classifier to discriminate codes, and (3b) adversarially training the encoder to fool this classifier. The algorithm is shown in Algorithm 2. Note that similar technique has been introduced in other domains, notably in images BID19 and video modeling BID6. 
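The block coordinate descent above is easier to follow in code. The sketch below shows one ARAE training iteration, including the optional transfer extension (steps 2b/3b); it reuses the helper functions from the previous sketch, and the optimizer bookkeeping, hyperparameters, and the exact form of the "fool the classifier" loss are assumptions made for illustration.

import torch
import torch.nn.functional as F

def arae_training_step(x, y, encoder, decoder, generator, critic,
                       attr_classifier, opts, noise_dim, lam=1.0, use_transfer=False):
    # One block-coordinate-descent iteration (a sketch of Algorithms 1/2 described above).
    batch = x.size(0)

    # (1) autoencoder step: minimize reconstruction loss w.r.t. phi, psi
    opts["ae"].zero_grad()
    reconstruction_loss(encoder, decoder, x).backward()
    opts["ae"].step()

    # (2) critic step: approximate W(P_r, P_g); in practice repeated several times
    opts["critic"].zero_grad()
    real_code = encoder(x).detach()
    fake_code = generator(torch.randn(batch, noise_dim)).detach()
    critic_loss(critic, real_code, fake_code).backward()
    opts["critic"].step()
    clip_critic_weights(critic)

    # (3) adversarial step: encoder and generator move their code distributions together
    opts["adv"].zero_grad()
    adv = critic(encoder(x)).mean() - critic(generator(torch.randn(batch, noise_dim))).mean()
    (lam * adv).backward()
    opts["adv"].step()

    if use_transfer:
        # (2b) train an attribute classifier on frozen codes
        opts["cls"].zero_grad()
        F.cross_entropy(attr_classifier(encoder(x).detach()), y).backward()
        opts["cls"].step()
        # (3b) adversarially train the encoder to fool that classifier
        opts["enc_cls"].zero_grad()
        (-F.cross_entropy(attr_classifier(encoder(x)), y)).backward()
        opts["enc_cls"].step()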
We experiment with three different ARAE models: an autoencoder for discretized images trained on the binarized version of MNIST, an autoencoder for text sequences trained using the Stanford Natural Language Inference (SNLI) corpus BID1, and an autoencoder trained DISPLAYFORM0, and compute code-vectors c DISPLAYFORM1 Backpropagate adversarial classifier loss DISPLAYFORM2 for text transfer (Section 6.2) based on the Yelp and Yahoo datasets for unaligned sentiment and topic transfer. All three models utilize the same generator architecture, g θ. The generator architecture uses a low dimensional z with a Gaussian prior p(z) = N (0, I), and maps it to c. Both the critic f w and the generator g θ are parameterized as feed-forward MLPs. The image model uses fully-connected NN to autoencode binarized images. Here X = {0, 1} n where n is the image size. The encoder used is a feed-forward MLP network mapping from {0, DISPLAYFORM3 The text model uses a recurrent neural network (RNN) for both the encoder and decoder. Here X = V n where n is the sentence length and V is the vocabulary of the underlying language. Define an RNN as a parameterized recurrent function h j = RNN(x j, h j−1 ; φ) for j = 1... n (with h 0 = 0) that maps a discrete input structure x to hidden vectors h 1... h n. For the encoder, we define enc φ (x) = h n = c. For decoding we feed c as an additional input to the decoder RNN at each time step, i.e.h j = RNN(x j,h j−1, c; ψ), and further calculate the distribution over V at each time step via softmax, p ψ (x | c) = n j=1 softmax(Wh j + b) xj where W and b are parameters (part of ψ). Finding the most likely sequencex under this distribution is intractable, but it is possible to approximate it using greedy search or beam search. In our experiments we use an LSTM architecture BID12 for both the encoder/decoder and decode using greedy search. The text transfer model uses the same architecture as the text model but extends it with a code space classifier p(y|c) which is modeled using an MLP and trained to minimize cross-entropy. Our baselines utilize a standard autoencoder (AE) and the cross-aligned autoencoder BID33 for transfer. Note that in both our ARAE and standard AE experiments, the encoded code from the encoder is normalized to lie on the unit sphere, and the generated code is bounded to lie in (−1, 1) n by the tanh function at output layer. We additionally experimented with the sequence VAE introduced by BID2 and the adversarial autoencoder (AAE) model BID23 on the SNLI dataset. However despite extensive parameter tuning we found that neither model was able to learn meaningful latent representations-the VAE simply ignored the latent code and the AAE experienced mode-collapse and repeatedly generated the same samples. The Appendix 12 includes detailed descriptions of the hyperparameters, model architecture, and training regimes. Our experiments consider three aspects of the model. First we measure the empirical impact of regularization on the autoencoder. Next we apply the discrete autoencoder to two applications, unaligned style transfer and semi-supervised learning. Finally we employ the learned generator network as an implicit latent variable model (ARAE-GAN) over discrete sequences. Our main goal for ARAE is to regularize the model produce a smoother encoder by requiring the distribution from the encoder to match the distribution from the continuous generator over a simple latent variable. 
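For concreteness, here is a compact sketch of the text-model parameterization described above: an LSTM encoder whose final hidden state is normalized onto the unit sphere, an LSTM decoder that receives the code as an extra input at every step, and a tanh-bounded MLP generator with an MLP critic. The hidden sizes and layer counts are illustrative choices, not the exact configuration used in the experiments.

import torch
import torch.nn as nn

class TextARAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=300, code_dim=300, noise_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.enc_rnn = nn.LSTM(emb_dim, code_dim, batch_first=True)
        self.dec_rnn = nn.LSTM(emb_dim + code_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
        self.generator = nn.Sequential(          # g_theta: z -> c~, bounded by tanh
            nn.Linear(noise_dim, 300), nn.ReLU(),
            nn.Linear(300, code_dim), nn.Tanh())
        self.critic = nn.Sequential(             # f_w: code -> scalar
            nn.Linear(code_dim, 300), nn.ReLU(),
            nn.Linear(300, 1))

    def encode(self, x):
        _, (h_n, _) = self.enc_rnn(self.embed(x))
        c = h_n[-1]
        return c / c.norm(dim=-1, keepdim=True)   # normalize the code onto the unit sphere

    def decode(self, x_shifted, c):
        # the code c is fed as an additional input to the decoder at every time step
        emb = self.embed(x_shifted)
        c_rep = c.unsqueeze(1).expand(-1, emb.size(1), -1)
        h, _ = self.dec_rnn(torch.cat([emb, c_rep], dim=-1))
        return self.out(h)                        # per-step logits over the vocabulary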
To examine this claim we consider two basic statistical properties of the code space during training of the text model on SNLI, shown in FIG1. On the left, we see that the 2 norm of c and codec converge quickly in ARAE training. The encoder code is always restricted to be on the unit sphere, and the generated codec quickly learns to match it. The middle plot shows the convergence of the trace of the covariance matrix between the generator and the encoder as training progresses. We find that variance of the encoder and the generator match after several epochs. To check the smoothness of the model, for both ARAE/AE, we take a sentence and calculate the average cosine similarity of 100 randomly-selected sentences that had an edit-distance of at most 5 to the original sentence. We do this for 250 sentences and calculate the mean of the average cosine similarity. FIG1 (right) shows that the cosine similarity of nearby sentences is quite high for the ARAE than in the case for the AE. Edit-distance is not an ideal proxy for similarity in sentences, but it is often a sufficient condition. Finally an ideal representation should be robust to small changes of the input around the training examples in code space BID30. We can test this property by feeding a noised input to the encoder and (i) calculating the score given to the original input, and (ii) checking the reconstructions. TAB2 (right) shows an experiment for text where we add noise by permuting k words in each sentence. We observe that the ARAE is able to map a noised sentence to a natural sentence, (though not necessarily the denoised sentence). TAB2 (left) shows empirical for these experiments. We obtain the reconstruction error (i.e. negative log likelihood) of the original (non-noised) sentence under the decoder, utilizing the noised code. We find that when k = 0 (i.e. no swaps), the regular AE better reconstructs the input as expected. However, as we increase the number of swaps and push the input further away from the data manifold, the ARAE is more likely to produce the original sentence. We note that unlike denoising autoencoders which require a domain-specific noising function BID10 BID37, the ARAE is not explicitly trained to denoise an input, but learns to do so as a byproduct of adversarial regularization. Unaligned Text Transfer A smooth autoencoder combined with low reconstruction error should make it possible to more robustly manipulate discrete objects through code space without dropping off the data manifold. To test this hypothesis, we experimented with two unaligned text transfer tasks. For these tasks, we attempt to change one attribute of a sentence without aligned examples of this change. To perform this transfer, we learn a code space that can represent an input that is agnostic to this attribute, and a decoder that can incorporate the attribute (as described in Section 4). We experiment with unaligned transfer of sentiment on the Yelp corpus and topic on the Yahoo corpus BID41. we came on the recommendation of a bell boy and the food was amazing. the people who ordered off the menu did n't seem to do much better. ARAE we came on the recommendation and the food was a joke. ARAE the people who work there are super friendly and the menu is good. Cross-AE we went on the car of the time and the chicken was awful.Cross-AE the place, one of the office is always worth you do a business. For sentiment we follow the same setup as BID33 and split the Yelp corpus into two sets of unaligned positive and negative reviews. 
We train an ARAE as an autoencoder with two separate decoders, one for positive and one for negative sentiment, and incorporate adversarial training of the encoder to remove sentiment information from the code space. We test by encoding in sentences of one class and decoding, greedily, with the opposite decoder. Our evaluation is based on four automatic metrics, shown in Table 2: (i) Transfer: measuring how successful the model is at transferring sentiment based on an automatic classifier (we use the fastText library BID14).(ii) BLEU: measuring the consistency between the transferred text and the original. We expect the model to maintain as much information as possible and transfer only the style; (iii) Perplexity: measuring the fluency of the generated text; (iv) Reverse Perplexity: measuring the extent to which the generations are representative of the underlying data distribution. 1 Both perplexity numbers are obtained by training an RNN language model. We additionally perform human evaluations on the cross-aligned AE and our best ARAE model. We randomly select 1000 sentences (500/500 positive/negative), obtain the corresponding transfers from both models, and ask Amazon Mechanical Turkers to evaluate the sentiment (Positive/Neutral/Negative) and naturalness (1-5, 5 being most natural) of the transferred sentences. We create a separate task in which we show the Turkers the original and the transferred sentences, and ask them to evaluate the similarity based on sentence structure (1-5, 5 being most similar). We explicitly ask the Turkers to disregard sentiment in their similarity assessment. In addition to comparing against the cross-aligned AE of BID33, we also compare against a vanilla AE trained without adversarial regularization. For ARAE, we experimented with different λ weighting on the adversarial loss (see section 4) with λ a = 1, λ b = 10. We generally set λ = 1. Experimentally the adversarial regularization enhances transfer and perplexity, but tends to make the transferred text less similar to the original, compared to the AE. Some randomly selected sentences are shown in figure 6 and more samples are shown available in Appendix 9.The same method can be applied to other style transfer tasks, for instance the more challenging Yahoo QA data BID41 Semi-Supervised Training We further utilize ARAE in a standard AE setup for semi-supervised training. We experiment on a natural language inference task, shown in Table 5 (right). We use 22.2%, 10.8% and 5.25% of the original labeled training data, and use the rest of the training set for unlabeled training. The labeled set is randomly picked. The full SNLI training set contains 543k sentence pairs, and we use supervised sets of 120k, 59k and 28k sentence pairs respectively for the three settings. As a baseline we use an AE trained on the additional data, similar to the setting explored in BID5. For ARAE we use the subset of unsupervised data of length < 15, which roughly includes 655k single sentences (due to the length restriction, this is a subset of 715k sentences that were used for AE training). As observed by BID5, training on unlabeled data with an AE objective improves upon a model just trained on labeled data. Training with adversarial regularization provides further gains. After training, an ARAE can also be used as an implicit latent variable model controlled by z and the generator g θ, which we refer to as ARAE-GAN. 
While models of this form have been widely used for generation in other modalities, they have been less effective for discrete structures. In this section, we attempt to measure the effectiveness of this induced discrete GAN. A common test of a GAN's ability to mimic the true distribution P_r is to train a simple model on generated samples from P_g. While there are pitfalls to this evaluation BID34, it provides a starting point for text modeling. Here we generate 100k samples from (i) ARAE-GAN, (ii) an AE (to "sample" from an AE we fit a multivariate Gaussian to the code space after training, draw code vectors from this Gaussian, and decode them back into sentence space), (iii) an RNN LM trained on the same data, and (iv) the real training set (samples from the models are shown in Appendix 10). [Table 5: Left, semi-supervised accuracy on the natural language inference (SNLI) test set, using 22.2% (medium), 10.8% (small), and 5.25% (tiny) of the supervised labels of the full SNLI training set, with the rest used for unlabeled AE training; Right, perplexity (lower is better) of language models trained on synthetic samples from a GAN/AE/LM and evaluated on real data (Reverse PPL). Figure 3: sample interpolations from the ARAE-GAN, constructed by linearly interpolating in the latent space and decoding to the output space, with word changes highlighted; sample sentences omitted here.] All models are of the same size to allow for fair comparison. We train an RNN language model on generated samples and evaluate on held-out data to calculate the reverse perplexity. As can be seen from Table 5, training on real data (understandably) outperforms training on generated data by a large margin. Surprisingly, however, we find that a language model trained on ARAE-GAN data performs slightly better than one trained on LM-generated or AE-generated data. We further found that the reverse PPL of an AAE BID23 was quite high due to mode collapse. Another property of GANs (and VAEs) is that the Gaussian form of z induces the ability to smoothly interpolate between outputs by exploiting the structure of the latent space. While language models may provide a better estimate of the underlying probability space, constructing this style of interpolation with them would require combinatorial search, which makes it a useful feature of text GANs. We experiment with this property by sampling two points z_0 and z_1 from p(z) and constructing intermediary points z_λ = λ z_1 + (1 − λ) z_0. For each we generate the argmax output x̂_λ.
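A short sketch of this interpolation procedure, assuming a trained generator and a hypothetical greedy decoding helper decode_greedy(c) that maps a code vector to its argmax sentence:

import torch

def interpolate_sentences(generator, decode_greedy, noise_dim=100, steps=8):
    # decode points along the segment z_lambda = lam * z1 + (1 - lam) * z0
    z0 = torch.randn(1, noise_dim)
    z1 = torch.randn(1, noise_dim)
    sentences = []
    for i in range(steps + 1):
        lam = i / steps
        z_lam = lam * z1 + (1.0 - lam) * z0
        c = generator(z_lam)                # map the latent point to a code vector
        sentences.append(decode_greedy(c))  # greedy (argmax) decoding of that code
    return sentences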
The samples are shown in FIG0 (left) for text and in FIG0 (right) for a discretized MNIST ARAE-GAN. A final intriguing property of image GANs is the ability to move in the latent space via offset vectors (similar to the case with word vectors BID25). For example, Radford et al. BID27 observe that when the mean latent vector for "men with glasses" is subtracted from the mean latent vector for "men without glasses" and applied to an image of a "woman without glasses", the resulting image is that of a "woman with glasses". To experiment with this property we generate 1 million sentences from the ARAE-GAN and compute vector transforms in this space to attempt to change main verbs, subjects, and modifiers (details in Appendix 11). Some examples of successful transformations are shown in FIG2 (right). Quantitative evaluation of the success of the vector transformations is given in FIG2 (left). We present adversarially regularized autoencoders as a simple approach for training a discrete-structure autoencoder jointly with a code-space generative adversarial network. The model learns an improved autoencoder, as demonstrated by the semi-supervised experiments and the improvements on text transfer experiments. It also learns a useful generative model for text that exhibits a robust latent space, as demonstrated by natural interpolations and vector arithmetic. We do note that (as has been frequently observed when training GANs) our model seemed to be quite sensitive to hyperparameters. Finally, while many useful models for text generation already exist, text GANs provide a qualitatively different approach influenced by the underlying latent variable structure. We envision that such a framework could be extended to a conditional setting, combined with other existing decoding schemes, or used to provide a more interpretable model of language. One can interpret the ARAE framework as a dual-pathway network mapping two distinct distributions into a similar one; enc_φ and g_θ both output code vectors that are kept similar in terms of Wasserstein distance as measured by the critic. We provide the following proposition showing that, under our parameterization of the encoder and the generator, as the Wasserstein distance converges, the encoder distribution (c ∼ P_r) converges to the generator distribution (c ∼ P_g), and further, their moments converge. This is ideal since under our setting the generated distribution is simpler than the encoded distribution, because the input to the generator comes from a simple distribution (e.g. a spherical Gaussian) and the generator possesses less capacity than the encoder. However, it is not so simple that it is overly restrictive (e.g. as in VAEs). Empirically we observe that the first and second moments do indeed converge as training progresses (Section 6.1). Proposition 1. Let P be a distribution on a compact set χ, and (P_n)_{n∈N} be a sequence of distributions on χ. Further suppose that W(P_n, P) → 0. Then the following statements hold: (i) P_n ⇒ P (i.e. convergence in distribution). (ii) All moments converge, i.e. for all k ≥ 1, k ∈ N, E_{x∼P_n}[x^k] → E_{x∼P}[x^k]. Proof. (i) has been proved in BID36, Theorem 6.9. For (ii), by the Portmanteau theorem, (i) is equivalent to E_{x∼P_n}[f(x)] → E_{x∼P}[f(x)] for all bounded and continuous functions f: R^d → R, where d is the dimension of the random variable. The k-th moment of a distribution is given by E_{x∼P}[x^k] = ∫_χ x^k dP(x). Our encoded code is bounded, as we normalize the encoder output to lie on the unit sphere, and our generated code is also bounded to lie in (−1, 1)^n by the tanh function.
Hence x^k is bounded and continuous on this compact support, so (ii) follows from (i). [Appendix 9: additional qualitative sentiment-transfer examples on Yelp, each listing an Original sentence with its ARAE and Cross-AE transfers; omitted here.]
[Further transfer examples (Original / ARAE / Cross-AE) and generated sample sentences are omitted here. Figure 5: text samples generated from ARAE-GAN, a simple AE, and a baseline LM trained on the same data; to generate from an AE we fit a multivariate Gaussian to the learned code space and draw code vectors from this Gaussian.] We generate 1 million sentences from the ARAE-GAN and parse the sentences to obtain the main verb, subject, and modifier. Then, for a given sentence, to change the main verb we subtract the mean latent vector t of all other sentences with the same main verb (in the first example in FIG2 this corresponds to all sentences whose main verb was "sleeping") and add the mean latent vector of all sentences that have the desired transformation (with the running example, all sentences whose main verb was "walking"). We do the same to transform the subject and the modifier. We decode back into sentence space with the transformed latent vector via sampling from p_ψ(x | g(z + t)). Some examples of successful transformations are shown in FIG2 (right). Quantitative evaluation of the success of the vector transformations is given in FIG2 (left). For each original vector z we sample 100 sentences from p_ψ(x | g(z + t)) with the transformed latent vector and consider it a match if any of the sentences demonstrates the desired transformation.
Match % is the proportion of original vectors that yield a match after the transformation. Since we ideally want the generated samples to differ only in the specified transformation, we also calculate the average word precision against the original sentence (Prec) for any match.
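A sketch of this offset-vector procedure and its evaluation is given below. The helpers sample_sentences (sampling decodings from p_ψ(x | g(z + t))) and parse_attribute (extracting the main verb, subject, or modifier) are hypothetical names introduced only for illustration.

import torch

def attribute_offset(latents, attributes, source, target):
    # offset t = mean latent of sentences with the target attribute
    #          - mean latent of sentences with the source attribute
    src = latents[[i for i, a in enumerate(attributes) if a == source]].mean(dim=0)
    tgt = latents[[i for i, a in enumerate(attributes) if a == target]].mean(dim=0)
    return tgt - src

def transform_and_score(z, t, sample_sentences, parse_attribute, target, original, n=100):
    # a "match" is any of the n samples whose parsed attribute equals the target;
    # Prec is the word overlap of the matched sentence with the original sentence
    samples = sample_sentences(z + t, n)
    matches = [s for s in samples if parse_attribute(s) == target]
    if not matches:
        return False, 0.0
    best = matches[0]
    overlap = len(set(best.split()) & set(original.split()))
    return True, overlap / max(len(best.split()), 1)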
Adversarially Regularized Autoencoders learn smooth representations of discrete structures allowing for interesting results in text generation, such as unaligned style transfer, semi-supervised learning, and latent space interpolation and arithmetic.
777
scitldr
When data arise from multiple latent subpopulations, machine learning frameworks typically estimate parameter values independently for each sub-population. In this paper, we propose to overcome these limits by considering samples as tasks in a multitask learning framework.
We present a method to estimate collections of regression models in which each model is personalized to a single sample.
778
scitldr
In this work, we first conduct mathematical analysis on the memory, which is defined as a function that maps an element in a sequence to the current output, of three RNN cells; namely, the simple recurrent neural network (SRN), the long short-term memory (LSTM) and the gated recurrent unit (GRU). Based on the analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell. Next, we present a multi-task RNN model that is robust to previous erroneous predictions, called the dependent bidirectional recurrent neural network (DBRNN), for the sequence-in-sequenceout (SISO) problem. Finally, the performance of the DBRNN model with the ELSTM cell is demonstrated by experimental . The recurrent neural network (RNN) has proved to be an effective solution for natural language processing (NLP) through the advancement in the last three decades BID8 BID11 BID2 BID1. At the cell level of a RNN, the long short-term memory (LSTM) BID10 and the gated recurrent unit (GRU) are often adopted by a RNN as its low-level building cell. Being built upon these cells, various RNN models have been proposed to solve the sequence-in-sequence-out (SISO) problem. To name a few, there are the bidirectional RNN (BRNN) BID14, the encoder-decoder model BID15 BID16 BID0 and the deep RNN BID12. Although the LSTM and the GRU were designed to enhance the memory length of RNNs and avoid the gradient vanishing/exploding issue BID10 BID13 BID3, a good understanding of their memory length is still lacking. Here, we define the memory of a RNN model as a function that maps an element in a sequence to current output. The first objective of this research is to analyze the memory length of three RNN cells -the simple RNN (SRN) BID8 BID11, the long short-term memory (LSTM) and the gated recurrent unit (GRU). This will be conducted in Sec. 2. Such analysis is different to the investigation of gradient vanishing/exploding problem in a sense that gradient vanishing/exploding problem happens during the training process, the memory analysis is, however, done on a trained RNN model. Based on the understanding from the memory analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell in Sec.3.As to the macro RNN model, one popular choice is the BRNN. Since the elements in BRNN output sequences should be independent of each other BID14, the BRNN cannot be used to solve dependent output sequence problem alone. Nevertheless, most language tasks do involve dependent output sequences. The second choice is the encoder-decoder system, where the attention mechanism has been introduced BID16 BID0 to improve its performance furthermore. As shown later in this work, the encoder-decoder system is not an efficient learner. Here, to take advantages of both the encoder-decoder and the BRNN and overcome their drawbacks, we propose a new multitask model called the dependent bidirectional recurrent neural network (DBRNN), which will be elaborated in Sec. 4. Furthermore, we conduct a series of experiments on the part of speech (POS) tagging and the dependency parsing (DP) problems in Sec. 5 to demonstrate the performance of the DBRNN model with the ELSTM cell. Finally, concluding remarks are given and future research direction is pointed out in Sec. 6. For a large number of NLP tasks, we are concerned with finding semantic patterns from the input sequence. 
It was shown by BID8 that the RNN builds an internal representation of semantic patterns. The memory of a cell characterizes its ability to map an input sequence of certain length into such a representation. More rigidly, we define the memory as a function that maps an element in a sequence to the current output. So the memory capability of a RNN is not only about whether an element can be mapped into current output, but also how this mapping takes place. It was reported by BID9 that a SRN only memorized sequences of length between 3-5 units while a LSTM could memorize sequences of length longer than 1000 units. In this section, we study the memory of the SRN, LSTM and GRU. Here, for the ease of analysis, we use Elman's SRN model BID8 with linear hidden state activation function and non-linear output activation function since such cell model is mathematically tractable and performance-wise equivalent to BID11 and Tensorflow's variations. The SRN is described by the following two equations: DISPLAYFORM0 DISPLAYFORM1 where subscript t is the index of the time unit, W c ∈ R N ×N is the weight matrix for hidden state vector c t−1 ∈ R N, W in ∈ R N ×M is the weight matrix of input vector X t ∈ R M, h t ∈ R N and f (·) is an element-wise non-linear function. Usually, f (·) is a hyperbolic-tangent or a sigmoid function. Throughout this paper, we omit the bias terms by putting them inside the corresponding weight matrices. By induction, c t can be rewritten as DISPLAYFORM2 where c 0 is the initial internal state of the SRN. Typically, we set c 0 = 0. Then, Eq. becomes DISPLAYFORM3 Let λ max be the largest singular value of W c. Then, we have DISPLAYFORM4 Here, we are only interested in the case of memory decay when λ max < 1. Hence, the contribution of X k, k < t, to h t decays at least in form of λ |t−k| max. We conclude that SRN's memory decays at least exponentially with its memory length |t − k|. By following the work of BID10, we plot the diagram of a LSTM cell in FIG0. In this figure, φ, σ and ⊗ denote the hyperbolic tangent function, the sigmoid function and the multiplication operation, respectively. All of them operate in an element-wise fashion. The LSTM has an input gate, an output gate, a forget gate and a constant error carousal (CEC) module. Mathematically, the LSTM cell can be written as DISPLAYFORM5 DISPLAYFORM6 where c t ∈ R N, column vector I t ∈ R (M +N) is a concatenation of the current input, X t ∈ R M, and the previous output, h t−1 ∈ R N (i.e., I DISPLAYFORM7 and W in are weight matrices for the forget gate, the input gate, the output gate and the input, respectively. Under the assumption c 0 = 0, the hidden state vector of the LSTM can be derived by induction as DISPLAYFORM8 By setting f (·) in Eq. to a hyperbolic-tangent function, we can compare outputs of the SRN and the LSTM below: DISPLAYFORM9 LSTM: h DISPLAYFORM10 We see from the above that W t−k c and t j=k+1 σ(W f I j) play the same memory role for the SRN and the LSTM, respectively. DISPLAYFORM11 As given in Eqs. FORMULA4 and FORMULA0, the impact of input I k on output h t in the LSTM lasts longer than that of input X k in the SRN. This is the case if an appropriate weight matrix, W f, of the forget gate is selected. The GRU was originally proposed for neural machine translation. It provides an effective alternative for the LSTM. 
Its operations can be expressed by the following four equations: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 where X t, h t, z t and r t denote the input vector, the hidden state vector, the update gate vector and the reset gate vector, respectively, and W z, W r, W, are trainable weight matrices. Its hidden state is also its output, which is given in Eq. FORMULA0. If we simplify the GRU by setting U z, U r and U to zero matrices, then we can obtain the following simplified GRU system: DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 For the simplified GRU with the initial rest condition, we can derive the following by induction: DISPLAYFORM6 By comparing Eqs. FORMULA8 and FORMULA0, we see that the update gate of the simplified GRU and the forget gate of the LSTM play the same role. One can control the memory decay behavior of the GRU by choosing the weight matrix, W z, of the update gate carefully. As discussed above, the LSTM and the GRU have longer memory by introducing the forget and the update gates, respectively. However, from Eq. 10 and Eq. 11, it can be seen that the impact of proceeding element to the current output at time step t still fades quickly due to the presence of forget gate and update gate. And as we will show in the ELSTM design, this does not have to be the case. In this section, we attempt to design extended-long short-term memory (ELSTM) cells and propose two new cell models:• ELSTM-I: the extended-long short-term memory (ELSTM) with trainable input weight vector s i ∈ R N, i = 1, · · ·, t − 1, where weights s i and s j (with i = j) are independent.• ELSTM-II: the ELSTM-I with no forget gate. The ELSTM-I cell can be described by DISPLAYFORM0 DISPLAYFORM1 where b ∈ R N is a trainable bias vector. The ELSTM-II cell can be written as DISPLAYFORM2 DISPLAYFORM3 As shown above, we introduce scaling factor, s i, i = 1, · · ·, t − 1, to the ELSTM-I and the ELSTM-II to increase or decrease the impact of input I i in the sequence. To prove that the proposed ELSTM-I has longer memory than LSTM, we first derive the closed form expression of h t, which is: DISPLAYFORM4 We then pick s k such that: DISPLAYFORM5 Compare Eq. 25 with Eq. 11, we conclude that ELSTM-I has longer memory than LSTM. As a matter of fact, s k plays a similarly role as the attention score in various attention models such as BID16. The impact of proceeding elements to the current output can be adjusted (either increase or decrease) by s k. The memory capability of ELSTM-II can be proven in a similarly fashion, so even ELSTM-II does not have forget gate, it is capable in attending to or forgetting a particular position of a sequence as ELSTM-I through the scaling factor. The major difference between the ELSTM-I and the ELSTM-II is that fewer parameters are used in the ELSTM-II than those in the ELSTM-I. The numbers of parameters used by different RNN cells are compared in TAB0, where X t ∈ R M, h t ∈ R N and t = 1, · · ·, T. Although the number of parameters of ELSTM depends on the maximum length of a sequence in practice, the memory overhead required is limited. ELSTM-II requires less number of parameters than LSTM for typical lengthed sequence. From Table. 1, to double the number of parameters as compare to an ordinary LSTM, the length of a sentence needs to be 4 times the size of the word embedding size and number of cells put together. That is, in the case of BID15 with 1000 word embedding and 1000 cells, the sentence length needs to be 4 × (1000 + 1000) = 8000! 
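The ELSTM equations above did not survive extraction cleanly, so the following PyTorch sketch encodes one plausible reading of the ELSTM-I cell: a standard LSTM-style cell whose gated input at step t is additionally multiplied by a trainable, position-specific scaling vector s_t (one vector per position up to a maximum length). Treat the exact gate arrangement as an assumption rather than the authors' definitive formulation.

import torch
import torch.nn as nn

class ELSTMICell(nn.Module):
    def __init__(self, input_dim, hidden_dim, max_len):
        super().__init__()
        d = input_dim + hidden_dim                   # I_t = [X_t ; h_{t-1}]
        self.W_f = nn.Linear(d, hidden_dim)          # forget gate
        self.W_i = nn.Linear(d, hidden_dim)          # input gate
        self.W_o = nn.Linear(d, hidden_dim)          # output gate
        self.W_in = nn.Linear(d, hidden_dim)         # input transform
        # one trainable scaling vector s_t per position: the key ELSTM ingredient
        self.s = nn.Parameter(torch.ones(max_len, hidden_dim))

    def forward(self, x_t, h_prev, c_prev, t):
        I_t = torch.cat([x_t, h_prev], dim=-1)
        f = torch.sigmoid(self.W_f(I_t))
        i = torch.sigmoid(self.W_i(I_t))
        o = torch.sigmoid(self.W_o(I_t))
        # ELSTM-II would drop the forget gate, i.e. c = c_prev + s[t] * i * tanh(...)
        c = f * c_prev + self.s[t] * i * torch.tanh(self.W_in(I_t))
        h = o * torch.tanh(c)
        return h, c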
In practice, most NLP problems whose input involves sentences, the length will be typically less than 100. In our experiment, sequence to sequence with attention BID16 for maximum sentence length 100 (other model settings please refer to TAB1), ELSTM-I parameters uses 75M of memory, ELSTM-II uses 69.1M, LSTM uses 71.5M, and GRU uses 65.9M. Through GPU parallelization, the computational time for all four cells are almost identical with 0.4 seconds per step time on a GeForce GTX TITAN X GPU. We investigate the macro RNN model and propose a multitask model called dependent BRNN (DBRNN) in this section. The model is tasked to predict a output sequence DISPLAYFORM0, where T and T are the length of the input and output sequence respectively. Our proposal is inspired by the pros and cons of two RNN modelsthe bidirectional RNN (BRNN) model BID14 and the encoder-decoder model. In the following, we will first examine the BRNN and the encoder-decoder in Sec. 4.1 and, then, propose the DBRNN in Sec. 4.2. BRNN is modeling the conditional probability density function: DISPLAYFORM0 ). This output is a combination of the output of a forward and a backward RNN. Due to this bidirectional design, the BRNN can fully utilize the information of the entire input sequence to predict each individual output element. On the other hand, the BRNN does not utilize the predicted output in predicting Y t. This makes elements in the predicted sequenceŶ t = argmax Yt P (Y t |{X i} T t=1 ) independent of each other. ). However, the encoder-decoder model is vulnerable to previous erroneous predictions in the forward path. Recently, the BRNN has been introduced in the encoder by BID0, yet this design still does not address the erroneous prediction problem. Being motivated by observations in Sec. 4.1, we propose a multitask RNN model called DBRNN to fulfill the following objectives: DISPLAYFORM0 DISPLAYFORM1 where W f and W b are trainable weights. DISPLAYFORM2 ). The DBRNN has three learning objectives: the target sequence for the forward RNN prediction, the reversed target sequence for the backward RNN prediction, and, finally, the target sequence for the bidirectional prediction. The DBRNN model is shown in FIG2. It consists of a lower and an upper BRNN branches. At each time step, the input to the forward and the backward parts of the upper BRNN is the concatenated forward and backward outputs from the lower BRNN branch. The final bidirectional prediction is the pooling of both the forward and backward predictions. We will show later that this design will make DBRNN robust to previous erroneous predictions. DISPLAYFORM3 where c denotes the cell hidden state and l denotes the lower BRNN. The final output, h t, of the lower BRNN is the concatenation of the output, h f t, of the forward RNN and the output, h b t, of the backward RNN. Similarly, the upper BRNN generates the final output p t as DISPLAYFORM4 where u denotes the upper BRNN. To generate forward predictionŶ There are three errors: prediction error ofŶ f t denoted by e f, prediction error ofŶ b t denoted by e b and prediction error ofŶ t denoted by e. To train this network, e f is back propagated through time to the upper forward RNN and the lower BRNN, e b is back propagated through time to the upper backward RNN and the lower BRNN, and e is back propagated through time to the entire model. 
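The DBRNN forward pass described above can be sketched as follows. The cell choice (LSTM), the scalar pooling weights standing in for the trainable W_f and W_b, and the flip-based alignment of the backward branch are simplifying assumptions; training would sum cross-entropy losses on the forward prediction, the backward prediction (against the reversed target), and the pooled prediction.

import torch
import torch.nn as nn

class DBRNN(nn.Module):
    # lower BRNN feeds an upper forward RNN and an upper backward RNN,
    # whose per-step predictions are pooled by trainable weights
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.lower = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.upper_fwd = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.upper_bwd = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.out_fwd = nn.Linear(hidden_dim, num_classes)
        self.out_bwd = nn.Linear(hidden_dim, num_classes)
        self.w_f = nn.Parameter(torch.tensor(0.5))   # pooling weights (simplified W_f, W_b)
        self.w_b = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        h, _ = self.lower(x)                              # (batch, T, 2*hidden)
        p_f, _ = self.upper_fwd(h)                        # forward pass over lower output
        p_b, _ = self.upper_bwd(torch.flip(h, dims=[1]))  # backward pass over reversed output
        y_f = self.out_fwd(p_f)                           # forward prediction logits
        y_b = torch.flip(self.out_bwd(p_b), dims=[1])     # re-aligned backward prediction
        y = self.w_f * y_f + self.w_b * y_b               # pooled bidirectional prediction
        return y_f, y_b, y                                # three training objectives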
To show that DBRNN is more robust to previous erroneous predictions than one-directional models, we compare the cross entropy of them as follows: DISPLAYFORM5 where K is the total number of classes (e.g. the size of vocabulary for language tasks). p t is the ground truth distribution which is an one-hot vector such that: p tk = I(p tk = k), ∀k ∈ 1,..., K, where I is the indicator function, k is the ground truth label of the tth output.p t is the predicted distribution. From Eq. 26, l can be further expressed as: DISPLAYFORM6 DISPLAYFORM7 We can pick W It is worthwhile to compare the DBRNN and the solution in BID5. Both of them have a bidirectional design for the output. However, there exist three main differences. First, the DBRNN is a general design for the sequence-in-sequence-out (SISO) problem without being restricted to dependency parsing. The target sequences in trainingŶ f t,Ŷ b t andŶ t are the same for the DBRNN. In contrast, the solution in BID5 has different target sequences. Second, the attention mechanism is used by BID5 but not in the DBRNN. Third, The encoder-decoder design is adopted by in BID5 but not in the DBRNN. We conduct experiments on two problems: part of speech (POS) tagging and dependency parsing (DP). The POS tagging task is an easy one which requires shorter memory while the DP task needs much longer memory and has more complex relations between the input and the output. In the experiments, we compare the performance of five RNN models under two scenarios: 1) I t = X t, and 2) I The training dataset used for both problems are from the Universal Dependency 2.0 English branch (UD-English). It contains 12543 sentences and 14985 unique tokens. The test dataset for both experiments is from the test English branch (gold, en.conllu) of CoNLL 2017 shared task development and test data. In the experiment, the lengths of the input and the target sequences are fixed. Sequences longer than the maximum length will be truncated. If the sequence is shorter than the fixed length, a special pad symbol will be used to pad the sequence. Similar technique called bucketing is also used for some popular models such as BID15. The input to the POS tagging and the DP problems are the stemmed and lemmatized sequences (column 3 in CoNLL-U format). The target sequence for POS tagging is the universal POS tag (column 4). The target sequence for DP is the interleaved dependency relation to the headword (relation, column 8) and its position (column 7). As a , the length of the actual target sequence (rather than the preprocessed fixed-length sequence) for DP is twice of the length of the actual input sequence. The input is first fed into a trainable embedding layer BID4 before it is sent to the actual network. TAB1 shows the detailed network and training specifications. It is important to point out that we do not finetune network parameters or apply any engineering trick for the best possible performance since our main goal is to compare the performance of the LSTM, GRU, ELSTM-I and ELSTM-II four cells under various macro-models. The of the POS tagging problem with I t = X t and I BID14 88.49 82.84 79.14 Seq2seq BID15 25.83 24.87 31.43 Seq2seq with Attention BID16 27 The of the DP problem with I t = X t and I TAB5, respectively. The ELSTM-I and ELSTM-II cells perform better than the LSTM and the GRU cells. Among all possible combinations, the sequence-to-sequence with attention combined with ELSTM-I has the best performance. 
It has an accuracy of 60.19% and 66.72% for the former and the latter, respectively. Also, the basic RNN often outperforms BRNN for the DP problem as shown in TAB5. This can be explained by that the basic RNN can access the entire input sequence when predicting the latter half of the output sequence since the target sequence is twice as long as the input. The other reason is that the BRNN can easily overfit when predicting the headword position. We see from Tables 3 -6 that the two DBRNN models outperform both BRNN and sequence-tosequence (without attention) in both POS tagging and DP problems regardless of used cells. This shows the superiority of introducing the expert opinion pooling from both the input and the predicted output. DISPLAYFORM0 Furthermore, the proposed ELSTM-I and ELSTM-II outperform the LSTM and the GRU by a significant margin for complex language tasks. This demonstrates that the scaling factor in the ELSTM-I and the ELSTM-II does help the network retain longer memory with better attention. ELSTMs even outperform BID5, which is designed specifically for DP. For the POS tagging problem, the ELSTM-I and the ELSTM-II do not perform as well as the GRU or the LSTM. This is probably due to the shorter memory requirement of this simple task. The ELSTM cells are over-parameterized and, as a , they converge slower and tend to overfit the training data. The ELSTM-I and the ELSTM-II perform particularly well for sequence-to-sequence (with and without attention) model. The hidden state c t of the ELSTMs is more expressive in representing patterns over a longer distance. Since the sequence-to-sequence design relies on the expressive power of the hidden state, the ELSTMs do have an advantage. We compare the convergence behavior of I t = X t and I DISPLAYFORM1 with the LSTM, the ELSTM-I and the ELSTM-II cells for the DP problem in FIG6. We see that the ELSTM-I and the ELSTM-II do not behave very differently between I t = X t and I The memory decay behavior of the LSTM and the GRU was investigated and explained by mathematical analysis. Although the memory of the LSTM and the GRU fades slower than that of the SRN, it may not be long enough for complicated language tasks such as dependency parsing. To enhance the memory length, two cells called the ELSTM-I and the ELSTM-II were proposed. Furthermore, we introduced a new RNN model called the DBRNN that has the merits of both the BRNN and the encoder-decoder. It was shown by experimental that the ELSTM-I and ELSTM-II outperforms other designs by a significant margin for complex language tasks. The DBRNN design is superior to BRNN as well as sequence-to-sequence models for both simple and complex language tasks. There are interesting issues to be further explored. For example, is the ELSTM cell also helpful in more sophisticated RNN models such as the deep RNN? Is it possible to make the DBRNN deeper and better? They are left for future study.
A recurrent neural network cell with extended-long short-term memory and a multi-task RNN model for sequence-in-sequence-out problems
779
scitldr
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive . Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times. Training a neural network to model a given dataset entails several steps. First, the network designer chooses a loss function and a network architecture for a given dataset. The architecture is then initialized by populating its weights with random values drawn from some distribution. Finally, the network is trained by adjusting its weights to produce a loss as low as possible. We can think of the training procedure as traversing some path along an objective landscape. Note that as soon as a dataset and network architecture are specified, the landscape in its entirety is completely determined. It is instantiated and frozen; all subsequent parameter initialization, forward and backward propagation, and gradient steps taken by an optimizer are just details of how the frozen space is explored. Consider a network parameterized by D weights. We can picture its associated objective landscape as a set of "hills and valleys" in D dimensions, where each point in R D corresponds to a value of the loss, i.e., the elevation of the landscape. If D = 2, the map from two coordinates to one scalar loss can be easily imagined and intuitively understood by those living in a three-dimensional world with similar hills. However, in higher dimensions, our intuitions may not be so faithful, and generally we must be careful, as extrapolating low-dimensional intuitions to higher dimensions can lead to unreliable . The difficulty of understanding high-dimensional landscapes notwithstanding, it is the lot of neural network researchers to spend their efforts leading (or following?) networks over these multi-dimensional surfaces. Therefore, any interpreted geography of these landscapes is valuable. 
Several papers have shed valuable light on this landscape, particularly by pointing out flaws in common extrapolation from low-dimensional reasoning. BID4 showed that, in contrast to conventional thinking about getting stuck in local optima (as one might be stuck in a valley in our familiar D = 2), local critical points in high dimension are almost never valleys but are instead saddlepoints: structures which are "valleys" along a multitude of dimensions with "exits" in a multitude of other dimensions. The striking is that one has less to fear becoming hemmed in on all sides by higher loss but more to fear being waylaid nearly indefinitely by nearly flat regions. BID9 showed another property: that paths directly from the initial point to the final point of optimization are often monotonically decreasing. Though dimension is high, the space is in some sense simpler than we thought: rather than winding around hills and through long twisting corridors, the walk could just as well have taken a straight line without encountering any obstacles, if only the direction of the line could have been determined at the outset. In this paper we seek further understanding of the structure of the objective landscape by restricting training to random slices through it, allowing optimization to proceed in randomly generated subspaces of the full parameter space. Whereas standard neural network training involves computing a gradient and taking a step in the full parameter space (R D above), we instead choose a random d-dimensional subspace of R D, where generally d < D, and optimize directly in this subspace. By performing experiments with gradually larger values of d, we can find the subspace dimension at which solutions first appear, which we call the measured intrinsic dimension of a particular problem. Examining intrinsic dimensions across a variety of problems leads to a few new intuitions about the optimization problems that arise from neural network models. We begin in Sec. 2 by defining more precisely the notion of intrinsic dimension as a measure of the difficulty of objective landscapes. In Sec. 3 we measure intrinsic dimension over a variety of network types and datasets, including MNIST, CIFAR-10, ImageNet, and several RL tasks. Based on these measurements, we draw a few insights on network behavior, and we conclude in Sec. 4. We introduce the intrinsic dimension of an objective landscape with an illustrative toy problem. Let θ (D) ∈ R D be a parameter vector in a parameter space of dimension D, let θ (D) 0 be a randomly chosen initial parameter vector, and let θ (D) * be the final parameter vector arrived at via optimization. Consider a toy optimization problem where D = 1000 and where θ (D) optimized to minimize a squared error cost function that requires the first 100 elements to sum to 1, the second 100 elements to sum to 2, and so on until the vector has been divided into 10 groups with their requisite 10 sums. We may start from a θ Solutions to this problem are highly redundant. With a little algebra, one can find that the manifold of solutions is a 990 dimensional hyperplane: from any point that has zero cost, there are 990 orthogonal directions one can move and remain at zero cost. 
Denoting as s the dimensionality of the solution set, we define the intrinsic dimensionality d int of a solution as the codimension of the solution set inside of R D: DISPLAYFORM0 Here the intrinsic dimension d int is 10 (1000 = 10 + 990), with 10 corresponding intuitively to the number of constraints placed on the parameter vector. Though the space is large (D = 1000), the number of things one needs to get right is small (d int = 10). The above example had a simple enough form that we obtained d int = 10 by calculation. But in general we desire a method to measure or approximate d int for more complicated problems, including problems with data-dependent objective functions, e.g. neural network training. Random subspace optimization provides such a method. Standard optimization, which we will refer to hereafter as the direct method of training, entails evaluating the gradient of a loss with respect to θ (D) and taking steps directly in the space of θ (D). To train in a random subspace, we instead define θ (D) in the following way: DISPLAYFORM0 where P is a randomly generated D × d projection matrix 1 and θ (d) is a parameter vector in a gen- DISPLAYFORM1 and P are randomly generated and frozen (not trained), so the system has only d degrees of freedom. We initialize θ (d) to a vector of all zeros, so initially DISPLAYFORM2 0. This convention serves an important purpose for neural network training: it allows the network to benefit from beginning in a region of parameter space designed by any number of good initialization schemes BID8 BID11 to be well-conditioned, such that gradient descent via commonly used optimizers will tend to work well. Training proceeds by computing gradients with respect to θ (d) and taking steps in that space. Columns of P are normalized to unit length, so steps of unit length in θ (d) chart out unit length motions of θ (D). Columns of P may also be orthogonalized if desired, but in our experiments we relied simply on the approximate orthogonality of high dimensional random vectors. By this construction P forms an approximately orthonormal basis for a randomly oriented d dimensional subspace of R D, with the origin of the new coordinate system at θ0. FIG1 (left and middle) shows an illustration of the related vectors. Consider a few properties of this training approach. If d = D and P is a large identity matrix, we recover exactly the direct optimization problem. If d = D but P is instead a random orthonormal basis for all of R D (just a random rotation matrix), we recover a rotated version of the direct problem. Note that for some "rotation-invariant" optimizers, such as SGD and SGD with momentum, rotating the basis will not change the steps taken nor the solution found, but for optimizers with axis-aligned assumptions, such as RMSProp and Adam BID16, the path taken through θ (D) space by an optimizer will depend on the rotation chosen. Finally, in the general case where d < D and solutions exist in D, solutions will almost surely (with probability 1) not be found if d is less than the codimension of the solution. On the other hand, when d ≥ D −s, if the solution set is a hyperplane, the solution will almost surely intersect the subspace, but for solution sets of arbitrary topology, intersection is not guaranteed. Nonetheless, by iteratively increasing d, re-running optimization, and checking for solutions, we obtain one estimate of d int. 
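The reparameterization above translates almost directly into code. The sketch below (PyTorch, illustrative sizes, and a placeholder objective that a reader would replace with a real network evaluated at theta_D) trains only theta_d while theta_0 and P stay frozen; note that a dense P needs O(Dd) memory, a scaling issue the text returns to later.

import torch

D, d = 10_000, 100                               # illustrative sizes only
torch.manual_seed(0)

theta_0 = torch.randn(D)                         # frozen initialization in the full parameter space
P = torch.randn(D, d)
P = P / P.norm(dim=0, keepdim=True)              # unit-length, approximately orthogonal columns; frozen
theta_d = torch.zeros(d, requires_grad=True)     # the only trainable parameters; training starts exactly at theta_0

def full_params():
    return theta_0 + P @ theta_d                 # theta^(D) = theta_0^(D) + P theta^(d)

optimizer = torch.optim.SGD([theta_d], lr=0.1)
for step in range(100):
    theta_D = full_params()
    loss = (theta_D ** 2).mean()                 # placeholder; substitute the network loss evaluated at theta_D
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()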
We try this sweep of d for our toy problem laid out in the beginning of this section, measuring (by convention as described in the next section) the positive performance (higher is better) instead of loss.3 As expected, the solutions are first found at d = 10 (see FIG1, right), confirming our intuition that for this problem, d int = 10. In the rest of this paper, we measure intrinsic dimensions for particular neural network problems and draw about the associated objective landscapes and solution sets. Because modeling real data is more complex than the above toy example, and losses are generally never exactly zero, we first choose a heuristic for classifying points on the objective landscape as solutions vs. non-solutions. The heuristic we choose is to threshold network performance at some level relative to a baseline model, where generally we take as baseline the best directly trained model. In supervised classification settings, validation accuracy is used as the measure of performance, and in reinforcement learning scenarios, the total reward (shifted up or down such that the minimum reward is 0) is used. Accuracy and reward are preferred to loss to ensure are grounded to real-world performance and to allow comparison across models with differing scales of loss and different amounts of regularization included in the loss. We define d int100 as the intrinsic dimension of the "100%" solution: solutions whose performance is statistically indistinguishable from baseline solutions. However, when attempting to measure d int100, we observed it to vary widely, for a few confounding reasons: d int100 can be very high -nearly as high as D -when the task requires matching a very well-tuned baseline model, but can drop significantly when the regularization effect of restricting parameters to a subspace boosts performance by tiny amounts. While these are interesting effects, we primarily set out to measure the basic difficulty of problems and the degrees of freedom needed to solve (or approximately solve) them rather than these subtler effects. Thus, we found it more practical and useful to define and measure d int90 as the intrinsic dimension of the "90%" solution: solutions with performance at least 90% of the baseline. We chose 90% after looking at a number of dimension vs. performance plots (e.g. FIG2) as a reasonable trade off between wanting to guarantee solutions are as good as possible, but also wanting measured d int values to be robust to small noise in measured performance. If too high a threshold is used, then the dimension at which performance crosses the threshold changes a lot for only tiny changes in accuracy, and we always observe tiny changes in accuracy due to training noise. If a somewhat different (higher or lower) threshold were chosen, we expect most of in the rest of the paper to remain qualitatively unchanged. In the future, researchers may find it useful to measure d int using higher or lower thresholds. We begin by analyzing a fully connected (FC) classifier trained on MNIST. We choose a network with layer sizes 784-200-200-10, i.e. a network with two hidden layers of width 200; this in a total number of parameters D = 199, 210. A series of experiments with gradually increasing subspace dimension d produce monotonically increasing performances, as shown in FIG2. By checking the subspace dimension at which performance crosses the 90% mark, we measure this network's intrinsic dimension d int90 at about 750.Some networks are very compressible. 
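Measuring d_int90 then amounts to a sweep over candidate subspace dimensions. A small helper, with names of our choosing, that assumes a function train_in_subspace(d) which trains a fresh model restricted to a random d-dimensional subspace and returns its validation accuracy:

def measure_dint90(train_in_subspace, baseline_accuracy,
                   dims=(10, 50, 100, 200, 400, 750, 1000, 1500)):
    # smallest subspace dimension whose validation accuracy reaches 90% of the direct baseline
    threshold = 0.9 * baseline_accuracy
    for d in sorted(dims):
        if train_in_subspace(d) >= threshold:
            return d
    return None  # threshold never crossed within the sweep; try larger d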
A salient initial is that 750 is quite low. At that subspace dimension, only 750 degrees of freedom (0.4%) are being used and 198,460 (99.6%) unused to obtain 90% of the performance of the direct baseline model. A compelling corollary of this is a simple, new way of creating and training compressed networks, particularly networks for applications in which the absolute best performance is not critical. To store this network, one need only store a tuple of three items: (i) the random seed to generate the frozen θ This compression approach differs from other neural network compression methods in the following aspects. (i) While it has previously been appreciated that large networks waste parameters BID3 and weights contain redundancy BID5 ) that can be exploited for posthoc compression , this paper's method constitutes a much simpler approach to compression, where training happens once, end-to-end, and where any parameterized model is an allowable base model. (ii) Unlike layerwise compression models BID5 ), we operate in the entire parameter space, which could work better or worse, depending on the network. (iii) Compared to methods like that of , who take a Bayesian perspective and consider redundancy on the level of groups of parameters (input weights to a single neuron) by using group-sparsity-inducing hierarchical priors on the weights, our approach is simpler but not likely to lead to compression as high as the levels they attain. (iv) Our approach only reduces the number of degrees of freedom, not the number of bits required to store each degree of freedom, e.g. as could be accomplished by quantizing weights. Both approaches could be combined. (v) There is a beautiful array of papers on compressing networks such that they also achieve computational savings during the forward pass ; subspace training does not speed up execution time during inference. (vi) Finally, note the relationships between weight pruning, weight tying, and subspace training: weight pruning is equivalent to finding, post-hoc, a subspace that is orthogonal to certain axes of the full parameter space and that intersects those axes at the origin. Weight tying, e.g. by random hashing of weights into buckets BID2, is equivalent to subspace training where the subspace is restricted to lie along the equidistant "diagonals" between any axes that are tied together. Robustness of intrinsic dimension. Next, we investigate how intrinsic dimension varies across FC networks with a varying number of layers and varying layer width. 4 We perform a grid sweep of networks with number of hidden layers L chosen from {1, 2, 3, 4, 5} and width W chosen from {50, 100, 200, 400}. FIG7 in the Supplementary Information shows performance vs. subspace dimension plots in the style of FIG2 for all 20 networks, and FIG3 shows each network's d int90 plotted against its native dimension D. As one can see, D changes by a factor of 24.1 between the smallest and largest networks, but d int90 changes over this range by a factor of only 1.33, with much of this possibly due to noise. Thus it turns out that the intrinsic dimension changes little even as models grown in width or depth! The striking is that every extra parameter added to the network -every extra dimension added to D -just ends up adding one dimension to the redundancy of the solution, s. 
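The compression idea described above can be made concrete as follows. The original text truncates the list of stored items, so the sketch below assumes the stored tuple is (a random seed used to regenerate theta_0 and P, and the learned theta_d); these names and that assumption are ours.

import numpy as np

def compress(seed, theta_d):
    # the stored "network" is just a seed plus d learned floats
    return {"seed": seed, "theta_d": np.asarray(theta_d, dtype=np.float32)}

def reconstruct(blob, D):
    rng = np.random.RandomState(blob["seed"])
    d = blob["theta_d"].shape[0]
    theta_0 = rng.randn(D).astype(np.float32)        # regenerate the frozen initialization
    P = rng.randn(D, d).astype(np.float32)           # regenerate the frozen projection
    P /= np.linalg.norm(P, axis=0, keepdims=True)
    return theta_0 + P @ blob["theta_d"]             # full D-dimensional weight vector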
Often the most accurate directly trained models for a problem have far more parameters than needed ; this may be because they are just easier to train, and our observation suggests a reason why: with larger models, solutions have greater redundancy and in a sense "cover" more of the space.5 To our knowledge, this is the first time this phenomenon has been directly measured. We should also be careful not to claim that all FC nets on MNIST will have an intrinsic dimension of around 750; instead, we should just consider that we have found for this architecture/dataset combination a wide plateau of hyperparamter space over which intrinsic dimension is approximately constant. Are random subspaces really more parameter-efficient for FC nets? One might wonder to what extent claiming 750 parameters is meaningful given that performance achieved (90%) is far worse than a state of the art network trained on MNIST. With such a low bar for performance, could a directly trained network with a comparable number of trainable parameters be found that achieves the same performance? We generated 1000 small networks (depth randomly chosen from {1, 2, 3, 4, 5}, layer width randomly from {2, 3, 5, 8, 10, 15, 20, 25}, seed set randomly) in an attempt to find high-performing, small FC networks, but as FIG4 (left) shows, a gap still exists between the subspace dimension and the smallest direct FC network giving the same performance at most levels of performance. Measuring d int90 on a convolutional network. Next we measure d int90 of a convolutional network, LeNet (D=44,426). FIG2 (right) shows validation accuracy vs. subspace dimension d, and we find d int90 = 290, or a compression rate of about 150× for this network. As with the FC case above, we also do a sweep of random networks, but notice that the performance gap of convnets between direct and subspace training methods becomes closer for fixed budgets, i.e., the number of trainable parameters. Further, the performance of direct training varies significantly, depending on the extrinsic design of convet architectures. We interpret these in terms of the Minimum Description Length below. Relationship between Intrinsic Dimension and Minimum Description Length (MDL). As discussed earlier, the random subspace training method leads naturally to a compressed representation of a network, where only d floating point numbers need to be stored. We can consider this d as an upper bound on the MDL of the problem solution. 6 We cannot yet conclude the extent to which this bound is loose or tight, and tightness may vary by problem. However, to the extent that it is tighter than previous bounds (e.g., just the number of parameters D) and to the extent that it is correlated with the actual MDL, we can use this interpretation to judge which solutions are more well-suited to the problem in a principled way. As developed by and further by BID12, holding accuracy constant, the best model is the one with the shortest MDL.Thus, there is some rigor behind our intuitive assumption that LeNet is a better model than an FC network for MNIST image classification, because its intrinsic dimension is lower (d int90 of 290 vs. 750). In this particular case we are lead to a predictable , but as models become larger, more complex, and more heterogeneous, of this type will often not be obvious. 
Having a simple method of approximating MDL may prove extremely useful for guiding model exploration, for example, for the countless datasets less well-studied than MNIST and for models consisting of separate sub-models that may be individually designed and evaluated BID11 BID14. In this latter case, considering the MDL for a sub-model could provide a more detailed view of that sub-model's properties than would be available by just analyzing the system's overall validation performance. Finally, note that although our approach is related to a rich body of work on estimating the "intrinsic dimension of a dataset" BID1 BID15 BID6; ), it differs in a few respects. Here we do not measure the number of degrees of freedom necessary to represent a dataset (which requires representation of a global p(X) and per-example properties and thus grows with the size of the dataset), but those required to represent a model for part of the dataset (here p(y|X), which intuitively might saturate at some complexity even as a dataset grows very large). That said, in the following section we do show measurements for a corner case where the model must memorize per-example properties. Are convnets always better on MNIST? Measuring d int90 on shuffled data. Zhang et al. FORMULA0 provocatively showed that large networks normally thought to generalize well can nearly as easily be trained to memorize entire training sets with randomly assigned labels or with input pixels provided in random order. Consider two identically sized networks: one trained on a real, non-shuffled dataset and another trained with shuffled pixels or labels. As noted by , externally the networks are very similar, and the training loss may even be identical at the final epoch. However, the intrinsic dimension of each may be measured to expose the differences in problem difficulty. When training on a dataset with shuffled pixels -pixels for each example in the dataset subject to a random permutation, chosen once for the entire dataset -the intrinsic dimension of an FC network remains the same at 750, because FC networks are invariant to input permutation. But the intrinsic dimension of a convnet increases from 290 to 1400, even higher than an FC network. Thus while convnets are better suited to classifying digits given images with local structure, when this structure is removed, violating convolutional assumptions, our measure can clearly reveal that many more degrees of freedom are now required to model the underlying distribution. When training on MNIST with shuffled labels -the label for each example is randomly chosen -we redefine our measure of d int90 relative to training accuracy (validation accuracy is always at chance). We find that memorizing random labels on the 50,000 example MNIST training set requires a very high dimension, d int90 = 190, 000, or 3.8 floats per memorized label. Sec. S5.2 gives a few further , in particular that the more labels are memorized, the more efficient memorization is in terms of floats per label. Thus, while the network obviously does not generalize to an unseen validation set, it would seem "generalization" within a training set may be occurring as the network builds a shared infrastructure that makes it possible to more efficiently memorize labels. We scale to larger supervised classification problems by considering CIFAR-10 and ImageNet . 
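The two shuffled-MNIST variants used above are simple to construct; a NumPy sketch (X is an array of flattened images, y the label vector; names are ours):

import numpy as np

def shuffle_pixels(X, seed=0):
    # one fixed permutation of pixel positions, applied identically to every image (train and test)
    rng = np.random.RandomState(seed)
    perm = rng.permutation(X.shape[1])
    return X[:, perm]

def shuffle_labels(y, num_classes=10, seed=0):
    # independent random labels: any fit on these is pure memorization
    rng = np.random.RandomState(seed)
    return rng.randint(0, num_classes, size=len(y))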
When scaling beyond MNIST-sized networks with D on the order of 200k and d on the order of 1k, we find it necessary to use more efficient methods of generating and projecting from random subspaces. This is particularly true in the case of ImageNet, where the direct network can easily require millions of parameters. In Sec. S7, we describe and characterize scaling properties of three methods of projection: dense matrix projection, sparse matrix projection , and the remarkable Fastfood transform . We generally use the sparse projection method to train networks on CIFAR-10 and the Fastfood transform for ImageNet. Measuring intrinsic dimension allows us to perform some comparison across the divide between supervised learning and reinforcement learning. In this section we measure the intrinsic dimension of three control tasks of varying difficulties using both value-based and policy-based algorithms. The value-based algorithm we evaluate is the Deep Q-Network (DQN) , and the policy-based algorithm is Evolutionary Strategies (ES) . Training details are given in Sec. S6.2. For all tasks, performance is defined as the maximum-attained (over training iterations) mean evaluation reward (averaged over 30 evaluations for a given parameter setting). In horizontal line). A dot is darkened signifying the first d that allows a satisfactory performance. We find that the inverted pendulum task is surprisingly easy, with d int100 = d int90 = 4, meaning that only four parameters are needed to perfectly solve the problem (see for a similarly small solution found via evolution). The walking humanoid task is more difficult: solutions are found reliably by dimension 700, a similar complexity to that required to model MNIST with an FC network, and far less than modeling CIFAR-10 with a convnet. Finally, to play Pong on Atari (directly from pixels) requires a network trained in a 6k dimensional subspace, making it on the same order of modeling CIFAR-10. For an easy side-by-side comparison we list all intrinsic dimension values found for all problems in In this paper, we have defined the intrinsic dimension of objective landscapes and shown a simple method -random subspace training -of approximating it for neural network modeling problems. We use this approach to compare problem difficulty within and across domains. We find in some cases the intrinsic dimension is much lower than the direct parameter dimension, and hence enable network compression, and in other cases the intrinsic dimension is similar to that of the best tuned models, and suggesting those models are better suited to the problem. Further work could also identify better ways of creating subspaces for reparameterization: here we chose random linear subspaces, but one might carefully construct other linear or non-linear subspaces to be even more likely to contain solutions. Finally, as the field departs from single stackof-layers image classification models toward larger and more heterogeneous networks BID11 BID14 often composed of many modules and trained by many losses, methods like measuring intrinsic dimension that allow some automatic assessment of model components might provide much-needed greater understanding of individual black-box module properties. In the main paper, we attempted to find d int90 across 20 FC networks with various depths and widths. A grid sweep of number of hidden layers from {1,2,3,4,5} and width of each hidden layer from {50,100,200,400} is performed, and all 20 plots are shown in FIG7. 
For each d we take 3 runs and plot the mean and variance with blue dots and blue error bars. d int90 is indicated in plots (darkened blue dots) by the dimension at which the median of the 3 runs passes 90% performance threshold. The variance of d int90 is estimated using 50 bootstrap samples. Note that the variance of both accuracy and measured d int90 for a given hyper-parameter setting are generally small, and the mean of performance monotonically increases (very similar to the single-run ) as d increases. This illustrates that the difference between lucky vs. unlucky random projections have little impact on the quality of solutions, while the subspace dimensionality has a great impact. We hypothesize that the variance due to different P matrices will be smaller than the variance due to different random initial parameter vectors θ 0, and aspects of the network depending on smaller numbers of random samples will exhibit greater variance. Hence, in some other experiments we rely on single runs to estimate the intrinsic dimension, though slightly more accurate estimates could be obtained via multiple runs. In similar manner to the above, in FIG8 we show the relationship between d int90 and D across 20 networks but using a per-model, directly trained baseline. Most baselines are slightly below 100% accuracy. This is in contrast to FIG3, which used a simpler global baseline of 100% across all models. Results are qualitatively similar but with slightly lower intrinsic dimension due to slightly lower thresholds. Two kinds of shuffled MNIST datasets are considered:• The shuffled pixel dataset: the label for each example remains the same as the normal dataset, but a random permutation of pixels is chosen once and then applied to all images in the training and test sets. FC networks solve the shuffled pixel datasets exactly as easily as the base dataset, because there is no privileged ordering of input dimension in FC networks; all orderings are equivalent.• The shuffled label dataset: the images remain the same as the normal dataset, but labels are randomly shuffled for the entire training set. Here, as in , we only evaluate training accuracy, as test set accuracy remains forever at chance level (the training set X and y convey no information about test set p(y|X), because the shuffled relationship in test is independent of that of training).On the full shuffled label MNIST dataset (50k images), we trained an FC network (L = 5, W = 400, which had d int90 = 750 on standard MNIST), it yields d int90 = 190k. We can interpret this as requiring 3.8 floats to memorize each random label (at 90% accuracy). Wondering how this scales with dataset size, we estimated d int90 on shuffled label versions of MNIST at different scales and found curious , shown in TAB4 and Fig. S8. As the dataset memorized becomes smaller, the number of floats required to memorize each label becomes larger. Put another way, as dataset size Figure S8: Training accuracy vs. subspace dimension d for a FC networks (W =400, L=5) trained on a shuffled label version of MNIST containing 100%, 50%, and 10% of the dataset.increases, the intrinsic dimension also increases, but not as fast as linearly. The best interpretationis not yet clear, but one possible interpretation is that networks required to memorize large training sets make use of shared machinery for memorization. In other words, though performance does not generalize to a validation set, generalization within a training set is non-negligible even though labels are random. 
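The bootstrap estimate of the variance of d_int90 mentioned above is not spelled out in detail; one plausible reading, resampling the per-dimension runs and recomputing the crossing point each time, is sketched below (names and details are ours).

import numpy as np

def bootstrap_dint90(dims, accs_per_dim, threshold, n_boot=50, seed=0):
    # dims: increasing subspace dimensions; accs_per_dim: list of accuracy arrays (e.g. 3 runs per dimension)
    rng = np.random.RandomState(seed)
    estimates = []
    for _ in range(n_boot):
        medians = [np.median(rng.choice(a, size=len(a), replace=True)) for a in accs_per_dim]
        crossed = [d for d, m in zip(dims, medians) if m >= threshold]
        estimates.append(min(crossed) if crossed else np.nan)
    return np.nanmean(estimates), np.nanstd(estimates)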
An interesting tangential observation is that random subspace training can in some cases make optimization more stable. First, it helps in the case of deeper networks. FIG9 shows training for FC networks with up to 10 layers. SGD with step 0.1, and ReLUs with He initialization is used. Multiple networks failed at depths 4, and all failed at depths higher than 4, despite the activation function and initialization designed to make learning stable BID11. Second, for MNIST with shuffled labels, we noticed that it is difficult to reach high training accuracy using the direct training method with SGD, though both subspace training with SGD and either type of training with Adam reliably reach 100% memorization as d increases (see Fig. S8).Because each random basis vector projects across all D direct parameters, the optimization problem may be far better conditioned in the subspace case than in the direct case. A related potential downside is that projecting across D parameters which may have widely varying scale could in ignoring parameter dimensions with tiny gradients. This situation is similar to that faced by methods like SGD, but ameliorated by RMSProp, Adam, and other methods that rescale per-dimension step sizes to account for individual parameter scales. Though convergence of the subspace approach seems robust, further work may be needed to improve network amenability to subspace training: for example by ensuring direct parameters are similarly scaled by clever initialization or by inserting a pre-scaling layer between the projected subspace and the direct parameters themselves. Another finding through our experiments with MNIST FC networks has to do with the role of optimizers. The same set of experiments are run with both SGD (learning rate 0.1) and ADAM (learning rate 0.001), allowing us to investigate the impact of stochastic optimizers on the intrinsic dimension achieved. The intrinsic dimension d int90 are reported in FIG1 (a)(b). In addition to two optimizers we also use two baselines: Global baseline that is set up as 90% of best performance achieved across all models, and individual baseline that is with regards to the performance of the same model in direct training. DQN on Cartpole We start with a simple classic control game CartPole−v0 in OpenAI Gym BID0. A pendulum starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of LEFT or RIGHT to the cart. The full game ends when one of two failure conditions is satisfied: the cart moves more than 2.4 units from the center (where it started), or the pole is more than 15 degrees from vertical (where it started). A reward of +1 is provided for every time step as long as the game is going. We further created two easier environments Pole and Cart, each confined by one of the failure modes only. A DQN is used, where the value network is parameterized by an FC (L = 2, W = 400). For each subspace d at least 5 runs are conducted, the mean of which is used to computed d int90, and the baseline is set as 195.0 7. The are shown in FIG1. The solid line connects mean rewards within a run over the last 100 episodes, across different ds. Due to the noise-sensitiveness of RL games the course is not monotonic any more. The intrinsic dimension for CartPole, Pole and Cart is d int90 = 25, 23 and 7, respectively. 
This reveals that the difficulty of optimization landscape of these games is remarkably low, as well as interesting insights such as driving a cart is much easier than keeping a pole straight, the latter being the major cause of difficulty when trying to do both. We carry out with ES 3 RL tasks: InvertedPendulum−v1, Humanoid−v1, Pong−v0. The hyperparameter settings for training are in Table S3.Inverted pendulum The InvertedPendulum−v1 environment uses the MuJoCo physics simulator to instantiate the same problem as CartPole−v0 in a realistic setting. We expect that even with richer environment dynamics, as well as a different RL algorithm -ESthe intrinsic dimensionality should be similar. As seen in Fig. 5, the measured intrinsic dimensionality d int90 = 4 is of the same order of magnitude, but smaller. Interestingly, although the environment FIG3 and FIG8, because in the former we average over three runs, and in the latter we show one run each for all optimization methods. DISPLAYFORM0 Figure S11: Subspace training of DQN on CartPole game. Shown as dots are rewards collected through a game run averaged over the last 100 episodes, under each subspace training of DQN, and each game environment. The line connects mean rewards across different ds.dynamics are more complex than in CartPole−v0, using ES rather than DQN seems to induce a simpler objective landscape. Learning to walk A more challenging problem is Humanoid−v1 in MuJoCo simulator. Intuitively, one might believe that learning to walk is a more complex task than classifying images. Our show the contrary -that the learned intrinsic dimensionality of d int90 = 700 is similar to that of MNIST on a fully-connected network (d int90 = 650) but significantly less than that of even a convnet trained on CIFAR-10 (d int90 = 2, 500). Table S3: Hyperparameters used in training RL tasks using ES. σ refers to the parameter perturbation noise used in ES. Default Adam parameters of β 1 = 0.9, β 2 = 0.999, = 1 × 10 −7 were used.begin to see training runs reach the threshold as early as d = 400, with the median performance steadily increasing with d. Atari Pong Finally, using a base convnet of approximately D = 1M in the Pong−v0 pixels-toactions environment (using 4-frame stacking). The agent receives an image frame (size of 210 × 160 × 3) and the action is to move the paddle UP or DOWN. We were able to determine d int90 = 6, 000. Scaling the random subspace training procedure to large problems requires an efficient way to map from R d into a random d-dimensional subspace of R D that does not necessarily include the origin. Algebraically, we need to left-multiply a vector of parameters v ∈ R d by a random matrix M ∈ R D×d, whose columns are orthonormal, then add an offset vector θ 0 ∈ R D. If the low-dimensional parameter vector in R d is initialized to zero, then specifying an offset vector is equivalent to choosing an initialization point in the original model parameter space R D.A naïve approach to generating the random matrix M is to use a dense D × d matrix of independent standard normal entries, then scale each column to be of length 1. The columns will be approximately orthogonal if D is large because of the independence of the entries. Although this approach is sufficient for low-rank training of models with few parameters, we quickly run into scaling limits because both matrix-vector multiply time and storage of the matrix scale according to O(Dd). 
We were able to successfully determine the intrinsic dimensionality of MNIST (d=225) using a LeNet (D=44,426), but were unable to increase d beyond 1,000 when applying a LeNet (D=62,006) to CIFAR-10, which did not meet the performance criterion to be considered the problems intrinsic dimensionality. Random matrices need not be dense for their columns to be approximately orthonormal. In fact, a method exists for "very sparse" random projections , which achieves a density of DISPLAYFORM0 To construct the D × d matrix, each entry is chosen to be nonzero with probability DISPLAYFORM1. If chosen, then with equal probability, the entry is either positive or negative with the same magnitude in either case. The density of DISPLAYFORM2 time and space complexity. Implementing this procedure allowed us to find the intrinsic dimension of d=2,500 for CIFAR-10 using a LeNet mentioned above. Unfortunately, when using Tensorflow's SparseTensor implementation we did not achieve the theoretical √ D-factor improvement in time complexity (closer to a constant 10x). Nonzero elements also have a significant memory footprint of 24 bytes, so we could not scale to larger problems with millions of model parameters and large intrinsic dimensionalities. We need not explicitly form and store the transformation matrix. The Fastfood transform was initially developed as an efficient way to compute a nonlinear, high-dimensional feature map φ(x) for a vector x. A portion of the procedure involves implicitly generating a D × d matrix with approximately uncorrelated standard normal entries, using only O(D) space, which can be multiplied by v in O(D log d) time using a specialized method. The method relies on the fact that Hadamard matrices multiplied by Gaussian vectors behave like dense Gaussian matrices. In detail, to implicitly multiply v by a random square Gaussian matrix M with side-lengths equal to a power of two, the matrix is factorized into multiple simple matrices: M = HGΠHB, where B is a random diagonal matrix with entries +-1 with equal probability, H is a Hadamard matrix, Π is a random permutation matrix, and G is a random diagonal matrix with independent standard normal entries. Multiplication by a Hadamard matrix can be done via the Fast Walsh-Hadamard Transform In practice, the reduction in space footprint allowed us to scale to much larger problems, including the Pong RL task using a 1M parameter convolutional network for the policy function. FIG4 compares the computational time for direct and subspace training (various projections) methods for each update. Our subspace training is more computational expensive, because the subspace training method has to propagate the signals through two modules: the layers of neural networks, and the projection between two spaces. The direct training only propagates signals in the layers of neural networks. We have made efforts to reduce the extra computational cost. For example, the sparse projection less than doubles the time cost for a large range of subspace dimensions. We consider the CIFAR-10 dataset and test the same set of FC and LeNet architectures as on MNIST. For FC networks, d int90 values for all 20 networks are shown in FIG1 plotted against the native dimension D of each network; D changes by a factor of 12.16 between the smallest and largest networks, but d int90 changes over this range by a factor of 5.0. However, much of this change is due to change of baseline performance. 
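Returning to the "very sparse" projection described above, it can be built with SciPy. The magnitude constant for the nonzero entries is lost behind the DISPLAYFORM placeholders, so the sketch below simply draws signs and rescales each column to unit length afterwards, consistent with the column normalization used for P; treat it as an approximation of the cited construction.

import numpy as np
import scipy.sparse as sp

def sparse_projection(D, d, seed=0):
    rng = np.random.RandomState(seed)
    density = 1.0 / np.sqrt(D)                          # expected fraction of nonzero entries
    signs = lambda n: rng.choice([-1.0, 1.0], size=n)   # nonzero entries are +/- with equal probability
    P = sp.random(D, d, density=density, format="csr", random_state=rng, data_rvs=signs)
    col_norms = np.sqrt(np.asarray(P.power(2).sum(axis=0)).ravel())
    col_norms[col_norms == 0] = 1.0
    return P @ sp.diags(1.0 / col_norms)                # unit-length columns; multiply by theta_d as before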
In FIG1, we instead compute the intrinsic dimension with respect to a global baseline: 50% validation accuracy. d int90 changes over this range by a factor of 1.52. This indicates that various FC networks share similar intrinsic dimension (d int90 = 5000 ∼ 8000) to achieve the same level of task performance. For LeNet (D = 62, 006), the validation accuracy vs. subspace dimension d is shown in FIG1, the corresponding d int90 = 2900. It yields a compression rate of 5%, which is 10 times larger than LeNet on MNIST. It shows that CIFAR-10 images are significantly more difficult to be correctly classified than MNIST. In another word, CIFAR-10 is a harder problem than MNIST, especially given the fact that the notion of "problem solved" (baseline performance) is defined as 99% accuracy on MNIST and 58% accuracy on CIFAR-10. On the CIFAR-10 dataset, as d increases, subspace training tends to overfitting; we study the role of subspace training as a regularizer below. ResNet vs. LeNet We test ResNets, compare to LeNet, and find they make efficient use of parameters. We adopt the smallest 20-layer structure of ResNet with 280k parameters, and find out in FIG1 (b) that it reaches LeNet baseline with d int90 = 1000 ∼ 2000 (lower than the d int90 of LeNet), while takes a larger d int90 (20, 000 ∼ 50, 000) to reach reach its own, much higher baseline. The role of regularizers Our subspace training can be considered as a regularization scheme, as it restricts the solution set. We study and compare its effects with two traditional regularizers with an FC network (L=2, W=200) on CIFAR-10 dataset, including 2 penalty on the weights (i.e., weight decay) and Dropout.• 2 penalty Various amount of 2 penalty from {10 −2, 10 −3, 5 × 10 −4, 10 −4, 10 −5, 0} are considered. The accuracy and negative log-likelihood (NLL) are reported in FIG1 (a) (b), respectively. As expected, larger amount of weight decay reduces the gap between training and testing performance for both direct and subspace training methods, and eventually closes the gap (i.e., 2 penalty = 0.01). Subspace training itself exhibits strong regularization ability, especially when d is small, at which the performance gap between training and testing is smaller.• Dropout Various dropout rates from {0.5, 0.4, 0.3, 0.2, 0.1, 0} are considered. The accuracy and NLL are reported in FIG1. Larger dropout rates reduce the gap between training and testing performance for both direct and subspace training methods. When observing testing NLL, subspace training tends to overfit the training dataset less.• Subspace training as implicit regularization Subspace training method performs implicit regularization, as it restricts the solution set. We visualized the testing NLL in FIG1. Subspace training method outperforms direct method when d is properly chosen (when 2 penalty< 5×10 −4, or dropout rate < 0.1), suggesting the potential of this method as a better alternative to traditional regularizers. When d is large, the method also overfits the training dataset. Note that the these methods perform regularization in different ways: weight decay enforces the learned weights concentrating around zeros, while subspace training directly reduces the number of dimensions of the solution space. To investigate even larger problems, we attempted to measure d int90 for an ImageNet classification network. We use a relatively smaller network, SqueezeNet by BID13, with 1.24M parameters. Larger networks suffered from memory issues. A direct training produces Top-1 accuracy of 55.5%. 
We vary the subspace dimension over 50k, 100k, 200k, 500k, and 800k, and record the validation accuracies as shown in FIG1. Training at each subspace dimension takes about 6 to 7 days, distributed across 4 GPUs. Due to limited time, training on ImageNet has not yet produced a reliable estimate for d int90, except that it is over 500k. Since the learned d int90 can be used as a robust measure of the fitness of neural network architectures for specific tasks, we further apply it to understand the contribution of each component of convolutional networks to the image classification task. The convolutional network is a special case of the FC network in two respects: local receptive fields and weight-tying. Local receptive fields force each filter to "look" only at a small, localized region of the image or layer below. Weight-tying enforces that each filter shares the same weights, which reduces the number of learnable parameters. We performed controlled experiments to investigate the degree to which each component contributes. Four variants of LeNet are considered: • Standard LeNet: 6 kernels (5×5) - max-pooling (2×2) - 16 kernels (5×5) - max-pooling (2×2) - 120 FC - 84 FC - 10 FC. • Untied-LeNet: The same architecture as the standard LeNet is employed, except that weights are unshared, i.e., a different set of filters is applied at each patch of the input. For example, in Keras the LocallyConnected2D layer is used in place of the Conv2D layer. • FCTied-LeNet: The same set of filters is applied at every patch of the input, but we break local connections by applying filters to global patches of the input. Assuming the image size is H × H, the architecture is 6 kernels ((2H−1) × (2H−1)) - max-pooling (2×2) - 16 kernels ((H−1) × (H−1)) - max-pooling (2×2) - 120 FC - 84 FC - 10 FC. The padding type is "same". • FC-LeNet: Neither local connections nor tied weights are employed; we mimic LeNet with an FC implementation, using the same number of hidden units as the standard LeNet at each layer. The results are shown in FIG1. We set a crossing-line accuracy (i.e., threshold) for each task and investigate the d int90 needed to achieve it. For MNIST and CIFAR-10, the thresholds are 90% and 45%, respectively. For the above LeNet variants, d int90 = 290, 600, 425, and 2000 on MNIST, and d int90 = 1000, 2750, 2500, and 35000 on CIFAR-10. The experiments show that both tied weights and local connections are important to the model. That tied weights should matter seems sensible; one might also have expected models with maximal convolutions (convolutions covering the whole image) to have the same intrinsic dimension as models with smaller convolutions, but this turns out not to be the case. The result is that convnets are more efficient than FC nets both due to local connectivity and due to weight tying. We summarize the d int90 of the objective landscape for all problems and neural network architectures in TAB7 and FIG2, where "SP" indicates shuffled pixels, "SL" shuffled labels, and "FC-5" a 5-layer FC network. d int90 indicates the minimum number of dimensions of trainable parameters required to properly solve the problem, and thus reflects the difficulty level of problems. FIG2 (caption): Intrinsic dimension of the objective landscapes created by all combinations of dataset and network we tried in this paper; the problems covered are CartPole-FC, Pole-FC, Cart-FC, InvertedPendulum-FC, Humanoid-FC, Atari Pong-ConvNet, and ImageNet-SqueezeNet.
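As a concrete illustration of the first two LeNet variants described above, the sketch below builds the standard LeNet and the Untied-LeNet in Keras following the listed layer sizes. LocallyConnected2D is the layer named in the text but is only available in older tf.keras releases, so this is a sketch rather than a drop-in script.

from tensorflow import keras
from tensorflow.keras import layers

def lenet(untied=False, input_shape=(28, 28, 1)):
    # untied=True swaps Conv2D for LocallyConnected2D: same receptive fields, no weight sharing across positions
    Conv = layers.LocallyConnected2D if untied else layers.Conv2D
    return keras.Sequential([
        keras.Input(shape=input_shape),
        Conv(6, (5, 5), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        Conv(16, (5, 5), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])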
We train in random subspaces of parameter space to measure how many dimensions are really needed to find a solution.
780
scitldr
Graph Neural Networks as a combination of Graph Signal Processing and Deep Convolutional Networks shows great power in pattern recognition in non-Euclidean domains. In this paper, we propose a new method to deploy two pipelines based on the duality of a graph to improve accuracy. By exploring the primal graph and its dual graph where nodes and edges can be treated as one another, we have exploited the benefits of both vertex features and edge features. As a , we have arrived at a framework that has great potential in both semisupervised and unsupervised learning. Convolutional Neural Networks (CNNs) has been very successfully used for automated feature extraction in Euclidean domains, especially for computer vision, such as 2D image classification, object detection, etc. However, many real-life data has a non-Euclidean graph structure in nature, from which we want to investigate the underlying relations among different objects by utilizing the representation of nodes and edges. Recently, research on applying the generalization of Convolutional Neural Networks to the non-Euclidean domains has attracted growing attention. As a , a branch of research on Geometric Deep Learning based on that has been ignited. Previous works including ChebNet and GCN have demonstrated strong in solving problems in semi-supervised learning where the labels of only a few objects are given, and we want to find out the labels of other objects through their inner connections. Current methods generalizing convolution operations include both spatial and spectral domains . The spatial one deals with each node directly in the vertex domain while the spectral one takes a further step in converting signals via graph Fourier transform into the spectral domain. However, one critical weakness would be the fact that the interchangeable and complementary nature between nodes and edges are generally ignored in previous research. As a , the duality of the graph is not fully utilized. If we treat those edges in the original, or known as the primal graph, as the nodes in the new graph, and original nodes as edges, we can arrive at a new graph that further exploits the benefits of edge features. In such a way, we are able to get both the primal graph and the dual graph . By combining both the vertex features and the edge features, we will be able to solve wider range of problems and achieve better performance. In this paper, we propose a new approach to transform the primal graph into its dual form and have implemented two pipelines based on these two forms of graph to improve the accuracy and the performance. With two pipelines, we also exploited a path to make the model wider instead of merely deeper. Meanwhile, we have developed a new framework that can be applied later on both semi-supervised learning and unsupervised learning. Graph-based semi-supervised learning aims to annotate data from a small amount of label data on a graph. To learn the vectors that can recover the labels of the training data as well as distinguish data with different labels, conventionally, graph Laplacian regularizer gives penalty between sampling based on graph Laplacian matrix (; ;). Sample-based method takes random walk to get samples from the context of data points in order to propagate information (; ;). Graph Convolutional Networks generalize the operation of convolution from grid data to graph data . 
After the emergence of the spectral-based convolutional networks on graph , ChebNet approximate the filters by Chebyshev polynomials according to the Laplacian eigendecomposition. GCN simplifies ChebNet by introducing its first-order approximation and can be viewed as a spatial-based perspective, which requires vertices in the graph to propagate their information to the neighbors. MoNet is a spatial-based method, of which convolution is defined as a Gaussian mixture of the candidates. GAT (Veličković et al. ) applies the attention mechanism to the graph network. DGI (Veličković et al. ) proposes a framework to learn the unsupervised representations on graph-structured data by maximizing location mutual information. We refer to;;; as a more comprehensive and thorough review on graph neural networks. Dual approaches on graph networks usually unlike the above mono-methods, apply mixed methods to study graph networks. DGCN makes a balance between the spatialbased domain and spectral-based domain by regularizing their mutual information. GIN proposes a dual-path from graph convolution on texts and another network on images to gather cross-modal information into a common semantic space. DPGCNN extends the classification on vertices to edges by considering the attention mechanism on both. Our study follows this path, which classifies vertices from the relationship between them (edges) and regularization from the mutual information between classification on both vertices and edges. 3.1 PRELIMINARIES Let G = {V, E, A} denote a graph, where V = {1, . . ., N} is the set of nodes with |V| = N, E is the set of edges, and A = (A (i,j)∈V = 0) ∈ R N ×N is the adjacency matrix. When G is undirected then A is symmetric with A i,j = A j,i, G is an undirected graph, otherwise a directed graph. The Laplacian matrix, also acts a propagation matrix, has the combinatorial form as L = D − A ∈ R N ×N, and its normalized form is N ×N is the degree matrix of graph G with d(i) = j∈V A i,j and I ∈ R N ×N is the identity matrix. In some literature, the random walk Laplacian L rw = I − D −1 A is employed to directed graph G. Let L = U ΛU T be the eigendecomposition, where U ∈ R N ×N is composed of orthonormal eigenbasis and Λ = diag(λ 0, λ 1, . . ., λ N −1) is a diagonal matrix of eigenvalues which denotes frequencies of graph G, and λ i and u i form an eigenpair. The convolutional operator * G on the graph signal x is defined by wheref = U T x andĝ = U T g are regarded as the graph Fourier transform of graph signal x and graph filter g, respectively; f = U (·) is the inverse graph Fourier transform, and is the Hadamard product. Ĝ = diag(ĝ 0, · · ·,ĝ N −1) behaves as spectral filter coefficients. Graph convolution can be approximated by polynomial filters, the k-th order form is where Based on the above approximation, ChebNet further introduces Chebyshev polynomials into graph filters of the convolutional layers for the sake of computational efficiency. Chebyshev polynomials are recursively expressed as T i (x) = 2xT i−1 (x) − T i−2 (x) with T 0 (x) = 1 and T 1 (x) = x. The graph filter then becomes whereL = 2/λ max L − I denotes the scaled normalized Laplacian for all eigenvalues λ i ∈ [−1, 1] and θ i is trainable parameter. Graph Convolutional Network (GCN) is a variant of ChebNet which only takes first two terms of Equation. 
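The Chebyshev recursion above gives a direct way to filter node features without an eigendecomposition. A NumPy sketch, assuming lambda_max = 2 so that the scaled Laplacian is L - I and that at least two coefficients are supplied (names are ours):

import numpy as np

def chebnet_filter(L_norm, X, thetas):
    # computes sum_k theta_k T_k(L_tilde) X with L_tilde = L_norm - I (lambda_max assumed to be 2)
    L_tilde = L_norm - np.eye(L_norm.shape[0])
    T_prev, T_curr = X, L_tilde @ X                  # T_0(L_tilde) X and T_1(L_tilde) X
    out = thetas[0] * T_prev + thetas[1] * T_curr
    for theta in thetas[2:]:
        T_next = 2.0 * (L_tilde @ T_curr) - T_prev   # Chebyshev recursion T_k = 2 x T_{k-1} - T_{k-2}
        out = out + theta * T_next
        T_prev, T_curr = T_curr, T_next
    return out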
By setting the coefficients θ 0 and θ 1 as θ = θ 0 = −θ 1 and with λ max = 2, the convolution operator in convolution layer of GCN is induced as g In graph theory, The definition of the dual varies according to the choice of embedding of the graph G. For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph. In this work, we follow the most common definition. Given a plane graph G = {V, E A}, which is designated as the primal graph, the dual graphĜ = {Ṽ = E,Ẽ,Ã} is a graph that has a vertex (or node) for each edge of G. The dual graphĜ has an edge whenever two edges of G share at least one common vertex. To be clarified, the vertices (i, j) and (j, i) of dual graphĜ converted from a undirected graph are regarded as the same. Fig.1 shows the conversion from primal graph to its dual counterpart. When vertices of the primal graph embed features (or signals in terminology of spectral graph theory), the features of a dual node can be obtained by applying a specified functions to its corresponding primal nodes' features, i.e. the simplest applicable function is to calculate the distance between the features of two nodes. In addition, if the edges of primal graph possess features or attributes, we also take them into account as the their inherited features of dual nodes. Take node of dual graph in Fig.1b) as an example, its feature is obtained by performing the element-wise subtraction to the feature vectors of nodes 0 and 3 of primal graph in Fig.1a), i.e. T − T = [0, −1, 1] T. The Twin Graph Convolutional Networks (TwinGCN) proposed in this work consists of two pipelines. Both pipelines are built with the same architecture as GCN, and contain two convolution layers in each pipeline, as shown in Fig.2. The upper pipeline acts exactly as GCN; however, the lower one takes the dual featuresX derived from primal features X as its inputs (as described in section 3.3), the predictions or outputs in dual vertex domain (i.e. edge domain in primal) is then aggregated to primal vertex domain. The goal of introducing a dual pipeline into the model is that we desire to utilize the predictions on the dual node (edges in primal graph) to affect the predictions on primal nodes since the knowledge about those neighbors of a node can be propagated through edges. For the purpose of training the dual pipeline, we also need to get the labels of dual nodes. Let us take an example, given a dual node (i, j) (corresponds to an edge in primal graph), primal node i has label α and j has label β, then dual node (i, j) is assigned with a label (α, β). One thing worth mentioned is that TwinGCN's convolution layers are not limited to those used in GCN, they can be replaced with other types of convolution layer, such as ChebNet, GWNN , etc. The convolution layers in the pipelines perform graph convolution operations with shared weights as learnable parameters, mathematically expressed as where H (l) is the activation in l-th layer, W (l) is learnable weights in that layer. σ represents nonlinear activation function, e.g. ReLU. For the task of semi-supervised node classification, the loss function is defined as where Y L is set of node labels for L ∈ V labeled node set, F denotes the number of labels of the nodes, and Z is predicted outcome, a softmax of the output of the network. In order to take effect of dual pipeline on the prediction of primal pipeline, we adopt KullbackLeibler Divergence (D KL) as a regularization term in training. 
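The primal-to-dual conversion and the dual node features follow the worked example above (the feature of dual node (0, 3) is the element-wise difference of the features of primal nodes 0 and 3). A small sketch, with names of our choosing:

import itertools
import numpy as np

def build_dual_graph(edges, X):
    # edges: undirected primal edges as (i, j) pairs with i < j; X: (N, F) primal node features
    dual_nodes = list(edges)                                     # one dual node per primal edge
    dual_features = np.stack([X[i] - X[j] for i, j in dual_nodes])
    dual_edges = [
        (a, b)
        for a, b in itertools.combinations(range(len(dual_nodes)), 2)
        if set(dual_nodes[a]) & set(dual_nodes[b])               # dual edge iff the primal edges share a vertex
    ]
    return dual_nodes, dual_edges, dual_features

Where the primal graph carries edge attributes, they could be concatenated to dual_features, as the text suggests.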
Suppose that P (Y |X) is predictions by primal pipeline and P (Ŷ |X) = P (Ŷ |X) is the derived predictions obtained through an aggregation from the predictions on dual labels by dual pipeline to primal label predictions. X is derived from X as aforementioned (Section 3.3). We first calculate the joint probability matrix P (Y,Ŷ) of two matrices P (Y |X) and P (Ŷ |X) we further get the marginal probabilities of P (Y) and P (Ŷ) from P (Y,Ŷ). KullbackLeibler Divergence D KL is evaluated by finally, we attains the loss function as 3 illustrates a fast algorithm deriving primal predictions from predictions of dual pipeline. It is conducted by introducing two special incidence matrices. The matrix at the left hand side (N × M, N = |V| and M = |E|) is an incidence matrix in which the rows represent primal nodes, each column depicts whether a primal node in a row has an incidence in the dual node represented by this column. The rightmost matrix is the incidence matrix of primal labels presenting in dual labels with dimension of L 2 × L. Although these two matrices are extremely sparse when node number is very large (we store them in compressed form), by taking advantage of GPU's powerful computing capability, the delicate sparse matrix multiplication subroutine, e.g. Nvidia's cuSARSE, runs much faster than codes with loops for lumping the incidences. In this section, we evaluate the performance of TwinGCN, we mainly focus on semi-supervised node classification in current work. Actually, TwinGCN also support unsupervised learning by changing the loss functions which we will fulfill in future work. We conduct experiments on three benchmark datasets and follow existing studies (; ; etc.) The datasets include Cora, Citeseer, and Pubmed . All these three datasets are collected from their corresponding citation networks, the nodes represent documents and edges are the citations. Table 4.1 shows details of these datasets. Label rate indicates the portion of the available labeled nodes used for training. The training process takes 20 labeled samples for each class for every dataset. Since both pipelines of our proposed architecture work with graph convolution based on spectral graph theory, we use recent works, such as ChebNet GCN , and GWNN , etc. These models maintain the same graph Lapalacian base structure, unlike some other methods take partial graph structure, e.g. FastGCN applies Monte Carlo importance sampling on edges. however, this kind of method only guarantees the convergence as the sample size goes to infinity. For the sake of consistency for comparison, the hyper-parameters for training are kept the same for primal pipeline as other models. The primal are composed with two graph convolution layers with 16 hidden units and applied with ReLU non-linear activations. Loss is evaluated with the softmax function. Dropout of primal pipeline is set to p = 0.5 for the primal. We use the Adam optimizer for optimizing the weights with an initial learning rate lr = 0.01. As the dual graph is normally much bigger than the counterpart primal graph, its adjacency/Laplacian matrix and the number of dual nodes becomes quadratically larger, e.g. N nodes with N × (N − 1) edges in a fully-connected graph. Therefore, to avoid overfitting on dual pipeline, we set its dropout rate higher than 70%. We also introduce a sampling rate to extract a small fraction from the total dual node labels. Having a large number of edges in the primal graph also means a large number of dual nodes. 
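The aggregation from dual predictions back to primal labels and the KL regularizer described above can be sketched as follows. The two incidence matrices follow the description in the text (an N x M node-to-edge incidence and an L^2 x L label-pair-to-label incidence); the joint-probability construction is our simplified reading of the equations, whose exact form is hidden behind the DISPLAYFORM placeholders.

import numpy as np

def aggregate_dual_predictions(B_nodes, dual_probs, B_labels):
    # B_nodes: (N, M) incidence of primal nodes in dual nodes; dual_probs: (M, L*L) dual softmax outputs
    # B_labels: (L*L, L) incidence of primal labels in dual label pairs; all three can be sparse in practice
    agg = B_nodes @ dual_probs @ B_labels
    return agg / agg.sum(axis=1, keepdims=True)          # (N, L) derived primal predictions P(Y_hat | X)

def kl_regularizer(primal_probs, derived_probs, eps=1e-12):
    # joint over label pairs, then KL divergence between its two marginals
    joint = (primal_probs[:, :, None] * derived_probs[:, None, :]).mean(axis=0)
    p_y, p_yhat = joint.sum(axis=1), joint.sum(axis=0)
    return float(np.sum(p_y * np.log((p_y + eps) / (p_yhat + eps))))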
In such situation, the performance will be degraded severely. The quantitative comparison among different models is given in Table 4.4. For node classification, TwinGCN achieves similar or outperforms with some datasets. The performance gain comes from the aggregation of knowledge propagated through edges (or dual nodes) trained by dual pipeline. However, primal pipeline only will ignore the dependency between labels of nodes. Fig.4a ) illustrate that when compared to the GCN, TwinGCN bearing two pipelines converges slower but achieves a higher accuracy as the number of epoch increases. This is because that we have two pipelines through mutual interaction. In Fig.4b ), we observe that two loss curves of traditional GCN and TwinGCN have very similar decreasing trends. However, the loss curve of TwinGCN is slightly above GCN because the loss of TwinGCN is the summation of both primal and dual pipelines. To test whether the introduced dual pipeline and regularization improve the basic GCN pipeline, we conducted controlled experiments to make comparison among GCN, GCNs with pipelines on original graph and dual graph and TwinGCN(GCNs with both pipelines and regularization by KLdivergence). Method Cora Citeseer Pubmed GCN 81.5% ± 0.3% 70.8% ± 0.1% 78.8% ± 0.1% GCN(double-pipeline) 81.6% ± 0.4% 72.5% ± 1.0% 79.8% ± 2.3% TwinGCN 83.0% ± 1.3% 72.5 ± 0.8% 79.5% ± 1.2% In this work, we propose the TwinGCN with parallel pipelines working on both the primal graph and its dual graph, respectively. TwinGCN achieves the state-of-the-art performance in semisupervised learning tasks. Moreover, TwinGCN's ability is not limited to this, we can extend its power/utilization into unsupervised learning by altering its loss functions. Use unnumbered third level headings for the acknowledgments. All acknowledgments, including those to funding agencies, go at the end of the paper.
A primal-dual graph neural network model for semi-supervised learning
781
scitldr
We describe two end-to-end autoencoding models for semi-supervised graph-based dependency parsing. The first model is a Local Autoencoding Parser (LAP) encoding the input using continuous latent variables in a sequential manner; The second model is a Global Autoencoding Parser (GAP) encoding the input into dependency trees as latent variables, with exact inference. Both models consist of two parts: an encoder enhanced by deep neural networks (DNN) that can utilize the contextual information to encode the input into latent variables, and a decoder which is a generative model able to reconstruct the input. Both LAP and GAP admit a unified structure with different loss functions for labeled and unlabeled data with shared parameters. We conducted experiments on WSJ and UD dependency parsing data sets, showing that our models can exploit the unlabeled data to boost the performance given a limited amount of labeled data. Dependency parsing captures bi-lexical relationships by constructing directional arcs between words, defining a head-modifier syntactic structure for sentences, as shown in Figure 1. Dependency trees are fundamental for many downstream tasks such as semantic parsing (;, machine translation , information extraction and question answering . As a , efficient parsers have been developed using various neural architectures. While supervised approaches have been very successful, they require large amounts of labeled data, particularly when neural architectures are used. Syntactic annotation is notoriously difficult and requires specialized linguistic expertise, posing a serious challenge for low-resource languages. Semisupervised parsing aims to alleviate this problem by combining a small amount of labeled data and a large amount of unlabeled data, to improve parsing performance over labeled data alone. Traditional semi-supervised parsers use unlabeled data to generate additional features, assisting the learning process , together with different variants of self-training (Søgaard & Rishøj, 2010). However, these approaches are usually pipe-lined and error-propagation may occur. In this paper, we propose two end-to-end semi-supervised parsers based on probabilistic autoencoder models illustrated in Figure 3, Locally Autoencoding Parser (LAP) and Globally Autoencoding Parser (GAP). In LAP, continuous latent variables are used to support tree inference by providing a better representation, while in GAP, the latent information forms a probability distribution over dependency trees corresponding to the input sentence. A similar idea has been proposed by , but our GAP model differs fundamentally from their parser, as GAP does not sample from the posterior of the latent tree structure to approximate the Evidence Lower Bound (ELBO). Instead it relies on a tractable algorithm to directly compute the posterior to calculate the ELBO. We summarize our contributions as follows: 1. We proposed two autoencoding parsers for semi-supervised dependency parsing, with complementary strengths, trading off speed vs. accuracy; 2. We propose a tractable inference algorithm to compute the expectation and marginalization of the latent dependency tree posterior analytically for GAP, avoiding sampling from the posterior to approximate the expectation ; 3. We show improved performance of both LAP and GAP with unlabeled data on WSJ and UD data sets empirically, and improved of GAP comparing to a recently proposed semi-supervised parser . 
Most dependency parsing studies fall into two major groups: graph-based and transition-based . Graph-based parsers regard parsing as a structured prediction problem to find the most probable tree, while transition-based parsers (; 2008) treat parsing as a sequence of actions at different stages leading to a dependency tree. While earlier works relied on manual feature engineering, in recent years the hand-crafted features were replaced by embeddings and deep neural architectures, leading to improved performance in both graph-based parsing and transition-based parsing (; ;). More recent works rely on neural architectures for learning a representation for scoring structural decisions;;. The annotation difficulty for this task, has also motivated work on unsupervised (grammar induction) and semi-supervised approaches to parsing (; ; ; ; ; ;). Similar to other structured prediction tasks, directly optimizing the objective is difficult when the underlying probabilistic model requires marginalizing over the dependency trees. Variational approaches are a natural way for alleviating this problem, as they try to improve the lower bound of the original objective, and were applied in several recent NLP works (; b; ; b; a). Variational Autoencoder (VAE) is particularly useful for latent representation learning, and is studied in semi-supervised context as the Conditional VAE (CVAE) . The work mostly related to ours is as they consider the dependency tree as the latent variable, but their work takes a second approximation to the variational lower bound by an extra step to sample from the latent dependency tree, without identifying a tractable inference. We show that with the given structure, exact inference on the lower bound is achievable without approximation by sampling, which tightens the lower bound. A dependency graph of a sentence can be regarded as a directed tree spanning all the words of the sentence, including a special "word"-the ROOT-to originate out. Assuming a sentence length of l, a dependency tree can be denoted as T = (< h 1, m 1 >, . . ., < h l−1, m l−1 >), where h t is the index in the sequence of the head word of the dependency connecting the tth word m t as a modifier. Our graph-based parser is constructed by following the standard structured prediction paradigm . In inference, based on the parameterized scoring function S Λ with parameter Λ, the parsing problem is formulated as finding the most probable directed spanning tree for a given sentence x: where T * is the highest scoring parse tree and T is the set of all valid trees for the sentence x. It is common to factorize the score of the entire graph into the summation of its substructures: the individual arc scores : whereT represents the candidate parse tree, and s Λ is a function scoring each individual arc. s Λ (h, m) describes the likelihood of forming an arc from the head h to its modifier m in the tree. Through out this paper, the scoring is based on individual arcs, as we focus on first order parsing. We used the same neural architecture as that in. In this formulation, we first use two parameters to extract two different representations that carry two different types of information: a head seeking for its modifier (h-arc); as well as a modifier seeking for its head (m-arc). Then a nonlinear function maps them to an arc score. For a single sentence, we can form a scoring matrix as shown in Figure 4, by filling each entry in the matrix using the score we obtained. 
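To illustrate how such a scoring matrix could be assembled, here is a minimal PyTorch sketch of a first-order arc scorer in this style. The bi-LSTM states are assumed to be given; the layer sizes follow the experimental section (bi-LSTM hidden size 125, hence 250-dimensional contextual states, and 100-dimensional head/modifier representations), but the exact wiring and the names are our assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class ArcScorer(nn.Module):
    """Scores every (head, modifier) pair of a sentence, returning an l x l matrix."""
    def __init__(self, state_dim=250, arc_dim=100):
        super().__init__()
        self.head_mlp = nn.Linear(state_dim, arc_dim)   # "h-arc": a head looking for its modifier
        self.mod_mlp  = nn.Linear(state_dim, arc_dim)   # "m-arc": a modifier looking for its head
        self.out      = nn.Linear(arc_dim, 1)

    def forward(self, states):
        # states: (l, state_dim) contextual vectors from the bi-LSTM, with ROOT at position 0
        h = torch.tanh(self.head_mlp(states))           # (l, arc_dim)
        m = torch.tanh(self.mod_mlp(states))            # (l, arc_dim)
        # combine every head/modifier pair, then map each pair to a scalar arc score
        pair = torch.tanh(h.unsqueeze(1) + m.unsqueeze(0))   # (l, l, arc_dim)
        return self.out(pair).squeeze(-1)               # scores[i, j] = s(head = i, modifier = j)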
Therefore, the scoring matrix is used to represent the head-modifier arc score of all the possible arcs connecting words in a sentence . Using the scoring arc matrix, we build graph-based parsers. Since exploring neural architectures for scoring is not our focus, we did not explore other architectures, however performance shall be further improved using advanced neural architectures. Variational Autoencoder (VAE). The typical VAE is a directed graphical model with Gaussian latent variables, denoted by z. A generative process first generates a set of z from the prior distribution π(z) and the data x is generated as P θ (x|z) parameterized by θ given input x, In our scenario, x is an input sequence and z is a sequence of latent variables corresponding to it. The VAE framework seeks to maximize the complete log-likelihood log P (x) by marginalizing out the latent variable z. Since direct parameter estimation of log P (x) is usually intractable, a common solution is to maximize its Evidence Lower Bound (ELBO) by introducing an auxiliary posterior Q(x|z) distribution that encodes the input into the latent space. Tree Conditional Random Field. Linear chain CRF models an input sequence x = (x 1 . . . x l) of length l with labels y = (y 1 . . . y l) with globally normalized probability where Y is the set of all the possible label sequences, and S(x, y) the scoring function, usually decomposed as emission (for first order models. Tree CRF models generalize linear chain CRF to trees. For dependency trees, if POS tags are given, the tree CRF model tries to resolve which node pairs should be connected with direction, such that the arcs form a tree. The potentials in the dependency tree take an exponential form, thus the conditional probability of a parse tree T, given the sequence, can be denoted as: where is the partition function that sums over all possible valid dependency trees in the set T(x) of the given sentence x. We extend the original VAE model for sequence labeling to dependency parsing by building a latent representation position-wise to form a sequential latent representation. It has been shown that under the VAE framework the latent representation can reflect the desired properties of the raw input . This inspired us to use the continuous latent variable as neural representations for the dependency parsing task. Typically, each token in the sentence is represented by its latent variable z t, which is a high-dimensional Gaussian variable. This configuration on the one hand ensures the continuous latent variable retains the contextual information from lower-level neural models to assist finding its head or its modifier; on the other hand, it forces tokens of similar properties closer in the euclidean space. We adjust the original VAE setup in our semi-supervised task by considering examples with labels, similar to recent conditional variational formulations (; ;). We propose a full probabilistic model for any certain sentence x, with the unified objective to maximize for supervised and unsupervised parsing as follows: This objective can be interpreted as follows: if the training example has a golden tree T with it, then the objective is the log joint probability P θ,ω (T, x); if the golden tree is missing, then the objective is the log marginal probability P θ (x). The probability of a certain tree is modeled by a tree-CRF in Eq. 1 with parameters ω as P ω (T |x). 
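Written out explicitly, the unified objective described above can be summarised as follows (our reconstruction from the surrounding description):

\mathcal{J}(\theta,\omega \mid x) \;=\;
\begin{cases}
\log P_{\theta,\omega}(T, x) & \text{if a gold tree } T \text{ is available,}\\
\log P_{\theta}(x) & \text{otherwise.}
\end{cases}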
Given the assumed generative process P (x|z), directly optimizing this objective is intractable, we instead optimize its ELBO (We show the details in the appendix, proving J lap is the ELBO of J in Lemma A.1): [log P ω (T |z)]. Instead of autoencoding the input locally at the sequence level, we could alternatively directly regard the dependency tree as the structured latent variable to reconstruct the input sentence, by building a model containing both a discriminative component and a generative component. The discriminative component builds a neural CRF model for dependency tree construction, and the generative model reconstructs the sentence from the factor graph as a Bayesian network, by assuming a generative process in which each head generates its modifier. Concretely, the latent variable in this model is the dependency tree structure. We model the discriminative component in our model as P Φ (T |x) parameterized by Φ, taking the same form as in Eq. 1. Typically in our model, Φ are the parameters of the underlying neural networks, whose architecture is described in Sec. 3.1. We use a set of conditional categorical distributions to construct our Bayesian network decoder. More specifically, using the head h and modifier m notation, each head reconstructs its modifier with the probability P (m t |h t) for the tth word in the sentence (0th word is always the special "ROOT" word), which is parameterized by the set of parameters Θ. Given Θ as a matrix of |V| by |V|, where |V| is the vocabulary size, θ mh is the item on row m column h denoting the probability that the head word h would generate m. In addition, we have a simplex constraint m∈V θ mh = 1. The probability of reconstructing the input x as modifiers m in the generative process is where l is the sentence length and P (m t |h t) represents the probability a head generating its modifier. With the design of the discriminative component and the generative component of the proposed model, we have a unified learning framework for sentences with or without golden parse tree. The complete data likelihood of a given sentence, if the golden tree is given, is where s Φ,Θ (h, m) = s Φ (h, m) + log θ mh, with m, x and T all observable. For unlabeled sentences, the complete data likelihood can be obtained by marginalizing over all the possible parse trees in the set T(x): where We adapted a variant of's algorithm to marginalize over all possible trees to compute both Z and U, as U has the same structure as Z, assuming a projective tree. We use log-likelihood as our objective function. The objective for a sentence with golden tree is: for sentence x l i with golden parse tree T l i in the labeled data set {x, T} l do Stochastically update the parameter Λ in the encoder using Adam while fixing the decoder. Compute the posterior Q(T) in an arc factored manner for x u i tractably. 10: Compute the expectation of all possible (h(head) → m(modif ier)) occurrence in the sentence x based on Q(T). Update buffer B using the expectation to the power for Obtain Θ globally and analytically based on the buffer B and renew the decoder. 14: end for If the input sentence does not have an annotated golden tree, then the objective is: Thus, during training, the objective function with shared parameters is chosen based on whether the sentence in the corpus has golden parse tree or not. Directly optimizing the loss in Eq.2 is difficult for the unlabeled data, and may lead to undesirable shallow local optima without any constraints. 
Instead, we derive the evidence lower bound (ELBO) of log P Θ,Φ (m|x) as follows, by denoting Q(T) = P Θ,Φ (T |m, x) as the posterior: Instead of maximizing the log-likelihood directly, we alternatively maximize the ELBO, so our new objective function for unlabeled data becomes max In addition, to account for the unambiguity in the posterior, we incorporate entropy regularization when applying our algorithm, by adding an entropy term − T Q(T) log Q(T) with a non-negative factor σ when the input sentence does not have a golden tree. Adding this regularization term is equivalent as raising the expectation of Q(T) to the power of 1 1−σ. We annealed σ from 1 to 0.3 from the beginning of training to the end, as in the beginning, the generative model is well initialized by sentences with golden trees that resolve disambiguity. In practice, we found the model benefits more by fixing the parameter Φ when the data is unlabeled and optimizing the ELBO w.r.t. the parameter Θ. We attribute this to the strict convexity of the ELBO w.r.t. Θ, by sketching the proof in the appendix. The details of training are shown in Alg. 1. The common approach to approximate the expectation of the latent variables from the posterior distribution Q(T) is via sampling in VAE-type models . In a significant contrast to that, we argue in this model the expectation of the latent variable (which is the dependency tree structure) is analytically tractable by designing a variant of the inside-outside algorithm in an arc decomposed manner. We leave the detailed derivation in the appendix. A high-level explanation is that assuming the dependency tree is projective, specialized belief propagation algorithm exists to compute not only the marginalization but also the expectation analytically, making inference tractable. Data sets First we compared our models' performance with strong baselines on the WSJ data set, which is the Stanford Dependency conversion of the Penn Treebank using the standard section split: 2-21 for training, 22 for development and 23 for testing. Second we evaluated our models on multiple languages, using data sets from UD (Universal Dependency) 2.3 . Since semi-supervised learning is particularly useful for low-resource languages, we believe those languages in UD can benefit from our approach. The statistics of the data used in our experiments are described in Table 3 in appendix. To simulate the low-resource language environment, we used 10% of the whole training set as the annotated, and the rest 90% as the unlabeled. Input Representation and Architecture Since we use the same neural architecture in all of our models, we specify the details of the architecture once, as follows: The internal word embeddings have dimension 100 and the POS embeddings have dimension 25. The hidden layer of the bi-LSTM layer is of dimension 125. The nonlinear layers used to form the head and the modifier representation both have 100 dimension. For LAP, we use separate bi-LSTMs for words and POSs. In GAP, using "POS to POS" decoder only yield the satisfactory performance. This echos the finding that complicated decoders may cause "posterior collapse" (van den ;). Training In the training phase, we use Adam to update all the parameters in both LAP and GAP, except the parameters in the decoder in GAP, which are updated by using their global optima in each epoch. We did not take efforts to tune models' hyper-parameters and they remained the same across all the experiments. 
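For concreteness, the alternating procedure of Alg. 1 can be sketched as follows. This is only a schematic rendering in Python: the helper routines adam_step and arc_posterior are hypothetical stand-ins for the encoder update and the exact arc-factored posterior computation described above, and details such as how arcs from the labeled data enter the count buffer are glossed over.

import numpy as np

def gap_epoch(labeled, unlabeled, encoder, theta, sigma):
    """One schematic epoch of GAP training (cf. Alg. 1).

    labeled  : list of (sentence, gold_tree) pairs; sentences are lists of word ids
    unlabeled: list of sentences
    theta    : |V| x |V| decoder matrix with theta[m, h] = P(modifier m | head h)
    sigma    : entropy-regularisation factor (annealed from 1 towards 0.3 in the paper)
    """
    # discriminative step: update the encoder on labeled sentences, decoder fixed
    for sent, gold_tree in labeled:
        encoder.adam_step(sent, gold_tree, theta)          # hypothetical helper

    # generative step: accumulate expected (head -> modifier) counts on unlabeled data
    buffer = np.zeros_like(theta)
    power = 1.0 / max(1.0 - sigma, 1e-6)
    for sent in unlabeled:
        q_arcs = encoder.arc_posterior(sent, theta)        # exact arc-factored posterior Q(T),
        for (h, m), q in q_arcs.items():                   # hypothetically {(h, m): probability}
            buffer[sent[m], sent[h]] += q ** power         # entropy-regularised expectation

    # closed-form decoder update: column-wise normalisation gives the global optimum of the ELBO
    theta = buffer / np.maximum(buffer.sum(axis=0, keepdims=True), 1e-12)
    return theta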
We first evaluate our models on the WSJ data set and compared the model performance with other semi-supervised parsing models, including CRFAE , which is originally designed for dependency grammar induction but can be modified for semi-supervised parsing, and "differentiable Perturb-and-Parse" parser (DPPP) . To contextualize the , we also experiment with the supervised neural margin-based parser (NMP) , neural tree-CRF parser (NTP) and the supervised version of LAP and GAP, with only the labeled data. To ensure a fair comparison, our experimental set up on the WSJ is identical as that in DPPP and we use the same 100 dimension skip-gram word embeddings employed in an earlier transition-based system . We show our experimental in Table 1. As shown in this table, both of our LAP and GAP model are able to utilize the unlabeled data to increase the overall performance comparing with only using labeled data. Our LAP model performs slightly worse than the NMP model, which we attribute to the increased model complexity by incorporating extra encoder and decoders to deal with the latent variable. However, our LAP model achieved comparable on semi-supervised parsing as the DPPP model, while our LAP model is simple and straightforward without additional inference procedure. Instead, the DPPP model has to sample from the posterior of the structure by using a "GUMBEL-MAX trick" to approximate the categorical distribution at each step, which is intensively computationally expensive. Further, our GAP model achieved the best among all these methods, by successfully leveraging the the unlabeled data in an appropriate manner. We owe this success to such a fact: GAP is able to calculate the exact expectation of the arc-decomposed latent variable, the dependency tree structure, in the ELBO for the complete data likelihood when the data is unlabeled, rather than using sampling Model UAS DPPP (L) 88.79 DPPP (L+U) 89.50 CRFAE (L+U) 82.34 NMP Table 2: In this table we compare different models on multiple languages from UD. Models were trained in a fully supervised fashion with labeled data only (noted as "L") or semi-supervised (notes as "L+U"). "ST" stands for self-training. to approximate the true expectation. Self-training using NMP with both labeled and unlabeled data is also included as a base-line, where the performance is deteriorated without appropriately using the unlabeled data. We also evaluated our models on multiple languages from the UD data and compared the model performance with the semi-supervised version of CRFAE and the fully supervised NMP and NTP. To fully simulate the low-resource scenario, no external word embeddings were used. We summarize the in Table 2. First, when using labeled data only, LAP and GAP have similar performance as NMP and NTP. Second, we note that our LAP and GAP models do benefit from the unlabeled data, compared to using labeled data only. Both our LAP and GAP model are able to exploit the hidden information in the unlabeled data to improve the performance. Comparing between LAP and GAP, we notice GAP in general has better performance than LAP, and can better leverage the information in the unlabeled data to boost the performance. These validate that GAP is especially useful for low-resource languages with few annotations. We also experimented using self-training on the labeled and unlabeled data with the NMP model. As show, selftraining deteriorate the performance especially when the size of the training data is small. 
In this paper, we present two semi-supervised parsers, which are locally autoencoding parser (LAP) and globally autoencoding parser (GAP). Both of them are end-to-end learning systems enhanced with neural architecture, capable of utilizing the latent information within the unlabeled data together with labeled data to improve the parsing performance, without using external resources. More importantly, our GAP model outperforms the previous published semisupervised parsing system on the WSJ data set. We attribute this success to two reasons: First, our GAP model consists both a discriminative component and a generative component. These two components are constraining and supplementing each other such that final parsing choices are made in a checked-and-balanced manner to avoid over-fitting. Second, instead of sampling from posterior of the latent variable (the dependency tree) , our model analytically computes the expectation and marginalization of the latent variable, such that the global optima can be found for the decoder, which leads to an improved performance. A APPENDIX Lemma A.1. J lap is the ELBO (evidence lower bound) of the original objective J, with an input sequence x. Denote the encoder Q is a distribution used to approximate the true posterior distribution P φ (z|x), parameterized by φ such that Q encoding the input into the latent space z. Proof. Combining U and L leads to the fact: In practice, similar as VAE-style models, E z∼Q φ (z|x) [log P θ (x|z)] is approximated by, where z j is the jthe sample of N samples sampled from Q φ (z|x). At prediction stage, we simply use µ z rather than sampling z. Here we used a mean field approximation together with the conditional independence assumption by assuming P θ (z|x) ≈ l t=1 Q φ (z t |x t). The generative model P θ (x|z) acting as decoder parameterized by θ tries to regenerate the specific input x t at time step t from the latent space z t, as we assume conditional independence in the generative process among P θ (x t |z t). The encoder and the decoder are trained jointly in the classical variational autoencoder framework, by minimizing the KL divergence between the approximated posterior and the true posterior. We describe the encoder and decoder formulation. We parameterize the encoder Q φ (z t |x t) in such a way: First a bi-LSTM is used to obtain a non-linear transformation h t of the original x t; then two separate MLPs are used to compute the mean µ zt and the variance σ 2 zt. The generative story P θ (x t |z t) follows such parameterization: we used a MLP of two hidden layers in-between to take z t as the input, and then predict the word (or POS tag) over the vocabulary, such that the reconstruction probability can be measured. Following traditional VAE training paradigms, we also apply the "re-parameterization" trick to circumvent the non-differentiable sampling procedure to sample z t from the Q φ (z t |x t). Instead of directly sample from N (µ zt, σ 2 zt), we form z t = µ zt + σ 2 zt by sampling ∼ N (0, I). In addition, to avoid hindering learning during the initial training phases, following previous works , we anneal the temperature on the KL divergence term from a small value to 1. From an empirical Bayesian perspective, rather than fixing the prior using some certain distributions, it is beneficial to estimate the prior distribution directly from the data by treating prior's parameters part of the model parameters. 
Similar to the approach used in the previous study , LAP also learns the priors from the data by updating them iteratively. We initialize the priors from a standard Gaussian distribution N (0, I), where I is an identity matrix. During the training, the current priors are updated using the last optimized posterior, following the rule: where P (x) represents the empirical data distribution, and k the iteration step. Empirical Bayesian is also named as "maximum marginal likelihood", such that our approach here is to marginalize over the missing observation as a random variable. In previous studies exploring parsing using neural architectures, POS tags and external embeddings have been shown to contain important information characterizing the dependency relationship between a head and a child. Therefore, in addition to the variational autoencoding framework taking as input the randomly initialized word embeddings, optionally we can build the same structure for POS to reconstruct tags and for external embeddings to reconstruct words as well, whose variational objectives are U p and U e respectively. Hence, the final variational objective can be a combination of three: U = U w (The original U in Lemma A.1) + U p + U e (or just U = U w + U p if external embeddings are not used). Assuming the sentence is of length l, and we have obtained a arc decomposed scoring matrix S of size l ×l, and an entry S[i, j] i =j,j =0 stands for the arc score where ith word is the head and jth word the modifier. We first describe the inside algorithm to compute the marginalization of all possible projective trees in Algo.2. We then describe the outside algorithm to compute the outside tables in Algo. 3. In this algorithm, stands for the logaddexp operation. Finally, with the inside table α, outside table β and the marginalization Z of all possible latent trees, we can compute the expectation of latent tree in an arc-decomposed manner. Algo. 4 describes the procedure. It the matrix P containing the expectation of all individual arcs by marginalize over all other arcs except itself. Light modification is needed in our study to calculate the expectation w.r.t. the posterior distribution Q(T) = P Θ,Φ (T |m, x), as we have In this section we derive the strict convexity of ELBO w.r.t. Θ. Since we only care about the term containing Θ, the KL divergence term degenerates to a constant. For sentence i, Q(T i) has been derived in the previous section as matrix P and 1 is the indication function. Q(1(h → m)) log θ mh Q(1(h → m)) is a Bernoulli distribution, indicating whether the arc (h → m) exists. s.t. DATA SET STATISTICS We show the details of the statistics of the WSJ data set, which is the Stanford Dependency conversion of the Penn Treebank and the statistics of the languaes we used in UD (Universal Dependency) 2.3 here.
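As a concrete companion to Algos. 2-4, the inside pass over projective trees can be sketched in NumPy as below. This is a standard Eisner-style chart recursion written by us rather than the paper's exact pseudocode; it returns only the log-partition (the outside pass and the arc expectations of Algo. 4 reuse the same chart structure), and labeled arcs and single-root constraints are omitted.

import numpy as np

NEG_INF = -np.inf

def inside(scores):
    """Inside pass of Eisner's algorithm in the log domain.

    scores : (n, n) matrix, scores[h, m] = arc score for head h -> modifier m,
             with position 0 reserved for ROOT.
    Returns log Z, the log-partition over all projective dependency trees.
    """
    n = scores.shape[0]
    # chart[i, j, direction, completeness]; direction 0 = head on the right, 1 = head on the left
    chart = np.full((n, n, 2, 2), NEG_INF)
    chart[np.arange(n), np.arange(n), :, 1] = 0.0        # single-word spans are complete

    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            # incomplete spans: attach an arc between i and j
            split = [chart[i, k, 1, 1] + chart[k + 1, j, 0, 1] for k in range(i, j)]
            both = np.logaddexp.reduce(np.array(split))
            chart[i, j, 0, 0] = both + scores[j, i]      # arc j -> i (head on the right)
            chart[i, j, 1, 0] = both + scores[i, j]      # arc i -> j (head on the left)
            # complete spans: combine an incomplete span with a complete one
            left = [chart[i, k, 0, 1] + chart[k, j, 0, 0] for k in range(i, j)]
            chart[i, j, 0, 1] = np.logaddexp.reduce(np.array(left))
            right = [chart[i, k, 1, 0] + chart[k, j, 1, 1] for k in range(i + 1, j + 1)]
            chart[i, j, 1, 1] = np.logaddexp.reduce(np.array(right))

    return chart[0, n - 1, 1, 1]                         # ROOT spans the whole sentence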
We describe two end-to-end autoencoding parsers for semi-supervised graph-based dependency parsing.
782
scitldr
We improve previous end-to-end differentiable neural networks (NNs) with fast weight memories. A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net. The system is trained on a complex sequence to sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory (i.e. time-varying variables) than similar-sized standard recurrent NNs (RNNs). In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures. Recurrent Neural Networks (RNNs) are general parallel-sequential computers that can implement algorithms which map input sequences to output sequences. One variation of it, the Long ShortTerm Memory (LSTM), has achieved great success on a wide variety of Machine Learning tasks such as natural language translation, image caption generation, and speech recognition among others BID11; BID6; BID7. In practical applications, most RNNs are actually LSTM networks now used billions of times per day for automatic translation BID21, speech recognition Sak et al., and many other tasks BID13 BID17.However, plain RNNs but also LSTMs are known to have difficulty in performing memorization, like e.g. a simple copying task of outputting the same sequence as the input sequence BID22. But also other more high-level cognitive tasks have been shown to be difficult to master BID2.In this work, we explore a generalization of the Associative Retrieval problem. We follow a similar style as in BID2 but turned the task into a general sequence to sequence problem and also substantially increased its complexity. The underlying mechanism is essentially a dictionary with a certain number of key-value pairs which is controlled using a simple syntax of storage and query tokens. In order to overcome the limitation of current RNNs on this task, we propose a fast weight architecture that is able to learn and generalize using much fewer parameters. Our architecture consists of the two networks s and f which both operate on the input sequence in parallel. The small network f predicts the targets while the big network s generates on-the-fly weight-updates for f. The big network s is called the slow network because its weights change only after every mini-batch according to the gradient-based learning algorithm. f, on the other hand, is called the fast network because its weights can change after every time step. Our work is heavily influenced by early work in Neural Networks. Like many ideas at the time, the idea of fast-changing weights emerged out of biological evidence and the efforts of storing activation patterns in the weights of an associative network. Networks with non-differentiable fast weights or "dynamic links" have been published since 1981 von der BID19 BID5; BID10. Subsequent work showed that a slow network can use gradient descent learning to control fast weights of a separate network in end-to-end differentiable fashion BID14. A more recent work on fast weights, which provides a good overview of the physiological facts, is the work by BID0. 
In their work they aptly describe their own fast weight architecture as an attention mechanism which effectively increases the flexibility of the learned program, making it more adaptive through time and able to store temporally recent information but, by design, it is unable to store long-term knowledge using that same mechanism. Their decay of the modulation on top of the slow weights that long-term knowledge has to be learned using a learning algorithm like gradient descent and can't be learned using fast weights. Our approach follows more from BID14 which frames the fast weights idea as a means of program generation i.e. using the first network to produce context dependent weight updates for a second network. We also refer to the weights of a network as its program. An adaptive program, such as the one of f, and its benefits can be described through the concept of the ratio of time-varying variables BID15. Take as an example a plain RNN. The program is its recurrent weight matrix. Once trained, the number of time-varying variables in the computational graph is limited to the state variables and increasing the size of the recurrent weight matrix quadratically increases the number of weight variables available for the implementation of the during inference static program but only linearly increase the number of adaptive state variables. Fast weight architectures tend to relax this ratio allowing the program to also change through time in order to adapt to the context-specific needs. An adjacent approach to the relaxation of the ratio of time-varying variables is the Hypernetwork by BID9. Their initial idea consisted of an adaptive interpolation of several Long Short-Term Memory (LSTM) cells but then lead to a novel approximation which still allows for an adaptive program. Their sequential Hypernetwork was applied to language modelling and outperformed the LSTM on several datasets. As introduced, our fast weight architecture consists of a big network s and a small network f. For simplicity we chose both networks to be RNNs but they could be replaced by any other recurrent architecture, such as a LSTM. While the weights of s change after every batch, the weights of f change after every time step, hence the fast/slow dichotomy. The following formulas are for a single sample or a batch size of 1 and uppercase letters refer to matrices while lower case letters refer to vectors. The biases are omitted for simplicity. Both, the slow network s(x t, h S t) and the time-varying fast network f t (x t, h F t) use the same input embedding and the same output projection as it is common practice for RNNs. In this paper, we use in both cases a 2-layer transition RNN. Recall the basic RNN formulation, DISPLAYFORM0 where h is the hidden state, φ is a non-linear activation function such as tanh, x t is the current input from the input sequence X = (x 1, x 2, ..., x T), and the weights W ∈ R m×n are fixed parameters. Analogously, we define our fast network f t (x t, h DISPLAYFORM1 Where h F is the hidden state of the fast network, LN refers to layer normalization (LN) as introduced by BID1, and F ∈ R m×n, F ∈ R m×m are parameters which are not fixed and can change through time. Similarly, we define the the slow network s(x t, h S t): DISPLAYFORM2 DISPLAYFORM3 Where h S is the hidden state of the slow network, respectively. 
Every F t+1 is then calculated as follows: DISPLAYFORM4 DISPLAYFORM5 Where H and T are outer products to generate weight matricies in an Hebb-like manner as introduced by BID14 and have the same dimensionality as the respective F, is the element-wise product, 1 denotes a matrix of ones. We make use of LN in the fast network because it has shown to stabilize learning especially in the beginning of the training but also slightly improved validation set performance. Because of the generated nature of the fast weights, we expected them to be far from good initial values when training begins. We use LN to overcome this problem. We tested our architecture without LN, with pre-activation LN, with post-activation LN, and with pre and post-activation LN and found that using a post-activation LN always works best no matter if we normalize pre-activation. In equation 8 we use a gating matrix to blend the current fast matrix with a matrix update. This is essentially equal to the gating mechanism for activations in a highway network BID18. We also evaluated other multiplicative and additive interactions using more than one matrix but achieved the best using this specific update mechanism. Furthermore, we'd like to point out that the slow network s is generating weights for the next time step. We do this to prevent s from fully learning a prediction on its own which bypasses f. We didn't experiment with update delays greater than one step. Figure 1: An informal pictogram which visualises the fast and slow network of our architecture at time step t. e(x t) refers to the embedding of x t and biases, activation functions, layernorm, and outer-products are not displayed. We created a challenging sequence to sequence task based on the Associative Retrieval problem. Our version employs of a simple syntax of storage and query tokens. Storage tokens are key-value pairs while query tokens come with only a key to which the network must output the respective value according to a previously seen storage token. All tokens are part of the inputs sequence while the generated answers to the query tokens are part of the output sequence. Whenever there is no query to respond to the network is supposed to output some default value which in our case is the empty space character. This means that non-trivial problem-relevant predictions are rather sparse. The keys are 2 to 4 characters long while the respective values are a single character. All characters are uniformly sampled with replacement from a set of 8 possible ASCII characters (a to h). Each query token has to be a valid key which the network must have seen after the previous query token. Preceding every query token there are between 1 and 10 storage tokens. We concatenated all query tokens and their respective storage tokens into one large sequence and use truncated Backpropagation Through Time (truncated BPTT) to train the network. The following is an example with only 2 queries with quotes to show the beginning and end of both sequences.x: "S(hgb,c),S(ceaf,e),S(df,g),S(hac,b),Q(ceaf)e. S(hf,h),S(cc,d),Q(cc)d." y: " e d "We generate and concatenate 100'000 queries for the training set and 5'000 queries for the test and validation set. This in a single training sequence of roughly 5.7 million characters and a test and validation sequence of roughly 288'000 characters each. Instead of measuring the accuracy of all predictions we use the partial accuracy which is the percentage of the correct predictions of nonspace characters. 
This is because all models learn very quickly to predict spaces at every step which very quickly yields a high accuracy without actually having learned much. However, the cost and respective gradient are computed using all predictions. For all models and experiments, we used the Nesterov accelerated Adam (Nadam) by BID4. We observed in our experiments that Adam achieves similar performance but tends to converge slower than Nadam. The Model Our final architecture uses an embedding size of 15. The fast network is defined such that h DISPLAYFORM0 ∈ R 40×40 and the slow network such that h S ∈ R 40, S ∈ R 55×100, S ∈ R 100×394 both using a shared embedding of R 15×15 and a shared output projection W ∈ R 40×15 summing up to 46'234 trainable parameters (bias not listed for simplicity). We aimed for a small and very context specific fast network and a slow network big enough to support such a fast-changing fast network while using as few weights as possible. Different configurations are possible and a larger s seems to help the model to converge faster but doesn't help performance. Preliminary experiments with different RNNs, like an RHN or LSTM as the fast and slow network, also showed promising but were not part of this work. Results We provide our best experimental for other architectures for which we performed a hyper parameter search using a mix of grid and random search over each architecture's individual parameters. We compare our architecture to the LSTM, fast weights as a means of attention to the recent past (AttentionFW) by BID0, and HyperNetworks by BID9. We also performed experiments on the feed-forward fast weights and recurrent fast weights as introduced by BID14 BID16 but we were unable to get them to predict anything else than the trivial output on this task. We trained all models using a sequence length of 32 and a batch size of 256. We didn't perform learning rate decay or similar strategies but included the learning rate in our hyper parameter search. In the end, a learning rate of 0.002 achieved over all architectures the best and was in a second phase fixed for all models to allow for a comparison of convergence qualities. Our model achieves the highest partial accuracy and the lowest partial bits per character (BPC) while using 83 times fewer parameters than the next best model. Again, "partial" refers only to the non-space prediction targets. It has been shown before how generalizing a memory mechanism, such as required in this task, is difficult for vanilla RNNs to learn. Several previous works focused on integrating differentiable Figure 2: The left figure represents the accuracy of non-trivial targets and the right figure the respective bits per character. These are the validation set of the best models of the four examined architectures due to our hyper parameter search. Green is the LSTM, dark blue is the fast weights architecture as attention to the recent past, red is the hypernetwork, and cyan is our novel fast weight architecture.computer-like memory into the graph structure of the architecture such that the model wouldn't need to learn the mechanism itself but mainly how to use it. 
Examples of such are differentiable stacks by BID3; BID12, but also related storage types like those in LSTM-controlled Neural Turing Machines BID8 or memory nets BID20.A basic argument against a memory approach inspired by the Turing Machine or the von Neumann architecture is its biological plausibility, as well as, the fact that we know how the human memory system often doesn't really behave as computer memory does. It is generally known to be much more nuanced and forcing an architecture to include a strong and possibly misleading bias would certainly limit its ability to learn and generalize to a more effective mechanism. We think that learning high-level cognitive functions (i.e. high-level programs implemented under the constraints of some artificial neural substrate) is difficult and find the idea to search and reverse engineer every human capability necessary for intelligence in order to engineer it into an architecture to be undesirable. Instead, we favour an approach which focuses on improving the capabilities of the artificial neural substrate which allows for the emergence of higher-level functions through training. We think fast weights are such a component from which many models could benefit. Limitations Fast weights seem to have a positive effect when they are incorporated into an architecture but we experienced at least two practical limitations. While the calculation of the gradient through these fast weight dynamics remains rather cheap, the number of values to be stored in the backward pass encompasses now all time-varying variables (i.e. all fast weights) at each relevant time step. This quadratically increases the memory consumption compared to a similar sized RNN. At the moment, these memory limitations are the main reason why such fast weight networks remain rather small compared to state-of-the-art RNNs one some popular application like e.g. neural machine translation. Another noteworthy limitation is the wall-time necessary for computing a more complex architecture. Reshaping tensors and other simple operations in a significant increase of wall-time. However, over 20 years ago it was pointed out that an RNN can also use additional, soft, end-toend differentiable attention mechanisms to learn to control its own internal spotlights of attention BID15 to quickly associate self-defined patterns through fast weights (on connections between certain units) that can quickly and dramatically change from one time step to the next. This approach can essentially increase the number of time-varying variables massively while keeping the model relatively small. We improved the update mechanism through which the slow network learns to write into its fast weight memory. This allows us to construct a model with a small but memory expensive fast network in addition to the standard slow network. However, the fast weights are not just passive memory like the state but are more like active memory in the sense of a context-specific computation. We force the model to use this active memory at every step to predict the current output by delaying the weight updates from slow network by one step. Consider the model introduced in the previous section. While the slow network is technically bigger, it contains only 40 time-varying variables, namely the state vector h S. The fast network is much smaller but has 3840 time-varying variables (h F, F, and F ). Increasing the total number of time-varying variables significantly. 
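To make the mechanism of Section 3 concrete, the following PyTorch sketch implements one time step of the fast/slow pair for a single sample (batch size 1), with dimensions taken from the experimental setup (embedding size 15, hidden size 40). The exact parameterisation of the two generated outer-product matrices and of the gate in equation 8 is only partially specified above, so the slow-state projections and the highway-style blend are our assumptions; for brevity only the recurrent fast matrix F is updated here, whereas the paper also generates the input-to-hidden fast matrix.

import torch
import torch.nn as nn

class FastSlowCell(nn.Module):
    """One time step of the slow RNN generating a gated, outer-product update
    for the recurrent matrix of the fast RNN (single sample, no batch dimension)."""
    def __init__(self, emb=15, fast=40, slow=40):
        super().__init__()
        self.slow_in  = nn.Linear(emb, slow)                # slow network input weights
        self.slow_rec = nn.Linear(slow, slow, bias=False)   # slow network recurrent weights
        self.fast_in  = nn.Linear(emb, fast)                # kept static here for brevity
        # projections of the slow state that form the two generated matrices
        self.u_gate, self.v_gate = nn.Linear(slow, fast), nn.Linear(slow, fast)
        self.u_cand, self.v_cand = nn.Linear(slow, fast), nn.Linear(slow, fast)
        self.norm = nn.LayerNorm(fast)

    def forward(self, x, h_slow, h_fast, F):
        # slow network: an ordinary RNN transition
        h_slow = torch.tanh(self.slow_in(x) + self.slow_rec(h_slow))
        # fast network: its recurrent weights F are the time-varying, generated matrix
        h_fast = self.norm(torch.tanh(self.fast_in(x) + h_fast @ F.t()))
        # next fast matrix: highway-style blend of two Hebb-like outer products
        gate = torch.sigmoid(torch.outer(self.u_gate(h_slow), self.v_gate(h_slow)))
        cand = torch.tanh(torch.outer(self.u_cand(h_slow), self.v_cand(h_slow)))
        F = gate * cand + (1.0 - gate) * F
        return h_slow, h_fast, F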
In this paper, we introduce a complex sequence-to-sequence variation of the Associative Retrieval problem. In this problem, the model has to learn how to store a number of associations from the input sequence, retrieve them when necessary, and forget them in order to learn new associations. We use a standard RNN to generate weight updates for a fast weight RNN. This allows our model to store temporal information not only in the states of either RNN but also in the weights of the fast weight RNN. Our contribution is a new way of updating the weight matrices of the fast weight RNN, in which we use a gate and two generated matrices instead of one. Without this contribution, the model never learned any non-trivial predictions in our experiments. We compare our model with other architectures on this general task and show that it outperforms them in convergence speed, accuracy, and number of parameters.
An improved Fast Weight network which shows better results on a general toy task.
783
scitldr
The field of deep learning has been craving for an optimization method that shows outstanding property for both optimization and generalization. We propose a method for mathematical optimization based on flows along geodesics, that is, the shortest paths between two points, with respect to the Riemannian metric induced by a non-linear function. In our method, the flows refer to Exponentially Decaying Flows (EDF), as they can be designed to converge on the local solutions exponentially. In this paper, we conduct experiments to show its high performance on optimization benchmarks (i.e., convergence properties), as well as its potential for producing good machine learning benchmarks (i.e., generalization properties). Due to recent progress in the field of machine learning, it becomes more and more important to develop and sophisticate methods of solving hard optimization problems. At the same time, in this field, such methods are additionally required to elicit decent generalization performance from statistical models. An efficient method of mathematical optimization, however, does not always produce sufficient generalization properties, since these are involved with two distinct mathematical problems; The former is to find one of the solutions which minimize a given (possibly non-convex) objective function, and the latter is to adjust parameters so that a statistical estimator achieves its best . To address such a hard issue, we introduce a new mathematical perspective on optimization, and develop a method for machine learning based on this perspective. We then empirically show its rapid convergence rate and high compatibility with deep learning techniques, as well as good statistical properties. In this field, many optimization methods have been proposed and modified so that they fit specific problems or models. One of the current standard methods is the gradient descent method. The method tends to converge slowly in general optimization problems. However, with various specific techniques, such as mini-batch training and batch normalization BID9 ), it has been found to be efficient for state-of-the-art purposes in the field of deep learning. Another class of methods that are now becoming popular and standard is adaptive methods, such as AdaGrad BID6 ) and Adam . Compared to the gradient descent method, these methods have been shown to improve convergence rates with almost the same computational cost as the gradient descent method, but are reported to in poor statistical outcomes in some cases of machine learning BID16 ).Other class of methods that have been thoroughly studied in the theory of mathematical optimization is second-order methods, such as the Newton method and the Gauss-Newton method. These methods possess great convergence properties, and in particular, have a potential to overcome plateau's problems BID5 ). Furthermore, when it comes to applications in stochastic settings, the method based on the Gauss-Newton Matrix (or Fisher information Matrix) is shown to asymptotically attain the best statistical , which is called Fisher efficiency (see BID0). Despite these attractive characteristics, the methods have not yet been spotlighted in the field of machine learning due to several severe drawbacks; They suffer from high computational cost in general and their useful properties are no longer guaranteed in practical settings (see Section 12 in BID12). 
One of the continuously developing second-order methods in this field, K-FAC BID1, BID7 ), successfully produced high convergence rate empirically with relatively low computational cost. However, it still requires much effort to become compatible with some deep learning techniques. In addition, it is unclear whether the method has advantages in generalization performance. In our approach, by introducing a Riemannian metric induced by non-linear functions, we constitute dynamical systems which describe motions along the shortest route from arbitrary initial points to the zeros of non-linear functions on the corresponding Riemannian manifold, that is, geodesic with respect to the Riemannian metric. One of the remarkable characteristics of our approach is that it enables us to flexibly design flows of such dynamical systems to control convergence rates. The for the flows are then applicable to mathematical optimization problems, in particular, with deep neural network (DNN) models. In this paper, after providing mathematical ground of our methods, we experimentally demonstrate their performance in various aspects, from convergence rates to statistical properties. We start by establishing some essential properties of dynamics which are effective for the analysis of non-linear equations. Let F: R N → R N be a smooth function and J be the Jacobian with variable w ∈ R N, that is, J = ∂F/∂w. In this section, we deal with the well-posed case that there exists a connected closed subset Ω ⊂ R N, where J is regular and the equation has a unique solution ξ. Therefore, the positive matrix G = J T J induces a Riemannian metric g on Ω and (Ω, g) becomes a Riemannian manifold under some appropriate conditions. Let us then consider time evolution of variable w on this manifold. Our main purpose in this section is to study the characteristics of dynamical systems in which w(t) moves on geodesics between any point in Ω and ξ with respect to the metric g. Let L be a Lagrangian given by DISPLAYFORM0 with v = dw/dt (also written asẇ). The Euler-Lagrange equation for L is then expressed as dp dt DISPLAYFORM1 with momentum vector p = Gv. If the boundary condition at two points in Ω, w(t 0) = w 0, w(t 1) = w 1, is imposed on, a geodesic between w 0 and w 1 is obtained as the solution. In contrast, if we give an appropriate initial condition, w describes the motion along the geodesic from w 0 to w 1. In fact, the following statement holds; Theorem 2.1. Let w 0 be an arbitrary point in Ω and (w(t), p(t)) be a solution of equation FORMULA1 with the following initial condition; DISPLAYFORM2 Then w satisfies F w(t) = (1 − t)F w 0, for t ∈. In particular, w(t) passes through the point which is a solution of non-linear equation DISPLAYFORM3 We briefly describe the outline of the proof for the statement above. Throughout this paper, we regard functions of w as those of t in the trivial way; for instance, F (t) = F (w(t)). Note that p can be expressed as p = J T dF/dt. Then using the Beltrami identity for with the initial condition above leads to the equation DISPLAYFORM4 where F 0 = F. Thus, a closed form expression is obtained as DISPLAYFORM5 which gives F = 0 as asserted in Theorem 2.1. Now we take a different expression that the coefficient (1 − t) in is replaced by a different monotonically decreasing function, that is, DISPLAYFORM6 where ρ denotes a monotonically decreasing smooth function from (t 0, t 1) onto. 
Then, we give the following differential equation whose solution is of the closed form; DISPLAYFORM7 A motion described by this differential equation differs from the one that is described by, but these two motions are along the same geodesic. Theorem 2.2. Let w 0 ∈ Ω be an arbitrary point. The differential equation DISPLAYFORM8 with an initial condition w 0 = w(t 0) has a unique solution that satisfies F (w(t 1)) = 0. Furthermore, the orbit under flow f defined by f (w 0, t) = w(t) coincides with that of the geodesic equation.Note that since equation FORMULA8 is equivalent to equation FORMULA7, the orbit is invariant under coordinate transformations. With respect to the choice of ρ, the end point t 1 can be set as ∞ under some appropriate conditions and Theorem 2.2 still holds in the sense that DISPLAYFORM9 In particular, if we set ρ(t) = e −t, then χ(t) = −1 and F can be represented as F (t) = e −t F 0, so that the convergence rate of F is exponential. Definition 2.3. Let w be a solution of the differential equation DISPLAYFORM10 with an initial condition w 0 = w(t 0). The flow f (w 0, t) = w(t) is called an exponentially decaying flow (EDF) of non-linear equation F (w) = 0.For the end of this section, we present a connection between EDF and the classical Newton method. If we apply the Euler method to the differential equation with step size t, the corresponding iteration step can be written as DISPLAYFORM11 which recovers the Newton method with step size η i = t · χ(τ i). Consider smooth functions ϕ: DISPLAYFORM0 In this section, we develop a method based on EDF for a mathematical optimization problem of the form DISPLAYFORM1 In this section, the area Ω ⊂ R N is supposed to be a compact subset where there exists no stationary point except for a minima. DISPLAYFORM2 An example of such problems is least squares one in which L is given by DISPLAYFORM3 In this case, F = ϕ. In particular, if M = N and a minimal value of loss function L = L • ϕ is zero, the optimization problem is equivalent to solving non-linear equation F (w) = 0.For optimization problems, we set up a standard equation that the gradient of loss function is zero, that is, ∇ L(w) = 0. Note that the gradient can be expressed as ∇ L(w) = J T ϕ F (w) with Jacobian J ϕ = ∂ϕ/∂w. Applying Theorem 2.2, we obtain the differential equation DISPLAYFORM4 where H is the Hessian of loss function L with respect to w. Since second order differentiation is typically computationally heavy, we seek other equations which possess almost the same property as especially in terms of asymptotic behavior. Let us decompose momentum Hẇ as DISPLAYFORM5 where J F denotes the Jacobian of F, and G is a symmetric matrix defined by DISPLAYFORM6 with Hessian matrix H L of L with respect to ϕ. We then consider the following equation instead of FORMULA0; DISPLAYFORM7 This equation no longer describes the motion along the geodesic related to in general. However, if M = N and J ϕ is invertible, then w moves on another geodesic with respect to a different metric DISPLAYFORM8 In addition, if ρ(t) = e −t, F converges exponentially, which implies that ∇ L = J ϕ F also converges exponentially in as well as in FORMULA0. In general cases, if a condition that DISPLAYFORM9 is satisfied, then F converges to 0. This shows that in the neighborhood of solution ξ of equation ∇ L = 0, the momentum Gẇ sufficiently approximates Hẇ by. Definition 3.1. The flow given by DISPLAYFORM10 is referred to EDF of type H and written as EDF-H. 
Similarly, the flow given by DISPLAYFORM11 is referred to EDF of type G and written as EDF-G. Like second order methods, in EDF-based methods, matrix inversion has to be carried out, which requires expensive computational cost, particularly in large systems. Moreover, we often encounter rank deficient matrices in optimization problems. To deal with rank deficiency, in general, we need pseudo-inverse matrices which are more computationally heavy. Therefore, instead of the inverse of matrix A = G, H, for fixed v ∈ R M, we consider a projection which maps r = arg min DISPLAYFORM0 One of the basic methods to construct such a projection for indefinite symmetric systems is the minimum residual method BID13 ), which requires only the matrix multiplication. Therefore, for numerical computations, we use the following differential equation approximated by the k-th order Krylov subspace; DISPLAYFORM1 k is a hyperparameter that interpolates between the gradient method and the method based on or. In fact, in the case that k = 1, equation FORMULA1 has the form DISPLAYFORM2 which reduces to a kind of gradient methods, but the coefficient c conveys information about G or H unlike the standard gradient methods. Next, similar to the Levenberg-Marquardt algorithm, we modify equation FORMULA0 by adding a damping factor to G in order that the method becomes more stable. So, we set DISPLAYFORM3 where λ is a continuous positive function of t. Then, we take the same approximation as FORMULA1 with A = G+λI. The damping factor λ plays several important roles in solving practical problems. First, it has solutions well-behaved near singularities. Even in the case that G is not invertible, equation FORMULA1 can be defined and solved unlike. If we choose λ such that it approaches to 0 as rapidly as the gradient in, the asymptotic behavior of FORMULA1 is almost the same as that of FORMULA0. Particularly, in the case that χ = −1, we set λ = a J T ϕ F b with a, b > 0, so that the convergence rate of stays exponential. Second, the damping factor makes the method compatible with stochastic approaches such as mini-batch training, in deep learning models. Since the orbit of FORMULA1 is affected by the gradient due to the damping factor λ, the method based on this equation could take advantage of stochastic approaches as other gradient methods do. (For implementation of the algorithm, see Appendix A.) Finally, to accelerate the EDF-based methods, it is sometimes effective to change equation FORMULA0 into a second-order differential equation, particularly in the case in which the approximation method with k is applied. Specifically, we take the equation DISPLAYFORM4 where κ is a real-valued function of t. The convergence properties of the methods are controlled by the following differential equation; DISPLAYFORM5 There are two ways of determining κ. The first one is to set κ = α with constant α (W1), which leads to a similar scheme to the momentum gradient decent method. The other one is to set κ(t) = αt −1 (W2). In this setting, the equation can be discretized in a similar way as described in BID15, which is analogous to the Nesterov's acceleration scheme. In this section, we present how optimization problems are set up in the field of deep learning. Let x = {x j} n−1 j=0 and y = {y j} n−1 j=0 denote training datasets of input and output, respectively, where n is the size of dataset. Let d x and d y be dimensions of input data and output data, respectively. 
We write ϕ nn for neural networks with parameters w ∈ R N, and define ϕ by the direct sum of vectors {ϕ j} given by ϕ j (w) = ϕ nn (x j, w), that is, ϕ = ⊕ϕ j. Note that M = n × d y in this case. Then finding a minima of a given loss function is proposed as a standard optimization problem to train networks. For the image classification tasks, there are two typical cases of setting loss functions. In the first case, the loss is set as. As already mentioned, in this case, F = ϕ and H L = I. In the second case, the loss is given by cross entropy with softmax function, that is, DISPLAYFORM0 where θ denotes the softmax function and θ j = θ(ϕ j). In this case, F is expressed by the direct sum such that F = ⊕F j with DISPLAYFORM1 where s j denotes a sum of all elements in vector y j for each j. Note that if each y j is given as a probability vector, then s j = 1. Moreover, H L is expressed as H L = ⊕H j with DISPLAYFORM2 where ⊗ denotes the outer product. In both cases, the loss functions take the minimum value 0 if and only if F = 0. In this study, we conducted following three groups of experiments; First, we examined optimization performance of the EDF-based methods on a data-fitting problem with a simple convolutional neural network model. Second, we tested both optimization and generalization performance in standard settings of classification tasks in which we employed residual network models and CIFAR-10/100 datasets. Third, we incorporated some techniques of data augmentation and regularization in the training into our experiment as we would like to measure effectiveness of our methods in more practical settings. The primary purpose of the experiments in this paper is not to pursue the state-of-the-art performance, but to explore how the statistical pertain to those of optimization when the loss functions are minimized. Therefore, we tuned hyperparameters of each method in accordance with the optimization benchmark. It should be noted that the conjecture concerning non-convex optimization problems in the field of deep learning BID2, BID4 ) is still an open problem (studied for linear cases in BID3, BID10). Hence, for experiments in this paper, we do not discuss whether each optimization method actually reaches to a global solution. We evaluated convergence performance of EDF-based methods (type G and type H) on a data-fitting problem of CIFAR-10, that is, full-batch training in the context of deep learning. The model we employed in these experiments was the convolutional neural network that consisted of two convolution filters with rectified linear units and max pooling layers, followed by two fully connected layers. For EDF-based methods, the step size was fixed to 1.0, and no damping factors were used. In addition, the acceleration technique derived from W2 of was adapted, since W2 achieved better performance than W1. A similar experiment with a different type of second-order methods was conducted in BID14.First, we examined change in convergence performance of EDF-G depending on hyperparameter k, the order of Krylov subspace. The are illustrated in the left-hand side of FIG0. The presented trend is consistent with the theoretical fact that k interpolates between the gradient method (for small k) and the method based on dynamics (for large k). In other words, when k is small, the method is similar to a gradient method, which converges slow, but as k becomes larger, the method leads to better approximation of the inverse matrix, which gives a rapidly decaying flow. 
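The DISPLAYFORM placeholders above elide the exact expressions for F and H_L in the cross-entropy case. The sketch below uses the standard gradient and Hessian of softmax cross-entropy with respect to the logits, which is what the surrounding text appears to describe (with s the sum of the label vector), and checks the residual F against finite differences; all names and values are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Cross-entropy of one example: L(z) = -sum_c y_c * log softmax(z)_c.
def loss(z, y):
    return -np.dot(y, np.log(softmax(z)))

# Standard closed forms for the residual F and the per-example Hessian H_L
# with respect to the logits z, where s is the sum of the label vector y.
def F_and_H(z, y):
    theta, s = softmax(z), y.sum()
    F = s * theta - y                                   # gradient w.r.t. logits
    H = s * (np.diag(theta) - np.outer(theta, theta))   # Hessian w.r.t. logits
    return F, H

# Quick finite-difference check of F.
rng = np.random.default_rng(0)
z, y = rng.normal(size=5), np.eye(5)[2]
F, H = F_and_H(z, y)
num = np.array([(loss(z + 1e-5 * e, y) - loss(z - 1e-5 * e, y)) / 2e-5 for e in np.eye(5)])
print(np.max(np.abs(F - num)))   # should be tiny
```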
Next, we compared EDF-G (k = 1, 30) with EDF-H and other standard optimizers in deep learning: gradient descent methods with Polyaks momentum scheme (Momentum) and Nesterov's acceleration scheme (NAG), and Adam. The step sizes for Momentum, NAG, and Adam were fixed to 0.01, 0.001, and 0.001, respectively. The right-hand side of FIG0 shows that both EDF-based methods made the loss decrease more rapidly than other standard optimizers. Even when EDF-based methods were reduced to the gradient methods at k = 1, EDF-G outperformed standard optimizers. The figure also presents the difference between EDF-G and EDF-H. While their convergence rates around extremes were almost the same, the overall convergence rates of EDF-G were better than that of EDF-H.As has been found, second-order-methods on full-batch training converge to the solution within a few iterations. However, with such a training, the quality of generalization becomes worse compared to the time when stochastic approach with a mini-batch of small size is taken. Like other secondorder-methods, EDF also suffers from a negative impact of full-batch training on generalization performance, though setting the hyperparameter k small makes EDF be compatible with stochastic approaches and take their advantages (see Appendix B). In the following experiments, we compared both optimization and generalization performance of EDF-G with other methods, momentum stochastic gradient decent method (MSGD) and Adam on classification tasks for CIFAR-10/100. The experiments were conducted, employing residual network models with batch normalization (Resnet-56 for CIFAR-10 and Resnet-110 for CIFAR-100), and working with a mini-batch of size 250.For CIFAR-100, during pre-investigation of the dataset, we found that 9 pairs of data, each of which has the same image but different labels, contaminated the neural network and might have had a non-negligible influence on both optimization and generalization performance (See Appendix C). Therefore, in our experiments on CIFAR-100, we excluded them from the training dataset and used the rest of the data (size = 49982).At the end of each epoch, we computed training loss in the following manner. First, the loss was defined as FORMULA1 where n is the total number of data. Second, to calculate the statistics for batch normalization in the loss, as described in BID9, we adopted so-called "inference mode" in which moving averages of mean and variance over mini-batches are used for batch-normalization. For EDF, we tested the case k = 1 and k = 2, with the damping factor set as a = b = 1. The momentum coefficient was fixed to α = 0.9 for the W1 acceleration (see 25).For each of the tasks, we ran tests on 10 different learning rates; for EDF between 0.1 and 2.0, for Momentum SGD between 0.1 and 10.0, for Adam between 0.0001 and 0.1. We then chose the one with the best convergence performance for each optimizer. For EDF with W2, to obtain stability in convergence, the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. Such a change in learning rate in the middle of optimization did not bring advantages to either optimization or generalization performances for other methods including EDF with W1 (see Appendix D).The are presented in Figures 2 and 3. As shown in the figures, with respect to the optimization performance, EDF reached an optimal solution with smaller error at higher convergence rate than Adam and Momentum SGD, even when k = 1, 2. 
Moreover, EDF overall attained better generalization performance than the other methods. For classification tasks, generalization performance often improves by adopting several techniques, such as data augmentation and regularization. In this group of experiments, employing data augmentation based on random horizontal flip and shift and the L2 regularization, we conducted comparisons between EDF and the other methods, similar to those in the previous sections. When adopting these techniques, the rate decay scheme has as large an impact on optimization performance as the learning rate itself. Because the effectiveness of each rate decay scheme depends strongly on the optimization method, it is almost impossible to find the best scheme that is applicable to all the methods. For this reason, for MSGD and Adam, we chose the standard initial learning rates of 0.1 and 0.001, respectively, and at the ends of the 100-th, 140-th, and 180-th epochs, reset the rates to 0.1 times the rates at that moment. For EDF, we ran tests on two different rates, 0.5 and 0.75, the optimal rates found in the experiments of Section 6.2, and reset them to 0.2 times the initial values only at the end of the 100-th epoch, in order to demonstrate the performance of EDF more clearly. Among the results obtained with EDF, we chose the ones with the best optimization performance. The results of the comparison are presented in Figures 4 and 5. As can be seen, we found a condition in which EDF achieved better performance than the other methods on optimization while achieving sufficient levels of generalization. Obtaining good statistical results from limited available data is a critical goal in machine learning. To reach this goal, while developing an effective model is an essential approach, eliciting the best performance from the fixed model through optimization is important as well. In our study, to examine the performance of our optimization methods, the Exponentially Decaying Flows (EDF) based methods, we explored their generalization properties pertaining to the results of optimization. Our experiments showed that EDF-based methods are more likely than other standard optimizers to achieve optimal solutions which generalize well to the test data. Therefore, EDF-based methods are considered to be optimization methods with high potential in their application to various tasks, and thus are worthwhile to refine through future studies. In terms of computation of the EDF-based methods with GPUs, the Jacobian-vector product can be carried out at almost the same cost as the gradient of the loss function. In fact, multiplying a vector by the Jacobian and by its transposed matrix (written as R-op and L-op, respectively) is implemented in combination with gradients of scalar functions. For the pseudo-code of the update scheme of EDF-G with L/R-op, refer to Algorithm 1, and to Algorithm 2 for the particular case k = 1. Algorithm 1: Update scheme for EDF-G with non-preconditioned MINRES. Input: FIG3 shows the results of experiments using EDF on simple examples that compare full-batch training with stochastic trainings. In this example, a convolutional network similar to that used in Section 6.1 was employed on MNIST. The curve labeled "EDF F" depicts the results of full-batch training per step, and those labeled "EDF S" illustrate the results of stochastic trainings per epoch with a mini-batch of size 500.
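Since the text notes that the Jacobian-vector products come from L-op and R-op built out of gradients of scalar functions, here is a minimal PyTorch sketch of both operations via the standard double-backward trick, checked on a small linear map. This illustrates the general technique only; it is not the paper's Algorithm 1, and the tensor shapes are arbitrary.

```python
import torch

# L-op: J^T u (vector-Jacobian product) comes directly from reverse mode.
def Lop(y, x, u):
    return torch.autograd.grad(y, x, grad_outputs=u, retain_graph=True)[0]

# R-op: J v (Jacobian-vector product) via the "double backward" trick:
# differentiate the L-op with respect to a dummy vector.
def Rop(y, x, v):
    w = torch.zeros_like(y, requires_grad=True)
    g = torch.autograd.grad(y, x, grad_outputs=w, create_graph=True)[0]  # J^T w
    return torch.autograd.grad(g, w, grad_outputs=v, retain_graph=True)[0]

# Tiny check on a linear map y = A x, where the Jacobian is A itself.
A = torch.randn(3, 4)
x = torch.randn(4, requires_grad=True)
y = A @ x
v, u = torch.randn(4), torch.randn(3)
print(torch.allclose(Rop(y, x, v), A @ v))      # J v
print(torch.allclose(Lop(y, x, u), A.T @ u))    # J^T u
```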
DISPLAYFORM0 Let us divide the dataset {x, y} = {x_j, y_j}_{j=0}^{n-1} into distinct subsets of size p such that DISPLAYFORM0 Then, gradients and Jacobian-vector multiplications are calculated as DISPLAYFORM1 where ϕ^(i) and F^(i) are the subcomponents of ϕ and F, respectively, corresponding to the decomposition above. Thus, for a fixed k, if we set p to the same size as a mini-batch, then the computational cost per step of full-batch training becomes almost the same as that per epoch of mini-batch training. The dataset of CIFAR-100 includes 9 pairs of irregular data, each of which has the same image but different labels. These data are enumerated in Figure 7. For instance, the 8393-rd and the 36874-th images are the same, but they have different labels, "girl (class 35)" and "baby (class 2)." Such pairs contaminate the training process. In fact, in our experiment, when the network model was optimized with the full dataset, one of the images in each of the 9 pairs above could not be classified correctly, which resulted in stagnated training accuracy at 99.982%. Moreover, generalization also deteriorated when irregular data were contained. For the details of these results, see Figure 8. D PERFORMANCES WITH THE RATE DECAY SCHEME Figure 9 and FIG0 present the optimization and generalization performance of each optimizer when the rate decay scheme was adopted in the experiment with the same setting as in Figure 2 and Figure 3. The rate decay scheme was that the learning rate was set to 0.2 times the initial value at the end of the 20-th epoch.
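A minimal sketch of the chunked computation described at the start of this passage, accumulating the gradient and the Jacobian-vector product over subsets of size p. It assumes a linear least-squares model so that the per-chunk Jacobian is explicit; the sizes and data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 1000, 8, 100            # dataset size, parameter dimension, chunk size
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
w = rng.normal(size=d)

# For the toy model phi_j(w) = x_j . w, the residual is F = X w - y and the
# Jacobian of the i-th chunk is just the corresponding rows of X.
def gradient_in_chunks(w):
    g = np.zeros(d)
    for start in range(0, n, p):
        Xi = X[start:start + p]
        Fi = Xi @ w - y[start:start + p]     # F^(i): residuals of the i-th subset
        g += Xi.T @ Fi                       # accumulate J^(i)T F^(i) over subsets
    return g

def jvp_in_chunks(w, v):
    return np.concatenate([X[s:s + p] @ v for s in range(0, n, p)])  # J v, chunk by chunk

print(np.allclose(gradient_in_chunks(w), X.T @ (X @ w - y)))
```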
Introduction of a new optimization method and its application to deep learning.
784
scitldr
We introduce Quantum Graph Neural Networks (QGNN), a new class of quantum neural network ansatze which are tailored to represent quantum processes which have a graph structure, and are particularly suitable to be executed on distributed quantum systems over a quantum network. Along with this general class of ansatze, we introduce further specialized architectures, namely, Quantum Graph Recurrent Neural Networks (QGRNN) and Quantum Graph Convolutional Neural Networks (QGCNN). We provide four example applications of QGNN's: learning Hamiltonian dynamics of quantum systems, learning how to create multipartite entanglement in a quantum network, unsupervised learning for spectral clustering, and supervised learning for graph isomorphism classification. Variational Quantum Algorithms are a promising class of algorithms are rapidly emerging as a central subfield of Quantum Computing (; ;). Similar to parameterized transformations encountered in deep learning, these parameterized quantum circuits are often referred to as Quantum Neural Networks (QNNs). Recently, it was shown that QNNs that have no prior on their structure suffer from a quantum version of the no-free lunch theorem and are exponentially difficult to train via gradient descent. Thus, there is a need for better QNN ansatze. One popular class of QNNs has been Trotter-based ansatze . The optimization of these ansatze has been extensively studied in recent works, and efficient optimization methods have been found (b;). On the classical side, graph-based neural networks leveraging data geometry have seen some recent successes in deep learning, finding applications in biophysics and chemistry . Inspired from this success, we propose a new class of Quantum Neural Network ansatz which allows for both quantum inference and classical probabilistic inference for data with a graph-geometric structure. In the sections below, we introduce the general framework of the QGNN ansatz as well as several more specialized variants and showcase four potential applications via numerical implementation. Graph Neural Networks (GNNs) date back to who applied neural networks to acyclic graphs. and developed methods that learned node representations by propagating the information of neighbouring nodes. Recently, GNNs have seen great breakthroughs by adapting the convolution operator from CNNs to graphs (; ; ; ; ; ;). Many of these methods can be expressed under the message-passing framework . n×n is the adjacency matrix, and X ∈ R n×d is the node feature matrix where each node has d features. where H (k) ∈ R n×d are the node representations computed at layer k, P is the message propagation function and is dependent on the adjacency matrix, the previous node encodings and some learnable parameters W (k). The initial embedding, H is naturally X. One popular implementation of this framework is the GCN which implements it as follows: whereà = A + I is the adjacency matrix with inserted self-loops,D = jà ij is the renormalization factor (degree matrix). Consider a graph G = {V, E}, where V is the set of vertices (or nodes) and E the set of edges. We can assign a quantum subsystem with Hilbert space H v for each vertex in the graph, forming a global Hilbert space H V ≡ v∈V H v. Each of the vertex subsystems could be one or several qubits, a qudit, a qumode , or even an entire quantum computer. One may also define a Hilbert space for each edge and form H E ≡ e∈E H e. The total Hilbert space for the graph would then be H E ⊗ H V. 
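As a point of reference for the classical message-passing framework reviewed earlier in this section, here is a minimal NumPy sketch of one GCN propagation layer. The text leaves the GCN update partially elided, so the code follows the standard Kipf and Welling rule (self-loops, symmetric degree normalization, linear transform, ReLU); the toy graph, features, and weights are illustrative.

```python
import numpy as np

# One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
def gcn_layer(A, H, W):
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric renormalization
    return np.maximum(A_hat @ H @ W, 0.0)       # message passing + node update + ReLU

# Tiny 4-node path graph, 3 input features per node, 2 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H, W = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
print(gcn_layer(A, H, W))
```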
For the sake of simplicity and feasibility of numerical implementation, we consider this to be beyond the scope of the present work. The edges of the graph dictate the communication between the vertex subspaces: couplings between degrees of freedom on two different vertices are allowed if there is an edge connecting them. This setup is called a quantum network with topology given by the graph G. The most general Quantum Graph Neural Network ansatz is a parameterized quantum circuit on a network which consists of a sequence of Q different Hamiltonian evolutions, with the whole sequence repeated P times:Û where the product is time-ordered , the η and θ are variational (trainable) parameters, and the HamiltoniansĤ q (θ) can generally be any parameterized Hamiltonians whose topology of interactions is that of the problem graph: Here the W qrjk and B qrv are real-valued coefficients which can generally be independent trainable parameters, forming a collection are Hermitian operators which act on the Hilbert space of the j th node of the graph. The sets I jk and J v are index sets for the terms corresponding to the edges and nodes, respectively. To make compilation easier, we enforce that the terms of a given HamiltonianĤ q commute with one another, but differentĤ q's need not commute. In order to make the ansatz more amenable to training and avoid the barren plateaus (quantum parametric circuit no free lunch) problem , we need to add some constraints and specificity. To that end, we now propose more specialized architectures where parameters are tied spatially (convolutional) or tied over the sequential iterations of the exponential mapping (recurrent). We define quantum graph recurrent neural networks as ansatze of the form of equation 3 where the temporal parameters are tied between iterations, η pq → η q. In other words, we have tied the parameters between iterations of the outer sequence index (over p = 1, . . ., P). This is akin to classical recurrent neural networks where parameters are shared over sequential applications of the recurrent neural network map. As η q acts as a time parameter for Hamiltonian evolution under H q, we can view the QGRNN ansatz as a Trotter-based quantum simulation of an evolution e −i∆Ĥ eff under the HamiltionianĤ eff = ∆ −1 q η qĤq for a time step of size ∆ = η 1 = q |η q |. This ansatz is thus specialized to learn effective quantum Hamiltonian dynamics for systems living on a graph. In Section 3 we demonstrate this by learning the effective real-time dynamics of an Ising model on a graph using a QGRNN ansatz. Classical Graph Convolutional neural networks rely on a key feature: that of permutation invariance. In other words, the ansatz should be invariant under permutation of the nodes. This is analogous to translational invariance for ordinary convolutional transformations. In our case, permutation invariance manifests itself as a constraint on the Hamiltonian, which now should be devoid of local trainable parameters, and should only have global trainable parameters. The θ parameters thus become tied over indices of the graph: W qrjk → W qr and B qrv → B qr. A broad class of graph convolutional neural networks we will focus on is the set of so-called Quantum Alternating Operator Ansatze , the generalized form of the Quantum Approximate Optimization Algorithm ansatz . We can take inspiration from the continuous-variable quantum approximate optimization ansatz introduced in Verdon et al. 
(2019a) to create a variant of the QGCNN: the Quantum Spectral Graph Convolutional Neural Network (QSGCNN). We show here how it recovers the mapping of Laplacianbased graph convolutional networks in the Heisenberg picture, consisting of alternating layers of message passing, node update, and nonlinearities. Consider an ansatz of the form from equation 3 with four different Hamiltonians (Q = 4) for a given graph. First, for a weighted graph G with edge weights Λ jk, we define the coupling Hamiltonian aŝ The Λ jk here are the weights of the graph G, and are not trainable parameters. The operators denoted here byx j are quantum continuous-variable position operators, which can be implemented via continuous-variable (analog) quantum computers or emulated using multiple qubits on digital quantum computers . After evolving byĤ C, which we consider to be the message passing step, one applies an exponential of the kinetic Hamiltonian,Ĥ K ≡ 1 2 j∈Vp 2 j. Herep j denotes the continuous-variable momentum (Fourier conjugate) of the position, obeying the canonical commutation relation [x j,p j] = iδ jk. We consider this step as a node update step. In the Heisenberg picture, the evolution generated by these two steps maps the position operators of each node according to where L jk = δ jk v∈V Λ jv − Λ jk is the Graph Laplacian matrix for the weighted graph G. We can recognize this step as analogous to classical spectral-based graph convolutions. One difference to note here is that momentum is free to accumulate between layers. Next, we must add some non-linearity in order to give the ansatz more capacity. 1 The next evolution is thus generated by an anharmonic HamiltonianĤ A = j∈V f (x j), where f is a nonlinear function of degree greater than 2, e.g., a quartic potential of the form f (x j) = ((x j − µ) 2 − ω 2 ) 2 for some µ, ω hyperparameters. Finally, we apply another evolution according to the kinetic Hamiltonian. These last two steps yield an update which acts as a nonlinear mapping. By repeating the four evolution steps described above in a sequence of P layers, i.e., with variational parameters θ = {α, β, γ, δ}, we then recover a quantum-coherent analogue of the node update prescription of in the original graph convolutional networks paper. Learning the dynamics of a closed quantum system is a task of interest for many applications , including device characterization and validation. In this example, we demonstrate that a Quantum Graph Recurrent Neural Network can learn effective dynamics of an Ising spin system when given access to the output of quantum dynamics at various times. Our target is an Ising Hamiltonian with transverse field on a particular graph, We are given copies of a fixed low-energy state |ψ 0 as well as copies of the state |ψ T ≡ U (T) |ψ 0 = e −iTĤtarget for some known but randomly chosen times T ∈ [0, T max]. Our goal is to learn the target Hamiltonian parameters {J jk, Q v} j,k,v∈V by comparing the state |ψ T with the state obtained by evolving |ψ 0 according to the QGRNN ansatz for a number of iterations P ≈ T /∆ (where ∆ is a hyperparameter determining the Trotter step size). We achieve this by training the parameters via Adam gradient descent on the average infidelity 2 averaged over batch sizes of 15 different times T. Gradients were estimated via finite difference differentiation with step size = 10 −4. The fidelities (quantum state overlap) between the output of our ansatz and the time-evolved data state were estimated via the quantum swap test . 
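A small classical-simulation sketch of the Hamiltonian-learning objective just described: the average infidelity between the data states e^{-iT Ĥ_target}|ψ₀⟩ and the states produced by a QGRNN-style Trotterized transverse-field Ising ansatz with candidate parameters. Exact matrix exponentials on a 3-qubit line graph stand in for the quantum device and the swap-test fidelity estimate, and the graph, parameters, time step, and initial state are toy choices, not the paper's setup.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2); Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_at(positions, op, n):
    """Operator acting with `op` on the listed qubits and identity elsewhere."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j in positions else I2)
    return out

# Transverse-field Ising Hamiltonian on a 3-node line graph (toy example).
edges, n = [(0, 1), (1, 2)], 3
def ising(J, Q):
    H = sum(Jjk * kron_at(e, Z, n) for e, Jjk in zip(edges, J))
    return H + sum(Qv * kron_at([v], X, n) for v, Qv in enumerate(Q))

# QGRNN-style ansatz: P Trotter steps of size dt alternating the ZZ and X terms,
# i.e. time parameters tied across layers.
def qgrnn_state(J, Q, psi0, T, dt=0.05):
    Uzz = expm(-1j * dt * ising(J, np.zeros(n)))
    Ux = expm(-1j * dt * ising(np.zeros(len(edges)), Q))
    psi = psi0.copy()
    for _ in range(int(round(T / dt))):
        psi = Ux @ (Uzz @ psi)
    return psi

# Average infidelity against states evolved under the target Hamiltonian;
# the paper estimates this overlap with a swap test, here it is computed exactly.
def avg_infidelity(J, Q, H_target, psi0, times):
    return float(np.mean([1.0 - abs(np.vdot(expm(-1j * T * H_target) @ psi0,
                                             qgrnn_state(J, Q, psi0, T))) ** 2
                          for T in times]))

rng = np.random.default_rng(0)
H_target = ising(J=[0.6, -0.4], Q=[0.3, 0.2, 0.5])
psi0 = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
psi0 /= np.linalg.norm(psi0)
times = rng.uniform(0.0, 2.0, size=15)
print(avg_infidelity([0.6, -0.4], [0.3, 0.2, 0.5], H_target, psi0, times))  # near 0
print(avg_infidelity([1.0, 1.0], [0.0, 0.0, 0.0], H_target, psi0, times))   # larger
```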
The ansatz uses a Trotterization of a random densely-connected Ising Hamiltonian with transverse field as its initial guess, and successfully learns the Hamiltonian parameters within a high degree of accuracy as shown in Fig. 1a. A picture of the quantum network topology is inset. Right: Quantum phase kickback test on the learned GHZ state. We observe a 7x boost in Rabi oscillation frequency for a 7-node network, thus demonstrating we have reached the Heisenberg limit of sensitivity for the quantum sensor network. Quantum Sensor Networks are a promising area of application for the technologies of Quantum Sensing and Quantum Networking/Communication . A common task considered where a quantum advantage can be demonstrated is the estimation of a parameter hidden in weak qubit phase rotation signals, such as those encountered when artificial atoms interact with a constant electric field of small amplitude . A well-known method to achieve this advantange is via the use of a quantum state exhibiting multipartite entanglement of the Greenberger-Horne-Zeilinger kind, also known as a GHZ state . Here we demonstrate that, without global knowledge of the quantum network structure, a QGCNN ansatz can learn to prepare a GHZ state. We use a QGCNN ansatz withĤ 1 = {j,k}∈EẐ jẐk andĤ 2 = j∈VX j. The loss function is the negative expectation of the sum of stabilizer group generators which stabilize the GHZ state (Tóth & Gühne, 2005), i.e., for a network of n qubits. Results are presented in Fig. 1b. Note that the advantage of using a QGNN ansatz on the network is that the number of quantum communication rounds is simply proportional to P, and that the local dynamics of each node are independent of the global network structure. In order to further validate that we have obtained an accurate GHZ state on the network after training, we perform the quantum phase kickback test on the network's prepared approximate GHZ state . 3 We observe the desired frequency boost effect for our trained network preparing an approximate GHZ state at test time, as displayed in Figure 2. As a third set of applications, we consider applying the QSGCNN from Section 2 to the task of spectral clustering . Spectral clustering involves finding low-frequency eigenvalues of the graph Laplacian and clustering the node values in order to identify graph clusters. In Fig. 3 we present the for a QSGCNN for varying multi-qubit precision for the representation of the continuous values, where the loss function that was minimized was the expected value of the anharmonic potential L(η) = Ĥ C +Ĥ A η. Of particular interest to near-term quantum computing with low numbers if qubits is the single-qubit precision case, where we modify the QSGCNN construction asp 2 j →X j, 3 For this test, one applies a phase rotation j∈V e −iϕẐ j on all the qubits in paralel, then one applies a sequence of CNOT's (quantum adder gates) such as to concentrate the phase shifts onto a single collector node, m ∈ V. Given that one had a GHZ state initially, one should then observe a phase shift e −inϕẐm where n = |V|. This boost in frequency of oscillation of the signal is what gives quantum multipartite entanglement its power to increase sensitivity to signals to super-classical levels . configurations, and to their right is the output probability distribution over potential energies. We see lower energies are most probable and that these configurations have node values clustered. 
Ĥ_A → Î, and x̂_j → |1⟩⟨1|_j, which transforms the coupling Hamiltonian accordingly, where |1⟩⟨1|_k = (Î − Ẑ_k)/2. We see that using a low-qubit precision yields sensible results, thus implying that spectral clustering could be a promising new application for near-term quantum devices. Recently, a benchmark of the representation power of classical graph neural networks has been proposed in which one uses classical GCNs to identify whether two graphs are isomorphic. In this spirit, using the QSGCNN ansatz from the previous subsection, we benchmarked the performance of this Quantum Graph Convolutional Network for identifying isomorphic graphs. We used the single-qubit precision encoding in order to simulate the execution of the quantum algorithms on larger graphs. Our approach was the following: given two graphs G_1 and G_2, one applies the single-qubit precision QSGCNN ansatz ∏_{j=1}^{P} e^{iη_j Ĥ_K} e^{iγ_j Ĥ_C} with Ĥ_K = ∑_{j∈V} X̂_j and Ĥ_C from equation 5, in parallel according to each graph's structure. One then samples eigenvalues of the coupling Hamiltonian Ĥ_C on both graphs via standard basis measurement of the qubits and computation of the eigenvalue at each sample of the wavefunction. One then obtains a set of samples of "energies" of this Hamiltonian. By comparing the energetic measurement statistics output by the QSGCNN ansatz applied with identical parameters θ = {η, γ} for two different graphs, one can then infer whether the graphs are isomorphic. We used the Kolmogorov-Smirnov test on the distribution of energies sampled at the output of the QSGCNN to determine whether two given graphs were isomorphic. To determine the binary classification label deterministically, we considered all KS statistic values above 0.4 to indicate that the graphs were non-isomorphic. For the dataset used in training and testing, graphs were sampled uniformly at random; to prepare a balanced dataset, we selected isomorphic and non-isomorphic pairs. In all of our experiments, we had 100 pairs of graphs for training, 50 for validation, and 50 for testing, and in all cases there were balanced isomorphic and non-isomorphic pairs. The networks were trained via the Adam gradient-based optimizer with batches of size 50. Presented in Figure 4 are the training and testing losses for various graph sizes and numbers of energetic samples. In Tables 1 and 2, we present the graph isomorphism classification accuracy for the training and testing sets using the trained QGCNN with the previously described thresholded KS statistic as the label. We see that we get highly accurate performance even at low sample sizes. This seems to imply that the QGCNN is fully capable of identifying graph isomorphism, as desired for graph convolutional network benchmarks. We leave a comparison to classical graph convolutional networks of similar scale to future work. The results featured in this paper should be viewed as a promising set of first explorations of the potential applications of QGNNs. Through our numerical experiments, we have shown the use of these QGNN ansatze in the context of quantum dynamics learning, quantum sensor network optimization, unsupervised graph clustering, and supervised graph isomorphism classification. Given that there is a vast set of literature on the use of Graph Neural Networks and their variants in quantum chemistry, future work should explore hybrid methods where one can learn a graph-based hidden quantum representation (via a QGNN) of a quantum chemical process.
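A sketch of the isomorphism decision rule described above: energies of sampled bitstrings under each graph's single-qubit-precision coupling Hamiltonian are compared with a two-sample Kolmogorov-Smirnov test and thresholded at 0.4. The quantum sampling step is replaced here by a placeholder bitstring distribution, so the printed decision only illustrates the pipeline, not the trained model's accuracy, and the example graphs and energy convention are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Energy of a bitstring under an assumed single-qubit-precision coupling Hamiltonian:
# an edge contributes its weight when its two endpoints carry different bits.
def energy(bits, edges):
    return sum(w * (bits[j] - bits[k]) ** 2 for (j, k), w in edges.items())

# Placeholder for the quantum sampling step: the trained QSGCNN's standard-basis
# measurements are replaced by bitstrings drawn from a fixed biased distribution.
def sample_energies(edges, n_nodes, n_samples=500):
    bits = rng.binomial(1, 0.3, size=(n_samples, n_nodes))
    return np.array([energy(b, edges) for b in bits])

# Decision rule from the text: a KS statistic above 0.4 labels the pair non-isomorphic.
def ks_decision(edges1, edges2, n_nodes, threshold=0.4):
    res = ks_2samp(sample_energies(edges1, n_nodes), sample_energies(edges2, n_nodes))
    return res.statistic, res.statistic > threshold

path = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
cycle = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
print(ks_decision(path, cycle, n_nodes=4))
```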
As the true underlying process is quantum in nature and has a natural molecular graph geometry, the QGNN could serve as a more accurate model for the hidden processes which lead to perceived emergent chemical properties. We seek to explore this in future work. Other future work could include generalizing the QGNN to include quantum degrees of freedom on the edges, incorporating quantum-optimization-based training of the graph parameters via quantum phase backpropagation, and extending the QSGCNN to multiple features per node.
Introducing a new class of quantum neural networks for learning graph-based representations on quantum computers.
785
scitldr
Data breaches involve information being accessed by unauthorized parties. Our research concerns user perception of data breaches, especially issues relating to accountability. A preliminary study indicated many people had weak understanding of the issues, and felt they themselves were somehow responsible. We speculated that this impression might stem from organizational communication strategies. We therefore compared texts from organizations with external sources, such as the news media. This suggested that organizations use well-known crisis communication methods to reduce their reputational damage, and that these strategies align with repositioning of the narrative elements involved in the story. We then conducted a quantitative study, asking participants to rate either organizational texts or news texts about breaches. The findings of this study were in line with our document analysis, and suggest that organizational communication affects the users' perception of victimization, attitudes in data protection, and accountability. Our study suggests some software design and legal implications supporting users to protect themselves and develop better mental models of security breaches. A data breach is a successful malicious attack which leads to the compromise or the loss of data. Personally Identifiable Information (PII) is often stored in organization databases, and if disclosed is at risk of misuse. Depending on the size, scale, and type of stolen information, the potential consequences of a data breach can be huge. A data breach can put people at risk of identity theft, which often happens through fraudulent use of existing accounts like credit cards, online accounts, and insurance. It can also lead to financial loss and emotional distress. Despite the increased awareness of organizations and great emphasis by experts on security mechanisms, many organizations still maintain insufficient security practices on data collection, processing, and storage, so are unable to prevent data breaches and consequent misuse of the data. Several recent occurrences follow this pattern, and data breaches at major companies, like Equifax, have exposed a massive number of consumers' records. Although such events have become commonplace, there appears to be little indication that end-users feel urgency about holding companies to account. A 2016 study reports that by far most consumers kept doing business with companies after breaches, and some high-profile commentary suggests "breach fatigue" has "set a new normal and instill a sense of fatalism -and complacency". In a small preliminary study, we even found that participants often thought that they themselves were somehow responsible for data breaches. According to Coombs, the reputation of a company is based on the evaluation customers make about it. Customer evaluations can be affected by the behavior of a company when a crisis like a data breach happens.
So, due to the significant financial loss and reputational damage caused by data breaches, companies try to reduce the damage using communication strategies in the after-breach notifications. The crisis response strategies aim to reduce the negative effects of the crisis by changing the level of crisis responsibility. For example, if a company frames themselves as victims of the situation and therefore positioned in what crisis communication theorists call the "victim cluster", they are likely to incur little blame for the crisis. User understanding of data breach incidents are important because it allows development of mental models to support reasoning about behavior and accountability. The goal of our research is to explore how breached companies and news media communicate with users, and how that might affect users' perception of a data breach incident. To do so, we apply Image Repair Theory (IRT) and a narrative-semiotics method to the analysis of Equifax crisis communications to see how this incident is reported in the company press releases and the news. We first conducted a communication study based on collected data from 58 stories related to this security breach crisis. We then conducted a questionnaire study with 100 participants testing the influence of companies' notifications and news on the general public. To the best of our knowledge, testing communication strategies' influence on users mental models of a data breach is original, and it shows HCI efforts on building user understanding of security can be undermined by organizational communication. Our also suggest a need for the improvement of software design, and delicate attention of communication professionals and legal scholars to the notifications created during and after a data breach. Data breaches are a severe threat to both organizations and consumers, and are increasingly common across businesses. When a data breach happens, the enterprise facing this crisis is required to inform the legal authorities, and it also needs to notify all the affected and potentially affected consumers. For these reasons, data breaches are now understood as a crisis for an organization, and there is an established body of knowledge about how organizations should communicate about crises. There are several theories on effective communication strategies during crises. In our work we used IRT, and also a narrative semiotic method. IRT, introduced by William Benoit is a well-established framework, and it can be used by practitioners to design messages during a crisis. It can also be used by critics to evaluate the created messages. The key concept in this theory is to understand the nature of a complaint or attack. An attack has two components: an offensive act, and the accusation of responsibility for an action. According to Benoit, the image repair strategies can be categorized into five broad categories: denial, evasion of responsibility, reducing the offensiveness of event, corrective action, and mortification. Denial is a general approach to image repair, and it is about rejecting responsibility. Evasion of responsibility contains four sub-strategies: provocation (response to someone else's action), defeasibility (lack of information or control over the situation), accident (did not mean it to happen), and good intention. 
Reducing the offensiveness of an event also involves a list of potential response strategies: bolstering (reminds of good traits), minimization (claim that act was not serious), differentiation (reduce offensiveness of the act), transcendence (place the act in more favorable context), attack accuser (challenge the credibility of accuser), and compensation (reimburse for the act). Corrective action is about restoring situation or promise that the act will not happen again, and mortification is asking for forgiveness. Semiotics involves the study of signs, and semioticians believe that communication is symbolic and ambiguous, and it happens through perceptual or linguistic signs between interlocutors. The narrative-semiotic method finds common patterns in stories. The strategies for examining storytelling help to make sense of what the narrator has perceived and experienced. This can reveal conflicts and changes during a crisis. Based on news framing theory, media and organizations use different features in their messages to frame the crisis. We can use narrative semiotics to see how these framings are different. This method is valuable for understanding a data breach situation, and the goals and motivations of different narrators of a data breach story. It can help researchers to study a failure of an organization and assess whether the decisions made by the organizations were appropriate or not. We can divide the narrative-semiotic approach into two aspects: the narrative trajectory which is the sequence of events and actions that create a story, and the narrative schema. The narrative schema consists of the six categories of agents known as the actantial model. The actantial model, developed by A.J.Greimas, can be used to break an action down into six positions or actants: 1. The sender includes agents who direct the action of subject towards an object. 2. The subject consists of the leading performers aiming at a desired goal or object. 3. The object includes the desired goal and objectives. 4. The receiver category consists of the agents who benefit if a desired goal is achieved. 5. The helper includes agents who assist in achieving the desired goal, like experts who aid the subject. 6. The opponent includes agents who hinder the achievement of the desired goal, for example, adversaries, lack of knowledge or ability, and ineffective tools. Narrative-semiotics can be used to identify the agents, their actions, and their discourses. Different actions reflect different points of view, which is why it is vital to compare different texts on the same subject. A series of events can be told differently when the narrator changes. Narrators position the agents through the way they tell the story. So the narrator has a pivotal role in interpreting the actions and highlighting the patterns in their interaction. Comparing the narrative structure of different texts can reveal how companies narrate a crisis like a data breach, and how their narrative can be different from the narratives produced by media organizations. The way a story is told using strategies like IRT might change readers' mental model of a data breach, in other words, the way they see and interpret the situation. We propose that using communication strategies and changing the role of agents in a story might affect the readers' perception of the reality and of the companies' role relating to the breach. 
We analyzed the Equifax data breach because it happened recently (the Equifax's data breach occurred on May 2017, with investigation and analyses into 2019), and is of great importance due to the following reasons: the significant role of Equifax on people by impacting their access to many necessities, the extreme sensitivity of financial information breached and risk of identity theft, the centrality of data security to Equifax -its primary mission is stewardship, and the comprehensive analysis of the causes showing how they were at fault. To retrieve all the stories concerning the data breach, we first searched the company's press releases archive. We then selected reputable news sites due to their national circulation and impact, and searched for keywords-"Equifax data breach". Some stories were excluded due to duplication or unrelated items such as the story of another data breach where it referred to the Equifax data breach. The analysis covers the period from September 2017 until March 2019 (see Table 1). We first analyzed the sources using Image Repair Theory because it forms the basis of different approaches in the field, and it is best suited to a rhetorical method that examines texts produced by a company to identify their language and discourse patterns. In the context of a data breach, responsibility can appear in different forms, a company can be blamed for a poorly performed action that hurts consumers or neglect like poor security practices or flaws in a system that allows a breach. Perception of responsibility and offensiveness of an action can seem more important than reality, so businesses try to use different strategies to affect the perception of responsibility. The coding sheet for image repair included the following strategies as nodes: shift the blame, defeasibility, accident, bolstering,minimization,compensation,corrective action, and mortification. We used NVivo12, which is a qualitative analysis tool, to code our data. We first imported all the documents into the software, different folders for different sources. We then created our first group of nodes for image repair strategies; we created a case node to represent each strategy, and we gathered references by coding sources at the nodes. We then used the narrative-semiotic method of analysis to understand how the positioning of agents changes when the narrator changes. We used this method to clarify how the company and the media communicate with the general public to direct their sense-making process. Based on the actantial model, the six role categories were applied to analyze the documents. To find the patterns in different sources we used the following categories as our nodes in NVivo12: the sender, the subject, the object, the receiver, the helper, and the opponent. IRT and Narrative-semiotics were used as a framework to explore the full text content of each document. One researcher did the coding and the were reviewed by two other researchers. To begin, we present a short description of major events and actions in the Equifax story. We divided Equifax's story into a series of narrative episodes which refer to different stages of the narrative trajectory. The information in this story was taken from press releases posted on the Equifax's official site and the GAO report. Equifax is a consumer credit reporting agency. 
Equifax collects and aggregates information on over 800 million individual consumers and more than 88 million businesses worldwide, and its database includes employee data contributed from more than 7,100 employers. On 7 September 2017, Equifax announced a cybersecurity incident impacting approximately 143 million U.S. consumers and unknown number in the UK and Canada. Hackers exploited a U.S. website application vulnerability to gain access to certain files in mid-May 2017. They stayed on the system until they were detected in July 2017. The information accessed primarily includes names, Social Security numbers, birth dates, addresses and, in some instances, driver's license numbers. In addition, credit card numbers for approximately 209,000 U.S. consumers, and certain dispute documents with personal identifying information, for approximately 182,000 U.S. consumers, were accessed. As part of its investigation of this application vulnerability, Equifax also identified unauthorized access to limited personal information for certain UK and Canadian residents. According to Equifax officials, beginning on May 13, 2017, attackers gained access to the online dispute portal (it maintained documents used to resolve consumer disputes) and used a number of techniques to disguise their activity. They extracted a portion of the PII (Personally Identifiable Information) residing on the systems. After successfully accessing the information, the attackers exfiltrated the data in small increments, using standard encrypted web protocols to disguise the exchanges as normal network traffic. The attack lasted for about 76 days before it was discovered. Equifax officials stated that, on July 29, 2017, approximately 2.5 months after the attackers began extracting sensitive information, security personnel conducting routine checks of the operating status and configuration of IT systems detected the intrusion on the online dispute portal. A misconfiguration due to an expired digital certificate was the reason the intrusion was not noticed before. Equifax then blocked several Internet addresses from which the requests were being executed to try to stop the attack. The IT department discovered a vulnerability in the Apache Struts web application framework as the initial attack vector. The US-CERT had notified the company about this vulnerability before this incident. The Apache Foundation also had reported the vulnerability (CVE-2017-5638) 1 in early March 2017. Equifax took the website offline and then took steps to identify the stolen data and the number of affected people by this incident. Once Equifax officials found out how the attackers were able to access to the company's databases, they took measures to address this problem and avoid it in future. For the challenging task of identifying the affected individuals, Equifax compared the affected database with company's internal databases that were not impacted by the data breach. On September 7, 2017, Equifax stated in its press release that the company had set up a dedicated website to help individuals determine if their information might have been exposed in the breach. Additionally, Equifax reported that it would provide several services to all U.S. consumers, regardless of whether their information had been compromised, free of charge for one year. After the investigation, the company notified all U.S. state attorneys general regarding the approximate number of potentially affected residents in each state and its plans for consumer remediation. 
On March 1, 2018, Equifax stated that, overall, 2.4 million U.S. consumers whose names and partial driver's license information were exposed, had been identified. The GAO report reveals how Equifax failed to protect Americans' personal data. According to the GAO, "Equifax determined that several major factors had facilitated the attackers' ability to successfully gain access to its network and extract information from databases containing PII," and that "key factors that led to the breach were in the areas of identification, detection, segmentation, and data governance." Finally, the GAO's report highlights the critical need for legislation to protect consumers whose data is not adequately safeguarded, such as Senator Warren's and Senator Mark Warner's bill to hold credit reporting agencies like Equifax liable for data breaches. Under this legislation, Equifax would have paid at least $1.5 billion in penalties for the data breach. We evaluated the company's press releases during the crisis using IRT. See Table 2 for the strategies used in press releases and illustrations of them. This table suggests that Equifax used the image repair strategies to minimize the apparent consequences of the data breach. Since the company was believed to be responsible for the incident, the CEO blamed the entire situation on IT staff who had not installed an Apache Struts patch issued in the weeks before the hack, and on technology failures. Moreover, the company used other strategies like: bolstering, compensation and corrective actions to reduce the offensiveness of the data breach. As a final general strategy, the CEO apologized to victims. The second level of analysis involves the Narrative-Semiotic approach. Here we classified the agents in specific roles as they interact in the sequences of actions. Since a series of events can be told differently, the identification of the agents would change according to the role of the narrator. The initial document that was considered for the narrative analysis was "Prepared testimony of Richard F. Smith before the U.S. House Committee on Energy and Commerce Subcommittee on Digital Commerce and Consumer Protection". The narrative story starts with the talk of the CEO of Equifax as a narrator about the initial situation of the company and how Equifax offered several services to its customers. Consider the following extract from the aforementioned text: FROM PREPARED TESTIMONY OF RICHARD F. SMITH: Equifax was founded 118 years ago and now serves as one of the largest sources of consumer and commercial information in the world. That information helps people make business and personal financial decisions in a more timely and accurate way. Behind the scenes, we help millions of Americans access credit, whether to buy a house or a car, pay for college, or start a small business. During my time at Equifax, working together with our employees, customers, and others, we saw the company grow from approximately 4,000 employees to almost 10,000. Some of my proudest accomplishments are the efforts we undertook to build credit models that allowed and continue to allow many unbanked Americans outside the financial mainstream to access credit in ways they previously could not have. Throughout my tenure as CEO of Equifax, we took data security and privacy extremely seriously, and we devoted substantial resources to it. We set out to notify American consumers, protect against increased attacks, and remediate and protect against harm to consumers. 
In recent weeks, vulnerability scanning and patch management processes and procedures were enhanced. We took data security and privacy extremely seriously, and we devoted substantial resources to it. Equifax is doing everything in its power to prevent a breach like this from ever happening again. Mortification Apologize I am here today to apologize to the American people myself and on behalf of the Board, the management team, and the company's employees. To each and every person affected by this breach, I am deeply sorry that this occurred. I sincerely apologize. I will close by saying again how so sorry I am that this data breach occurred. In this extract, customers (general people or businesses) are described as subjects who want to buy a house or a car, or start a small business. Equifax stepped into the role of helper to give information and to help people make business and personal financial decisions, and millions of Americans are described as receivers. Figure 1a shows the actantial model inferred from this text. In the second stage of Equifax's story, a complication happened due to an external threat (attackers On July 29, however, Equifax's security department observed suspicious network traffic associated with the consumer dispute website (where consumers could investigate and contest issues with their credit reports). In response, the security department investigated and immediately blocked the suspicious traffic that was identified. The department continued to monitor network traffic and observed additional suspicious activity on July 30, 2017. In response, they took the web application completely offline that day. The criminal hack was over, but the hard work to figure out the nature, scope, and impact of it was just beginning. The narrator of this press release proposes an attacker as the subject, and accessing information is the object. The Apaches Struts vulnerability and the Equifax's security tools that couldn't detect the illegal access are both helpers that assist the attacker. The opponent category in this piece of story includes: the security department investigation, blocking the suspicious traffic, network monitoring, and taking the web application offline. The narrator (the company's CEO) emphasized the company's corrective actions to hinder the attacker in the narrative quest. need to patch a particular vulnerability in certain versions of software used by other businesses. ". Equifax had 5 days to patch the vulnerability before the first date when attackers accessed sensitive information. Since the attacker could exploit the vulnerability of the Apaches Struts, the notification of CERT was a helper to the attacker too (See Figure 1b). In the second stage of Equifax's story, Equifax's actions to defend against the intrusion are described in the actions taken to address the complication. Therefore, according to the story, the agent's categories change in the narrative. The story starts with the repair efforts of the company, as the CEO as a narrator confessed that they failed to protect American consumer data and apologized for the act of data breach. The narrator (Richard F. Smith -Equifax's now retired CEO) continued to describe certain actions regarding how this incident happened. FROM PREPARED TESTIMONY OF RICHARD F. SMITH: Americans want to know how this happened and I am hopeful my testimony will help in that regard. 
As I will explain in greater detail below, the investigation continues, but it appears that the breach occurred because of both human error and technology failures. These mistakes -made in the same chain of security systems designed with redundancies -allowed criminals to access over 140 million Americans' data. Upon learning of suspicious activity, I and many others at Equifax worked with outside experts to understand what had occurred and do everything possible to make this right. Ultimately we realized we had been the victim of a massive theft, and we set out to notify American consumers, protect against increased attacks, and remediate and protect against harm to consumers. We developed a robust package of remedial protections for each and every American consumer -not just those affected by the breach -to protect their credit information. The relief package includes: monitoring of consumer credit files across all three bureaus, access to Equifax credit files, the ability to lock the Equifax credit file, an insurance policy to cover out-of-pocket costs associated with identity theft; and dark web scans for consumers' social security numbers. In this extract, Equifax is foregrounded as the main agent, occupying four positions. Equifax is described as a sender dictating to its employees and outside experts to make the suspicious activity right. Equifax also stepped into the role of receiver, as well as helper. Opponents in this part are criminals, human errors and technology failure. Figure 1c shows the actantial model inferred from this text. Figure 1 shows the actantial model of the Equifax's story and we can see how the substories are connected to each other. The customer is subject who wants to buy something or start a business as object, and Equifax is its helper to achieve the goal. In the other part of the story Equifax stepped into the role of subject who wants to solve the data breach incident, Equifax protection activity is a helper here, and human error, computer failure and attackers are playing the role of opponent. In the last part of the story, attackers are subjects who want to find access to the personal information of Equifax's customers, human error and computer failure is a helper and Equifax protection activity is an opponent. The other documents were the news provided in the technical websites, general news, and the GAO report. After coding these documents using narrative components, we found two narrative programs or mini-narratives that were common among these texts; one from the attack stage of the story and the other from the response part of the story. In the first narrative episode, the attacker is the subject, and getting access to the information is the object. The company's failure to patch the flaw, an expired certificate, doing the malicious activity without being noticed, and lack of restriction on the database are all helpers to the attacker. The opponent category includes locking down the system so that the attackers would not be able to misuse the vulnerability and hiring an expert security team. Figure 2a shows the actantial model of this mini-narrative. In the second narrative, Equifax is the subject who wants to solve the incident (see Figure 2b). The helper category in this mini-narrative that was extracted from the news includes the following components: log files, and new regulations. 
Numerous data security failures, such as the insecurity of Equifax's web setup, failing to patch the flaw promptly, and the lack of restrictions on the frequency of database queries, are the first element in the category of opponents. The second one is the failure to notify the data breach victims; the evidence extracted from the different texts is as follows: FROM GAO EQUIFAX REPORT:
• Equifax executives - including its Chief Security Officer and Chief Executive Officer - kept the public in the dark for more than a month after they found out about the security intrusion.
• The attack lasted for about 76 days before it was discovered.
• Equifax and other big credit reporting agencies keep profiting off a business model that rewards their failure to protect personal information.
• In the three years before the Equifax data breach, the company spent only about 3% of its operating revenue on cybersecurity - less than the company spent on stock dividends.
• ... because of a process failure in 2016 that meant a limited amount of UK data was stored on the US system between 2011 and 2016.
• Congressman Frank Pallone said Equifax had an "ongoing lax attitude when it comes to protecting consumer data."
• An inadequate response to a data breach that included the personal information of up to 143 million Americans.
• Equifax was breached in "mid-May" 2017, realized it in July and got around to telling the world in early September.
• Apache Struts was popped, but company had at least TWO MONTHS to fix it.
• Specifically, the lack of restrictions on the frequency of database queries allowed the attackers to execute approximately 9,000 such queries - many more than would be needed for normal operations.
• "As your company continues to issue incomplete, confusing and contradictory statements and hide information from Congress and the public, it is clear that five months after the breach was publicly announced, Equifax has yet to answer this simple question in full: what was the precise extent of the breach?" Senator Elizabeth Warren fumed in a missive late last week.
The third element in the opponent category is inadequate assistance in resolving the problem, such as no definitive action to hold Equifax accountable, the betrayal of stakeholders by top Equifax executives, an overwhelmed call center, and the customer response site. Some examples of evidence found in the documents are:
• It is always a company's responsibility to identify UK victims and take steps to reduce any harm to consumers.
• Ying used confidential information to conclude that his company had suffered a massive data breach, and he dumped his stock before the news went public.
• That [the customer response site] was a hastily constructed WordPress bodge job, and victims were initially asked to agree to take any dispute to arbitration and forfeit the right to take part in any class-action lawsuit.
• Equifax also failed to provide consumers full protection from new account identity theft.
• These consumer complaints included improper use of credit reports, incorrect information on credit reports, inadequate assistance in resolving problems, and problems with Equifax credit monitoring, fraud alerts, and security freezes in the wake of the breach.
Our analysis shows that the narrative programs extracted from the company's press releases are different from the ones extracted from the news. More details will be discussed later.
As explained above, our text analysis showed important differences between how organizations described data breach incidents and how they were described by others. In particular, the texts from organizations appeared to position themselves in ways that might affect how readers perceived their role relating to the breach. We speculated that this might influence users' understanding of the data breach, and of organizational accountability. To explore this, we conducted a questionnaire study, where we asked participants to first read data breach descriptions and then asked about their perception of various aspects of the breach. We recruited participants through TurkPrime, which is an online crowdsourcing research platform that integrates with Amazon Mechanical Turk (MTurk). Participants were asked some questions about demographics and data breach basics, then read two short extracts describing data breaches, and finally responded to follow-up statements about the incidents described. The study was reviewed and cleared by our Research Ethics Board. We recruited 100 participants, specifying that participants must be residents of the US or Canada. By far most (96) were from the US, with only 4 from Canada; 33 were female and 67 male. A summary of their demographics is shown in Table 3. There were two pairs of extracts about the Equifax data breach. Each pair contained one extract from the company itself and one from another source (e.g., news), but no company names were mentioned. Participants were shown a pair at random, and the order within the pair was also random. After each extract, participants were shown 10 statements, and responded using a Likert scale to gauge their perception of the breach motivations, company security measures, after-breach issues, and responsibility. Each response was scored 1-5, where 1 stands for "Strongly disagree", 2 for "Somewhat disagree", 3 for "Neutral", 4 for "Somewhat agree", and 5 for "Strongly agree". Our hypothesis was that responses to each of the 10 statements would differ by the source of the text read by the participants. We examined responses, removing those completed unrealistically quickly or with inconsistent answers between similar questions. Then, we analyzed the results of the questionnaire by calculating the median and spread of the distribution of our data. We also did Wilcoxon tests to see if the source (company or news), the order of text within each group, and the pair of data breach descriptions affected the participants' responses. Figure 3 shows the ten statements together with boxplots describing the responses to each statement. For each statement, there are two boxplots, one for responses to the company text and the other for responses to the news media text. After reading both the company text and the news text, in response to being asked who the victim in an Internet data breach is, participants strongly agreed that customers of the company are the victims (Mdn = 5/5; later numbers also refer to medians). However, the results for the two texts were quite different when participants were asked if the company is the victim of the data breach. After reading the company's description of the breach, participants agreed (4/5) that the company is also a victim, but reading the news showed a slightly different result, although they again agreed that the company could also be a victim (4/5). The data are not normal and are skewed toward neither agreeing nor disagreeing. When asked about the company's attitude regarding data protection, a considerable difference can be seen between the two texts.
Participants reading the company's description disagreed (2/5) that the company had a relaxed attitude about protecting customers' data, and they agreed that the company took security measures seriously (4/5). However, participants reading the news text showed different results: they agreed (4/5) that the company had a relaxed attitude, and they disagreed that the company took security measures seriously (2/5). In response to being asked what the attackers' purpose was, we got quite similar results. They agreed that attackers wanted to harm customers as well as the company (4/5). Participants reading the company's text agreed (4/5) that the company was helping customers to recover from the breach. They somewhat agreed (4/5, with data skewed to disagreement) that the company put customers at risk by neglecting data protection. Those reading the news text, however, strongly agreed that the company put the customer at risk (5/5), and they disagreed that the company was helpful in after-breach actions (2/5). In response to being asked about accountability, the results for both the company's and the news text were quite similar but significantly different at the 0.05 level. Most participants believed that the company was accountable for problems resulting from the data breach (Company = 4/5, News = 5/5), and that the customers were not responsible (Company = 1/5, News = 2/5). Our hypotheses were that there would be a different response for those reading the company's text and those reading the news text about a data breach. To test our hypotheses, we did Wilcoxon tests for the responses to each statement, choosing this non-parametric test because the data were ordinal. The results are shown in Table 4. We can see that the company's and the news descriptions make a significant difference in participants' responses about victimization, the company's security measures, its attitude in data protection, and helpfulness in after-breach actions (these are marked with red boxes in Figure 3). We had two pairs of data breach descriptions (company and news); one of the pairs was assigned randomly to each participant, and the order within the pair was also randomized. For each statement we used a Wilcoxon test to see if the pair (one of two) and the order (company first or news first) affected the participants' responses. The results showed that the order of texts did not change the participants' responses. The choice of pair also made no difference, except for the statement about whether the company is a victim and the helpfulness of after-breach actions. Although we used non-parametric tests, we also checked a multi-way ANOVA test for the pairs, order, and source of texts, and the results confirmed our findings from the Wilcoxon tests with little difference. In this study, we showed half our participants text about a data breach from a company, and half text about the same breach from a news source. We then asked them to respond to 10 statements about the breach, and we expected that their responses would differ for all 10 statements depending on the source of the text. As we described above, for most statements this is what occurred. For others, however, we saw little difference. For statement 1 (victimhood of customers), it seems that both sources led participants to feel that the customers were indeed victims, whereas statement 2 shows that people who read the company text more strongly felt that the company was also a victim.
However, statements 5 and 6 (who the attackers wanted to harm) presented no difference in responses, despite their similarity to statements 1 and 2. We speculate that participants perceived a difference between the concept of victimhood and the concept of intent to harm. Both customers and company might, or might not, be victims, but intent to harm was more difficult to distinguish. For statement 9 (accountability of the company), those who read the company text were less strong in feeling the company was accountable for the breach, but for statement 10 (accountability of customers), there was agreement that customers themselves were not accountable. Both of these seem understandable. For statement 9, both sources suggest that the company has some level of accountability, but the non-company text makes it seem clearer. For statement 10, neither source suggests that customers have any accountability. On reflection, the issue of accountability has a subtlety that needs to be addressed. It is clear the attackers have a key role in the breach, so they can be seen as having accountability. The company, however, has a duty of care, and any failing in that duty might also be seen as involving accountability. In the physical world, burglars might rob a bank, but if it emerged that the bank left its doors unlocked at night, we suggest any bank customer would regard the bank as also accountable. The primary goal of our work was to explore how organization communications about data breaches might affect user perception. To do this, we first studied the nature of the communication itself. Using Image Repair Theory, we analyzed press releases posted on official company websites. We found that Equifax press releases had characteristics consistent with tactics to reduce reputational damage and therefore financial loss. Recognizing that the way the news media frames a crisis might be different from the framing in an organization's press releases, we next explored that issue. We used techniques from narrative semiotics to examine the structure of the stories being told, and found that the agents were not positioned the same way. Considering the first narrative story studied, our comparison of the Equifax press releases with news and GAO reports shows important differences with respect to the positioning of Equifax (see Figure 4). In the press releases, there was emphasis on Equifax as a helper, presenting the company's protection actions. In the news and GAO reports, there was emphasis on Equifax as an opponent, presenting the company's weak security protection of consumer data. Moreover, the news media focused on the company and its security failure, whereas the company appeared to use scapegoating as its primary crisis response strategy, suggesting responsibility lay with a single unnamed IT staff member. The ethics of scapegoating is doubtful, suggesting a manipulative approach used to deflect responsibility. The company's apparently lax attitude in crisis response was heavily criticized by the media. The news text suggests Equifax shares responsibility for this incident. However, Equifax positioned itself as a receiver to emphasize it is a victim, a strategy consistent with an attempt to reduce its responsibility. Equifax appears to map all of its actions to the helper category, in a manner consistent with Image Repair Theory.
For example, a bolstering strategy places the company in a helper position; deflecting responsibility by shifting the blame (scapegoating) puts the company in a victim position; and compensation strategies stress that the company acts as a helper. However, when the news media narrates the story, the mapping of the actions and agents goes to the opponent category, since the media is not concerned with Image Repair. Our second step was to explore how the strategies used in the company press releases might influence the public understanding of data breaches. We conducted a questionnaire study to see if data breach incident descriptions from different sources, the company's and the news media's, result in different perceptions of the incident. After reading the text extracted from the company's press release, participants tended to rate the company's after-breach actions and security measures higher. They also thought that the company was helping its customers and did not put them at any risk. However, we got different results from participants who read the news texts reporting the same incident; these participants disagreed that the company took security seriously and felt its after-breach protective actions were not adequate to help the customers. The company was regarded as a victim after reading the company's description; however, the news approach to narrating the data breach resulted in a different perception among participants. This therefore confirmed our speculations based on our text analysis. It also confirms the effectiveness of IRT and its relevance in crisis communication. Of course, it is not surprising that companies tend to present themselves in a better light than the news media. Nor is it surprising that they used strategies that have been developed to help them to do this. However, our text study shows that their Image Repair strategies exhibit some important characteristics. In particular, they show differences in how agency is presented, which, in turn, affects readers' understanding of what happened. Studies of HCI and security highlight the role of user mental models in understanding issues relating to security, and we know many users continue to use breached services. Our study shows that organizations, by their communications strategies, may be contributing to users' weak mental models. One design implication might be improving software design to support users' ability to track access to their data, with enough detail to help users determine provenance and legitimacy, perhaps with alarms for especially sensitive data. This would enable user engagement and oversight of their own information. It would allow users to inform organizations of discrepancies, and knowledge of this new transparency would also promote increased diligence by organizations. Moreover, organizations could make protective measures more explicit to users, both to assure users and to emphasize their diligence. The potential effect of all such measures could be studied in future research. We should also acknowledge that some of the measures we suggest may require changes in underlying software architecture, but research could show that such changes are justified, and potentially imperative. The possibility of crisis communications influencing user perception of accountability should be considered by communications professionals and legal scholars, to better establish the line between promoting the organization and misleading users. There are some limitations to address in this research.
First, the communication study was focused on one case study, and the same analysis on several data breaches may reveal different results. Second, the wording of each statement in the questionnaire study might have caused the difference participants perceived between the concept of victimhood and the concept of intent to harm. In this paper, we presented our study on communication about data breach events which exposed private consumer data. We first analysed Equifax press releases and notifications to identify their strategies, and then analysed news stories and government reports on the same events; we studied 58 stories in all. We found that the company used crisis communication strategies to reduce its reputational damage and financial loss. Our analysis also showed that there are differences between press releases, major newspapers, and technical news when reporting the same data breach incident. In our narrative-semiotic analysis, we found the company mapped its after-breach actions into the helper category, but the narrators of news reports mapped them into the opponent category. These narrative changes affected reader perception of these data breaches. Our questionnaire study revealed that the dissimilar approaches detected in the document analysis (the same story narrated from different points of view, company and news) have a considerable influence on the general public's perception of a data breach incident. Large scale data breaches are a serious matter, not just for organizations, but for the thousands or millions of users who have private data exposed, making them vulnerable to a range of consequences. Despite this, it is unclear if users understand what exactly has happened, where accountability lies, and how to proceed. In work on human factors in computer security, it has often been found that users have only weak mental models of online threats and defences. When user data is exposed by a large scale data breach, communication with the user may well be primarily from the organization itself. Our research suggests that communication from organizations may misrepresent the data breach events, leading to misleading perceptions of the crisis and the company's accountability. Design of software that stores sensitive personal information should support users in maintaining better awareness of data breaches.
"In this paper, we tested communication strategies' influence on users mental models of a data breach."
786
scitldr
Goal recognition is the problem of inferring the correct goal towards which an agent executes a plan, given a set of goal hypotheses, a domain model, and a (possibly noisy) sample of the plan being executed. This is a key problem in both cooperative and competitive agent interactions, and recent approaches have produced fast and accurate goal recognition algorithms. In this paper, we leverage advances in operator-counting heuristics computed using linear programs over constraints derived from classical planning problems to solve goal recognition problems. Our approach uses additional operator-counting constraints derived from the observations to efficiently infer the correct goal, and serves as a basis for a number of further methods with additional constraints. Agents that act autonomously on behalf of a human user must choose goals independently of user input and generate plans to achieve such goals. When such agents have complex sets of goals and require interaction with multiple agents that are not under the user's control, the resulting plans are likely to be equally complex and non-obvious for human users to interpret BID0. In such environments, the ability to accurately and quickly identify the goals and plans of all involved agents is key to providing meaningful explanation for the observed behavior. Goal recognition is the problem of inferring one or more goals from a set of hypotheses that best account for a sequence of observations, given a fixed initial state, a goal state, and a behavior model of the agent under observation. Recent approaches to goal recognition based on classical planning domains have leveraged data structures and heuristic information used to improve planner efficiency to develop increasingly accurate and faster goal recognition algorithms BID1 BID2. Specifically, BID2 use heuristics based on planning landmarks BID1 to accurately and efficiently recognize goals in a wide range of domains with various degrees of observability and noise. This approach, however, does not deal with noise explicitly, relying on the implicit necessity of landmarks in valid plans for goal hypotheses to achieve competitive accuracy with other methods BID3, while increasing the number of recognized goals (spread). Thus, goal recognition under partial observability (i.e., missing observations) in the presence of noisy observations is a difficult problem to address while achieving reasonable recognition time (i.e., a few seconds), high accuracy, and low spread. In this paper, we address these limitations by leveraging recent advances on operator-counting heuristics (Pommerening et al. 2014; BID4). Operator-counting heuristics provide a unifying framework for a variety of sources of information from planning heuristics BID1 that provide both an estimate of the total cost of a goal from any given state and an indication of the actual operators likely to be in such plans. This information proves to be effective at differentiating between goal hypotheses in goal recognition. Our contributions are threefold. First, we develop three increasingly more accurate goal recognition approaches using operator-counting heuristics. Second, we empirically show that these heuristics are very effective at goal recognition, overcoming existing approaches in almost all domains in terms of accuracy while diminishing the spread of recognized goals. Such approaches are substantially more effective for noisy settings.
Third, we discuss a broad class of operator-counting heuristics for goal recognition that can use additional constraints to provide even finer handling of noise and missing observations. We review the key concepts underlying the approaches we develop in this paper. First, the recognition settings we assume for our approach follow the standard formalization of goal recognition as planning. Second, while there is substantial literature on linear programming heuristics unified under the operator-counting framework, we focus on the specific types of operator-counting constraints we actually use in our experimentation. Definition 1 (Predicates and State). A predicate is denoted by an n-ary predicate symbol p applied to a sequence of zero or more terms (τ_1, τ_2, ..., τ_n); terms are either constants or variables. We refer to grounded predicates that represent logical values according to some interpretation as facts, which are divided into two types, positive and negated facts, as well as constants for truth (⊤) and falsehood (⊥). A state S is a finite set of positive facts f that follows the closed world assumption, so that if f ∈ S, then f is true in S. We assume a simple inference relation |= such that S |= f iff f ∈ S, S |= ¬f iff f ∉ S, and S |= f_1 ∧ ... ∧ f_n iff {f_1, ..., f_n} ⊆ S. Definition 2 (Operator and Action). An operator a is represented by a triple ⟨name(a), pre(a), eff(a)⟩: name(a) represents the description or signature of a; pre(a) describes the preconditions of a, a set of predicates that must exist in the current state for a to be executed; eff(a) represents the effects of a. These effects are divided into eff(a)+ (i.e., an add-list of positive predicates) and eff(a)- (i.e., a delete-list of negated predicates). An action is a ground operator instantiated over its free variables. Definition 3 (Planning Domain). A planning domain definition Ξ is represented by a pair ⟨Σ, A⟩, which specifies the knowledge of the domain, and consists of a finite set of facts Σ (e.g., environment properties) and a finite set of actions A. Definition 4 (Planning Instance). A planning instance Π is represented by a triple ⟨Ξ, I, G⟩, where Ξ = ⟨Σ, A⟩ is the domain definition; I ⊆ Σ is the initial state specification, which is defined by specifying values for all facts in the initial state; and G ⊆ Σ is the goal state specification, which represents a desired state to be achieved. Definition 5 (Plan). An s-plan π for a planning instance Π = ⟨Ξ, I, G⟩ is a sequence of actions a_1, a_2, ..., a_n that modifies a state s into a state S |= G in which the goal state G holds, by the successive execution of the actions in π starting from s. An I-plan is just called a plan. A plan π* with length |π*| is optimal if there exists no other plan π′ for Π such that |π′| < |π*|. A goal recognition problem aims to select the correct goal of an agent among a set of possible goals, using as evidence a sequence of observations. These observations might be actions executed by the agent, or noisy observations which, while valid for a plan, are not actually executed by the agent. Definition 6 (Observation Sequence). An observation sequence O = o_1, o_2, ..., o_n is said to be satisfied by a plan π = a_1, a_2, ..., a_m if there is a monotonic function f that maps the observation indices j = 1, ..., n into action indices i = 1, ..., m, such that a_f(j) = o_j.
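The satisfaction condition in Definition 6 is just an in-order (not necessarily contiguous) subsequence check. A minimal Python sketch of that check; the function name and the representation of plans and observations as lists of action labels are illustrative assumptions, not from the paper:

```python
def satisfies(plan, observations):
    """Def. 6: O is satisfied by plan pi if a monotonic mapping f from
    observation indices to action indices exists with a_f(j) = o_j,
    i.e., the observations form a subsequence of the plan."""
    j = 0  # index of the next observation still to be matched
    for action in plan:
        if j < len(observations) and action == observations[j]:
            j += 1
    return j == len(observations)

# Example: the plan satisfies ["b", "d"] but not ["d", "b"].
assert satisfies(["a", "b", "c", "d"], ["b", "d"])
assert not satisfies(["a", "b", "c", "d"], ["d", "b"])
```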
Definition 7 (Goal Recognition Problem). A goal recognition problem is a tuple T_GR = ⟨Ξ, I, G, O⟩, where: Ξ = ⟨Σ, A⟩ is a planning domain definition; I is the initial state; G is the set of possible goals, which includes a correct hidden goal G* (i.e., G* ∈ G); and O = o_1, o_2, ..., o_n is an observation sequence of executed actions, with each observation o_i ∈ A, and the corresponding action being part of a valid plan π (from Definition 5) that transitions I into G* through the sequential execution of actions in π. Definition 8 (Solution to a Goal Recognition Problem). A solution to a goal recognition problem T_GR = ⟨Ξ, I, G, O⟩ is a nonempty subset of the possible goals G′ ⊆ G such that ∀G ∈ G′ there exists a plan π_G generated from a planning instance ⟨Ξ, I, G⟩ and π_G is consistent with O. Recent work on linear programming (LP) based heuristics has generated a number of informative and efficient heuristics for optimal-cost planning (BID4; Pommerening et al. 2014; BID0). These heuristics rely on constraints from different sources of information that every plan π (Definition 5) must satisfy. All operator-counting constraints contain variables of the form Y_a for each operator a, such that setting Y_a to the number of occurrences of a in π satisfies the constraints. In this paper we adopt the formalism and definitions of Pommerening et al. for LP-based heuristics: the operator-counting variables are non-negative variables Y_a, defined for all a ∈ A. A constraint set for s is a set of operator-counting constraints for s where the only common variables between constraints are the operator-counting variables, and the operator-counting heuristic h(s) (Def. 11) is the objective value of the LP that minimizes the total cost Σ_{a ∈ A} cost(a) · Y_a subject to such a constraint set. While the framework from Pommerening et al. 2013 unifies many types of constraints for operator-counting heuristics, we rely on three types of constraints for our goal recognition approaches: landmarks, state equations, and post-hoc optimization. Planning landmarks consist of actions (alternatively, state formulas) that must be executed (alternatively, made true) in any valid plan for a particular goal BID1. Thus, landmarks are necessary conditions for all valid plans towards a goal and, as such, provide the basis for a number of admissible heuristics (Karpas and Domshlak 2009) and serve as conditions to strengthen existing heuristics (Bonet 2013). Importantly, planning landmarks form the basis for the current state-of-the-art goal recognition algorithms BID2. Disjunctive action landmarks BID5 for a state s are sets of actions such that at least one action in the set must occur in any s-plan, and make for a natural operator-counting constraint (Def. 12): for a disjunctive action landmark L, Σ_{a ∈ L} Y_a ≥ 1. Net change constraints generalize the state equation heuristic, which itself relates the planning instance in question to a Petri net that represents the transitions of state variables induced by the actions, such that tokens in this net represent net changes to the states of the problem. Finally, post-hoc optimization constraints (Pommerening et al. 2013) use the fact that certain heuristics can rule out the necessity of certain operators from plans (and thus from the heuristic estimate). For example, pattern database (PDB) heuristics (Culberson and Schaeffer 1998) create projections of the planning task onto a subset of state variables (this subset being the pattern), such that the heuristic can partition operators into two sets for each pattern: one that changes variables in the pattern (i.e., contributes towards transitions) and another that does not (i.e., is non-contributing). Definition 13 (Post-hoc Optimization Constraint).
Let Π be a planning task with operator set A, let h be an admissible heuristic for Π, and let N ⊆ A be a set of operators that are non-contributing in that h is still admissible in a modified planning task where the cost of all operators in N is set to 0. Then the post-hoc optimization constraint c^PH_{s,h,N} for h, N, and state s of Π consists of the inequality Σ_{a ∈ A \ N} cost(a) · Y_a ≥ h(s). We now bring together the operator-counting constraints into three operator-counting heuristics suitable for goal recognition, ranging from the simplest way to employ operator counts, computing the overlap between counts and observed actions, to modifying the constraints used by the operator counts to enforce solutions that agree with such observations, and finally to accounting for possible noise by comparing heuristic values. We start with a basic operator-counting heuristic h(s), which we define to be the LP-heuristic of Def. 11 where C comprises the constraints generated by landmarks (Def. 12), post-hoc optimization (Def. 13), and net change constraints as described above. This heuristic, computed following Def. 11, yields two important bits of information for our first technique: first, it generates the actual operator counts Y_a for all a ∈ A from Def. 10, whose minimization comprises the objective function h(s); second, the heuristic value h(s) of each goal candidate G ∈ G tells us about the optimal distance between the initial state I and G, while the operator counts indicate possible operators in a valid plan from I to G. We can use these counts to account for the observations O by computing the overlap between operators with counts greater than zero and operators observed for recognition. Algorithm 1 shows how we can use the operator counts directly in a goal recognition technique.
Algorithm 1 Goal Recognition using the Operator Counts. Input: Ξ planning domain definition, I initial state, G set of candidate goals, and O observations. Output: Recognized goal(s).
1: function RECOGNIZE(Ξ, I, G, O)
2:   Hits ← Initialize empty dictionary
3:   for all G ∈ G do                          ▷ Compute overlap for G
4:     Y ← operator counts from the LP for ⟨Ξ, I, G⟩
5:     Hits_G ← 0
6:     ▷ Compare the counts with the observations
7:     for all o ∈ O do
8:       if Y_o > 0 then
9:         Hits_G ← Hits_G + 1
10:    Hits[G] ← Hits_G
11:  return arg max_{G ∈ G} Hits[G]
In order to rank the goal hypotheses, we keep a dictionary of Hits (Line 2) to store the overlap, i.e., to count the times operator counts hit observed actions. The algorithm then iterates over all goal hypotheses (Lines 3-10), computing the operator counts for each hypothesis G and comparing these counts with the actual observations (Lines 7-10). We recognize goals by choosing the hypotheses whose operator counts hit the most observations (Line 11). The technique of Algorithm 1 is conceptually similar to the Goal Completion heuristic of BID2 in that it tries to compare heuristically computed information with the observations. However, this initial approach has a number of shortcomings in relation to their technique. First, while the landmarks themselves are enforced by the LP used to compute the operator counts (and thus observations that correspond to landmarks count as hits), the overlap computation loses the ordering of the landmarks that the Goal Completion heuristic uses to account for missing observations. Second, a solution to a set of operator-counting constraints, i.e., a set of operators with non-negative counts, may not correspond to a feasible plan for the planning instance. Thus, these counts may not correspond to the plan that generated the observations.
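A minimal sketch of the operator-counting LP and the observation-overlap loop of Algorithm 1, written with the PuLP library. For brevity it encodes only disjunctive action landmark constraints; the paper's heuristic also adds net change and post-hoc optimization rows, and the actual implementation uses Fast-Downward with CPLEX rather than PuLP. The helper names and the constraints_for callback are illustrative assumptions, not the authors' code:

```python
import pulp

def operator_counts(actions, cost, landmarks):
    """Solve a small operator-counting LP: one non-negative count variable
    Y_a per action, one 'sum >= 1' row per disjunctive action landmark.
    Returns (heuristic value, counts per action)."""
    lp = pulp.LpProblem("operator_counting", pulp.LpMinimize)
    Y = {a: pulp.LpVariable(f"Y_{a}", lowBound=0) for a in actions}
    lp += pulp.lpSum(cost[a] * Y[a] for a in actions)   # objective h(s)
    for lm in landmarks:                                 # landmark constraints
        lp += pulp.lpSum(Y[a] for a in lm) >= 1
    lp.solve()
    return pulp.value(lp.objective), {a: Y[a].value() for a in actions}

def recognize_by_overlap(goals, observations, constraints_for):
    """Algorithm 1: return the goal hypotheses whose operator counts
    'hit' the largest number of observed actions.
    constraints_for(G) -> (actions, cost, landmarks) for <Xi, I, G>."""
    hits = {}
    for G in goals:
        _, counts = operator_counts(*constraints_for(G))
        hits[G] = sum(1 for o in observations if counts.get(o, 0) > 0)
    best = max(hits.values())
    return [G for G, h in hits.items() if h == best]
```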
While operator-counting heuristics on their own are fast and informative enough to help guide search when dealing with millions of nodes, goal recognition problems often require the disambiguation of a dozen or fewer goal hypotheses. Such goal hypotheses are often very similar, so that the operator-counting heuristic value (i.e., the objective function over the operator counts) for each goal hypothesis is very similar, especially if the goals are more or less equidistant from the initial state. Thus, we refine the technique of Observation Overlap by introducing additional constraints into the LP used to compute operator counts. Specifically, we force the operator-counting heuristic to only consider operator counts that include every single observation o ∈ O. The resulting LP heuristic (which we call h_C) then minimizes the cost of the operator counts for plans that necessarily agree with all observations. We summarize this Observation Constraint Enforcement approach in Algorithm 2.
Algorithm 2 Goal Recognition using Observation-Constrained Operator Counts. Input: Ξ planning domain definition, I initial state, G set of candidate goals, and O observations. Output: Recognized goal(s).
1: function RECOGNIZE(Ξ, I, G, O)
2:   for all G ∈ G do
3:     C ← operator-counting constraints for ⟨Ξ, I, G⟩
4:     for all o ∈ O do
5:       C ← C ∪ {Y_o ≥ 1}
6:     solve the LP over C
7:     h_C(G) ← objective value of the LP
8:   return arg min_{G ∈ G} h_C(G)
This technique is similar to that of Algorithm 1 in that it iterates over all goals computing a heuristic value. However, instead of computing observation hits by looking at individual counts, it generates the constraints for the operator-counting heuristic (Line 3) and adds constraints to ensure that the count of the operator corresponding to each observation is at least one (Lines 4-5). Finally, we choose the goal hypotheses that minimize the operator-count heuristic distance from the initial state (Line 8). Although enforcing constraints to ensure that the LP heuristic computes only plans that contain all observations helps us overcome the limitations of computing the overlap of the operator counts, this approach has a major shortcoming: it considers all observations as valid operators generated by the observed agent. Therefore, the heuristic resulting from the minimization of the LP might overestimate the actual length of the plan for the goal hypothesis due to noise. This may happen for one of two reasons: either the noise is simply a sub-optimal operator in a valid plan, or it is an operator that is completely unrelated to the plan that generated the observations. In both cases, the resulting heuristic value may prevent the algorithm from selecting the actual goal from among the goal hypotheses. This overestimation, however, has an important property in relation to the basic operator-counting heuristic, which is that h_C always dominates the operator-counting heuristic h, as stated in Proposition 1.
Algorithm 3 Goal Recognition using Heuristic Difference of Operator Counts. Input: Ξ planning domain definition, I initial state, G set of candidate goals, and O observations. Output: Recognized goal(s).
1: function RECOGNIZE(Ξ, I, G, O)
2:   for all G ∈ G do
3:     C ← operator-counting constraints for ⟨Ξ, I, G⟩
4:     Y ← solve the LP over C
5:     H_G ← Σ_{a ∈ A} Y_a
6:     for all o ∈ O do
7:       C ← C ∪ {Y_o ≥ 1}
8:     Y ← solve the LP over C
9:     H_{C,G} ← Σ_{a ∈ A} Y_a
10:    h_δ(G) ← H_{C,G} - H_G
11:  return arg min_{G ∈ G} h_δ(G)
Proposition 1 (h_C dominates h). Let h be the operator-counting heuristic from Defs. 10-11, h_C be the over-constrained heuristic that accounts for all observations o ∈ O, and s a state of Π. Then h_C(s) ≥ h(s). Proof. Let C_h be the set of constraints used in h(s), and C_{h_C} be the set of constraints used to compute h_C(s). Every feasible solution to C_{h_C} is a solution to C_h. This is because, to generate C_{h_C}, we only add constraints to C_h. Thus, a solution to C_{h_C} has to satisfy all constraints in C_h.
Therefore, since we are solving a minimization problem, the value of the solution for C_h cannot be larger than that of the solution to C_{h_C}. The intuition here is that the operator-counting heuristic h estimates the total cost of any optimal plan, regardless of the observations, while h_C estimates a plan following all observations, including noise, if any. If there is no noise, the sums of the counts must agree (even if the counts are different), whereas if there is noise, and assuming the noise is evenly distributed, there will be differences in all counts. Thus, our last approach consists of computing the difference between h_C and h, and inferring that the goal hypothesis for which these values are closest must be the correct goal. We call the resulting heuristic h_δ and formalize this approach in Algorithm 3. Here we compute the LP twice, once with only the basic operator-counting constraints (Line 4), and once with the constraints enforcing the observations in the operator counts (Line 8), using these two values to compute h_δ (Line 10). The algorithm then returns the goal hypotheses that minimize h_δ (Line 11). To evaluate the effectiveness of our approaches, we implemented each of the algorithms described earlier and performed the goal recognition process over the large dataset introduced by BID2. This dataset contains thousands of problems for goal and plan recognition under varying levels of observability for a number of traditional IPC domains (Vallati et al. 2018). We implemented our approaches using PYTHON 2.7 for the main recognition algorithms, with external calls to a customized version of the FAST-DOWNWARD BID1 planning system to compute the operator counts. Our customized planner returns not only the operator counts but can also introduce additional constraints before running the CPLEX 12.8 optimization system. We ran experiments on a single core of a 24-core Intel Xeon CPU E5-2620 @ 2.00GHz with 48GB of RAM, with a 2-minute time limit and a 2GB memory limit. Table 1 shows the results for the partially observable, non-noisy fragment of the dataset, whereas TAB3 shows the results for the noisy fragment of the dataset. For the noisy experiments, each set of observations contained at least two spurious actions, which, while valid for the plan, were not actually executed by the agent being observed. These results show that, while not nearly as fast as the h_uniq approach from Pereira et al. with a θ = 0 recognition threshold, the accuracy (Acc %) of our h_δ approach is either competitive or superior in virtually all domains (except for some levels of observability in IPC-GRID, DWR and KITCHEN), and, even for the domains where the accuracy is similar or lower, the spread (SinG) of the resulting goals is consistently lower, i.e., the returned goals are unique for most problems. The accuracy of our approach thus consistently matches or surpasses that of R&G 2009, with a computational cost that is also often smaller than FG 2015. Importantly, the cost of all of our approaches is basically the same within each domain, regardless of the level of observability and noise, since our technique relies on a single call to a planner that computes the operator counts for a single state and then stops the planner. We argue that this is attributable to our inefficient implementation rather than the technique, since, for the h_δ approach, the overhead of the FAST-DOWNWARD pre-processing step is paid multiple times. This is unlike R&G 2009, which uses a modified planning heuristic, and FG 2015, which builds a data structure and explores it at a very high computational cost.
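As a companion to the Algorithm 1 sketch above, here is a minimal sketch of the observation-constrained heuristic h_C and the difference heuristic h_δ (Algorithms 2 and 3), reusing the operator_counts() helper from that sketch. Encoding each observation constraint Y_o ≥ 1 as a singleton "landmark" row and the constraints_for callback are illustrative assumptions, not the authors' implementation:

```python
def recognize_by_hdelta(goals, observations, constraints_for):
    """Algorithm 3: h_delta(G) = h_C(G) - h(G), where h_C adds Y_o >= 1
    for every observed action o.  Goals minimizing h_delta are returned;
    dropping the h term and minimizing h_C alone gives Algorithm 2."""
    scores = {}
    for G in goals:
        actions, cost, landmarks = constraints_for(G)
        h, _ = operator_counts(actions, cost, landmarks)
        obs_rows = [[o] for o in observations]   # Y_o >= 1 as singleton rows
        h_c, _ = operator_counts(actions, cost, landmarks + obs_rows)
        scores[G] = h_c - h
    best = min(scores.values())
    return [G for G, s in scores.items() if s == best]
```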
We note that the results for noisy observations show the greatest impact of h_δ, with an overall higher accuracy and lower spread across all domains but KITCHEN. Finally, results for the KITCHEN domain stand out in our experiments in that some of our approaches consistently show underwhelming performance, both in noisy and non-noisy settings. Counter-intuitively, for this particular domain, the more observations we have available, the worse the performance. This seems to be a problem for all other approaches under noisy conditions, though not under incomplete observations. Moreover, since the loss of accuracy with fuller observability also occurs in the non-noisy setting, we surmise this stems from the domain itself, rather than the algorithm's ability to handle noise, and defer investigation of this issue to future work. Our work follows the tradition of goal and plan recognition as planning algorithms as defined by BID2 BID3. The former work yields higher recognition accuracy in our settings (and hence we chose it as a baseline), whereas the latter models goal recognition as a problem of estimating the probability of a goal given the observations. Such work uses a Bayesian framework to compute the probability of goals given observations by computing the probability of generating a plan given a goal, which they accomplish by running a planner multiple times to estimate the probability of the plans that either comply or not with the observations. Recent research on goal recognition has yielded a number of approaches to deal with partial observability and noisy observations, of which we single out three key contributions. First, BID1 developed a goal recognition approach based on constructing a planning graph and propagating operator costs and the interaction among operators to provide an estimate of the probabilities of each goal hypothesis. While their approach provides probabilistic estimates for each goal, its precision in inferring the topmost goals is consistently lower than ours, often ranking multiple goals with equal probabilities (i.e., having a large spread). Second, BID3 developed an approach that also provides a probabilistic interpretation and explicitly deals with noisy observations. Their approach works through a compilation of the recognition problem into a planning problem that is processed by a planner that computes a number of approximately optimal plans to compute goal probabilities under R&G's Bayesian framework. Finally, BID2 develop heuristic goal recognition approaches using landmark information. This approach is conceptually closer to ours in that we also compute heuristics, but we aim to overcome the potential sparsity of landmarks in each domain by using operator-count information, as well as explicitly handle noise by introducing additional constraints in the heuristic h_C and comparing the distance to the unconstrained h heuristic. We developed a novel class of goal recognition techniques based on operator-counting heuristics from classical planning (Pommerening et al. 2014), which themselves rely on ILP constraints to estimate which operators occur in valid optimal plans towards a goal. The resulting approaches are competitive with the state of the art in terms of high accuracy and low false positive rate (i.e., the spread of returned goals), at a moderate computational cost.
We show empirically that the overall accuracy of our best approach is substantially superior to the state of the art over a large dataset. Importantly, the values of the operator-counting constraints we compute for each of the heuristics can be used as explanations for recognized goals. The techniques described in this paper use a set of simple additional constraints in the ILP formulation to achieve substantial performance, so we expect substantial future work towards further goal recognition approaches and heuristics that explore more refined constraints to improve accuracy and reduce spread, as well as towards deriving a probabilistic approach using operator-counting information. Examples of such work include using the constraints to force the LP to generate the counterfactual operator counts (i.e., non-compliant with the observations) used by the R&G approach, or, given an estimate of the noise, relaxing the observation constraints to allow a number of observations to not be included in the resulting operator counts.
A goal recognition approach based on operator counting heuristics used to account for noise in the dataset.
787
scitldr
It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts. To help address this, we propose using knowledge distillation where single-task models teach a multi-task model. We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers. We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently improves over standard single-task and multi-task training. Building a single model that jointly learns to perform many tasks effectively has been a longstanding challenge in Natural Language Processing (NLP). However, applying multi-task NLP remains difficult for many applications, with multi-task models often performing worse than their single-task counterparts BID30 BID1 BID25. Motivated by these results, we propose a way of applying knowledge distillation BID3 BID0 BID14 so that single-task models effectively teach a multi-task model. Knowledge distillation transfers knowledge from a "teacher" model to a "student" model by training the student to imitate the teacher's outputs. In "born-again networks" BID10, the teacher and student have the same neural architecture and model size, but surprisingly the student is able to surpass the teacher's accuracy. Intuitively, distillation is effective because the teacher's output distribution over classes provides more training signal than a one-hot label; BID14 suggest that teacher outputs contain "dark knowledge" capturing additional information about training examples. Our work extends born-again networks to the multi-task setting. We compare Single→Multi born-again distillation (we use Single→Multi to indicate distilling single-task "teacher" models into a multi-task "student" model) with several other variants (Single→Single and Multi→Multi), and also explore performing multiple rounds of distillation (Single→Multi→Single→Multi). Furthermore, we propose a simple teacher annealing method that helps the student model outperform its teachers. Teacher annealing gradually transitions the student from learning from the teacher to learning from the gold labels. This method ensures the student gets a rich training signal early in training but is not limited to only imitating the teacher. Our experiments build upon recent success in self-supervised pre-training BID7 BID28: we multi-task fine-tune BERT BID8 to perform the tasks from the GLUE natural language understanding benchmark BID41. Our training method, which we call Born-Again Multi-tasking (BAM; code is available at https://github.com/google-research/google-research/tree/master/bam), consistently outperforms standard single-task and multi-task training. Further analysis shows the multi-task models benefit from both better regularization and transfer between related tasks. Multi-task learning for neural networks in general BID4 and within NLP specifically BID6 BID24 has been widely studied. Much of the recent work for NLP has centered on neural architecture design: e.g., ensuring only beneficial information is shared across tasks BID21 or arranging tasks in linguistically-motivated hierarchies BID37 BID13 BID34. These contributions are orthogonal to ours because we instead focus on the multi-task training algorithm.
Distilling large models into small models BID19 BID26 or ensembles of models into single models BID20 BID22 has been shown to improve results for many NLP tasks. There has also been some work on using knowledge distillation to aid in multi-task learning. In reinforcement learning, knowledge distillation has been used to regularize multi-task agents BID27 BID39. In NLP, BID38 distill single-language-pair machine translation systems into a many-language system. However, they focus on multilingual rather than multi-task learning, use a more complex training procedure, and only experiment with Single→Multi distillation. Concurrently with our work, several other recent works also explore fine-tuning BERT using multiple tasks BID29 BID23 BID18. However, they use only standard transfer or multi-task learning, instead focusing on finding beneficial task pairs or designing improved task-specific components on top of BERT. Model. All of our models are built on top of BERT BID8. This model passes byte-pair-tokenized BID35 input sentences through a Transformer network BID40, producing a contextualized representation for each token. The vector corresponding to the first input token, c, is passed into a task-specific classifier. For classification tasks, we use a standard softmax layer: softmax(Wc). For regression tasks, we normalize the labels so they are between 0 and 1 and then use a size-1 NN layer with a sigmoid activation: sigmoid(w^T c). In our multi-task models, all of the model parameters are shared across tasks except for these classifiers on top of BERT, which means less than 0.01% of the parameters are task-specific. Following BERT, the token embeddings and Transformer are initialized with weights from a self-supervised pre-training phase. Training. Single-task training is performed as in BID8. For multi-task training, examples of different tasks are shuffled together, even within minibatches. The summed loss across all tasks is minimized. We use D_τ = {(x_1^τ, y_1^τ), ..., (x_{|D_τ|}^τ, y_{|D_τ|}^τ)} to denote the training set for a task τ and f_τ(x, θ) to denote the outputs for task τ produced by a neural network with parameters θ on the input x (for classification tasks this is a distribution over classes). Standard supervised learning trains θ to minimize the loss on the training set: Σ_{(x_i^τ, y_i^τ) ∈ D_τ} ℓ(y_i^τ, f_τ(x_i^τ, θ)), where ℓ for classification tasks is usually cross-entropy. Knowledge distillation trains the model to instead match the predictions of a teacher model with parameters θ′: Σ_{(x_i^τ, y_i^τ) ∈ D_τ} ℓ(f_τ(x_i^τ, θ′), f_τ(x_i^τ, θ)). Note that our distilled networks are "born-again" in that the student has the same model architecture as the teacher, i.e., all of our models have the same prediction function f_τ for each task. For regression tasks, we train the student to minimize the L2 distance between its prediction and the teacher's instead of using cross-entropy loss. Intuitively, knowledge distillation improves training because the full distribution over labels provided by the teacher provides a richer training signal than a one-hot label. See BID10 for a more thorough discussion. Multi-Task Distillation. Given a set of tasks T, we train a single-task model with parameters θ_τ on each task τ. For most experiments, we use the single-task models to teach a multi-task model with parameters θ: Σ_{τ ∈ T} Σ_{(x_i^τ, y_i^τ) ∈ D_τ} ℓ(f_τ(x_i^τ, θ_τ), f_τ(x_i^τ, θ)). However, we experiment with other distillation strategies as well. Teacher Annealing. In knowledge distillation, the student is trained to imitate the teacher. This raises the concern that the student may be limited by the teacher's performance and not be able to substantially outperform the teacher. To address this, we propose teacher annealing, which mixes the teacher prediction with the gold label during training. Specifically, the term in the summation becomes ℓ(λ y_i^τ + (1 - λ) f_τ(x_i^τ, θ_τ), f_τ(x_i^τ, θ)), where λ is linearly increased from 0 to 1 throughout training. Early in training, the model is mostly distilling to get as useful of a training signal as possible. Towards the end of training, the model is mostly relying on the gold-standard labels so it can learn to surpass its teachers.
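A minimal PyTorch sketch of the annealed distillation objective for one classification task: the student's soft cross-entropy target is a mixture of the gold one-hot label and the teacher's softmax output, so λ = 0 recovers pure distillation and λ = 1 recovers standard supervised learning. The function names and the linear λ schedule over steps are illustrative assumptions; BAM's actual implementation may differ in details:

```python
import torch
import torch.nn.functional as F

def annealed_distillation_loss(student_logits, teacher_logits, gold_labels, lam):
    """Soft cross-entropy of the student against
    lam * one_hot(gold) + (1 - lam) * softmax(teacher)."""
    num_classes = student_logits.size(-1)
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    gold_onehot = F.one_hot(gold_labels, num_classes).float()
    target = lam * gold_onehot + (1.0 - lam) * teacher_probs
    log_student = F.log_softmax(student_logits, dim=-1)
    return -(target * log_student).sum(dim=-1).mean()

def annealing_lambda(step, total_steps):
    """Linear teacher annealing: lambda goes from 0 to 1 over training."""
    return min(1.0, step / float(total_steps))
```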
Data. We use the General Language Understanding Evaluation (GLUE) benchmark BID41, which consists of 9 natural language understanding tasks on English data. Tasks cover textual entailment (RTE and MNLI), question-answer entailment (QNLI), paraphrase (MRPC), question paraphrase (QQP), textual similarity (STS), sentiment (SST-2), linguistic acceptability (CoLA), and Winograd Schema (WNLI). Training Details. Rather than simply shuffling the datasets for our multi-task models, we follow the task sampling procedure from prior work, where the probability of training on an example for a particular task τ is proportional to |D_τ|^0.75. This ensures that tasks with very large datasets don't overly dominate the training. We also use the layerwise-learning-rate trick from BID16. If layer 0 is the NN layer closest to the output, the learning rate for a particular layer d is set to BASE_LR · α^d (i.e., layers closest to the input get lower learning rates). The intuition is that pre-trained layers closer to the input learn more general features, so they shouldn't be altered much during training.
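Both of these training details are easy to make concrete. A short sketch (the function names are illustrative assumptions; the exponent 0.75 and decay 0.9 follow the text):

```python
import numpy as np

def task_sampling_probs(dataset_sizes, alpha=0.75):
    """Probability of drawing a training example from each task,
    proportional to |D_tau| ** alpha, so huge datasets do not dominate."""
    w = np.array(dataset_sizes, dtype=float) ** alpha
    return w / w.sum()

def layerwise_learning_rates(base_lr, num_layers, decay=0.9):
    """Layer 0 is the layer closest to the output; layer d is trained
    with base_lr * decay**d, so layers nearer the input change less."""
    return [base_lr * decay ** d for d in range(num_layers)]
```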
Hyperparameters. For single-task models, we use the same hyperparameters as in the original BERT experiments, except we pick a layerwise-learning-rate decay α of 1.0 or 0.9 on the dev set for each task. For multi-task models, we train the model for longer (6 epochs instead of 3) and with a larger batch size (128 instead of 32), using α = 0.9 and a learning rate of 1e-4. All models use the BERT-Large pre-trained weights. Reporting Results. Dev set results report the average score (Spearman correlation for STS, Matthews correlation for CoLA, and accuracy for the other tasks) on all GLUE tasks except WNLI, for which methods can't outperform a majority baseline. Results show the median score of at least 20 trials with different random seeds. We find using a large number of trials is essential because results can vary significantly for different runs. For example, standard deviations in score are over ±1 for CoLA, RTE, and MRPC for multi-task models. Single-task standard deviations are even larger. Main Results. We compare models trained with single-task learning, multi-task learning, and several varieties of distillation in TAB1. While standard multi-task training improves over single-task training for RTE (likely because it is closely related to MNLI), there is no improvement on the other tasks. In contrast, Single→Multi knowledge distillation improves or matches the performance of the other methods on all tasks except STS, the only regression task in GLUE. We believe distillation does not work well for regression tasks because there is no distribution over classes passed on by the teacher to aid learning. The gain for Single→Multi over Multi is larger than the gain for Single→Single over Single, suggesting that distillation works particularly well in combination with multi-task learning. Interestingly, Single→Multi works substantially better than Multi→Multi distillation. We speculate it may help that the student is exposed to a diverse set of teachers, in the same way ensembles benefit from a diverse set of models, but future work is required to fully understand this phenomenon. In addition to the models reported in the table, we also trained Single→Multi→Single→Multi models. However, the difference with Single→Multi was not statistically significant, suggesting there is little value in multiple rounds of distillation. [TAB3 excerpt, GLUE test-set averages: BERT-Base BID8 78.5; BERT-Large BID8 80.5; BERT on STILTs BID29 82.0; MT-DNN BID23 82.] Overall, a key benefit of our method is robustness: while standard multi-task learning produces mixed results, Single→Multi distillation consistently outperforms standard single-task and multi-task training. We also note that in some trials single-task training resulted in models that score quite poorly (e.g., less than 91 for QQP or less than 70 for MRPC), while the multi-task models have more dependable performance. Test Set Results. We compare against recent work by submitting to the GLUE leaderboard. We use Single→Multi distillation. Following the procedure used by BERT, we train multiple models and submit the one with the highest average dev set score to the test set. BERT trained 10 models for each task (80 total); we trained 20 multi-task models. Results are shown in TAB3. Our work outperforms or matches existing published results that do not rely on ensembling. However, due to the variance between trials discussed under "Reporting Results," we think these test set numbers should be taken with a grain of salt, as they only show the performance of individual training runs (which is further complicated by the use of tricks such as dev set model selection). We believe significance testing over multiple trials would be needed to have a definitive comparison. Single-Task Fine-Tuning. A crucial difference distinguishing our work from the STILTs, Snorkel MeTaL, and MT-DNN KD methods in TAB3 is that we do not single-task fine-tune our model. That is, we do not further train the model on individual tasks after the multi-task training finishes. While single-task fine-tuning improves results, we think to some extent it defeats the purpose of multi-task learning: the result of training is one model for each task instead of a model that can perform all of the tasks. Compared to having many single-task models, a multi-task model is simpler to deploy, faster to run, and arguably more scientifically interesting from the perspective of building general language-processing systems. We evaluate the benefits of single-task fine-tuning in TAB5. Single-task fine-tuning initializes models with multi-task-learned weights and then performs single-task training. Hyperparameters are the same as for our single-task models except we use a smaller learning rate of 1e-5. While single-task fine-tuning unsurprisingly improves results, the gain on top of Single→Multi distillation is small, reinforcing the claim that distillation provides many of the benefits of single-task training while producing a single unified model instead of many task-specific models. Ablation Study. We show the importance of teacher annealing and the other training tricks, both of which improve scores. Using pure distillation without teacher annealing (i.e., fixing λ = 0) performs no better than standard multi-task learning, demonstrating the importance of the proposed teacher annealing method. Comparing Combinations of Tasks.
Training on a large number of tasks is known to help regularize multi-task models BID32. A related benefit of multi-task learning is the transfer of learned "knowledge" between closely related tasks. We investigate these two benefits by comparing several models on the RTE task, including one trained with a very closely related task (MNLI, a much large textual entailment dataset) and one trained with fairly unrelated tasks (QQP, CoLA, and SST). We use Single→Multi distillation (Single→Single in the case of the RTE-only model). Results are shown in TAB8. We find both sets of auxiliary tasks improve RTE performance, suggesting that both benefits are playing a role in improving multi-task models. Interestingly, RTE + MNLI alone slightly outperforms the model performing all tasks, perhaps because training on MNLI, which has a very large dataset, is already enough to sufficiently regularize the model. We have shown that Single→Multi distillation combined with teacher annealing produces consistently better than standard single-task or multi-task training. Achieving robust multi-task gains across many tasks has remained elusive in previous research, so we hope our work will make multi-task learning more broadly useful within NLP. However, with the exception of closely related tasks with small datasets (e.g., MNLI helping RTE), the overall size of the gains from our multi-task method are small compared to the gains provided by transfer learning from self-supervised tasks (i.e., BERT). It remains to be fully understood to what extent "self-supervised pre-training is all you need" and where transfer/multi-task learning from supervised tasks can provide the most value.
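As a footnote to the training details described earlier, a rough sketch of the task-sampling rule (probability proportional to |D_τ|^0.75) and the layerwise learning-rate decay (BASE LR · α^d, with layer 0 closest to the output) is given below. The helper names and the example dataset sizes (approximate GLUE training-set counts) are assumptions for illustration only.

import random

def task_sampling_weights(dataset_sizes, power=0.75):
    # Probability of drawing an example from task t is proportional to |D_t|**0.75,
    # so tasks with very large datasets do not dominate training.
    weights = {t: n ** power for t, n in dataset_sizes.items()}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

def layerwise_learning_rates(num_layers, base_lr, alpha=0.9):
    # Layer 0 is closest to the output; layers closer to the input get
    # lr = base_lr * alpha**d, i.e. smaller updates for more general features.
    return [base_lr * (alpha ** d) for d in range(num_layers)]

sizes = {"MNLI": 392702, "RTE": 2490, "CoLA": 8551}   # approximate training-set sizes
probs = task_sampling_weights(sizes)
next_task = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
lrs = layerwise_learning_rates(num_layers=24, base_lr=1e-4, alpha=0.9)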
Distilling single-task models into a multi-task model improves natural language understanding performance.
788
scitldr
The detection of out of distribution samples for image classification has been widely researched. Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution. This paper adapts state-of-the-art methods for detecting out of distribution images for image classification to the new task of detecting out of distribution pixels, which can localise the unusual objects. It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task. The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than their image-level counterparts. Figure 1: Image from the LostAndFound dataset , where two unlikely objects (storage crates) are almost entirely incorrectly predicted to be road. The Max Softmax method clearly highlights these crates as OOD. (best viewed in colour) Many applications using machine learning (ML) may benefit from out of distribution (OOD) detection to improve safety. When inputs are determined to be out of distribution, the output of an ML algorithm should not be trusted. A large body of research exists for detecting entire images as OOD for the task of image classification. Image-level OOD detection outputs a classification for the entire image; this coarse level of detection may be inadequate for many safety critical applications, including autonomous driving. Most of the pixels in an image taken from an onboard camera will be in distribution (ID), i.e. an image of a road scene with cars, people, and roadway-but an unusual object that was not part of the training set may cause only a small number of OOD pixels. Extending the framework to semantic segmentation networks will allow each pixel to have an "in" or "out of" distribution classification. Applied to autonomous driving, groups of pixels classified as OOD would be considered as unknown objects. Depending on the location of the unknown objects, a planner would then proceed with caution or hand over control to a safety driver. Another application is automatic tagging of images with OOD objects, which would then be sent for human labelling. Figure 1 shows a failure case where OOD detection is beneficial. The two crates are predicted as road. The right image of this figure shows the of pixel-level OOD detection using one of the proposed methods, which clearly identifies the unusual objects. This paper adapts existing state-of-the-art image-level OOD detection methods to the new task of pixel-level OOD classification and compares their performance on a new dataset designed for this task. In addition to adapting the methods, we address the question of whether the best-performing image-level methods maintain their performance when adapted to the new task. In order to answer this question, we also propose pixel-level OOD detection performance metrics, drawing both on existing image-level OOD detection and semantic segmentation performance metrics. Further, we design two new datasets for pixel-level OOD detection with test images that contain both pixels that are in distribution and pixels that are out of distribution, evaluated with two different network architectures-PSPNet and DeeplabV3+ . 
Somewhat surprisingly, our evaluation shows that the best performing pixel-level OOD detection methods were derived from image-level OOD detection methods that were not necessarily the best performing on the image-level OOD detection task. In summary, the contributions of this paper are the following: • adaptation of image-level OOD detection methods to pixel-level OOD detection and their evaluation; • training and evaluation datasets for pixel-level OOD detection evaluation derived from existing segmentation datasets; and • a new metric for pixel-level OOD detection, called MaxIoU. We use two criteria to select existing image-level OOD detection methods to adapt to pixellevel OOD detection. First, the candidate methods are top performers on image classification datasets. Second, they must be computationally feasible for semantic segmentation. Max Softmax , ODIN , Mahalanobis and Confidence fit both criteria. The Entropy , Sum of Variances (VarSum) , and Mutual Information methods do not meet the first criterion, but are included as an existing uncertainty baseline for the pixel-level OOD classification. An example method that does not meet the second criterion is an ensemble method by , with an ensemble of K leave-out classifiers. In general, images and architectures for semantic segmentation are larger than for image classification, and therefore an ensemble method is much less feasible for segmentation than classification due to GPU memory limitations. Generative Adversarial Networks (GANs) and Auto-Encoders (AEs) are often used in the literature as a method for OOD detection. GANs and AEs are excluded to limit the scope of this work. Table 1 briefly describes the selected image-level OOD detection methods and any architecture modifications necessary to adapt them to OOD detection. 1 The Dim reduction modification is an additional penultimate layer that is a 1 × 1 convolution reducing the depth to 32. DeeplabV3+ has a much larger feature extractor than PSPNet, therefore due to hardware limitation, the Mahalanobis method is not evaluated on this architecture. Each original/adapted method produces a value that can be thresholded to predict whether an image/pixel is OOD. All metrics used in our evaluation are threshold independent, therefore no thresholds are discussed in this section. See Appendix A for more detailed description of each OOD detection method. to OOD data prior to evaluation. This unfair comparison has been criticised in the past . Most methods have a input perturbation step; however, this is removed as it also requires access to OOD data before evaluation. Previous work on image-level OOD detection designates a dataset used for training as the ID dataset, and the dataset for testing as the OOD dataset. This framework is not easily extended to semantic segmentation, as any two datasets may share features with the ID dataset. For example, people may exist in both datasets, but one is indoor scenes and the other is outdoor scenes. attempt to get around this issue by inserting animals from the COCO dataset or internet images into the Cityscapes dataset. The lack of realism of the inserted images (improper scale and hard boundaries) make the created dataset insufficient for OOD detection. The remainder of this section describes the datasets used in this work and any modifications required. The dataset used for training the weights of all networks is the unmodified Cityscapes dataset . 
It is the most prominent dataset for training and benchmarking semantic segmentation for road scene images. The diverse SUN dataset and the India Driving Dataset (IDD) are used as the main evaluation datasets. The main motivation for using the SUN dataset is that it has a large variety of scenes (e.g. street, forest, and conference room), as well as a large variety of labels (e.g. road, door, and vase). In total there are 908 scene categories and 3819 label categories. Anonymous label submissions are ignored, as their validity is not confirmed. However, the SUN dataset is not very realistic in terms of what a vehicle camera would see. The IDD dataset is an autonomous vehicle dataset that has all the same labels of Cityscapes with the addition of the autorickshaw class. Although the classes are the same, the instances tend to be OOD, due to different building architecture, road infrastructure, etc. This shift in features leads to higher average values OOD predictions as shown in Figure 2. Therefore only the car class as ID and auto-rickshaw class as OOD are used for evaluation. The SUN dataset labels have to be modified before the dataset can be used for evaluating OOD detection. 2 The approach for modifying this dataset is similar to the open world dataset created by. Let C = {person, car, ignore, ...} be the set of labels available in Cityscapes. Let S = {person, roof, chair, ...} be the set of labels available in the SUN dataset. Ambiguous classes such as "wall" (can appear indoor and outdoor) or "path" (a sidewalk or a dirt path in the woods) are sorted into a third set A ⊂ S. The map M: S → C ∪ {OOD} is defined as: For every image in the SUN dataset, each pixel label l i gets a new label l i = M(l i). Pixels with the "ignore" class are given a weight of 0 during evaluation. All the images in the SUN dataset have various dimensions. To prevent artefacts in the input after resizing, only images that have more than 640 · 640 = 409, 600 pixels are used. All images are resized to the same size as Cityscapes (1024 × 2048). The SUN dataset is split into a train set and an evaluation set (25%/75% split). It is stressed that the training set is used only to select the best hyperparameters of the ODIN method. Following previous work on image-level OOD detection, a synthetic dataset of random normally distributed images are used as well. The random normal noise is usually very easily detected by all methods, therefore Perlin noise images are used as well. Perlin noise is a smooth form of noise that has large "blobs" of colour that is much harder to detect and filter. Each of these datasets have the entire image labelled as OOD. All OOD datasets used are mixed with Cityscapes evaluation sets. The ing ratio of ID to OOD pixels for IDD/Cityscapes and SUN/Cityscapes is about 7.5:1 and 3:1 respectively. Since ODIN requires ODD data for hyperperameter tuning, Cityscapes training set is mixed with a held-out training set of OOD data in the same manner as the evaluation sets. There are five metrics used to evaluate the performance of each model. The first four listed below are the same metrics usually used by previous works on OOD detection (; ; ;). Since the output of semantic segmentation is so much larger than image classification, the below metrics must be approximated. This is done by using 400 linearly spaced thresholds between 0 and 1 and tracking all true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) for each threshold. 
Each pixel contributes to one of TP, TN, FP, FN for a given threshold, based on the ground truth and OOD prediction value -accumulated across all images. • AUROC -Area under the receiver operating characteristic (ROC) curve. The ROC curve shows the false positive rate (FPR) FP FP+TN against the true positive rate (TPR) TP TP+FN. The area under this curve is the AUROC metric. • AUPRC -Area under the precision recall (PR) curve. The PR curve shows the precision TP TP+FP against the TPR (or recall). The area under this curve is the AUPRC. • FPRatTPR -FPR at 95% TPR. Extracted from the ROC curve, this metric is the FPR value when the TPR is 95%. • MaxIoU -Max intersection over union (IoU). IoU is calculated by TP TP+FP+FN. MaxIoU is the maximum IoU over all thresholds. A common metric in the literature, coined detection error , is excluded as it is a linear transformation of the FPRatTPR metric. Therefore, it adds no useful information. The inspiration for MaxIoU issued from the semantic segmentation community. Mean intersection over union (mIoU) is the canonical performance metric for semantic segmentation. MaxIoU is similar, but targets tasks that are threshold dependant. Thresholds selected by MaxIoU punish false positives more than AUROC. The optimal threshold is generally greater, ing in fewer positive predictions than for AUROC. To verify that the MaxIoU is complimentary to AUROC, the optimal thresholds selected by each were experimentally compared. The mean absolute difference between the threshold selected via Youden index and that chosen for MaxIoU was found to be 0.039. PSPNet and DeeplabV3+ network architectures are used with the Resnet and Xception feature extractors respectively. The two driving factors for these networks are: near top performance on the Cityscapes benchmark (2.6 and 1.7 less mIoU than the top scoring algorithm) and the final operation before the softmax classification layer being a bi-linear upsample. The upsample ensures that any method's OOD prediction will be directly correlated with the pixel prediction from the softmax layer, as most methods rely on the softmax layer. There is a clear relationship between any spatial location in the penultimate layer and an output pixel. The Xception feature extractor is much larger than the Resnet -therefore due to hardware limitations, the space intensive Mahalanobis method is only evaluated on PSPNet. There are two main research questions that this paper focuses on. Each question has an associated experiment. • RQ1: Do the required modifications to architecture and loss functions negatively affect the semantic segmentation performance of the network? • RQ2: Which OOD detection method performs the best? To answer RQ1, we evaluate the semantic segmentation performance on Cityscapes using the standard class mean intersection over union (mIoU) metric. As long as the performance drop of modified networks is not too large, the modifications do not interfere with the original task. RQ2 is answered by evaluating each method on the datasets described in Section 3. Figure 3 shows the comparison of the performance of the different methods using the PSPNet architecture. Each graph shows a different metric (c.f . Section 4) for each dataset. Max Softmax, ODIN, and Mahalanobis follow the same trend as their image classification counterparts with the SUN dataset, increasing in performance in that order. However, for the IDD dataset the order is Mahalanobis, Max Softmax, then ODIN, in increasing performance. 
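As a brief aside before the remaining results, the threshold-sweep computation behind the metrics defined above (AUROC, FPR at 95% TPR, and the proposed MaxIoU) could be sketched as follows. This is an illustrative single-image version that assumes OOD scores already lie in [0, 1]; it is not the authors' evaluation code, and the accumulation across all images and the AUPRC curve are omitted for brevity.

import numpy as np

def sweep_metrics(scores, is_ood, num_thresholds=400):
    # scores: flattened per-pixel OOD scores in [0, 1]; is_ood: 1 for OOD pixels, 0 for ID.
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    pos = int(is_ood.sum())
    neg = int((1 - is_ood).sum())
    tpr, fpr, iou = [], [], []
    for t in thresholds:
        pred = scores >= t
        tp = int(np.count_nonzero(pred & (is_ood == 1)))
        fp = int(np.count_nonzero(pred & (is_ood == 0)))
        fn = pos - tp
        tpr.append(tp / max(pos, 1))
        fpr.append(fp / max(neg, 1))
        iou.append(tp / max(tp + fp + fn, 1))   # IoU = TP / (TP + FP + FN)
    tpr, fpr = np.array(tpr), np.array(fpr)
    auroc = np.trapz(tpr[::-1], fpr[::-1])                    # area under the ROC curve
    fpr_at_95tpr = float(fpr[np.argmin(np.abs(tpr - 0.95))])  # FPR where TPR is ~95%
    max_iou = max(iou)                                        # MaxIoU over all thresholds
    return auroc, fpr_at_95tpr, max_iou

scores = np.random.rand(10000)
is_ood = (np.random.rand(10000) < 0.25).astype(int)
auroc, fpr95, max_iou = sweep_metrics(scores, is_ood)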
For image-level OOD detection, the Confidence method outperforms the Max Softmax baseline. However, for pixel-level OOD detection it is worse. For real datasets, VarSum is worse than Mutual Information, but better on the synthetic datasets. Across each metric, VarSum has the biggest performance increase from the modified SUN and IDD datasets to the random normal dataset, moving from worst to near top performance. This however is not the same for the Perlin noise dataset, as the low frequency noise is not easily filtered. The Confidence method seems to mostly learn to highlight class boundaries, as that is where prediction errors are likely to occur. Therefore the prediction loss force lower confidence levels. This makes it less suitable for the random normal and Perlin noise datasets, where the network predicts one single class for the majority of the input. Figure 4 shows the comparison of the performance of the different methods using the DeeplabV3+ architecture. With respect to AUROC, AUPRC, and MaxIoU on the IDD dataset Mutual Information Figure 5: Comparison of the Mutual Information method on an IDD dataset image, successfully predicted. The top row is using the PSPNet architecture, the bottom row is using the DeeplabV3+ architecture. The columns from left to right are: input image, ground truth, class prediction (Cityscapes colours), ODD prediction. The OOD prediction is masked to only cars and auto-rickshaws. Best viewed in colour. has the best performance, followed by ODIN. On all datasets and metrics, VarSum has the worst or is near worst performance. Confidence has much better performance with the DeeplabV3+ architecture than the PSPNet architecture, especially on the random datasets. The for DeeplabV3+ are much less conclusive than PSPNet as the relative ordering across the different metrics changes significantly. The metrics values reported in the original works for the image-level OOD detection for all methods with image-level OOD detection counterparts are very close to 1 (∼0.95 and above). However, in our experiments, the are much less than 1. One peculiar is the significant difference in performance between the Confidence method on DeeplabV3+ and PSPNet on the random normal and Perlin noise datasets. ODIN performs well on all datasets and architectures. The performance of this method is aided by its access to OOD data during hyperparameter tuning, however. Figure 5 shows a comparison of the Mutual Information method using both the PSPNet and DeeplabV3+ architecture. Mutual information is one of the worst performing on PSPNet and the best performing on DeeplabV3+. There are two major observations to note. The first is that both networks label different parts of the auto-rickshaw as different classes. The second is the increase in OOD prediction of all cars from DeeplabV3+ to PSPNet, while the OOD prediction of the autorickshaw remains approximately the same. Both architectures successfully predict the majority of the pixels correctly; however, the separation is better for DeeplabV3+. Figure 6 shows a comparison of the Entropy method using both the PSPNet and DeeplabV3+ architecture. The auto-rickshaw is mistaken by both architectures as a car, therefore the ing OOD prediction is similar to other cars in the scene. This failure highlights a more general failure of pixel-level OOD detection methods. When the network is confident about a wrong prediction, the OOD methods are fooled as well. 
The drop in performance for pixel-level OOD detection is likely due to features that cause large disruptions at the pixel-level, but would not affect an entire image; for example, shadows, occlusion, and far away objects. Figure 7 shows an example of shadows and far away objects in the bottom row. At the end of the road, most pixels are high OOD values as well as the right side of the scene, which is in the shade of a building. The top row of Figure 7 shows an interesting failure case of a flooded road being predicted as road with a low OOD value. As can be seen in all example outputs, class boundaries are highlighted. A classical computer vision algorithm was developed, using a series of erosion, dilation and other filters to remove these boundaries. In general performance was increased; however, the increase was on the order of 10 −3. To our knowledge, there are very few works researching pixel-level OOD detection. create a dataset by overlaying animals and objects from the COCO dataset on top of the Cityscapes dataset , this however lacks realism. This new dataset is tested with various OOD detection methods. train a segmentation network with two datasets-one ID and one OOD dataset. The network learns to classify between the two on a per pixel-level. This is compared to the Max Softmax baseline. The major flaw in this method is that the network learns its weights from OOD samples. There are some commonalities to Active Learning for semantic segmentation . These studies attempt to use some form of output uncertainty to choose which images are best for training/labelling in order to reduce the number of training examples needed. They produce a heat map similar to the OOD detection output. These heat maps are then aggregated across a whole image to produce a score for the entire image. create an open world dataset. Known object labels are drawn from the COCO dataset , and labels drawn from the NYU dataset are relabelled as unknown if the class doesn't exist in COCO. This methodology is very similar to the modified SUN dataset in Section 3. A generic object instance level segmentation algorithm is developed, based on a class specific object detector, a boundary detector and simulated annealing, and is evaluated on the new dataset. This approach splits the image into visually distinct connected regions, but is too slow for real-time applications. Several methods for detecting OOD pixels were adapted from image-level OOD detection, as well as a pixel uncertainty estimation. These methods were compared using metrics previously established by OOD detection works, as well as a new metric that has roots in the semantic segmentation task. This paper also contributed two new datasets for pixel-level OOD classification derived from semantic segmentation datasets that have common classes but also unique ones. There is great room for improvement for pixel-level OOD detection. One shortcoming for all the methods compared in this paper is the ability to distinguish between class boundary pixels and OOD pixels. We tested classical computer vision techniques that could be used to visually fix this problem, but the performance increase was negligible. The ODIN and Mahalanobis methods have the best performance with PSPNet and SUN dataset, beating the VarSum, Mutual Information, and Confidence methods by a significant margin. However, Mutual Information has the best performance with DeeplabV3+ and the IDD dataset, with the other methods following closely. 
Therefore the ODIN, Mahalanobis, and Mutual Information methods should be considered the baseline for further research in pixel-level OOD detection. Understanding the faults of pixel-level OOD detectors is crucial for progress. This would include categorising the failure cases of a detector. For example, understanding why a flooded road is not highlighted, and what makes that different to shadows falsely being highlighted. Each subsection uses notation close to the original works, so that readers can refer to said works for more detail. The appendix also provides additional detail. Notation should not be carried between sections unless explicitly stated. Assume the neural network is a function f, and input image is x. The set of pixels P will be used with typical subscripts i, j ∈ P. For example f (x) i is the i th pixel of the of f (x). Note that most methods have a input perturbation step. They are included here for completeness, however, they are not used in any evaluation as it requires access to OOD data. This access to OOD data has been criticised as an unfair comparison to methods that do not require access to OOD data. At test time, dropout can be used as an estimate for model uncertainty. A multiplier of 0 or 1 is randomly chosen for each neuron at test time. use the variation in predictions of a model that uses dropout at test time to compute an estimate of model uncertainty. The output is a variance per class, the sum of these variances is the estimate of model uncertainty. This is performed per pixel to get the value s i for pixel i ∈ P. A.2 show that the max softmax value can be used to detect OOD examples as a baseline for image classification. The softmax function is defined as: Where Sŷ(x) is the max softmax value andŷ is used to signify the chosen index. Given a prediction p i = f (x) i, the max softmax value for each pixel i is v i = Sŷ i (p i), and that value is used to determine if that pixel is OOD. A.3 create a similar method to the max softmax value, dubbed ODIN. This method adds temperature scaling and input preprocessing. The softmax function in Equation 3 is modified to include a temperature value T: Liang et al. found that perturbing the input in the direction of the gradient influences ID samples more than OOD samples, thus separating ID and OOD examples more. The input preprocessing step is: where is a hyperparameter chosen from a set of 21 evenly spaced values starting at 0 and ending at 0.004. The best temperature value is chosen from a predefined set of temperatures {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000}. The temperature-scaled and preprocessed max softmax score v i = Sŷ(x; T) i for pixel i is used to predict if that pixel is OOD. A.4 use the Mahalanobis distance for detecting OOD samples. The Mahalanobis distance is the number of standard deviations a vector is away from the mean, generalised to many dimensions. Each feature vector at a spatial location in the penultimate layer is assumed to be normally distributed. Each distribution is parameterised by the pixel class mean µ ci and global class covariance Σ c for pixel i and class c. The global class covariance is computed for all pixels of a given class, independent of their location. Initial tests showed that using pixel class means and a global class covariance has better performance than global or pixel class mean and pixel class covariance, therefore they are used throughout. The labels are resized to match the height and width of the penultimate layer using nearest neighbour interpolation. 
The two quantities µ ci and Σ c are computed as follows: where N c is the number of examples of class c, and X j is the j th example of the training dataset X. f l is the output of the l th layer of the network f. Here l is the second to last layer. Each spatial location has a class distance and minimum distance computed by: This increases the number of matrix multiplications required to compute each pixel distance. Due to hardware memory limitations, a dimensionality reduction layer is needed after the penultimate layer, reducing the depth from 512 to 32 via a 1 × 1 convolution. An input preprocessing step is also performed. The new inputx is computed by: Instead of a logistic regression layer, the minimum distance is normalised to have zero mean and unit variance. The sigmoid function σ is applied to clamp to the interval: where µ and s are the mean and standard deviation of all M (x) computed over the whole training dataset. v i is used to determine if pixel i is OOD. The prediction values are resized, with bi-linear interpolation, to the original input size. train a secondary branch of an image classification network to output a single confidence value. This confidence value is learned via a regularization loss. The loss gives "hints" to the network when the confidence is low, and it penalizes low confidence. Similar to the image classification method, a secondary branch is added to the network f. Therefore c i, p i = f (x) i, where c i is the confidence value and p i is the prediction vector for pixel i ∈ P. The new branch is trained by creating a new prediction p i as: where B is a Bernoulli distribution and y i is the one hot ground truth vector for pixel i. The mean negative log likelihood is then applied to the new p i as well as a regularization term is added to force each c i to 1 (i.e. high confidence): where L is the total loss that is used to train the network, and λ is a hyperparameter. λ is 0.5 in all experiments. At test time a preprocessing step is applied to each pixel, and is computed using the gradients of the L c loss. Note that the adapted L c sums over all the pixel confidence predictions c i, therefore the gradient implicitly sums over output pixels as well. Letp i,c i = f i (x), then v i =c i is used to determine if a pixel is OOD. A.6 ENTROPY Shannon entropy is an information theoretic concept, that is used to determine how much information a source contains. The entropy equation is H: R n → R: Since H was developed for probabilities, x must behave like a probability distribution meaning: ∀i, x i >= 0 train an image-level classifier with a third outlier dataset that is disjoint from both the training and OOD dataset. An auxiliary loss is added minimising the entropy of predictions of outlier images. The network learns to predict uniform values for all classes. The max softmax value is then used at test time to determine if a sample is OOD. The entropy function is applied to the softmax output, which satisfies the properties in Equations 21 and 22. Let P = f (x) then v i is used to determine if pixel i is OOD. develop Bayesian uncertainty estimation methods and evaluation metrics to evaluate if higher uncertainty is correlated with higher error rates. The estimation can be used for OOD detection as well, similar to the Entropy and VarSum methods. There are two quantities required. Predictive entropy: and Aleatoric Entropy: In both equations, x is the input and y is the prediction. ω i is the set of weights selected by dropout at iteration i. 
Mutual information is then the difference between these two quantities, MI(y|x) = PE(y|x) − AE(y|x), i.e. the predictive entropy minus the aleatoric entropy, and MI(y|x) is used to determine whether a pixel is OOD or ID.
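A compact sketch of this per-pixel mutual-information score with Monte Carlo dropout is given below; the assumption that the segmentation model outputs (N, K, H, W) logits and keeps its dropout layers active when placed in training mode is illustrative rather than tied to the exact PSPNet or DeeplabV3+ implementations used above.

import torch
import torch.nn.functional as F

def mutual_information_map(model, image, num_samples=8, eps=1e-8):
    # Per-pixel mutual information = predictive entropy of the mean prediction
    # minus the mean (aleatoric) entropy of the individual dropout predictions.
    model.train()  # keep dropout active at test time (in practice, only the dropout layers)
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(image), dim=1)
                             for _ in range(num_samples)])       # (S, N, K, H, W)
    mean_p = probs.mean(dim=0)
    predictive_entropy = -(mean_p * (mean_p + eps).log()).sum(dim=1)           # (N, H, W)
    aleatoric_entropy = -(probs * (probs + eps).log()).sum(dim=2).mean(dim=0)  # (N, H, W)
    return predictive_entropy - aleatoric_entropy    # higher values = more likely OOD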
Evaluating pixel-level out-of-distribution detection methods on two new real world datasets using PSPNet and DeeplabV3+.
789
scitldr
We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by ``fooling'' a special domain classifier network. However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes. This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), which encourages the generator to output more discriminative features for the target domain. Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art. Transferring knowledge learned by deep neural networks from label-rich domains to new target domains is a challenging problem, especially when the source and target input distributions have different characteristics. Such domain shifts occurs in many practical applications. For example, while simulated driving images rendered by games provide a rich source of labeled data for semantic segmentation BID19, deep models trained on such source data do not transfer well to real target domains (FIG0). When target-domain labels are unavailable for fine-tuning, unsupervised domain adaptation must be applied to improve the source model. Recent methods for unsupervised domain adaptation attempt to reduce the discrepancy between the source and target features via adversarial learning BID28; BID4 ). They divide the base network into a feature encoder G and classifier C, and add a separate domain classifier (critic) network D. The critic takes the features generated by G and labels them as either source-or target-domain. The encoder G is then trained with an additional adversarial loss that maximizes D's mistakes and thus aligns features across domains. However, a major drawback of this approach is that the critic simply predicts the domain label of the generated point and does not consider category information. Thus the generator may create features that look like they came from the right domain, but are not discriminative. In particular, it can generate points close to class boundaries, as shown in FIG0 (e), which are likely to be misclassified by the source model. We argue that to achieve good performance on the target data, the adaptation model must take the decision boundaries between classes into account while aligning features across domains (FIG0). Moreover, since our setting is unsupervised adaptation, this must be accomplished without labels on target data. In this paper, we propose a novel adversarial alignment technique that overcomes the above limitation and preserves class boundaries. We make the following observation: if the critic could detect points near the decision boundary, then the generator would have to avoid these areas of the feature space in order to fool the critic. Thus the critic would force the generator to create more discriminative features. How can we obtain such a critic? 
If we alter the boundary of the classifier C slightly and measure the change in the posterior class probability p(y|x), where y and x denote class and We propose to use the boundary information to achieve low-density separation of aligned points.input respectively, then samples near the decision boundary are likely to have the largest change. In fact, this posterior discrepancy is inversely proportional to the distance from the class boundary. We thus propose to maximize this posterior discrepancy to turn C into a critic sensitive to nondiscriminative points. We call this technique Adversarial Dropout Regularization. Here, dropout is not used in the standard way, which is to regularize the main classifier and make it insensitive to noise. Instead, we use dropout in an adversarial way, to transform the classifier into a critic sensitive to noise. Compared to previous adversarial feature alignment methods, where the distributions p(x) are aligned globally, our method aligns target features away from decision boundaries, as illustrated in FIG0 (f).Our ADR approach has several benefits. First, we train the generator G with feedback from the classifier C, in contrast to existing methods, which use an unrelated critic D. Second, our method is general and straightforward to apply to a variety of domain adaptation problems, such as classification and semantic segmentation. Finally, since ADR is trained to align distributions, it is also applicable to semi-supervised learning and training of generative models, such as Generative Adversarial Networks (GANs) BID6 ). Through extensive experiments, we demonstrate the benefit of ADR over existing domain adaptation approaches, achieving state-of-the-art in difficult domain shifts. We also show an application to semi-supervised learning using GANs in appendix. Domain Adaptation. Recent unsupervised domain adaptation (UDA) methods for visual data aim to align the feature distributions of the source and target domains; BID28; BID5 Long et al. (2015b); BID31 ). Such methods are motivated by theoretical stating that minimizing the divergence between domains will lower the upper bound of the error on target domain BID0 ). Many works in deep learning utilize the technique of distribution matching in hidden layers of a network such as a CNN BID28; BID5 Long et al. (2015b) ). However, they measure the domain divergence based on the hidden features of the network without considering the relationship between its decision boundary and the target features, as we do in this paper. Low-density Separation. Many semi-supervised learning (SSL) methods utilize the relationship between the decision boundary and unlabeled samples, a technique called low-density separation (; BID12). By placing the boundary in the area where the unlabeled samples are sparse, these models aim to obtain discriminative representations. Our method aims to achieve low-density separation for deep domain adaptation and is related to entropy minimization for semi-supervised learning BID8 ). used entropy minimization in their approach to directly measure how far samples are from a decision boundary by calculating entropy of the classifier's output. On the other hand, our method tries to achieve low-density separation by slightly moving the boundary and detecting target samples sensitive to the movement. As long as target samples features are robust to the movement, they will be allowed to exist relatively nearby the boundary compared to source samples, as FIG0 shows. 
In entropy minimization is only a part of the overall approach. To compare our ADR approach to entropy minimization more directly, we use a new baseline method. To our knowledge, though this method has not been proposed by any previous work, it is easily achieved by modifying a method proposed by BID23 ). For this baseline, we train a model that generates features to minimize the entropy of the output probability for target samples. The details of the baseline are provided in appendix. In short, the generator tries to minimize the entropy of the target samples, whereas the critic tries to maximize it. The entropy is directly measured by the output of the classifier. This baseline is similar to our approach in that the goal of the method is to achieve low-density separation. Dropout. Dropout is a method that prevents deep networks from overfitting BID24 ) by randomly dropping units from the neural network during training. Effectively, dropout samples from an exponential number of different thinned networks at training time, which prevents units from co-adapting too much. At test time, predictions are obtained by using the outputs of all neurons. If the thinned networks are able to classify the samples accurately, the full network will as well. In other words, dropout encourages the network to be robust to noise. In our work, we use dropout to regularize the feature generation network G, but in an adversarial way. We train the critic C to be sensitive to the noise caused by dropout and use C to regularize G so that it generates noise-robust features. To our knowledge, this use of dropout is completely different from existing methods. We assume that we have access to a labeled source image x s and a corresponding label y s drawn from a set of labeled source images {X s, Y s}, as well as an unlabeled target image x t drawn from unlabeled target images X t. We train a feature generation network G, which takes inputs x s or x t, and a network C that acts as both the main classifier and the critic. When acting as the classifier, C takes features from G and classifies them into K classes, predicting a K-dimensional vector of logits {l 1, l 2, l 3 ...l K}. The logits are then converted to class probabilities by applying the softmax function. Namely, the probability that x is classified into class j is denoted by p(y = j|x) = DISPLAYFORM0 We use the notation p(y|x) to denote the K-dimensional probabilistic output for input x. When C is acting as the critic, we want it to detect the feature encodings of target samples near the decision boundary. We propose to make C sensitive to such samples by slightly perturbing its decision boundary and measuring the change in the posterior class probability p(y|x). This change is likely to be largest for samples near the decision boundary. The network C is then trained to increase this change, while the feature generation network G is trained to decrease it. Through this adversarial training, G learns to'fool' the critic and generate target features far away from the decision boundary, thus avoiding ambiguous features. The weights of G can be initialized either by pre-training on some auxiliary dataset (e.g., ImageNet), or with random weights, while C uses random initialization. In the next section, we show how we utilize dropout to perturb the boundary in the critic and measure sensitivity. We then show the training procedure of our method. Finally, we give some intuition behind adversarial dropout and improve our method based on this insight. 
Consider the standard training of a neural network using dropout. For every sample within a minibatch, each node of the network is removed with some probability, effectively selecting a different classifier for every sample during training. We harness this idea in a very simple way. We forward input features G(x t) to C twice, dropping different nodes each time and obtaining two different output vectors denoted as C 1 (G(x t)), C 2 (G(x t)). In other words, we are selecting two different classifiers C 1 and C 2 from C by dropout as in Fig. 2. In the figure, the corresponding posterior probabilities are indicated as p 1 (y|x t), p 2 (y|x t), abbreviated as p 1 and p 2 in the following discussion. In order to detect the change of predictions near the boundary, the critic tries to increase the difference between the predictions of C 1 and C 2. This difference corresponds to C's sensitivity to the noise caused by dropout. To measure the sensitivity d(p 1, p 2) between the two obtained probabilistic outputs, we use the symmetric Kullback Leibler (KL) divergence. Formally, the divergence is calculated as DISPLAYFORM0 Update G to minimize sensitivity on target inputs (Fix C) Update C to maximize sensitivity on target inputs (Fix G) Train G, C on source inputs using classification loss DISPLAYFORM0 Figure 2: Overview of ADR. Left: We train G, C with classification loss on source and sample a critic consisting of two classifiers using dropout. The critic's sensitivity is measured as the divergence between the class predictions of C 1 and C 2 on the same input. Right: Adversarial training iterates two steps: the critic tries to maximize the sensitivity while the generator tries to minimize it.where KL divergence between p and q is denoted as D kl (p|q). In our approach, C works as both critic and classifier. The following three requirements are imposed by our method: 1) C and G must classify source samples correctly to obtain discriminative features; 2) C should maximize the sensitivity for target samples to detect the samples near the boundary; 3) G should learn to minimize the sensitivity to move target samples away from the boundary. The training within the same mini-batch consists of the following three steps. Step 1, in this step, C is trained as a classifier. C and G have to classify source samples correctly to obtain discriminative features. Thus, we update both networks' parameters based on the following standard classification loss. Given source labels y s and samples x s, the objective in this step is DISPLAYFORM0 C(G(x s)) k returns the probability that the sample x s is assigned to class k. Step 2, in this step, C is trained as a critic to detect target samples near the boundary. Two classifiers are sampled from C for each target sample using dropout twice to obtain p 1 and p 2. Then, C's parameters are updated to maximize the sensitivity as measured by Eq. 1. Since C should learn discriminative features for source samples, in addition to the sensitivity term, we add Eq. 2. We experimentally confirmed that this term is essential to obtain good performance. DISPLAYFORM1 DISPLAYFORM2 C 1 and C 2 are sampled from C randomly. Step 3, in order to obtain representations where target samples are placed far from the decision boundary, G is trained to minimize sensitivity. Here we do not add the categorical loss for source samples as in Step 2, as the generator is able to obtain discriminative features without it. 
DISPLAYFORM3 We update the parameters of C and G in every step following the defined objectives. We experimentally found it beneficial to repeat Step 3 n times for each mini-batch. show the decision boundary obtained by keeping one neuron in the last hidden layer and removing the rest. Red points are source samples of class one, green points are class two. Black points are target samples. The yellow region indicates where the samples are classified as class one, cyan region class two. We see that the neurons do not learn very diverse features. Column 6 shows the boundary obtained by keeping all 5 neurons. Bottom row: Boundaries learned by the model adapted by our adversarial dropout method. Unlike the top row, here neurons 3,4,5 learn diverse features which in diverse boundaries. Our ADR approach encourages different neurons of the classifier to learn different characteristics of the input (see Sec. 4.1.) The output is the combination of shared and unshared nodes, therefore, to maximize the sensitivity, the unshared nodes must learn different features of target samples. As learning proceeds, each neuron in C will capture different characteristics. At the same time, to minimize the sensitivity, G learns to extract pure categorical information. If G outputs features which are not related to categorical information, such as texture, slight contrast or difference of color, C will utilize them to maximize sensitivity. The trained classifier will be sensitive to the perturbation of targets caused by dropout. We note that our approach is contrary to methods called adversarial example training BID7; BID15 ) which train the classifier to be robust to adversarial examples. They utilize input noise which can deceive or change the output of the classifier, and incorporate it to obtain a good classifier. Our ADR method encourages the feature generator to obtain noise-robust target features. However, with regard to the classifier, it is trained to be sensitive to noise. To improve the final accuracy, we learn another classifier C that is not trained to be sensitive to the noise. C takes features generated by G and is trained with classification loss on source samples. The loss of C is not used to update G. We compare the accuracy of C and C in experiments on image classification. 4.1 EXPERIMENT ON TOY DATA Experimental Setting. In this experiment, we observe the decision boundary obtained by each neuron to demonstrate that ADR encourages the neurons to learn different input characteristics. We use synthetic "two moons" data for this problem. Two dimensional samples from two classes are generated as source samples. Target samples are obtained by rotating the source samples. In our setting, the rotation was set to 30 degrees and data was generated with scikit-learn BID17 ). We train a six-layered fully-connected network; the lower 3 layers are used as feature generator, and upper 3 layers are used as classifier. We used Batch Normalization BID11 ) and ReLU as activation function. The number of neurons are for feature generator, for classifier. We visualize the boundary obtained from each neuron in the last layer by removing the output of all other neurons. Results. We show the learned boundary in FIG1. In the baseline model trained only with source samples (top row), two of five neurons do not seem to learn an effective boundary, and three neurons learn a similar boundary. 
On the other hand, in our method (bottom row), although two neurons do not seem to learn any meaningful boundary, three neurons learn distinctive boundaries, demonstrating greater diversity. Each neuron is trained to be sensitive to the noise caused by target samples. The final decision boundary (rightmost column) classifies most target samples correctly. The accuracy of our proposed method is 96% whereas the accuracy of the non-adapted model was 84%. Experiments on Digits Classification. We evaluate our model on adaptation between digits datasets. We use MNIST BID14 ), SVHN BID16 ) and USPS datasets and follow the protocol of unsupervised domain adaptation used by BID29 ). To extensively compare our method with previous methods, in adaptation from MNIST to USPS, we applied our method to a different protocol used in. We assume no labeled target samples and use fixed hyper-parameters for all experiments, unlike other works that use a target validation set BID20 ). The number of iterations forStep 3 was fixed at n = 4. We used the same network architecture as in BID29 ), but inserted a Batch Normalization layer before the activation layer to stabilize the training. We used Adam BID13 ) for optimizer and set the learning rate to 2.0 × 10 −4, a value commonly reported in the GAN literature. We compare our approach to several existing methods and to the entropy minimization baseline (ENT) obtained by modifying (Springenberg FORMULA6). As we mentioned in Section 2, this is a model that generates features to minimize the entropy of the output probability for target samples. Due to space limitations, we provide a detailed explanation of this baseline in the appendix. Results in Table 1 demonstrate that ADR obtains better performance than existing methods. In particular, on the challenging adaptation task from SVHN to MNIST, our method achieves much better accuracy than previously reported. FIG2 shows the learning curve of each experiment. As sensitivity loss increases, the target accuracy improves. This means that as critic C learns to detect the non-discriminative samples, feature generator G learns to fool it, ing in improved accuracy. In addition, we can see that the sensitivity of source samples increases too. As mentioned in Sec 3.3, the critic network should learn to capture features which are not very important for classification, such as texture or slight edges, and it seems to also capture such information in source samples. The accuracy of the classifier C (denoted by red), which is trained not to be sensitive to the noise, is almost always better than the accuracy of the critic network. In adaptation from SVHN to MNIST FIG3 ), the accuracy of the critic often suffers as it becomes too sensitive to the noise caused by dropout. On the other hand, the accuracy shown by the red line is stable. Our ENT baseline shows good performance compared to other existing methods. This indicates the effectiveness of methods based on entropy minimization. In FIG3, we compare our proposed method and ENT in terms of entropy of target samples. Our method clearly decreases the entropy, because target samples are moved away from the decision boundary. Yet, its behavior is different from ENT. Interestingly, the entropy is made smaller than ENT in case of adaptation from USPS to MNIST FIG2 ) though ENT directly minimizes the entropy and our method does not. 
On the SVHN to MNIST task FIG2 ), the entropy of ADR is larger than ENT, which indicates that our method places the target samples closer to the decision boundary than ENT does. Experiments on Object Classification. We next evaluate our method on fine-tuning a pretrained CNN. We use a new domain adaptation benchmark called the VisDA Challenge BID18 ) which focuses on the challenging task of adapting from synthetic to real images. The source domain consists of 152,409 synthetic 2D images from 12 object classes rendered from 3D models. The validation and test target domains consists of real images, which belong to the same classes. We used the validation domain (55,400 images) as our target domain in an unsupervised domain adaptation setting. We evaluate our model on fine-tuning networks pretrained on ImageNet BID2 ): ResNet101 ) and ResNext BID30 ). For the feature generator, we use the pretrained CNN after removing the top fully connected layer. For the classification network, we use a three-layered fully connected network. Table 2 shows that our method outperformed other distribution matching methods and our new baseline (ENT) in finetuning both networks by a large margin. ENT did not achieve better performance than existing methods, though improvement over the source only model was observed. Although this method performed well on digits, it does not work as well here, possibly because of the larger shift between very different domains. In the experiment on ResNext, after training G and C, we retrained a classifier C just on the features generated by G due to GPU memory limitations, and observed improvement in both networks. Image Segmentation experiments. Next, we apply our method to adaptation for semantic image segmentation. Image segmentation is different from classification in that we classify each pixel in the image. To evaluate the performance on segmentation, the synthetic GTA5 (Richter et al. FORMULA7) dataset is used as source, and real CityScape dataset is used as target. Previous work tackled this problem by matching distributions of each pixel's feature in a middle layer of the network BID10 ). In this work, we apply ADR by calculating sensitivity between all pixels. The training procedure is exactly the same as in classification experiments. We use the ResNet50 pretrained on ImageNet, and utilize an FCN (Long et al. (2015a) ) based network architecture. Further, we utilize the more recent Dilated Residual Networks (DRN) 105 layered model BID32 ), which outperforms ResNet50 on a semantic segmentation task. Table 2: Results on Visda2017 classification datasets BID18 ). DANN and MMD are distribution alignment methods proposed by BID4 ) and (Long et al. (2015b) ) respectively. Ours (retrain classifier) means the classifier retrained for our proposed generator as we mentioned in Sec 3.3. Our proposed method shows much better performance than existing methods. For the feature generator, we use the pretrained network without fully-connected layers. For the classifier, we use a fully-convolutional network with dropout layers. Due to limited memory, the batch size is set to 1. We include details of the network architecture in appendix. For comparison, we train a domain classifier based model for our network (DANN). We build a domain classifier network for the features of each pixel following BID10 ).In Table 3, we show the qualitative comparison with existing methods. 
ADR clearly improves mean IoU (Intersection-over-Union) compared to the source-only and competing models, beating state-ofthe-art by a large margin. When we apply ADR to DRN, the accuracy improves much more than for ResNet50, and is 12.4 points higher than the model trained only on GTA5 source samples. This is likely because ADR exploits the strong representation of the pretrained DRN network. Although we implemented ENT in this setting, the accuracy was much worse than the Source Only model with a mIoU of 15.0 in training ResNet50. The ENT method does not seem to work well on syntheticto-real shifts. Finally, we illustrate our method's improvement on example input images, ground truth labels, images segmented by the Source Only model and our method in FIG7. While the Source Only model seems to suffer from domain shift, ADR generates a clean segmentation. These experiments demonstrate the effectiveness of ADR on semantic segmentation. In this paper, we introduced a novel approach for aligning deep representation, Adversarial Dropout Regularization, which learns to generate discriminative features for the target domain. The method Table 3: Results on adaptation from GTA5 → Cityscapes. DANN and FCN Wild denote methods proposed by BID4 ) and BID10 respectively. consists of a critic network that can detect samples near the task decision boundary and a feature generator that fools the critic. Our approach is general, applies to a variety of tasks, and does not require target domain labels. In extensive domain adaptation experiments, our method outperformed baseline methods, including entropy minimization, and achieved state-of-the-art on three datasets. We also show how to apply our method to train Generative Adversarial Networks for semisupervised learning in the appendix. Our method aims to move target samples away from the decision boundary. Some techniques used in training Generative Adversarial Networks can be applied to achieve our goal too. BID23; BID21 ) used small number of labeled samples to train critic. Critic is trained to classify real samples into K classes. They also trained critic to move unlabeled real images away from the boundary by minimizing entropy of the critic's output. Generated fake images are moved near the boundary by maximizing the entropy. On the other hand, generator is trained to generate fake images which should be placed away from the boundary. This kind of method can be easily applied to domain adaptation problem. We would like to describe the method along with our problem setting. Similar to our method, we have critic networks C and generator G. C classifies samples into K class. C is trained to maximize the entropy of target samples, which encourages to move the target samples near the boundary. Then, G is trained to minimize the entropy of them. Thus, G tries to move target samples away from the boundary. The only difference from our method is that we used entropy term for adversarial training loss. That is, in this method, we replace our sensitivity term d(p 1, p 2) in Eq. 4 with entropy of the classifier output. The adversarial loss for this baseline method is a following one. DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 The hyper-parameter n, how many times we update G for adversarial loss in one mini-batch, is set as n = 4. Experimentally, it worked well for all settings. We follow the protocol used in BID29 ). For adaptation from SVHN to MNIST, we used standard training splits of each datasets as training data. For evaluation, we used test splits of MNIST. 
For the adaptation between MNIST and USPS (P1), we sampled 2000 images from MNIST and 1800 images from USPS. For the adaptation between MNIST and USPS (P2), we used all training images of MNIST and USPS, following previous work. In these experiments, we composed the mini-batch half from source and half from target samples. The batch size was set to 128 for both source and target. We report the score after repeating Steps 1∼3 (please see Sec 3.2) 20000 times. For our baseline, ENT, we used the same network architecture and the same hyper-parameters as used in our proposed method. In this experiment, SGD with learning rate 1.0 × 10−3 is used to optimize the parameters. For the finetuning of ResNet101, we set the batch size to 32. Due to the limit of GPU memory, we set it to 24 when finetuning the ResNeXt model. We report the score after 20 epochs of training. In order to train the MMD model, we use 5 RBF kernels with the following standard deviation parameters: DISPLAYFORM0 We varied the number of kernels and their parameters, but we could not observe a significant performance difference. We report the performance after 5 epochs; we did not see any improvement after that epoch. To train the model of BID4, we used a two-layer domain classification network. Experimentally, we did not see any improvement when the network architecture was changed. According to the original method (BID4), the learning rate is decreased every iteration. However, in our experiment this did not improve performance, so we fixed the learning rate at 1.0 × 10−3. We report the accuracy after 1 epoch; the accuracy dropped significantly after the first epoch. We assume this is due to the large domain difference between synthetic and real images. For our new baseline, ENT, we used the same hyper-parameters as for our proposed method. Since the accuracy of ENT drops significantly after around 5 epochs, we report the accuracy after 5 epochs of updates. In FIG9, we show how we integrated the features of each layer. We regard the layers of ResNet50 as the generator and the rest of the network, namely the convolution and upsampling layers, as the critic network. The input images were resized to 512x1024 due to the limit of GPU memory. For the same reason, the batch size was set to one. In FIG8, we show examples of images segmented by DRN-105; the images are cleanly segmented by our proposed method. In this section, we demonstrate how to apply our method to training a Generative Adversarial Network (GAN) for semi-supervised learning. We follow the method proposed by (; BID21), who use a K-class classification network as a critic to train a GAN in the semi-supervised setting. Approach. In contrast to the domain adaptation setting, here G tries to generate images which fool the critic C. Also, in this setting, we are given labeled and unlabeled real images from the same domain. Then, we train the critic to classify labeled images correctly and to move unlabeled images far from the decision boundary. To achieve this, we propose to train the critic with the following objective: Table 4: Comparison with state-of-the-art methods on two benchmark datasets, SVHN and CIFAR (% errors); compared methods include Labeled Only, SDGM (Maaløe et al.), and CatGAN (Springenberg). Only methods without data augmentation are included. We used the same critic architecture as used in ImpGAN. DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 where X_L denotes the subset of labeled samples, X_u denotes unlabeled ones, X_g denotes images generated by G, and H denotes entropy as Eq. 6 shows.
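Because the displayed critic-objective equations (DISPLAYFORM0-2) are not shown, the Python sketch below is only one plausible reading of the four terms described in the surrounding text: cross-entropy on labeled images, low conditional entropy on unlabeled real images (pushing them away from the boundary), high entropy of the marginal class distribution of unlabeled images (spreading them uniformly over classes), and high entropy on generated images (placing them near the boundary). The weighting coefficients lam are hypothetical.

import torch
import torch.nn.functional as F

def cond_entropy(logits):
    # Average entropy of the per-example class posteriors p(y|x).
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def marginal_entropy(logits):
    # Entropy of the marginal class distribution (batch-averaged posterior).
    p_bar = F.softmax(logits, dim=1).mean(dim=0)
    return -(p_bar * torch.log(p_bar + 1e-8)).sum()

def critic_loss(critic, x_lab, y_lab, x_unl, x_gen, lam=(1.0, 1.0)):
    # Term 1: classify labeled real images correctly.
    l_sup = F.cross_entropy(critic(x_lab), y_lab)
    logits_unl = critic(x_unl)
    # Terms 2 and 4: unlabeled real images far from the boundary yet uniform across classes.
    l_unl = cond_entropy(logits_unl) - marginal_entropy(logits_unl)
    # Term 3: generated images should sit near the boundary (high entropy).
    l_gen = -cond_entropy(critic(x_gen.detach()))
    return l_sup + lam[0] * l_unl + lam[1] * l_gen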
The critic is trained to minimize the loss on labeled samples via the first term. Since unlabeled images should be far away from the decision boundary and should be distributed uniformly among the classes, we add the second and fourth terms. The third term encourages the critic to detect fake images generated near the boundary. The objective of G is as follows, DISPLAYFORM3 where the second term encourages generated images to be similar to real images, which is known to be effective for stabilizing training. The first term encourages the generator to create fake images that should be placed far away from the boundary. Such images should be similar to real images because they are likely to be assigned to some class with high probability. Here, we update C and G the same number of times. Experiment. We evaluate our proposed GAN training method on the SVHN and CIFAR10 datasets, using the critic network architecture from BID21. We set the batch size to 100 and used Adam with learning rate 2.0 × 10−4 as the optimizer. After the conv6 layer of the critic, we constructed a classifier which was not involved in the adversarial learning process. In the experiment on SVHN, we replaced Weight Normalization with Batch Normalization for C. Also, in the experiment on CIFAR10, we construct a classifier from a middle layer of the critic, which is not incorporated into the adversarial training step. This is motivated by the insight that the critic in our method can become too sensitive to the dropout noise, as we explained in Sec 3.3. Results. In FIG0, we can see that ADR seems to generate realistic SVHN images. Some images are significantly blurred, but most of the images are clear and diverse. The generated CIFAR10 images do not seem as realistic, but some objects appear in most images. In Table 4, we can see that the critic trained by our method achieves better accuracy than the other models on SVHN. For CIFAR10, the accuracy was slightly worse than other state-of-the-art methods. We conclude that, despite its clear advantage on the domain adaptation tasks, our method produces mixed results on the SSL tasks. It could still be useful for SSL; however, further exploration is needed to improve the accuracy. For example, in Eq. 6, we propose to maximize the entropy of the marginal class distribution of the unlabeled real images, as well as forcing them to be far from the boundary. However, these objectives may contradict each other, which may in turn degrade the performance. In late-breaking results, BID1 theoretically showed that merely generating fake images that are far from decision boundaries does not help improve accuracy when training GANs in the SSL setting. Further improvement of our SSL approach based on these results is an interesting direction for future work.
We present a new adversarial method for adapting neural representations based on a critic that detects non-discriminative features.
790
scitldr
The use of deep learning for a wide range of data problems has increased the need for understanding and diagnosing these models, and deep learning interpretation techniques have become an essential tool for data analysts. Although numerous model interpretation methods have been proposed in recent years, most of these procedures are based on heuristics with little or no theoretical guarantees. In this work, we propose a statistical framework for saliency estimation for black box computer vision models. We build a model-agnostic estimation procedure that is statistically consistent and passes the saliency checks of. Our method requires solving a linear program, whose solution can be efficiently computed in polynomial time. Through our theoretical analysis, we establish an upper bound on the number of model evaluations needed to recover the region of importance with high probability, and build a new perturbation scheme for estimation of local gradients that is shown to be more efficient than the commonly used random perturbation schemes. Validity of the new method is demonstrated through sensitivity analysis. Deep learning models have achieved great predictive performance in many tasks. However, these complex, often un-tractable models are difficult to interpret and understand. This lack of interpretability is a major barrier for their wide adoption, especially in domains (e.g., medicine) where models need to be qualitatively understood and/or verified for robustness. In order to address these issues, several interpretation approaches have been proposed in the last few years. A group of methods are based on visualizations, either by quantifying the effect of particular neurons or features, or by creating new images that maximize the target score for specific classes (; ;). A large collection of the techniques build saliency maps by attributing the gradients of the neural network to the input image through various procedures or by finding perturbations that significantly change the output (; ; ; ; ; ; ; ; a; ;). Another class of approaches treat the deep learner as a black-box. In this domain, use a Parzen window classifier to approximate the target classifier locally. propose the LIME procedure, where small perturbations on the instance are used to obtain additional samples with which a sparse linear model is fit. propose SHapley Additive exPlanation(SHAP), which combines the Shapley value from the game theory with the additive feature attribution methods. They also make connections of the SHAP procedure with various existing methods including LRP, LIME and DeepLIFT. propose L-and C-Shapley procedures which can reliably approximate the Shapley values in linear time with respect to the number of features. Majority of the listed methods are heuristics which are constructed according to certain desirable qualities. For these methods, it is not clear what the main estimand is, if it can be consistently estimated or if (and how) the estimand can be computed more efficiently. In fact, according to the recent research by Adebayo et al. (2018b), most methods with great visual inspection lack sensitivity to the model and the data generating process. Theoretical explanation for why guided back-propagation and deconvolutional methods perform image recovery is provided by. In this work, we propose a statistically valid technique for model-agnostic saliency estimation, and prove its consistency under reasonable assumptions. Furthermore, our method passes the sanity checks given by Adebayo et al. (2018b). 
Through our analysis, we obtain insights into how to improve the accuracy and reliability of our approach. We note that there is recent work by where they provide a saliency estimation technique with theoretical guarantees -more specifically, FDR control. Although their procedure is very promising from a statistical perspective, and theoretically valid under a very general set of assumptions, their technique requires human input and has a significant computational load as it uses a generative model for filling in certain regions of the target image. Our main contributions are as follows: • We introduce a new saliency estimation framework for CNNs and propose a new method based on input perturbation. Our procedure requires solving a linear program, and hence the estimates can be computed very efficiently. Furthermore, the optimization problem can be recast as a "parametric simplex" , which allows the computation of the full solution path in an expedient manner. • We establish conditions under which the significant pixels in the input can be identified with high probability. We present finite-sample convergence rates that can be used to determine the number of necessary model evaluations. • We find that the noise distribution for the perturbation has a substantial effect on the convergence rate. We propose a new perturbation scheme which uses a highly correlated Gaussian, instead of the widely used independent Gaussian distribution. In the following section, we define the linearly estimated gradient (LEG), which is the saliency parameter of interest (i.e. the estimand), and introduce our statistical framework. In section 3, we propose a regularized estimation procedure for LEG that penalizes the anisotropic total-variation. We provide our theoretical in Section 4 and the of our numerical comparisons in Section 5. For a matrix B, we use vec(B) and vec −1 (B) to denote its vectorization and inverse vectorization, respectively. The transpose of a matrix B is given by B T and we use B + for its pseudo-inverse. The largest and smallest eigenvalue of a symmetric matrix B are denoted by λ max (B) and λ min (B). For a set S, we use S C to denote its complement. For a vector u ∈ R p and a set S ⊆ [1, . . ., p], we use u S to refer to its components indexed by elements in S. The q-norm for a vector u is given by u q and we use B F r for the Frobenius norm of a matrix B. The vector of size p whose values are all equal to 1 is denoted by 1 p. Similarly, we use 1 p1×p2 and 0 p1×p2 to denote a p 1 × p 2 matrix whose entries are equal to 1 and 0, respectively. Finally, for a continuous distribution F, we use F + x 0 to denote a distribution that is mean-shifted by x 0, i.e. F (z) = G(z − x 0) for all z, where In gradient based saliency approaches, the main goal is to recover the gradient of the deep learner with respect to the input. More specifically, let f (x) be a deep learner, f: X →, where X is the input space, e.g., 28×28 for the MNIST dataset, where the input are given as 28 by 28 sized images. In this notation, the output is the probability of a specific class, for instance P model (x is a 9); although this can be modified to check for comparative quantities by setting the output as f (x) = f 9 (x) − f 7 (x) = P model (x is a 9) − P model (x is a 7). Then, local saliency is defined as the derivative of f (·) with respect to the input, evaluated at a point of interest x 0 ∈ X, i.e. ∇ x f (x)| x=x0. 
However, in practice, local saliency is often too noisy and one instead uses an average of the gradient around x 0 . In order to study the saliency procedure from a statistical perspective, we start by defining an estimand, whose definition is motivated by the LIME procedure . Definition 1 (LEG). For a continuous distribution F, an initial point x 0 ∈ X with X ⊂ R p1×p2, and a function f: X → [−1, 1], the linearly estimated gradient (LEG), γ ∈ R p1×p2 is given by LEG is based on a first order Taylor series expansion of the function f (x) around the point of interest x 0. The estimand is a proxy for the local gradient, and is the coefficient that gives the best linear approximation, in terms of the squared error, among all possible choices. The distribution F determines the range of points the analyst wants to consider. We visually demonstrate LEG on two toy examples with a single pixel (i.e. p 1 = p 2 = 1) in Figure 1. Figure 1a, we compare LEG to the gradient, which is very localized. If f (x) is a highly varying function, then the gradient is too noisy, and the saliency score provided by LEG is more meaningful. In Figure 1b, we show LEG for two different distributions. For the distribution with the larger variance, LEG evaluates the input's effect on the output for a larger neighborhood around x 0. We note that the variance of F has a large effect on LEG. As F converges to a point mass at 0, if f (x) is twice continuously differentiable in the neighborhood of x 0, then γ → ∇ x f (x). On the other hand, if F has high variance, then samples from x 0 + F are substantially different from x 0 and LEG might no longer be useful for interpreting the model at x 0. This phenomenon can also described in terms of local vs global interpretation: for F with a small variance, LEG provides a very local interpretation, i.e. a gradient that is valid in a small neighborhood around x 0, and as the variance of F increases, LEG produces a more global interpretation, since a larger neighborhood around x 0 is considered in the calculation. LEG has an analytical solution as the next lemma shows. Lemma 1. Let Z be the random variable with a centered distribution F, i.e. Z ∼ F and E[Z] = 0 p1×p2. Assume that covariance of vec(Z) exists, and is positive-definite. Proof of the lemma is provided in the Appendix. Lemma 1 shows that the LEG can be written as an affine transformation of a high dimensional integral where the integrand is (f ( This analysis also suggests an empirical estimate for the LEG, by replacing the expectation with the empirical mean. The empirical mean can be obtained by sampling x from F + x 0, calculating f (x), and then applying Lemma 1. More formally, let x 1,..., x n be random samples from F + x 0, and let y 1,..., y n be the function evaluations with As the function f (x) is bounded and F has a positive-definite covariance matrix, then it follows that as n → ∞,γ → γ. However, classical linear model theory shows that rate of the convergence is very slow, on the order of 1 λmin(Σ) p 1 p 2 /n, where p 1 and p 2 are the dimensions of X. This severely limits the practicality of the empirical approach. In the next section we propose to use regularization in order to obtain faster convergence rates. For interpretation of image classifiers, one expects that the saliency scores are located at a certain region, i.e. a contiguous body or a union of such bodies. 
This idea has led to various procedures that estimate saliency scores by penalizing the local differences of the solution, often utilizing some form of the total variation (TV) penalty. The approach is very sensible from a practical point of view: firstly, it produces estimates that are easy to interpret, as the important regions can be easily identified; secondly, penalization significantly shrinks the variance of the estimate and helps produce reliable solutions with fewer model evaluations. In light of the above, we propose to estimate the LEG coefficient with an anisotropic L1 TV penalty. For a hyperparameter L ≥ 0, the TV-penalized LEG estimate is given as γ̂ = vec^{-1}(ĝ), where ĝ is the solution of the following linear program, where D ∈ R^{(2 p_1 p_2 − p_1 − p_2) × (p_1 p_2)} is the differencing matrix with D_{i,j} = 1, D_{i,k} = −1 if the j-th and the k-th components of g are connected on the two-dimensional grid. Our method is based on the "high confidence set" approach, which has been successful in numerous applications in high-dimensional statistics. The set of g that satisfy the constraint in the formulation is our high confidence set; if L is chosen properly, this set contains the true LEG coefficient, γ(f, x_0, F), with high probability. This setup ensures that the distance between γ and γ̂ is small. When combined with the TV penalty in the objective function, the procedure seeks a solution that both belongs to the confidence set and has sparse differences on the grid. Thus, the estimator is extremely effective at recovering γ with small total variation. The proposed method enjoys low computational complexity. The problem in equation 4 is a linear program and can be solved in polynomial time, for instance by using a primal-dual interior-point method whose time complexity is O((p_1 p_2)^{3.5}). However, in practice, solutions can be obtained much faster using simplex solvers. In our implementations, we use MOSEK, a commercial-grade simplex solver, and are able to obtain a solution in less than 3 seconds on a standard 8-core PC for a problem of size p_1 = p_2 = 28. Additionally, the alternative formulation (provided in the Appendix) can be solved using parametric simplex approaches, which yield the whole solution path in L. The last point is often a necessity in deployment, when L needs to be tuned according to some criterion. We note that the procedure does not require any knowledge about the underlying neural network and is completely model-agnostic. In fact, in applications where security or privacy could be a concern and returning multiple prediction values needs to be avoided, the term given by \sum_{i=1}^{n} vec(ỹ_i z_i) can be computed on the side and supplied alongside the prediction. In Figure 2, we show the resulting estimates of the method with n = 500 model evaluations for a VGG-19 network. For the distribution F, we use a multivariate Gaussian distribution with the perturbation scheme proposed in Section 4.2. We compute γ̂ separately for each channel, and then sum the absolute values of the different channels to obtain the final saliency score. In this section, we analyze the procedure from a theoretical perspective and derive finite-sample convergence rates of the proposed LEG-TV estimator. As we noted earlier, this analysis also gives us insight into the properties of the ideal perturbation distribution. We first present our condition, which has a major role in the convergence rate of our estimator.
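As a rough, non-authoritative illustration of the estimator described above, the Python sketch below builds the grid differencing matrix D, forms the empirical term (1/n) Σ_i vec(ỹ_i z_i) from perturbed model evaluations, and solves min ||Dg||_1 subject to an infinity-norm constraint with CVXPY. The constraint form and the default choice of L are our reconstruction from the surrounding text; the paper's own linear program and MOSEK-based solver are not reproduced here.

import numpy as np
import cvxpy as cp

def diff_matrix(p1, p2):
    # Anisotropic first-difference operator on a p1 x p2 grid (row and column neighbors).
    rows, idx = [], lambda i, j: i * p2 + j
    for i in range(p1):
        for j in range(p2):
            if j + 1 < p2:
                r = np.zeros(p1 * p2); r[idx(i, j)] = 1; r[idx(i, j + 1)] = -1; rows.append(r)
            if i + 1 < p1:
                r = np.zeros(p1 * p2); r[idx(i, j)] = 1; r[idx(i + 1, j)] = -1; rows.append(r)
    return np.array(rows)

def leg_tv(f, x0, Sigma, n=500, L=None, seed=0):
    p1, p2 = x0.shape
    D = diff_matrix(p1, p2)
    rng = np.random.default_rng(seed)
    # Perturbed samples x_i ~ F + x0 and centered responses ỹ_i.
    Z = rng.multivariate_normal(np.zeros(p1 * p2), Sigma, size=n)
    y = np.array([f(x0 + z.reshape(p1, p2)) for z in Z])
    b = (Z * (y - y.mean())[:, None]).mean(axis=0)
    if L is None:
        # Heuristic L = K_L * L_max, taking L_max = ||b||_inf (the smallest L making g = 0 feasible here).
        L = 0.05 * np.abs(b).max()
    g = cp.Variable(p1 * p2)
    prob = cp.Problem(cp.Minimize(cp.norm(D @ g, 1)),
                      [cp.norm(b - Sigma @ g, "inf") <= L])
    prob.solve()
    return g.value.reshape(p1, p2)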
The condition is akin to the restricted eigenvalue condition with adjustments specific to our problem. Assumption 1. Let D + be the pseudo-inverse of the differencing matrix D, and denote the elements of singular value decomposition of D as U, Θ, V where D = U ΘV T. Furthermore, denote the last p 1 p 2 − p 1 − p 2 columns of U that correspond to zero singular values as U 2. For the covariance matrix Σ, and any set S with size s, it holds that κ > 0, where The following theorem is our main ., where Z ∼ F and E[Z] = 0 p1×p2. Letγ be the LEG-TV estimate with L = 2 D + 1 log (p 1 p 2 /) /n. If Assumption 1 holds for the covariance matrix Σ with constant κ, then with probability 1 −, where m ∈ R is a mean shift parameter, s is the number of non-zero elements in Dγ The proof is built on top of the "high confidence set" approach of. In the proof, we first establish that, for an appropriately chosen value of L, γ * = γ(f, x 0, F) satisfies the constraint in equation 4 with high probability. Then, we make use of TV sparsity ofγ and γ * to argue that the two quantities cannot be too far away from each other, since both are in the constraint set. The full proof is provided in the Appendix. Our theorem has two major implications: 1. We can recover the true parameter as the number of model evaluations increase. That is, TV penalized LEG is a statistically consistent model interpretation scheme. Furthermore, our states that, ignoring the log terms, one needs n = O(s (p 1 p 2) 1/2 ) many model evaluations to reliably recover γ *. 2. Our bound depends on the constant κ, which further depends on the choice of Σ for the perturbation scheme. It is possible to obtain faster rates of convergence with a carefully tuned choice of Σ. As a side note, since γ * also depends on Σ, the estimand changes when Σ is adjusted. In other words, our states that certain estimands require less samples. We note that our procedure identifies the LEG coefficient up to a mean shift parameter, m, which is the average of the true LEG coefficient γ. In practice, the average can be consistently estimated (for instance, using the empirical version of LEG in equation 3), and the mean can be subtracted to yield consistent estimates for γ. However, in our numerical studies, we see that this mean shift is almost non-existent: LEG-TV yields solutions that has no mean differences with the LEG coefficient, which we define as the solution of the empirical version as n → ∞. In our main , we established that the convergence of our estimator depends on the quantity κ which is related to the spectral properties of Σ. In this subsection we explore the ramifications of the assumption. Our main in Theorem 1 states that the rate of convergence to the true LEG coefficient is inversely proportional to the term κ. Thus, perturbation schemes for which the restricted eigenvalues are large, as defined in Definition 1, yield saliency maps that require less samples to estimate the LEG. We note that most of the saliency estimation procedures that make use of perturbations take these perturbations to be independent, which in a covariance matrix that is equal to the identity matrix, Σ = σ 2 I (p1p2)×(p1p2) for some σ 2 > 0. For LEG estimation without penalization, i.e. using equation 1, this choice is also optimal as the convergence rates under the normal setup depend on 1/λ min (Σ). However, when one seeks to find an estimate for which the solution is sparse in the TV norm, this choice is no longer ideal as demonstrated by our theorem. 
In order to choose the covariance matrix of our perturbation scheme in a manner that maximizes the bound in equation 5, one also needs some prior information about the size of S, s. As that requires estimation of s, and a complex optimization procedure, we instead propose a heuristic: we choose Σ so that its eigenvectors match D + ∆ for vectors ∆ with unit-norm and U T 2 ∆ = 0. This choice fixes p 1 p 2 − 1 many of the eigenvectors of Σ. For the last eigenvector, we use the one vector as it is orthogonal to the rest of the eigenvectors. Our proposed perturbation scheme is as follows: 1. Compute the singular value decomposition of D, and let D = U ΘV T. for some choice of σ 2 > 0. with the proposed Σ, the numerator in equation 5 reduces to σ 2 ∆ T ∆ and hence κ = σ 2. Without any additional assumptions on S, this is the maximal value for κ. Figure 3: Selected eigenvectors of the proposed Σ. The eigenvectors, which contain the principal directions of the distribution, have maxima and minima in adjacent locations. Distributions drawn with these properties perform as object detectors as they can be used to detect existence (or nonexistence) of significant pixels at these locations. We plot some of the eigenvectors for our proposed Σ with p 1 = p 2 = 28 in Figure 3. These eigenvectors are the principal directions of the perturbation distribution F, and the samples drawn from F contain a combination of these directions. We see these samples will have sharp contrasts at certain locations. This is very intuitive: The perturbation scheme is created for a specific problem where boundaries for objects are assumed to exist, and large jumps in the magnitude of the distribution help our method recover these boundaries efficiently. We conclude this section with a demonstration of the perturbation scheme using Gaussian noise. In Figure 4, we plot a digit from the MNIST dataset , along with instances obtained by independent perturbation and by our suggested distribution. LEG-TV procedure has two tuning parameters: (i) F, which determines the structure of the perturbation; and (ii) L, which controls the sparsity of the chosen interpretation. Regarding F, we propose to use a multivariate Gaussian distribution as it is easy to sample from. For Σ, we propose a theoretically driven heuristic for determining the correlation structure of Σ in Section 4.2. However, the choice of the magnitude of Σ, i.e. σ 2, is left to the user. If this quantity is chosen too low, then the added perturbations are small in magnitude, and the predictions of the neural network do not change, ing in a LEG near zero. On the other hand, with a very large value of σ 2, the have too much variance as some of the pixel values are set to the minimum or the maximum pixel intensity. In our implementations, we find that setting σ 2 to be between 0.05 and 0.30 in reasonable solutions. We determine this range by computing perturbations of various sizes on numerous images using the VGG-19 classifier. The provided range is found to create perturbations large enough to change the prediction probabilities but small enough to avoid major changes in the image. Most of our presented are given for σ 2 = 0.10. For the choice of L, we propose two solutions: The first is the theoretically suggested quantity given in Theorem 1, although this often in estimates that are too conservative. Our second method is a heuristic based on some of the quantities in the optimization problem and we use this for our demonstrations. 
We set L = K_L L_max, where K_L is a constant between 0 and 1 and L_max is the smallest value of L for which the solution of equation 4 would result in g = 0, i.e., the smallest L for which g = 0 is feasible for the constraint. We use K_L = 0.05 or K_L = 0.10 in our implementations. We note that it is possible to obtain the solution for all L by using a parametric simplex solver, or by starting with a large initial L and then using the solution of the program as a warm-start for a smaller choice of L. Both approaches return the solution path for all L, and might be more desirable in practice than relying on heuristics. In this section, we demonstrate the robustness and validity of our procedure by two numerical experiments. In Section 5.1, we perform sanity checks as laid out by Adebayo et al. (2018b), and show that the LEG-TV estimator fails to detect objects when the weights of the neural network are chosen randomly. In Section 5.2, we implement a sensitivity analysis in which we use various saliency methods to compute regions of importance, and then perturb these regions in order to see their effect on the prediction. For the deep learner, we use VGG-19. For computational efficiency, we compute saliency maps on a 28 by 28 grid (i.e., γ̂ ∈ R^{28×28}), although the standard input for VGG-19 is 224 by 224. The perturbations on the image are scaled up by 8 via upsampling in order for the dimensions to match. In Adebayo et al. (2018b), the validity of saliency estimation procedures is tested by varying the weights of the neural network. In a technique named "cascading randomization", the authors propose to replace the fitted weights of a CNN layer by layer, and to compute the saliency scores with each change. As a deep learner with randomly chosen weights should have no prediction power, one expects to see the same effect in the resulting saliency scores: namely, as more of the weights are perturbed, the explanation offered by interpretability methods should become more and more meaningless. Surprisingly, Adebayo et al. (2018b) show that most commonly adopted interpretation procedures provide some saliency even after full randomization, and conclude that these methods act as edge detectors. Our procedure treats the classifier as a black box, and the explanations offered by LEG-TV are based solely on the predictions made by the neural network. During the sanity check, when the weights of the neural network are randomly perturbed, the predictions change significantly and no longer depend on the input. Thus, we expect the local linear approximations of the underlying function to be flat, which would result in saliency scores of zero for all of the pixels. Finally, small artifacts that might arise in this process, such as positive or negative saliency scores with no spatial structure, should be smoothed over due to the TV penalty, further robustifying our procedure. In order to verify our intuition, we perform cascading randomization on the weights of a VGG-19 network. For all of the images in our analysis, we find that the LEG-TV estimate, γ̂, is reduced to zero after randomization of either the top layer (i.e., logits) or the second-from-top layer (i.e., the second fully connected layer). The results of our experiment for two images are given in Figure 5. It is seen that after the weights are perturbed, the LEG-TV method fails to detect any signal that could be used for interpretation. In fact, due to penalization, the estimate is set to zero. These results show that the interpretation given by our proposed method is reliable and is dependent on the classifier.
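A minimal sketch of the cascading-randomization check, assuming a PyTorch classifier and some saliency routine compute_saliency (for example, the LEG-TV estimator): modules are re-initialized one at a time, starting from the output layer, and the saliency map is recomputed after each step. The traversal order and helper names here are ours, not the authors'.

import copy
import torch

def cascading_randomization(model, x0, compute_saliency):
    # Re-initialize layers from the top (logits) downward, recomputing saliency after each step.
    model = copy.deepcopy(model)
    saliency_maps = []
    layers = [m for m in model.modules() if hasattr(m, "reset_parameters")]
    for layer in reversed(layers):   # reversed module order approximates top-down randomization
        layer.reset_parameters()
        saliency_maps.append(compute_saliency(model, x0))
    return saliency_maps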
Figure 5: Results of the sanity check with cascading randomization. The network weights are replaced by random numbers in a cascading order, starting from the last layer. LEG is equal to zero for all pixel values immediately after the first randomization. For our second validity test, we use various interpretation models to compute regions of high importance. We then mask these regions by decreasing the value of the pixels to zero, which is equivalent to painting them black. We compute and assess the difference in the predictions for the target class with each perturbation. We compare our method against four alternatives: GradCAM, LIME, SHAP and C-Shapley. The last three methods are chosen as they are model-agnostic, like LEG, and do not make use of the architecture of the neural network. GradCAM is chosen due to its popularity. The saliency maps using C-Shapley and LEG-TV are computed for a 28 by 28 grid. In order to make the comparison between the methods more fair, we downsize the saliency maps resulting from GradCAM, LIME and SHAP to the same size. Interestingly, we find that this step improves the performance of these estimators; that is, the perturbations identified using the low-resolution saliency maps result in faster drops in the predicted score. For LEG-TV, LIME and SHAP, the saliency scores are computed using 3000 model evaluations, whereas C-Shapley requires 3136 (28×28×4) evaluations. For LEG-TV, we provide two solutions: a sparse solution which corresponds to a larger choice of the penalty parameter L, and a noisy solution which is obtained with a smaller choice of L, denoted by LEG and LEG0, respectively.
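The perturbation test described above can be summarized by the following sketch (our own simplification, not the authors' code): the top fraction of pixels ranked by a 28×28 saliency map is upsampled by a factor of 8, set to zero (black), and the drop in the log-probability of the target class is recorded.

import torch
import torch.nn.functional as F

def log_prob_drop(model, image, saliency, target, frac=0.10):
    # image: (3, 224, 224) input tensor; saliency: (28, 28) map from any of the compared methods.
    sal = saliency.repeat_interleave(8, dim=0).repeat_interleave(8, dim=1)
    k = int(frac * sal.numel())
    thresh = sal.flatten().topk(k).values.min()
    mask = (sal >= thresh).float()          # 1 on the most salient pixels
    perturbed = image * (1.0 - mask)        # paint the salient region black (pixel value 0)
    with torch.no_grad():
        before = F.log_softmax(model(image.unsqueeze(0)), dim=1)[0, target]
        after = F.log_softmax(model(perturbed.unsqueeze(0)), dim=1)[0, target]
    return (before - after).item()          # larger drop = more informative saliency map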
Utilizing the new framework, we have built a computationally efficient saliency estimator that has theoretical guarantees. Using our theoretical analysis, we have identified how the sample complexity of the estimator can be improved by altering the model evaluation scheme. Finally, we have shown through empirical studies that (i) unlike most of its competitors, our method passes the recently proposed sanity checks for saliency estimation; and (ii) pixels identified through our approach are highly relevant for the predictions, and our method often chooses regions with higher saliency compared to regions suggested by its alternatives. Our linear program can also be recast by a change of variables and setting α = Dg. In this case, the elements of α correspond to differences between adjoint pixels. This program can be written as: + is the pseudo-inverse of D and U 2 is related to the left singular vectors of D. More precisely, letting D = U ΘV T denote the singular value decomposition of D, U 2 is the submatrix that corresponds to the columns of U for which Θ j is zero. The linearity constraint ensures that the differences between the adjoint pixels is proper. Derivation of the alternative formulation follows from Theorem 1 in and is omitted. This formulation can be expressed in the standard augmented form, i.e. min Ax=b,x≥0 c T x, by writ- where y = 1 n n i=1f (x i)x i and m = 2p 1 p 2 −p 1 −p 2. The γ coefficient in the original formulation can be obtained by setting A.2 PROOF OF THEOREM 1 Our proof depends on the following lemma. Lemma 2. For L ≥ 2 D + 1 log (p 1 p 2 /) /n, γ * is in the feasibility set with probability 1 −, that is Proof. For ease of notation, let We also assume that the images have been rescaled so that the maximum value ofx i is 1 (without rescaling, the maximum would be given as the largest intensity, i.e. 255). Since, the function values are also in the range given by [-2,2], we can bound |z i,j |, that is Under review as a conference paper at ICLR 2020 The proof follows by applying the McDiarmid's inequality for each row of the difference and then taking the supremum over the terms. By application of McDiarmid's inequality, we have that Let L = 2 D + 1 log (p 1 p 2 /2) /n. Then, taking a union bound over all variables, we have Now note that that the feasibility set for any L ≥ L contains that of L and thus γ * is automatically included. We now present the proof of the theorem. Note that the technique is based on the Confidence Set approach by. In the proof, we use γ to refer to vec(γ) for ease of presentation. Proof. First, let the high probability set for which Lemma 2 holds by A. All of the following statements hold true for A. We let ∆ = D (γ − γ *). We know that Dγ 1 ≤ Dγ * 1 since both are in the feasibility set, as stated in Lemma 2. Let α * = Dγ *,α = Dγ and define S = {j : α * j = 0}, and the complement of S as S C. By assumption of the Theorem, we have that the cardinality of S is s, i.e. |S| = s. Now let ∆ S as the elements of ∆ in S. Then, using the above statement, one can show that ∆ S 1 ≥ ∆ S C 1. Note, and ∆ S 1 ≥ ∆ S C 1 follows immediately. Furthermore where the last line uses the previous . Additionally, note that where the first inequality follows by Holder's inequality and the second follows from Lemma 2 and the fact that bothγ and γ * are in the feasibility set for L = 2 D + 1 log (p 1 p 2 /) /n. We further bound the right hand side of the inequality by using the previous , which gives Next, we bound ∆ 2 by combining the previous . 
Now, by assumption of the Theorem, we have that Dividing both sides by ∆ 2, we obtain that
We propose a statistical framework and a theoretically consistent procedure for saliency estimation.
791
scitldr
We explore the idea of compositional set embeddings that can be used to infer not just a single class, but the set of classes associated with the input data (e.g., image, video, audio signal). This can be useful, for example, in multi-object detection in images, or multi-speaker diarization (one-shot learning) in audio. In particular, we devise and implement two novel models consisting of an embedding function f trained jointly with a "composite" function g that computes set union operations between the classes encoded in two embedding vectors; and an embedding f trained jointly with a "query" function h that computes whether the classes encoded in one embedding subsume the classes encoded in another embedding. In contrast to prior work, these models must both perceive the classes associated with the input examples, and also encode the relationships between different class label sets. In experiments conducted on simulated data, OmniGlot, and COCO datasets, the proposed composite embedding models outperform baselines based on traditional embedding approaches.
Figure 1: (a): Overview of the paper: embedding function f is trained jointly with either the composition function g or the query function h. In particular, the goal is for g to "compose" the embeddings of two examples, containing classes T and U respectively, to approximate the embedding of an example containing classes T ∪ U. (b): 2-D projection of the embedding space from Experiment 1 on test classes and examples not seen during training (one-shot learning). Function g composes two embeddings (two arrow tails) and maps the back into the embedding space (arrow head). To substantial if imperfect degree, the embedding space is compositional as described in (a). To our knowledge, this computational problem is novel. We see at least two use-cases: Speaker recognition and diarization (i.e., infer who is talking within an audio signal) with multiple simultaneous speakers: Given an audio signal containing speakers who were not part of the training set and who may be speaking simultaneously, and given one example of each person speaking in isolation (one-shot learning), infer which set of speakers is talking. Multi-object recognition in images: Given just the embedding of an image x a, answer whether x a contains the object(s) in another image x b. Storing just the embeddings but not the pixels could potentially be more space-efficient. Because of the novelty of the problem, it was not obvious to what baselines we should compare. When evaluating our models, we sought to assess the unique contribution of the compositional embedding above and beyond what a "traditional" embedding could achieve. Hence, we created baselines by endowing a traditional embedding with some extra functionality to enable it to infer label sets. Modeling assumptions and notation: For generality, we refer to the data to be embedded (images, videos, audio signals, etc.) simply as "examples". Let the universe of classes be S. From any subset T ⊆ S, a ground-truth rendering function r: 2 S → R n "renders" an example, i.e., r(T) = x. Inversely, there is also a ground-truth classification function c: R n → 2 S that identifies the label set from the rendered example, i.e., c(x) = T. Neither r nor c is observed. We let e T represent the embedding (i.e., output of f) associated with some example containing classes T. Contribution: To our knowledge, this is the first paper to explore how embedding functions can be trained both to perceive multiple objects in the example and to represent the set of detected objects so that set operations can be conducted among embedded vectors. We instantiate this idea in two ways: Model I for set union (f & g) and Model II for set containment (f & h). By evaluating on synthetic data, OmniGlot handwritten image data , as well as the COCO dataset , we provide a proof-of-concept that "compositional set embeddings" can work. Embeddings: We distinguish between two types of embeddings: "Perceptual" embeddings such as for vision (Facenet ) and speech (x-vector ) where each class (e.g., person whose voice was recorded or face was photographed) may contain widely varying examples across speech content, facial expression, lighting, noise, etc. Word embeddings (word2vec , GloVe ) where each class contains only one exemplar by definition. Within the former, the task of the embedding function is to map examples from the same class close together and examples from other classes far apart. This often requires deep, non-linear transformations to be successful. 
With word embeddings, the class of each example is already clear and does not need to be inferred; instead, the goal is to give the embedded vectors geometric structure to reflect co-occurrence, similarity in meaning, etc. Compositional embeddings: For at least 30 years, AI researchers, cognitive scientists, and computational neuroscientists have explored how the embeddings of multiple elements could be combined to reflect relationships between them or higher-level semantics. However, almost all this work was based on word embeddings, in which perception was not necessary. Some early work investigated how the grammatical structure and/or semantics of an input sentence can be represented in an efficient manner in neural networks, and how such a network could be trained. Given the advent of word embeddings, deep NLP architectures can combine word-level semantics, as represented by the embeddings of the individual elements of an input sentence, to infer higher-level attributes, e.g., sentiment. Recent work has investigated to what extent contemporary recurrent neural networks can generalize to understand novel sentences consisting of known words. Also in the NLP domain, prior work developed compositional pairwise embeddings that model the co-occurrence relationships between two words given their common context. Probably the most algorithmically similar work to ours is on compositional network embeddings: the goal is to predict whether two new nodes in a graph, which were not observed during training, are adjacent, using node-based features as predictors. In that approach, two embeddings are used: one to embed the node-based features, and another to aggregate these embedded features into a secondary embedding space. Structurally, that work differs from ours in that the two embedding spaces in their model do not represent the same universe of objects; the embeddings do not capture set relationships. Deep set representations: Our paper is also about how to encode a set of objects with a neural network. One issue is how to ensure invariance to the order in which examples are presented; one proposed approach is based on permutation-invariant content-based attention. For producing sets as outputs, a probabilistic model has been proposed, within a supervised learning paradigm where all classes are known at training time, that predicts both the cardinality and the particular elements of the set. Given two examples x_a and x_b that are associated with singleton sets {s} and {t}, respectively, the hope is that, for some third example x_c that is associated with both classes (i.e., {s, t}), we have g(f(x_a), f(x_b)) ≈ f(x_c). Moreover, we hope that g can generalize to any number of classes within the set S. For example, if example x_d is associated with a singleton set {u}, then we hope that g(f(x_c), f(x_d)) ≈ f(x_e), where x_e is an example associated with {s, t, u}. There are two challenging tasks that f and g must solve cooperatively: f has to learn to perceive multiple objects that appear simultaneously and may possibly interact with each other, all without knowing the rendering process r of how examples are formed or how classes are combined. g has to define geometrical structure in the embedding space to support set union operations. One way to understand our computational problem is the following: if f is invertible, then ideally we would want g to compute g(e_T, e_U) = f(r(c(f^{-1}(e_T)) ∪ c(f^{-1}(e_U)))).
In other words, one (though not necessarily the only) way that g can perform well is to learn to perform the following actions (without knowing r or c): invert each of the two input embeddings; classify the two corresponding label sets; render an example with the union of the two inferred label sets; and embed the result. One-shot learning: Model I can be used for one-shot learning on a set of classes S not seen during training in the following way: We obtain k labeled examples x_1, ..., x_k from the user, where each {s_i} = c(x_i) is the singleton set formed from the i-th element of S and |S| = k. We call these examples the reference examples. We then infer which set of classes is represented by a new example x using the following procedure: Compute the embedding of x, i.e., f(x). Use f to compute the embedding of each singleton example x_i, i.e., e_{i} = f(x_i). From e_{1}, ..., e_{k}, estimate the embedding of every subset T = {s_1, ..., s_l} ⊆ S according to the recurrence relation e_T = g(e_{T \ {s_l}}, e_{l}). Finally, estimate the label of x as the arg min over T ⊆ S of the distance between f(x) and e_T. Although the number of possible subsets is exponential in |S|, for speaker diarization the number of overlapping speakers is typically small, and thus the iteration is tractable. Functions f and g are trained jointly: For each example x associated with classes T, we compute e_T from the singleton prototypes according to Equation 1. (To decide the order in which we apply the recursion, we define an arbitrary ordering over the elements of S and iterate accordingly.) We then compute a hinge loss that encourages f(x) to be closer to e_T than to the estimated embedding of a competing label set by a margin, where the margin is a small positive real number. In practice, for each example x, we randomly pick one competing label set from 2^S for comparison. See the Appendix for a discussion of an alternative (but less effective) training procedure. To explore the viability of Model I, we first conducted a simulation using 1-D "audio" signals, in which each "speaker" s_i ∈ S is modeled with a prototype p_i ∈ R^n consisting of the superposition of some randomly chosen frequencies and phases. Then, the simultaneous sound from multiple (up to 3) speakers is given by a rendering function r that computes a vector sum of slightly perturbed versions of the prototype vectors and then clips the result. (See Figure 2(a) for example waveforms, and the Appendix for more details.) We trained embedding and composition functions f and g, as described in Section 3.1, on a set of 250 speakers. We then tested these functions on test speakers (and examples) not seen during training, after providing just a single "reference" example of each speaker for one-shot learning. The goal is to identify the exact set of speakers who contributed to the formation of each audio waveform. Architecture: For function f we used a simple convolutional neural network that produces unit-length vectors in R^{32}, where Conv(k, f) is a 1-D convolutional layer with f filters of size k, stride 1, and zero-padding to preserve the spatial extent of the example; BN is batch normalization; MP(k) is a max-pooling layer with width and stride k; FC(n) is a fully-connected layer with n neurons; and L2Norm is an L2-normalization layer. We constructed the composition function (g_DNN) to be symmetric: it is a symmetric function (with parameter matrices W_1, W_2) of its two inputs a, b ∈ R^n that produces a vector in R^k.
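Since the exact layer specification of g_DNN is not given above, the module below is only one plausible symmetric construction consistent with the description (two parameter matrices, order-invariance in the two inputs, unit-length output); it should be read as an assumption rather than as the authors' architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GComposer(nn.Module):
    # Symmetric composition g(a, b): a shared layer W1 applied to each embedding, summed, then W2.
    def __init__(self, emb_dim=32, hidden=128):
        super().__init__()
        self.W1 = nn.Linear(emb_dim, hidden)
        self.W2 = nn.Linear(hidden, emb_dim)

    def forward(self, a, b):
        # Summing the two branches makes the output invariant to the order of a and b.
        h = F.relu(self.W1(a) + self.W1(b))
        return F.normalize(self.W2(h), dim=-1)   # keep composed embeddings unit length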
Assessment: We evaluated accuracy (% correct) in multiple ways: the accuracy, over all examples, of identifying the exact set of speakers; the accuracy, separately for each set size, of identifying the exact set of speakers; and the accuracy, over all examples, in distinguishing just the number of speakers in the set rather than their specific identities. As a baseline, we used an oracle that returned the Most Frequently (MF) occurring label set in the test set (or a random selection among multiple most frequent elements). Results are shown in Figure 2 (b). Note that, for the rows that evaluate on k-sets, the embedding functions are still used to compare the test example to every possible subset (see Equation 2) -not just subsets of size k. In general, the compositional embedding method was far more accurate (48.1% for top-1 and 69.6% for top-3 on all subsets) compared to the baseline. The accuracy improvement was largest on singletons (1-sets), which made up 1/3 of the test data, and for which any non-compositional embedding functions (i.e., f trained without any g) would likely work well. Even on 2-sets and 3-sets, however, the compositional embedding approach was substantially more accurate than the baseline. Moreover, part of the task involves inferring the number of elements in the label set (set size determination); on this problem too the compositional embedding was nearly twice as accurate (61.9% vs. 33.3%) compared to the MF baseline. Geometry of the embedding space: We visualized (using PCA) the embedding space for a set of 3 classes not used for training (Figure 1(b) ). Each marker (the symbol/color combination distinguishes the different label sets) represents the embedding of a test example. The clusters of markers for each label set are clearly distinguished by the embedding f. From the examples from each label set, one is randomly chosen as the "reference" examples for 1-shot learning. Using composition function g DNN, their embeddings are combined to estimate where, in the embedding space, an example for the union of their two label sets would lie; the estimated location is shown at the head of the arrow whose two tails come from the two reference examples. Although the estimated and actual clusters do not align exactly, there is substantial agreement. For instance, the examples for speakers {1, 2, 3} (i.e., all 3 people talking simultaneously) are represented by the blue triangles pointing to the right, and the estimate, given by f and g DNN from reference examples of {3} and {1, 2} is shown at the head of the arrow with label {1, 2, 3} beside it. Not all compositions are accurate, e.g., the estimated location for label set {1, 3} is somewhat below the actual cluster location. We also evaluated our method on the OmniGlot dataset . OmniGlot contains handwritten characters from 50 different alphabets; in total it comprises 1623 symbols, each of which was drawn by 20 people and rendered as a 64x64 image. OmniGlot has been previously used in one-shot learning research (e.g., ;). In our experiment, the model is provided with one reference image for each singleton test class. Then, it uses f and g to select the subset of classes that most closely match the embedding of each test example (Equation 2). In this study we considered class label sets up to size 2 (i.e., singletons and 2-sets). 
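The inference step of Equations 1 and 2 can be sketched as follows (our own illustrative code, not the authors'): singleton reference embeddings are composed with g under a fixed class ordering to obtain a candidate embedding for every non-empty subset, and the test example is assigned to the subset whose candidate embedding is closest to f(x).

from itertools import combinations
import torch

def infer_label_set(f, g, x, reference_examples, max_set_size=3):
    # reference_examples: dict {class_id: reference tensor}, one example per singleton class.
    singletons = {(c,): f(ref) for c, ref in reference_examples.items()}
    candidates = dict(singletons)
    classes = sorted(reference_examples)
    for size in range(2, max_set_size + 1):
        for subset in combinations(classes, size):
            # Recurrence: compose the embedding of the first (size - 1) classes with the last singleton.
            candidates[subset] = g(candidates[subset[:-1]], singletons[(subset[-1],)])
    e_x = f(x)
    # Equation 2: pick the subset whose estimated embedding is closest to f(x).
    return min(candidates, key=lambda T: torch.norm(e_x - candidates[T]).item())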
The rendering function r randomly picks one of the 20 exemplars from each class; it randomly shifts, scales, and rotates it; and then it adds it to an image with Gaussian noise (see Figure 3(a) ). For images with multiple classes, the pixel-wise minimum across all classes is applied before adding the noise. (See Appendix for more details.) Similar to Experiment 1, the goal is to train f and g so that, on classes not seen during training, the exact set of classes contained in each test example can be inferred. During both training and testing, each of the 15 class label sets (5 singletons and 5 2 = 10 2-sets) occurred with equal frequency. All the embeddings were normalized to unit length. Architecture: For f, we used ResNet-18 Training: The process is similar to Experiment 1. See Appendix for details. Testing: Similar to training, the testing data are grouped into 5 randomly selected classes (not seen during training), and images from these classes are rendered using function r from either singleton or 2-set class label sets. We optimize Equation 2 to estimate the label set for each test example. Baselines: We compared to several baselines: • Most frequent (MF): Always guess the most frequent element in the test set. Since all classes occurred equally frequently, this was equivalent to random guessing. • Traditional f with simulated composite reference examples (SimRef): Given the reference examples of the singleton label sets (e.g., {1}, {3}), we can simulate reference examples for each composite label set (e.g., {1, 3}) and then use a traditional embedding approach to classify the test examples. We simulated imperfect knowledge of the rendering function r by performing pixel-wise minimum, but without shifting/scaling/rotation. In other words, this approach simply superimposes pairs of reference images on top of each other to simulate composite examples. To estimate the label set of each test example, we select the reference example (including the simulated composite references) with minimal L 2 distance. • Traditional f and average (Mean): f is trained as a traditional embedding function (only on singletons) using one-shot learning . Then the embedding of a composite image is computed using the mean of the embeddings of each class in its label set. • Traditional f trained jointly with g mean (g mean): f is trained on only singleton embeddings but is optimized jointly with composition g mean that computes the mean of its inputs. The difference with the previous baseline is that f can benefit from knowing how its embeddings are combined. Results: As shown in Figure 3 (b), the proposed f & g method outperforms the Mean and g mean baselines. Upon closer investigation, we discovered that, while Mean can distinguish each singleton from the other four singletons with high accuracy (0.979, not shown in table), it struggles when it must decide among all 15 possible label sets. The slightly more powerful f & g Mean method (i.e., the composition function has no trainable parameters) can achieve better accuracy on 2-set images, but the accuracy on 1-set images was much worse than other methods. We argue that f and g correspond to two different tasks, and the model is not able to do a good job on both tasks at the same time without optimizing g. If g is a learnable function, then a simple symmetric linear function (g Lin) achieves the best performance. After we stack more FC layers, the performance gets worse. 
The reason could be overfitting, and it may be possible to achieve better performance with regularization or more training data. In real life, it is very difficult to create a dataset in which all singletons in a compositional set are labeled. Thus, we want to extend our compositional embedding from "composition" to "containment". Here we consider a second type of compositional embedding mechanism that tests whether the set of classes associated with one example subsumes the set of classes associated with another example. We implement this using a "query" function h that takes two embedded examples as inputs and outputs whether the label set of the second example is contained in the label set of the first, i.e., h(f(x a), f(x b)) = True iff c(x b) ⊆ c(x a) (Eq. 3). In contrast to g, function h is not symmetric: its first and second arguments are the putative superset and subset, respectively. Also, h can be trained in an unsupervised manner w.r.t. the individual examples: it never needs to know which particular label set is associated with an example, but rather only pairwise information about which examples "subsume" other examples. For h, we tried several functions (h DNN, h Lin+FC, h Lin), analogous to the different implementations of g from Section 3.3. The final layers of all models have a 1-dimensional output. Functions f and h are trained jointly. Since h is not symmetric, its first layer is replaced with two weight matrices for the different input embeddings (see Appendix). In contrast to Model I, "reference" examples are not needed; only the subset relationships between label sets of pairs of examples are required. Model II can be trained on one set of classes and applied to a different set of classes (see Experiment 3), akin to zero-shot learning. It is also useful when the training and testing classes are the same because, in contrast to traditional supervised training of a detector for each possible class, no labels for individual examples are needed. To train f and h, we backpropagate a binary cross-entropy loss, based on correctly answering the query in Eq. 3, through both f and h. Here we assess Model II in a one-shot learning setting (OmniGlot), i.e., training and testing classes are different. As in Experiment 2, we consider class label sets of size up to 2, and we use the same rendering function r. Let f(x a) and f(x b) be the first and second arguments to h, respectively. For x a, we always choose examples associated with two classes, i.e., c(x a) = {s 1, s 2}. For x b, half of the positive examples (i.e., such that h(f(x a), f(x b)) = True) contain either s 1 or s 2, and half contain both classes. For the negative examples (h(f(x a), f(x b)) = False), x b is associated with some other singleton class s 3 ∉ {s 1, s 2}. See Appendix for more details. Baseline: We compared our proposed method with a traditional (non-compositional) embedding method (TradEmb) that is trained to separate examples according to their association with just a single class. In particular, for each composite example x a (i.e., |c(x a)| = 2), we picked one of the two classes arbitrarily (according to some fixed ordering on the elements of S); call this class s 1. Then, we chose both a positive example x b (such that c(x b) = {s 1}) and a negative example x c (such that c(x c) = {s 3} ⊄ c(x a)). We then compute a hinge loss so that the distance between f(x a) and f(x b) is smaller than the distance between f(x a) and f(x c), and backpropagate the loss through f. During testing, we use f to answer a query (does c(x a) contain c(x b)?) by thresholding the distance between f(x a) and f(x b) (threshold of 0.5). Results are shown in Table 1(a).
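Before turning to those results, here is a minimal PyTorch sketch of the kind of asymmetric query function h described above: the first layer applies separate weight matrices to the putative superset and subset embeddings, deeper layers are shared, and a single logit is trained with binary cross-entropy. Layer sizes and names are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch of the query function h: asymmetric first layer, shared MLP on top.
import torch
import torch.nn as nn

class QueryH(nn.Module):
    def __init__(self, emb_dim=64, hidden=128):
        super().__init__()
        self.w_super = nn.Linear(emb_dim, hidden, bias=False)  # putative superset e_a
        self.w_sub = nn.Linear(emb_dim, hidden, bias=True)     # putative subset e_b
        self.mlp = nn.Sequential(nn.ReLU(), nn.Linear(hidden, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, e_a, e_b):
        # returns a logit; apply a sigmoid to read it as P(c(x_b) subset of c(x_a))
        return self.mlp(self.w_super(e_a) + self.w_sub(e_b))

# Joint training step with the embedding network f (y = 1 iff c(x_b) is
# contained in c(x_a)); both f and h receive gradients from this loss:
# loss = nn.BCEWithLogitsLoss()(h(f(x_a), f(x_b)).squeeze(-1), y.float())
```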
Compositional embeddings, as implemented with a combination of f trained jointly with either h DNN, h Lin+FC, or h Lin, slightly outperformed the TradEmb baseline, in terms of both % correct accuracy and AUC. Moreover, there was an advantage of deeper architectures for h. We trained and evaluated Model II on COCO . This is a highly challenging problem: in the example in Figure 4, f has to encode a backpack, a cell phone, and a person; then, given completely different images of these classes (and others), h has to decide which objects were present in the original image. Here, training and testing classes are the same (but testing examples are not used for training). In COCO, each image may contain objects from multiple classes, and each object has a bounding box. We used the bounding boxes to crop singleton objects from images. For training, we used the same strategy as Experiment 3, except that in positive queries (i.e., pairs of images x a, x b such that h(f (x a), f (x b)) = True), image x b was always associated with just a singleton label set. During testing, half of x b are contained in x a and half are not (see Appendix). Baseline: We compared to the TradEmb method: to decide the singleton label set for each image during training, we pick the largest object. Since in this problem the training and testing classes are the same, we also implemented another baseline (see Appendix) consisting of a ResNet classifier with multiple independent binary outputs (one for each class). We can then answer queries about the label sets of two images using the labels of the different classes estimated by the ResNet for each image. Results are shown in Table 1 (b). The proposed methods (f & one of the h functions) easily outperformed TradEmb. They also outperformed the ResNet baseline. One possible reason for the latter baseline's poor performance was that the data was highly imbalanced (since most images contain just a few images). As in Experiment 3, deeper architectures for h performed better. We proposed a new kind of embedding mechanism whereby the set of objects contained in the input data (e.g., image, video, audio) must be both perceived and then mapped into a space such that the set relationships -union (Model I) and subset (Model II) -between multiple embedded vectors can be inferred. Importantly, the ground-truth rendering process for how examples are rendered from their component classes is not known and must implicitly be learned. In our experiments, conducted on simulated data, OmniGlot, and COCO, the accuracy was far from perfect but outperformed several baselines, including one based on a traditional embedding approach. The provide a proof-of-concept of how an embedding function f, trained jointly with either the composition function g or the query function h, could be effectively optimized. One possible direction for further research to increase accuracy is to take better advantage of the statistical structure of class co-occurrence in a specific application domain (e.g., which objects tend to co-occur in the same image). A ALTERNATIVE TRAINING PROCEDURE We also tried another method of training f and g with the explicit goal of encouraging g to map e T and e U to be close to e T ∪U. This can be done by training f and g alternately, or by training them jointly in the same backpropagation. However, this approach yielded very poor . A possible explanation is that g could fulfill its goal by mapping all vectors to the same location (e.g., 0). 
Hence, a trade-off arises between g's goal and f's goal (separating examples with distinct label sets). Since it was initially not clear to us whether functions f and g even exist with the desired properties that could plausibly be implemented as a neural network, we first sought to design them analytically. The composition function g needs to map two vectors on a unit sphere back onto the same sphere, so that the location on the sphere uniquely identifies a set of classes. Here, we assume the set of classes is the same for training and testing. Our construction is inspired by basic group theory, in particular how each element of a permutation group can be represented as a rotation of Euclidean space and performed by multiplication with a permutation matrix. Let |S| = k. We model g after the permutation group of degree m = 2k consisting of all pairwise exchanges of coordinates 2i and 2i − 1 (i = 1, ..., k), where the group action is composition. Since each such permutation is independent of all the others, the group is commutative and contains 2^k elements, just as we desire so that the range of g contains 2^|S| elements. Each member of the permutation group can be associated with a unique permutation matrix, where multiplication among these matrices is also commutative. Define a vector e ∅ ∈ R^m whose length is 1 and whose components are all distinct. The first condition is standard when creating an embedding, and the second is important so that the permutations applied to the vector can be deduced unambiguously. We associate vector e ∅ with the empty set ∅. Next, to each singleton {s i}, where s i ∈ S, we associate the m-dimensional permutation matrix P {s i} that swaps axes 2i and 2i − 1. We define f as a neural network: In its first layers, all the classes associated with example x are detected independently using k binary classifiers. Next, for all the detected classes, their associated permutation matrices P are multiplied together, which yields another commutative permutation matrix; the resultant matrix identifies all the classes present in the example. Finally, the result is multiplied by the vector e ∅. This produces a vector with unit length in the embedding space R^m. If the first few layers comprise k binary classifiers d i (z) ∈ {0, 1} (i = 1, ..., k), then the whole f network computes f(x) = (∏ i: d i (x)=1 P {s i}) e ∅, i.e., the product of the permutation matrices of the detected classes applied to e ∅. We then construct g so that, given two embedded vectors e T and e U, it computes the product of their associated permutation matrices and multiplies the result by e ∅: g(e T, e U) = g(P T e ∅, P U e ∅) = P T P U e ∅ = e T∪U. This construction of f and g enables perfect inference of the union of classes associated with any examples, provided the binary classifiers are perfect. This gave us hope that compositional set embeddings could succeed. Simulated speaker data (Experiment 1 details): Each speaker was represented by a "prototype" waveform comprising the sum of 8 sinusoids (range [−1, +1]) with randomly chosen frequencies and phases. To construct each training and testing example, the prototypes of all the speakers contained in the particular example are added together and then clipped at [−4, +4]. To simulate noise, we slightly perturbed the frequencies and phases of each prototype by random noise drawn uniformly from [−0.02, +0.02]. We also perturbed the superposition by a small amount (from [−0.02, +0.02]) prior to clipping. The training dataset contained a total of 250 simulated speakers.
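Returning briefly to the analytic construction above (the training details continue below), the following small NumPy check illustrates it: e_0 is a unit vector with distinct entries, each singleton class corresponds to a coordinate-swap permutation matrix, and embedding a label set multiplies the (commuting) matrices of its classes before applying them to e_0, so composition order does not matter. Dimensions and indexing (0-based here) are illustrative.

```python
# Numerical check of the permutation-matrix construction of f and g.
import numpy as np

k = 3                       # number of classes
m = 2 * k
e0 = np.arange(1, m + 1, dtype=float)
e0 /= np.linalg.norm(e0)    # unit length, all entries distinct

def perm(i):
    """Permutation matrix swapping coordinates 2i and 2i+1 (0-indexed)."""
    P = np.eye(m)
    P[[2 * i, 2 * i + 1]] = P[[2 * i + 1, 2 * i]]
    return P

def embed(label_set):
    """e_T = (product of the classes' permutation matrices) applied to e0."""
    P = np.eye(m)
    for i in label_set:
        P = P @ perm(i)
    return P @ e0

# Composition is order-independent because the swap matrices commute:
assert np.allclose(embed({0, 2}), perm(0) @ embed({2}))
assert np.allclose(embed({0, 2}), perm(2) @ embed({0}))
```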
Each minibatch comprised 3072 examples from 5 unique speakers that were combined in randomly selected subsets T: 1024 of these contained individual speakers (singletons), 1024 contained combinations of 2 speakers, and 1024 contained combinations of 3 speakers. Functions f and g were trained jointly as described in Section 3.1. Training was conducted for 4000 epochs with a learning rate of 0.002 and the Adam algorithm. We set the hinge parameter = 0.1. At test time, one of the singleton examples was randomly chosen as the "reference" example for one-shot learning. Similar to training, each test set consists of 3072 examples comprising randomly chosen 1-sets, 2-sets and 3-sets of speakers; see Figure 2 (a) for example waveforms. To generate an example image from a class prototype, we applied random affine transformations consisting of shift up to 20%, scaling up to 10%, and rotation up to 10 •. Gaussian noise was added with mean 0.9 and variance 0.1. We set the hinge parameter = 0.1. Training is performed on 964 OmniGlot classes; from each of these, the 20 images are augmented using affine transformations (described above) to yield 50 images. Each mini-batch comprises 10000 examples from 5 classes, with two images randomly selected per class: one as a reference example, and one as a training example. The validation set consists of 20 OmniGlot classes not used for training. Training is performed using Adam (lr = .0003) to maximize the validation accuracy, up to a maximum of 100 epochs. Testing accuracy is computed over the 659 OmniGlot classes not used for training or validation. The dataset used in this experiment is the same as experiment 2 including the distribution of training set, validation set and test set. We also used the same ResNet-18 as for Experiment 2. Function h was constructed the same as g except that the first layer was relaxed to be asymmetric: W 1 e T + W 2 e U, and the output layer is logistic sigmoid. For training and testing, the proportion of positive and negative queries was fixed to be equal. During training, each mini-batch contains 128 query pairs and each query pair is composed by a positive query and a negative query. We use binary cross entropy loss and Adam optimizer (lr = .0003) to optimize the models. During evaluation, 0.5 is used for thresholding. COCO's training set is used for training, and COCO's labels of subclasses are used as training labels. A small part of images are taken as validation set. For ResNet baseline, We modified ResNet's last's dimension to 80 and applied logistic sigmoid to all of them. Thus, the output layer can be used as 80 classifiers and each one corresponds to one possible subclass. During training, the loss is composed by binary cross entropy loss from all 80 binary classifiers. For every subclass in all 80 possibilities, if it is in the set of subclasses of one image, then the label is 1, otherwise 0. Adam optimizer ((lr) =.0003) is used for optimization. During evaluation, for each query we assume the class labels of subclass images are already known. ResNet is not optimized to answer queries about image pairs. Instead, it tries to encode each image into an n-bit string (for n classes). While this representation can account for all 2 n possible label sets, it may not be the most effective or efficient representation for the task, especially since some objects are very unlikely to co-occur with others. 
The proposed f & h embedding method can harness the co-occurrence structure to answer queries more accurately, whereas a ResNet trained to recognize all the individual classes does not harness it. For the other models, singleton images are also required; these are cropped according to COCO's bounding-box labels. All images were padded with zeros to be square and then downscaled to 128x128. We used the same architecture as for Experiment 3, except that the input layer had 3 channels instead of 1. Testing was the same as in Experiment 3 (likewise with a distance threshold of 0.5). In both training and testing, the numbers of positive and negative queries are balanced. To explore how the number of subclasses affects the h function, we also plot the accuracy and AUC for different numbers of subclasses in Figure 5. We can see a decreasing trend in performance when the images contain more kinds of labeled objects.
We explored how a novel method of compositional set embeddings can both perceive and represent not just a single class but an entire set of classes that is associated with the input data.
792
scitldr
In this work, we propose a goal-driven collaborative task that contains language, vision, and action in a virtual environment as its core components. Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. The game involves two players: a Teller and a Drawer. The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces. The two players communicate via two-way communication using natural language. We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human agents. We define protocols and metrics to evaluate the effectiveness of learned agents on this testbed, highlighting the need for a novel "crosstalk" condition which pairs agents trained independently on disjoint subsets of the training data for evaluation. We present models for our task, including simple but effective baselines and neural network approaches trained using a combination of imitation learning and goal-driven training. All models are benchmarked using both fully automated evaluation and by playing the game with live human agents. Building agents that can interact with humans in natural language while perceiving and taking actions in their environments is one of the fundamental goals in artificial intelligence. One of the required components, language understanding, has traditionally been studied in isolation and with tasks aimed at imitating human behavior (e.g. language modeling BID4 ; BID35, machine translation BID2 ; BID42, etc.) by learning from large text-only corpora. To incorporate both vision and action, it is important to have the language grounded BID19 BID3, where words like cat are connected to visual percepts and words like move relate to actions taken in an environment. Additionally, judging language understanding purely based on the ability to mimic human utterances has limitations: there are many ways to express roughly the same meaning, and conveying the correct information is often more important than the particular choice of words. An alternative approach, which has recently gained increased prominence, is to train and evaluate language generation capabilities in an interactive setting, where the focus is on successfully communicating information that an agent must share in order to achieve its goals. In this paper, we propose the Collaborative Drawing (CoDraw) task, which combines grounded language understanding and learning effective goal-driven communication into a single, unified testbed. This task involves perception, communication, and actions in a partially observable virtual environment. As shown in FIG0, our game is grounded in a virtual world constructed by clip art objects. Two players, Teller and Drawer, play the game. The Teller sees an abstract scene made from clip art objects in a semantically meaningful configuration, while the Drawer sees a drawing canvas that is initially empty. The goal of the game is to have both players communicate so that the Drawer can reconstruct the image of the Teller, without ever seeing it. Our task requires effective communication because the two players cannot see each other's scenes. The Teller has to describe the scene in sufficient detail for the Drawer to reconstruct it, which will require rich grounded language. 
Moreover, the Drawer will need to carry out a series of actions from a rich action space to position, orient, and resize all of the clip art pieces required for the reconstruction. Note that such actions are only made possible through clip art pieces: they can represent semantically meaningful configurations of a visual scene that are easy to manipulate, in contrast to low-level pixel-based image representations. The performance of a pair of agents is judged based on the quality of reconstructed scenes. We consider high-quality reconstructions as a signal that communication has been successful. As we develop models and protocols for CoDraw, we found it critical to train the Teller and the Drawer separately on disjoint subsets of the training data. Otherwise, the two machine agents may conspire to successfully achieve the goal while communicating using a shared "codebook" that bears little resemblance to natural language. We call this separate-training, joint-evaluation protocol crosstalk, which prevents learning of mutually agreed upon codebooks, while still checking for goal completion at test time. We highlight crosstalk as one of our contributions, and believe it can be generally applicable to other related tasks BID41 BID14 BID11 BID9 BID27. • We propose a novel CoDraw task, which is a game designed to facilitate the learning and evaluation of effective natural language communication in a grounded context. • We collect a CoDraw dataset of ∼10K variable-length dialogs consisting of ∼138K messages with the drawing history at each step of the dialog.• We define a scene similarity metric, which allows us to automatically evaluate the effectiveness of the communication at the end and at intermediate states.• We propose a cross-talk training and evaluation protocol that prevents agents from potentially learning joint uninterpretable codebooks, rendering them ineffective at communicating with humans.• We evaluate several Drawer and Teller models automatically as well as by pairing them with humans, and show that long-term planning and context reasoning in the conversation are key challenges of the CoDraw task. Language and Vision. The proposed CoDraw game is related to several well-known language and vision tasks that study grounded language understanding BID21 BID13 BID11. For instance, compared to image captioning BID46 BID6 BID32, visual question answering BID1 BID23 BID33 BID37 BID43 BID54 and recent embodied extensions BID10, CoDraw involves multiple rounds of interactions between two agents. Both agents hold their own partially observable states and need to build a mental model for each other to collaborate. Compared to work on navigation BID47 BID0 BID16 where an agent must follow instructions to move itself in a static environment, CoDraw involves moving and manipulating multiple clip art pieces, which must jointly form a semantically meaningful scene. Compared to visual dialog BID8 BID39 BID40 BID36 tasks, agents need to additionally cooperate to change the environment with actions (e.g., move around pieces). Thus, the agents have to possess the ability to adapt and hold a dialog about novel scenes that will be constructed as a consequence of their dialog. In addition, we also want to highlight that CoDraw has a well-defined communication goal, which facilitates objective measurement of success and enables end-to-end goal-driven learning. End-to-end Goal-Driven Dialog. 
Traditional goal-driven agents are often based on'slot filling' BID24 BID51 BID54, in which the structure of the dialog is pre-specified but the individual slots are replaced by relevant information. Recently, endto-end neural models are also proposed for goal-driven dialog BID5 BID29 BID39 BID20, as well as goal-free dialog or'chit-chat' BID38 BID39 BID45 BID12. Unlike CoDraw, in these approaches, symbols in the dialog are not grounded into visual objects. Language Grounded in Environments. Learning language games to change the environment has been studied recently BID49. The agent can change the environment using the grounded natural language. However, agents do not have the need to cooperate. Other work on grounded instruction following relies on datasets of pre-generated action sequences annotated with human descriptions, rather than using a single end goal BID31. Speaker models for these tasks are only evaluated based on their ability to describe an action sequence that is given to them BID15, whereas Teller models for CoDraw also need to select a desired action sequence in a goal-driven manner. Language grounding has also been studied for robot navigation, manipulation, and environment mapping BID44 BID34 BID7. However, these works manually pair each command with robot actions and lack end-to-end training BID44, dialog BID34 BID7, or both BID48.Emergent Communication. Building on the seminal works by BID25 BID26, a number of recent works study cooperative games between agents where communication protocols emerge as a consequence of training the agents to accomplish shared goals BID41 BID14. These methods have typically been applied to learn to communicate small amounts of information, rather than the complete, semantically meaningful scenes used in the CoDraw task. In addition, the learned communication protocols are usually not natural BID22 or interpretable. On the other hand, since our goal is to develop agents that can assist and communicate with humans, we must pre-train our agents on human communication and use techniques that can cope with the greater linguistic variety and richness of meaning present in natural language. In this section, we first detail our task, then present the CoDraw dataset, and finally propose a Scene Similarity Metric which allows automatic evaluation of the reconstructed and original scene. Abstract Scenes. To enable people to easily draw semantically rich scenes on a canvas, we leverage the Abstract Scenes dataset of. This dataset consists of 10,020 semantically consistent scenes created by human annotators. An example scene is shown in the left portion of FIG0. Most scenes contain 6 objects (min 6, max 17, mean 6.67). These scenes depict children playing in a park, and are made from a library of 58 clip arts, including a boy (Mike) and a girl (Jenny) in one of 7 poses and 5 expressions, and various other objects including trees, toys, hats, animals, food, etc. An abstract scene is created by dragging and dropping multiple clip art objects to any (x, y) position on the canvas. Also, for each clip art, different spatial transformations can be applied, including sizes (Small, Normal, Large), and two orientations (facing left or right). The clip art serve simultaneously as a high-level visual representation and as a mechanism by which rich drawing actions can be carried out. Interface. We built a drag-and-drop interface based on the Visual Dialog chat interface BID8 ) (see Figures 4 and 5 in Appendix A for screen shots of the interface). 
The interface allows real-time interaction between two people. During the conversation, the Teller describes the scene and answers any questions from the Drawer on the chat interface, while Drawer "draws" or reconstructs the scene based on the Teller's descriptions and instructions. Each side is only allowed to send one message at a time, and must wait for a reply before continuing. The maximum length of a single message is capped at 140 characters: this prevents excessively verbose descriptions and gives the Drawer more chances to participate in the dialog by encouraging the Teller to pause more frequently. Both participants were asked to submit the task when they are both confident that Drawer has accurately reconstructed the scene of Teller. Our dataset, as well as this infrastructure for live chat with live drawing, will be made publicly available. Additional Interaction. We did not allow Teller to continuously observe Drawer's canvas to make sure that the natural language focused on the high-level semantics of the scene rather than instructions calling for the execution of low-level clip art manipulation actions, but we hypothesize that direct visual feedback may be necessary to get the all the details right. For this, we give one chance for the Teller to look at the Drawer's canvas using a'peek' button in the interface. Communication is only allowed after the peek window is closed. We collect 9,993 1 dialogs where pairs of people complete the CoDraw task, consisting of one dialog per scene in the Abstract Scenes dataset. The dialogs contain of a total of 138K utterances and include snapshots of the intermediate state of the Drawer's canvas after each round of each conversation. We reserve 10% of the scenes to form a test set and an additional 10% to form a development set; the corresponding dialogs are used to evaluate human performance for the CoDraw task. The remaining dialogs are used for training (see Section 5 for details about our training and evaluation setup.)The message length distribution for the Drawer is skewed toward 1 with the passive replies like "ok", "done", etc. There does exist a heavy tail, which shows that Drawers do ask clarifying questions about the scene like "where is trunk of second tree, low or high". On the other hand, the distribution of number of tokens in Tellers' utterances is relatively smooth with long tails. The vocabulary size is 4,555. Since the subject of conversations is about abstract scenes with a limited number of clip arts, the vocabulary is relatively small compared to those on real images. See Appendix B for a more detailed analysis of our dataset, where we study the lengths of the conversations, the number of rounds, and the distributions of scene similarity scores when humans perform the task. The goal-driven nature of our task naturally lends itself to evaluation by measuring the similarity of the reconstructed scene to the original. For this purpose we define a scene similarity metric, which allows us to automatically evaluate communication effectiveness both at the end of a dialog and at intermediate states. We use the metric to compare how well different machine-machine, humanmachine, and human-human pairs can complete the CoDraw task. Let c i, c j denote the identity, location, configuration of two clipart pieces i and j. A clipart image C = {c i} is then simply a set of clipart pieces. 
Given two images C andĈ, we compute scene similarity by first finding the common clipart pieces C ∩Ĉ and then computing unary f (c i) and pairwise terms g(c i, c j) on these pieces in common: DISPLAYFORM0 Using f (c) = 1 and g(c i, c j) = 0 would in the standard intersection-over-union measure used for scoring set predictions. The denominator terms normalize the metric to penalize missing or extra clip art, and we set f and g such that our metric is on a 0-5 scale. The exact terms f and g are described in Appendix C. We model both the Teller and the Drawer, and evaluate the agents using the metrics described in the previous section. Informed by our analysis of the collected dataset (see Appendix B), we make three modeling assumptions compared to the full generality of the setup that humans were presented with during data collection. These assumptions hold for all models studied in this paper. Assumption 1: Silent Drawer. We choose to omit the Drawer's ability to ask clarification questions; instead, our Drawer models will always answer "ok" and our Teller models will not condition on the text of the Drawer replies. This is consistent with typical human replies (around 62% of which only use a single token) and the fact that the Drawer talking is not strictly required to resolve the information asymmetry inherent in the task. We note that this assumption does not reduce the number of modalities needed to solve the task: there is still language generation on the Teller side, in addition to language understanding, scene perception, and scene generation on the Drawer side. Drawer models that can detect when a clarification is required, and then generate a natural language clarification question is interesting future work. Assumption 2: No Peek Action. The second difference is that the data collection process for humans gives the Teller a single chance to peek at the Drawer's canvas, a behavior we omit from our models. Rich communication is still required without this behavior, and omitting it also does not decrease the number of modalities needed to complete the task. We leave for future work the creation of models that can peek at the time that maximizes task effectiveness. Assumption 3: Full Clip Art Library. The final difference is that our drawer models can select from the full clip art library. Humans are only given access to a smaller set so that it can easily fit in the user interface, while ensuring that all pieces needed to reconstruct the target scene are available. We choose to adopt the full-library condition as the standard for models because it gives the models greater latitude to make mistakes (making the problem more challenging) and makes it easier to detect obviously incorrect groundings. Simple methods can be quite effective even for what appear to be challenging tasks, so we begin by building models based on nearest-neighbors and rule-based approaches. Rule-based Nearest-Neighbor Teller. For our first Teller model, We consider a rule-based dialog policy where the Teller describes exactly one clip art each time it talks. The rule-based system determines which clip art to describe during each round of conversation, following a fixed order that roughly starts with objects in the sky (sun, clouds, airplanes), then objects in the scene (trees, Mike, Jenny), and ends with small objects (sunglasses, baseball bat). 
The Teller then produces an utterance by performing a nearest-neighbor lookup in a database containing (Teller utterance, clip art object) pairs, where the similarity between the selected clip art and each database element is measured by applying the scene similarity metric to individual clip art. The database is constructed by collecting all instances in the training data where the Teller sent a message and the Drawer responded by adding a single clip art piece to the canvas. Instances where the Drawer added multiple clip art pieces or made any changes to the position or other attributes of pieces already on the canvas are not used when constructing the nearest-neighbor database. This baseline approach is based on the assumptions that the Drawer's action was elicited by the Teller utterance immediately prior, and that the Teller's utterance will have a similar meaning when copied verbatim into a new conversation. Rule-based Nearest-Neighbor Drawer. This Drawer model is the complement to the rule-based nearest-neighbor Teller. It likewise follows a hard-coded rule that the response to each Teller utterance should be the addition of a single clip art to the scene, and makes use of the same database of (Teller utterance, clip art object) tuples collected from the training data. Each Teller utterance the agent receives is compared with the stored tuples using character-level string edit distance. The clip art object from the most similar tuple is selected and added to the canvas by the Drawer. In this section, we describe a neural network approach to the Drawer. Contextual reasoning is an important part of the CoDraw task: each message from the Teller can relate back to what the Drawer has previously heard or drawn, and the clip art pieces it places on the canvas must form a semantically coherent scene. To capture these effects, our model should condition on the past history of the conversation and use an action representation that is conducive to generating coherent scenes.
(Figure caption: The Drawer (left) conditions on the current state of the canvas and a BiLSTM encoding of the previous utterance to decide which clip art pieces to add to a scene. The Teller (right) uses an LSTM language model with attention to the scene (in blue) taking place before and after the LSTM. The "thought bubbles" represent intermediate supervision using an auxiliary task of predicting which clip art have not been described yet. In reinforcement learning, the intermediate scenes produced by the drawer are used to calculate rewards. Note that the language used here was constructed for illustrative purposes, and that the messages in our dataset are more detailed and precise.)
When considering past history, we make the Markovian assumption that the current state of the Drawer's canvas captures all information from the previous rounds of dialog. Thus, the Drawer need only consider the most recent utterance from the Teller and the current canvas to decide what to draw next. We experimented with incorporating additional context, such as previous messages from the Teller or the action sequence by which the Drawer arrived at its current canvas configuration, but did not observe any gains in performance. The current state of the canvas is represented using a collection of indicator features and real-valued features. For each of the n c = 58 clip art types, there is an indicator feature for its presence on the canvas, and an indicator feature for each discrete assignment of an attribute to the clip art (e.g., 1[size=small], 1[size=medium], etc.)
for a total of n b = 41 binary features. There are additionally two real-valued features that encode the x and y position of the clip art on the canvas, normalized to the 0-1 range. The resulting canvas representation is a feature vector v canvas of size n c × (n b + 2), where all features for absent clip art types are set to zero. We run a bi-directional LSTM over the Teller's most recent message and extract the final hidden states for both directions, which we concatenate to form a vector v message. The Drawer is then a feedforward neural network that takes as input v canvas and v message and produces an output vector v action. The action representation v action also consists of n c × (n b + 2) elements and can be thought of as a continuous relaxation of the mostly-discrete canvas encoding. For each clip art type, there is a real-valued score that determines whether a clip art piece of that type should be added to the canvas: a positive score indicates that it should be added as part of the action. During training, a binary cross-entropy loss compares these scores with the actions taken by human drawers. v action also contains unnormalized log-probabilities for each attribute-value assignment (e.g. z size=small, z size=medium, etc. for each clip art type); when a clip art piece is added to the canvas, its attributes are assigned to their most-probable values. The log-probabilities are trained using softmax losses. Finally, v action contains two entries for each clip art type that determine the clip art's x, y position if added to the canvas; these elements are trained using an L2 loss. For our neural Teller models, we adopt an architecture that we call scene2seq. This architecture is a conditional language model over the Teller's side of the conversation with special next-utterance tokens to indicate when the Teller ends its current utterance and waits for a reply from the Drawer. The ground-truth scene is incorporated both before and after each LSTM cell through the use of an attention mechanism. Attention occurs over individual clip art pieces: each clip art object in the ground-truth scene is represented using a vector that is the sum of learned embeddings for different clip art attributes (e.g. e type=Mike, e size=small, etc.) At test time, the Teller's messages are constructed by decoding from the language model using greedy word selection. To communicate effectively, the Teller must keep track of which parts of the scene it has and has not described, and also generate language that is likely to accomplish the task objective when interpreted by the Drawer. We found that training the scene2seq model using a maximum likelihood objective did not result in long-term coherent dialogs for novel scenes. Rather than introducing a new architecture to address these deficiencies, we explore reducing them by using alternative training objectives. To better ensure that the model keeps track of which pieces of information it has already communicated, we take advantage of the availability of drawings at each round of the recorded human dialogs and introduce an auxiliary loss based on predicting these drawings. To select language that is more likely to lead to successful task completion, we further fine-tune our Teller models to directly optimize the end-task goal using reinforcement learning. We incorporate state tracking into the scene2seq architecture through the use of an auxiliary loss. This formulation maintains the end-to-end training procedure and keeps test-time decoding exactly the same.
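The auxiliary prediction is specified in the next paragraph; first, as an aside, here is a condensed PyTorch sketch of the Drawer network described earlier (canvas feature vector plus a BiLSTM encoding of the Teller's last message, fed to a feed-forward network that emits per-clip-art add scores, attribute logits, and positions). All sizes, the exact packing of v action, and the loss weighting are assumptions rather than the authors' implementation.

```python
# Sketch of the neural Drawer: canvas features + BiLSTM message encoding
# -> feed-forward network -> per-clip-art action outputs.
import torch
import torch.nn as nn

class Drawer(nn.Module):
    def __init__(self, vocab_size, emb=128, hid=256, n_clip=58, n_attr=41):
        super().__init__()
        self.per_clip = n_attr + 2                    # attribute features + (x, y)
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        canvas_dim = n_clip * self.per_clip
        self.ff = nn.Sequential(
            nn.Linear(canvas_dim + 2 * hid, 512), nn.ReLU(),
            # one "add" score per clip art type plus its attribute/position block
            nn.Linear(512, n_clip * (1 + self.per_clip)))

    def forward(self, canvas, message):
        _, (h, _) = self.lstm(self.embed(message))    # h: (2, batch, hid)
        v_msg = torch.cat([h[0], h[1]], dim=-1)       # final states, both directions
        return self.ff(torch.cat([canvas, v_msg], dim=-1))

# Training combines a BCE loss on the add scores, softmax losses on the
# attribute logits, and an L2 loss on the predicted (x, y) positions.
```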
The only change is that, at each utterance separator token, the output from the LSTM is used to predict which clip art still need to be described. More precisely, the network must classify whether each clip art in the ground truth has been drawn already or not. The supervisory signal makes use of the fact that the CoDraw dataset records human drawer actions at each round of the conversation, not just at the end. The network outputs a score for each clip art ID, which is connected to a softmax loss for the clip art in the ground truth scene (the scores for absent clip arts do not contribute to the auxiliary loss). We find that adding such a supervisory signal reduces the Teller's propensity for repeating itself or omitting objects. The auxiliary loss helps the agent be more coherent throughout the dialog, but it is still trained to imitate human behavior rather than to complete the downstream task. By training the agents using reinforcement learning (RL), they can learn to use language that is more effective at achieving high-fidelity scene reconstructions. In this work we only train the Teller with RL, because the Teller has challenges maintaining a long-term strategy throughout a long dialog, whereas preliminary results showed that making local decisions is less detrimental for Drawers. The scene2seq Teller architecture remains unchanged, and each action from the agent is to output a word or one of two special tokens: a next-utterance token and a stop token. After each next-utterance token, our neural Drawer model is used to take an action in the scene, and the resulting change in scene similarity is used as a reward. However, this reward scheme alone has an issue: once all objects in the scene are described, any further messages will not result in a change to the scene and will have a reward of zero. As a result, there is no incentive to end the conversation. We address this by applying a penalty of 0.3 to the reward whenever the Drawer makes no changes to the scene. We train our model with REINFORCE. To evaluate our models, we pair our models with other models, as well as with a human. Human-Machine Pairs. We modified the interface used for data collection to allow human-machine pairs to complete the tasks. Each model plays one game with a human per scene in the test set, and we compare the scene reconstruction quality between different models and with human-human pairs. Script-based Drawer Evaluation. In addition to human evaluation, we would like to have automated evaluation protocols that can quickly estimate the quality of different models. Drawer models can be evaluated by pairing them with a Teller that replays recorded human conversation from a script (a recorded dialog from the dataset) and measuring scene similarity at the end of the dialog.
(Figure 3 caption: A rule-based nearest-neighbor Teller/Drawer pair "trained" on the same data outperforms humans for this scene according to the similarity metric, but the language used by the models doesn't always correspond in meaning to the actions taken. The three panels on the left show a scene from the test set and corresponding human/model reconstructions. The two panels on the right show the Teller message and Drawer action from two rounds of conversation by the machine agents.)
While this setup does not capture the full interactive nature of the task, the Drawer model still receives human descriptions of the scene and should be able to reconstruct it.
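The evaluation protocols continue below; as an aside, the following is a schematic sketch of the REINFORCE fine-tuning described earlier in this passage: each Teller utterance is followed by a Drawer action, the change in scene similarity is the reward, a fixed penalty is applied when the Drawer makes no change, and each utterance's log-probability is weighted by its return. The names teller.generate, drawer.act, empty_canvas, and metric are placeholders, and discounting and baselines are omitted.

```python
# Schematic REINFORCE objective for the Teller, with a fixed Drawer.
def reinforce_loss(teller, drawer, scene, metric, no_change_penalty=0.3):
    log_probs, rewards = [], []
    canvas, prev_score = empty_canvas(), 0.0
    for utterance, utt_log_prob in teller.generate(scene):   # one utterance at a time
        new_canvas = drawer.act(canvas, utterance)
        score = metric(scene, new_canvas)
        reward = score - prev_score                          # change in scene similarity
        if new_canvas == canvas:                             # Drawer did nothing
            reward -= no_change_penalty
        log_probs.append(utt_log_prob)                       # sum of token log-probs
        rewards.append(reward)
        canvas, prev_score = new_canvas, score
    # REINFORCE: weight each utterance's log-probability by its return.
    returns = [sum(rewards[i:]) for i in range(len(rewards))]
    return -sum(lp * R for lp, R in zip(log_probs, returns))
```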
Our modeling assumptions include not giving Drawer models the ability to ask clarifying questions, which further suggests that script-based evaluation can reasonably measure model quality. Machine-Machine Evaluation. Unlike Drawer models, Teller models cannot be evaluated using a "script" from the dataset. We instead consider an evaluation where a Teller model and Drawer model are paired, and their joint performance is evaluated using the scene similarity metric. Automatically evaluating agents, especially in the machine-machine paired setting, requires some care because a pair of agents can achieve a perfect score while communicating in a shared code that bears no resemblance to natural language. There are several ways such co-adaptation can develop. One is by overfitting to the training data to the extent that it's used as a codebook: we see this with the rule-based nearest-neighbor agents described in Section 4.1, where a Drawer-Teller pair "trained" on the same data outperforms humans on the CoDraw task. An examination of the language, however, reveals that only limited generalization has taken place (see Figure 3). Another way that agents can co-adapt is if they are trained jointly, for example using reinforcement learning. To limit these sources of co-adaptation, we propose a training protocol we call "crosstalk." In this setting, the training data is split in half, and the Teller and Drawer are trained separately on disjoint halves of the training data. When multiple agents are required during training (as with reinforcement learning), the joint training process is run separately for both halves of the training data, but evaluation pairs a Teller from the first partition with a Drawer from the second. This ensures to some extent that the models can succeed only if they have learned generalizable language, and not via a highly specialized codebook specific to model instances. Taking the crosstalk training protocol into account, the dataset split we use for all experiments is: 40% Teller training data (3,994 scenes/dialogs), 40% Drawer training data, 10% development data and 10% testing data. Results for our models are shown in TAB1. All numbers are scene similarities, averaged across scenes in the test set. Neural Drawer Performs the Best. In the script setting, our neural Drawer is able to outperform the rule-based nearest-neighbor baseline (3.39 vs. 0.94) and close most of the gap between the baseline (0.94) and human performance (4.17). Validity of Script-Based Drawer Evaluation. To test the validity of script-based Drawer evaluation, where a Drawer is paired with a Teller that recites the human script from the dataset corresponding to the test scenes, we include results from interactively pairing human Drawers with a Teller that recites the scripted messages. While average scene similarity is lower than when using live human Tellers (3.83 vs. 4.17), the scripts are sufficient to achieve over 91% of the effectiveness of live human Tellers. Benefits of Intermediate Supervision and Goal-Driven Training. Pairing our models with humans shows that the scene2seq Teller model trained with imitation learning is worse than the rule-based nearest-neighbor baseline (2.69 vs. 3.21), but that the addition of an auxiliary loss followed by fine-tuning with reinforcement learning allows it to outperform the baseline (3.65 vs. 3.21). However, there is still a gap compared to human Tellers (3.65 vs. 4.17).
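The error analysis continues below; for reference, here is a minimal sketch of the crosstalk protocol described above: training dialogs are split into disjoint halves, each half trains its own Teller and Drawer (including any joint RL fine-tuning), and evaluation pairs a Teller from one half with a Drawer from the other so that the agents cannot rely on a shared codebook. The functions train_agents and evaluate_pair are placeholders.

```python
# Sketch of the crosstalk training + evaluation protocol.
def crosstalk(train_dialogs, test_scenes, rng):
    rng.shuffle(train_dialogs)
    half_a = train_dialogs[: len(train_dialogs) // 2]
    half_b = train_dialogs[len(train_dialogs) // 2:]
    teller_a, drawer_a = train_agents(half_a)   # trained only on half A
    teller_b, drawer_b = train_agents(half_b)   # trained only on half B
    # Cross-pair at evaluation time so agents cannot share a codebook.
    return evaluate_pair(teller_a, drawer_b, test_scenes)
```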
Many participants in our human study noted that they received unclear instructions from the models they were paired with, or expressed frustration that their partners could not answer clarifying questions as a way of resolving such situations. Recall that our Teller models currently ignore any utterances from the Drawer. Correlation Between Fully-automated and Human-machine Evaluation. We also report the of paired evaluation for different Teller models and our best Drawer, showing that the relative rankings of the different Teller types match those we see when models are paired with humans. This shows that automated evaluation while following the crosstalk training protocol is a suitable automated proxy for human-evaluation. The errors made by Teller reflect two key challenges posed by the CoDraw task: reasoning about the context of the conversation and drawing, and planning ahead to fully and effectively communicate the information required. A common mistake the rule-based nearest-neighbor Teller makes is to reference objects that are not present in the current scene. Figure 3 shows an example (second panel from the right) where the Teller has copied a message referencing a "swing" that does not exist in the current scene. In a sample of 5 scenes from the test set, the rule-based nearest-neighbor describes a non-existent object 11 times, compared to just 1 time for the scene2seq Teller trained with imitation learning. The scene2seq Teller, on the other hand, frequently describes clip art pieces multiple times or forgets to mention some of them: in the same sample of scenes, it re-describes an object 10 times (vs. 2 for the baseline) and fails to mention 11 objects (vs. 2.) The addition of an auxiliary loss and RL fine-tuning reduces these classes of errors while avoiding frequent descriptions of irrelevant objects (0 references to non-existent objects, 3 instances of re-describing an object, and 4 objects omitted.)On the Drawer side, the most salient class of mistakes made by the neural network model is semantically inconsistent placement of multiple clip art pieces. Several instances of this can be seen in FIG0 in the Appendix D, where the Drawer places a hat in the air instead of on a person's head, or where the drawn clip art pieces overlap in a visually unnatural way. Qualitative examples of both human and model behavior are provided in Appendix D. In this paper, we introduce CoDraw: a collaborative task designed to facilitate learning of effective natural language communication in a grounded context. The task combines language, perception, and actions while permitting automated goal-driven evaluation both at the end and as a measure of intermediate progress. We introduce a dataset and models for this task, and propose a crosstalk training + evaluation protocol that is more generally applicable to studying emergent communication. The models we present in this paper show levels of task performance that are still far from what humans can achieve. Long-term planning and contextual reasoning as two key challenges for this task that our models only begin to address. We hope that the grounded, goal-driven communication setting that CoDraw is a testbed for can lead to future progress in building agents that can speak more naturally and better maintain coherency over a long dialog, while being grounded in perception and actions. A.1 INTERFACE Figure 4 shows the interface for the Teller, and Figure 5 shows the interface for the Drawer. 
Following previous works, Drawers are given 20 clip art objects selected randomly from the 58 clip art objects in the library, while ensuring that all objects required to reconstruct the scene are available.
(Figure 4: User interface for a Teller; the panel shows the Teller's instructions, e.g., "Your fellow Turker will ask you questions about your secret scene. Your objective is to help the fellow Turker recreate the scene. You typically describe the details of the image and/or answer their questions.", together with the "Use Chance" (peek) and "Finish HIT!" buttons.)
(Figure 5: User interface for a Drawer. The Drawer has an empty canvas and a randomly generated drawing palette of Mike, Jenny, and 18 other objects, chosen from a library of 58 clip arts. We ensure that, using the available objects, the Drawer can fully reproduce the scene. Using the library, the Drawer can draw on the canvas in a drag-and-drop fashion. The Drawer can also send messages using a given input box. However, the peek button is disabled; only the Teller can use it.)
We found that approximately 13.6% of human participants disconnect voluntarily in an early stage of the session. We paid participants who stayed in the conversation and had posted at least three messages. However, we exclude those incomplete sessions from the dataset, and only use the completed sessions. There are 616 unique participants represented in our collected data. Among these workers, the 5 most active have done 26.63% of all finished tasks (1,419, 1,358, 1,112, 1,110, and 1,068 tasks). Across all workers, the maximum, median, and minimum numbers of tasks finished by a worker are 1,419, 3, and 1, respectively. (Figure note: *Collected 9,993 sessions as of Apr 19, 2017; the histograms plot the number of sessions.) The CoDraw dataset consists of 9,993 dialogs consisting of a total of 138K utterances. Each dialog describes a distinct abstract scene. Messages. FIG3 shows the distribution of message lengths for both Drawers and Tellers. Drawer messages tend to be short (the median length is 1, which accounts for 62% of messages), but there does exist a heavy tail where the Drawer asks clarifying questions about the scene. Teller message lengths have a smoother distribution, with a median length of 16 tokens. The size of the vocabulary is 4,555: since conversations describe abstract scenes consisting of a limited number of clip art types, the vocabulary is relatively small compared to tasks involving real images. Rounds. FIG3 shows the distribution of the numbers of conversational rounds for dialog sessions. Most interactions are shorter than 20 rounds, with the median being 7. Durations. In FIG3 we see that the median session duration is 6 minutes. We had placed a 20-minute maximum limit on each session. Scores. FIG4 shows the distribution of scene similarity scores throughout the dataset. FIG5 shows the progress of scene similarity scores over the rounds of a conversation. An average conversation is done improving the scene similarity after about 5 rounds, but for longer conversations that continue to 23 rounds, there is still room for improvement.
Given a ground-truth scene C and a predicted scene Ĉ (where the presence of a clip art type c in the scene C is indicated by c ∈ C), scene similarity s is defined as in Equation 1, with the unary term
f(c) = w 0 − w 1 · 1[clip art piece c faces the wrong direction] − w 2 · 1[clip art piece c is Mike or Jenny and has the wrong facial expression] − w 3 · 1[clip art piece c is Mike or Jenny and has the wrong body pose] − w 4 · 1[clip art piece c has the wrong size],
and a pairwise term g(c i, c j) that penalizes errors in the relative positions of pieces c i and c j across the two scenes. Here x c and y c refer to the position of the clip art piece in the ground-truth scene, x̂ c and ŷ c refer to its position in the predicted scene, and W, H are the width and height of the canvas, respectively. We use parameters w = [5, 1, 0.5, 0.5, 1, 1, 1, 1], which provides a balance between the different components and ensures that scene similarities are constrained to be between 0 and 5. D QUALITATIVE EXAMPLES. Figure 9 shows some examples of scenes and dialogs from the CoDraw dataset. The behavior of our Drawer and Teller models on a few randomly-selected scenes is illustrated in FIG0, and 12.
(Figure 9 caption: Examples from the Collaborative Drawing (CoDraw) dataset, chosen at random from the test set. The images depict the Drawer's canvas after each round of conversation. From left to right, we show rounds one through four, then the last round, followed by the ground truth scene. The corresponding conversations between the Teller (T) and Drawer (D) are shown below the images. Note that there is no restriction on which of the two participants begins or ends the dialog.)
(FIG0 caption: A comparison of the descriptions generated by each of our Teller models for two randomly-sampled scenes from the test set; the panels contain sample Teller utterances such as "small hot air balloon top right" and "small girl, running, facing right, surprised, 1" from bottom, 1/2" from left".)
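As a worked illustration of the unary term f(c) defined above, the following sketch scores a single clip art piece with the first five weights of w; the pairwise term g and the overall normalization are left abstract, so this is a schematic reading of the metric rather than the authors' exact code, and the attribute names are illustrative.

```python
# Schematic unary term f(c) of the scene similarity metric.
W = [5.0, 1.0, 0.5, 0.5, 1.0]   # w_0 .. w_4 as given above

def unary_f(truth, pred):
    """Score one clip art piece: `truth` / `pred` describe it in the two scenes."""
    score = W[0]
    if pred.flip != truth.flip:                              # faces the wrong direction
        score -= W[1]
    if truth.is_person and pred.expression != truth.expression:
        score -= W[2]                                        # wrong facial expression
    if truth.is_person and pred.pose != truth.pose:
        score -= W[3]                                        # wrong body pose
    if pred.size != truth.size:                              # wrong size
        score -= W[4]
    return score
```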
We introduce a dataset, models, and training + evaluation protocols for a collaborative drawing task that allows studying goal-driven and perceptually + actionably grounded language generation and understanding.
793
scitldr
Presence of bias and confounding effects is inarguably one of the most critical challenges in machine learning applications that has led to pivotal debates in recent years. Such challenges range from spurious associations of confounding variables in medical studies to the bias of race in gender or face recognition systems. One solution is to enhance datasets and organize them such that they do not reflect biases, which is a cumbersome and intensive task. The alternative is to make use of available data and build models considering these biases. Traditional statistical methods apply straightforward techniques such as residualization or stratification to precomputed features to account for confounding variables. However, these techniques are not in general applicable to end-to-end deep learning methods. In this paper, we propose a method based on the adversarial training strategy to learn discriminative features unbiased and invariant to the confounder(s). This is enabled by incorporating a new adversarial loss function that encourages a vanished correlation between the bias and learned features. We apply our method to a synthetic, a medical diagnosis, and a gender classification (Gender Shades) dataset. Our results show that the features learned by our method not only result in superior prediction performance but also are uncorrelated with the bias or confounder variables. The code is available at http://blinded_for_review/. A central challenge in practically all machine learning applications is the consideration of confounding biases. Confounders are extraneous variables that distort the relationship between the input (independent) and output (dependent) variables and hence lead to erroneous results. In a variety of applications ranging from disease prediction to face recognition, where machine learning models are built to predict labels from images, demographic variables (such as age, sex, race) of the study may confound the training process if the distribution of image labels is skewed with respect to them. In this situation, the predictor may learn the influence of the confounder and bias present in the data instead of actual discriminative cues. It is a cumbersome task to account for all biases when curating large-scale datasets. An alternative approach is to account for the bias in the model. Traditionally, confounding variables are often controlled by statistical methods in either design or analytical stages. In the design stage, one can utilize randomization or matching of the confounding variables across different study groups. In the analytical stage, confounding can be controlled by standardization or stratification. Another common solution is to learn the influence of the confounding variables on the input (independent) variables by regression analysis. Then, the residuals derived from the optimal regression model are regarded as the confounder-free input to train the predictor. The regression analysis works reasonably well under the assumption that the input variables represent deterministic features that are comparable across a population, e.g., morphometric measurements extracted from medical images or engineered features extracted from face images. The method fails, however, when this assumption does not hold, such as for the pixel intensity values in images. Note that the raw intensities are only meaningful within a neighborhood and vary across images. Therefore, these regression approaches cannot be used in connection with deep learning methods, such as convolutional neural networks (CNNs), that are directly applied to images.
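As a concrete illustration of the residualization approach just described (and of why it presumes fixed, comparable input features), the following minimal NumPy sketch regresses each feature column on the confounders and keeps the residuals as the "confounder-free" input; it is a generic example, not the procedure of any particular cited study.

```python
# Residualization: remove the linear influence of confounders from features.
import numpy as np

def residualize(X, confounds):
    """X: (n_samples, n_features); confounds: (n_samples, n_confounds)."""
    C = np.column_stack([np.ones(len(confounds)), confounds])  # add intercept
    beta, *_ = np.linalg.lstsq(C, X, rcond=None)                # per-feature linear fit
    return X - C @ beta                                         # residuals = confounder-free input
```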
Therefore, these regression approaches cannot be used in connection with deep learning methods that are directly applied to images, such as convolutional neural networks (CNNs). Figure 1: Average face images across each shade category (first row), average saliency map of the trained baseline (second row), and BR-Net (third row), color-coded with the normalized saliency value for each pixel. BR-Net results in more stable patterns across all 6 shade categories. The last column shows the tSNE projection of the features learned by each method. Our method results in a better feature space invariant to the bias variable (shade), while the baseline shows a clear pattern affected by the bias. Average accuracy of per-shade gender classification over 5 runs of 5-fold cross-validation is shown on each average map. The models are pre-trained on ImageNet and fine-tuned on GS-PPB. BR-Net is not only able to close the accuracy gap for the darker shades but also regularizes the model to improve per-category accuracy. Removing confounding factors for CNNs is an open question we aim to address here. We propose a feature learning scheme to produce features that are predictive of class labels while being unbiased to confounding variables. The idea is inspired by the domain-adversarial training approaches with controllable invariance within the context of generative adversarial networks (GANs), but we argue that generic and widely used loss functions are not designed for controlling the invariance with respect to bias variables. Hence, we introduce an adversarial loss function that aims to quantify the statistical dependence between the learned features and bias variables with the correlation coefficient. This strategy improves over the commonly used cross-entropy or mean-squared error (MSE) loss, which only aims to predict the exact value of the bias variables, and thereby achieves stabler results within the context of adversarial training. Since our proposed model injects resilience towards the bias during training to produce confounder-invariant features, we refer to our approach as Bias-Resilient Neural Network (BR-Net). We evaluate BR-Net on three datasets to examine different aspects of the method and compare it with a wide range of baselines. First, we test on a synthetic dataset to outline how the features learned by our method are unbiased to controlled confounding variables. Then, we test it on a medical imaging application, i.e., predicting the human immunodeficiency virus (HIV) diagnosis directly from T1-weighted Magnetic Resonance Images (MRIs). As widely explored in the HIV literature, HIV disease accentuates brain aging, and if a predictor is learned without considering age as a confounder, the predictor may actually be learning the brain aging patterns rather than actual HIV markers. Lastly, we evaluate BR-Net for gender classification using the Gender Shades Pilot Parliaments Benchmark (GS-PPB) dataset. We use different backbones pre-trained on ImageNet and fine-tune them for predicting gender from face images. We show that the prediction of the vanilla models is dependent on the race of the subject (alternatively, we consider skin color quantified by the 'shade' variable) and shows poor results for darker faces, while BR-Net can successfully close the gap.
Our comparison with methods based on multi-task prediction (i.e., predicting gender and shade as two tasks) and a categorical GAN (i.e., predicting shade as a categorical variable in the adversarial component) shows that BR-Net is not only able to learn features impartial to the bias of race (verified by feature embedding and saliency visualization), but it also results in better performance in gender prediction (see Fig. 1). Fairness in Machine Learning: In recent years, developing fair machine learning models has been the center of many discussions, even in news outlets and media. It has been argued that the bias essentially comes from human or societal biases induced by the training datasets. Recent efforts in solving this problem have focused on building fairer and more diverse datasets. However, this approach is not always practical for large-scale datasets, especially in medical applications, where data is relatively scarce and expensive to generate. In this work, we propose to use existing sets of data but to build models mindful of biases by learning impartial features that are only predictive of the actual output variables. Domain-Adversarial Training: Adversarial training was first proposed for domain adaptation tasks by creating a component in the network that uses the learned features to predict which domain the data is coming from (a binary variable; source or target). Ever since, several other works have built on top of the same idea, exploring different loss functions, domain discriminator settings, or cycle-consistency. The focus of all these works was to close the domain gap, which is often encoded as a binary variable. To learn generic bias-resilient models, we argue that we need to go beyond this and learn features that are invariant to either discrete or continuous confounders. There have been different attempts in the literature at learning representations that are invariant to specific factors in the data. For instance, one line of work took an information obfuscation approach to obfuscate membership in the protected group during training, and another introduced a regularization-based method. Recently, several works proposed to use domain-adversarial training strategies for controllable invariant feature learning with respect to existing variables in the data. Some concurrent works have also used adversarial techniques for mitigating the effect of bias. These methods used similar adversarial loss functions as in domain adaptation, which aim at predicting exact values of the bias variables. For instance, one method used a binary cross-entropy for removing the effect of 'gender', and another used linear (and kernelized) least-squares predictors as the adversarial component. Our study shows that these strategies fail at creating resilience against biases that take continuous or ordinal values. Instead, we introduce a loss function based on the correlation coefficient to naturally alleviate the bias effects on the learned features. Distribution Matching: Some previous work attempted to learn distributionally robust techniques to avoid learning confounded effects from data. This can be done by matching the distributions of the data across different domains. However, distribution matching techniques only model data of a population as a whole and fall short when it is crucial to remove the association between the learned features and a specific bias or confounding variable for each single input data point.
In contrast, to close the gap with respect to the underlying bias in the data, our correlation-based analysis minimizes the bias-predictive power of the learned features for every individual data point, which by construction harmonizes the data distribution on the population level. Suppose we have an M-class classification problem, for which we have N pairs of training images and their corresponding target label(s): (X i, y i) for i = 1, ..., N. Assume that the study is confounded or biased by a set of k variables, denoted by a vector b ∈ R k. To train a deep neural network for classifying each image to its label(s) while not being biased by the confounders in the study, we propose our end-to-end architecture as in Fig. 2, similar to domain-adversarial training approaches. Given the input image X, we first apply a Feature Extraction (FE) network, resulting in a feature vector F. A Classifier (C) is built on top of this feature vector to predict the class label y for the input X, and it forces FE to learn discriminative features for the classification task. Now, to guarantee that these features are not biased to b, we build another network (denoted by BP) with a new loss function for predicting the bias variables from F. We propose to backpropagate this loss to the feature extraction module in an adversarial way. As a result, the feature extractor learns features that minimize the classification loss, while maximizing the bias predictor loss. Each network has its underlying trainable parameters, defined as θ fe for FE, θ c for C, and θ bp for BP. If the predicted probability that subject i belongs to class m is defined by ŷ im = C(FE(X i; θ fe); θ c), the classification loss can be characterized by a cross-entropy: L c = − Σ_i Σ_m y im log(ŷ im). Similarly, with b̂ i = BP(FE(X i; θ fe); θ bp), we can define the adversarial component of the loss function. Standard methods for designing this loss function suggest using a cross-entropy for binary/categorical variables or an ℓ2 (MSE) loss for continuous variables. However, we argue that in the context of bias control, the ultimate goal of adversarial training is to remove the statistical association with respect to the bias variables, as opposed to maximizing their prediction error. In fact, adversarial training based on MSE leads to the maximization of the ℓ2 distance between b̂ and b, which could be trivially achieved by uniformly shifting the magnitude of b̂, thereby potentially resulting in an ill-posed optimization and oscillation in the adversarial training. To address this issue, we define the following surrogate loss for predicting the bias confounders while quantifying the statistical dependence with respect to b: L bp = − Σ_κ corr²(b κ, b̂ κ), where corr²(·, ·) is the squared Pearson correlation between its inputs and b κ denotes the vector of the κ-th bias variable across all inputs. Through adversarial training, we aim to remove statistical dependence by encouraging a zero correlation between b κ and b̂ κ. Note that BP aims to maximize the squared correlation while FE minimizes it; since corr² is bounded in the range [0, 1], both the minimization and the maximization scheme are feasible. Having these loss functions defined, the overall objective of the network is then defined as E(θ fe, θ c, θ bp) = L c (θ fe, θ c) − λ L bp (θ fe, θ bp), where the hyperparameter λ controls the trade-off between the two objectives. This scheme is similar to GAN and domain-adversarial training, in which a min-max game is portrayed between two networks. In our case, FE extracts features that minimize the classification criterion, while fooling BP (i.e., making BP incapable of predicting the bias variables).
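To make the correlation-based adversarial objective concrete, the following is a minimal PyTorch-style sketch (not the authors' released code) of the squared-Pearson-correlation loss computed over a mini-batch and of the three-step alternating update used for training, which is described in more detail below. All names (feature_extractor, classifier, bias_predictor, lambda_tradeoff, the optimizers) are illustrative assumptions, and the sign convention follows the description above: BP is trained to maximize the squared correlation, while FE is trained to minimize it.

```python
import torch

def squared_pearson_corr(x, y, eps=1e-8):
    # x, y: 1-D tensors over a mini-batch; returns corr(x, y) ** 2 in [0, 1].
    xc, yc = x - x.mean(), y - y.mean()
    cov = (xc * yc).mean()
    # eps guards the singular case where one of the vectors is (almost) constant.
    corr = cov / (torch.sqrt((xc ** 2).mean()) * torch.sqrt((yc ** 2).mean()) + eps)
    return corr ** 2

def bias_correlation(b_hat, b):
    # Sum of squared correlations over the k bias variables (columns of b).
    return sum(squared_pearson_corr(b_hat[:, j], b[:, j]) for j in range(b.shape[1]))

def training_step(x, y, b, feature_extractor, classifier, bias_predictor,
                  opt_fe_c, opt_bp, opt_fe_adv, lambda_tradeoff, ce_loss):
    # Step 1: update FE and C with the classification loss L_c.
    feats = feature_extractor(x)
    loss_c = ce_loss(classifier(feats), y)
    opt_fe_c.zero_grad(); loss_c.backward(); opt_fe_c.step()

    # Step 2: with FE fixed, train BP to predict the bias (maximize corr^2,
    # i.e. minimize its negative).
    with torch.no_grad():
        feats = feature_extractor(x)
    loss_bp = -bias_correlation(bias_predictor(feats), b)
    opt_bp.zero_grad(); loss_bp.backward(); opt_bp.step()

    # Step 3: with BP fixed, update FE adversarially so that corr^2 vanishes.
    feats = feature_extractor(x)
    loss_adv = lambda_tradeoff * bias_correlation(bias_predictor(feats), b)
    opt_fe_adv.zero_grad(); loss_adv.backward(); opt_fe_adv.step()
    # Gradients accumulated in BP's parameters here are discarded: opt_bp is not
    # stepped, and its gradients are zeroed again at the start of the next step 2.
    return loss_c.item(), loss_adv.item()
```

Because the correlation is a population-level statistic, this loss is evaluated per mini-batch rather than per sample, matching the batch-level treatment described above.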
Hence, the saddle point for this optimization objective is obtained when the parameters θ f e minimize the classification loss while maximizing the loss of the bias prediction module. Simultaneously, θ c and θ bp minimize their respective network losses. In general, a zero-correlation or a zero-covariance only quantifies linear independence between variables but cannot infer non-linear relationships. However, we now theoretically show that, under certain assumptions on the adversarial training of BP, a zero-covariance would guarantee the mean independence between bias variables and features, a much stronger type of statistical independence than the linear type. A random variable B is said to be mean independent of F if and only if for all ξ with non-zero probability, where E[·] defines the expected value. In other words, the expected value of B is neither linearly nor non-linearly dependent on F, but the variance of B might. The following theorem then relates the mean independence between features F and bias variables B to the zero-covariance between B and the prediction ofB produced by the adversarial component, BP. Property 1: B is mean independent ofB ⇒ Cov(B,B) = 0. Property 2: B, F are mean independent ⇒ B is mean independent ofB = φ(F) for any mapping function φ. Proof. The forward direction ⇒ follows directly through Property 1 and 2. We focus the proof on the reverse direction. Now, construct a mapping functionB = φ(2 holds if and only ifB is a constant, i.e., by definition, B is mean independent of F. Remark. In practice we normalize the covariance by standard deviations of variables for optimization stability. In the unlikely singular case that BP outputs a constant prediction, we add a small perturbation in computing the standard deviation. This theorem echoes the validity of our adversarial training strategy: FE encourages a zerocorrelation between b κ andb κ, which enforces b κ to be mean independent of F (one cannot infer the expected value of b κ from F). In turn, assuming BP has the capacity to approximate any arbitrary mapping function, the mean independence between features and bias would correspond to a zero-correlation between b κ andb κ, otherwise BP would adversarially optimize for a mapping function that increases the correlation. Similar to the training of GANs, in each iteration, we first back-propagate the L c loss to update θ f e and θ c. With θ f e fixed, we then minimize the L bp loss to update θ bp. Finally, with θ bp fixed, we maximize the L bp loss to update θ f e. The last step can be considered as the bias effect removal component. Furthermore, in the present study, L bp depends on the correlation operation, which is a population-based operation, as opposed to individual-level error metrics such as cross-entropy or MSE losses. Therefore, we calculate the correlations over each training batch as a batch-level operation. Depending on the application, we can use different architectures for each of the three subnetworks. We use a 3D CNN for FE to extract features from 3D medical images and use VGG16 and ResNet50 backbones for GS-PPB. For C and BP, we use a two-layer fully connected network. In this section, we evaluate our method on three different scenarios. First, we run a synthetic experiment to verify the validity of our assumptions. Then, we apply BR-Net to predict diagnosis of HIV from brain MRIs confounded by the subjects' age. 
Finally, we test the model for predicting gender from face images and show how controlling for variables related to race (e.g., face color shade) can robustly enhance prediction performance. We compare BR-Net with several baselines, and evaluate how the features learned by our method are invariant to the bias or confounding variables. Baseline Methods. In line with the implementation of our approach, the baseline for all three experiments is a vanilla CNN, whose architecture is exactly the same as BR-net except that there is no bias prediction sub-network and hence the adversarial loss. We emphasize that BR-Net aims to remove the association between prediction and bias by encouraging vanished correlation, which is different from simply maximizing the prediction loss (w.r.t bias) as usually performed in many GAN settings. Therefore, the second comparison method is a BR-Net with the adversarial loss being the MSE, denoted by'BR-Net (w/ MSE).' For the Gender Shades PPB experiment, we further add two other baseline methods, one predicting both'gender' and'shade' in a multi-task setting , denoted by'Multi-Task'; and one replacing correlation loss function L bp with a cross-entropy loss as the'shade' variable has a ordinal but categorical value. The adversarial training then relies on maximizing the entropy of BP predictions as motivated in Categoral GAN models ('CatGAN') . These baselines show how the correlation loss plays an important role in delineating the bias and confounding effects. We generate a synthetic dataset comprised of two groups of data, each containing 512 images of resolution 32 × 32 pixels. Each image is generated by 4 Gaussians (see Fig. 3a), the magnitude of which is controlled by σ A and σ B. For each image from Group 1, we sample σ A and σ B from a uniform distribution U while we generate images of Group 2 with stronger intensities by sampling from U. Gaussian noise is added to the images with standard deviation 0.01. Now we assume the difference in σ A between the two groups is associated with the true discriminative cues that should be learned by a classifier, whereas σ B is a given confounder. In other words, an unbiased model should predict the group label purely based on the two diagonal Gaussians and not dependent on the two off-diagonal ones. To show that the BR-Net can in such models by controlling for σ B, we train it on the whole dataset of 1,024 images given their respective binary labels and confounder values σ B. For simplicity, we construct the FE Network with 3 stacks of 2 × 2 convolution/ReLU/max-pooling layers to produce 32 features. Both the BP and C networks have one hidden layer of dimension 16 with tanh as the non-linear activation function. After training, BR-Net achieves 89% training accuracy and BR-Net w/ MSE achieves 90%. Note that the theoretically maximum training accuracy is 90% due to the overlapping sampling range of σ A between the two groups. The baseline model, however, achieves 95% accuracy, indicating that the model additionally relies on the confounding effects σ B for predicting the group label, an undesired behavior. To further investigate the association between the learned features and σ B, we measure their squared distance correlation (dcor 2) (Székely et al., 2007) for the training samples in Group 1 (when there is no association between σ B and prediction, dcor=0 for either group). Distance correlation is a widely-used measure of dependence between two paired vectors of arbitrary dimensions. Fig. 
3b shows that our method successfully removes the statistical association w.r.t. σ B, as the distance correlation drops dramatically with training iterations. On the other hand, the baseline model without the BP component learns features that constantly yield high correlation. Note that the adversarial loss based on MSE yields unstable dcor² measures, potentially due to the ill-posed optimization of maximizing the ℓ2 distance. Finally, the above results are further supported by the 2D tSNE projection of the learned features as shown in Fig. 3c. The feature space learned by the baseline model forms a clear correlation with σ B, whereas our method results in a space with no apparent bias. This confirms that the proposed adversarial technique successfully removes the bias from the confounding variable. Our second task aims at predicting the diagnosis of HIV patients vs. control subjects (CTRL) based on brain MRIs. The study cohort includes 223 CTRLs and 122 HIV patients who are seropositive for the HIV infection with CD4 count > 100 cells/µL (average: 303.0). Since the HIV subjects are significantly older than the CTRLs (CTRL: 45 ± 17, HIV: 51 ± 8.3, p < .001) in this study, age becomes a potential confounder; prediction of diagnosis labels may be dependent on subjects' age instead of true HIV markers. The T1-weighted MRIs are all skull stripped, affinely registered to a common template, and resized into a 64 × 64 × 64 volume. Classification accuracy is measured with 5-fold cross-validation. For each run, the training folds are augmented by random shifting (within one-voxel distance), rotation (within one degree) in all 3 directions, and left-right flipping, based on the assumption that HIV infection affects the brain bilaterally. The data augmentation results in a balanced training set of 1024 CTRLs and 1024 HIVs. As the flipping removes left-right orientation, the ConvNet is built on half of the 3D volume containing one hemisphere. The feature extractor FE has 4 stacks of 2×2×2 3D convolution/ReLU/batch-normalization/max-pooling layers yielding 4096 intermediate features. Both BP and C have one hidden layer of dimension 128 with tanh as the activation function. For this experiment, as suggested in previous work, confounding effects can only be reliably estimated among healthy subjects. So, in practice we only perform the adversarial loss back-propagation step for the CTRL group. (In the corresponding figure, each point shows a subject in the CTRL cohort, color-coded by age.) Table 1 shows the diagnosis prediction accuracy of BR-Net in comparison with 3D CNN, BR-Net (w/ MSE), and Resid+SVM (note, to compare with the traditional residualization methods, we extract 298 brain regional measurements, residualize the confounders using a general linear model, and classify with a support vector machine). Our method (BR-Net) results in the most accurate prediction in terms of balanced accuracy (bAcc), area under the curve (AUC), and F1-score from the cross-validation. These results show that our method is able to learn discriminative features while controlling for confounders. In addition, we record the balanced accuracy, true positive rate, and true negative rate for each training iteration. As shown in Fig. 5, the baseline tends to predict most subjects as CTRLs (high true negative rate). This is potentially caused by the CTRL group having a wider age distribution, so an age-dependent predictor would bias the prediction towards CTRL. On the other hand, when controlling age as a confounder, BR-Net reliably results in balanced true positive and true negative rates.
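The squared distance correlation used above and in the following analyses to monitor the residual dependence between learned features and a confounder can be computed with the standard estimator of Székely et al. (2007). Below is a small NumPy sketch of that estimator; it is illustrative rather than the authors' evaluation script, and the toy feature/age arrays at the end are placeholders.

```python
import numpy as np

def distance_correlation_sq(x, y):
    """Squared distance correlation (dcor^2) between paired samples.

    x: (n, p) feature array; y: (n,) or (n, q) array (e.g., a confounder such as age).
    Returns a value in [0, 1]; 0 indicates no linear or non-linear dependence.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)

    def doubly_centered_distances(z):
        # Pairwise Euclidean distance matrix, centered by row means, column means,
        # and the grand mean.
        d = np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(-1))
        return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

    a, b = doubly_centered_distances(x), doubly_centered_distances(y)
    dcov2_xy = (a * b).mean()
    dcov2_xx = (a * a).mean()
    dcov2_yy = (b * b).mean()
    if dcov2_xx * dcov2_yy == 0.0:
        return 0.0
    return dcov2_xy / np.sqrt(dcov2_xx * dcov2_yy)

# Toy usage: dependence between 32-dimensional features and age for 100 control subjects.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 32))
age = rng.uniform(20, 80, size=100)
print(distance_correlation_sq(features, age))
```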
Similar to the previous experiment, we train the different methods on the entire dataset and plot the squared distance correlation between the learned features and the confounder for the CTRL cohort (Fig. 4). The figure shows that for BR-Net the distance correlation between the features and the confounding variable (age) decreases with the adversarial training. In contrast, the baseline 3D CNN consistently produces features that are highly correlated with the confounder, and BR-Net w/ MSE produces inconsistent and unreliable associations with respect to the confounder. The t-SNE projections of the learned feature spaces are visualized in Fig. 6. The feature space learned by the baseline model forms a clear association with age, as older subjects are concentrated in the top-left region of the space. This again suggests that predictions from the baseline may be dependent on age rather than true HIV markers. Our method, in contrast, results in a space with no apparent bias to age. The last experiment is on gender prediction from face images in the Gender Shades Pilot Parliaments Benchmark (GS-PPB) dataset. This dataset contains 1,253 facial images of 561 female and 692 male subjects. The face shade is quantified by the Fitzpatrick six-point labeling system and is categorized from type 1 (lighter) to type 6 (darker). This quantization is used by dermatologists for skin classification and for determining risk of skin cancer. To train our models on this dataset, we use VGG16 and ResNet50 backbones pre-trained on ImageNet. We fine-tune each model on the GS-PPB dataset to predict the gender of subjects based on their face images using fair 5-fold cross-validation. The ImageNet dataset used for pre-training the models has fewer cases of humans with darker faces, and hence the resulting models have an underlying bias to the shade. BR-Net treats the variable 'shade' as an ordinal and categorical bias variable. As discussed earlier, besides the vanilla VGG16 and ResNet50 models, we compare the results with a multi-task baseline, which predicts both 'gender' and 'shade' simultaneously, and a model that uses the entropy loss as the adversarial loss for the cross-entropy-based categorical prediction (proposed by CatGAN). Table 2 shows the results across five runs of 5-fold cross-validation (Table 2: Average results over five runs of 5-fold cross-validation on the GS-PPB dataset; best results in each column are typeset in bold). Fig. 7 plots the accuracy for each individual 'shade' category. As can be seen from the table and the figure, BR-Net outperforms other methods on average while producing similar accuracy across all 'shade' categories. Predictions made by other methods, however, are more dependent on the bias variable, showing inconsistent recognition capabilities for different 'shade' categories and failing significantly on darker faces. This bias is confirmed by the tSNE projection of the feature spaces learned by different methods (see Fig. 8). The features learned by the vanilla baseline or even the multi-task model show a clear dependency on the 'shade', while BR-Net results in a roughly uniform distribution of subjects. To gain more insight into the results, we visualize the saliency maps derived for the baseline and BR-Net. For this purpose, we use a technique similar to prior work to extract the pixels in the original image space highlighting the areas that are discriminative for the gender labels. Generating such saliency maps for all inputs, we visualize the average map for each individual 'shade' category (Fig. 1).
The value on each pixel corresponds to the attention from the network to that pixel within the classification process. Compared to the baseline, BR-Net focuses more on specific face regions and results in more stable patterns across all 'shade' categories. We proposed a method based on adversarial training strategies that encourages a vanished correlation to learn features for the prediction task while being unbiased to the confounding variables in the study. We evaluated our bias-resilient neural network (BR-Net) on a synthetic, a medical diagnosis, and a gender prediction dataset. In all experiments, BR-Net resulted in a feature embedding space that was agnostic to the bias in the data, while all other methods failed to do so. Based on our experiments, we can conclude that, besides the attempt to improve datasets and curate unbiased ones, it is crucial to build models that properly account for the bias in the data during training. Our bias-resilient model and some other recent works take a first step in this direction. This is crucial as machine learning models are increasingly entering everyday life and are being developed for critical medical applications. Failure to account for the underlying bias or confounding effects can lead to spurious associations and erroneous decisions. As a direction for future work, other strategies such as deep canonical correlation analysis can be explored to form the adversarial component.
We propose a method based on the adversarial training strategy to learn discriminative features unbiased and invariant to the confounder(s) by incorporating a loss function that encourages a vanished correlation between the bias and learned features.
794
scitldr
Existing neural question answering (QA) models are required to reason over and draw complicated inferences from a long context for most large-scale QA datasets. However, if we view QA as a combined retrieval and reasoning task, we can assume the existence of a minimal context which is necessary and sufficient to answer a given question. Recent work has shown that a sentence selector module that selects a shorter context and feeds it to the downstream QA model achieves performance comparable to a QA model trained on full context, while also being more interpretable. Recent work has also shown that most state-of-the-art QA models break when adversarially generated sentences are appended to the context. While humans are immune to such distractor sentences, QA models get easily misled into selecting answers from these sentences. We hypothesize that the sentence selector module can filter out extraneous context, thereby allowing the downstream QA model to focus and reason over the parts of the context that are relevant to the question. In this paper, we show that the sentence selector itself is susceptible to adversarial inputs. However, we demonstrate that a pipeline consisting of a sentence selector module followed by the QA model can be made more robust to adversarial attacks in comparison to a QA model trained on full context. Thus, we provide evidence towards a modular approach for question answering that is more robust and interpretable.
A modular approach consisting of a sentence selector module followed by the QA model can be made more robust to adversarial attacks in comparison to a QA model trained on full context.
795
scitldr
Multi-relational graph embedding, which aims at achieving effective representations with a reduced number of low-dimensional parameters, has been widely used in knowledge base completion. Although knowledge base data usually contains tree-like or cyclic structure, none of the existing approaches can embed these data into a compatible space that is in line with the structure. To overcome this problem, a novel framework, called Riemannian TransE, is proposed in this paper to embed the entities in a Riemannian manifold. Riemannian TransE models each relation as a move to a point and defines a specific novel distance dissimilarity for each relation, so that all the relations are naturally embedded in correspondence to the structure of the data. Experiments on several knowledge base completion tasks have shown that, based on an appropriate choice of manifold, Riemannian TransE achieves good performance even with a significantly reduced number of parameters. Multi-relational graphs, such as social networks and knowledge bases, have a variety of applications, and embedding methods for these graphs are particularly important for these applications. For instance, multi-relational graph embedding has been applied to social network analysis and knowledge base completion BID2. A multi-relational graph consists of entities V, a set R of relation types, and a collection of real data triples, where each triple (h, r, t) ∈ V × R × V represents some relation r ∈ R between a head entity h ∈ V and a tail entity t ∈ V. Embedding a multi-relational graph refers to a map from the entity and relation sets to some space. Mathematical operations in this space enable many tasks, including clustering of entities and completion, prediction, or denoising of triples. Indeed, completion tasks for knowledge bases attract considerable attention, because knowledge bases are known to be far from complete, as discussed in BID13. Multi-relational graph embedding can help their completion and improve the performance of applications that use the graph. This is the reason why much work focuses on multi-relational graph embedding. FIG0 shows an example of a multi-relational graph and a completion task. In multi-relational graph embedding, reducing the number of parameters is an important problem in the era of big data. Many parameters are needed with tensor-factorization-based methods, such as Bayesian clustered tensor factorization (BCTF), RESCAL, and the neural tensor network (NTN), where each relation has a dense matrix or tensor (O(D²) or more parameters, where D is the dimensionality of the space). Thus, TransE BID2 was proposed to overcome this problem by reducing the number of parameters. In TransE, each entity is mapped to a point in Euclidean space and each relation is no more than a vector addition (O(D) parameters), rather than a matrix operation. The successors to TransE, TransH and TransD BID11, also use only a small number of parameters. Some methods succeeded in reducing parameters by using diagonal matrices instead of dense matrices: e.g., DISTMULT, ComplEx, HolE (through the Fourier transform), and ANALOGY BID15. In these methods, all relations share one space for embedding, but each relation uses its own dissimilarity criterion. The success of these methods implies that one common space underlies the whole data, and each relation can be regarded as a dissimilarity criterion in the space.
Whereas these methods use distances or inner products in Euclidean space as dissimilarity criteria, recent work has shown that using non-Euclidean space can further reduce the number of parameters. One typical example of this is Poincaré Embedding for hierarchical data, where a hyperbolic space is used as a space for embedding. Here, the tree structure of hierarchical data has good compatibility with the exponential growth of hyperbolic space. Recall the circumference with radius R is given by 2π sinh R(≈ 2π exp R) in a hyperbolic plane. As a , Poincaré embedding achieved good graph completion accuracy, even in low dimensionality such as 5 or 10. On the other hand, spheres (circumference: 2π sin R) are compatible with cyclic structures. Since Poincaré embedding, several methods have been proposed for single-relational graph embedding in non-Euclidean space (e.g. BID8, ) and shown good . The success of these methods suggests that the appropriate choice of a manifold (i.e., space) can retain low dimensionality, although these methods are limited to single-relational graph embedding. According to the success of the TransE and its derivation and Poincaré embedding, it is reasonable in multi-relational graph embedding to assume the existence of a single structure compatible with a non-Euclidean manifold. For example, we can consider a single tree-like structure, which contains multiple hierarchical structures, where root selection gives multiple hierarchical structures from a single tree, which is compatible with hyperbolic spaces (See Figure 2). Therefore, embedding in a single shared non-Euclidean manifold with multiple dissimilarity criteria used in TransE is promising. Taking Poincaré embedding's success with low dimensionality into consideration, this method should work well (e.g., in graph completion tasks) with small number of parameters. This is the main idea of this paper. There are five entities and two kinds of relation (hypernym and synonym). Graph completion refers to answering questions such as "is mammal a hypernym of cannis?"Figure 2: Multiple hierarchical relations in a single tree. As this example shows, it is possible that multiple relations are given by multiple dissimilarity criteria in a single structure. We propose a novel method, called Riemannian TransE, for multi-relation graph embedding using a non-Euclidean manifold. In Riemannian TransE, the relations share one non-Euclidean space and the entities are mapped to the space, whereas each relation has its own dissimilarity criterion based on the distance in the space. Specifically, the dissimilarity criteria in Riemannian TransE are similar to those in TransE BID2 ) based on vector addition, which is known to be effective. Unfortunately, we cannot straightforwardly use TransE's dissimilarity criteria. This is due to non-existence of a parallel vector field (See Figure 4), which is implicitly but essentially used in "vector addition." However, the parallel condition is not essential in TransE's idea. For example, hierarchical bottom to top relations should be regarded as attraction to the top in the hierarchy, which is not parallel but has an attractive point. Moreover, parallel vector fields can be regarded as a vector field attracted to a point at infinity. Therefore, we replace parallel vector fields in TransE by vector fields with an attractive point that are well-defined in Riemannian manifolds, and as a , we obtain Riemannian TransE. 
Advantages of non-Euclidean spaces enable our Riemannian TransE to achieve good performance (e.g. in graph completion) with low-dimensional parameters. Riemannian TransE further exploits the advantages of TransE: that is, the method needs only O (D) parameters for each relation. Numerical experiments on graph completion tasks show that with an appropriate choice of manifold, our method can improve the performance of multi-relational graph embedding with few parameters. Let V and R denote the entities and relations in a multi-relational graph, and let T ⊂ V × R × V denote the triples in the graph. Multi-relational graph embedding refers to a pair of maps from V and R into M e and M r, respectively. Particularly, learning multi-relational graph embedding refers to obtaining an appropriate pair of maps v → p v (v ∈ V, p v ∈ M e) and r → w r (r ∈ R, w r ∈ M r) from the triples T. In this paper, we call p v the planet of entity v, w r the launcher of relation r, and M e and M r the planet manifold and launcher manifold, respectively. The quality of embedding is measured through a score function f: (M e × M e) × M r → R, which is designed by each method. Embedding is learned such that the value score function f (p h, p t ; w r) will be low when p h, p t; w r ∈ T and high when p h, p t; w r / ∈ T. For specific loss functions designed from the score function, see Subsection 2.3. We interpret the score function of multi-relational graph embedding as dissimilarity in a manifold, which we call a satellite manifold M s. We rewrite the score function f in multi-relational graph embedding using two maps H, T: M e × M r → M s and the dissimilarity measure function D: M s × M s → R as follows: DISPLAYFORM0 We call H and T the head and tail launch map, respectively, and call s H v;r and s T v;r the head and tail satellite of entity v (or of planet p v) with respect to relation r. The idea of this formulation is embedding in one shared space with multiple dissimilarity criteria. Specifically, each entity has only one planet and their satellite pairs give multiple dissimilarity criteria, each of which corresponds to a relation. In other words, all of the relations shares one space and the planets in it, and the differences among the relations are reduced to the difference of their launcher maps and the satellites given by them. We regard the planets as the embeddings of the entities, whereas dissimilarity between entities with respect to a relation is evaluated through their satellites which correspond to the relation. A simple example of this is TransE BID2, where all of the planets, satellites, and launchers share the same Euclidean space, i.e. M e = M s = M r = R D, the launch maps are given by vector addition as H (p; w) = p + w and T (p; w) = p, and the distance in a norm spacei.e. the norm of the difference-is used as a dissimilarity criterion i.e. D s DISPLAYFORM1 (the L1 or L2 norm is often used in practice). See Figure 5 (left). suggested, one can associate the idea of representing relations as vector additions with the fact that we can find a relation through a substraction operator in Word2Vec. That is, we can find relations such as p France − p Paris ≈ p Italy − p Rome in Word2Vec. As explained above, TransE is based on the distance between satellites, and each satellite is given by simple vector addition. Regardless of this simplicity, the performance of TransE has been exemplified in review papers (. 
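As a concrete illustration of the satellite formulation, the following is a short NumPy sketch of the TransE score written in the launch-map form described above: the head satellite is the planet moved by the launcher (p h + w r), the tail satellite is the planet itself, and the score is the L1 or L2 distance between the two satellites. The toy embeddings and names are illustrative only, not an existing implementation.

```python
import numpy as np

def transe_score(p_head, p_tail, w_rel, norm_ord=1):
    """TransE score in the launcher/satellite form (lower = more plausible triple).

    Head launch map H(p; w) = p + w, tail launch map T(p; w) = p,
    dissimilarity D(s, s') = ||s - s'|| with the L1 (norm_ord=1) or L2 (norm_ord=2) norm.
    """
    head_satellite = p_head + w_rel
    tail_satellite = p_tail
    return np.linalg.norm(head_satellite - tail_satellite, ord=norm_ord)

# Toy example with D = 4: a triple that nearly satisfies p_h + w_r ≈ p_t
# should score lower than a corrupted one.
rng = np.random.default_rng(0)
p_h, w_r = rng.normal(size=4), rng.normal(size=4)
p_t_true = p_h + w_r + 0.01 * rng.normal(size=4)
p_t_corrupt = rng.normal(size=4)
print(transe_score(p_h, p_t_true, w_r))     # small score
print(transe_score(p_h, p_t_corrupt, w_r))  # larger score
```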
Indeed, the addition operation in a linear space is essential in the launcher map, and hence TransE can easily be extended to a Lie group, which is a manifold equipped with an addition operator, as suggested in BID5 . Some methods, such as TransH , TransR BID14, and TransD BID11, also use a norm in linear space as a dissimilarity measure, integrating a linear map into a latent space. Another simple example is RESCAL , which uses the negative inner product as a dissimilarity measure. In RESCAL, the launcher of relation r is a matrix W ∈ M r = R D×D, the launch maps are given by a linear map, i.e. H (p; (W, w)) = W p and T (p; (W, w)) = p, and the dissimilarity measure is the negative inner product D s DISPLAYFORM2 Other methods are also based on the (negative) inner product dissimilarity: e.g., DISTMULT , ComplEx , HolE (through the Fourier transform) , and ANALOGY BID15. Table 1 shows score functions of these methods. Whereas some methods are based on a neural network (e.g., the neural tensor network and ConvE BID3 ), their score function consists of linear operations and element-wise nonlinear functions. Graph embedding using non-Euclidean space has attracted considerable attention, recently. Specifically, embedding methods using hyperbolic space have achieved outstanding (Nickel & Kiela, Table 1 : Score Functions. The launcher w r of r determines the dissimilarity criterion of r through satellites. In this table, the dimensionality is set so that the (real) dimensionality of the planets is D. † denotes conjugate transpose. F denotes the discrete Fourier Transform. The interpretation here of HolE is given by BID15 and BID10 BID8 ) (. With these methods, each node in the graph is mapped to a point in hyperbolic space and the dissimilarity is measured by a distance function in the space. Although these methods exploit the advantages of non-Euclidean space, specifically those of a negative curvature space, they focus on single-rather than multi-relational graph embedding. DISPLAYFORM0 By contrast, TransE has been extended to an embedding method in a Lie group-that is, a manifold with the structure of a group BID5 . As such, the regularization problem in TransE is avoided by using torus, which can be regarded as a Lie group. Although this extension to TransE deals with multi-relational embedding, it cannot be applied to all manifolds. This is because not all manifolds have the structure of a Lie group. Indeed, we cannot regard a hyperbolic space (if D = 1) or a sphere (if D = 1, 3) as a Lie group. We can simply design a loss function on the basis of the negative log likelihood of a Bernoulli model as follows: DISPLAYFORM0 DISPLAYFORM1 where Q is the set of the triples with its corrupted head and tail. That is, DISPLAYFORM2 where δ ∈ R ≥0 is the margin hyperparameter, and [·] + denotes the negative value clipping-i.e. for all x ∈ R, [x] +:= max(x, 0). We use this loss function throughout this paper. In this section, we formulate Riemannian TransE exploiting the advantages of TransE in nonEuclidean manifolds. Firstly, we give a brief introduction of Riemannian geometry. Secondly, we explain the difficulty in application of TransE in non-Euclidean manifolds. Lastly, we formulate Riemannian TransE. Let (M, g) be a Riemannian manifold with metric g. We denote the tangent and cotangent space of M on p by T p M and T * p M, respectively, and we denote the collection of all smooth vector DISPLAYFORM0 denote the LeviCivita connection, the unique metric-preserving torsion-free affine connection. 
A smooth curve γ: (−,) → M is a geodesic when ∇γγ = 0 on curve γ, whereγ is the differential of curve γ. Geodesics are generalizations of straight lines, in the sense that they are constant speed curves that are locally distance-minimizing. We define the exponential map Exp p, which moves point p ∈ M towards a vector by the magnitude of the vector. In this sense, the exponential map is regarded as an extension of vector addition in a Riemannian manifold. FIG2 shows an intuitive example of an exponential map on a sphere. Let DISPLAYFORM1 We define the logarithmic map Log p: M → T p M as the inverse of the exponential map. Note that the exponential map is not always bijective, and we have to limit the domain of the exponential and logarithmic map appropriately, while some manifolds, such as Euclidean and hyperbolic space, do not suffer from this problem. In TransE, a single vector w r determines the head and tail launch maps H, T as a transform: DISPLAYFORM0 In fact, these launch maps are given by vector addition. Note that this constitution of the launcher maps implicitly but essentially uses the fact that a vector is identified with a parallel vector field in Euclidean space. Specifically, a vector w determines a parallel vector field, denoted by W r here, which gives a tangent vector [W r] p ∈ T p R D on every point p ∈ R D, and each tangent vector determines the exponential map Exp p ([W r] p ) at p, which is used as a launch map in TransE. However, because there is no parallel vector field in non-zero curvature spaces, we cannot apply TransE straightforwardly in non-zero curvature spaces. Thus, extention of TransE in non-Euclidean space non-trivial. This is the difficulty in Riemannian Manifolds. As we have explained in Introduction, our idea is replacing parallel vector fields in TransE by vector fields attracted to a point. Specifically, we obtain the Riemannian TransE as an extension of TransE, replacing the launchers w r ∈ R D in TransE by pairs w r = (r, p r) ∈ R × M of a scalar value and point, indicating the length and destination of the satellites' move, respectively. We call p r the attraction point of relation r. In other words, we replace parallel vector field W r = w r in TransE by r DISPLAYFORM0 Note that, we use a fixed manifold M e = M for entity embedding and use direct product manifold M r = R × M for relation embedding. However, the extension still has arbitrariness. For instance, we could launch the tail satellite instead of the head satellite in TransE; in other words, the following launching map also gives us a score function equivalent to that of the original TransE: H (p; w) = p and T (p; w) = p − w (Figure 5 center). On the other hand, the score function depends on whether we move the head or tail satellites In these examples, the number |V| of entities is three and the number |R| of relations is two (red and orange), with triples (1, orange, 2) and (1, red, 3). Hence, these models learn that the orange head satellite of Entity 1 is close to the orange tail satellite of Entity 2 and the red head satellite of Entity 1 is close to the red tail satellite of Entity 3. In addition, the distance of the other pair of satellites should be long in the representation learned by each method. The figure on the left shows the original formulation of TransE, where the satellites are given by vector addition. In other words, the satellites are given by a move towards a point at infinity from the planet. 
The center figure shows an alternative formulation of TransE, which is equivalent to the original TransE. Here, the tail satellites are launched and the head satellites are fixed in the red relation. In Riemannian TransE in the figure on the right, the vector additions are replaced by a move towards a (finite) point. Figure 6: Relation of the sign for. If is positive (e.g. the orange relation), the relation runs from low (e.g. Entity 2 and 3) to high hierarchy (e.g. Entity 1), and vice versa (e.g. the red relation).in our case, where the attraction points are not at infinity. With hierarchical data, an entity at a higher hierarchy has many related entities in a lower hierarchy. Therefore, it is best to always launch the satellites of "children," the entities in a lower hierarchy, toward their parent. Hence, we move the head satellites when r > 0 and fix the tail satellites, and vice versa when r < 0; specifically, we move the head satellites by length λ = [r] + and move the tail satellites by length λ = [− r] +. Thus, bottom-to-top relation cases correspond to r > 0 (Figure 6, left), and top-to-bottom relation cases correspond to r < 0 (Figure 6, right). Another problem pertains to launching the satellites near the attraction point. If λ > ∆ (p r, p v), the naive rule causes overrun. In this case, we simply clip the move and set the satellite in the place of p r.We turn now to the score function of Riemannian TransE. The score function f: (M × M) × (R, M) → R in Riemannian TransE is given as follows: DISPLAYFORM1 where transform m λ,p: M → M denotes a move, defined as follows: DISPLAYFORM2 Here, note that m,p (q) is on the geodesic that passes through p and q. Figure 5 (right) shows the Riemannian TransE model. If M = R D and the attraction points are at infinity, the score function is equivalent to that of TransE (without the sphere constraint). Although the exponential map and logarithmic map in closed form are required to implement Riemannian TransE, we can obtain them when the manifold M is a sphere S D (positive curvature), Euclidean space R D (zero curvature), and hyperbolic space H D (negative curvature), or a direct product of them. These are practically sufficient. Also note that the computation costs of these maps are O(D), which is small enough. In typical cases, the number of entities is very large. Therefore, stochastic gradient methods are effective for optimization. Although we can directly apply stochastic gradient methods of Euclidean space or the natural gradient method , Riemannian gradient methods (e.g. ) work better for non-Euclidean embedding BID6. In this paper, we use stocastic Riemannian sub gradient methods with norm clipping (See Appendix). Note that in spheres or hyperbolic spaces, the computation costs of the gradient is O(D), which is as small as TransE. Evaluation Tasks We evaluated the performance of our method for a triple classification task on real knowledge base datasets. The triple classification task involved predict-ing whether a triple in the test data is correct. We label a triple positive when f (p h, p t ; ( r, p r)) > θ r, and vice versa. Here, θ r ∈ R ≥0 denotes the threshold for each relation r, which is determined by the accuracy of the validation set. We evaluated the accuracy of classification with the FB13 and WN11 datasets . Although we do not report the of link prediction tasks BID2 here because there are many evaluation criteria for the task, which makes it difficult to interpret the , we report the in Appendix. BID15. 
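To make the attraction-point formulation and the score above concrete, here is a brief NumPy sketch of Riemannian TransE restricted to the unit sphere: the head satellite is moved along the geodesic toward the relation's attraction point by length [ε r]+, the tail satellite by length [−ε r]+, the move is clipped so it never overshoots the attraction point, and the score is the geodesic distance between the two satellites. The sphere-only choice of manifold and all function names are illustrative assumptions, not the released implementation.

```python
import numpy as np

def sphere_dist(p, q):
    # Geodesic distance on the unit sphere.
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def sphere_log(p, q):
    # Tangent vector at p pointing toward q, with norm equal to dist(p, q).
    d = sphere_dist(p, q)
    v = q - np.dot(p, q) * p          # project q onto the tangent space at p
    n = np.linalg.norm(v)
    return np.zeros_like(p) if n < 1e-12 else d * v / n

def sphere_exp(p, v):
    # Exponential map: move p along the geodesic in direction v by length ||v||.
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def move_toward(q, attraction, length):
    # m_{lambda, p_r}(q): move q toward the attraction point by `length`,
    # clipped so that the satellite never passes the attraction point.
    d = sphere_dist(q, attraction)
    if length >= d:
        return attraction.copy()
    return sphere_exp(q, (length / d) * sphere_log(q, attraction))

def riemannian_transe_score(p_head, p_tail, eps_r, attraction_r):
    # eps_r > 0 moves the head satellite; eps_r < 0 moves the tail satellite.
    s_head = move_toward(p_head, attraction_r, max(eps_r, 0.0))
    s_tail = move_toward(p_tail, attraction_r, max(-eps_r, 0.0))
    return sphere_dist(s_head, s_tail)

# Toy example on S^2 (unit vectors in R^3).
normalize = lambda x: x / np.linalg.norm(x)
p_h = normalize(np.array([1.0, 0.2, 0.0]))
p_t = normalize(np.array([0.0, 1.0, 0.1]))
attr = normalize(np.array([0.0, 0.0, 1.0]))
print(riemannian_transe_score(p_h, p_t, eps_r=0.5, attraction_r=attr))
```

With Euclidean exponential and logarithmic maps and an attraction point pushed to infinity, the same score reduces to the TransE score sketched earlier.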
We used implementations of these methods on the basis of OpenKE http://openke.thunlp.org/static/index.html, and we used the evaluation scripts there. Note that we compensated for some missing constraints (for example, in TransR and TransD) and regularizers (for example, in DISTMULT and Analogy) in OpenKE. We also found that omitting the constraint of the entity planets onto the sphere in TransE gave much better in our setting, so we also provide these unconstrained (UnconstraintTransE). We determined the hyperparameters by following each paper. For details, see the Appendix. Results TAB1 shows the for the triple classification task in each dimensionality. In WN11, the sphere-based Riemannian TransEs achieved good accuracy. The accuracy did not degrade dramatically even with low dimensionality. On the other hand, in FB13, the hyperbolic-space-based Riemannian TransEs was more accurate than other methods. Moreover for each dimensionality, these with the proposed Riemannian TransE were at least comparable to those of the baselines. The accuracy of Euclidean-space-based methods (e.g. the original TransE, and Euclidean TransE) are between that of the sphere-based Riemannian TransEs and that of the hyperbolic-spacebased Riemannian TransEs in most cases. Note that these are compatible with the curvature of each space (i.e. Sphere: positive, Euclidean space: 0, a hyperbolic space: negative). Note that Euclidean methods are sometimes better than non-Euclidean methods. In Appendix, we also report the triple classification task in FB15k, where Euclidean TransE as well as baseline methods outperformed Riemannian TransE did not always outperform the baseline methods. In summary, positive curvature spaces were good in WN11 and negative curvature spaces were good in FB13, and zero curvature spaces were good in FB15k. These show that Riemannian TransE can attain good accuracy with small dimensionality provided that an appropriate manifold is selected. What determines the appropriate manifold? Spheres are compatible with cyclic structure and hyperbolic spaces are compatible with tree-like structure. One possible explanation is that WN11 has cyclic structure and FB13 has tree-like structure and the structure of FB15k is between them. However, further discussion remains future work. We proposed Riemannian TransE, a novel framework for multi-relational graph embedding, by extending TransE to a Riemannian TransE. Numerical experiments showed that Riemannian TransE outperforms baseline methods in low dimensionality, although its performance depends significantly on the choice of manifold. Hence, future research shall clarify which manifolds work well with particular kinds of data, and develop a methodology for choosing the appropriate manifold. This is important work not only for graph completion tasks but also for furthering our understanding of the global characteristics of a graph. In other words, observing which manifold is effective can help us to understand the global "behavior" of a graph. Other important work involves using "subspaces" in non-Euclidean space. Although the notion of a subspace in a non-Euclidean manifold is nontrivial, it may be that our method offers advantages over TransH and TransD, which exploit linear subspaces. 
In this paper, we use the following simple (projected) stochastic (Riemannian) (sub-) gradient methods DISPLAYFORM0 where DISPLAYFORM1 |R| denotes the parameter in the τ -th step, η ∈ R ≥0 is the DISPLAYFORM2 |R| is a stochastic gradient that satisfies DISPLAYFORM3 Recall that denotes index raising. Specifically, we use the following stochastic loss function based on the mini-batch method: DISPLAYFORM4 where the stochastic quintet set Q (τ) ⊂ Q is a set of uniform-distributed random variables on Q. DISPLAYFORM5. We obtain a stochastic gradient as follows: DISPLAYFORM6 where θ is a local coordinate representation of θ. We obtain∇ (τ) easily using an automatic differentiation framework. Algorithm 1 shows the learning algorithm for Riemannian TransE. In the experiments, we applied norm clipping such that the norm of a stochastic gradient is smaller than 1. end for return θ (τ) We give additional explanations of the reason why we cannot define a parallel vector field on a non-Euclidean manifold. Specifically we describe the relationship between parallel vector fields and parallel transform. We can define a parallel transform along a geodesic. This parallel transform maps a tangent vector in a tangent space to one in another. At one glance, it seems that we can define a parallel vector field using the parallel transform. However, a parallel transform is not determined only by the origin and destination but depends on the path i.e. the geodesic. Figure 7 shows an example on a sphere, where two ways to map a vector from a tangent space to another are shown and these two give different maps. As this figure shows we cannot obtain a well-defined vector on more than two points. Figure 7: Parallel transforms in a sphere S 2. This figure shows two ways to transform vector v ∈ T p S 2 to T r S 2. We denote the parallel transform from along segment pq by Π q p: T p S 2 → T q S 2. The red vector on T r S 2 denotes the vector obtained by the direct transform along segment pr. The blue vector T r S 2 denotes the vector obtained by the transform via q. As this figure shows we cannot obtain a well-defined vector on more than two points. We introduces some Riemannian manifolds useful in applications, and the formula of the exponential map and logarithmic map in these manifolds. The closed form of exponential map and logarithmic map enables implementation of Riemannian TransE in these manifolds. In the following, we omit symbols ∂ ∂x and d x of the basis in a tangent and cotangent space, respectively, for notation simplicity. Moreover, we give the composition of the exponential map and index raising and that of the index lowering and logarithmic map instead of the exponential map and logarithmic map themselves. This is because we use a cotangent vector rather than a tangent vector in a practical implementation and map from/to cotangent space is more useful (Recall that ∂ ∂θ L is not the coordinate of a tangent but the coordinate of a cotangent vector). In a D-dimensional Euclidean Space, the exponential map (with the index raising) Exp p •: DISPLAYFORM0 Apparently, the logarithmic map (with the index lowering) DISPLAYFORM1 C.2 SPHERE A D-dimensional (unit) sphere is given by point set S D:= p ∈ R (D+1) p p = 1, and the DISPLAYFORM2 between two points p ∈ S D and q ∈ S D is given as follows: DISPLAYFORM3 where arccos: [−1, 1] → [0, π] denote arc-cosine function. 
The exponential map (with the index raising) Exp p •: DISPLAYFORM4 where sinc denotes the cardinal sine function defined as follows: DISPLAYFORM5 The logarithmic map (with the index lowering) DISPLAYFORM6 Note that in optimization, we need the projection of the differentialδ = ∂ ∂θ L (θ)| θ=p of the loss function L to cotangent vector δ given by: DISPLAYFORM7 In this subsection, we introduces models of a hyperbolic space, which are mathematically equivalent to each other, but have practically different aspects. There are many models of a hyperbolic space. We introduce two of them: the hyperboloid model and Poincaré disk model. Some formulae here are also given and used in. Let G M denote diagonal matrix DISPLAYFORM0 In the hyperboloid model, a (canonical) hyperbolic space is given by point set DISPLAYFORM1 p, δ = 0, and the metric g p: DISPLAYFORM2 The distance ∆ (p, q) between two points p ∈ H D and q ∈ H D is given as follows: DISPLAYFORM3 where, arcosh: [1, ∞) → [0, ∞) denotes the area hyperbolic cosine function, i.e. the inverse fucntion of the hyperbolic cosine function. The exponential map (with the index raising) Exp p •: DISPLAYFORM4 where sinhc denotes the hyperbolic sine cardinal function defined as follows: DISPLAYFORM5 The logarithmic map (with the index lowering) DISPLAYFORM6 D.1.1 LINK PREDICTION TASK In the link prediction task, we predict the head or the tail entity given the relation type and the other entity. We evaluate the ranking of each correct test triple (h, r, t) in the corrupted triples. We corrupt each triple as follows. In our setting, either its head or tail is replaced by one of the possible head or entity, respectively. In addition, we applied "filtered" setting proposed by BID2, where the correct triples, that is, the triples T in the original multi-relational graph are excluded. Thus, the corrupted triples are given by (h, r, t) h ∈ V h ∧ (h, r, t) / ∈ T (head corruption) or {(h, r, t) | t ∈ V t ∧ (h, r, t) / ∈ T } (tail corruption). where V h r and V t r denote the possible heads and tails in relation r, given as follows: DISPLAYFORM7 As evaluation metrics, we use the following:Mean rank (MR) the mean rank of the correct test triples. The value of this metric is always equal to or greater than 1, and the lower, the better. Hits @ n (@n) the propotion of correct triples ranked in the top n predictions (n = 1, 3, 10). The value ranges from 0 to 1, and the higher, the better. Mean reciprocal rank (MRR) the mean of the reciprocal rank of the correct test triples. The value ranges from 0 to 1, and the higher, the better. In triple classification tasks, we predict whether a triple in the test data is correct or not. The classification is simply based on the score function i.e. we label a triple positive when f (p h, p t ; ( r, p r)) > θ r, and the other way around. Here, θ r ∈ R ≥0 denotes the threshold for each relation r, which is determined by the accuracy in the validation set. In link prediction tasks, we used WN18 and FB15k datasets, and WN11 and FB13 datasets. In triple classification tasks, we used WN11 and FB13 datasets, as well as FB15k. Note that WN18 and FB15k are originally used for link prediction tasks, whereas WN11 and FB13 are originally used for triple classification tasks. Also note that WN18 cannot be used for the triple classification task because WN18 does not have test negative data. TAB4 shows the number of the entities, relations, and triples in each dataset. 
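Closed-form exponential and logarithmic maps are what make the hyperbolic variant implementable; below is a compact NumPy sketch of the standard formulas for the hyperboloid model of H^D (Minkowski inner product, distance, Exp, and Log). The helper names are illustrative, and the snippet is a self-contained sketch of these well-known closed forms rather than production code.

```python
import numpy as np

def minkowski_inner(u, v):
    # Lorentzian inner product <u, v>_L = -u_0 v_0 + sum_{i>=1} u_i v_i.
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def hyperboloid_dist(p, q):
    # Geodesic distance on the hyperboloid: arcosh(-<p, q>_L).
    return np.arccosh(np.clip(-minkowski_inner(p, q), 1.0, None))

def hyperboloid_exp(p, v):
    # Exponential map at p applied to a tangent vector v (with <p, v>_L = 0).
    n = np.sqrt(max(minkowski_inner(v, v), 0.0))
    if n < 1e-12:
        return p
    return np.cosh(n) * p + np.sinh(n) * v / n

def hyperboloid_log(p, q):
    # Logarithmic map: tangent vector at p pointing toward q with norm dist(p, q).
    d = hyperboloid_dist(p, q)
    w = q + minkowski_inner(p, q) * p          # Lorentz-orthogonal component of q at p
    n = np.sqrt(max(minkowski_inner(w, w), 1e-24))
    return d * w / n

# Toy check on H^2: Exp and Log invert each other up to numerical error.
origin = np.array([1.0, 0.0, 0.0])            # a point on the hyperboloid
tangent = np.array([0.0, 0.3, -0.2])          # a tangent vector at the origin
q = hyperboloid_exp(origin, tangent)
print(hyperboloid_dist(origin, q))            # equals the Lorentzian norm of `tangent`
print(np.allclose(hyperboloid_log(origin, q), tangent, atol=1e-6))
```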
Manifolds in Riemannian TransE To evaluate the dependency of performance of Riemannian TransE, we compared Riemannian TransE using the following five kinds of manifolds: Euclidean space R D (Euclidean TransE), hyperbolic spaces H D (HyperbolicTransE), spheres S D (SphericalTransE), the direct product H 4 × H 4 × · · · × H 4 of hyperbolic spaces (PHyperbolicTransE), and the direct product S 4 × S 4 × · · · × S 4 of spheres (PSphericalTransE). We compared our method with baselines. As baselines, we used. We used implementations of the baselines in OpenKE http://openke.thunlp. org/static/index.html, a Python library of knowledge base embedding based on Tensorflow BID0, and moreover, we implemented some lacked constraints (for example, in TransR, TransD) and regularizers (for example, in DistMult, Analogy) in OpenKE. We also found that omitting the constraint of the entity planets onto sphere in TransE gives much better in our setting, and this is why we also show the without the constraint (UnconstraintTransE). We also implemented Riemannian TransEs as derivations of the base class of OpenKE.We set the dimensionality of the entity manifold as D = 8, 16, 32, 64, 128. Although we also have to determine the dimensionality of the projected space in TransR and TransD, we let them be equal to D. Due to limitation of the computational costs, we fixed the batch size in baselines and Riemannian TransEs such that that the training data are split to 100 batches. We also fixed the number of epochs to 1000. Note that in the first 100 epochs in Riemannian TransEs, we fixed the launchers. Also note that we applied norm clipping such that the norm of a stochastic gradient in the tangent space is smaller than 1. We did not use "bern" setting introduced in , where the ratio between head and tail corruption is not fixed to one to one; in other words, we replaced head and tail with equal probability. Other than the dimensionality and batch sizes, we used hyperparameters such as learning rate η and margin paremeter δ of baselines used in each paper. Note that some methods only reports link prediction tasks, and reports hyperparameters for WN18 and FB15k and do not reports ones for WN11 and FB13. Some methods do not mention settings of hyperparameters, and in these cases, we used the default parameters in OpenKE. In these cases, we used hyperparameters of WN18 and FB15k also for WN11 and FB13, respectively. Note that the parameters of TorusE is supposed to be used with very high dimensionality, and the hyperparameters are designed for high dimensionality settings. In Riemannian TransEs, we simply followed the hyperparameters in TransE.We used the Xavier initializer BID9 as an initializer. When we have to use the points on a sphere (in the original TransE and Spherical TransEs), we projected the points generated by the initialization onto the sphere. We found that choice of an initializer has significant effect on embedding performance, and the Xavier initializer achieves very good performance. We selected optimizers in baselines following each paper. Note that while using ADADELTA is also proposed in TransD, we used SGD in TransD. In Riemannian TransEs, we used we simply followed the hyperparameters in TransE. TAB5 shows the hyperparameters and optimization method for each method. BID2, it still contain many reversible triples, as noted by. By contrast, these are removed in WN11 and FB13. 
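The link-prediction scores reported for these settings reduce to simple functions of the filtered ranks of the correct test triples (Section D.1.1). A minimal sketch, assuming the ranks have already been computed under the filtered setting, is:

    import numpy as np

    def ranking_metrics(ranks, ns=(1, 3, 10)):
        # ranks: 1-based filtered ranks of the correct triples (head- and tail-side queries)
        ranks = np.asarray(ranks, dtype=float)
        out = {"MR": ranks.mean(),            # mean rank, >= 1, lower is better
               "MRR": (1.0 / ranks).mean()}   # mean reciprocal rank in [0, 1], higher is better
        for n in ns:
            out["Hits@%d" % n] = (ranks <= n).mean()  # proportion ranked in the top n
        return out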
Recall that projection-based methods such as TransH, TransR and TransD, and inner-product-based methods such as ComplEx and DISTMULT can exploit a linear subspace. When a dataset has apparent clusters inside which one relation is easily recovered from the others, we can allocate each cluster to a subspace and separate subspaces from one another. This separation is easily realized by setting some elements in the launchers to zero in these methods. Indeed, TransE without the sphere constraint attains good accuracies on WN11 and FB13. Differences between criteria are also interesting phenomena. Note that MRR and Hits@10 are forgiving of heavy mistakes. It is possible that inner-product-based methods earn good scores on trivial relations, but further intensive investigation is needed.
Multi-relational graph embedding with Riemannian manifolds and TransE-like loss function.
796
scitldr
Catastrophic forgetting in neural networks is one of the most well-known problems in continual learning. Previous attempts at addressing the problem focus on preventing important weights from changing. Such methods often require task boundaries to learn effectively and do not support backward transfer learning. In this paper, we propose a meta-learning algorithm which learns to reconstruct the gradients of old tasks w.r.t. the current parameters and combines these reconstructed gradients with the current gradient to enable continual learning and backward transfer learning from the current task to previous tasks. Experiments on standard continual learning benchmarks show that our algorithm can effectively prevent catastrophic forgetting and supports backward transfer learning. The ability to learn continually without forgetting previously learned skills is crucial to artificial general intelligence (AGI) BID3. Addressing catastrophic forgetting in artificial neural networks (ANNs) has been the top priority of continual learning research. Notable attempts at solving the problem include Elastic Weight Consolidation (EWC) by BID2, the follow-up work on Synaptic Intelligence (SI) by BID6, and Memory Aware Synapse (MAS) by BID0. These algorithms share the same core idea: preventing important parameters from deviating from their old (presumably better) values. In order to achieve that, EWC-like algorithms compute the importance of each parameter w.r.t. each task in the sequence and, for each old task, a regularization term is added to the loss of the new task to prevent that task from being catastrophically forgotten. The regularization term for task T(i) in EWC-like algorithms takes the following form: (λ^(i)/2) Σ_j ω^(i)_j (θ_j − θ^(i)*_j)^2, where λ^(i) controls the relative importance of task i to the current task, θ denotes the current parameters, θ^(i)* denotes the parameters found at the end of the training of T(i), and ω^(i)_j is the importance of parameter θ_j. This approach has two limitations: 1. The regularizer in Eqn. 1 prevents changes to important parameters regardless of the effect of these changes. Unless θ^(i)*_j is the optimal value for the j-th parameter, either increasing or decreasing its value will result in better performance on task i. Keeping θ close to θ^(i)* only prevents the network from catastrophically forgetting T(i) but cannot help the network to leverage the information from the current task T(k), k > i, to improve its performance on T(i) and other previous tasks. In other words, regularizers of the form in Eqn. 1 do not support backward transfer learning. 2. The number of old parameter and importance vectors, θ* and ω, grows linearly with the number of tasks, making EWC-like algorithms not scalable to a large number of tasks. BID5 proposed the online EWC algorithm which maintains only one copy of θ* and ω. The sizes of θ* and ω are equal to that of the network. Therefore, the memory requirement of online EWC is still considerably large for large networks. To address these limitations of EWC-like algorithms, we propose a meta learning algorithm which: 1. learns to approximate the gradient of a task w.r.t. the current parameters from the current parameters; and 2. combines the approximated gradients of old tasks w.r.t. the current parameters with the current task's gradient to form an update that improves the performance of the network on all tasks.
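Returning to the EWC-style penalty above for concreteness, a minimal sketch of the regularizer summed over previously seen tasks is given below. The dictionary layout, the 1/2 factor, and the variable names are assumptions made for illustration.

    import torch

    def ewc_penalty(model, old_params, importances, lambdas):
        # old_params[i]  : dict name -> parameters found at the end of task i (theta^(i)*)
        # importances[i] : dict name -> per-parameter importance for task i (omega^(i))
        # lambdas[i]     : scalar weight of task i relative to the current task
        penalty = 0.0
        for i in range(len(old_params)):
            for name, theta in model.named_parameters():
                diff = theta - old_params[i][name]
                penalty = penalty + (lambdas[i] / 2.0) * (importances[i][name] * diff ** 2).sum()
        return penalty   # added to the loss of the new task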
By combining the gradients, our algorithm exploits the similarity between the current task and previous tasks to enable backward transfer learning. As described in section 2.2 and 5.2, the size of a meta-network is typically orders of magnitude smaller than that of the main network and metanetworks for different tasks can be distilled into a single meta-network in an online manner. That significantly reduces the memory requirement of our method. In the next section, we introduce our learning to learn algorithm for continual learning. Experiments are presented in section 3. Conclusions and future work are located in section 4 and 5, respectively. Let us consider a continual learning problem with a learner f (x; θ): DISPLAYFORM0, DISPLAYFORM1, and T loss functions DISPLAYFORM2 To avoid clutter, we remove the input of the loss function L, and DISPLAYFORM3 In joint learning settings, data from all tasks is available to the learner. The parameter θ is updated using the average of gradients from all tasks: DISPLAYFORM4 where α is the learning rate, and DISPLAYFORM5 Generally, updating θ with δ will improve the performance of f on all T tasks. In continual learning settings, at task t+1, the learner cannot access to D (i), i = 1,..., t and cannot compute ∇ θ L (i), i = 1,..., t. The update at task t + 1 is computed from ∇ θ L (t+1)only. When θ is updated with this gradient, f's performance on T (t+1) will be improved while f's performance on tasks T (i), i = 1,..., t might be catastrophically damaged.1 The analysis here still applies to the case where mini-batches are used because the expectation of DISPLAYFORM6 To address this problem, we propose the following meta learning algorithm. During task i, we train meta-network DISPLAYFORM7 In subsequent tasks, h (i) is used to reconstruct the gradient of task i w.r.t. the current parameters without having to access to D (i). More concretely, h (i) learns to map the parameter to the corresponding gradient: DISPLAYFORM8 When the main network f is trained on a new task DISPLAYFORM9 Section 2.3 introduces several ways to combine predicted gradients with the current gradient to prevent catastrophic forgetting and enable backward transfer learning. For our method to work when optimizers other than SGD is used to train the main network, ∇ θ L (i) should be replaced with the update vector produced by the optimizer. Because a real world neural network typically contains tens of thousands to billions of parameters, the naive way of training h would require an astronomically large number of samples of θ and ∇ to cover a very high dimensional space. A fully connected meta-network h also need to be extremely large to receive a very high dimensional input and produce a very high dimensional output. To circumvent the problem, we follow the coordinate-wise approach proposed by BID1 where each coordinate is processed independently. h is a neural network that takes in a 1-dimensional input and produces 1-dimensional output DISPLAYFORM0 The procedure is applied to all coordinates in θ. In our experiments, hs are MLPs and are trained to minimize the Euclidean distance between h(θ j ; φ) and ∇ j for all θ j in θ. h could be modified to process more inputs such as the position of parameter in the network or the previous values of θ j. It is also possible for h to process a small set of related parameters simultaneously, e.g. parameters in the same filter of a CNN. However, we leave these variations for future work. 
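A minimal sketch of the coordinate-wise meta-network h and its training objective follows. The hidden width, the activation, and the mean-squared form of the Euclidean-distance objective are assumptions; the text above only specifies that h is an MLP mapping each parameter value to the corresponding gradient value.

    import torch
    import torch.nn as nn

    class CoordinateWiseMetaNet(nn.Module):
        # h(theta_j; phi): maps a single parameter value to a predicted gradient value
        def __init__(self, hidden=20):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

        def forward(self, theta_flat):            # theta_flat: (P,) vector of parameters
            return self.net(theta_flat.unsqueeze(-1)).squeeze(-1)

    def meta_net_loss(h, theta_flat, grad_flat):
        # train h to reproduce the task gradient at the current parameters
        return ((h(theta_flat) - grad_flat.detach()) ** 2).mean()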
Let us consider a pair of gradients ∇^(k) and ∇^(i). If ∇^(k)_j and ∇^(i)_j have different signs and α is small enough, then updating f with ∇^(k)_j will improve the network's performance on task k and damage its performance on task i. If they have the same sign, then the update will improve the performance on both tasks. That intuition leads to the following rule to create an update vector from a pair of gradients: DISPLAYFORM0 At task t+1, an update vector δ can be produced by applying the above rule to the pairs formed by ∇^(t+1) and all other gradients ∇̂^(i), i = 1,..., t. When t is large, that method usually results in a sparse update vector. In practice, we apply the rule to the pair (∇^(t+1), ∇̂^(1:t)), where ∇̂^(1:t) = (1/t) Σ_{i=1}^t ∇̂^(i). Updating the main network with δ will improve the performance on task t+1 and is likely to improve the performance on tasks 1,..., t. The update vector δ contains information that is common between task t+1 and previous tasks. Updating f with δ transfers information from the current task to previous tasks; δ is the medium for backward transfer learning in our algorithm. We tested our algorithm on the Permuted MNIST dataset (BID2). To better demonstrate the effect of backward transfer learning, we train each task for only 2000 iterations to prevent the main network from reaching its maximum performance. The result is shown in FIG1. The network in FIG1 suffers from the catastrophic forgetting problem: the performance on old tasks decreases rapidly when new tasks are trained. The network trained with our algorithm (FIG1) does not suffer from catastrophic forgetting: the performance on old tasks is maintained or even improved when new tasks are trained. The performance improvement on old tasks suggests that our algorithm has backward transfer learning capability. We also note the forward transfer learning phenomenon in FIG1: the starting accuracy of a later task is higher than that of former ones. In this paper, we present a meta learning algorithm for continual learning. Experiments on the Permuted MNIST dataset show that our algorithm is effective in preventing catastrophic forgetting and is capable of supporting backward transfer learning. To make our algorithm work without task boundaries, we need to detect boundaries automatically. The simplest way to detect task boundaries is to look at the loss of the main network. That way, however, does not work well when different tasks use different loss functions or have different input and output scales. We propose to detect task boundaries using the loss of the meta-networks. Different tasks
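The exact combination rule referenced above is given by an equation not reproduced in this text. The sketch below is one plausible reading of the sign intuition (keep a coordinate when the current gradient and the averaged reconstructed old-task gradient agree in sign, zero it otherwise); it is not claimed to be the authors' rule.

    import torch

    def combine_gradients(grad_new, grad_old_avg):
        # grad_new     : gradient of the current task t+1
        # grad_old_avg : average of the reconstructed gradients of tasks 1..t
        agree = torch.sign(grad_new) == torch.sign(grad_old_avg)
        return torch.where(agree, grad_new + grad_old_avg, torch.zeros_like(grad_new))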
We propose a meta learning algorithm for continual learning which can effectively prevent the catastrophic forgetting problem and support backward transfer learning.
797
scitldr
We give a formal procedure for computing preimages of convolutional network outputs using the dual basis defined from the set of hyperplanes associated with the layers of the network. We point out the special symmetry associated with arrangements of hyperplanes of convolutional networks that take the form of regular multidimensional polyhedral cones. We discuss the efficiency of a large number of layers of nested cones that arise from incremental, small-size convolutions in order to give a good compromise between efficient contraction of data to low dimensions and shaping of preimage manifolds. We demonstrate how a specific network flattens a non linear input manifold to an affine output manifold and discuss its relevance to understanding classification properties of deep networks. Deep convolutional networks for classification map input data domains to output domains that ideally correspond to various classes. The ability of deep networks to construct various mappings has been the subject of several studies over the years (1; 3; 10) and has in general resulted in various estimates of capacity given a network structure. The actual mappings that are learnt by training a specific network, however, often raise a set of questions, such as: why are increasingly deeper networks advantageous (13; 14)? What are the mechanisms responsible for the successful generalisation properties of deep networks? Also, the basic question why deep learning over large datasets is so much more effective than earlier machine learning approaches is still essentially open, BID6. These questions are not in general answered by studies of capacity. A more direct approach based on actual trained networks and the mappings they are efficiently able to produce seems needed in order to answer these questions. It seems ever more likely, e.g., that the ability of deep networks to generalize is connected with some sort of restriction of the mappings that they theoretically can produce, and that these mappings are ideally adapted to the problem for which deep learning has proven successful. Due to the complexity of deep networks, the actual computation of how input domains are mapped to output classifiers has been considered prohibitively difficult. From general considerations of networks with rectifier (ReLU) non linearities we know that these functions must be piecewise linear BID9, but the relation between network parameters such as convolutional filter weights and fully connected layer parameters and the actual functions remains largely obscure. In general, work has therefore been concentrated on empirical studies of actual trained networks (6; 8; 9). Recently, however, there have been attempts to understand the relation between networks and their mapping properties from a more general and theoretical point of view. This has included specific procedures for generating preimages of network outputs BID3 and more systematic studies of the nature of piecewise linear functions and mappings involved in deep networks (2; 11; 15). In this work we will make the assertion that understanding the geometry of deep networks and the manifolds of data they process is an effective way to understand the comparative success of deep networks. We will consider convolutional networks with ReLU non linearities. These can be completely characterised by the corresponding hyperplanes associated with individual convolutional kernels.
We will demonstrate that the individual arrangement of hyperplanes inside a layer and the relative arrangement between layers is crucial to the understanding the success of various deep network structures and how they map data from input domains to output classifiers. We will consider only the convolutional part of a deep network with a single channel. We will assume no subsampling or max pooling. This will allow us to get a clear understanding of the role of the convolutional part. A more complete analysis involving multiple channels and fully connected layers is possible but more complex and will be left to future work. The focus of our study is to analyse how domains of input data are mapped through a deep network. A complete understanding of this mapping and its inverse or preimage will give a detailed description of the workings of the network. Since we are not considering the final fully connected layers we will demonstrate how to compute in detail the structure of input data manifold that can be mapped to a specified reduced dimensionality affine manifold in the activity space of the final convolutional output layer. This flattening of input data is often considered as a necessary preprocessing step for efficient classification. The understanding of mappings between layers will be based on the specific understanding of how to compute preimages for networks activities. We will recapitulate and extend the work in based on the construction of a dual basis from an arrangement of hyperplanes. By specialising to convolutional networks we will demonstrate that the arrangement of hyperplanes associated with a specific layer can be effectively described by a regular multidimensional polyhedral cone oriented in the identity direction in the input space of the layer. Cones associated with successive layers are then in general partly nested inside their predecessor. This leads to efficient contraction and shaping of the input domain data manifold. In general however contraction and shaping are in conflict in the sense that efficient contraction implies less efficient shaping. We will argue that this' conflict is resolved by extending the number of layers of the network with small incremental updates of filters at each layer. The main contribution of the paper is the exploitation of the properties of nested cones in order to explain how non linear manifolds can be shaped and contracted in order to comply with the distribution of actual class manifolds and to enable efficient preprocessing for the final classifier stages of the network. We will specifically demonstrate the capability of the convolutional part of the network to flatten non linear input manifolds which has previously been suggested as an important preprocessing step in object recognition, (5; 12) Transformations between layers in a network with ReLU as nonlinear elements can be written as DISPLAYFORM0 Where [] + denotes the ReLU function max(0, x i). applied component wise to the elements of the input vector x which will be confined to the positive orthant of the d-dimensional Euclidean input space. It divides the components of the output vector y into two classes depending on the location of the input x: DISPLAYFORM1 In order to analyse the way domains are mapped through the network we will be interested in the set of inputs x that can generate a specific output y. DISPLAYFORM2 This set, known as the preimage of y can be empty, contain a unique element x or consist of a whole domain of the input space. 
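A small sketch of the layer map and of the two index sets that define the preimage constraints (positive components select translated hyperplanes, zero components select negative half-spaces); the names are illustrative.

    import numpy as np

    def layer_forward(W, b, x):
        # y = [Wx + b]_+ for one ReLU layer
        return np.maximum(W @ x + b, 0.0)

    def preimage_index_sets(y):
        # positive components -> translated-hyperplane constraints
        # zero components     -> negative half-space constraints
        pos = np.where(y > 0)[0]
        zero = np.where(y == 0)[0]
        return pos, zero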
This last case is quite obvious by considering the ReLU nonlinearity that maps whole half spaces of the input domain to 0 components of the output y. The preimage will depend on the location of the input relative to the arrangement of the hyperplanes defined by the affine part of the mapping: DISPLAYFORM3 These hyperplanes divides the input space into a maximum of 2 d number of different cells with the maximum attained if all hyperplanes cut through the input space which we take as the non negative orthant of the d-dimensional Euclidean space R d +. Understanding the arrangement of these hyperplanes in general and especially in the case of convolutional mappings will be central to our understanding of how input domains are contracted and collapsed through the network. The preimage problem can be treated geometrically using these hyperplanes as well as the constraint input domains defined by these. For a given output y we can denote the components where y j > 0 as y j1, y j1... y jq and the complementary index set where y i = 0 as i 1, i 1... i p With each positive component of y we can associate a hyperplane: DISPLAYFORM4 which is just the hyperplane Π j translated with the output y j For the 0-components of y we can define the half spaces DISPLAYFORM5 I.e the half space cut out by the negative side of the plane Π i. These planes and half spaces together with the general input domain constraint of being inside R + d define the preimage constraints given the output y. If we define the affine intersection subspace: DISPLAYFORM6 and the intersection of half spaces: DISPLAYFORM7 the preimage of y can be defined as: DISPLAYFORM8 The constraint sets and the preimage set is illustrated in FIG0 for the case of d = 3 and various outputs y with different number of 0-components. For fully connected networks, computing the preimage set amounts to finding the intersection of an affine subspace with a polytope in d− dimensional space. This problem is known to be exponential in d and therefore intractable. However, we will see that this situation is changed substantially when we consider convolutional instead of fully connected networks. In order to get more insight into the nature of preimages we will devise a general method of computing that highlights the nature of the arrangement of hyperplanes. The set of hyperplanes Π i, i = 1... d will be assumed to be in general position, i.e. no two planes are parallel. The intersection of all hyperplanes excluding plane i: DISPLAYFORM0 is then a one-dimensional affine subspace S i that is contained in all hyperplanes Π j excluding j = i. For all i we can define vectors e i in R d parallel to S i. The general position of the hyperplanes then guarantees that the set e i is complete in R d. By translating all vectors e i to the point in R d which is the mutual intersection of all planes Π i DISPLAYFORM1 they can therefore be used as a basis that spans R d. This construction also has the property that the intersection of the subset of hyperplanes: DISPLAYFORM2 is spanned by the complementary dual basis set DISPLAYFORM3 The dual basis can now be used to express the solution to the preimage problem. The affine intersection subspace P * associated with the positive components j 1 j 2... j q of the output y is spanned by the complementary vectors associated with the negative components i 1 i 2... i p. These indices also define the hyperplanes Π 1, Π 2... Π p that constrain the preimage to lie in the intersections of half spaces associated with the negative sides. 
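A direct numerical realization of this dual basis construction is sketched below: e_i is taken as the null-space direction of the matrix formed by all rows except w_i, with its sign fixed so that e_i points towards the negative side of plane i, the convention adopted in the next paragraph. Any rescaling of e_i works equally well; this is a sketch, not an optimized routine.

    import numpy as np

    def dual_basis(W):
        # rows w_i of W define hyperplanes w_i^T x + b = 0 in general position;
        # e_i spans the 1-d intersection of all planes except plane i
        d = W.shape[0]
        E = np.zeros((d, d))
        for i in range(d):
            A = np.delete(W, i, axis=0)        # all rows except w_i
            _, _, Vt = np.linalg.svd(A)
            e = Vt[-1]                         # null-space direction: orthogonal to all w_j, j != i
            if W[i] @ e > 0:                   # orient e_i towards the negative side of plane i
                e = -e
            E[i] = e
        return E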
We now define the positive direction of the vector e i as that associated with the negative side of the plane Π i. If we consider the intersection of the subspace P * and the subspace generated by the intersections of the hyperplanes Π i associated with the negative components of y we get: DISPLAYFORM4 Due to complementarity of the positive and negative indices, this is a unique element x * ∈ R d (marked "output" in figure 1 which lies in the affine subspace of the positive output components P * as well as on the intersection of the boundary hyperplanes Π i that make up the half space intersection constraint X − for the preimage. if we take the subset of the dual basis vectors with e i1, e i2 . . . e ip and move them to this intersection element, they will span the part of the negative constraint region X − associated with the preimage. I.e. the preimage of the output y is given by: DISPLAYFORM5 We will now specialise to the standard case of convolutional networks. In order to emphasize the basic role of geometric properties we will consider only a single channel with no subsampling. Most of what we state will generalize to the more general case of multiple channels with different convolutional kernels but needs a more careful analysis we will exploit the fact that convolutional matrices are in most respects asymptotically equivalent to those of circulant matrices where each new row is a one element cyclic shift of the previous. For any convolution matrix we will consider the corresponding circulant matrix that appends rows at the end to make it square and circulant. Especially when the support of the convolution is small relative to the dimension d, typically the order of 10 in relation to 1000, this approximation will be negligible. Except for special cases the corresponding circulant will be full rank, which means that properties about dual basis etc. derived previously will apply also here. As is standard we will assume that the bias b is the same for all applications of the convolution kernels. The first thing to note about hyperplanes associated with circulant matrices is that they all intersect on the identity line going through the origin and the point (1, 1, . . . 1). Denote the circulant matrix as C with elements c i,j. The circulant property implies c i+1, DISPLAYFORM0. Each row is shifted one step cyclically relative to the previous. For the hyperplane corresponding to row i we have: DISPLAYFORM1 It is easy to see that the circulant property implies that the sum of all elements along a row is the same for all rows. Let the sum of the row be a. We then get: x j = −b/a for j = 1... d as a solution for this system of equations which is a point on the identity line in R d.The arrangement of the set of hyperplanes: DISPLAYFORM2 with w T i the i:th row of the circulant augmented convolutional matrix W, will be highly regular. Consider a cyclic permutation P x of the components of the input x described by the single shift matrix P i.e x i is mapped to x i+1 for i = 1... d − 1 and x d is mapped to x 1. We then get: DISPLAYFORM3 which states that points on the hyperplane associated with weights w i are mapped to hyperplane associated with weights w i+1. The hyperplanes associated with the weights w i i = 1... d therefore form a regular multidimensional polyhedral cone in R d around the identity line, with the apex located at x T = (−b/a, −b/a . . . − b/a) controlled by the bias b and the sum of filter weights a. 
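The claim that all hyperplanes of a circulant (convolutional) layer meet on the identity line can be checked numerically; the kernel and bias below are arbitrary illustrative values.

    import numpy as np

    kernel = np.array([0.6, 0.3, 0.0, 0.0, 0.1])                      # arbitrary convolution kernel
    W = np.stack([np.roll(kernel, i) for i in range(len(kernel))])    # circulant: each row a cyclic shift
    b = -0.5
    a = kernel.sum()                                                  # common row sum
    apex = np.full(len(kernel), -b / a)                               # the point (-b/a, ..., -b/a)
    print(np.allclose(W @ apex + b, 0.0))                             # True: every plane passes through the apex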
Geometrically, the cone is determined by the apex location, the angle of the planes to the central identity line and its rotation in d-dimensional space. Apex location and angle are two parameters which leaves d − 2 parameters for the multidimensional rotation in R d. This maximum degree of freedom is however attained only for unrestricted circulant transformations. The finite support of the convolution weights in CNN:s will heavily restrict rotations of the cone. The implications of this will be discussed later. Any transformation between two layers in a convolutional network can now be considered as a mapping between two regular multidimensional polyhedral cones that are symmetric around the identity line in R d. The coordinate planes of the input space R d + can be modelled as such a cone as well as the output space given by the convolution. The strong regularity of these cones will of course impose strong regularities on the geometric description of the mapping between layers. Just as in the general case, this transformation will be broken down to transformations between intersection subspaces of the two cones. In order to get an idea of this we will start with a simple multi layer network with two dimensional input and output and a circulant transformation: FIG1 illustrates the mapping of data from the input space (x 1, x 2) to the output space (y 1, y 2) for two networks with 3 and 6 layers respectively. The dashed lines represent successive preimages of data that maps to a specific location at a layer. By connecting them we get domains of input data mapped to the same output at the final layer, i.e they are contraction flows depicting how data is moved through the network. Note that in both layers the major part of the input domain is mapped to output. This is illustrated for the first trivial bias only layer with a = 1, b = 0. The domain of the input that is mapped to output domain is just quite trivial planar manifolds. DISPLAYFORM0 The second network with more varied weights illustrates how input domain manifolds with more structure can be created. It also demonstrates the importance of the concept of "nested cones" and how this affects the input data manifolds. The red lines represent data that is associated with layer cones that are completely nested inside its predecessors, while the black lines represent data where the succeeding cone has a wider angle than its predecessor. When this happens, the hyperplanes associated with the output cone will intersect the hyperplanes of the input cone and input data beyond this intersection is just transformed linearly. Since all data in figure 2 is remapped to the input space this has the effect that data is not transformed at all. This has no effect at all at the shaping of the input manifold. One could say that these layers are "wasted" beyond the location of the intersection as far as network properties are concerned since they neither contribute to the shaping or the contraction of input data manifolds. The effect of this on the input manifold can be seen as a less diverse variation of its shape (black) compared to the previous part associated with the completely nested part of the layer cones. In higher dimensions the effects of nested vs. partially nested cones appear in the same way but more elaborate. In addition to the 2d case we also have to consider rotations of the cone, which as was pointed out earlier, has d − 2 degrees of freedom for cones in d dimensional space. 
The effects of contraction of data from higher to lower dimensions also become more intricate as the number of different subspace dimensionalities increases. Most of these effects can be illustrated with 3 dimensional input and output spaces. For d = 3 the generic circulant matrix can be used to define a layer transformation: DISPLAYFORM1 The transformation properties of this network are most easily illustrated if we start with the pure bias case with transformation W = I, i.e a = 1, b = 0, c = 0. A specific element in input space is mapped according to its position relative to the hyperplanes. If we use the dual basis to define the coordinates of the output data, the mapping for the input element will be the same in input cells with the same relation to all hyperplanes. In d dimensions, the hyperplanes divide the input space into 2 d cells where elements are mapped to a specific associated intersection subspace in the output domain. In FIG2, the grey shaded boxes indicate two cells with different numbers of negative constraints 1 and 2 respectively. The content of the upper one with one negative constraint including all its bounding faces and their intersections is mapped to a specific 2d hyperplane in the output domain while the content of the lower one with two negative constraints is mapped to the 1d intersection of two hyperplanes. This illustrates the most important property of the nesting of the cones associated with the input and output layer: For a range of transformations in the vicinity of the identity mapping, the input space, properly normalised in range, is divided into cells where the elements of the cells including their bounding faces an their intersections are mapped to output intersection subspaces with equal or lower dimension. This means that the content of the cell is irreversibly contracted to lower dimensions. and the components of the dual bases used to span these. Note that these are affected by changing the angle of the output cone. It introduces a limit of the nesting beyond which the mapping properties of the transformation are changed so that data no longer maps to a manifold of equal or lower dimensionality. I.e the contraction property is lost in those regions of the input space where nesting of cones ceases. We will formally define this important property of nested cones as:Let R We see that this definition implies that the cone formed by planes Π is completely contained in that formed by the planes Π 0 but also that its relative rotation is restricted. We will have reason to relax the condition of inclusion of all elements i 1... i p in the intersection subset and talk about cones with restricted nesting. Complete nesting implies contraction of data from one layer to the next which can be seen from the fact that all elements of the complete intersection subset Π 0 i1 ∩... ∩ Π 0 ip and thereby in each subset are mapped to the intersection subset M i1...ip = Π i1 ∩... ∩ Π ip with same dimensionality. In addition elements from intersection subsets formed by subsets of indexes i 1... i p will also be mapped to this same intersection subset. The subsets associated with these indexes are however of higher dimension. Consequently, mapping of data between layers will be from intersection subsets to intersection subsets with equal or lower dimension. This is the crucial property connecting degree of nesting with degree of contraction. 
By going further through the network to higher layers this contraction is iterated and data is increasingly concentrated on intersection subspaces with lower dimension which is reflected by the increased sparsity of the nodes of the network. The convolutional part of a deep network can therefore be seen as a component in a metric learning system where the purpose of the training will be to create domains in input space associated with different classes that are mapped to separate low dimensional outputs as a preprocessing for the final fully connected layers that will make possible efficient separation of classes. There is therefore a conflict between the diversity of input manifolds that contract to low dimensional outputs and the degree of contraction that can be generated in a convolutional network. The efficient resolution of this conflict seems to lie in increasing the number of layers in the network in order to be able to shape diverse input manifolds but with small incremental convolution filters that retain the nesting property of the cone in order to preserve the proper degree of contraction. Empirically, this is exactly what has been demonstrated to be the most efficient way to increase performance of deep networks (13; 14). We are now in a position to give a general characterizaton of the preimage corresponding to a specified output domain at the final convolutional output layer assuming the property of nested layer cones. Ideally we would like to include the final fully connected layers in this analysis but it will require a special study since we cannot assume the nesting property to be valid for these. In the end the network should map different classes to linearly separable domains in order to enable efficient classification. It is generally suggested that the preprocessing part of a network corresponds to flattening nonlinear input manifolds in order to achieve this final separation at the output. In order to be able to draw as general as possible we shall demonstrate the exact structure of a nonlinear input manifold that maps to a prespecified affine manifold at the final convolutional layer. We denote this manifold M and the output at the final convolutional layer by x (l). The final layer can be characterised by the set of hyperplanes: DISPLAYFORM0 Let the zero components of the output x (l) be i 1, i 2... i q. It can then be associated with the intersection of the output manifold and the corresponding hyperplanes DISPLAYFORM1 The degree of intersection q will depend on the dimensionality of M. If M is a d − 1 dimensional hyperplane in general position it will intersect any combination of hyperplanes at output level l. This is the maximum complexity situation that will generate a d − 1 -dimensional input manifold. Reducing dimensionality of M means reducing the possible intersection with combinations of hyperplanes. Note that if M intersects the set the intersection of planes i 1, i 2... i p it also intersects the intersection of any subset of these. Intersecting M with each of these subsets will generate pieces of intersections linked together. These are affine subsets with different dimensionality and the preimage of each piece will be generated by complementary dual basis components. This is illustrated by figure 4 for the case of an affine plane in R 3 + intersecting to give a triangular domain. In this case we have three points on the coordinate axis, and three lines connecting these. 
The three points will all span 2d planes bases on different pairs of complementary dual basis components. In addition to these, the points on the lines of the triangle generated by intersecting M with each of the three individual output planes will generate 1d lines that jointly will span a 2d plane. This plane will connect continuously with the planes spanned from the points on the axis to yield a piecewise planar input manifold to the final layer. Continuing through the network, this piecewise planar manifold will intersect with the planes of layer l − 1 and the procedure is iterated until we reach the input layer. This procedure generalises to arbitrary dimensions but the complexity of course grows with the increasing combinatorics. The basic principle of layer by layer recursively generating piecewise affine manifolds still holds. The complexity lies in the fact that each intersection of the manifold M with every subset of possible hyperplane intersections will generate a seeding hyperplane and and each of these will act as a new manifold M at the next layer. Note however that the nested cone property substantially reduces complexity compared to the general case of arbitrary hyperplanes. Figure 4: Piecewise planar manifold in 3d input space that maps to affine manifold (blue triangle) at the final convolutional layer in a 3-node 3-layer network with circulant transformations. All data is remapped to the input space. Left: Red patches are mapped to 0 dimensional red points at the three output coordinate axis Blue patches are mapped to 1d lines connecting the points. (dark red is outside light red is inside of the manifold Right: Patches that are generated by selective components of the dual basis at each layer. The positive span generated by selective components of the dual basis emanating from the red output points on the triangle as well as from each intersection with coordinate lines in early layers, intersects with the arrangement of hyperplanes representing the preceding layer. The 1-d intersections are then used as seed points for new spans that intersect next preceding layer etc. The 2d intersections together with selective edges from the spans generate linking patches that ensures the continuity of the input manifold. as It should be pointed out that these manifold do not necessarily correspond to actual class manifolds since we are not considering the complete network with fully connected layers. They can however be considered as more elaborate and specific building blocks in order to construct the actual class manifolds of a trained network. We have defined a formal procedure for computing preimages of deep linear transformation networks with ReLU non linearities using the dual basis extracted from the set of hyperplanes representing the transformation. Specialising to convolutional networks we demonstrate that the complexity and the symmetry of the arrangement of corresponding hyperplanes is substantially reduced and we show that these arrangements can be modelled closely with multidimensisional regular polyhedral cones around the identity line in input space. We point out the crucial property of nested cones which guarantees efficient contraction of data to lower dimensions and argue that this property could be relevant in the design of real networks. 
By increasing the number of layers to shape input manifolds in the form of preimages we can retain the nested cone property that most efficiently exploits network data in order to construct input manifolds that comply with manifolds corresponding to real classes and would explain the success of ever deeper networks for deep learning. The retaining of the nested cone property can be expressed as a limitation of the degrees of freedom of multidimensional rotation of the cones. Since convolutional networks essentially always have limited spatial support convolutions, this is to a high degree built in to existing systems. The desire to retain the property of nesting could however act as an extra constraint to further reduce the complexity of the convolutions. This of course means that the degrees of freedom are reduced for a network which could act as a regularization constraint and potentially explain the puzzling efficiency of generalisation of deep networks in spite of a high number of parameters. We demonstrate that it is in principle possible to compute non linear input manifolds that map to affine output manifolds. This demonstrates the possibility of deep convolutional networks to achieve flattening of input data which is generally considered as an important preprocessing step for classification. Since we do not consider a complete network with fully connected layers at the end we cannot give details how classification is achieved. The explicit demonstration of non linear manifolds that map to affine outputs however indicates a possible basic structure of input manifolds for classes. It is easy to see that a parallel translation of the affine output manifold would in two linearly separable manifolds that would be generated by essentially parallel translated non linear manifolds in the input space. This demonstrates that convolutional networks can be designed to exactly separate sufficiently "covariant " classes. and that this could be the reason for the relative success of convolutional networks over previous machine learning approaches to classification and explain why using a large number of classes for training is advantageous since they all contribute to very similar individual manifolds. Disregarding these speculations the fact remains that these manifolds will always exist since they are derived on purely formal grounds from the structure of the network. If they have no role in classification their presence will have to be explained in other ways.
Analysis of deep convolutional networks in terms of associated arrangement of hyperplanes
798
scitldr
Meta learning has been making impressive progress for fast model adaptation. However, limited work has been done on learning fast uncertainty adaption for Bayesian modeling. In this paper, we propose to achieve the goal by placing meta learning on the space of probability measures, inducing the concept of meta sampling for fast uncertainty adaption. Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter. The meta sampler is constructed by adopting a neural-inverse-autoregressive-flow (NIAF) structure, a variant of the recently proposed neural autoregressive flows, to efficiently generate meta samples to be adapted. The sample adapter moves meta samples to task-specific samples, based on a newly proposed and general Bayesian sampling technique, called optimal-transport Bayesian sampling. The combination of the two components allows a simple learning procedure for the meta sampler to be developed, which can be efficiently optimized via standard back-propagation. Extensive experimental demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaption compared to related methods. Meta learning is an important topic in modern machine learning. The goal is to learn some abstract concepts from different but related tasks, which can then be adapted and generalized to new tasks and environments that have never been encountered during training. There has been lots of research on this topic. A recent review classifies the methods as metric-based, model-based and optimization-based methods . Among these methods, learning-to-learn seeks to learn a meta optimizer that can be applied to different models, with some task-specific information such as current gradients as input . Model agnostic meta learning (MAML) aims to learn a meta parameter/model from a set of training tasks such that it can quickly adapt to models for new tasks . Many follow-up works have been proposed recently, including but not limited to the meta network , the meta learner , the Reptile model , and the lately extensions to an online setting , to model hierarchical relation and sequential strategies , and to its stable version and to some theoretical analysis . It is worth noting that all the aforementioned models are designed from an optimization perspective. Bayesian modeling, in parallel with optimization, has also been gaining increasing attention and found various applications in deep learning. Recent research has extended the above meta-learning methods to a Bayesian setting. For example, Bayesian MAML (BMAML) replaces the stochasticgradient-descent (SGD) step with Stein variational gradient descent (SVGD) for posterior sampling . Probabilistic MAML (PMAML) extends standard MAML by incorporating a parameter distribution of the adapted model trained via a variational lower bound . Amortized Bayesian Meta Learning extends the idea of MAML to amortized variational inference . VERSA uses an amortization network to approximate the posterior predictive distributions. Meta particle flow realizes Bayes's rule based on ODE neural operator that can be trained in a meta-learning framework. Though methodologically elegant with many interesting applications, the above methods lack the ability to uncertainty propagation/adaption, in the sense that uncertainty is either not considered (e.g., in MAML) or only considered in the specific task level (e.g., BMAML). 
This could slow down model adaption or even inaccurate uncertainty modeling when considering from a Bayesian modeling perspective. For example, suppose one is given samples from a set of Gaussians with different mean and covariance matrices, how can she/he efficiently leverage uncertainty in these samples to generate samples from a complex yet related distribution such as a Gaussian mixture? To tackle this problem, we propose to perform meta learning on the space of probability measures, i.e., instead of adapting parameters to a new task, one adapts a meta distribution to new tasks. When implementing distribution adaption in algorithms where distributions are approximated by samples, our distribution-adaptation framework becomes sample-to-sample adaption. In other words, the meta parameter in standard MAML becomes meta samples in our method, where uncertainty can be well encoded. For this reason, we call our framework Bayesian meta sampling. Specifically, we propose a mathematically elegant framework for Bayesian meta sampling based on the theory of Wasserstein gradient flows (WGF) . Our goal is to learn a meta sampler whose samples can be fast adapted to new tasks. Our framework contains two main components: a meta sampler and a sample adapter. For the meta sampler, we adopt the state-ofthe-art flow-based method to learn to transport noise samples to meta samples. Our meta sampler is parameterized by a neural inverse-autoregressive flow (NIAF), an extension of the recently developed neural autoregressive flows (NAFs) . The NIAF consists of a meta-sample generator and an autoregressive conditioner model, which outputs the parameters of the meta-sample generator. The NIAF takes some task-specific information (such as gradients of target distributions) and random noise as input and outputs meta samples from its generator. These meta samples are then quickly adapted to task-specific samples of target distributions by feeding them to the sample adapter. To ensure efficient and accurate adaptations to new task distributions, a novel optimal-transport Bayesian sampling (OP-sampling) scheme, based on Wasserstein gradient flows, is proposed as the adaptation mechanism of the sample adapter. The OP-sampling is general and can ensure samples to be adapted in a way that makes the sample density evolve to a target distribution optimally, thus endowing the property of fast uncertainty adaption. Finally, when one aims to perform specific tasks such as Bayesian classification with a task network, these samples are used to encode uncertainty into modeling. To this end, we further develop an efficient learning algorithm to optimize the task network based on variational inference. Extensive experiments are conducted to test the advantages of the proposed meta-sampling framework, ranging from synthetic-distribution to posterior-distribution adaption and to k-shot learning in Bayesian neural networks and reinforcement learning. Our demonstrate a better performance of the proposed model compared to related methods. Our model combines ideas from Bayesian sampling, Wasserstein gradient flows and inverseautoregressive flows. A detailed review of these techniques is provided in Section A of the Appendix. Overall idea of the proposed meta-sampling framework. The task is denoted with τ. The two components and specific inputs will be described in details. 
In meta sampling, one is given a set of related distributions, e.g., posterior distributions of the weights of a set of Bayesian neural networks (BNNs), each of which is used for classification on a different but related dataset. With our notation, each of the network and the related dataset is called a task, which is denoted as τ. Meta sampling aims to learn a meta sampler based on a set of training tasks so that samples from the meta sampler can be fast adapted to samples for an unseen new task. Our overall idea of the proposed Bayesian meta sampling is to mimic a hierarchical-sampling procedure but in a much more efficient way. Specifically, we propose to decompose meta sampling into two components: a meta sampler and a sample adapter. The meta sampler is responsible for generating meta samples that characterize common statistics of different tasks; The sample adapter is designed for fast adaptation of meta samples to task-specific target distributions. The meta sampler is parameterized as a conditional generator, and the sample adapter aggregates all local losses of different tasks to form a final loss for optimization based on optimal-transport theory. Our method allows gradients to be directly backpropagated for meta-sampler updates. The overall idea is illustrated in Figure 1. Comparisons with related works We distinguish our model with two mostly related works: the meta NNSGHMC and the probabilistic MAML (PMAML) . The main differences lie in two aspects: meta representation and model architecture. In terms of meta-model representation, our model adopts data/parameter samples, instead of determinstic parameters, as meta representation, and thus can be considered as a sample-to-sample adaption. Meta NNSGHMC uses samples on different tasks, there is no concept of meta samples. Finally, PMAML fully relies on variational inference whose representation power could be restricted. In terms of model architecture, our model adopts the state-of-the-art autoregressive architectures, which can generate high-quality meta samples. Furthermore, our model adopts a simpler way to define the objective function, which allows gradients to directly flow back for meta-sampler optimization. It is worth noting that our framework reduces to MAML when only one meta sample is considered for sample adaption. Since our methods aim for Bayesian sampling, it is more similar to meta NNSGHMC ; whereas MAML aims for point estimation of a model. Finally, we note that the recently proposed neural process might also be used for meta-learning "few-shot function regression" as stated in the original paper. However, to our knowledge, no specific work has been done for this purpose. Task network 5(⋅; 9) This section aims to design a meta sampler for efficiently generating meta samples. One idea is to use a nonparametric model to generate meta samples such as standard stochastic gradient MCMC (SG-MCMC) and the Stein variational gradient descent (SVGD) , where no parameters are consider in the model. However, methods under this setting are typically slow, and the generated samples are usually highly correlated. Most importantly, it would be hard to design nonparametric samplers that can share information between different tasks. As a , we propose to learn the meta sampler with a parametric model, which is also denoted as a generator. There are two popular options to parameterize the meta sampler: with an explicit generator or with an implicit generator. 
An explicit generator parameterizes the output (i.e., meta samples) as samples from an explicit distribution with a known density form such as Gaussian, thus limiting the representation power. In the following, we propose to adopt an implicit generator for the meta sampler based on neural inverse-autoregressive flows (NIAF), an inverse extension of the recently proposed NAF used for density estimation. As will be seen, NIAF can incorporate task-specific information into an implicit generator and generates samples in an autoregressive manner efficiently. Finally, meta samples are used in a task network to encode uncertainty for specific tasks such as Bayesian classification. The architecture of the meta sampler is illustrated in Figure 2. Note the idea of using NIAF to generate network parameter is similar to the hypernetwork . As an extention, perform inference on a lower dimensional latent space. Using hypernetworks to model posterior distributions have also been studied in . Neural inverse-autoregressive flows Directly adopting the NAF for sample generation is inappropriate as it was originally designed for density evaluation. To this end, we propose the NIAF for effective meta-sample generation. Specifically, let z k denote the k-th element of a sample z to be generated;z denotes a sample from last round; Γ denotes the task-specific information. In our case, we set Γ (z, ∇z log p(z)) with p(·) denoting the target distribution (with possible hidden parameters). NIAF generates the sample z = (z 1, · · ·, z k, · · ·) via an autoregressive manner, as: where {k} are noise samples; G(·, ·; φ φ φ) is an invertible function (generator) parameterized by φ φ φ and implemented as a DNN; and T is an autoregressive conditioner model to generate the parameters of the generator G at each step k, which is itself parameterized by ψ ψ ψ and implemented as a deep sigmoidal flow or a deep dense sigmoidal flow as in . According to , using strictly positive weights and strictly monotonic activation functions for G is sufficient for the entire network to be strictly monotonic, thus invertible. The original NAFs are not designed for drawing samples, as one needs the inverse function G −1, which is not analytically solvable when G is implemented as a neural network. Although it is stated in that G −1 can be approximated numerically, one needs repeated approximations, making it computationally prohibited. Our proposed NIAF is designed specifically for sample generation by directly transforming the noise with a flow-based network G. The task network In addition to the NIAF, a task network might be necessary for processing specific learning tasks. In particular, if one is only interested in generating samples from some target distribution, a task network is not necessary as the meta samples will be used to adapt to task-specific samples. However, if one wants to do classification with uncertainty, the task network should be defined as a classification network such as an MLP or CNN. In this case, denoting the weights of the task network as W, we consider the task network as a Bayesian neural network, and propose two ways of parameterization to encode uncertainty of meta samples into the task network: • Sample parameterization: A sample of the weights of the task network is directly represented by a meta sample from our meta sampler, i.e., W = (z 1, z 2, · · ·, z p) with p the parameter dimensionality. 
• Multiplicative parameterization: Adopting the idea of multiplicative normalizing flows, we define an inference network for the weights as the multiplication of z and a Gaussian variational distribution for W, i.e., the variational distribution is defined as the following semi-implicit distribution to approximate the true posterior distribution of W: Here and in the following, we consider the task network parameterized as a one-layer MLP for notational simplicity, although our method applies to other network structures; and we have used NIAF({ε_k}, Γ) to denote the output of meta samples from the NIAF.

Comparing the two parameterizations, sample parameterization directly generates the weights of the task network from the meta sampler, and is thus more flexible in uncertainty modeling. However, when the task network grows larger to deal with more complex data, this way of parameterization quickly becomes unstable or even intractable due to the high dimensionality of meta samples. Multiplicative parameterization overcomes this issue by associating each element of a meta sample with one node of the task network, reducing the meta-sample dimensionality from O(N_in × N_out) to O(N_in + N_out), with N_in and N_out the input and output sizes of the task network. As a result, we adopt the multiplicative parameterization when dealing with large-scale problems in our experiments. Efficient inference for these two cases will be described in Section 2.4. Note that a recent work on NAF inference proposes to first sample from a mean-field approximating distribution, which is then transformed by an NAF into a more expressive distribution. However, that approach is hard to scale to very high-dimensional problems, e.g., posterior distributions of network parameters.
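Before turning to sample adaptation, the following is a minimal sketch, not the authors' released implementation, of an inverse-autoregressive generator in the spirit of Section 2.2.1, built from a deep-sigmoidal-flow scalar transform. The layer sizes, the way the conditioner consumes the task information Γ, and all variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NIAFSampler(nn.Module):
    """Sketch of an inverse-autoregressive sampler: noise -> sample, one dimension at a time.

    Each scalar transform is a deep sigmoidal flow,
        z_k = logit( w^T sigmoid(a * eps_k + b) ),
    which is strictly monotonic (hence invertible) because a > 0 and w lies on the simplex.
    The conditioner plays the role of T(z_{1:k-1}, Gamma): it maps the already-generated
    prefix and the task information to the per-step parameters (a, b, w).
    """

    def __init__(self, dim, task_dim, n_sigmoids=8, hidden=64):
        super().__init__()
        self.dim, self.n_sigmoids = dim, n_sigmoids
        # conditioner T(.; psi): prefix (zero-padded to full dim) + task info -> DSF parameters
        self.conditioner = nn.Sequential(
            nn.Linear(dim + task_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_sigmoids))

    def _dsf(self, eps_k, params):
        a, b, w_logits = params.chunk(3, dim=-1)
        a = F.softplus(a)                      # strictly positive slopes
        w = torch.softmax(w_logits, dim=-1)    # convex-combination weights
        s = torch.sigmoid(a * eps_k.unsqueeze(-1) + b)
        y = (w * s).sum(-1).clamp(1e-6, 1 - 1e-6)
        return torch.log(y) - torch.log1p(-y)  # logit keeps the map strictly monotonic

    def forward(self, eps, gamma):
        """eps: (batch, dim) noise; gamma: (batch, task_dim) task statistics."""
        batch = eps.shape[0]
        z = eps.new_zeros(batch, self.dim)
        for k in range(self.dim):              # autoregressive generation
            prefix = torch.cat([z[:, :k], eps.new_zeros(batch, self.dim - k)], dim=1)
            params = self.conditioner(torch.cat([prefix, gamma], dim=1))
            z = z.clone()
            z[:, k] = self._dsf(eps[:, k], params)
        return z

sampler = NIAFSampler(dim=5, task_dim=3)
meta_samples = sampler(torch.randn(16, 5), torch.randn(16, 3))  # 16 meta samples
```

Because the noise is pushed forward through the monotonic transform, no inverse of the sigmoidal network is ever required, which is the point of inverting NAF for sampling.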
The output of the meta sampler contains shared (meta) information of all the tasks. Task-specific samples are expected to be adapted quickly from these meta samples. This procedure is called sample adaption. Since there are potentially a large number of tasks, learning task-wise parametric models for sample adaption is impractical. Instead of using standard nonparametric samplers such as SG-MCMC or SVGD, we propose a general Bayesian sampling framework based on optimal-transport theory for new-task sample adaption, where back-propagation can be directly applied.

A general Bayesian sampling framework based on optimal transport Let a task-specific target distribution be p_τ(z), indexed by τ. A standard way is to adapt the samples based on a Markov chain whose stationary distribution equals p_τ, e.g., via SG-MCMC. However, Markov-chain-based methods might not be efficient enough in practice due to the potentially highly correlated samples. Furthermore, it is not obvious how to apply backpropagation (BP) in most sampling algorithms. To deal with these problems, we follow Chen et al. (2018b) and view Bayesian sampling from the Wasserstein-gradient-flow (WGF) perspective (discussed in Section A.2), i.e., instead of evolving samples, we explicitly evolve the underlying sample density functions. We will see that such a solution allows us to train the proposed meta sampler efficiently via standard BP. Now consider our meta-learning setting. Since we aim to adapt meta samples to new tasks, it is reasonable to define the adaptation via task-wise WGFs, i.e., for each task there is a WGF with a specific functional energy and the corresponding first variation, denoted respectively as E_τ and F_τ ≜ δE_τ/δρ, with τ the task index. Here ρ denotes the underlying density of the samples. Consequently, ρ will evolve with a variant of the PDE obtained by replacing E with E_τ in equation 8 for each task. To solve the corresponding PDE, we prove Theorem 1 based on a discrete approximation of ρ with the evolved meta samples, which is termed optimal-transport Bayesian sampling (OT-Bayesian sampling).

Theorem 1 (Optimal-Transport Bayesian Sampling) Let ρ_t at time t be approximated by particles. This result is useful for deriving a learning algorithm for the meta sampler, described in Section 2.4.

Energy functional design Choosing an appropriate energy functional E_τ is important for efficient sample adaptation. To achieve this, the following conditions should be satisfied: i) E_τ(ρ) should be convex w.r.t. ρ; ii) the first variation F_τ can be calculated conveniently. A general and convenient functional family is the f-divergence, which is defined, with our notation, in terms of a convex function f: R → R such that f(1) = 0. The f-divergence is a general family of divergence metrics. With different functions f, it corresponds to different divergences, including the popular KL divergence, the inverse-KL divergence, and the Jensen-Shannon divergence. For more details, please refer to the literature on f-divergences. A nice property of the f-divergence is that its first variation admits a convenient form, as stated in Proposition 2. (We use the bold letter z to denote the i-th meta sample evolved with equation 3 at time t, or with equation 4 at iteration k; this should be distinguished from the normal unbold letter z_k defined in Section 2.2.1, which denotes the k-th element of z.)

Proposition 2 Let r(z) = ρ(z)/p_τ(z). The first variation of the f-divergence admits the following form: In our experiments, we focus on the KL divergence, which corresponds to f(r) = r log r. In this case, since the density ρ(z) required for evaluating r is not readily available due to its implicit distribution, we follow Chen et al. (2018b) and use the meta samples {z^(i)_k} at the k-th step for approximation, resulting in equation 5, where κ(·, ·) is a kernel function. The number of adaptation steps k should be set based on the problem at hand. For tasks that vary significantly, a larger k should be chosen to ensure the quality of the adapted samples. To further improve the accuracy, inspired by Chen et al. (2018b), we combine equation 5 with the first variation of SVGD, resulting in the following form at iteration k, where λ ≥ 0 is a hyperparameter to balance the two terms.
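As a concrete illustration of this adaptation step, the following is a minimal sketch, under simplifying assumptions, of how particles could be moved with a kernel-density approximation of the KL first variation blended with an SVGD-style direction. The step size, the RBF bandwidth, and the exact weighting of equation 6 are assumptions, and grad_log_p stands for ∇_z log p_τ(z) of the task at hand.

```python
import numpy as np

def rbf(z, h):
    """Pairwise RBF kernel matrix and its gradients w.r.t. the first argument."""
    diff = z[:, None, :] - z[None, :, :]          # (M, M, d), diff[i, j] = z_i - z_j
    sq = (diff ** 2).sum(-1)
    K = np.exp(-sq / (2 * h ** 2))
    gradK = -diff / h ** 2 * K[..., None]         # d/dz_i K(z_i, z_j)
    return K, gradK

def adapt_step(z, grad_log_p, step=1e-2, lam=1.0, h=0.5):
    """One sample-adaptation step for particles z (M, d) toward a task target p_tau.

    Term 1 approximates the negative gradient of the KL first variation,
    grad log p_tau(z) - grad log rho(z), with rho estimated by a kernel density
    over the current particles.  Term 2 is the standard SVGD direction, added
    with weight lam in the spirit of equation 6.
    """
    M = z.shape[0]
    K, gradK = rbf(z, h)
    glp = np.stack([grad_log_p(zi) for zi in z])            # (M, d)
    # kernel-density estimate of grad log rho at each particle
    grad_log_rho = gradK.sum(axis=1) / (K.sum(axis=1, keepdims=True) + 1e-12)
    wgf_term = glp - grad_log_rho
    svgd_term = (K @ glp - gradK.sum(axis=1)) / M           # attraction + repulsion
    return z + step * (wgf_term + lam * svgd_term)

# toy usage: adapt particles toward a 2-D standard Gaussian "task posterior"
grad_log_p = lambda x: -x
particles = np.random.randn(50, 2) * 3 + 5.0
for _ in range(200):
    particles = adapt_step(particles, grad_log_p)
```

Since every operation above is differentiable, the same update can be unrolled inside an automatic-differentiation framework so that gradients flow back to the meta-sampler parameters, which is what enables the BP-based training described next.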
We first describe how to train the proposed model under the two kinds of parameterization of the task network defined in Section 2.2.1.

Training in the sample-parameterization setting In this case, one only needs to optimize the conditional model T(·; ψ), as all parameters of the other networks are directly generated. Specifically, because the energy functionals for each task τ are designed so that their minima correspond to the target distributions, the objective can be defined over the whole task distribution p(τ). For notational simplicity, we will not distinguish among the adapted samples (i.e., z^(i)_k) for different tasks. Since the only parameter is ψ in the autoregressive conditioner model T (see Figure 2; note that the parameters φ and W of the meta generator and task network do not need to be learned, as they are the outputs of T and G, respectively), its gradient can be directly calculated using the chain rule, where the equality follows from the result in Section 2.3.

Training in the multiplicative-parameterization setting In this case, two sets of parameters are to be learned, ψ and W. Since ψ and W are decoupled, one can still optimize ψ by adopting the same update equation as in the sample-parameterization setting. For W, we adopt variational inference. Specifically, we first augment the task network with an auxiliary network with a conditional likelihood; with the inference network defined above, writing the implicit distribution of z as q_ψ(z) and the prior distribution of W as p(W), we arrive at the following ELBO: Different from prior work, we only update (θ, φ) by optimizing the above ELBO, while leaving the update of ψ to the gradient calculated in equation 7, which reflects gradients of samples from the NIAF. Note also that in the meta-learning setting, the task network needs to be adapted for new tasks. This can be done by standard MAML with the above ELBO as the new-task objective. The whole algorithm is illustrated in Algorithm 1 in the Appendix. A similar idea of multiplicative parameterization was proposed recently, using a compound density network to quantify predictive uncertainty.

New-task sample generation After training, samples for a new task can be directly generated by feeding the task information Γ and some noise to the meta sampler depicted in Figure 1. Typically, one needs to run a small number of sample-adaption steps to generate good samples for new tasks. Notably, the number of sample-adaption steps required to obtain good accuracy will be shown in the experiments to be much smaller than when simply starting a sampler from scratch.

We conduct a series of experiments to evaluate the efficiency and effectiveness of our model, and compare it with related Bayesian sampling algorithms such as SGLD, SVGD, and SGHMC. The main compared algorithms for meta learning include PMAML, Amortized Bayesian Meta-Learning (ABML), and NNSGHMC, a recently proposed meta SG-MCMC algorithm. We denote our algorithm distribution-agnostic meta sampling (DAMS).

We first demonstrate that our proposed NIAF-based sampler is able to generate more effective samples than popular Bayesian algorithms such as SVGD, SGLD, and SGHMC, in a non-meta-sampling setting. To this end, we apply standard Bayesian logistic regression (BLR) to several real datasets from the UCI repository: Australian (15 features, 690 samples), German (25 features, 1000 samples), and Heart (14 features, 270 samples). We perform posterior sampling for BLR using our proposed sampler, as well as SVGD, SGLD, and SGHMC. For a more detailed investigation of the different components of our model, we also test the generator with different architectures, including generators with an MLP (DAMS with MLP), IAF (DAMS with IAF), and NIAF (DAMS with NIAF). We apply Gaussian priors for the parameters, p_0(w|α) = N(w; 0, α^{-1} I) with p_0(α) = Gamma(α; 1, 0.01). A random selection of 80% of the data is used for training and the remainder for testing. The testing accuracies are shown in Table 1. It is observed that DAMS with NIAF achieves the best performance in terms of accuracy. The results also indicate the effectiveness and expressiveness of the proposed NIAF architecture in the OT-Bayesian sampling framework.
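For reference, the BLR target used in this experiment has a simple unnormalized log-posterior. The sketch below is an assumption-laden illustration, not the paper's code: the Gamma hyperprior on α is replaced by a fixed α for brevity, and the synthetic data merely stands in for a UCI split.

```python
import numpy as np

def blr_log_posterior_grad(w, X, y, alpha=0.01):
    """Unnormalized log-posterior and its gradient for Bayesian logistic regression.

    Prior: w ~ N(0, alpha^{-1} I) with alpha fixed here for simplicity (the paper
    places a Gamma hyperprior on alpha; sampling it is omitted in this sketch).
    Likelihood: y_i in {0, 1}, p(y_i = 1 | x_i, w) = sigmoid(x_i^T w).
    """
    logits = X @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    log_lik = np.sum(y * np.log(probs + 1e-12) + (1 - y) * np.log(1 - probs + 1e-12))
    log_prior = -0.5 * alpha * np.dot(w, w)
    grad = X.T @ (y - probs) - alpha * w
    return log_lik + log_prior, grad

# toy usage with synthetic data standing in for a UCI split
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15)); w_true = rng.normal(size=15)
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
logp, g = blr_log_posterior_grad(np.zeros(15), X, y)
```

Any of the compared samplers, including the adaptation step sketched earlier, only needs this gradient of the log-posterior to update its particles.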
In this set of experiments, we aim to demonstrate the excellent meta-sample adaptability of our meta-sampling framework on different tasks. An additional synthetic experiment on meta posterior adaption is presented in Section D.3 of the Appendix.

Gaussian mixture models We first conduct experiments to meta-sample several challenging Gaussian mixture distributions. We consider mixtures of 4, 6, and 20 Gaussians. Detailed distributional forms are given in the Appendix. To set up a meta-sampling scenario, we use 2, 3, and 14 Gaussian components with different means and covariances, respectively, for meta-training of the meta sampler. After training, meta samples are adapted to samples from a target Gaussian mixture by following the new-task sample-generation procedure described in Section 2.4. We plot the convergence of 1000 meta samples to a target distribution versus the number of particle (sample) updates (iterations), measured with the maximum mean discrepancy (MMD) estimated from samples. For a fair comparison, we use the same number of samples (particles) to evaluate the MMD. The results are shown in Figure 3. It is clear that our proposed meta sampler DAMS converges much faster and better than the other sampling algorithms, especially on the most complicated mixture of 20 Gaussians. The fast convergence (adaption) is partially due to the learned meta sampler, which provides good initialization for sample adaption. This is further verified by inspecting how samples evolve during the adaption process, as plotted in Figures 10, 11, and 12 in the Appendix.

Finally, we test the proposed DAMS for meta sampling of BNNs on MNIST and CIFAR-10. We follow the experimental setting of prior work and split the MNIST and CIFAR10 datasets into two parts for meta training and testing (sample adaption), respectively. As we focus on fast adaptation, we show the accuracy within 200 iterations. To deal with the high-dimensionality issue, we adopt the multiplicative parameterization proposed in Section 2.2.1. We randomly pick 5 classes for training and use the remaining classes for testing. A BNN is trained only on the training data for meta-sample (BNN weight) generation, with each sample corresponding to a meta BNN. In testing, the meta BNNs are adapted based on the testing data. For the MNIST dataset, we parameterize a BNN as a CNN with two convolutional layers followed by a fully connected layer with 100 hidden units. The kernel sizes of the two conv layers are 3 and 16, respectively. A similar architecture is applied for the CIFAR10 dataset, but with 16 and 50 filters whose kernel sizes are 7 and 5 for the two convolutional layers, respectively. The fully connected layer has 300 hidden units.

a) Adaptation efficiency: For this purpose, we compare our model with NNSGHMC, as well as with a non-meta-learning method that trains from scratch. To demonstrate the effectiveness of our NIAF structure for adaptive posterior sampling, we also compare it with the simple conditional version of MNF. For NNSGHMC and our DAMS, 20 meta samples are used in training. Figure 4 plots the learning curves of testing accuracy versus the number of iterations. It is clearly seen that our DAMS adapts the fastest to new tasks, and is able to achieve the highest classification accuracy in all cases due to the effectiveness of uncertainty adaption. To further demonstrate the superiority of DAMS over NNSGHMC, we list the test accuracy at different adaptation steps in Table 2. The results clearly show faster adaption and higher accuracy of the proposed DAMS compared to NNSGHMC. It is also interesting to see that MNF, the non-meta-learning method, performs better than NNSGHMC, while our method outperforms both, demonstrating the effectiveness of the proposed NIAF architecture.
b) Sample efficiency: To demonstrate the sample efficiency of our framework, we compare it with both NNSGHMC and standard Bayesian learning of DNNs with SGHMC. To this end, we randomly select 5%, 20%, and 30% of the training data on CIFAR10 in a test task as training data for adaptation, and test on the same testing data. Figure 5 shows the corresponding test accuracies for the different settings. It is observed that the adaptation-based methods, including ours, obtain higher accuracies than the non-adaptive SGHMC. Furthermore, our method achieves the best sample efficiency among the compared methods.

c) Uncertainty evaluation: Finally, we study the uncertainty estimation of our model, standard SGHMC, and NNSGHMC in terms of test accuracy and negative log-likelihood. The results on the CIFAR10 dataset are shown in Figure 6 and Table 3. We evaluate uncertainty via the entropy of out-of-sample predictive distributions (Figure 6, right). We observe that the uncertainty estimates of DAMS are better than those of the other methods, since the probability of a low-entropy prediction is much lower. Details are given in Section D.4 of the Appendix.

Following the literature, we further apply our framework to meta sampling for few-shot classification and reinforcement learning.

Model-agnostic meta sampling for few-shot image classification We apply our method to two popular few-shot image-classification tasks on the Mini-Imagenet dataset, which consists of 64, 16, and 20 classes for training, validation, and testing, respectively. We compare our method with MAML and its variants with uncertainty modeling, including Amortized Bayesian Meta-Learning (ABML) and Probabilistic MAML (PMAML). To get a better understanding of each component of our framework, we also conduct an ablation study with three variants of our model: MAML-SGLD, MAML-SGHMC, and DAMS-SGLD. MAML-SGLD and MAML-SGHMC correspond to the variants where SGLD and SGHMC, respectively, are used to sample the parameters of the classifier; DAMS-SGLD replaces the WGF component of DAMS-NIAF with SGLD. Following the settings in the literature, the network architecture includes a stacked 4-layer convolutional feature extractor, followed by a meta classifier with a single fully-connected layer using the multiplicative parameterization. Testing results are presented in Table 4. With our method, we observe a significant improvement in classification accuracy at an early stage compared with MAML. The learning curves are plotted in Figure 7, further demonstrating the superiority of our method, which can provide an elegant initialization for the classification network. Finally, the ablation study suggests that both the NIAF and the WGF components contribute to the performance gain obtained by our method.

Meta sampling for reinforcement learning We next adapt our method for meta reinforcement learning. We test and compare the models on the same MuJoCo continuous-control tasks as used in prior work, including the goal-velocity task and the goal-direction task for cheetah robots. For a fair comparison, we leverage the TRPO-RL framework for meta updating, following the MAML method. Specifically, we implement the policy network with two hidden layers with ReLU activations, followed by a linear layer that produces the mean of the Gaussian policy. The first hidden layer is a fully connected layer, and we adopt the multiplicative parameterization for the second hidden layer.
As shown in Figure 8, our method obtains higher rewards than MAML on both tasks, indicating the importance of effective uncertainty adaptation in RL.

We present a Bayesian meta-sampling framework, called DAMS, consisting of a meta sampler and a sample adapter for effective uncertainty adaption. Our model is based on the recently proposed neural autoregressive flows and related theory from optimal transport, enabling a simple yet effective training procedure. To make the proposed model scalable, an efficient uncertainty parameterization is proposed for the task network, which is trained by variational inference. DAMS is general and can be applied to different scenarios with an ability for fast uncertainty adaptation. Experiments on a series of tasks demonstrate the advantages of the proposed framework over other methods, including the recently proposed meta SG-MCMC, in terms of both sample efficiency and fast uncertainty adaption.

This section provides a review of background material on Bayesian sampling, Wasserstein gradient flows, and autoregressive flows.

Bayesian sampling has been a long-standing tool in Bayesian modeling, with a wide range of applications such as uncertainty modeling, data generation, and reinforcement learning. Traditional algorithms include, but are not limited to, the Metropolis-Hastings algorithm, importance sampling, and Gibbs sampling. Modern machine learning and deep learning have been pushing forward the development of large-scale Bayesian sampling. Popular algorithms in this line of research include the family of stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD). Recently, a particle-optimization sampling framework that unifies SG-MCMC and SVGD has also been proposed, followed by several recent developments. Generally speaking, all these methods target sampling from particular distributions, such as the posterior distribution of the weights of a Bayesian neural network (BNN). On the other hand, meta learning is a recently developed concept that tries to learn abstract information from a set of different but related tasks. A natural question arising from these two lines of work is: can we design meta-sampling algorithms that learn to generate meta samples, which can be quickly adapted to samples of a new task-specific distribution? This paper bridges this gap by proposing a mathematically sound framework for Bayesian meta sampling.

In optimal transport, a density function ρ_t evolves along time t to a target distribution optimally, i.e., along the shortest path on the space of probability measures P(Ω), with Ω a subset of R^d. Optimality is measured in the sense that ρ_t moves along the geodesic of a Riemannian manifold induced by a functional energy E: P(Ω) → R under the 2-Wasserstein distance metric. Formally, the trajectory of ρ_t is described by the following partial differential equation (PDE), where ∇_z · denotes the divergence operator and F ≜ δE/δρ_t, evaluated at ρ_t, is called the first variation of E at ρ_t (the functional derivative on a manifold in P(Ω)). To ensure that ρ_t converges to a target distribution p, such as the posterior distribution of the model parameters, one must design an appropriate E such that p = arg min_ρ E(ρ). A common choice is the popular KL divergence, KL(ρ, p). We will consider a more general setting in our framework presented later. Note that the WGF framework of equation 8 allows one to view Bayesian sampling from a density-optimization perspective.
For example, recent works consider approximating ρ_t with samples and evolving the samples according to equation 8. Parts of our model follow this sampling setting.

Our model relies on the concept of autoregressive flows for meta-sampler design. We review some key concepts here; more detailed comparisons are provided in the Appendix. A normalizing flow defines an invertible transformation from one random variable, e.g., a noise variable ε, to another, z. A flexible way to implement this is to define it via implicit distributions, meaning sample generation is implemented as ε_i ∼ q_0(ε_i), z_i = G(ε_i; φ), where i indexes the elements of ε and z, and G represents a deep neural network (generator) parameterized by φ. The autoregressive flow (AF) parameterizes a Gaussian conditional distribution for each z_i, e.g., p(z_i | z_1:i−1) = N(z_i | µ_i, exp(α_i)), where µ_i = g_µi(z_1:i−1) and α_i = g_αi(z_1:i−1) are outputs of two neural networks g_µi and g_αi. The sample-generation process is z_i = ε_i exp(α_i) + µ_i, with µ_i = g_µi(z_1:i−1), α_i = g_αi(z_1:i−1), and ε_i ∼ N(0, 1). Instances of autoregressive flows include the Autoregressive Flow (AF) and the Masked Autoregressive Flow (MAF). The inverse autoregressive flow (IAF) is an instance of normalizing flows that uses MADE; its samples are generated with the same affine form but conditioned on the previously drawn noise. The neural autoregressive flow (NAF) replaces the affine transformation used in the above flows with a deep neural network (DNN), i.e., ε_t = f(z_t; φ = T(z_1:t−1)), where f is a DNN transforming a complex sample distribution p(z) to a simple latent representation q_0. In NAF, q_0 is considered a simple prior, and f is an invertible function represented by a DNN whose weights are generated by T, an autoregressive conditional model. Let the distribution induced by f be p_f; f is learned by minimizing the KL divergence between p_f and q_0. Note that µ_i and α_i are computed differently for AF and IAF, i.e., the previous variables z_1:i−1 are used for AF, whereas the previous random noise ε_1:i−1 is used for IAF. AF can be used for calculating the density p(z) of any sample z in one pass of the network. However, drawing samples requires performing D sequential passes (D is the dimensionality of z). Thus, if D is large, drawing samples is computationally prohibitive. IAF, by contrast, can draw samples and estimate densities of the generated samples with only one pass of the network. However, calculating the sample density p(z) requires D passes to find the corresponding noise. The advantage of NAF is that the mapping function f is much more expressive, and density evaluation is efficient. However, drawing samples is much more computationally expensive. To adopt the NAF framework for sampling, we propose the neural inverse-autoregressive flow (NIAF) in Section 2.2.1.
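To illustrate the one-pass sampling property of IAF described above, here is a minimal sketch under simplified assumptions: the autoregressive networks g_µ and g_α are reduced to masked linear maps (a stand-in for MADE-style weight masking), and all sizes and names are illustrative.

```python
import numpy as np

def iaf_sample(eps, weights_mu, weights_alpha):
    """One-pass IAF sampling: z_i = eps_i * exp(alpha_i) + mu_i,
    where mu_i and alpha_i depend only on the previous noise eps_{1:i-1}.

    For clarity the autoregressive networks g_mu, g_alpha are linear maps with
    strictly lower-triangular weight matrices, which enforces the
    autoregressive dependence on previous noise dimensions only.
    """
    d = eps.shape[-1]
    L = np.tril(np.ones((d, d)), k=-1)          # strictly lower-triangular mask
    mu = eps @ (weights_mu * L).T
    alpha = eps @ (weights_alpha * L).T
    return eps * np.exp(alpha) + mu

d = 4
rng = np.random.default_rng(0)
Wm, Wa = rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))
z = iaf_sample(rng.normal(size=(128, d)), Wm, Wa)   # 128 samples in one pass
```

Because µ and α depend only on the noise, all dimensions of z are produced in a single vectorized pass, in contrast to the D sequential passes required by AF.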
Our algorithm in the multiplicative-parameterization setting includes updating the flow parameter and learning the task network with variational inference, as described in Algorithm 1.
Require: p(T), a distribution over tasks.
Require: α, β, step-size hyperparameters.
Randomly initialize θ, φ, ψ.
while not done do
  Sample a batch of tasks.
  Compute adapted parameters with gradient descent.

Proof of Theorem 1 For each task τ, we first write out the corresponding WGF (equation 9). Note that the WGF is defined in the sense of distributions, meaning that for any smooth real function u(z) with compact support, equation 9 implies equation 10. Taking ρ(z) to be the empirical distribution of the particles and, for each particle, letting u(z) = z, equation 10 then reduces to the following differential equation for each particle, which is the particle evolution equation we need to solve.

Proof of Proposition 2 First, we introduce the following result (Lemma 3). In the f-divergence case, this corresponds to f̃(ρ) = p_τ f(ρ/p_τ). Applying Lemma 3 and using the chain rule, we obtain the claimed form, which completes the proof.

This section provides extra experimental results to demonstrate the effectiveness of the proposed method. Analytic forms of the synthetic distributions are provided below. Mog4: Mog6:

We apply our DAMS for fast adaptive sampling on regression tasks, sampling the posterior distribution of the frequency parameter of a sine-wave function given only three data points. The sine-wave function is defined as y(t) = sin(2πf t), with a uniform prior U and a Gaussian likelihood N(y_i; y_f(t_i), 0.125). We design a meta-sampling setting to adapt the posterior distribution p(f | D) on the training data to that on new test data. Specifically, the meta training data (t, y) are {, (2/5, 0), (4/5, 0)}. For the first setting, meta testing consists of the data {, (3/5, 0), (6/5, 0)}. For the second setting, meta testing consists of the data {, (4/5, 0), (8/5, 0)}. The meta training data correspond to a posterior with two modes, f ∈ {0.0, 5/4}. For the first setting, the test data correspond to a posterior with three modes, f ∈ {0.0, 5/6, 5/3}. For the second setting, the test data correspond to four modes, f ∈ {0.0, 5/8, 5/4, 15/8}. We compare our DAMS with the results of re-training from scratch on the test data. Empirical distributions of the samples and kernel density estimates are plotted in Figure 13. The first setting takes about 3.4K iterations to find the three modes in the posterior with re-training, while it takes about 0.8K iterations with meta adaptation. For the second setting, it takes more than 3.6K iterations to find the four modes with training from scratch, while it takes about 0.9K iterations with meta adaptation. For both test tasks, the sampler with re-training misses at least one mode compared with meta-sampler adaptation at the same number of iterations. We can see that DAMS can adapt the training posterior to the test posterior much faster than re-training from scratch due to effective uncertainty adaption, obtaining more than 3× speedups.
Figure 11: Comparison among different samplers on adapting to the mixture of 20 Gaussians. Top to bottom row: DAMS, SVGD, SGLD and NNSGHMC.

We compare the predictive uncertainty of DAMS with that of SGHMC and NNSGHMC by exploring the posterior of the network parameters and estimating the uncertainty on out-of-distribution data samples. We train the different algorithms on the MNIST dataset and estimate the entropy of the predictive distribution on the notMNIST dataset. We use the empirical CDF of the entropy to evaluate the uncertainty. Since the probability of observing a high-confidence prediction should be low, curves nearer to the bottom right of the figure indicate better uncertainty estimates.
The predictive distribution of the trained model is expected to be uniform over the notMNIST digits, as the samples from this dataset come from unseen classes. The BNN is a CNN with 16 and 50 filters, whose kernel sizes are 5 and 5 for the two convolutional layers, respectively.
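As a minimal illustration of this evaluation protocol (model training and data loading are omitted; the predictive probabilities are assumed to already be averaged over posterior weight samples), the predictive entropy and its empirical CDF could be computed as follows.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each predictive distribution; probs has shape (n_examples, n_classes)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def empirical_cdf(values, grid):
    """Fraction of values below each threshold in grid (the empirical CDF curve described above)."""
    return np.array([(values <= t).mean() for t in grid])

# toy usage: probs would come from averaging the task network's softmax outputs
# over posterior samples of the weights, evaluated on notMNIST inputs
probs = np.random.dirichlet(np.ones(10), size=1000)
ent = predictive_entropy(probs)
cdf = empirical_cdf(ent, grid=np.linspace(0.0, np.log(10), 50))
```

A model with well-calibrated uncertainty on out-of-distribution inputs concentrates its entropy values near log(10), so its CDF curve stays low until the right edge of the plot.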