Dataset schema (column name, type, observed value range):

  paper_id            string, length 19-21
  paper_title         string, length 8-170
  paper_abstract      string, length 8-5.01k
  paper_acceptance    string, 18 classes
  meta_review         string, length 29-10k
  label               string, 3 classes
  review_ids          sequence
  review_writers      sequence
  review_contents     sequence
  review_ratings      sequence
  review_confidences  sequence
  review_reply_tos    sequence
iclr_2019_HJflg30qKX
Gradient descent aligns the layers of deep linear networks
This paper establishes risk convergence and asymptotic weight matrix alignment --- a form of implicit regularization --- of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_iv_i^T; (iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^T u_i| -> 1. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network --- the product of its weight matrices --- converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
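Restated symbolically (our notation, paraphrasing claims (ii) and (iii) above; here u_i(t) and v_i(t) denote the top left and right singular vectors of the i-th weight matrix W_i(t) along the gradient-flow path):

```latex
% Claims (ii) and (iii) from the abstract, in symbols (notation assumed):
\[
  \lim_{t\to\infty}\Bigl\|\tfrac{W_i(t)}{\|W_i(t)\|_F} - u_i(t)\,v_i(t)^{\top}\Bigr\|_F = 0,
  \qquad
  \lim_{t\to\infty}\bigl|v_{i+1}(t)^{\top}u_i(t)\bigr| = 1 .
\]
```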
accepted-poster-papers
This paper studies the behavior of the weight parameters of linear networks trained on separable data with strictly decreasing loss functions. For this setting the paper shows that the gradient descent solution converges to the max-margin solution and that each layer converges to a rank-1 matrix, with consecutive layers aligned. All reviewers agree that the paper provides novel results for understanding the implicit regularization effects of gradient descent for linear networks. Despite the paper's limitations, such as studying networks with linear activations, analyzing gradient descent with impractical step sizes, and assuming the data is linearly separable, the reviewers find the results useful and a good addition to the existing literature.
train
[ "Bkg-LaQ5hQ", "SkxCoNJ9C7", "rklqDEk5AQ", "rJgrymKX0X", "Hke838NVhm", "rkxYrG9_T7", "r1gjyf9ua7", "SJefSe9OTm", "rygrfa8shQ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "In this work the authors prove several claims regarding the inductive bias of gradient descent and gradient flow trained on deep linear networks with linearly separable data. They show that asymptotically gradient descent minimizes the risk, each weight matrix converges to its rank one approximation and the top singular vectors of two adjacent weight matrices align. Furthermore, for the logistic and exponential loss the induced linear predictor converges to the max margin solution. \n\nThis work is very interesting and novel. It provides a comprehensive and exact characterization of the dynamics of gradient descent for linear networks. Such strong guarantees are essential for understanding neural networks and extremely rare in the realm of non-convex optimization results. The work is a major contribution over the paper of Gunasekar et al. (2018) which assume that the risk is minimized. The proof techniques are interesting and I believe that they will be useful in analyzing neural networks in other settings.\n\nRegarding Lemma 3, the proof is not clear. Lemma 8 does not exist in the paper of Soudry et al. (2017). It is also claimed that with probability 1 there are at most d support vectors. How does this relate with assumption 3, which implies that there are at least d support vectors?\n\n-------Revision---------\n\nThank you for the response. I have not changed the original review.\n", "We have uploaded a minor revision of our paper:\n- We have adjusted our concluding \"Summary and Future Directions\" section to highlight the need for convergence rates and practical step sizes.\n- Due to a discussion with AnonReviewer2, we have foreshadowed our use of prior work in our \"Related Work\" subsection.\n- Due to a discussion with AnonReviewer1, we have included the exact version number of (Soudry et al., 2017), and now refer to their \"Lemma 12\" rather than \"Lemma 8\".\n- We have fixed a few typos.\n\nWe thank the reviewers for their time and feedback!", "We thank the reviewer for their comments. As detailed in our \"common response\", we have updated our \"Related Work\" and \"Summary and Future Directions\" in response to your comments. Thank you for your time!", "Thanks for your response. I have changed my review and I think your paper is above the bar.", "Summary:\nThis paper studies the properties of applying gradient flow and gradient descent to deep linear networks on linearly separable data. For strictly decreasing loss like the logistic loss, this paper shows 1) the loss goes to 0, 2) for every layer the normalized weight matrix converges to a rank-1 matrix 3) these rank-1 matrices are aligned. For the logistic loss, this paper further shows the linear function is the maximum margin solution.\n\nComments:\nThis paper discovers some interesting properties of deep linear networks, namely asymptotic rank-1, and the adjacent matrix alignment effect. These discoveries are very interesting and may be useful to guide future findings for deep non-linear networks. The analysis relies on many previous results in Du et al. 2018, Arora et al. 2018 and Soudry et al. 2017 authors did a good job in combing them and developed some techniques to give very interesting results. \nThere are two weaknesses. First, there is no convergence rate. Second, the step size assumption (Assumption 5) is unnatural. If the step size is set proportional to 1/t or 1/t^2 does this setup satisfies this assumption? 
\n\nOverall I think there are some interesting findings for deep linear networks and some new analysis presented, so I think this paper is above the bar.\nHowever, I don't think this is a strong theory people due to the two weakness I mentioned.", "We thank the reviewer for their time and careful comments.\n\nWe disagree that \"all the analyses have appeared in previous papers\". We wish to communicate with the reviewer during this feedback phase in order to come to a consensus on this comment, and subsequently update the submission to accurately present what is new and what is old in the analysis.\n\nTo start this discussion, we clarify how our analysis goes beyond what was known.\n(1) We first argue that Theorem 1 (and analogously Theorem 3) sharply depart from prior work. In particular, the tools from (Arora et al., 2018; Du et al., 2018) are only used at the beginning of Lemma 2, and moreover Lemma 2 is not nearly strong enough to prove Theorem 1: first, it is still possible for the iterates to get trapped in saddle points, or more generally in a bounded domain; second, even if the iterates grow unboundedly, the risk may still not converge to zero. These problems are handled in the proofs of Lemma 1 and Theorem 1 respectively, using techniques which have not previously appeared. \n(2) Theorem 2 (and Theorem 4) also depart from prior work. We invoke a lemma of Soudry et al. (2017) in our technical Lemma 3; otherwise, the proofs of Lemma 4 and Theorem 2 are new. Indeed, the work of Soudry et al. (2017) is for linear predictors, whereas we consider deep linear networks.\n\nOn a separate note, we agree that producing rates and practical step sizes would be ideal. However, the analysis of gradient flow is already an interesting stepping stone, indeed one which is the main topic of prior work (Arora et al., 2018; Du et al., 2018). Our step sizes are not standard, but we note that they can be computed easily via the expression for beta(R) given in Lemma 5.\n\nWe thank the reviewer once again, and look forward to further comments!", "We thank the reviewer for their time and careful comments.\n\nThe correct reference for \"Lemma 8\" of Soudry et al. (2017) is either Lemma 8 of their ICLR submission ( https://openreview.net/forum?id=r1q7n9gAb ), alternatively Lemma 12 in their current (as of March 18) arxiv version ( https://arxiv.org/abs/1710.10345v3 ). We do not require the full strength of this lemma; we only need all support vectors to have positive dual variables with probability 1. While this lemma is a property of support vectors, our Assumption 3 is on the relation between support vectors and nonsupport vectors; we do not necessarily need the support vectors to span the whole space, it is enough if they span the same space as the data, even if this is a subspace of dimension smaller than the ambient dimension.\n\nWe thank the reviewer for their support. We believe that our techniques will be helpful in understanding nonlinear networks, and that alignment results there will help with other problems, for instance generalization.\n\nWe will be following and responding to comments throughout this feedback phase, and welcome all further comments from the reviewer!", "We thank the reviewer for their time and careful comments.\n\nWe agree with the reviewer's criticisms. We hope to work with nonlinear networks, practical step sizes, and provide rates in follow-up work.\n\nWe thank the reviewer for their support. 
As we mentioned to AnonReviewer1, we believe these tools can also help in the analysis of nonlinear networks, and these alignment results can then be used to derive refined generalization bounds.\n\nWe thank the reviewer once again, and invite them to provide further comments during this feedback period!", "This paper analyzes the asymptotic convergence of GD for training deep linear network for classification using smooth monotone loss functions (e.g., the logistic loss). It is not a breakthrough, but indeed provides some useful insights.\n\nSome assumptions are very restricted: (1) Linear Activation; (2) Separable data. However, to the best of our knowledge, these are some necessary simplifications, given current technical limit and significant lack of theoretical understanding of neural networks.\n\nThe contribution of this paper contains multiple manifolds: For Deep Linear Network, GD tends to reduce the complexity:\n(1)\tConverge to Maximum Margin Solution;\n(2)\tTends to yield extremely simple models, even for every single weight matrix.\n(3)\tWell aligned means handle the redundancy.\n(4)\tExperimental results justify the implication of the proposed theory.\n\nThe authors use gradient flow analysis to provide intuition, but also present a discrete time analysis.\n\nThe only other drawbacks I could find are (1) The paper only analyze the asymptotic convergence; (2) The step size for discrete time analysis is a bit artificial. Given the difficulty of the problem, both are acceptable to me." ]
[ 9, -1, -1, -1, 6, -1, -1, -1, 7 ]
[ 4, -1, -1, -1, 5, -1, -1, -1, 4 ]
[ "iclr_2019_HJflg30qKX", "iclr_2019_HJflg30qKX", "rJgrymKX0X", "rkxYrG9_T7", "iclr_2019_HJflg30qKX", "Hke838NVhm", "Bkg-LaQ5hQ", "rygrfa8shQ", "iclr_2019_HJflg30qKX" ]
iclr_2019_HJfwJ2A5KX
Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output. Our approach is based on an importance sampling scheme that judiciously defines a sampling distribution over the neural network parameters, and as a result, retains parameters of high importance while discarding redundant ones. We leverage a novel, empirical notion of sensitivity and extend traditional coreset constructions to the application of compressing parameters. Our theoretical analysis establishes guarantees on the size and accuracy of the resulting compressed network and gives rise to generalization bounds that may provide new insights into the generalization properties of neural networks. We demonstrate the practical effectiveness of our algorithm on a variety of neural network configurations and real-world data sets.
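As a rough illustration of the sampling scheme sketched in this abstract (a simplified sketch under our own assumptions: nonnegative weights, strictly positive activations, and all function and variable names are ours; the paper's algorithm additionally handles mixed signs by splitting positive and negative contributions), one neuron's incoming edges could be subsampled as follows:

```python
import numpy as np

def sparsify_neuron(w, activations, m, rng=None):
    """Simplified sensitivity-based sparsification of one neuron's incoming weights.

    w           -- (d,) incoming weight vector (assumed nonnegative here)
    activations -- (n_S, d) activations of the previous layer on a small data subset S
                   (assumed strictly positive here)
    m           -- number of edge samples to draw
    Returns a sparse w_hat with E[<w_hat, a>] = <w, a>.
    """
    rng = rng or np.random.default_rng(0)
    contrib = w[None, :] * activations                    # per-edge contributions to z = <w, a>
    # Empirical sensitivity: worst-case relative share of the pre-activation over S.
    sens = np.max(contrib / contrib.sum(axis=1, keepdims=True), axis=0)
    q = sens / sens.sum()                                 # importance-sampling distribution
    idx = rng.choice(w.size, size=m, p=q, replace=True)   # sample edges by sensitivity
    w_hat = np.zeros_like(w)
    np.add.at(w_hat, idx, w[idx] / (m * q[idx]))          # reweight to keep the estimate unbiased
    return w_hat
```

Because each sampled edge is reweighted by 1/(m q_j), the sparse estimate is unbiased while concentrating the budget on high-sensitivity edges, which is the intuition behind the data-dependent guarantees discussed in the reviews below.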
accepted-poster-papers
The reviewers and AC note that the strengths of the paper include a) an interesting neural network compression algorithm with provable guarantees (under some assumptions), and b) a solid experimental comparison with existing *matrix sparsification* algorithms. The AC's main concern with the experimental part of the paper is that it does not outperform or match the performance of "vanilla" neural network compression algorithms such as Han et al. '15. The AC decided to suggest acceptance but also strongly encourages the authors to clarify that the algorithms compared against do not include state-of-the-art compression methods.
train
[ "Ske32v9VhX", "B1go__H9hX", "S1eKa7phpX", "S1eRFQph6m", "SJeLVQ63TQ", "H1gbAMphaX", "HkeeA0minm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Given an additively decomposable function F(X, Q) = sum_over_x_in_X cost(x, Q), one can approximate it using either random sampling of x in X (unbiased, possibly high variance), or using importance sampling and replace the sum_over_x with a sum_over_coreset importance_of_a_point * cost(x, Q) which if properly defined can be both unbiased and have low variance [1]. In this work the authors consider the weighted sum of activations as F and suggest that for each neuron we can subsample the incoming edges. To construct the importance sampling strategy the authors adapt the classic notion of sensitivity from the coreset literature. Then, one has to carefully balance the approximation quality from one layer to the next and essentially union bound the results over all layers and all sampled points. The performed analysis is sound (up to my knowledge).\n\nPro:\n- I commend the authors for a clean and polished writeup.\n- The analysis seems to be sound (apart from the issues discussed below)\n- The experimental results look promising, at least in the limited setup.\n\nCon:\n- There exists competing work with rigorous guarantees, for example [2].\n- The analysis hinges on two assumptions which, in my opinion, make the problem feasible: having (sub) exponential tails allows for strong concentration results, and with proper analysis (as done by the authors), the fact that the additively decomposable function can be approximated given well-behaving summands is not surprising. The analysis is definitely non-trivial and I commend the authors for a clean writeup.\n- While rigorous guarantees are lacking for some previous work, previously introduced techniques were shown to be extremely effective in practice and across a spectrum of tasks. As the guarantees arguably stem from the assumptions 1 and 2, I feel that it’s unfair to not compare to those results empirically. Hence, failing to compare to results of at least [2, 3] is a major drawback of this work.\n- The result holds for n points drawn from P. However, in practice the network might receive essentially arbitrary input from P at inference time. Given that we need to decide on the number of edges to preserve apriori, what are the implications?\n- The presented bounds should be discussed on an intuitive level (i.e. the number of non zero entries is approximately cubic in L).\n\nI consider this to be a well-executed paper which brings together the main ideas from the coreset literature and shows one avenue of establishing provable results. However, given that no comparison to the state-of-the-art techniques is given I'm not confident that the community will apply these techniques in practice. On the other hand, the main strength -- the theoretical guarantees -- hinge on the introduced assumptions. As such, without additional empirical results demonstrating the utility with respect to the state-of-the-art methods (for the same capacity in terms of NNZ) I cannot recommend acceptance.\n\n[1] https://arxiv.org/abs/1601.00617\n[2] papers.nips.cc/paper/6910-net-trim-convex-pruning-of-deep-neural-networks-with-performance-guarantee\n[3] https://arxiv.org/abs/1510.00149\n\n\n========\nThank you for the detailed responses. Given the additional experimental results and connections to existing work, I have updated my score from 5 to 6. 
", "The authors propose to reduce the size of fully connected neural networks, defined as the total number of nonzeros in the weight matrices, by calculating sensitivity scores for each incoming connection to a neuron, and randomly keeping only some of the incoming connections with probability proportional to their share of the total sensitivity. They provide a specific definition for the sensitivity scores and establish that the sparsified neural network, with constant probability for any sample from the training population, provides an output that is a small multiplicative factor away from the output of the unsparisfied neural network. The cost of the sparsification is essentially the application of the trained neural network to a small number of data points in order to compute the sensitivity scores\n\nPros:\n- the method works empirically, in that their empirical evaluations on MNIST, CIFAR, and FashionMNIST classification problems show that the drop in accuracy is lower when the neural net is sparsified using their CoreNet algorithm and variations than when it is randomly sparsified or the neural network size is reduced by using SVD.\n- theory is provided to argue the consistency of the sparsified neural network\n\nCons:\n- no comparison is made to the baseline of using matrix sparsification algorithms on the weight matrices themselves. I do not see why CoreNet should be expected to perform empirically better than simply using e.g. the entry-wise sampling scheme from \"Near-optimal entrywise sampling for data matrices\" by Achlioptas and co-authors, or earlier works addressing the same problem of sparsifying matrices.\n- the theory makes very strong assumptions (Assumptions 1 and 2) that are not explained or justified well. Both depend on the specific weight matrices being sparsified, and it isn't clear a priori when the weight matrices obtained from whatever optimization procedure was used to train the neural net will be such that these assumptions hold.\n- despite the suggestions of the theory, the accuracy drop can be quite large in practice, as in the CIFAR panel of Figure 1\n\nI think the ICLR audience will appreciate the attempt to provide a principled approach to decreasing the size of neural networks, but I do not think this approach is widely compelling as :\n(1) no true guaranteed control on the trade-off between accuracy loss and network size is available\n(2) empirically the method does not perform well consistently\n(3) comparisons with reasonable and informative baselines are missing\n\nUpdated in response to author response: the inclusion of experimental comparisons with linear algebraic sparsification baselines, showing that the proposed method can be significantly more accurate, strengthens the appeal of the method.", "We are grateful for the detailed and thorough review of our paper, and thank the reviewer for the constructive feedback. \n\n1) Thank you for pointing out the related work [7]. We would like to highlight that [7] solves a convex optimization (in fact, a proxy to ||W||_0 by instead minimizing ||W||_1) to promote sparsity and consequently, in comparison to our work, does not give an explicit tradeoff between the resulting sparsity of the network and the approximation accuracy. Furthermore, the results of [7] only apply to approximating the output of the neural network with respect to the input training points X, whereas our compressed network provably approximates the output of the neural network for any point randomly drawn from the data distribution. 
Moreover, the bounds provided by the paper are for the link-normalized network where the matrices are normalized to have ell_1 norm equal to 1. This implies that, as noted by the authors, their error guarantees should be multiplied by ||W||_1 (which can be arbitrarily large) in order to map them back to appropriate guarantees for the original network.\n\n2) Competing work, such as those mentioned by AnonReviewer1, also imposes assumptions (e.g., [7] as well as [9]) to ensure sufficiently small sampling complexity. We would like to emphasize that we impose Assumptions 1 and 2 solely to rule out pathological instances in which we cannot approximate the sensitivity of each edge or the \\Delta of each neuron using a small-sized (~logarithmic in 1/\\delta) set of data points S \\subseteq P. If the desired failure probability \\delta is sufficiently large and many data points are available for use in constructing S, then Assumptions 1 and 2 are not necessary. Furthermore, we believe it is important to highlight that our assumptions are satisfied for a variety of real-world data sets and quantities (e.g., for Asm. 2: all bounded random variables are subgaussian and hence subexponential) and distributions (e.g., for Asm. 1: traditional distributions such as uniform, normal, gamma, defined on [0,1] or [0, M] for M <= 1, among others, satisfy this assumption).\n\nWe would also like to note that our assumptions can be made significantly milder and more general. In particular, the constant log(\\eta \\eta^*) for the upper bound of K in Assumption 1 and \\lambda in Assumption 2 can be replaced by a general constant C > 1, and our sampling complexities (for the size of S and edge sampling complexity m in Alg. 1 and Alg. 2, respectively) would then simply be an expression containing C instead of log(\\eta \\eta^*). \n\n3) Thank you for your reference to the related work. We would like to remark that [8] is based predominantly on heuristics and point out that the methods mentioned in the related work (such as [8]) are synergistic to our methods and can be used as a post- and/or preprocessing step in conjunction with our method. Furthermore, we would like to mention that the work of [8] is more concerned with reducing the storage requirements of the resulting compressed network (e.g., by Huffman coding), whereas our approach not only reduces storage requirements (by promoting sparsity), but also improves inference time complexity (via sparse linear algebra algorithms at inference time). We would like to investigate these prospective research directions and improvements to our algorithm in future work.\n\n4) Finally, we would like to clarify that the number of points does not have to be fixed a priori. More generally speaking, our bound provides a probabilistic guarantee that any point that is input into the network is correctly approximated with probability 1-\\delta. This holds for any randomly drawn data point. 
If, for example, we want to obtain an approximation guarantee for any set of n randomly drawn points, then taking \\delta’ = \\delta/n in our sampling complexity bounds in conjunction with a straightforward application of the union bound yields the desired approximation guarantee with probability at least 1 - \\delta for the set of n points (see Corollary 12 - Generalized Network Compression, in the Appendix).\n\n[7]: papers.nips.cc/paper/6910-net-trim-convex-pruning-of-deep-neural-networks-with-performance-guarantee\n[8]: https://arxiv.org/abs/1510.00149\t\n[9]: \"Near-optimal entrywise sampling for data matrices\" by Achlioptas et al.\n", "We thank the reviewer for the in-depth review of our paper and the helpful reference to prior work on matrix sparsification by entrywise sampling. \n\n1) The work of [5] on matrix sparsification is similar to our work in the sense that the aim is to approximate a weight matrix W by its sparse counterpart \\hat W, but differs significantly from our end goal of generating \\hat W to provably approximate the neuron’s value. In other words, the overarching goal of our compression is to approximate z = W*a entry-wise using a sparse matrix \\hat W, i.e., \\hat z = \\hat W * a, whereas the focus of [6] is to compute a sparse \\hat W such that the normed difference ||\\hat W - W|| < epsilon. Without too much effort, one can see how, depending on the structure/distribution of the activation a, the weight-based sampling methods may fail for approximating the value z = W*a, despite the normed difference ||\\hat W - W|| being small. Our insight is that rather than taking a data-oblivious approach to sparsification (i.e., simply generating \\hat W that is close to W in some sense), we explicitly consider the input data distribution in our sensitivity computations to generate a more informed sampling distribution of the weight entries that specifically considers the end goal of approximating the neuron’s value. In other words, since we tailor the sparsification of the weight matrix to the underlying data distribution (using our new notion of empirical sensitivity), we obtain a more informed sparsification procedure that yields better performance. We refer the reader to our new figures in our revision that compare our algorithm to state-of-the-art entrywise matrix methods. \n\nWe would also like to remark that [5] imposes a set of 3 “Data matrix” assumptions (see Definition 4.1 of [5]), which clearly do not hold for weight matrices of a neural network and thus render the sparsification approach of [5] inapplicable. Despite this, as mentioned above, we included comparisons to other state-of-the-art matrix sparsification methods that are based on entrywise sampling in our current revision.\n\n2) We would like to highlight that these assumption rule out pathological instances in which we cannot approximate the sensitivity of each edge or the \\Delta of each neuron using a small-sized set of data points S \\subseteq P. In particular, we impose Assumption 1 to ensure that a subset of data points S of size roughly logarithmic in 1/delta can be used to obtain accurate approximations of the sensitivity of each edge. Assumption 2 is imposed to ensure that the same small sized set S can be used to approximate the quantity Delta, which represents the ratio of the positive and negative decompositions of the objective value z = <w,x>. 
Finally, we remark that defining the sampling complexity in terms of the Delta term is in line with related work, such as that of constructing coresets for logistic regression [6], where a complexity measure \\mu(X) (analogous to our \\Delta) is defined to be the maximum ratio of positive to negative contributions to the objective value and is used to quantify the sampling complexity. We will clarify the exposition of our assumptions and intuition behind our assumptions and better explain the intuition behind them in our final submission.\n\n3) You are correct that the performance of the compression method varies depending on the data set. Nevertheless, competing compression methods (including the recently added comparisons) exhibit very similar performance variations between architectures and data sets. We believe this highlights the importance of the relation between the dataset and the network architecture, which is accurately reflected in our data-dependent and network-dependent sampling bounds, such as the sum of sensitivities and the value of Delta. We also believe that the fact that the sampling complexity can be determined on the fly by simply inspecting the pre-computed values of sensitivity (and considering their sum) and Delta is a strength of our approach. Since, these quantities can shed light on which parts/layers of a neural network are important to retain to preserve the network’s output, and in a sense, provide interpretability of the neural network’s components.\n\n4) The trade-off between accuracy loss and network size can be readily computed from the available sampling complexity bounds of our entry-wise approximation guarantee (Lemma 3 & Theorem 4). In particular, the accuracy of the network is captured by the margin between the most likely output neuron and the remaining neurons. We can therefore pick a margin and obtain a bound in terms of the sampling complexity.\n\n5) We would like to clarify that the failure probability of our algorithm is not constant, and in fact is exponentially small in the edge sample size and the size of the input points used to compute the sensitivity (S). \n\n[5]: \"Near-optimal entrywise sampling for data matrices\" by Achlioptas et al.\n[6]: https://arxiv.org/abs/1805.08571", "Thank you for your insightful comments and constructive feedback. Please find our specific comments below.\n\n1) We would like to clarify that the focus of the paper is to provide a sampling-based compression technique using coresets that is simultaneously practical and provably correct. Towards this end, our coresets-based method provides entry-wise guarantees on the output of the neural network for points drawn from the input data distribution. As we noted in the related works section, this guarantee is significantly stronger than those of Arora et al. because their guarantees are only norm-based and only hold for the initial set of training points. This implies that their compressed network is not ensured to provably approximate the original net for, e.g., points coming from a test set, which significantly limits the applicability of their approach. Another difference to note is that Arora et al. 
use a JL-based approach whereas we use coresets to conduct the compression.\n\t\nMoreover, as we mentioned in our response to AnonReviewer1, our entry-wise guarantee enables the user to explicitly control the trade-off between classification accuracy loss and the network size, which is a functionality that norm-based bounds on the output cannot provide.\n\n2) We agree that this is insightful to better understand the relationship between different network architectures, the data, and the compressibility of the network, and that including plots of these would be illuminating. In order to keep the length of our current submission in line with that of a conference paper, we intend to include the pertinent plots and a discussion about the distributions of the weights and the sensitivity of the parameters of each layer in future work.\n\n3) Our revision contains comparisons to multiple state-of-the-art matrix sparsification techniques, as mentioned in our General Response. The focus of [4] is to reduce the amount of data required (data compression) to compute an approximately optimal least-squares solution, which significantly differs -- due to inherent differences in problem structure and objective functions -- from the problem of sparsifying the parameters of neural network matrices to approximately preserve the network’s output. \n\n[4]: Near-optimal Coresets For Least-Squares Regression by Boutsidis et al.", "We thank all the reviewers for their useful suggestions and careful consideration of our paper. We believe that the quality and exposition of the paper has been significantly improved thanks to the reviewers’ feedback. Your feedback has raised several points regarding our contributions that we would like to clarify.\n\nOur paper proposes a principled approach for provably compressing neural networks using an importance sampling scheme that is defined by leveraging a novel concept of empirical sensitivity. Constructing an importance sampling distribution using empirical sensitivity is particularly appealing because it is fast to compute using a small subset of the available data points and most importantly because it captures the relative importance of the weights w on the pre-activation value z = <w, a>. This enables us to analyze and bound the approximation guarantee of our sparsification algorithm with respect to the desired output z, rather than providing norm-based matrix bounds -- as prior approaches such as those based on SVD and/or other matrix sparsification methods do [1,2,3].\n\nWe are also thankful for the references to additional baseline and state-of-the-art methods provided by the reviewers. Our revision contains the results of additional experiments that compare the performance of our coreset-based algorithm to three state-of-the-art matrix sparsification algorithms with provable guarantees [1,2,3]. \n\n[1]: Drineas, Petros, and Anastasios Zouzias. \"A note on element-wise matrix sparsification via a matrix-valued Bernstein inequality.\" Information Processing Letters 111.8 (2011): 385-389.\n[2]: Achlioptas, Dimitris, Zohar Karnin, and Edo Liberty. \"Matrix entry-wise sampling: Simple is best.\" KDD 2013.1.1 (2013): 1-4.\n[3]: Kundu, Abhisek, and Petros Drineas. \"A note on randomized element-wise matrix sparsification.\" arXiv preprint arXiv:1404.0320 (2014). https://arxiv.org/abs/1404.0320 ", "In this work the authors improve upon the work of Arora et al. 
mainly with respect to one aspect, i.e.,\nThey provide eps-approximation of a fully connected neural network output neuron-wise. The idea of \ncompression is very natural and has been explored by various previous works (key refs are cited). Intuitively,\nthe number of effective parameters is significantly less than the number of parameters in the neural network.\nThe authors introduce the notion of the coreset that is suitable for compressing the weight parameters \nin definition 1. Their main result is stated as Theorem 4. Finally, the authors experiment on standard benchmarks, \nperform a careful experimental analysis (i.e., they ensure fairness of comparison between methods such as \nSVD and the rest). It would be interesting to see the histogram/distribution of the weights per layer and at an aggregate level\nfor the datasets used. Also, in the light of the recent results of Arora et al. that show that the signal out of a layer\nis correlated with the top singular values, how would coresets\ndeveloped in the numerical linear algebraic community (e.g., Near-optimal Coresets For Least-Squares Regression \nby Boutsidis et al.) perform, even as an experimental heuristic compared to the proposed method?" ]
[ 6, 7, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2019_HJfwJ2A5KX", "iclr_2019_HJfwJ2A5KX", "Ske32v9VhX", "B1go__H9hX", "HkeeA0minm", "iclr_2019_HJfwJ2A5KX", "iclr_2019_HJfwJ2A5KX" ]
iclr_2019_HJgXsjA5tQ
On the loss landscape of a class of deep neural networks with no bad local valleys
We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero. This implies that these networks have no sub-optimal strict local minima.
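To make the network class concrete, a minimal PyTorch-style sketch is given below. The sizes, the two-hidden-layer backbone, and the particular placement of the skip connections are illustrative assumptions; the paper's class is broader (an arbitrary DAG of layers, several admissible activations, and skip connections allowed from any hidden layer).

```python
import torch
import torch.nn as nn

class SkipToOutputNet(nn.Module):
    """Toy instance of the network class: a feedforward net in which a pool of
    hidden units is additionally connected directly to the output logits.
    All sizes and the use of softplus are illustrative assumptions."""

    def __init__(self, d_in=784, d_hidden=512, n_classes=10, n_skip=256):
        super().__init__()
        self.layer1 = nn.Linear(d_in, d_hidden)
        self.layer2 = nn.Linear(d_hidden, d_hidden)
        self.out = nn.Linear(d_hidden, n_classes)
        # Direct (skip) connections from the first n_skip units of layer 1 to
        # the output; the theory requires at least N skip-connected hidden
        # units, where N is the number of training samples.
        self.skip = nn.Linear(n_skip, n_classes, bias=False)
        self.n_skip = n_skip
        self.act = nn.Softplus()

    def forward(self, x):
        h1 = self.act(self.layer1(x))
        h2 = self.act(self.layer2(h1))
        return self.out(h2) + self.skip(h1[:, : self.n_skip])
```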
accepted-poster-papers
This paper introduces a class of deep neural nets that provably have no bad local valleys. By constructing a new class of network this paper avoids having to rely on unrealistic assumptions and manages to provide a relatively concise proof that the network family has no strict local minima. Furthermore, it is demonstrated that this type of network yields reasonable experimental results on some benchmarks. The reviewers identified issues such as missing measurements of the training loss, which is the actual quantity studied in the theoretical results, as well as some issues with the presentation of the results. After revisions the reviewers are satisfied that their comments have been addressed. This paper continues an interesting line of theoretical research and brings it closer to practice and so it should be of interest to the ICLR community.
train
[ "Hkg3N5wTnQ", "rke_QE95RQ", "BJx5sWN5hX", "HylgTfXc0X", "SkgOtfWD0m", "SJentufURQ", "r1erFqJMAQ", "rylHw4Tx0X", "rkeg9UyaTX", "ByeYv_JTaX", "HkgeJDjn6Q", "BkeL8Gi26Q", "Ske7SH5q37" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper shows that a class of deep neural networks have no spurious local valleys –--implying no strict local-minima. The family of neural networks studied includes a wide variety of network structure such as (a variant of) DenseNet. Overall, this paper makes some progress, improving previous results on over-parametrized networks. \n\nPros: The flexibility of the network structure is an interesting point.\nCons: CNN was covered in previous related works (so weight sharing is not a new contribution); DenseNet is not explicitly covered in this work (I mean current DenseNet does not have N skip-connections to output; correct me if wrong). \n The simulation part is not that clear, and I have a few questions that I hope the authors can answer. \n\nSome comments/suggestions:\n1) Training error needs to be discussed.\n Page 8 says “This effect can be directly related to our result of Theorem 3.3 that the loss landscape of skip-networks has no bad local valley and thus it is not difficult to reach a solution with zero training error”. This relation is not justified. The implication of Thm 3.3 is that getting zero training error is easier, but the tables are only for test error. Showing training error is the only way to connect to Thm 3.3. I expect to see a high training error for C-10, original VGG and sigmoid activation functions, and zero training error for both skip-SGD (rand) and skip-SGD (SGD). \n This paper has no theory on generalization, thus if a whole section is just about “investigating generalization error”, then the connection to theoretical parts is weak --btw, one connection is the comparison of two algorithms, which fits the context well, and thus interesting (though comparison result itself probably not surprising). \n\n2) Data augmentation.\n “Note that the rand algorithm cannot be used with data augmentation in a straightforward way and thus we skip it for this part.” Why? \n With data augmentation, is M still larger than N? If yes, then the number of added skip connection is different for C-10 and C-10-plus, which is not mentioned in the instruction of Table 2. \n\n3)It may be better to mention explicitly that \"it is possible to have bad local min\" –perhaps in abstract and/or introduction. \n --Although “no sub-optimal strict local minima” is mentioned, readers, especially non-optimizers, might not notice \"strict\".\n --In fact, in the 1st round read, I do not have a strong impression of \"strict\". Later I realized it. Mentioning this can be helpful. \n\n4) Some references I suggest to include:\n [R1] Yu, X. and Chen, G. On the local minima free condition of backpropagation learning. 1995. --related work. \n [R2] Lu, H., Kawaguchi, K. Depth creates no bad local minima. 2017. --also deep nets.\n [R3] Liang, S., Sun, R., Li, Y., & Srikant, R. \"Understanding the loss surface of neural networks for binary classification.\" 2018. --Also study SoftPlus neurons.\n [R4] Nouiehed, M., & Razaviyayn, M. Learning Deep Models: Critical Points and Local Openness. 2018. --also deep nets. \n\nMinor questions:\n --Exact 10% test accuracy for a few cases. Why exact 10%?\n", "Thank you very much for your positive feedback and all the helpful comments so far.", "This paper presents a class of neural networks that does not have bad local valleys. The “no bad local valleys” implies that for any point on the loss surface there exists a continuous path starting from it, on which the loss doesn’t increase and gets arbitrarily smaller and close to zero. 
The key idea is to add direct skip connections from hidden nodes (from any hidden layer) to the output.\n\nThe good property of loss surface for networks with skip connections is impressive and the authors present interesting experimental results pointing out that\n* adding skip connections doesn’t harm the generalization.\n* adding skip connections sometimes enables training for networks with sigmoid activation functions, while the networks without skip connections fail to achieve reasonable performance.\n* comparison of the generalization performance for the random sampling algorithm vs SGD and its connection to implicit bias is interesting.\n\nHowever, from a theoretical point of view, I would say the contribution of this work doesn’t seem to be very significant, for the following reasons:\n* In the first place, figuring out “why existing models work” would be more meaningful than suggesting a new architecture which is on par with existing ones, unless one can show a significant performance improvement over the other ones.\n* The proof of the main theorem (Thm 3.3) is not very interesting, nor develops novel proof techniques. It heavily relies on Lemma 3.2, which I think is the main technical contribution of this paper. Apart from its technicality in the proof, the statement of Lemma 3.2 is just as expected and gives me little surprise, because having more than N hidden nodes connected directly to the output looks morally “equivalent” to having a layer as wide as N, and it is known that in such settings (e.g. Nguyen & Hein 17’) it is easy to attain global minima.\n* I also think that having more than N skip connections can be problematic if N is very large, for example N>10^6. Then the network requires at least 1M nodes to fall in this class of networks without bad local valleys. If it is possible to remove this N-hidden-node requirement, it will be much more impressive.\n\nBelow, I’ll list specific comments/questions about the paper.\n* Assumption 3.1.2 doesn’t make sense. Assumption 3.1.2 says “there exists N neurons satisfying…” and then the first bullet point says “for all j = 1, …, M”. Also, the statement “one of the following conditions” is unclear. Does it mean that we must have either “N satisfying the first bullet” or “N satisfying the second bullet”, or does it mean we can have N/2 satisfying the first and N/2 satisfying the second?\n* The paper does not describe where the assumptions are used. They are never used in the proof of Theorem 3.3, are they? I believe that they are used in the proof of Lemma 3.2 in the appendix, but if you can sketch/mention how the assumptions come into play in the proofs, that will be more helpful in understanding the meaning of the assumptions.\n* Are there any specific reasons for considering cross-entropy loss only? Lemma 3.2 looks general, so this result seems to be applicable to other losses. I wonder if there is any difficulty with different losses.\n* Are hidden nodes with skip connections connected to ALL m output nodes or just some of the output nodes? I think it’s implicitly assumed in the proof that they are connected to all output nodes, but in this case Figure 2 is a bit misleading because there are hidden nodes with skip connections to only one of the output nodes.\n* For the experiments, how did you deal with pooling layers in the VGG and DenseNet architectures? Does max-pooling satisfy the assumptions? 
Or the experimental setting doesn’t necessarily satisfy the assumptions?\n* Can you show the “improvement” of loss surface by adding skip connections? Maybe coming up with a toy dataset and network WITH bad local valleys will be sufficient, because after adding N skip connections the network will be free of bad local valleys.\n\nMinor points\n* In the Assumption 3.1.3, the $N$ in $r \\neq s \\in N$ means $[N]$?\n* In the introduction, there is a sentence “potentially has many local minima, even for simple models like deep linear networks (Kawaguchi, 2016),” which is not true. Deep linear networks have only global minima and saddle points, even for general differentiable convex losses (Laurent & von Brecht 18’ and Yun et al. 18’).\n* Assumption 3.1.3 looked a bit confusing to me at first glance. You might want to add some clarification such as “for example, in the fully connected network case, this means that all data points are distinct.”", "I appreciate the authors for their efforts in revising the paper. Many of my concerns are addressed throughout the revision/feedback process, and I think the paper is now in a better shape. \n\nI'll edit the rating accordingly.", "We thank the reviewer for the response and further comments on the presentation issue of our experimental results.\n\nWe have updated the paper accordingly by taking into account both comments of the reviewer together. Regarding comment 2), we removed the column of data-augmentation in Table 2, and moved them to the appendix for interested readers. We then used this space to show the training accuracy of all models, which is recommended by the reviewer in comment 1). We hope that this becomes more clear now.\n\nWe thank the reviewer again and we welcome further comments on our paper.\n\n", "Overall, I think this paper is quite nontrivial since a rigorous mathematical proof is indeed the interesting part and often quite difficult, and the idea of having flexible skip connections is interesting. But perhaps it is less than a breakthrough due to prior related work on CNN. \n\nI'd like to thank the authors for the effort in improving the paper. My concerns are partially but not fully addressed, as explained below.\n\n 1) As I said, \"This paper has no theory on generalization, thus if a whole section is just about test error, then the connection to theoretical parts is weak.\" It is good that the authors add the training error table Table 4, but Table 4 appears in the appendix. I have to compare Table 2 and Table 4 a few times, when I re-read the paper. Isn't it better to put Table 4 in the main body? That may be a hard choice, as some parts need to moved into the appendix. But having Table 2 and 4 separately is strange. In fact, from a theoretician's perspective, having solely Table 2 in the main body while having Table 4 in the appendix is fine (though some practitioners don't think so). Anyway, having both may be better. \n In addition, \"The training error is zero in all cases, except when the original VGG models are used with sigmoid activation function\" is inconsistent with Table 4, which shows for SoftPlus the training accuracy is also 10%. After comparing with Table 2, I noticed it is probably due to typo. All SoftPlus results in Table 4 should be 100. 
These typos probably won't appear if Table 4 is near Table 2.\n\n(2) I am not satisfied with this explanation on data augmentation.\n First, there are two types of augmentation: \"at each training iteration the network uses additional examples\" refers to online augmentation; increasing the dataset size and use them in all iterations is off-line augmentation. Clearly, for off-line augmentation, N is increased.\n Second, note that for SGD, some statisticians often refer to one-pass over all data, while many optimizers often refer to multi-passes. In other words, for online augmentation, these statisticians would count additional data (even just used once) into N. \n It is not clear why the authors need to include the experiments with data augmentation in Table 2. For the purpose of illustrating their point, experiments with data augmentation are not necessary --this is a theory paper after all. From a theory perspective, it may break the assumption. The easiest way to fix is just to remove the columns on data augmentation. If not, it requires further explanation such as \"yes, it does not satisfy the assumption, but we just want them to be more comprehensive\", or \"simple data changes do not affect the training much, so it is close to theory\". Anyhow, none of them is very satisfying for me.\n \n ", "Thank you for the quick response.\n\n\"First of all, I still believe it is weird that the assumptions are never used *explicitly* anywhere in the main text. The paper makes some assumptions and never uses them directly in the main text. I would suggest the authors to at least add a “proof sketch” paragraph below Lemma 3.2, and briefly outline the proof while mentioning how the assumptions come into play.\"\n\nWe agree. We have added a proof sketch for Lemma 3.2 and briefly discussed how the assumptions are used now.\n\n\"As for the proof technique part, by “as expected” I meant I would have been more surprised if the set of U with rank-deficient \\Psi(U) had measure greater than zero. This was because in general, rank deficient matrices lie in a set of measure zero, and I’ve seen many results such as “if a hidden layer is wider than N and activation functions have good properties, then some matrix has full rank almost everywhere.”\"\n\nSure. But the problem becomes highly non-trivial when the matrix has very special and sophisticated structure, such as the one analyzed in this paper. Despite of all intuitions, it's still a mathematical problem that needs to be rigorously proved.\n\n\"Unfortunately, however, I can hardly agree that the proof is “elegant” at the moment, especially for Lemma 3.2. There are many steps that makes the proof unnecessarily longer. For example, the very first equation in step 1 is not necessary; you can just start with eq (4). Similarly, I believe that steps 2-5 can be made much more concise. In defining eq (9), why don’t you just start by “for all nodes j in layer l, define all \\alpha_j to be:”? I also don’t understand a few lines above eq (10). Given that the network is not fully connected but a DAG, how can you guarantee that u_j and u_{j’} are of the same size and make them identical? For the softplus case, the choice of \\beta is missing. Without this, how can you make sure that some of the data points fall into the negative side of softplus?\"\n\nFollowing reviewer's suggestion, we have revised/shortened the proof of Lemma 3.2. Please check our revision. Regarding u_j and u_{j'}, we already added further explanation in the proof. 
Basically they need not have the same size because according to our network description in Section 2, only those neurons with the same number of incoming units can have shared weights. For instance, it's fine to have on the same layer two neurons with weights (1,0,0) and other two neurons (0,0,1,0). The bias for softplus is mentioned now. The \\beta variable is defined in the beginning, so basically we use the same value of \\beta as in the first case.", "First of all, I would like to appreciate the authors for their extensive efforts in revising and improving the paper.\n\nI think most of my concerns were more or less addressed, except for the “assumptions” and “proof technique” parts.\n\nFirst of all, I still believe it is weird that the assumptions are never used *explicitly* anywhere in the main text. The paper makes some assumptions and never uses them directly in the main text. I would suggest the authors to at least add a “proof sketch” paragraph below Lemma 3.2, and briefly outline the proof while mentioning how the assumptions come into play.\n\nAs for the proof technique part, by “as expected” I meant I would have been more surprised if the set of U with rank-deficient \\Psi(U) had measure greater than zero. This was because in general, rank deficient matrices lie in a set of measure zero, and I’ve seen many results such as “if a hidden layer is wider than N and activation functions have good properties, then some matrix has full rank almost everywhere.”\n\nUnfortunately, however, I can hardly agree that the proof is “elegant” at the moment, especially for Lemma 3.2. There are many steps that makes the proof unnecessarily longer. For example, the very first equation in step 1 is not necessary; you can just start with eq (4). Similarly, I believe that steps 2-5 can be made much more concise. In defining eq (9), why don’t you just start by “for all nodes j in layer l, define all \\alpha_j to be:”? I also don’t understand a few lines above eq (10). Given that the network is not fully connected but a DAG, how can you guarantee that u_j and u_{j’} are of the same size and make them identical? For the softplus case, the choice of \\beta is missing. Without this, how can you make sure that some of the data points fall into the negative side of softplus?\n\nI agree that there are some interesting techniques used in constructing the parameter U. However, the main theoretical contribution (proof of Lemma 3.2) is hidden in the appendix, which many readers will end up skipping. My current score is based on the main text, and at least in my opinion, the main text itself doesn’t reveal anything particularly interesting.", "Thank you very much for the detailed feedbacks. Below are answers to your comments/questions in the order that they appear.\n\n* \"In the first place, figuring out “why existing models work” would be more meaningful than suggesting a new architecture which is on par with existing ones, unless one can show a significant performance improvement over the other ones.\"\n\nWe absolutely agree that understanding why existing models work is what one desires to achieve in the end. But to reach that point, one has to start somewhere, and make progress continually. This is the reason for the existence of a bunch of recent work on this topic:\n\nA. Choromanska, M. Hena, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. 2015.\nI. Safran and O. Shamir. On the quality of the initial basin in overspecified networks. 2016.\nB. D. Haeffele and R. Vidal. 
Global optimality in neural network training. 2017.\nH. Lu, K. Kawaguchi. Depth creates no bad local minima. 2017.\nM. Hardt and T. Ma. Identity matters in deep learning. 2017.\nC. Yun, S. Sra, and A. Jadbabaie. Global optimality conditions for deep neural networks. 2017.\nD. Soudry and E. Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. 2017.\nM. Nouiehed and M. Razaviyayn. Learning Deep Models: Critical Points and Local Openness. 2018.\nT. Laurent and J. H. von Brecht. The Multilinear Structure of ReLU Networks. 2018.\nS. Liang, R. Sun, J. D. Lee, and R. Srikant. Adding one neuron can eliminate all bad local minima. 2018.\n\nAt the moment, we are not aware of any previous work which can prove directly strong theoretical results on the loss landscape of \"existing models\" which actually work in practice. Moreover in this paper, we show that the presented class of networks enjoy both strong theoretical properties and good empirical performance. We do not make great claim about the result, but we believe that this is a significant contribution to the literature, especially w.r.t. the recent great effort of the community in trying to make progress on theoretical understanding of deep learning models.\n\n* \"The proof of the main theorem (Thm 3.3) is not very interesting, nor develops novel proof techniques. It heavily relies on Lemma 3.2, which I think is the main technical contribution of this paper. Apart from its technicality in the proof, the statement of Lemma 3.2 is just as expected and gives me little surprise, because having more than N hidden nodes connected directly to the output looks morally “equivalent” to having a layer as wide as N, and it is known that in such settings (e.g. Nguyen & Hein 17’) it is easy to attain global minima.\"\n\nThe proof of our main result is simple and elegant, as also noted by AnonReviewer2. Simple proofs are often generalizable better to complex models. Thus we think that it is actually an advantage of this work. \nCan the reviewer elaborate on why the statement of Lemma 3.2 is just as expected? Given that said, does the reviewer have in mind an easier proof for this lemma? - which we would be very happy to know We would like to note that the class of networks analyzed in this Lemma is quite general and hence the mathematical proof is non-trivial. We agree that one can view the N skip-connections as an implicit wide layer, but this is just an intuition and very weak argument to conclude that the statements are just as expected. There are things that might look \"intuitive\" and \"as expected\" but it's completely wrong, for instance, a deep linear network with N skip-connections to the output does not satisfy our conditions and results if the training data has very low rank.\n\n* \"I also think that having more than N skip connections can be problematic if N is very large, for example N>10^6. Then the network requires at least 1M nodes to fall in this class of networks without bad local valleys. If it is possible to remove this N-hidden-node requirement, it will be much more impressive.\"\n\nWe agree that the current condition on the number of skip-connections is quite strong. But on the other hand, it's not necessarily too restrictive at the level as mentioned by the reviewer. We would like to refer to Table 1 in [1] for some information on the number of neurons of the first layer of several existing networks. 
For instance, the first hidden layer of original VGG-Nets has already more than 3M nodes, and so if one sum up this number for all the hidden layers the total will be much than that. Moreover, in the literature it is common to find theoretical work which requires extremely larger number of neurons than the number of training samples, see e.g. https://openreview.net/forum?id=S1eK3i09YQ which requires N^6 neurons for gradient descent to find a zero training error solution for one hidden layer networks. Nevertheless, we agree with the reviewer that it would be interesting to relax this condition in future work.\n\n[1] Nguyen & Hein. Optimization landscape and expressivity of deep cnns. 2017.", "Answers to specific comments/questions:\n* \"Assumption 3.1.2 doesn’t make sense. Assumption 3.1.2 says “there exists N neurons satisfying…” and then the first bullet point says “for all j = 1, …, M”. Also, the statement “one of the following conditions” is unclear. Does it mean that we must have either “N satisfying the first bullet” or “N satisfying the second bullet”, or does it mean we can have N/2 satisfying the first and N/2 satisfying the second?\"\n\nWe apologize for the typo and confusion. Please check our revision now where we have rephrased this a bit. It is possible to have mixed skip-connections as the reviewer mentioned, but for simplicity at the moment we just require that all the neurons with skip-connections have the same activation functions which satisfy one of our conditions.\n\n* \"The paper does not describe where the assumptions are used...but if you can sketch/mention how the assumptions come into play in the proofs, that will be more helpful in understanding the meaning of the assumptions.\"\n\nAs the reviewer noted, these assumptions are used in the proof of Lemma 3.2, and hence in our main result Theorem 3.3 (though not directly used here). Basically in proving Lemma 3.2, we used our conditions on activation functions to prove that there exists a set of parameters so that the matrix Psi has full rank. Then we use the analytic property of the activation functions together with Lemma A.1 to establish the result on the measure-zero set property. The condition on the training data is used to guarantee that the value of each hidden unit can be chosen to be non-identical for different training samples.\n\n* \"Are there any specific reasons for considering cross-entropy loss only? Lemma 3.2 looks general, so this result seems to be applicable to other losses...\"\n\nThe reviewer is right. Indeed our result holds for other convex loss functions. Please check our extension to this setting in Section C in the appendix. The reason why we presented our main result with cross-entropy loss in the beginning is because we wanted to keep everything simple, and also because this is the loss actually used in practice.\n\n* \"...Figure 2 is a bit misleading because there are hidden nodes with skip connections to only one of the output nodes.\"\n\nYes, they are connected to all the hidden units. We apologize for the confusion in Figure 2 as we thought it might look a bit too dense. Please check our revision now where we have updated the figure. \n\n* \"For the experiments, how did you deal with pooling layers in the VGG and DenseNet architectures? Does max-pooling satisfy the assumptions? Or the experimental setting doesn’t necessarily satisfy the assumptions?\"\n\nIt depends. In general, max-pooling can be used above all the neurons with skip-connections in the network. 
However as the main goal of the experiments is to find out the generalization performance of skip-networks, we did not want to include this part in the paper. Nevertheless, we have added Section G in the appendix to treat this question separately. \n\n* Can you show the “improvement” of loss surface by adding skip connections? Maybe coming up with a toy dataset and network WITH bad local valleys will be sufficient, because after adding N skip connections the network will be free of bad local valleys.\n\nYes. Please check our Section E in the appendix now, where we provide a visual example of the loss landscape of a small network, before and after adding skip-connections. One can easily see that skip-connections to the output help to smooth the loss landscape and get rid of bad local valleys. \n\n* \"In the Assumption 3.1.3, the $N$ in $r \\neq s \\in N$ means $[N]$?\"\nYes. We fixed the typo. Thanks!\n\n* \"In the introduction, there is a sentence “potentially has many local minima, even for simple models like deep linear networks (Kawaguchi, 2016),” which is not true....\"\n\nThe reviewer is right. It's actually an english issue as we meant non-convexity which previously appears before this term. We removed it now in our revision. \n\n* \"Assumption 3.1.3 looked a bit confusing to me at first glance. You might want to add some clarification such as “for example, in the fully connected network case, this means that all data points are distinct.”\"\n\nThanks for another helpful comment. We have updated/improved the statement of this condition a bit. In particular, we require now only the distinctness between the input patches at the same location across different training samples. This is just a subtle change and the current proof of Lemma 3.2 is not affected by this modification. We follow your suggestion by adding the following sentence right below Equation (3): \n\"The third condition is always satisfied for fully connected networks if the training samples are distinct. For CNNs, this condition means that the corresponding input patches across different training samples are distinct.\"", "Thank you very much for the support. Below are our answers to your comments/questions in the order that they appear.\n\nRegarding the failure of original VGG with sigmoid activation, we have added a discussion on this issue under Section F in the appendix (please see also our response to AnonReviewer3 on the 10% accuracy matter).\nBasically, we have observed that the network in this case converges to a constant zero classifier, regardless of our effort in tuning the learning rate. This behavior is actually not restricted to the specific architecture of VGG, but has been shown before as an issue of sigmoid activation when training plain networks with depth > 5, see e.g. [1].\n\nAnswers to minor issues:\nActually the definition of bad local valleys has previously appeared just above Theorem 3.3 in the text. However we follow the reviewer's suggestion by putting this in a formal definition 3.3 now.\n\n\"In proof number 4 (of Theorem 3.3), the statement should be “any *principle* submatrices of negative semi-definite matrices are also NSD”, and it’s not true otherwise. But this typo doesn’t influence the proof.\"\nYes, the reviewer is completely right. We fixed this typo. Thanks!\n\n\"Also, it seems the proof of 3 is somewhat redundant, since local minimum is a special case of your “bad local valley”.\"\nWe agree. 
We keep it there as we wanted to make all our statements and results become clear and as rigorous as possible.\n\n\"It seems the analysis could not possibly be extended to the ReLU activation, since it will break the analytical property of the function. Just out of curiosity, do the authors have some further thoughts on non-differentiable activations?\"\nThank you for an interesting question. At the moment, we do not really have a clear clue how to extend the result to general non-differentiable activations, so this could be an interesting question for future research. \nFor ReLU, we think that it might be possible to exploit the fact that softplus can approximate ReLU arbitrarily well, and so perhaps a limiting argument on their corresponding loss functions can be helpful..\n\n[1] Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio. ICML 2010.", "Thank you for the feedback. Below are answers to your comments/questions by their numbering.\n\n1) We agree with the reviewer about the training error matter. Thus we have added Section F in the appendix to discuss training error in details. As expected, the training error is zero except the case where sigmoid activation is used with original VGGs from Table 2 or original CNN13 from Table 1.\nMoreover, we show in this section that adding skip-connections to the output is also helpful for training extremely deep (narrow) networks with softplus activation. This together show that skip-connections are helpful for training deep networks with both sigmoid and softplus activation. In Section E in the appendix, we provide a visual example of the loss landscape of a small network, before and after adding skip-connections, where one can see that adding skip-connections to the output layer help to smooth the loss surface and get rid of bad local valleys, which is helpful for local search algorithms like SGD to succeed.\n\n2) As described in our experiments, the number of skip-connections is fixed to M=N in both cases (with and without data-augmentation), where N is the size of the original data set. We quote the following sentence from our experimental section for the convenience of the reviewer:\n\"...we aggregate all neurons of all the hidden layers in a pool and randomly choose from there a subset of N neurons to be connected to the output layer...\".\nIn the setting of data-augmentation, at each training iteration the network uses additional examples (randomly) generated from the original dataset, and thus it is not clear in this case how the number of training samples should be defined. That's why we fixed the number of skip-connections in both cases to be the size of the original data set.\n\n3) We agree that this might be overlook by non-optimizers. Nevertheless we want to keep our abstract short and precise. Thus we have added the following sentence in the introduction to make this further clear: \n\"We note that this implies the loss landscape has no strict local minima, but theoretically non-strict local minima can still exist.\"\n\n4) We have included the references suggested by the reviewer, and can add more detailed comparisons if the reviewer think that it's necessary.\n\nRegarding 10% test accuracy, we added a discussion on this issue under Section F in the appendix. Briefly, the reason, as observed in our experiments, is that the network converges quickly to a constant zero classifier (i.e. 
the output of last hidden layer converges quickly to zero), and thus the training/test accuracy converge to 10% and the cross-entropy loss in Equation (2) converges to − log(1/10). We realized later that this is actually a known issue of sigmoid activation when training plain networks with depth > 5, as pointed out earlier by Glorot & Bengio [1].\n\n[1] Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio. ICML 2010.", "The paper analyzes the loss landscape of a class of deep neural networks with skip connections added to the output layer. It proves that with the proposed structure of DNN, there are uncountably many solutions with zero training error, and the landscape has no bad local valley or local extrema. \n\nOverall I really enjoy reading the paper. \nThe assumptions to aid the proof are very natural and much softer than the existing literature. As far as I’m concerned, the setting is very close to real deep neural networks and the paper is a breakthrough in the area. The experiments also consolidate that the theoretical settings are natural and useful, namely, with enough skip connections and specially chosen activation functions. \nThe presentation of the paper is intuitive and easy to follow. I’ve also checked all the proof and think it’s brilliantly and elegantly written. \n\nMy only complaint is about the experiments. As we all know that both VGG and the sigmoid activation are commonly used DL tools, and why do they fail to generalize when used together? Does the network fail to converge or is it overfitting? The authors should try tuning the parameters and present a proper result. With that said, since the paper is more about theoretical findings, this issue doesn’t influence my recommendation to accept the paper.\n\n\nMinor issues:\nI think it’s better to formally define “bad local valley” somewhere in the paper. From what I read, the definition of “bad local valley” is implied by the abstract and in the proof of Theorem 3.3(2), but I did not find a formal definition anywhere else. \nIn proof number 4 (of Theorem 3.3), the statement should be “any *principle* submatrices of negative semi-definite matrices are also NSD”, and it’s not true otherwise. But this typo doesn’t influence the proof. \nAlso, it seems the proof of 3 is somewhat redundant, since local minimum is a special case of your “bad local valley”. \nIt seems the analysis could not possibly be extended to the ReLU activation, since it will break the analytical property of the function. Just out of curiosity, do the authors have some further thoughts on non-differentiable activations?\n" ]
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_HJgXsjA5tQ", "HylgTfXc0X", "iclr_2019_HJgXsjA5tQ", "r1erFqJMAQ", "SJentufURQ", "BkeL8Gi26Q", "rylHw4Tx0X", "rkeg9UyaTX", "BJx5sWN5hX", "BJx5sWN5hX", "Ske7SH5q37", "Hkg3N5wTnQ", "iclr_2019_HJgXsjA5tQ" ]
iclr_2019_HJgd1nAqFX
DOM-Q-NET: Grounded RL on Structured Language
Building agents to interact with the web would allow for significant improvements in knowledge understanding and representation learning. However, web navigation tasks are difficult for current deep reinforcement learning (RL) models due to the large discrete action space and the varying number of actions between the states. In this work, we introduce DOM-Q-NET, a novel architecture for RL-based web navigation to address both of these problems. It parametrizes Q functions with separate networks for different action categories: clicking a DOM element and typing a string input. Our model utilizes a graph neural network to represent the tree-structured HTML of a standard web page. We demonstrate the capabilities of our model on the MiniWoB environment where we can match or outperform existing work without the use of expert demonstrations. Furthermore, we show 2x improvements in sample efficiency when training in the multi-task setting, allowing our model to transfer learned behaviours across tasks.
accepted-poster-papers
This paper considers the task of web navigation, i.e., given a goal expressed in natural language, the agent must navigate web pages by filling in fields and clicking links. The proposed model uses reinforcement learning, introducing a novel extension where the graph embedding of the pages is incorporated into the Q-function. The results are sound, and the paper is overall well-written. The reviewers and AC note the following potential weaknesses. The primary concern that was raised was the novelty. Since the task could potentially be framed as semantic parsing, reviewer 4 mentioned there may be readily available approaches for baselines that the authors did not consider. The comparison to semantic parsing required a more detailed discussion, pointing out not only the differences but also the similarities, which would encourage the two communities to explore novel approaches to their tasks. Further, reviewer 2 was concerned about the limited novelty, given the extensive work that combines GNN and RL, such as NerveNet. The authors provided comments and a revision to address these issues. They described why it is not trivial to formulate their setup as a semantic parsing problem, partly due to the fact that the environment is partially observable. Similarly, the authors described the differences between the proposed approach and methods like NerveNet, such as the use of a dynamic graph and off-policy RL, making the latter not a viable baseline for the task. These changes addressed most of the concerns raised by the reviewers. The reviewers agreed that this paper should be accepted.
test
[ "rkg8QY8cRX", "rylDIqT7Tm", "rkeKXXLqRQ", "r1lIlRSqA7", "r1xtL6BqAQ", "BJgtJsUOAX", "Bygx-rVqhX", "B1gKBxIw0Q", "B1e-onPBAX", "rJlLUUOBC7", "BkebhIuBRX", "S1xqrTDrAX", "H1xisrtn27" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "A minor point, just in case it's helpful (apologies if you already know this): one of the main goals of writing a related work section like this is to get the authors of that related work interested in what you're doing, to convince them to try your methods. So, e.g., \"we do something similar to the knowledge graph embedding of Krishnamurthy et al, but we do it better for these reasons\" might make those authors look at and possibly use your work, where they otherwise might never know about it. I'd look at writing a semantic parsing related work section as an opportunity to appeal to those people who are working on similar problems and expand the influence of your paper (especially as the main contributions here seem to be modeling contributions, and those are the ones most easily transferable between semantic parsing and RL).", "Caveat: I am an emergency reviewer filling in for someone that fell through on their commitment to review for ICLR. The framing of this paper is quite outside my typical area, so I am not super familiar with the related work here, nor do I have time to get familiar with it for this last-minute review. \n \nThis paper presents a new model for deep reinforcement learning on web pages, where the system is given a goal (stated in text) and is supposed to interact with the web page (through clicking and entering text) in order to achieve that goal. The supervision is a positive reward when the sequence of actions taken matches the goal. The novel model presented in this paper is a modular Q function that incorporates graph embeddings of the web page's DOM, as well as similarity scores between elements in the DOM with words in the goal.\n \nJust judging the presentation of the paper, it looks sound. The methods seem reasonable (very similar to methods that are known to work well on related problems; more on that below), and the experiments look to be well done. The paper is reasonably well written. I don't know the RL community well enough to know how impactful this particular piece of work would be there - it's a new model architecture, basically, that gives improved performance. I'd probably give a similar paper in my area a 3.5-4 out of 5 for an ACL conference. The one major drawback I see in this paper is that it is _so_ similar to work on semantic parsing, but doesn't realize it.\n \nI am not a \"reinforcement learning\" researcher, though I am a \"semantic parsing\" researcher. The problem statement in this paper reads to me exactly like a semantic parsing problem: map a piece of text to a statement in some formal language. In this case, the \"statement\" is a sequence of actions on the DOM of a web page. The web page is possibly unseen at test time (the particulars of the data setup weren't totally clear to me), so the model has to be able to handle linking words in the sentence to pieces of the DOM in a way that doesn't rely on having seen those DOM elements during training. This setup seems almost identical to the WikiTableQuestions dataset (Pasupat and Liang 2015), which has seen several RL-inspired works recently (e.g., https://arxiv.org/abs/1807.02322). 
The way that the authors propose to use attention scores in the \"global module\" is _very_ similar to the linking mechanism proposed by Krishnamurthy, Dasigi and Gardner (EMNLP 2017) for WikiTableQuestions, and the way that the \"word-token selection\" only allows words in the goal sentence is very reminiscent of Chen Liang's language for parsing questions in WikiTableQuestions, which has similar restrictions for similar reasons.\n \nI think the main difference between what we call \"weakly-supervised semantic parsing\" and what you call \"deep reinforcement learning\" is that semantic parsing leverages the fact that we know the language we're parsing into, so we don't need to use model-free RL methods like Q-learning. We know the model, so we can be much smarter about learning. Again, I'm not super familiar with the tasks you're looking at here, but I'm pretty sure there are much better _supervised_ learning techniques that you could apply to these problems.\n \nAll of this is to say that the methods proposed here look _very_ similar to methods that have been studied for quite a while in the semantic parsing literature (I gave only recent references above, but the basic problems go back decades; e.g., http://aclweb.org/anthology/P09-1010, or http://www.cs.utexas.edu/~ml/papers/senior-aaai-2008.pdf). Yet this paper only cites recent deep RL papers. I think the authors would benefit greatly from familiarizing themselves with this literature. I think the semantic parsing community would also benefit from this, as there are surely ideas in the deep RL community that we could benefit from, too. But the two communities don't really talk to each other much, it seems, even though in some cases we are working on _very_ similar problems.\n \nSo, to summarize: the paper seems reasonable enough. I'm guessing that the RL community would find it at least moderately interesting, and it appears well written and well executed. My one concern is that it's totally oblivious to the fact that it's sitting right next to a well-established literature that could probably teach it a thing or two about mapping language to actions.\n\n\n--------------\n\nAfter seeing the authors engage at least a little with the related semantic parsing literature, I've increased my score to a 7.", "Thank you, this was much more of what I was looking for in my initial review. I agree with you that there are substantial differences (e.g., I hadn't considered the partially observable environment, thanks for pointing that out), but there are also some very close similarities, and reinforcement learning methods are starting to intersect more with semantic parsing (in addition to the citations you already have, here's another good one: https://arxiv.org/abs/1704.07926). Both of our fields would benefit from more discourse, and all I was hoping for was for you to engage a little bit with this literature. You've done that in what you just wrote. I don't think there's any need to mention or cite things that aren't relevant to your paper just because I mentioned it, but I think some distilled version of what you have here with the most relevant bits would be a nice section to add to the paper.\n\nI've increased my score to a 7.", "We appreciate your prompt reply. We are sorry that our update in the background section for semantic parsing did not show relevance to our work, and we decide to temporarily revert the part of our revision and remove this section and its appendix 6.6.7. 
As “reinforcement learning” researchers, we tried our best to familiarize ourselves to semantic parsing and make relevant connections to our work within this period. Since previous works for our problem setup [4, 7] also did not mention connections with semantic parsing, this concern is novel, and we hope to explore more on the connections in the later revision of the paper. Based on our understandings, we still believe there are some major differences in what we are doing and what semantic parsing is doing. \n\n[Concern] \"The methods developed for WTQ are __very__ similar to the methods you developed in your paper\"\n[Reply] We respectfully disagree with this statement. The problem formulations and methods developed for WTQ and the web navigation tasks in our paper are significantly different. Here, we provide four major differences:\n\n1) Act in partially observable environment vs fully observed knowledge graph\nIn WTQ, we aim to learn an agent that can map language to action given the FULL access to the knowledge graph. As the agent can query the knowledge graph freely in the WQT tasks, there is no need for exploring the knowledge graph to gather previously unseen information. As a result, actions like argmax and count can be performed easily on the given knowledge graph. (E.g. R[λx[Year.Date.x]].argmax(Country.Greece,Index), operation argmax can be performed only when the full knowledge graph is observed )\n\nOn contrary, our problem formulation assumes the agent lives in a partially observed world. At each timestep, the input to the agent is ONE of the many web pages it can 'navigate' to on a website. Most information contained in the website is NOT present on the current webpage. To solve a language query, the agent has to search for relevant contents by visiting other web pages. For example, in the MiniWoB social media task, the episode starts with page 1 out of 10 twitter pages. To answer the query, our agent needs to use the Twitter interface to flip through the pages. The input web page changes after each click/type action. In other words, our problem formulation is to map language to __a sequence of__ actions. After each action, the input web pages or knowledge graph changes. Therefore, we formulated the web navigation problem as a sequential decision-making process due to the partially observable environment. This differs from the single decision-making problems formulated by all the methods developed for WTQ using semantic parsing.\n\nSolving with partially observable knowledge is essential to web navigation as it will require significant works in crawling all the web pages to provide the full knowledge.\n\n\n2) base level primitive of click/type vs argmax/sort/...etc\nTo solve WTQ tasks, all the approaches assume a domain-specific grammar(e.g. Lisp domain specific language) [1 2 3 5] and specific domain knowledge. In [1], the authors design grammars that contain type-constraints to associate entities to their correct types in the generated logical forms, e.g., disallow the action \"country = 120\" with the scope that pre-defines typed variable bindings. [2] also shows the need for having code assistance by eliminating syntactically or semantically invalid choices. A drawback of these existing methods developed for WTQ is that they still require a great deal of human supervision. The role of hand-crafted grammars is crucial in WTQ yet also limits its general applicability to many different domains. 
In [5], the authors build semantic parsers for 7 domains, hand-engineered a separate grammar for each domain.\n\nDesigning high-level domain specific grammar may lead to faster learning for some problem domains but will restrict/bias the agent by human-engineered rules. Previous work with DOMNET[4] did design minimalistic formal language to restrict its action space. we argue learning to act using the low-level primitives, e.g. ‘click’ and ‘type’, shared among many tasks allows the agent to transfer learned ‘high-level’ behaviors to unseen domains/websites. This is shown in the improvements in sample efficiency when we train a single model to solve many of the MiniWoB tasks through shared low-level primitives ‘click’, ‘type’, see Sec 4.2 on multi-task learning. \n", "3) Model architecture difference\nKnowledge graph embedding used in [1] is a bag-of-word-vectors model that aggregates the information by simply summing over all the word vector in the neighborhood of an entity with a human engineered rule. This is similar to our model using only the local and global module. We found that the current state-of-the-art method[2] for WQT adopts the Neural Symbolic Machines Framework[6] that is a variant of the seq2seq encoder-decoder model. \n\nOur proposed model incorporates graph neural networks that learn a flexible nonlinear message passing function over the entire input graph converted from raw HTML. We show that having the message passing phase, e_nieghbor module is crucial in solving many of the MiniWoB tasks as shown in the ablation study Sec 4.3. \nThis is simply because many ‘entities’ in web like ‘checkboxes’ have exactly the same embeddings and multi-step message passing is required to propagate the text values, Figure 1. As far as we can tell, there are no prior works on WTQ using graph neural networks.\n\n4) Training objective function: marginal log-likelihood vs Q learning\nIn semantic parsing, [1] first generates a set of logical forms that execute to the correct answer, and enumerates those correct forms for each example and use the sum of log-likelihood of generating those correct forms as objective.\nIn our setup, we aim to solve the problem of dynamic programming directly with Q learning to minimize the TD error.\n\nIn summary, there are some similarities between WTQ tasks and a reinforcement learning approach to web navigation in terms of using word embeddings and deep neural models. However, our proposed problem formulation and model architecture is significantly different than the methods that were previously developed on WTQ.\n\n\n[1] Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1516–1526. Association for Computational Linguistics, 2017. doi: 10.18653/v1/D17-1160. URL http://aclweb.org/anthology/D17-1160.\n\n\n[2] Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, and Ni Lao. Memory augmented policy optimization for program synthesis with generalization. arXiv preprint arXiv:1807.02322, 2018\n\n[3] Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. arXiv preprint arXiv:1508.00305, 2015.\n\n[4] Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In ICLR, 2018.\n\n[5] Y. Wang, J. Berant, and P. Liang, “Building a Semantic Parser Overnight,” Proc. 
53rd Annu. Meet. Assoc. Comput. Linguist. 7th Int. Jt. Conf. Nat. Lang. Process. (Volume 1 Long Pap., pp. 1332–1342, 2015.\n\n\n[6] Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision.\narXiv preprint arXiv:1611.00020, 2016.\n\n[7] Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In ICML, 2017.\n", "I appreciate the attempt to put in a section of related work on semantic parsing, but my goal was not to get you just to cite work on WikiTableQuestions (WTQ), but to see the similarities between what you are doing and what semantic parsing does. It doesn't appear that you've actually accepted those similarities, and the section you added (section 2.5) feels out of place in the paper, because it doesn't discuss how that work is related to the current paper.\n\nTo be more concrete: the reason I brought up WTQ was not because it operated on HTML tables. It was because it mapped language to actions in unseen contexts, and methods developed for WTQ are _very_ similar to the methods you developed in your paper. You say here that you \"avoid the challenge of designing the base-level primitive logical forms\" - actually, the action space that you have _is_ the \"base-level primitive logical form\" language in this context. You have the same basic task description, just with a language of clicks and text boxes instead of counts and argmaxes.\n\nIf anything, the updates I see in the paper make me less inclined to recommend an \"accept\", because they look like they were an attempt to appease a grumpy reviewer with additional irrelevant citations. The issue is not navigating HTML vs. answering questions, and section 6.6.7 is entirely unnecessary - I know the differences between the two, as would anyone who is familiar with them. That section adds nothing to the paper. The issue is higher level than that. The broader area that you're operating in is \"mapping language to actions\", with navigating web pages just one particular instance of this very general, well-studied problem. In order to contribute to this literature, you need to understand how your work relates to the literature. It turns out that much of your contributions can already be found there, if you know how to look, and you aren't situating your work in the context of what others have already done. This is a problem.", "DOM-Q-NET:
GROUNDED RL ON STRUCTURED LANGUAGE \n\nThis paper presents a somewhat novel graph-based Q-learning method for web navigation benchmark tasks. The authors show that multi-task learning helps in this case and their method is able to learn without BC as previous works have needed. While this work is interesting and to my knowledge somewhat novel. I concerns with one aspect of the evaluation. In some part it was stated that they show the highest success rate for testing on 100 episodes, if this is indeed the maximum success rate, it is unclear if these results are misleading or not. It is possible that there was a lucky seed in those 100 episodes leading to a higher max that is not representative of the algorithm performance. Also, please have the submission proof-read for English style and grammar issues. There are many minor mistakes, some of which are pointed out below. I am rating marginally below due mainly to the potentially misleading results from the comment on using the highest success rate to report results and to a minor extent due to the novelty aspect (though this is an interesting application).\n\n\nComments:\n\n- “Evaluation metric: we plot the moving average of the reward for last 100 episodes, and report the highest success rate for testing on 100 episodes.” —> This is unclear, do you mean you only displayed the maximum success rate out of all 100 episodes? So if the success rates are [0, 100, 0, 0, 0], Figure 2 shows 100% success? If so, this is somewhat misleading and a better metric may have been the average success rate with confidence intervals. Otherwise you may have just gotten a lucky random seed potentially.\n- I would’ve liked to see if this is the only method which benefits from multitask learning or do DOMNETs also benefit. This however, is just a nice to have.\n- I appreciate the inclusion of hyper parameters and commitment to releasing the code in an effort to promote reproducibility! Great job there. \n- I really like the idea of using graph networks with RL, though I’m not sure if it’s novel to this work. Interesting line of work!\n- While this is an interesting application, I’m not sure about the novelty. I suggest spending a bit more time discussing how this work contrasts with methods like Wang et al., or others cited here.\n\nTypos:\n\n“MiniWoB(Shi et al., 2017) benchmark tasks. “ —> missing space between citation\n“Q network architecture with graph neural network” —> with a graph neural network\n\"MiniWoB(Shi et al., 2017)” —> MiniWoB (Shi et al., 2017) (missing space)\n“achieved the state” —> achieved state of the art \n“2016; Wang et al., 2018)as main” —> missing space\n“series of attentions between DOM elements and goal” —> series of attention (modules?) between the DOM elements and the goal (?)\n“constrained action set” —> constrained action sets\n“In appendix, we define our criteria for difficulties of different tasks.” —> In the appendix", "Thank you for your reply. I think this clears up some concerns I had, and I appreciated the added detail in the paper/appendix. While I still have minor concerns about novelty, I believe the new text helps clear up most of this. I've updated my original rating as such. ", "Dear reviewer,\nWe thank the reviewer for the valuable comments in pointing out the connection between our work and semantic parsing. We have updated the background section to discuss the semantic parsing methods to solve the problem like Question Answering from manipulating the data on HTML tables. 
\n\nWe would like to emphasize that the main focus of this work is to train an end-to-end RL agent to directly interact with any standard web browser through mouse clicking and typing. Our direct approach avoids the challenge of designing the base-level primitive logical forms in semantic parsing for web navigation. \n\nThe problem setup of mapping language to click/type action was introduced in the MiniWoB[7], which is a set of standard benchmark environments for web agents. The authors of [7], in fact, found that reinforcement learning approaches often significantly outperform supervised learning on these benchmark tasks. We have added a new illustration in Figure 2 to further clarify our problem setup.\n\n[Concern] “This setup seems almost identical to the WikiTableQuestions dataset (Pasupat and Liang 2015), which has seen several RL-inspired works recently”\n[Reply] We appreciate pointing out the relevant works, and we explained the problem setup of this dataset [5] as well as the difference between this task and web navigation in the background sec2.5 and the appendix 6.7 for further details. In short, WikiTableQuestions[5] provides structured HTML tables that only contain the text attributes from the original HTML page. To execute the parsed logical form, an executor is provided. However, MiniWoB[7] environment has a set of more diverse web pages beyond tables. The agent needs to understand raw HTMLs that contain text fields, buttons and checkboxes. Each MiniWoB task is given by natural language goal instructions on different web pages. This RL environment only accepts basic actions like “click DOM indexed i”, “Type a string on DOM indexed i”. So we took a direct approach that allows our agent to control clicking and typing. which avoids the challenge of designing rich primitives and formal language for web navigation. \n\n[Concern] “The problem statement in this paper reads to me exactly like a semantic parsing problem”\n[Reply] Our main goal is to train an agent end-to-end to directly click the DOMs and type strings on the standard browser. We are converting natural language to a sequence of clicking and typing actions that a browser can execute.\n“Map a piece of text to a statement in some formal language” - This is what we hope to avoid because the standard browser only accepts “which DOM to type/click”, “what to type” as valid actions, so we cannot have a complex logical form as an output of the model. For web navigation, it is not trivial to design formal language and the primitives of the formal language. However, we noted in the section 2.6 that Liu et al [6] defined their minimalistic formal language to constrain the exploration, but they still use RL to perform the same set of actions as ours to a standard browser. The focus of our work is to learn an end-to-end RL agent that can act directly in a web browser. Empirically, we found our end-to-end agent matches and outperforms (for some tasks) the models augmented with formal language studied in Liu et al[6].\n\n[1] Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1516–1526. Association for Computational Linguistics, 2017. doi: 10.18653/v1/D17-1160. URL http://aclweb.org/anthology/D17-1160.\n\n[2] Satchuthananthavale RK Branavan, Harr Chen, Luke S Zettlemoyer, and Regina Barzilay. Reinforcement learning for mapping instructions to actions. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pp. 82–90. Association for Computational Linguistics, 2009.\n\n[3] Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, and Ni Lao. Memory augmented policy optimization for program synthesis with generalization. arXiv preprint arXiv:1807.02322, 2018\n\n[4] Raymond J Mooney. Learning to connect language and perception. In AAAI, pp. 1598–1601, 2008. \n\n[5] Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. arXiv preprint arXiv:1508.00305, 2015.\n\n[6] Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In ICLR, 2018.\n[7] Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In ICML, 2017.", "Dear reviewer,\nThank you for taking the time to review our paper.\nWe appreciate the valuable comments that improve the readability and the clarity of the paper and we will incorporate all the changes and fix the typos in our latest revision.\n\n[Concern1] “Unclear evaluation metric”\n[Reply] Our experimental protocol follows the previous works on the same environment[1, 4]. We report the success rate of the 100 test episodes at the end of the training once the agent has converged to its highest performance on the training episodes. The final success rates reported in Figure 2 of the original submission were averaged across 4 different random seeds/runs. We apologize for the poor wording that has been corrected in our latest revision. \n\nIn detail, what we did was to evaluate the RL agent after training for a fixed number of frames depending on the difficulty of the task. In the original paper, we mentioned in the appendix that we used three different number of frames {5000, 50000, 200000} for training based on the difficulty {easy, medium, hard} of the 23 tasks in MiniWOB. In the initial experiments, we observed that some tasks were solved with far less number of frames than others due to varying difficulties, so we categorized 23 tasks in three difficulty groups to shorten the experiment time for simpler tasks. This alleviated unnecessary computational cost for a large number of experiments. The results and the plots we presented in the paper, are based on the following number of experiments.\nNumber of experiments = (23(number of tasks) + 9(number of tasks concurrently running in multitask) ) * 4(types of goal encoding) * 4 (minimum num of runs for average) + 2(tasks for ablation study) * 3(discounted model) * 4(minimum num of runs for average)= 536 experiments for one set of hyperparameters. \nFor further details on our experiment protocols, please check the updated \"evaluation metric\" in Sec4.1 and Appendix6.5\nSo our success rate reported in Figure 2 (original paper) is based on the average success rate of 4 runs.\n\n\n[Concern2] “Novelty issue”, \"Lack of comparisons with previous works on GNN+RL\"\n[Reply] To our knowledge, this is the first work that applies graph neural networks (GNNs) to represent the HTML structure in standard web pages. This leads to our novel deep Q-network architecture that incorporates both the goal attention mechanism and the GNN representation to learn the state-action value function for Q learning. 
We appreciate the reviewer to point out the similarity and lack of comparisons with other GNN+RL models, e.g. Nervenet[2]. Unfortunately, previous works on GNN+RL are not directly applicable to our web navigation problem. Please see the following paragraph that has been added to the latest revision for a detailed explanation.\n\n- Our main contribution is to propose a new architecture for parameterizing factorized Q functions using goal attention, local word embeddings and graph neural network(GNN). We also contributed to the formulation of web navigation with this architecture. GNN is one of the components, and we investigated in the ablation study that some tasks need GNN for neural message passing [3] and some tasks do not necessarily need it though the sample efficiency is better with GNN. We also showed how proposed goal attention can be used with GNN for even better sample efficiency when multitasking. Computing the output model of GNN with goal attention is unique in our goal-oriented RL setting with graph state/action representations. Previously proposed Graph Attention Networks [5] uses attention in neural-message passing phase, and is experimented in non-RL settings. In general, GNN is not actively used in RL settings as seen in this comprehensive survey paper of GNN [6], and the previous papers [2, 7] use GNN for mimicking the physical bodies of different robots. We would like to mention some differences when using GNN for representing web pages. ", "There are four key differences that makes NerveNet [2] not applicable to our web navigation task: \n1) In Nervenet [2], the entire policy is parametrized by GNN. In DOM-Q-NET, GNN alone cannot solve some tasks for 100% success rate. We conducted a careful ablation study on the different modules of our proposed Q-network architecture in Figure 4(original paper)/ Figure 5(updated revision). We also conducted the experiments with and without goal attention shown in Figure 2(original paper). The Login-user task and the Social Media task, for example, cannot be solved using GNN component alone for Q_dom network.\n\n2) In Nervenet [2] for locomotion, action is goal independent. In DOM-Q-NET for web navigation, action is goal dependent, which is why goal attention is proposed.\n\n3) In Nervenet [2], graph structure is static across different timesteps within an episode. In DOM-Q-NET, graph structure is dynamic even within an episode because the web page can change after an action.\n\n4) Nervenet [2] learns its controller model using on-policy policy gradient with dense reward from the locomotion control tasks. However, the web navigation tasks are spare reward problems with only 0/1 reward at the end of the episode, DOM-Q-NET uses off-policy Q-learning with replay buffer that is often more efficient against reward sparsity.\n\nWe will make a further clarification in our background section to make this distinction clear.\n\n\n[Concern3] “do DOMNET’s also benefit from multitask learning”\n[Reply] Thank you for bringing this interesting question. DOMNETs [1] take in “key-value goal” as an input in addition to natural language goal. For example, the task with the goal “click A” and the task with the goal “click A and press B” will have different size of inputs; 1 for [“A”] and 2 for [“A”, “B”]. The embeddings for those key-value inputs are directly fed into the network without being aggregated. So the dimension of the weight matrices of the DOMNET is task-dependent. 
It is mentioned in their paper that structured input is needed for workflow policy as a result of using the formal language for constraining actions. Therefore, it is not trivial to extend a single DOMNET model with shared weight matrices to multitask learning.\n \nIn addition, we have investigated whether the model benefits more from multitasking with various methods of incorporating goal information. “Goal-attention” leads to better sample efficiency without any increase in the network size. This is shown in “effectiveness of goal attention”. DOMNET, however, does not have a flexible attention module with GNN to incorporate different goals that may hinder its performance in multitask learning.\n\nPlease let us know if you have any more questions or if there is anything else we can clarify to make you reconsider your rating.\n\n[1] Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In ICLR, 2018.\n[2] Tingwu Wang, Renjie Liao, Jimmy Ba, and Sanja Fidler. Nervenet: Learning structured policy with graph neural networks. In ICLR, 2018.\n[3] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. 2017\n[4] Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In ICML, 2017.\n[5] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In ICLR, 2018\n[6] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi,\nMateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks.\n[7] Hamrick, J., Allen, K., Bapst, V., Zhu, T., McKee, K., Tenenbaum, J., and Battaglia, P. (2018). Relational inductive bias for physical construction in humans and machines. In CogSci, 2018", "Dear reviewer,\nThank you for the positive feedback and taking the time to review our paper. In order to pursue further reproducibility, we clarified our experiment protocols in sec4.1 and the appendix 6.5 for further details. We have also added background sec2.5 and the appendix 6.7 for further details on a task solved by semantic parsing and how it is different from MiniWoB. In addition, a demo of a successful trajectory, figure2 in the revision, is added to further demonstrate our problem setup and the instances for the tuple of actions.\n", "The authors propose a novel architecture for RL-based web navigation to address both of these problems, DOM-Q-NET, which utilizes a graph neural network to represent tree-structured HTML along with a shared state space across multiple tasks. It is believed more flexible to be probed on WorldOfBits environments. Significant improvements are shown by experiment." ]
[ -1, 7, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 1 ]
[ "rkeKXXLqRQ", "iclr_2019_HJgd1nAqFX", "r1lIlRSqA7", "BJgtJsUOAX", "BJgtJsUOAX", "B1e-onPBAX", "iclr_2019_HJgd1nAqFX", "rJlLUUOBC7", "rylDIqT7Tm", "Bygx-rVqhX", "Bygx-rVqhX", "H1xisrtn27", "iclr_2019_HJgd1nAqFX" ]
iclr_2019_HJgeEh09KQ
Boosting Robustness Certification of Neural Networks
We present a novel approach for the certification of neural networks against adversarial perturbations which combines scalable overapproximation methods with precise (mixed integer) linear programming. This results in significantly better precision than state-of-the-art verifiers on challenging feedforward and convolutional neural networks with piecewise linear activation functions.
accepted-poster-papers
The paper addresses an important problem of neural network robustness verification and presents a novel approach outperforming the state of the art; the authors provided detailed rebuttals which clarified their contributions over the state of the art and highlighted scalability; this work appears to be a solid and useful contribution to the field.
train
[ "SJxdpsmL0m", "B1xk_jXURm", "B1eV9q7URm", "HyxBSKmURQ", "S1xULFL9n7", "H1ep3mjh2Q", "BJeGXqR9h7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nQ1. My background is more theoretical, but I'm looking for theorems here, considering the complicatedness of the neural network. All I am looking for is probably some high-level explanation. \n\nR1. RefineAI is a new approach for proving the robustness of neural networks: it is more precise than current incomplete methods and more scalable than current complete methods. We believe this is a difficult problem and RefineAI is a promising step forward.\n\nSome key insights in the paper:\n\nInsight I: expensive but precise techniques like MILP solvers can be used for refinement earlier in the analysis but do not scale for refinement of neurons in later layers. However, they do substantially improve on incomplete verifiers.\n\nInsight II: not all neurons in the network contribute equally to the output and thus we do not need to refine all neurons in a layer. For this, we present a novel heuristic which improves the scalability of our approach while maintaining sufficient precision. \n", "\nQ1. Is MILP-based refinement applicable only for the first few layers of the network?\n\nR1. Generally, such refinement is most effective in the initial layers: as the analysis proceeds deeper into the network, it becomes harder for the MILP solver to refine the bounds within the specified time limit of 1 second. This is due to the increase in the number of integer variables caused by the increase in the number of unstable units (as explained in the general section on unstable ReLU). \n\nQ2. Why is l6 = 0? I think that it is easy to figure out that max(0,x4) is at least 0.\n\nR2. We assume you mean l6=-0.5. The negative lower bound for x6 = ReLU(x4) is due to the Zonotope ReLU transformer shown in Figure 2 which permits negative values for the output. \n\nQ3. I couldn't understand your sentence \"Note that the encoding ...\". Explaining a bit more about how bounds computed in previous layers are used will be helpful.\n\nR3. We mean that both the set of constraints added by the LP encoding (Ehlers (2017)) and the Zonotope transformer (Figure 2) for approximating ReLU behaviour depends on the neuron bounds from the previous layers. The degree of imprecision introduced by these approximations can be reduced by propagating tighter bounds through the network. We will clarify this.\n\nQ4. Do you mean that your algorithm looks into the future layers of each neuron xi and adds the weights of edges in all the reachable paths from xi?\n\nR4. Yes. We consider all outgoing edges from xi and add the absolute values of the corresponding weights.\n\nQ5. Why did you reduce epsilon from 0.07 to 0.02, 0.015 and 0.015?\n\nR5. The 5x100 network is trained using adversarial training and is thus more robust than the other networks which were not obtained through adversarial training. Thus, we chose a higher epsilon for it compared to the other networks (please see the comment in the general section on unstable ReLU).", "\nQ1. The verified robustness percentage of Tjeng & Tedrake is reported but the robustness bound is not.\n\nR1. The epsilon considered for this experiment is reported (page 7) and it is 0.03. \n \nQ2. Can RefineAI handle only piecewise linear activation functions? How about other activations such as sigmoid? If so, what modifications are needed?\n\nR2. RefineAI provides better approximations for ReLU because it uses tighter bounds returned by MILP/LP solvers. 
Similarly, we can refine DeepZ approximations for sigmoid (which already exist) by using better bounds from a tighter approximation, e.g., quadratic approximation.\n\nQ3. How is the verification problem affected by considering the untargeted attack as in this paper vs. the targeted attack in Weng et al (2018) and Tjeng & Tedrake (2017)?\n\nR3. Since the targeted attack is weaker, the complete verifier from Tjeng and Tedrake runs faster and the incomplete verifier from Weng et al. proves more properties in their respective evaluation than it would if they considered untargeted attacks as considered in this paper.\n\nQ4. How tight are the output bounds improved by the neuron selection heuristics? \n\nR4. We observed that the width of the interval for the correctly classified label is up to 37% smaller with our neuron selection heuristic.\n", "\nWe thank the reviewers for their feedback.\n\nBelow is a summary of key points, followed by further elaboration on each point. We also provide individual replies to each reviewer.\n\nSummary points [short]\n\n1. RefineAI is more precise than state-of-the-art incomplete verifiers.\n2. RefineAI is more scalable than existing state-of-the-art complete verifiers, including the latest: https://openreview.net/forum?id=HyGIdiRqtm based on Tjeng & Tedrake.\n3. RefineAI is applicable to much larger networks than shown in the paper.\n4. Effectiveness of verification methods for neural networks is primarily affected by the number of unstable ReLU units, *not* by the number of neurons.\n5. DeepZ [1], the domain used in our paper, is publicly available [3].\n\nWe are happy to provide further results or explanations if requested.\n\nSummary points [longer]\n\n→ RefineAI is more precise than all state-of-the-art incomplete verifiers.\n\nThis is because DeepZ has the same precision as Weng et. al (2018) and Kolter and Wong (2018) while being faster (unlike RefineAI, Weng et al. cannot handle convolutional nets). Then, based on DeepZ results, Refine AI computes more precise results.\n\n→ RefineAI is more scalable than all state-of-the-art complete verifiers, including the latest: https://openreview.net/forum?id=HyGIdiRqtm, based on Tjeng & Tedrake.\n\nThis is because the above method uses Box to compute initial bounds and uses more expensive methods if required. Unfortunately, in deeper layers, Box analysis becomes too imprecise and does not help. As a result, the above approach primarily relies on LP to obtain tight bounds for formulating a MILP instance for the whole network. Determining bounds with LP for all neurons in the larger networks is prohibitive. For example, on the 9x200 network from our paper, determining bounds with LP for all neurons already takes > 20 minutes (without calling the MILP solver which is more expensive than LP) whereas DeepZ computes significantly more precise bounds than Box for deeper layers in few seconds. \n\nThis gives us considerably fewer candidates to refine using LP/MILP than the Box analysis provides. Note that Tjeng & Tedrake (2017) is in turn significantly faster than Reluplex.\n\n→ RefineAI is applicable to much larger networks than shown in the paper.\n\nWe evaluated RefineAI on larger publicly available networks from [3]: three MNIST convolutional networks containing 3,604 (Conv1), 4,804 (Conv2), 34,688 (Conv3) neurons and one skip net containing 71,650 neurons. We also tried a CIFAR10 convolutional network with 4,852 neurons. 
As in the paper, we considered epsilon values for which the precision of DeepZ drops significantly. The performance numbers below show RefineAI scales to larger networks (DiffAI is a particular defense [2]):\n\n Dataset Network Epsilon Adversarial training Avg. runtime(s)\n DeepZ RefineAI\n MNIST\t Conv1 0.1 None 1.1 357 \n Conv2 0.2 DiffAI 6.8 602\n Conv3 0.2 DiffAI 7 1011 \n Skipnet 0.13 DiffAI 163 682\n CIFAR10 Conv 0.012 DiffAI 3.9 262\n\n→ The effectiveness of a verification method for neural networks is primarily affected by the number of unstable ReLU units, *not* by the number of neurons.\n\nThis is because the speed of a complete verifier and the precision of an incomplete verifier are affected mainly by unstable ReLU units: those which can take both + and - values. Indeed, the speed of the MILP solver used in both RefineAI and the method based on Tjeng & Tedrake (2017) is adversely affected by the presolve approximations for such unstable units. \n\nThis explains why defending a network (e.g., via DiffAI) will make any verifier scale better (including RefineAI): because defended networks have much fewer unstable units than undefended networks.\n\n[1] Fast and Effective Robustness Certification, NIPS’18\n[2] Differentiable Abstract Interpretation for Provably Robust Neural Networks, ICML’18.\n[3] DeepZ analysis: https://github.com/eth-sri/eran. ", "This paper introduces a verifier that obtains improvement on both the precision of the incomplete verifiers and the scalability of the complete verifiers. The proposed approaches combines over-parameterization, mixed integer linear programming, and linear programming relaxation. \n\nThis paper is well written and well organized. I like the simple example exposed in section 2, which is a friendly start. However, I begun to lose track after that. As far as I can understand, the next section listed several techniques to be deployed. But I failed to see enough justification or reasoning why these techniques are important. My background is more theoretical, but I'm looking for theorems here, considering the complicatedness of neural network. All I am looking for is probably some high level explanation.\n\nEmpirically, the proposed approach is more robust while time consuming that the AI2 algorithm. However, the contribution and the importance of this paper still seems incremental to me. I probably have grumbled too much about the lack of reasonings. As this paper is purely empirical, which is totally fine and could be valuable and influential as well. In that case, I found the current experiment unsatisfying and would love to see more extensive experimental results. \n", "This paper proposed a mixed strategy to obtain better precision on robustness verifications of feed-forward neural networks with piecewise linear activation functions.\n\nThe topic of robustness verification is important. The paper is well-written and the overview example is nice and helpful. \n\nThe central idea of this paper is simple and the results can be expected: the authors combine several verification methods (the complete verifier MILP, the incomplete verifier LP and AI2) and thus achieve better precision compared with imcomplete verifiers while being more scalable than the complete verifiers. However, the verified networks are fairly small (1800 neurons) and it is not clear how good the performance is compared to other state-of-the-art complete/incomplete verifiers. \n\nAbout experiments questions:\n1. 
The experiments compare verified robustness with AI2 and show that RefineAI can verify more than AI2 at the expense of much more computation time (Figure 3). However, the problem here is how RefineAI or AI2 compares with other complete and incomplete verifiers as described in the second paragraph of the introduction. AI2 does not seem to have publicly available code that readers can try out, but for some of the complete and incomplete verifier papers mentioned in the introduction, I do find public code available:\n* complete verifiers\n1. Tjeng & Tedrake (2017): github.com/vtjeng/MIPVerify.jl\n2. SMT Katz etal (2017): https://github.com/guykatzz/ReluplexCav2017\n\n* incomplete verifiers\n3. Weng etal (2018) : https://github.com/huanzhang12/CertifiedReLURobustness\n4. Wong & Kolter (2018): http://github.com/locuslab/convex_adversarial\n\nHow does the RefineAI proposed in this paper compare with the above four papers in terms of the verified robustness percentage on the test set, the robustness bound (the epsilon in the paragraph Abstract Interpretation p.4) and the run time? The verified robustness percentage of Tjeng & Tedrake is reported but the robustness bound is not reported. Also, can RefineAI scale to other datasets?\n\nAbout other questions:\n1. Can RefineAI handle only piece-wise linear activation functions? How about other activation functions, such as sigmoid? If so, what are the modifications to be made to handle other non-piece-wise linear activation functions? \n\n2. In Sec 4, the Robustness properties paragraph. \"The adversarial attack considered here is untargeted and therefore stronger than ...\". The approaches in Weng etal (2018) and Tjeng & Tedrake (2017) seem to be able to handle the untargeted robustness as well? \n\n3. In Sec 4, the Effect of neuron selection heuristic paragraph. \"Although the number of images verified change by only 3 %... produces tighter output bounds...\". How tight are the output bounds improved by the neuron selection heuristics? \n", "In the paper, the authors provide a new approach for verifying the robustness of deep neural networks that combines complete yet expensive methods based on mixed integer-linear programming (MILP) and incomplete yet cheap methods based on abstract interpretation or linear-programming relaxation. Roughly speaking, the approach is to run an abstract interpreter but to refine its results at early layers of a neural network using mixed integer-linear programming and some of the later layers using linear programming. The unrefined results of the abstract interpreter help these refinement steps. They help prioritize or prune the refinement of the abstract-interpretation results at neurons at a layer. Using neural networks with 3, 5, 6, 9 layers and the MNIST dataset, the authors compared their approach with AI^2, which uses only abstract interpretation. This experimental comparison shows that the approach can prove the robustness of more examples for all of these networks.\n\nI found the authors' way of combining complete techniques and incomplete techniques novel and interesting. They apply complete techniques in a prioritized manner, so that those techniques do not incur big performance penalties. However, I feel that more experimental justification is needed. The approach in the paper applies MILP to the first few layers of a given network, without any further simplification or abstraction of the network. 
One possible implication of this is that this MILP-based refinement is applicable only for the first few layers of the network. Of course, prioritization and timeout of the authors help, but I am not sure that this is enough. Also, I think that more datasets and networks should be tried. The experiments in the paper with different networks for MNIST show the promise, but I feel that they are not enough.\n\n* p3: Why is l6 = 0? I think that it is easy to figure out that max(0,x4) is at least 0.\n\n* p4: [li,yi] for ===> [li,ui] \n\n* p4: gamma_n(T^#_(x|->Ax+b)) ===> gamma_n(T^#_(x|->Ax+b)(a))\n\n* p4: subseteq T^#... ===> subseteq gamma_n(T^#...)\n\n* p5: phi^(k)(x^(0)_1,...,x^(k-1)_p) ===> phi^(k)(x^(0)_1,...,x^k_p) \n\n* p6: I couldn't understand your sentence \"Note that the encoding ...\". Explaining a bit more about how bounds computed in previous layers are used will be helpful.\n\n* p6: I find your explanation on the way to compute the second ranking with weights confusing. Do you mean that your algorithm looks into the future layers of each neuron xi and adds the weights of edges in all the reachable paths from xi?\n\n* p7: Why did you reduce epsilon from 0.07 to 0.02, 0.15 and 0.017?\n" ]
[ -1, -1, -1, -1, 4, 5, 6 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "S1xULFL9n7", "BJeGXqR9h7", "H1ep3mjh2Q", "iclr_2019_HJgeEh09KQ", "iclr_2019_HJgeEh09KQ", "iclr_2019_HJgeEh09KQ", "iclr_2019_HJgeEh09KQ" ]
iclr_2019_HJgkx2Aqt7
Learning To Simulate
Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire. In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data. In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data. We find that our approach (i) quickly converges to the optimal simulation parameters in controlled experiments and (ii) can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications.
accepted-poster-papers
This paper discusses the promising idea of using RL for optimizing simulators’ parameters. The theme of this paper was very well received by the reviewers. Initial concerns about insufficient experimentation were justified; however, the amendments made during the rebuttal period ameliorated this issue. The authors argue that, due to the considered domain and the status of the existing literature, extensive comparisons are difficult. The AC sympathizes with this argument; however, it is still advised that the experiments be conducted in a more conclusive way, for example by disentangling the effects of the different choices made by the proposed model. For example, how would different sampling strategies for optimization perform? Are there more natural black-box optimization methods to use? The reviewers believe that the methodology followed has a lot of room for improvement. However, the paper presents some fresh and intriguing ideas, which make it overall a relevant work for presentation at ICLR.
train
[ "H1e7y2eA3Q", "r1gzzcZ91E", "HyxPjJwf2Q", "rkgPl6OwAX", "rJlEVhiYhm", "Byg0vTOv0m", "r1eN2TuDCX", "Bkx0ZTdDCm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "Pros:\n* Using RL to choose the simulator parameters is a good idea. It does not sound too novel, but at the same time I am not personally aware of this having been explored in the past (Note that my confidence is 4, so maybe other reviewers might be able to chime in on this point)\n* In theory, you don't need domain adaptation or other sim2real techniques if you manage to get the optimal parameters of the simulator with this method.\n* Certain attributes of the method were evaluated sufficiently: eg the number of training epochs for each policy iteration, the dataset size generated in each iteration, and whether initialization was random or not in each iteration.\nCons:\n* Experiments were underwhelming, and the choice of problems/parameters to tune was not the right one for the problem.\n* Parts of the paper could be clearer\n\nQUALITY:\n* I believe that although the idea is great, the quality of the experiments could have been higher. Firstly, better problems could have been selected to showcase the method. I was excited to see experiments with CARLA, but was underwhelmed when I realized that the only parameter of the simulator that the method controlled was the number and the type of cars in the scene, and the task of interest was a car counting task (for which not much detail was provided). This would have been much more interesting and useful to the community if more parameters, including rendering parameters (like lighting, shading, textures, etc) were part of the search space. Similarly, the semantic segmentation task could have used more than one category. But even for the one category, there were no previous methods considered, and the only comparison was between random parameters and the learned ones, where we only see marginal improvement, and what I perceive to be particularly low IoU for the car (although it'd help to know what's the SOTA there for comparison). For both vision applications I could not help but wonder why the authors did not try to simply train on the validation set to give us another datapoint to evaluate the performance of the method: this is data that *is* used for training the outer loop, so it does beg the question of what is the advantage of having the inner loop. \n\nCLARITY:\n* The writing of the paper was clear for the most part, however the experimental section could have been clearer. I was wondering how model/hyperparameter selection was performed? Was there another validation set (other than the one used to train the outer loop)\n* The proposed policy is dubbed \"its\". What does it mean?\n* It's not clear what is a \"deliberately adversarial\" initialization. Could you elaborate?\n* The letter R is used to mean \"reward\" and \"rendering\". This is confusing. Similarly some symbols are not explicitly explained (eg S). Generally Section 2.3 is particularly unclear and confusing until one gets to the experimental section.\n* Section 3 discusses the technique and states that \"we can thus generate or oversample unusual situations that would otherwise not be part of the training data\". I believe it is important to state that, as the method is presented, this is only true if the \"validation\" data is varied enough and includes such situations. I believe this would be more applicable if eg rendering parameters were varied and matched the optimal ones.\n* Also the method is presented as orthogonal to domain adaptation and other sim-to-real techniques. 
However, I do not necessarily believe that this paper should be discussed outside the context of such techniques like domain randomization, Cycada, PixelDA etc. Even though these (esp. the latter ones) focus on vision, I do think it sets the right context.\nORIGINALITY:\n* As far as I'm aware noone has tried something similar yet. However, I'm not confident on this.\nSIGNIFICANCE:\n* Although the idea is good, I don't think that the approach to select the simulation parameters presented in the experiments in such a way is significant. I think that eg doing so for rendering parameters would be a lot more powerful and useful (and probably a lot more challenging). Also, I think that a single set of parameters (which seems to be what the goal is in this work) is not what one wants to achieve; rather one wants to find a good range of parameters that can help in the downstream task.\n", "I upgraded my score from 6 to 7.\n\nThe revision and the responses provided by the authors address some of my concerns.\n\nI still have doubts about the use of RL here. (I don't think it's needed.) And I wish the authors have gone further in the aspects of the simulation they optimize as well as the downstream tasks they tackle. Overall, on the methodological and the experimental fronts, I consider the paper to be rather weak. However, this is counterbalanced by the idea itself, which I find timely and stimulating. This paper may spur others to study this direction, bring more appropriate methods to bear on this problem, and attack more complex and realistic downstream tasks.\n\nAs a kind of \"lightning rod\" that attracts attention and stimulates follow-up work, this paper can be a useful addition to the literature.\n\nI also appreciate that the authors have thoroughly addressed the reviewers' concerns and have added more substantial experimental results to the revision.\n\nOverall, the benefits of publishing the work probably outweigh the drawbacks.\n", "The paper explores an interesting idea: automatically tuning the parameters of a simulation engine to maximize the performance of a model that is trained using this simulation engine. In the most interesting scenario, the model is trained using such optimized simulation and then tested on real data; this scenario is explored in Section 4.5.\n\nThe basic idea of optimizing simulation parameters for transfer performance on real data is very good. I believe that this idea will be further explored and advanced in future work. The present submission is either the first or one of the first papers to explicitly explore this idea, and deserves some credit and goodwill for this reason. This is the primary reason my rating is \"marginally above acceptance threshold\" and not lower.\n\nThe paper suffers from some issues in the technical formulation and experimental evaluation. The issues are reasonably serious. First, it is not clear at all that RL is the right approach to this optimization problem. There is no multi-step decision making, there are no temporal dynamics, there is no long-term credit assignment. The optimization problem is one-shot: you pick a set of parameters and get a score. Once. That's it. It's a standard black-box optimization setting with no temporal aspect. My interpretation is that RL is used here because it's fashionable, not because it's appropriate.\n\nThe evaluation is very incomplete and unsatisfactory. Let's focus on Table 1, which I view as the main result since it involves real data. 
First, is the optimization performed using the KITTI validation set? Without any involvement of the test set during the optimization? I hope so, but would like the authors to explicitly confirm.\n\nSecond, the only baseline, \"random params\", is unsatisfactory. I take this baseline to be the average performance of randomized simulation. But this is much too weak. Since the authors have access to the validation set during the optimization, they can simply test which of the random parameter sets performs best on the validation set and use that. This would correspond to the *best* set of parameters sampled during training. It's a valid baseline, there is no reason not to use it. It needs to be added to Table 1.\n\nAlso, 10 sets of random params seems quite low. How many sets of parameters does the RL solver sample during training? That would be the appropriate number of sets of params to test for the baseline. (And remember to take the *best* of these for the baseline.)\n\nThe last few points really boil down to setting up an honest random search baseline. I consider this to be mandatory and would like to ask that the authors do this for the rebuttal. There are also other derivative-free optimization techniques, and a more thorough evaluation would include some of these as well.\n\nMy current hypothesis is that an honest random search baseline will do as well as or better than the method presented in the submission. Then the submission boils down to \"let's automatically tune simulation parameters; we can do this using random search\". It's still a stimulating idea. Is it sufficient for an ICLR paper? Not sure. Something for the reviewers and the ACs to discuss as a group.\n", "We thank the reviewer for appreciating the idea. We hope the following clarifications and experiments allow for a re-evaluation.\n \nChoice of parameters:\n(a) We believe that the paper was unclear about which parameters are learned. Specifically, in Sections 2.3 and 4.2 the reader had been lead to believe that we do not vary the rendering parameters in our work. We do vary lighting parameters in both the car counting and segmentation experiments. We define four weather types - clear noon, clear sunset, wet sunset and rainy sunset. Our policy outputs a categorical distribution over them. The illumination, color hue and light direction are varied as well as reflections from water puddles and weather particles such as rain drops. Sections 2.3 and 4.2 have been modified accordingly.\n(b) We learn not only types of cars and preponderance of cars but also the length of the road ahead, which influence the amount of cars and the structure of the scene.\n(c) We are adding a figure to the appendix to show how weather is learned over time by our method. We observe that our algorithm automatically deduces that giving higher probability to scenes without rain or puddles improves the performance of the main task model.\n(d) We have added text in the paper to highlight the variations described in (a) and (b).\n \nState-of-the-art KITTI segmentation experiment:\n(a) The submission uses ResNet-18 since it is faster for experimentation. We now use a ResNet-50 to achieve a state-of-the-art implementation.\n(b) With ResNet-50, we let our policy learn for 600 iterations and sample random parameters for 600 iterations. Our best policy iteration achieves 0.579 IoU, which is 20% better than the best dataset generated with random parameters (0.480 IoU). 
Thus, we show a clear improvement over a strong baseline.\n(c) We also introduce another baseline, specifically, random search to optimize over the simulator parameters. Random search achieves 0.407 IoU on the test set. Thus, learning to simulate achieves an increase in performance of 42% over this method. We hypothesize that performance using random search is low due to the nature of the problem which presents sparse and noisy rewards.\n(d) Even though our simulated scenes are limited in their realism, we achieve 57.9% IoU for car segmentation, which is reasonable. As a state-of-the-art reference, an upper-bound of 77.8% IoU is obtained by training the same network on 982 real annotated images, which is much more than the 100 synthetic images used to train our method.\n(e) To make space for these additional experiments we move the dataset size experiments to the appendix.\n \nUse of CARLA:\n(a) Our idea of learning to simulate is independent of the choice of simulator. We choose CARLA to make our contribution concrete. But the resources needed to fully demonstrate on a rich simulator like CARLA are immense. We respectfully submit that such a bar will preclude most groups from publishing on the use of simulators. On the other hand, focusing on a small set of parameters allows more insights into the proposed idea.\n(b) While CARLA is a promising tool, we do extend it in useful ways. It required a significant development effort to turn it into a procedural generator for new traffic scenes. The CARLA plugin is not necessarily built with extensions like this in mind.\n(c) We hope the orthogonal contributions of our paper suggest a useful direction for the CARLA development team too.\n\nUse validation set for training:\n(a) Our use of train-validation-test sets is conventional. It is important to note that we use the validation set akin to hyperparameter selection, rather than using it as labeled training data.\n(b) For deployment, one may follow parameter-tuning with retraining that includes the validation set to achieve the best test performance. However, it’s not common practice for benchmarking new ideas and likewise, we only wish to demonstrate the benefit of learning to simulate.\n(c) There are regimes where the size of validation set is sufficient for evaluation, but not for training. But a small number of real images can instead have a significant impact in terms of bridging the domain gap from simulations, making a fair evaluation tricky.\n(d) Some advantages of learning to simulate persist even if one considers the validation set for training. For example, if a scenario is not sufficiently represented in validation set, it is hard to train a network for it simply by including those images, while our proposed method can oversample it to maximize accuracy.\n(e) As a reference, and instead of training on the validation set, we train on a large real dataset for our main KITTI segmentation experiment.", "This work makes use of policy gradients for fitting the parameters of a simulator in order to generate training data that results in maximum performance on real test data (e.g., for classification). The difficulty of the task rises from the non-differentiability of the simulator.\n\n# Quality\n\nThe method is sound, well-motivated, and presented with a set of reasonable experiments. However, and this is a critical weakness of the paper, no attempt is made to compare the proposed method with respect to any related work, beyond a short discussion in Section 3. 
The experiments do include some baselines, but they are all very weak. \n\n# Clarity\n\nThe paper is well-written and easy to follow. The method is illustrated with various experiments that either study some properties of the algorithm or show some good performance on real data.\n\n# Originality\n\nThe related work is missing important previous papers that have proposed very similar/identical algorithms for fitting simulator parameters in order to best reproduce observed data. For example,\n- https://arxiv.org/abs/1804.01118\n- https://arxiv.org/abs/1707.07113\nwhich both make use of policy gradients for fitting an adversary between fake and real data (which is then used a reward signal for updating the simulator parameters).\n\n# Significance\n\nThe significance of the paper is moderate given some similar previous works. However, the significance of the method itself (regardless of previous papers) is important.\n", "We thank the reviewer for their comments. We provide additional evaluations and discussion of related works to address their concerns.\n \nSufficiency of evaluation:\nWe agree that a practical implementation would use a more extensive simulator, but we believe our choices sufficiently illustrate the idea, while keeping the effort reasonable for an ICLR paper. Please refer to the first three points in the response to Reviewer 2.\n \nComparative references:\nThank you for pointing out these papers. We briefly highlight below that our contributions are quite different from both of those works. We have added this discussion to the related work section.\n \n“Synthesizing Programs for Images using Reinforced Adversarial Learning” from ICML 2018 trains a policy to generate a program that creates a copy of the input image.\n(a) Simulators used in the paper are a brushstroke simulator and an object placer.\n(b) Some similarities are that they use reinforcement learning to update the parameters of their policy and generate synthetic data using a non-differentiable simulator.\n(c) But they generate plausible synthetic data identical to the input, or sample from the latent space to create a program that simulates an unconditioned sample. In contrast, we learn parameters of a simulator that maximize performance of a main task model.\n(d) Importantly, we do not wish to reproduce observed data. Indeed, the reward function can even be chosen to amplify some rare cases. For example, if an object category is rare in road scenes, but important to segment for collision avoidance, our reward can be used to reflect this.\n(e) Another difference is that they use an adversarial loss to create their reward signal, while we use the validation accuracy of the main task model.\n \n“Adversarial Variational Optimization of Non-Differentiable Simulators” is, to the best of our knowledge, an unpublished work. It seeks to \"match the marginal distribution of the synthetic data to the empirical distribution of observations\".\n(a) They replace the generator in a GAN by a non-differentiable simulator and solve the minimax problem by minimizing variational upper bounds of the adversarial objectives.\n(b) The main similarity is tuning parameters of a domain-based non-differentiable simulator.\n(c) But they focus on particle physics and in their main experiment, tune a single parameter. 
Our experiments focus on computer vision and explore a higher dimensional parameter space (11 parameters, 6 for cars, 1 for length to intersection, 4 for weather type).\n(d) Further, while they use policy gradients, they use an adversarial loss to create their reward signal while we use the validation accuracy.\n(e) The most important difference is that we do not seek to mimic the distribution of real-world. In many cases, it is not the distribution that maximizes the reward. In our toy example, we achieve higher accuracy with a learned distribution that is completely different from the ground truth distribution (we add these numbers to the appendix).\n\nSufficiency of comparison:\n(a) For comparison purposes we believe there are no direct counterparts to our work. The closest related work we have identified is \"Learning To Teach\" (Fan et al.) since they seek to improve accuracy of a model. Nevertheless, they do not create new data but select which data to train on from existing datasets.\n(b) In order to evaluate our method we present baselines on our experiments. We seek to prove that by learning to simulate we achieve higher accuracy than randomly sampling scenes which is what works such as \"Playing for Data: Ground Truth From Computer Games\" (Richter et al.) do. We demonstrate this to be the case in all of our experiments.", "We thank the reviewer for noting the novelty. We hope the following clarifications and new experiments ease their concerns.\n \nUse of policy gradients:\nWe agree that our problem is a black box optimization without temporality or discounted reward. In Section 3, second paragraph, we do discuss alternatives such as evolutionary algorithms or sampling methods. Our use of policy gradients is not due to it being fashionable. Rather, we use them to estimate gradients for a non-differentiable function with the following advantages:\n(a) Simplicity: The method is simple, easy to implement and easy to formalize (see Algorithm 1). This makes it easily reproducible. There are very few hyperparameters to tune (baseline, learning rate of policy)\n(b) Flexibility: The policy that is defined can be arbitrarily flexible. We use a Gaussian policy but the work can be extended to discrete policies or those using neural networks.\n(c) Sample efficiency: We observe in all experiments that parameters converge after less than 500 iterations. For some experiments we observe convergence in less than 200 iterations. This is due to the direct relationship between our reward and the value we want to optimize (validation accuracy). In our case, those two are the same.\n(d) Interpretability: We show curves of weather probabilities and car type probabilities in a new figure in the appendix. We can visualize how the probabilities are learned through iterations.\n\nWe believe that (a) and (b) are the most distinct advantages of policy gradients. (c) and (d) are advantages to a lesser extent and can be present in other derivative-free optimization methods. The generality afforded by the method is important since our work is designed to be applied to different applications where simulation is possible.\n \nWe note that policy gradients are also used in other works that have a similar one-shot scenario as ours, such as “Neural Architecture Search”, “Neural Optimizer Search”, as well as both the works cited by Reviewer 1.\n \nEvaluation (test set):\nWe emphasize the test set is not used in any experiment for parameter tuning. 
It is unseen for every problem and only used once for final evaluation. Section 2.2 and Figure 4 state this. We have modified the paper to highlight this further.\n \nEvaluation (comparison with “best” random sample):\nWe already present a strong demonstration on the car counting task in Figure 5, where learning to simulate outperforms the “best” random parameters. We initialize two networks and train them using datasets generated by learning to simulate policy (red curve) and random policy (grey curve). We show that the random policy is vastly outperformed by the learned policy.\n\nAdditionally we present a more extensive and fair real data segmentation experiment. We use a more powerful ResNet-50 backbone (ResNet-18 was used in the original submission for faster experimentation) and let our policy learn for 600 iterations and sample random parameters for 600 iterations. Our best policy iteration achieves 0.579 IoU, which is 20% better than the best dataset generated with random parameters (0.480 IoU). Thus, we show a clear improvement over this baseline. Our intuition is that the higher the dimensionality of the parameter space and smaller the areas of high reward, the more likely random parameters will have difficulty achieving high reward.\n \nRandom search baseline as opposed to policy gradients:\nThank you for this suggestion. We believe random search is a valid baseline, but not as sample efficient or successful as policy gradients in some scenarios. To verify this, we use a hypersphere radius of 0.1 for random search, extensively tuned using several runs of the method, for both the car counting and KITTI segmentation experiments. For car counting, which presents a less noisy reward signal, random search performs about the same as our method achieving an L1 error of 16.53 reward compared to 16.94 for our proposed method. However, for KITTI car segmentation, it achieves an IoU of 40.7% (using the same number of iterations, namely 600), yet policy gradients achieve higher IoU of 57.9%. In this scenario policy gradients demonstrates an increase in performance of 42%. This has been added to the paper.\n \nKITTI segmentation evaluation:\nPlease see response to Reviewer 2, where we demonstrate 20% improvements over the best random parameters, 42% improvements over a well-tuned random search baseline, as well as obtaining IoU with 100 synthetic images that is reasonable compared to 982 real images for training.\n", "\nHyperparameters:\nWe use standard hyperparameters for both tasks and use the same ones for all main task networks within an experiment. We use a learning rate of 3e-4 using the Adam optimizer for car counting, as well as standard values for beta_1 (0.9) and beta_2 (0.999). For segmentation, the optimizer used is SGD. We use a learning rate of 6e-4, tuned by generating a balanced dataset containing all weather types, semi-crowded scenes (with cars) and all car types to maximize performance on the KITTI validation dataset. We then use the same learning rate for all synthetic segmentation experiments. To obtain the upper bound trained on 982 annotated real KITTI images, we directly optimize hyperparameters by training on that dataset and using KITTI validation set as a reference.\n \nAdversarial initialization:\nWe mean initial parameters that have been chosen to be suboptimal. Specifically, these correspond to using low probability for spawning cars in the scene and higher probability for cars or weather least represented in the test distribution. 
We modify the paper to be more clear on this point.\n \nNotation:\nWe have modified the use of “R” (stylized R for rendering). The proposed policy was named “lts\" which stands for \"learning to simulate”, but we modified to “LTS\" to avoid confusion. Moreover, we have clarified Section 2.3.\n \nOversampling unusual situations:\nYes, we need the unusual situation to be present in the validation set, which we assume is representative of test scenarios. While these scenarios are present in the validation set at a low frequency, one does need several samples of rare cases in order to train a network effective for them. That is where oversampling rare scenarios can make a difference.\n \nDomain adaptation:\nWe have included extra discussion in Section 3. Note that even if the optimal parameters are learned using our method, there is still need for sim2real domain adaptation. Often the simulator will be limited and not be able to generate images that are completely realistic. Domain adaptation is needed to bridge this gap, thus, leads to orthogonal benefits. The interplay of our method and domain adaptation to achieve stronger results will be interesting future work." ]
[ 6, -1, 7, -1, 6, -1, -1, -1 ]
[ 4, -1, 4, -1, 5, -1, -1, -1 ]
[ "iclr_2019_HJgkx2Aqt7", "HyxPjJwf2Q", "iclr_2019_HJgkx2Aqt7", "H1e7y2eA3Q", "iclr_2019_HJgkx2Aqt7", "rJlEVhiYhm", "HyxPjJwf2Q", "rkgPl6OwAX" ]
iclr_2019_HJlLKjR9FQ
Towards Understanding Regularization in Batch Normalization
Batch Normalization (BN) improves both convergence and generalization in training neural networks. This work studies these phenomena theoretically. We analyze BN by using a basic block of neural networks, consisting of a kernel layer, a BN layer, and a nonlinear activation function. This basic network helps us understand the impacts of BN in three aspects. First, by viewing BN as an implicit regularizer, BN can be decomposed into population normalization (PN) and gamma decay as an explicit regularization. Second, the learning dynamics of BN and the regularization show that training converges with a large maximum and effective learning rate. Third, the generalization of BN is explored by using statistical mechanics. Experiments demonstrate that BN in convolutional neural networks shares the same traits of regularization as the above analyses.
accepted-poster-papers
+ the ideas presented in the paper are quite intriguing and draw on a variety of different connections - the presentation has a lot of room for improvement. In particular, the statement of Theorem 1, in its current form, needs to be rephrased and made more rigorous. Still, the general consensus is that, once these presentation shortcomings are addressed, this will be an interesting paper.
val
[ "HJxHQLtF07", "SkxVzjdYAQ", "HJeeREuK0X", "BkglGEYK07", "rJlRVnUHaX", "H1efug0i3X", "HJe6PQtLnQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "List of changes:\n1.\tThe conditions for the regularization form of BN are made clearer in Sec. 2.1.\n\n2.\tThe extension of BN regularization in deep neural networks has been added in the last paragraph in Sec. 2 and Appendix C.4.\n\n3.\tAnalytical comparisons of the generalization errors of BN, WN+gamma decay, and vanilla SGD with both identity and ReLU units are included. Moreover, both theoretical and numerical validations have been conducted in Sec. 4.1 and figure 1.\n\n4.\tDerivation of the generalization error of a student network with ReLU units under the statistical mechanics theory. (Appendix D.2)\n\n5.\tThe network for PN+gamma decay has been changed to a ResNet18 network, which has a much higher baseline as shown in Sec.5.1 Fig.2(a)&(b).\n\n6.\tThe network for WN+dropout has also been changed to a ResNet18 network, which has a much higher baseline. (Sec.5.1 Fig.2(g)&(h))\n\n7. The experiment section has been re-organized.\n\n8.\tThe introduction part has been re-organized by putting forward the “Related Wrok” part and presenting the relationship between the current work and previous ones.\n\n9.\tThe whole manuscript has been thoroughly scrutinized and proof-read to make it neater and clearer. \n", "We thank AnonReviewer2 for the constructive comments. We appreciate the comments on improving the clarity of theorem statements and experimental validations, and the revision of the manuscript is being made accordingly. Besides, we would also like to address several of the above concerns. \n\nThe following answers are corresponding to the number (index) of your comments.\n\n1. As to the assumptions in deriving Theorem 1, it has been stated in the abstract that “we analyze BN by using a basic block of neural networks, consisting of a kernel layer, a BN layer, and a nonlinear activation function”. We have revised the presentation of our results in section 2.1 of the latest manuscript and made this clear. \n\nWe modeled the loss function through a probabilistic perspective for a single-layer perceptron. This is treated as an illustrative case, in order to make our discussions as intuitive and easy to understand as possible. The analyses of a single-layer network already explain optimization and generalization of BN compared to the other approaches. Despite the loss construction based on a single layer, we have also verified the major conclusions in deep CNNs.\n\nAs for multiple-layer neural networks, the current analysis can be naturally extended. (see Appendix C.4 in the new version of the manuscript). \n\n4. In the experiment section, the `batch size’ subsection investigates how the strength of regularization of BN affects the parameter norm. This is one of our findings of gamma decay. We’ll consider move this part to Appendix.\n\n5. The results of section 2.1 can be extended to deep networks. We achieve this by performing decomposition in a deep network with respect to a certain hidden BN layer. The discussions can be found in Appendix C.4. The decay factor of the regularization depends on Hessian matrix of the hidden layer, whose regularization form is not as intuitive and easy to read as the singe-layer ReLU network. We evaluated BN’s regularization in deep CNNs in experiments.\n\n7. We have revised the experiment section by conducting a strong baseline in CIFAR10 using ResNet18 to study the regularization of BN. We would like to argue that down-sampled ImageNet with image size of 32*32 is more challenging than the full size ImageNet (224*224). 
That’s why the performance of ResNet18 in the down-sampled version is not comparable to the original ImageNet.\n\nIt should be also noted that the purpose of our experiments is to validate the regularization of BN in deep neural networks, instead of improving the performance of networks already pursued with a lot of finely tuned methods. Therefore, in order to focus on the regularization effect of BN, we removed augmentation in data preparation as well as weight decay or dropout to rule out the regularization from these techniques. In this setting in the current paper, the inverse relationship between the strength of BN regularization and batch size was observed most evidently. \n\n8. We have revised the grammatical errors in the latest manuscript.\n\n9. As to the presentation of the current paper, thanks for the advice and we have revised it in the latest version of the manuscript.\n", "Dear AnnoReviewer1, \n\nThanks for the constructive comments. We are glad that the reviewer agrees on the necessity and impact of the current work. We list detailed responses to some concerns.\n\n(1) Theorem 1. As we have stated in the main text, we derived the loss decomposition from a single building block in neural networks. We have rephrased section 2.1 to make this clearer. In fact, we started from a single-layer perceptron in order to keep our discussions intuitive and easy to understand. \n\nBy analyzing the regularization of BN in a single-layer network, its optimization and generalization are investigated sufficiently and BN is compared to WN+gamma decay and vanilla SGD both theoretically and numerically, as shown in section 3 and 4. These results have never been presented before. We believe they should be presented to the community.\n\nIn the latest version of manuscript, we extend the regularization form of BN to deep networks as shown in Appendix C.4. We also analyze generalization of BN in a nonlinear ReLU student network in section 4. Moreover, the current study has verified the major findings both analytically in a single-layer network and empirically in deep nonlinear CNNs. \n\n(2) Motivation and narration. We are grateful for this suggestion. We have made modifications in the latest manuscript. For example, the “notation” subsection is removed from the introduction and moved to section 2, the “section 6 Related Work” is moved forward in the introduction to better motivate the problem.\n\n\n(3) Implicit Back Propagation [1]. Thanks for the nice advice to compare with other method that stabilizes training and learning rate. We have cited this reference. However, despite the similarity between BN and implicit BP (ISGD) in the effects of stabilization and robust learning rates, these two methods are quite different. \n\nImplicit BP (ISGD) achieves the above effects by implicitly accounting for higher-order derivatives on the loss surface in backward gradient propagation, while BN reaches this goal by reshaping the loss surface to be more isotropic through normalization in forward computation. From their different perspectives, it seems possible to combine ISGD with BN, but the analysis is beyond the topic of the current study. \n\n(4) Figure 1 (and 3). We agree with the reviewer that vanilla SGD would not be adopted because of its simplest form. \n\nHowever, the purpose of the comparison in section 4 between vanilla SGD, WN+gamma decay, and BN is to quantitatively verify the findings of BN’s regularization form. 
In a linear single-layer network, the analytical solutions are easy to obtain and easy to understand for readers. \n\nThe extension to non-linear solutions is a bit less straightforward. Thanks for the advice and we have also verified the results in a ReLU nonlinear network as shown in section 4 in the latest manuscript, and made corresponding derivations that are enclosed in Appendix D2. \n\nHope the above feedback persuade you to raise your confidence and score.", "We thank the review for the patient reviewing and helpful advice. \n\nAs you have pointed out, we mainly contributed to the community in three parts: loss decomposition, learning rate selection and generalization of BN. The content may be a little dense, but they are inter-connected to each other. The latter two, namely learning rate selection and generalization analysis are all based on the loss decomposition. And these phenomena have been considered to be windfalls of BN and partially conclusive only from experiments. Therefore, a theoretical and systematic analysis on the effect of BN is necessary in this study. \n\nWe also thank the reviewer for the advice to clean up the manuscript to improve its coherence and we are working on it accordingly. Here is the detailed response for the technical comments. \n\n1) “Theorem 1 is not acceptable for publication. It is not a rigorous statement. This should be fixed”. \nSince we have given proper assumptions of the theorem, we suppose that the reviewer’s concern on Theorem 1 is the same as R1 and R2. The construction of the loss function is firstly based on a single-layer network. Its extension to deep neural networks is also possible, as answered in Appendix C.4 in our latest version of manuscript. Moreover, the regularization effect in deep CNNs is also verified in this paper. We have revised the statement to make it clearer to the reader in the latest version.\n\n2) In this study, effective learning rate is defined as lr * \\gamma_0 / L_0, where \\gamma_0 represents the scale parameter at equilibrium position and L_0 is the square average of weight (w) parameters. \n\nMaximum learning rate is the maximum effective lr that allows training to converge without diverging, which maintains the stability of the neural network at a fix point. These two definitions simplify the analysis of learning dynamics but may cause confusion. It has been clarified more in the newest revision. \n\n3) For the derivation and application of the main theorem, which is the loss decomposition with BN, we did not impose any distribution of the input x. Following this result, the learning dynamics is analyzed in Sec. 3, a general expression of ODEs has been presented without knowing the input distribution. \n\nAs for the effective and maximum learning rate at the fixed point, Gaussian input gives intuitive and meaningful expressions and is thus presented. This has been stated in the latest main text. \n\n4) The thermodynamic limit (N,P->infinity) allows the asymptotic analysis of learning dynamics from continuous differential equations. In the regime of finite N & P is finite, the differential equation would be replaced by a difference equation. In non-asymptotic regime, higher differential orders of parameters must be accounted for, and the interactions between parameters are more complex [1], and this would normally make the optimization process less stable. 
In reality, the networks are normally constructed with large number of neurons (~10K in each layer) and data points (~1M), the asymptotic analysis would hold.\n\n[1] E. Moulines and F. R. Bach, “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning,” in Advances in Neural Information Processing Systems 24, 2011, pp. 451–459.\n", "This is an interesting paper on a statistical analysis of batch normalization. It takes a holistic approach, \ncombining techniques and ideas from various fields, and considers multiple endpoints, such as tuning of learning rates and estimation of generalization error. Overall it is an interesting paper.\n\nSome aspects of the paper that could be improved:\n\n1) Theorem 1 is not particularly compelling, and may be misleading at a first reading. It considers the simple model of Equation (1) in a straightforward bias-variance decomposition, and may not be useful in general. Some aspects of the theorem are not technically correct or unclear. E.g., \\gamma is a single parameter, what does it mean to have a Fisher information matrix?\n\n2) The problem is not motivated well. It may be a good idea to bring some discussions from Section 6 early in the introduction of the paper. When does BN work well? And what is the current understanding (prior to the paper) and how does the paper compare/contribute? I think the paper does a good job on that front, but it follows a disordered narration flow which makes it hard to read. I understand there is a lot of material to cover, but it would help a lot to reorganize the paper in a more linear way.\n\n3) What about alternatives, such as implicit back propagation that stabilizes learning? [1]\n\n4) I don't find Figure 1 (and 3) particularly useful on how it handles vanilla SGD. In practice, it would be straightforward to avoid the mentioned pathologies. Overall, the experiments are interesting but it may be hard to generalize the findings to non-linear settings.\n\n\n[1] Implicit back propagation, Fagan & Iyengar, 2017\n\n", "This is a thought provoking paper that aims to understand the regularization effects of batch-normalization (BN) under a probabilistic interpretation. The authors connect BN to population normalization (PN) and a gamma-decay term that penalizes the scale of the weights. They analyze the generalization error of BN for a single-layer perceptron using ideas in statistical physics.\n\nDetailed comments:\n\n1. Theorem 1 uses the loss function of a single-layer perceptron in the proof. This is not mentioned in the main writeup. This theorem is not valid in general.\n\n2. The main contribution of this paper is Theorem 1 which connects BN to population normalization and weight normalization. It shows that the regularization of BN can be split into two components that depend on the mini-batch mean and variances: the former penalizes the magnitude of activations while the latter penalizes their correlation.\n\n3. Although the theoretical analysis is conducted under simplistic models, this paper corroborates a number of widely-known observations about BN in practice. It validates these predictions on standard experiments.\n\n4. The scaling of BN regularization with batch-size can be easily seen from Teye et al., 2018, so I think the experiments that validate this prediction are not strictly necessary.\n\n5. It is difficult to use these techniques for deep non-linear networks.\n\n6. 
The predictions in Section 3.3 are very interesting: it is often seen that fully-connected layers (where BN helps significantly) need small learning rates to train without BN; with BN one can use larger learning rates.\n\n7. The experimental section is very rough. In particular the experiments on CIFAR-10 and downsampled-ImageNet with CNNs seem to have very high errors and it is difficult to understand whether some of the predictions about generalization error apply here. Why not use a more recent architecture for CIFAR-10?\n\n8. There is a very large number of grammatical and linguistic errors in the narrative.\n\n9. The presentation of the paper is very dense, I would advise the authors to move certain parts to the appendix and remove the inlining of important equations to improve readability.", "This paper investigates batch normalization from three points of view. i) Loss decomposition, ii) learning rate selection, iii) generalization. If carefully read, I believe authors have interesting results and insightful messages. However, as a whole, I found the paper difficult to follow. Too much content is packed into too little space and they are not necessarily coherent with each other. Many of the technical terms are not motivated and even not defined. Overall, cleaning up the exposition would help a lot for readability. \n\nI have a few other technical comments.\n1) Theorem 1 is not acceptable for publication. It is not a rigorous statement. This should be fixed.\n2) Effective and maximum learning rate is not clear from the main body of the paper. I can intuitively guess what they are but they lack motivation and definition (as far as I see).\n3) In Section 3 I believe random data is being assumed (there is expectation over x in some notation). This should be stated upfront. Authors should broadly comment on the applicability of the learning rates calculated as N->\\infty in the finite N,P regime?" ]
[ -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, 3, 5, 2 ]
[ "iclr_2019_HJlLKjR9FQ", "H1efug0i3X", "rJlRVnUHaX", "HJe6PQtLnQ", "iclr_2019_HJlLKjR9FQ", "iclr_2019_HJlLKjR9FQ", "iclr_2019_HJlLKjR9FQ" ]
iclr_2019_HJlNpoA5YQ
The Laplacian in RL: Learning Representations with Efficient Approximations
The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices. Second, these methods lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent.
accepted-poster-papers
This paper provides a novel and non-trivial method for approximating the eigenvectors of the Laplacian in large or continuous state environments. Eigenvectors of the Laplacian have been used for proto-value functions and eigenoptions, but it has remained an open problem to extend their use to the non-tabular case. This paper makes an important advance towards this goal, and will be of interest to many who would like to learn state representations based on the geometric information given by the Laplacian. The paper could be made stronger by including a short discussion of the limitations of this approach. It's an important new direction, but there must still be open questions (e.g., issues with the approach used to approximate the orthogonality constraint). It will be beneficial to readers to understand these issues.
train
[ "Hylrlueq3Q", "SJe3zTf6p7", "BJlAT9zapm", "HkxzFKz6pX", "ryl6Crxo3m", "SJxZV3b82m" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes a method to learn a state representation for RL using the Laplacian. The proposed method aims to generalize previous work, which has only been shown in finite state spaces, to continuous and large state spaces. It goes on to approximate the eigenvectors of the Laplacian, which is constructed using a uniformly random policy to collect training data. One use-case of the learnt state representation is for reward-shaping that is said to accelerate the training of standard goal-driven RL algorithms. \n\n\nOverall, the paper is well written and easy to follow. The idea that formulates the problem of approximating the Laplacian eigenfunctions as constrained optimization is interesting. I have the following major concerns regarding the quality and presentation of the paper.\n\n- Though the idea of learning a state representation seems interesting and might be of interest within the RL research, the authors have not yet articulated the usefulness of this learnt representation. For larger domains, learning such a representation using a random policy might not be ideal because the random policy cannot explore the whole state space efficiently. I wish to see more discussions on this, e.g. transfer learning, multi-task learning etc.\n\n- In terms of an application of the learnt representation, reward-shaping looks interesting and promising. However, I am concerned about its sample efficiency and the comparison experiments. It takes a substantial amount of data generated from a random policy to attain such a reward-shaping function, so the comparisons in Fig.5 are not fair any more in terms of sample efficiency. On the other hand, the learnt representation for reward-shaping is fixed to one goal; can one do transfer learning/multi-task learning to gain the benefit of such an expensive step of representation learning with a random policy?\n\n- The second equation, below the text \"we rewrite the inequality as follows\" on page 5, is correct? This derivation is like E(X^2) = E(X) E(X)?\n\n- About the performance reported in Section 5.1, I wonder if the gap can be closer to zero if more eigenfunctions are used?\n\n\n================\nAfter rebuttal:\nThanks to the authors for the clarification. I have read the author's responses to my review. The authors have sufficiently addressed my concerns. I agree with the responses and decide to change my overall rating.\n", "We thank the reviewer for the careful reading of the paper. We are glad the reviewer found the contribution of the paper insightful and original. Responses to the reviewer’s questions are below:\n\n“it would be good if the authors could comment on the choice of d. This is in fact a model selection problem. According to which criterion is this selected?”\n\n-- Our choice of d(=20) in reward shaping experiments is arbitrary and we didn’t tune it. In practice, if the downstream task is known, d can be regarded as a hyperparameter and selected according to the performance. If the downstream task is not available, one can visualize the distances between representations like in Figure 4 (with randomly sampled goal states) and select d when the visualized distance is meaningful; or in other cases treat it as an additional hyperparameter to search over.\n\n\n“the authors define D(u,v) in eq (4). Why this choice? 
Is there some intuition or interpretation possible related to this expression?”\n\n-- The underlying motivation is in order to make the graph drawing objective practical to optimize (via sampling) while reflecting the affinity between states. Optimizing the graph drawing objective requires sampling from D(u,v)rho(u)rho(v) so D(u,v)rho(u)rho(v) should be a joint measure over u, v. The Laplacian is defined for undirected graphs so D(u,v) also needs to be symmetric. These are the intuitions behind the conditions for D in Section 2.2. In RL, a natural choice for representing the affinity between two states is to use the transition probabilities P(u|v) (which is also convenient for sampling). However, naively setting D := P is premature, as P in general does not satisfy the conditions necessary for D. To this end, we first “symmetrize” P to achieve the setting of D as in Eq 4 by averaging the transitions u->v and v->u This procedure is analogous to “symmetrized Laplacians” (see Boley, et al., “Commute times for a directed graph using an asymmetric Laplacian”). We then divide it by rho to make D(u,v)rho(u)rho(v) a joint measure over pairs of states so that the graph drawing objective can be written in terms of an expectation (as in (5)) and sample based optimization is possible. \n\n\n“in (6) beta is called a Lagrange multiplier. Given that a soft constraint (not a hard constraint) is added for the orthonormality constraint it is not a Lagrange multiplier.”\n\n-- We have updated the paper to replace this terminology with the more appropriate “KKT multiplier”.\n\n\n“How sensitive are the results with respect to the choice of beta in (6) (or epsilon in the eq above)? The orthonormality constraint will only be approximately satisfied. Isn't this a problem?”\n\n-- The results are not very sensitive to the choice of beta. We have plots for approximation qualities with different values of beta in Appendix D-1 Figure-7 with discussions.\n-- Approximately satisfying the orthonormality constraint is not a problem in RL applications, at least in the reward shaping setting which we experiment with. In reward shaping the important thing is that the distance in the latent space can reflect the affinity between states properly, and orthonormality constraint plays a role more like encouraging the diversity of the representations (preventing them from collapsing to a single point). We think the same argument applies to most other applications of learned representations to RL so only satisfying the constraint approximately should not be a problem in the RL context. \n\n\n“Wouldn't it be better in this case to rely on optimization algorithm on Grassmann and Stiefel manifolds?”\n\n-- In the RL setting, one requires an optimization algorithm which is amenable to stochastic mini-batching. We are not aware of an optimization algorithm based on Grassman and Stiefel manifolds which is applicable in such settings, but would be interested if the reviewer has a specific algorithm in mind. While our paper proposes one technique for enforcing orthonormality, there are likely to be other applicable algorithms to achieve the same aims, and we would be happy to include references to them as alternative methods.\n\n\n“Other scalable methods related to kernel spectral clustering (related to subsets/subgraphs and making out-of-sample extensions) were proposed in literature”\n\n-- We updated our paper to cite these two papers in the related work section.\n", "We thank the reviewer for the valuable feedback. 
We are glad the reviewer found the paper interesting and easy to follow. Responses to the reviewer’s remaining concerns are addressed below. With these, we hope the reviewer will find the paper more appropriate for publication and, if so, will raise their score accordingly. We are also always happy to discuss further if the reviewer has additional concerns.\n\n“learning such a representation using a random policy might not be ideal because the random policy can not explore the whole state space efficiently”\n\n-- We agree that this can be a concern. However, a random policy can be sufficient for exploration when the initial state is uniformly sampled from the whole state space (as we did in our experiments). As you suggest, a random policy is not sufficient for exploration when the initial state is not sampled from the whole state space but only sampled within a region that is far from the goal. In this case, exploring the whole state space itself is a hard problem which we are not trying to solve here. In this paper, we aim at demonstrating the usefulness of learned representations in “reward shaping” with well controlled experiments in RL settings, so we attempted to exclude other factors such as exploration. \n-- With that being said, we have results showing that our representation learning method works beyond random-walk policies: In appendix D-2 we have experiments (Figure-8) showing that the learned representation with online policies provides a similar advantage in reward shaping as with random-walk policies. Here, the online policy and the representation are learned concurrently starting from scratch and on the same online data. It is thus significant that we retain the same advantages in speed of training. \n\n\n“I am concerned about its sample efficiency and comparing experiments”\n\n-- Even when the pretraining samples are included, our method is much more sample efficient than the baselines. The representation learning phase with a random walk policy is not expensive. For the MuJoCo experiments in Figure 5, we pretrain the representation with 50,000 samples.Then, we train the policy with 250,000(for pointmass)/450,000(for ant) samples. After shifting the mix/fullmix learning curves to the right by 50,000 steps to include the pretraining samples, their learning curves are still clearly above the baseline learning curves.\n\n\n“the learnt representation for reward-shaping is fixed to one goal, can one do transfer learning/multi-task learning to gain the benefit of such an expensive step of representation learning with a random policy”\n\n- Our learnt representation is not fixed to one goal and are in fact agnostic to goal or task reward. Thus, the representations may be used for any goals in subsequent training. The goal is used only when computing the rewards (L2 distances) for training goal-achieving policies.\n- The representation learning phase is not expensive compared with the policy training phase, as we explained in the previous concern point.\n - The representations are learned in a purely unsupervised way without any task information (e.g. goal, reward, a good policy). So it is natural to apply the representations to different tasks without the notion of “transfer” or “multi-task”.\n\n\n“The second equation, below the text \"we rewrite the inequality as follows\" in page 5, is correct?”\n\n-- Yes, it is correct. 
The square is outside the brackets in all of the expressions, so E(X)^2 = E(X)E(X).\n\n\n“About the performance reported in Section 5.1, I wonder if the gap can be closer to zero if more eigenfunctions are used?”\n\n-- We have additional results for larger values of d (50, 100) in Appendix D-1, Figure 6. The gap actually becomes bigger if more eigenfunctions are used: With much larger values of d the problem becomes harder as you need to approximate (the subspace of) more eigenfunctions of the Laplacian.\n", "We are glad that the reviewer found the paper interesting, well-written, and well-evaluated. We also appreciate the feedback. \n\nWith regards to the methods DQN and DDPG, we have updated the paper to include references in the main text and brief descriptions of these algorithms in the experiment details section in Appendix.\n\nWe have updated the paper to clarify the reasoning behind the half-half mix for reward shaping. By “gradient,” we meant the change in rewards between adjacent states (not the gradient in optimization). When the L2 distance between the representations of the goal state and adjacent states is small the Q-function can fail to provide a significant signal to actually reach the goal state (rather than a state that is just close to the goal). Thus, to better align the shaped reward with the task directive, we use a half-half mix, which clearly draws a boundary between the goal state and its adjacent states (as the sparse reward does) while retaining the structure of the distance-shaped reward.\n", "This works proposes a scalable way of approximating the eigenvectors of the Laplacian in RL by optimizing the graph drawing objective on limited sampled states and pairs of states. The authors empirically show the benefits of their method in two different types of goal achieving task. \n\nPros:\n- Well written, well structured, an overall enjoyable read.\n- The related work section appears to be comprehensive and supports the motivations for the presented work.\n- Clear and rigorous derivations. \n- The method is evaluated both in terms of how well it is able to approximate the optimal Laplacian-based representations with limited samples compared to baseline models and how well it solves reward shaping in RL.\n\nCons:\n- In the experimental section, the methods used to learn the policies, DQN and DDPG, should be briefly explained or at least referenced.\n- A further discussion on why the authors chose a half-half mix of the L2 distance and sparse reward could be beneficial. The provided explanation (L2 distance doesn't provide enough gradient) is not very convincing nor justified.\n ", "The authors propose a Laplacian in the context of reinforcement learning, together with learning the representations. Overall the authors make a nice contribution. The insight of defining rho to be the stationary distribution of the Markov chain P^pi and connecting this to eq (1) is interesting. Also the definition of the reward function on p.7 in terms of the distance between phi(s_{t+1}) and phi(z_g) looks original. The method is also well illustrated and compared with other methods, showing the efficiency of the proposed method.\n\nOn the other hand I also have further comments and suggestions:\n\n- it would be good if the authors could comment on the choice of d. This is in fact a model selection problem. According to which criterion is this selected?\n\n- the authors define D(u,v) in eq (4). Why this choice? 
Is there some intuition or interpretation possible related to this expression?\n\n- in (6) beta is called a Lagrange multiplier. Given that a soft constraint (not a hard constraint) is added for the orthonormality constraint it is not a Lagrange multiplier.\n\nHow sensitive are the results with respect to the choice of beta in (6) (or epsilon in the eq above)? The orthonormality constraint will only be approximately satisfied. Isn't this a problem?\n\nWouldn't it be better in this case to rely on optimization algorithm on Grassmann and Stiefel manifolds?\n\n- The authors provide a scalable approach related to section 2 by stochastic optimization. Other scalable methods related to kernel spectral clustering (related to subsets/subgraphs and making out-of-sample extensions) were proposed in literature, e.g.\n\nMultiway Spectral Clustering with Out-of-Sample Extensions through Weighted Kernel PCA, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2), 335-347, 2010.\n\nKernel Spectral Clustering for Big Data Networks, Entropy, Special Issue: Big Data, 15(5), 1567-1586, 2013.\n\n\n" ]
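The exchange above repeatedly discusses shaping a goal-reaching reward with the L2 distance between learned state representations, mixed half-and-half with the sparse goal reward. The sketch below is only an illustration of that idea under stated assumptions: `phi` stands in for an already-trained embedding network (e.g. one fit with the sampled graph-drawing objective), and the 0.5 mix weight and function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def shaped_reward(phi, next_state, goal, sparse_bonus=1.0, mix=0.5):
    """Mix a distance-shaped term with a sparse goal bonus (illustrative only)."""
    # Distance term: negative L2 distance in representation space, so it increases
    # (towards zero) as the next state approaches the goal in the learned embedding.
    dist_term = -float(np.linalg.norm(phi(next_state) - phi(goal)))
    # Sparse term: a bonus only when the goal state itself is reached.
    sparse_term = sparse_bonus if np.allclose(next_state, goal) else 0.0
    return mix * dist_term + (1.0 - mix) * sparse_term
```

With `phi = lambda s: np.asarray(s, dtype=float)` this degenerates to plain Euclidean reward shaping; the point of the learned embedding is that distances become meaningful even when raw-state distances are not.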
[ 7, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, -1, 3, 4 ]
[ "iclr_2019_HJlNpoA5YQ", "SJxZV3b82m", "Hylrlueq3Q", "ryl6Crxo3m", "iclr_2019_HJlNpoA5YQ", "iclr_2019_HJlNpoA5YQ" ]
iclr_2019_HJlQfnCqKX
Predicting the Generalization Gap in Deep Networks with Margin Distributions
As shown in recent research, deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held-out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how the generalization gap should be predicted from the training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, i.e., the distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (the closest distance to the decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization.
accepted-poster-papers
The paper suggests a new measurement based on layer-wise margin distributions for predicting generalization ability. Extensive experiments are conducted, though a solid theory explaining the phenomenon is still lacking. The majority of reviewers suggest acceptance (9,6,5). Therefore, it is proposed as a probable accept.
train
[ "Ske1HVNc07", "HygHuQVq07", "BylGPEVq07", "H1xTRJmc3m", "SylJSZ_STX", "B1gSCYhmTX", "S1e5qVsQaQ", "Hkl7VhOqhX", "HkgvJXxt2Q", "BJgDuN3On7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "public" ]
[ "Thank you for the review. We address your concerns below.\n\n#What benefit can be acquired when using geometric margin defined in the paper.#\nThe geometric distance is the actual distance between a point “x” and the decision boundary f(x)=0, i.e. d1=min_x ||x|| s.t. f(x)=0.This term is usually used in contrast to functional distance defined as d2=f(x). If x is on the decision boundary, d1=d2=0, but otherwise d1 and d2 can differ. Note that d2 can change by simple reparametrization. For instance, consider a linear decision boundary f(x)=w.x. In this case, geometric distance d1=f(x)/||w|| and d2=f(x). Let F(x)=(c*w).x, i.e. just scaling the weights by factor c. This does not change the decision boundary. For such F, d1 remains the same, but d2 scales with c. One can force a condition to make margins equal in both scenarios: by making the closet point to the decision boundary to have distance 1. However, this requires introducing an inequality per point, similar to SVMs. With geometric margin, we can work with an unconstrained optimization and directly apply gradient descent or SGD.\n\n#Why does normalization make sense?#\nOur normalization allows direct analysis of the margins across different models with the same topology (or different datasets trained on the same network), which is otherwise difficult due to the positive homogeneity of ReLU networks. For example, suppose we have two networks with exactly the same weight, and then in one of the networks, we scale weight_i by constant positive factor c and the weight_{i+1} by 1/c (i is a layer index), the predictions of the two networks remain the same; however, their unnormalized margin distribution will be vastly different and the normalized version will be exactly the same.\n\n#Why does the middle layer margin can help? #\nThere is no reason we can assume a-priori that maximizing only input or output margin (for example) is enough for good generalization. As shown in our ablation results in Tables 1 and 4, the combination of multiple layers performs significantly better. If we cut a deep network at any stage, we can treat the first half of the network as a feature extractor and the second half as the classifier. From this perspective, the margins at middle layer can be just as important as the margins in the output layer or input layer. Lastly, we note that Elsayed et. al. show that optimizing margin at multiple layers provides significant benefits for generalization and adversarial robustness. \n\n#Why a linear (linear log) relation between the statistic and generalization gap.#\nWe are not claiming this is the true relationship between the statistics and the generalization gap. The true relationship may very well be nonlinear and one could perform a nonlinear regression to predict the gap, but it would need regularization and more data to avoid overfitting while a linear combination of simple distributional features already attains high quality prediction (according to CoD, k-fold cross validation and MSE) across 700+ pretrained models. This suggests that a linear relationship is indeed a very close *approximation*.\n\n#I don't think your comparison with Bartlett's work is fair. Their bounds suggest the gap is approximately Prob(0<X<\\gamma) + Const/\\gamma for a chosen \\gamma, where X is the normalized margin distribution. I think using the extracted signature from margin distribution and a linear predictor don't make sense here.#\nWe assume the reviewer is referring to theorem 1.1 of Bartlett et al. 
If one wishes to compute the gap to be the inside of the soft big O, the result will be much larger than the error emitted by our prediction, and will require picking appropriate gamma and delta values. We further note the following: the case study of Bartlett et. al. (section 2) explicitly show in their diagrams (Figures 2 and 3) the normalized distribution as evidence of generalization prediction power (instead of the bound itself) and this normalized distribution is closely related to but is not directly their bounds (they drop the log terms); extracting the statistics in a sense quantifies their case study. Before submitting the paper, we also had personal communication with one of the authors of Bartlett et. al., and the author agreed that our comparison was fair. \n", "We thank all the reviewers for their comments, suggestions and questions. We have responded to each reviewer’s individual comments below. We have modified the paper as follows to address common questions posed by the reviewers:\n\n1. Using negative examples: we have added linear fits to both test accuracy and generalization gap and shown comparisons with and without negative examples. Table 3 in Appendix 7 (page 13) shows these results. We see that using negative margins predicts accuracy better than the generalization gap. However, as noted above, we chose to predict generalization gap, and in that case, a log relationship provides much stronger prediction, but log transform cannot use negative margin values. \n2. To answer R2’s question about the importance of hidden layers, we show in Table 4, Appendix 7, the results of fitting every single layer and compare to fitting all layers together. No single layer, input, hidden or output performs as well as the combination. We also provide intuition for why it is important from a theoretical perspective to use margins at hidden layers (Section 3). \n\nWe have added to the main body or appendix of the paper a few smaller edits: \n1. typos identified by R1 (Eq. 4)\n2. more compact notations for Table 1\n\nClarifying explanations:\n1. Why we choose to discard negative margins (Sec. 3.1)\n2. Why we use both a linear and log regression model (Sec. 3.3)\n3. Mean square error computations (Tables 1, 3, and 4)\n4. Why we chose evenly spaced layers for our margin computations. (end of Section 3.2)\n5. Added references suggested by reviewers and commenter.\n\nLastly, we will release all the trained CIFAR-10 and CIFAR-100 models. We hope this work along with the model dataset will open up interesting avenues for future research.\n\nWe hope the rebuttal and revision have addressed the reviewers’ questions and comments. \n\nThank you!\n", "#If you do a regression analysis on a five layers cnn, can you have a good prediction on a nine layers cnn (or even residue cnn)#\nIn the Appendix (Section 9.1 and 9.2), we already show both cross-architecture and cross-dataset comparisons, which achieve good predictive accuracy but worse than the result on a single architecture. However, when we tried using the result from cnn alone to predict the generalization gap of residual network or vice versa (not included in the paper), the result does not signify any interesting correlation. 
Nevertheless, we would like to emphasize that the regression is shared (and gives an accurate prediction) across other significant changes such as channel sizes, batchnorm/group norm, regularization, learning rate, dropout change (presented in appendix section 6)\n\n# Novelty #\nAs you correctly pointed out, our work and Barlett et. al. build on the broad notion of “margin distribution” and “normalization”. However, there are significant differences:\n1. Bartlet’s definition of margin relies only on f_i-f_j, which only reflects margin in the output space, as opposed to (f_i-f_j)/||d/dx f_i - d/dx f_j|| which approximates margin in input (or any hidden) space.\n2. The normalization used in Bartlett et al. is a complexity measure which is drastically different from our normalization that captures more direct geometric properties of the activations. Specifically, Bartlett’s normalization relies on the spectral complexity of a network which involves spectral norm of weight matrices and reference matrices. In our work, the normalization is defined based on the total variance of the activations of the hidden layers directly (Eqs 4 and 5). \n4. Barlett et. al. do *not* show any linear relationship between margin and test performance or gap. \nThe above distinctions lead to very different predictions on the generalization gap as shown in our results (Figure 2 and Table 1). In fact, the choice of distributional features and normalization scheme are crucial for accurate prediction of the generalization gap.\n\nFurthermore, we note again that the normalization scheme of Bartlett et. al. cannot be used as-is for residual networks and is not applicable to hidden layers, a drawback not present in our normalization. Finally, we have conducted a far larger scale of experiments as compared to Bartlett et. al. to verify the effect of each prediction scheme of the generalization gap. Like we mentioned in our response to reviewer 1, we will be releasing the 700+ realistic models we used in the paper as a dataset where researchers can easily test theories on generalization, which is one of the first of its kind. \n\nRegarding Liao et. al. 2018, as stated in the paper, their proposed normalized loss leads to a significant *decrease* in output margin confidence, which is the opposite of what is desirable. Furthermore, normalized cross-entropy loss is different from margin-based loss, so we do not think their observation takes away the novelty of our paper just because both works illustrate linearity.\n", "After author response, I have increased my score. I'm still not 100% sure about the interpretation the authors provided for the negative distances. \n\nThe paper is well written and is mostly clear. (1st line on page 4 has a typo, \\bar{x}_k in eq (4) should be \\bar{x}^l?)\n\nNovelty: I am not sure whether the paper adds any significant on top of what we know from Bartlett et al., Elsayed et al. since:\n\n(i). The fact that \"normalized\" margins are strongly correlated with the test set accuracy was shown in Bartlett et al. (figure 1.). A major part of the definition comes from there or from the reference they cite; \n(ii). Taylor approximation to compute the margin distribution is in Elsayed et al.; \n(iii). I think the four points listed in page 2 (which make the distinction between related work) is misleading: the way I see it is that the authors use the margin distribution in Elsayed et al which simply overcomes some of the obstacles that norm based margins may face. 
The only novelty here seems to be that the authors use the margin distribution at each layer. \n\nTechnical pitfalls: Computing the d_{f,x,i,j} using Equation (3) is missing an absolute value in the numerator as in equation (7) Elsayed et al.. The authors interpret the negative values as misclassification: why is it true? The margin distribution used in Bartlett et al. (below Figure 4 on page 5 in arxiv:1706.08498) uses labeled data and it is obvious in this case to interpreting negative values as misclassification. I don't see how this is true for eq (3) here in this paper. Secondly, why are negative points ignored?? Misclassified points in my opinion are equally important, ignoring the information that a point is misclassified doesn't sound like a great idea. How do the experiments look if we don't ignore them?\n\nExperiments: Good set of experiments. However I find the results to be mildly taking the claims of the authors made in four points listed in page 2 away: Section 4.1, \"Empirically, we found constructing this only on four evenly-spaced layers, input, and 3 hidden layers, leads to good predictors.\". How can the authors explain this? \n\nBy using linear models, authors implicitly assume that the relationship between generalization gaps and signatures are linear (in Eucledian or log spaces). However, from the experiments (table 1), we see that log models always have better results than linear models. Even assuming linear relationship, I think it is informative to also provide other metrics such as MSE, AIC, BIC etc..", "The author(s) suggest using geometric margin and layer-wise margin distribution in [Elsayed et al. 2018] for predicting generalization gap.\n\npros,\na). The author shows large experiments to support their argument.\n\ncons,\na). No theoretical verification (nor convincing intuition) is provided, especially for the following questions,\n i) what benefit can be acquired when using geometric margin defined in the paper.\n ii) why does normalization make sense beyond the simple scaling-free reason. For example, spectral complexity as a normalization factor in [Bartlett et al. 2017] is proposed from the fact, that the Lipschitz constant determines the complexity of network space.\n iii) why does the middle layer margin can help? \n iv) why a linear (linear log) relation between the statistic and generalization gap.\n\nFurther question towards experiment,\ni) I don't think your comparison with Bartlett's work is fair. Their bounds suggest the gap is approximately Prob(0<X<\\gamma) + Const/\\gamma for a chosen \\gamma, where X is the normalized margin distribution. I think using the extracted signature from margin distribution and a linear predictor don't make sense here.\nii) If you do regression analysis on a five layers cnn, can you have a good prediction on a nine layers cnn (or even residue cnn)?\n\nFinally, I'm not sure the novelty is strong enough since the margin definition comes from [Elsayed et al. 2018] and the strong linear relationship has been shown in [Bartlett et al. 2017, Liao et al. 2018] though in different settings.", "\nWe thank you for your insightful review.\n\n## NOVELTY ##\n\nR2: “The fact that normalized margins are correlated with generalization was shown in Bartlett Fig 1”.\n\nAs you pointed out, both works build on the broad notion of “margin distribution” and “normalization”. However, there are significant differences:\n1. 
Margin in Bartlett uses f_i-f_j that can only reflect output margins, as opposed to (f_i-f_j)/||d/dx f_i - d/dx f_j|| that works for any layer.\n2. We do not use margin distribution itself to predict the generalization gap, but rather distributional features that involve “nonlinear transform” of the distances (quartiles or moments).\n3. Normalization in Bartlett’s uses norm of weight matrices, which is drastically different from geometric spread of activations (variance) we use (Eqs 4 and 5). Also their cannot be used as-is for residual networks, a drawback not present in our normalization. \n\nThese distinctions result in very different predictions of the generalization, as clearly shown in our Fig 2 and Table 1. In fact, the choice of distributional features and normalization are crucial for accurate prediction of the generalization gap.\n\nFinally, we have conducted a far larger scale of experiments, and will be releasing the 700+ realistic models used in the paper so that researchers can easily test generalization theories. This is the first of its kind. \n\n\n## TECHNICAL ##\n\n# Missing Absolute Value in Eq (3) #\n\nThere is no incorrectness; we deliberately adopt “signed distance”. The polarity reflects which side of the decision boundary the point is. Even Eq (7) of Elsayed that you mentioned quickly evolves to signed distance in their Eq (8).\n\n# Why Negative Distance Implies Misclassification #\n\nIt was our oversight not to mention that “i” in our Eq (3) corresponds to the ground truth label. We will clarify this in the final version. In this case, f_i-f_j>0 (i.e. distance is positive) implies correct classification and f_i-f_j<0 implies misclassification. \n\n# Why Negative Points are Ignored #\n\nWe indeed investigated using negative distances. We observed that:\n\n1. Modern deep architectures often achieve near perfect classification on training data. Hence, the contribution of negative distances to the full distribution is negligible in most trained models.\n\n2. A small fraction of models do have notable misclassification (due to data augmentation or heavy regularization). For these models, we found that margin distribution computed with only positive samples predicted the generalization gap better than (or at par with) the full distribution. However, we observed that the latter is indeed a better predictor of test accuracy (just not the gap). Since we focus our narrative on the generalization gap, we decided to omit these results from the main paper; however, we will include these results in the appendix.\nWe also note that there is no technical problem in using margin distribution with only positive samples, e.g. Bartlett’s work “The Sample Complexity of Pattern Classification with Neural Networks” develops a generalization bound by such samples (paragraph above their Theorem 2).\n\n\n## EXPERIMENTS ##\n\n# Why 4 Layers and Why Even Spacing #\n1. This leads to a fixed-length signature vector, hence agnostic to the architecture and depth.\n2. Computing signature across all layers is expensive for large deep models.\n3. Larger signature would require more pre-trained networks to avoid overfitting in regression phase. Given that each pre-trained network is only one sample in the regression task, creating a large pool of models is prohibitively expensive. Our study with 700 realistic sized pre-trained networks is perhaps already beyond the common practice for such empirical analysis. \n4. 
The even spacing is merely a natural choice of minimal commitment and already achieves near perfect prediction (CoD close to 1) is some scenarios. However, it is possible to examine other configurations.\n\n# Log/Linear #\nWe are not sure if we understand the question. We provide an answer below, but if this is not what you meant, please let us know. We investigate the use of signature components in two ways: 1. Directly as the input to linear regression, 2. Applying an element-wise log to them before using them as input of the linear regression. In either case, the regression remains linear in optimization variables, but with the log transform we effectively regress the product of signature components to the gap value.\n\n# Other Criteria (MSE, AIC, etc.) #\nWe have pointed out that the coefficient of determination already captures the MSE along with the scale of the error; however, for completeness, we will include this result in the appendix. We report k-fold cross validation results as well, which is known to be asymptotically equivalent to AIC (Stone M. (1977) An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion)", "We would like to thank you for your review and suggestions. We are very glad that you liked the empirical analysis of generalization gap and margin distribution statistics. On that note, while not mentioned in the paper, we are in preparation to release the 700+ models we used in the paper as a dataset where researchers can easily test theories on generalization. We believe this will be one of the first datasets for studying generalization on realistic and modern network architectures and we hope it will be instrumental in the ongoing generalization research.\n\n\n## Construction of Signature from Pairwise Distances (i,j) in Eq (5) ##\n\nFor computational efficiency, we picked we pick ground truth label as \"i\" (as you correctly pointed out), and the highest non-ground truth logit as \"j\", and compute the distance between the two classes. While aggregating all pairwise distance might be more comprehensive, the complexity scales roughly quadratically with the number of classes. As such, we made the design choice to use the top two classes. In cases where the class with the highest logit is not the ground truth (hence misclassification with negative distance), we discard the data point. We will further explain this choice below. We mention this detail in the text but we will make sure it is more clear.\n\n\n## Notation (i,j) instead of {i,j} to Emphasize Orderedness ##\n\nThank you for the suggestion. We agree and will incorporate this in the revision to avoid confusion.\n\n\n## Why Only Positive Distances in Margin Distribution ##\n\nYou are right that when “i” is the ground truth label, the sign of the distance indicates whether the point is correctly classifier or is misclassified. \n\nWe indeed investigated using negative distances when computing the margin distribution. We observed that:\n\n1. Modern deep architectures often achieve near perfect classification on training data. Hence, the contribution of negative distances to the full distribution is negligible in most trained models.\n\n2. A small fraction of models do have notable misclassification (due to data augmentation or heavy regularization). For these models, we found that margin distribution computed with only positive samples predicted the generalization gap better than (or at par with) the full distribution. 
However, we observed that the latter is indeed a better predictor of test accuracy (just not the gap). Since we focus our narrative on the generalization gap, we decided to omit these results from the main paper; however, we will include these results in the appendix.\nWe also note that there is no technical problem in using margin distribution with only positive samples, e.g. Bartlett’s work “The Sample Complexity of Pattern Classification with Neural Networks” develops a generalization bound by such samples (paragraph above their Theorem 2).\n\n\n## Typo ##\n\nThank you for pointing out the typo. It will be fixed in revision.\n", "This paper does not even try to propose yet another \"vacuous\" generalization bounds, but instead empirically convincingly shows an interesting connection between the proposed margin statistics and the generalization gap, which could well be used to provide some \"prescriptive\" insights (per Sanjeev Arora) towards understanding generalization in deep neural nets.\n\nI have no major complaints but for a few questions regarding clarifications,\n1. From Eq.(5), such distances are defined for only one out of the many possible pairs of labels. So when forming the so-called \"margin signature\", how exactly do you compose it from all such pair-wise distances? Do you pool all the distances together before computing the statistics, or do you aggregate individual statistics from pair-wise distances? And how do you select which pairs to include or exclude? Are you assuming \"i\" is always the ground-truth label class for $x_k$ here?\n\n2. In Eq.(3), the way you define the distance (that flipping i and j would change the sign of the distance) is implying that {i, j} should not be viewed as an unordered pair, in which case a better notation might be (i, j) (i.e. replacing sets \"{}\" with tuples \"()\" to signal that order matters).\n\nAnd why do you \"only consider distances with positive sign\"? I can understand doing this for when neither i nor j corresponds to the ground-truth label of x, because you really can't tell which score should be higher. But when i happens to be the ground-truth label, wouldn't a positive distance and a negative distance be meaningful different and therefore it should only be beneficial to include both of them in the margin samples?\n\nAnd a minor typo: In Eq.(4), $\\bar{x}_k$ should have been $\\bar{x}^l$?", "Thank you for your helpful comments.\n\n### References ###\n\n1. We agree that the interaction of margin and generalization has been subject to a great amount of research in classical ML literature. This makes it impossible to provide a comprehensive survey in a conference paper. So we had to narrow the scope of related works to recent papers that address generalization/margin in the case of *deep* models. Nonetheless, we will be happy to include the references on SVM and clustering that you suggested.\n\n2. Regarding the other ICLR2019 submission you mentioned, obviously we were not aware of it prior to ICLR submission deadline (and it is not available on arxiv either). We are aware of that submission, but it seems to have some issues (reading the comments for the paper).\n\n### Linear Assumption ###\n\n1. Regarding your suspicion of linear relationship between margin and generalization gap: we are not directly relating the two using a linear map. 
Note that we are converting the margin distribution to a feature vector via a nonlinear map (quartiles/moments), and it is these features that are regressed to the generalization gap by a linear map. This is a widely used idea for nonlinear regression; e.g. as in kernel SVM for regression (nonlinear feature space followed by linear fitting). One could also train a nonlinear (deep) neural net to predict the gap, but it would need regularization and more data to avoid overfitting while a linear combination of simple distributional features already attains high quality prediction (see next point) across ~700 pretrained models. The latter suggests that a linear relationship is indeed a very close approximation.\n\n2. The point of the paper is not to claim an optimal feature set, but to leverage *simple* and *easy to compute* features that could be extracted from the distribution (like quartiles or moments) can yet give a reasonable prediction of the generalization gap that is much better than recent theoretical upper bounds in the literature. We hope this could be a step toward constructing *practical* algorithms for improving generalization in deep networks. Regarding mathematical proof for why these features should explain the generalization gap: while such results would be very interesting, it is quite ambitious if not impossible. Nevertheless, we assess the quality of the linear fit using one of the standard statistical tools created for this purpose: Coefficient of Determination (CoD). As mentioned in the paper, in some scenarios we observe CoD=0.97 (max is 1.0) which indicates a reasonably good fit.\n\n", "Introducing the theory of margin distribution into the framework of deep learning is an interesting idea. And it seems that there is a related work [Optimal margin Distribution Network, Submission to ICLR 2019], which has tried to design a new loss function based on margin distribution and theoretically proved its generalization effect. As I know, the influence of margin distribution has always been a concern for generalization theory. [Schapire, 1998] [Wang, 2011] [Gao, 2013], and there are several new algorithms based on the theory of margin distribution in both SVM [Zhang, 2017] and Clustering [Zhang, 2018] frameworks. I think that authors should read these papers and add references to them.\nRegarding the content of the paper, I am confused about the linear (or log() ) estimation of the generalization gap: \"$\\hat{g} = a^T \\phi(\\theta) + b$\". Does this formula have a theoretical analysis or some statistical models to explain it? It seems unreasonable to directly explain the relationship between margin distribution and generalization with a simple linear relationship. I expect that the authors can theoretically give a formula to explain the relationship between the generalization gap and the margin distribution.\n\n\n[Optimal margin Distribution Network, Submission to ICLR 2019] Anonymous. “Optimal margin Distribution Network” Submitted to International Conference on Learning Representations 2019\n[Schapire, 1998] Schapire, R., Freund, Y., Bartlett, P. L., Lee, W. Boosting the margin: A new explanation for the effectives of voting methods. Annuals of Statistics 26 (5), 1651–1686. 1998\n[Wang, 2011] Wang, L. W., Sugiyama, M., Yang, C., Zhou, Z.-H., Feng, J. “A refined margin analysis for boosting algorithms via equilibrium margin.” Journal of Machine Learning Research 12, 1835–1863. 2011\n[Gao, 2013] Gao, W., and Zhou, Z.-H. 
\"On the doubt about margin explanation of boosting.\" Artificial Intelligence 203, 1-18. 2013\n[Zhang, 2017] Zhang, T., Zhou, Z.-H. \"Multi-Class Optimal Margin Distribution Machine.\" International Conference on Machine Learning. 2017.\n[Zhang, 2018] Zhang, T., Zhou, Z.-H. \"Optimal Margin Distribution Clustering.\" Proceedings of the National Conference on Artificial Intelligence, 2018.\n" ]
[ -1, -1, -1, 6, 5, -1, -1, 9, -1, -1 ]
[ -1, -1, -1, 4, 4, -1, -1, 4, -1, -1 ]
[ "SylJSZ_STX", "iclr_2019_HJlQfnCqKX", "Ske1HVNc07", "iclr_2019_HJlQfnCqKX", "iclr_2019_HJlQfnCqKX", "H1xTRJmc3m", "Hkl7VhOqhX", "iclr_2019_HJlQfnCqKX", "BJgDuN3On7", "iclr_2019_HJlQfnCqKX" ]
iclr_2019_HJlmHoR5tQ
Adversarial Imitation via Variational Inverse Reinforcement Learning
We consider the problem of learning the reward and policy from expert examples under unknown dynamics. Our proposed method builds on the framework of generative adversarial networks and introduces empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies. Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which advantageously leads to more generalized behaviors that result in learning near-optimal rewards. Our method simultaneously learns empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation. We evaluate our approach on various high-dimensional complex control tasks. We also test our learned rewards in challenging transfer learning problems where training and testing environments are made to be different from each other in terms of dynamics or structure. The results show that our proposed method not only learns near-optimal rewards and policies that match expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms.
accepted-poster-papers
This paper proposes a regularization for IRL based on empowerment. The paper has some good results, and is generally well-written. The reviewers raised concerns about how the approach was motivated; these concerns have largely been addressed by the reframing of the algorithm from the perspective of regularization. Now, all reviewers agree that the paper is somewhat above the bar for acceptance. Hence, I also recommend accept. There are several changes that the authors are strongly encouraged to incorporate in the final version of the paper (based on discussion between the reviewers): - The claim that empowerment acts as a regularizer in the policy update is a fairly complicated interpretation of the effect of the algorithm. It relies on an approximation derived in the appendix that relates the proposed objective to an empowerment-regularized IRL formulation. The new framing makes much more sense. However, the one-sentence reference to this section of the appendix in the main paper is not appropriate given that it is central to the claims of the paper's contribution. More discussion in the main text should be included. - There are still some parts of the implemented algorithm that could introduce bias (using a target network in the shaping term, which differs from the theory in Ng et al. 1999), but this concern could be remedied by a code release. The authors are strongly encouraged to link to the code in the final non-blind submission, especially since IRL implementations tend to be quite difficult to get right. - The authors said they would change the way they bold their best numbers in their rebuttal. The current paper does not make the promised change, and actually adopts different bolding conventions in different tables, which is even more confusing. The numbers should be bolded in a consistent way, bolding the numbers with the best performance up to statistical significance.
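The meta-review's distinction between unbiased potential-based shaping and the biased variant can be made concrete in a few lines; `phi` below is a generic potential (or empowerment estimate) supplied by the caller, and nothing here is taken from the paper's implementation.

```python
def shaped(r, phi, s, s_next, gamma=0.99, potential_based=True):
    """Contrast the two shaping terms discussed above.

    potential_based=True gives Ng et al. (1999)'s F = gamma*phi(s') - phi(s),
    which provably leaves the optimal policy unchanged; False gives the
    F = gamma*phi(s') variant, which in general biases the resulting policy.
    """
    if potential_based:
        return r + gamma * phi(s_next) - phi(s)
    return r + gamma * phi(s_next)
```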
train
[ "SJdOg6R3Q", "H1gjxRaq1V", "ryxBthEq1N", "SkeaSLLMyE", "rJgOYVPcnX", "HJgv40ZF0m", "SklJGpL40Q", "BklSaQRunm", "rJlvBZqP6X", "HJeiRvOqTm", "r1xjTdf_TX", "BJldKTtvpX", "BJlixw6ep7", "rke1hVag6Q" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "Summary/Contribution:\nThis paper builds on the AIRL framework (Fu et al., 2017) by combining the empowerment maximization objective for optimizing both the policy and reward function. Algorithmically, the main difference is that this introduces the need to optimize a inverse model (q), an empowerment function (Phi) and alters the AIRL updates to the reward function and policy. This paper presents experiments on the original set of AIRL tasks, and shows improved performance on some tasks.\n\nPros:\n - The approach outperform AIRL by a convincing margin on the crippled ant problem, while obtaining comparable/favorable performance on other benchmarks.\n\nCons:\n - The justification for using the empowerment maximization framework to learn the shaping parameters is unclear. The formulation introduces a potentially confounding factor by biasing the policy optimization which clouds the experimental picture. \n\nJustification for rating:\nThis paper presents good empirical results, but without a clear identification of the source of improvement. I lean on the side of rejecting unless the authors can better eliminate any potential bias in their formulation (see question below). The justification for combining the empowerment maximization objective is also unclear while being integral to the novelty of the proposed method. \n\nQuestions I could not resolve from my reading:\n - The \"imitation learning benchmark\" numbers in Table 2 are different from the original AIRL paper. Do the authors have an explanation as to why? Is this only due to a difference in the expert performance?\n - Can the authors confirm that in the transfer experiments, the policy is optimized with only the transfered reward and no empowerment bonus? Otherwise, can the authors comment on whether the performance benefits could be explained by the additional bonus.\n - In equation (12), \\Phi is optimized as an (approximate) mutual information, not a value function, so it is not clear why this term approximates the advantage (I suspect this is untrue in EAIRL as V* is recovered at optimality in the AIRL/GAN-GCL formulation). Can the authors comment?\n - Why is w* unnormalized? Unless I am misunderstanding something, in the definition immediately above it, there is a normalization term Z(s). \n\nOther comments:\n - \"AIRL(s, a) fails to learn rewards whereas EAIRL recovers the near optimal rewards function\" -> This characterization is strange since on some tasks AIRL(s,a) outperforms or is within one standard deviation of EAIRL (e.g. on Half Cheetah, Ant, Swimmer, Pendulum).\n - \"Our experimentation highlights the importance of modeling discriminator/reward functions.. as a function of both state and action\". AIRL(s) is better on both the pointmass and crippled-ant task than AIRL(s,a). Can the authors clarify?\n - \"Our method leverages .. and therefore learns both reward and policy simultaneously\". Can the authors clarify in what sense the reward and policy is being learned simultaneously in EAIRL where it is not in AIRL?\n - In all the tables, the authors' approach is bolded as oppose to the best numbers. 
I would instead prefer that the authors bold the best numbers to avoid confusion.\n\n- Typos:\n - \"the imitation learning methods were proposed\"\n - \"quantify an extent to which\" \n - \"GAIL uses Generative Adversarial Networks formulation\"\n - \"grantee\"\n - \"no prior work has reported the practical approach\"\n - \"but, to\"\n - \"(see (Fu et al., 2017))\"\n", "Due to the double-blind submission policy of ICLR, we didn't link the code with our paper, but for now, you can download it here:\n\nhttps://drive.google.com/file/d/1wK51y5cERqXgC3H_7Ku5nXEJtKmQB4Rx/view\n\nPlease let me know if you face any trouble downloading/running it.\n\nThanks", "Are the authors able to release code with this submission?", "The proposed method uses empowerment both, for reward shaping and for regularization.\nThe reward function is defined as a (properly) shaped reward function, r + \\gamma\\Phi(s') - \\Phi(s) (empowerment-based reward shaping) and the policy is optimized using a regularized TRPO update rule (empowerment-based regularization). Some of the effects _approximately_ cancel out such that the update is similar to a standard TRPO update with reward fuction r + \\gamma\\Phi(s'). However, this is a quite wild approximation (treating the inverse model as current policy) and hence the actual algorithm uses a combination of reward shaping and regularization. I think it would be much nicer to present a (slighly different) algorithm that only does regularization. This regularization can be achieved by _either_ using a different policy update rule, or (better) by having an additional objective in the reward function. Note that none of these have anything to do with reward shaping! ", "The authors propose empowerment-based adversarial inverse reinforcement learning (EAIRL), an extension of AIRL which uses empowerment (which quantifies the extent that an agent can influence its state, see eq. 3) as a reward-shaping potential to recover more faithful learned reward functions. \n\nEvaluation: 4/5 Experiments are more preliminary but establish the benefit of the approach.\nClarity: 4/5 Well written. Just a few typos (see below minor comments)\nSignificance: 4/5 Effective, well motivated approach. Excellent transfer learning results.\nOriginality: 3.5/5 As the empowerment subroutine is existing work, as is AIRL, combining previous work, but effectively.\n\nRating: 7/10\nConfidence: 3/5 Reviewed this paper in a little less detail than I would prefer, due to time constraints. I will review in more detail and update this and add any additional questions/comments below the minor comments below.\n\nPros:\n- Extension of AIRL which utilizes empowerment to advance the SOE in reward learning\n- Well written, related previous work well explained.\nCons:\n- Experiments more preliminary\n- Combines existing approaches, somewhat incremental\n\nMinor comments: \n- grantee (typo), barely utilized -> not fully realized?, \n\n----\n\nUpdated review:\n\nAfter reviewing the comments and the paper in more detail (whose story has evolved substantially) , I have revised my score slightly lower. While in hindsight I can see that the paper has definitely improved, the story has changed rather dramatically, and appears to be still unfolding: the paper's many new elements require further maturation, and that the utility of empowerment for reward shaping and/or regularization to evolve AIRL (i.e. the old story vs. the new story) still needs further investigation/maturation. 
If the paper is accepted I'm reasonably confident that the authors will be able to \"finish up\" and address these concerns. \n(typo: eq. 4 omits maximizing argument)", "We thank our anonymous reviewer for going through our revised paper and providing us with more constructive feedback. Accordingly, we have further improved our paper especially Section 5 (Discussion) to address our reviewer’s comments. \n\nReviewer’s comment: The proposed method uses the empowerment both for regularization as well as for reward shaping, but it is not clear whether the latter improves generalization. The benefit of using empowerment (whether for reward shaping or for regularization) should be discussed. Empowerment for generalization is currently hardly motivated.\n\nResponse:\n\nWe discuss the benefits of using empowerment for regularization as a technique to prevent the policy from overfitting expert demonstrations which leads to learning generalized reward functions. We also present an alternative view of seeing our regularization as a result of biased reward shaping. For more details, please refer to Section 5, paragraph 2-3. \nSummary:\nIn the scalable MaxEnt-IRL framework (Finn et.al 2016), the normalization term is approximated by importance sampling where the importance-sampler/policy is trained to minimize the KL-divergence from the distribution over expert trajectories. However, merely minimizing the divergence between expert demonstrations and policy generated samples leads to localized policy behavior which hinders learning generalized reward functions. In our proposed work, we regularize the policy update with empowerment. Hence, we update our policy to reduce the divergence from expert data distribution as well as to maximize the empowerment (Eqn. 12). The proposed regularization prevents premature convergence to local behavior thus leads to robust rewards learning without any restriction on modeling rewards as a function of states only. \nAn alternative way to interpret our empowerment-regularized policy optimization is through the perspective of reward shaping. Ng et al. (1999) proposed that the reward shaped with a potential function F of form γΦ(s' )-Φ(s) does not induce a bias in policy as the optimal policy in MDP M'=(S,A,P,R'=R+F,ρ_0,γ) will also be optimal in the MDP M=(S,A,P,R,ρ_0,γ). However, in our proposed method, we shape our reward R with a discounted empowerment F=γΦ(s') (Eqn. 12) to induce the bias in our policy optimization. The induced bias is due to reward shaping R'=R+F that leads to generalized policy behavior. Furthermore, it is evident that the optimal policy in MDP M'=(S,A,P,R'=R+γΦ,ρ_0,γ) will no longer be optimal in MDP M=(S,A,P,R,ρ_0,γ) as F=γΦ(s') rather than F=γΦ(s' )-Φ(s). However, depending on the hyperparameter γ, the induced bias can be reduced to learn the optimal policies matching the expert behaviors.\n", "Thanks for the revision; I agree that the quality has significantly improved and updated my review.", "The paper proposes a method for inverse reinforcement learning based on AIRL. It's main contribution is that the shaping function is not learned while training the discriminator, but separately as an approximation of the empowerment (maximum mutual information). This shaping term aims to learn disentangled rewards without being restricted to learning state-only reward functions, which is a major restriction of AIRL.\n\nThe main weakness of the paper is, that it does not justify or motivate the main deviations compared to AIRL. 
The new objective for updating the policy is especially problematic because it does no longer correspond to the RL objective but includes an additional term that biases the policy towards actions that increase its empowerment. Although both terms of the update can be derived independently from an IRL and Empowerment perspective respectively, optimizing the sum was not derived from a common problem formulation. By combining these objectives, the learned reward function may lead to policies that fail to match the expert demonstration without such bias. This does not imply that the approach is not sound per se, however, simply presenting such update without any discussion is insufficient--especially given that it constitutes the main novelty of the approach. I think the paper would be much stronger if the update was derived from an empowerment-regularized IRL formulation. And even then, the implications of such bias/regularization would need to be properly discussed and evaluated, in particular with respect to the trade-off lambda, which--again--is hardly mentioned in the submission. I'm also not sure if the story of the paper works out; when we simply want to use empowerment as shaping term, why not use two separate policies for computing the empowerment and reward function respectively. Is the bias in the policy update maybe more important than the shaping term in the discriminator update for learning disentangled rewards?\n\nKeeping these issues aside, I actually like the paper. It tackles the main drawback of AIRL and the idea seems quite nice. Having a reward function that does not actively induce actions that can be explained by empowerment, may not always be appropriate, but often enough it may be a sensible approach to get more generalizable reward functions. The paper is also well written with few typos. The parts that are discussed are clear and the experimental results seem fine as well (although more experiments on the reward transfer would be nice).\n\nMinor notes:\nI think there is a sign error in the policy update\nTypo in the theorem, grantee should be guarantee\n\nQuestion:\nPlease confirm that the reward transfer was learned with a standard RL formulation. Does the learned policy change, when we use the empowerment objective as well?\n\n\n\nUpdate (22.11)\nI think that the revised version is much better than the original submission because it now correctly attributes the improved generalization to an inductive bias in the policy update. However, the submission still seems borderline to me. \n\n- The proposed method uses the empowerment both for regularization as well as for reward shaping, but it is not clear whether the latter improves generalization. If the reward shaping was not necessary, it would be cleaner to use empowerment only for regularization. If the reward shaping is beneficial, this should be shown in an ablative experiment.\n\n- The benefit of using empowerment (whether for reward shaping or for regularization) should be discussed. Empowerment for generalization is currently hardly motivated.\n\n- The derivation could be a bit more rigorous.\n\nAs the presentation is now much more sound, I slightly increased my rating.", "We like to thank the anonymous reviewers for their helpful and constructive comments. We provide the individual response to each reviewer's comments. 
Here we report the list of main changes which we have added to the new revision.\n\n1- We motivate our method through Empowerment-Regularized Maximum Entropy IRL.\n2- A discussion on the policy update rule which maximizes both the learned reward function and Empowerment (Appendix B). To leave the derivation simple, we have modified the equation (6) to absolute error instead of the mean-square error, and all experimental results are updated accordingly.\n3- Further clarifications on why state-action formulation of reward function is vital to both reward and policy learning (Section 5, Paragraph 3).\n4- Further explanations on transfer learning tasks that we use standard RL formulation using only learned rewards, no empowerment to train the agents.\n5-Addressed all typological errors mentioned by the reviewers.\n", "We thank our anonymous reviewer for providing comprehensive and constructive feedback which helped us significantly improve the quality of our paper. We agree with the reviewer, and all modifications have been made to pivot our work around Empowerment-regularized MaxEnt-IRL.\n \nIn the paper (Appendix B), we include the derivation of Empowerment-regularized MaxEnt-IRL. It is highlighted that under empowerment regularization, the policy/importance-sampler is trained to minimize its divergence from the true distribution over expert demonstrations and to maximize the empowerment. The resulting policy update rule (see Eqn. 14 in the paper) becomes:\n\nmax_π⁡ E_π [∑_(t=0)^T r(s,a)+Φ(s' )-log⁡π(a|s)]\n\nIn Appendix B.1, we show that our policy training objective r_π is equivalent to above equation, i.e.,\n\nr_π (s,a,s' )=log⁡[D(s,a,s' )]- log[(⁡1-D(s,a,s' )) ]- λ_I L_I (1)\nr_π (s,a,s' )=r(s,a,s' )+γΦ(s' )+λH(⋅) (2)\n\nwhereas γ and λ are hyperparameters and H(⋅) contains the entropy terms.\n\nReviewer’s comment 1: The policy roughly optimizes \"reward + next empowerment.\" I wonder whether we could show similar generalization benefits by directly optimizing this objective.\n\nResponse: We have verified by rerunning the experiments using the above-mentioned simplified policy objective, and it turns out that we obtain the same generalization as obtained by optimizing (1). Hence, just as our reviewer expected, the empowerment-based regularization prevents the policy from overfitting expert demonstration, thus leads to a generalized behavior which results in learning near-optimal rewards.\n\nReviewer’s comment 2: In the submitted version, there is a huge discrepancy between the text and the actual algorithm.\n\nResponse: The discrepancy has been removed. The revised paper now motivates the algorithm based on the notion of Empowerment-regularized MaxEnt-IRL. \n", "Thanks for the new derivation. I think it sheds some more light on the policy bias, although I think that setting the inverse model equal to the current policy is going too far and it does not really make sense to talk about \"maximizing the entropy of q(.)\" given that q is a variational distribution that is fixed during the policy improvement. \nHowever, treating the last equation of Appendix B as a rough approximation of the actual objective that is maximized by the policy updates and further assuming that lambda_I=0.99 is close enough to 1, we can see that the policy roughly optimizes \"reward + next empowerment\". I wonder whether, we could show similar generalization benefits by directly optimizing this objective, e.g. 
the discriminator could be computed as exp(r+\\gamma*\\Phi(s'))/(exp(r+\\gamma*\\Phi(s'))+\\pi) and a standard TRPO/PPO update could be used. According to the derivations in Appendix B, this should roughly correspond to the same algorithm. Let's say we can get similar results (potentially replacing \\gamma by a hyper-parameter and using higher entropy regularization), such algorithm could be derived in a principled way--from and empowerment-regularized MaxEnt-IRL formulation.\nIn the submitted version, there is a huge discrepancy between the text (generalization is achieved by using empowerment as potential for reward (un)shaping) and the actual algorithm (generalization is achieved by 1% of reward shaping and 99% of policy biasing). These are two completely different approaches; the former does not affect the learned policy (at least in theory) whereas the latter approach relates to regularization and has the potential to lead to much better generalization by preventing overfitting the demonstrations. As these are different approaches, deriving the algorithm from a reward shaping (better: \"advantage unshaping\") perspective can not be fully sound--which ultimately manifests in the form of a modified policy update rule which is not properly derived. From a reward shaping perspective, the policy for computing the empowerment should not be related at all to the policy that maximizes the reward.\nI think there is not much missing to turn the submission into a nice paper (if my suggested variant would work out of the box, it might even be possible to revise the submission), however, in the current state the submission is in my opinion not sufficiently sound and almost dangerous, because it gives a wrong impression about the way generalization is achieved.", "We would like to thank our reviewer for such comprehensive feedback. We have revised the manuscript to address reviewer comments. The response summaries are as follow:\n\nIssue 1: Please confirm that the reward transfer was learned with a standard RL formulation.\nResponse:\nYes, we use standard RL formulation in reward transfer tasks, i.e., the policy is optimized with only the transferred reward and no empowerment bonus.\nIssue 2: Does the learned policy change, when we use the empowerment objective as well? \nResponse:\nFor the stated values of entropy (λ_h) and information-gain regularizers (λ_I), the policy maximizes the shaped reward and entropy. Shaping rewards induce a policy behavior that leads to learning a generalized reward function. Furthermore, our experiment shows that the policy converges to an expert-like (demonstrated) behavior despite that it maximizes both reward and empowerment. \n\nWe include a derivation in the paper to highlight the impact of trade-off lambda on the policy bias towards maximizing the empowerment or imitating the expert behavior. To leave the derivation simple, we have modified the equation (6) to absolute error instead of the mean-square error, and all experimental results are updated accordingly. We have verified that the modification doesn’t impact the results since the purpose of equation (6) is to measure the discrepancy between forward and inverse models. 
In our paper, we show that the discriminative reward r̂ simplifies to the following:\n\nr̂ = log[D(s,a,s')] - log[1 - D(s,a,s')] = f(s,a,s') - λ_h log[π(a|s)]\n\nThe policy is trained to maximize r_π(s,a,s') = r̂(s,a,s') - λ_I L_I, which leads to the following expression:\n\nr_π(s,a,s') = f(s,a,s') + (λ_I - λ_h) log π(a|s) - λ_I log q(a|s,s') + λ_I Φ(s)\n\nNote that the inverse model q(⋅) is trained using the trajectories generated by the policy π(⋅) (see Algorithm 1), and both models learn distributions over actions. Therefore, maximizing the entropy of q(⋅) is equivalent to maximizing the entropy of π(⋅). Thus, the entropy terms can be combined together as:\n\nr_π(s,a,s') = f(s,a,s') + λH(⋅) + λ_I Φ(s)\n\nwhere λ is a function of λ_I and λ_h, and H(⋅) is the entropy. Since f(s,a,s') = r(s,a,s') + γΦ(s') - Φ(s), the overall policy update rule becomes:\n\nr_π(s,a,s') = r(s,a,s') + γΦ(s') - (1 - λ_I)Φ(s) + λH(⋅)\n\nHence, when λ_h < λ_I < 1, the policy objective will be to maximize the shaped reward as well as the entropy. For the stated values of λ_I and λ_h, the policy training is slightly biased toward maximizing the empowerment. This bias of our policy training towards maximizing the empowerment leads to a generalized policy behavior, which results in robust reward learning. \n", "We would like to thank our reviewer for such comprehensive reviews. The response summaries are as follows.\n\n1: The "imitation learning benchmark" numbers in Table 2 are different from the original AIRL paper. Do the authors have an explanation as to why? Is this only due to a difference in the expert performance?\n\nResponse: Yes, the different values are because of the difference in expert performances. For instance, if you compare half-cheetah in our Table 2 with AIRL(s,a) (Fu et al., 2017), the results are similar because the experts performed comparably.\n\n2: Can the authors confirm that in the transfer experiments, the policy is optimized with only the transferred reward and no empowerment bonus? Otherwise, can the authors comment on whether the performance benefits could be explained by the additional bonus.\n\nResponse: Yes, the policy is optimized using the transferred reward only (no empowerment bonus) with a standard reinforcement learning approach.\n\n3: In equation (12), \\Phi is optimized as an (approximate) mutual information, not a value function, so it is not clear why this term approximates the advantage (I suspect this is untrue in EAIRL as V* is recovered at optimality in the AIRL/GAN-GCL formulation). Can the authors comment?\n\nResponse: Yes, you are right, equation 12 doesn’t hold for the proposed method. \n\n4: Why is w* unnormalized? Unless I am misunderstanding something, in the definition immediately above it, there is a normalization term Z(s).\n\nResponse: Although w* is defined to be normalized by Z(s), there is no direct mechanism for sampling actions or computing Z(s). Therefore, w* is implicitly unnormalized; for more details, please refer to Section 4.2.2 of (Mohamed & Rezende, 2015).\n\n5: \"AIRL(s, a) fails to learn rewards whereas EAIRL recovers the near optimal rewards function\" -> This characterization is strange since on some tasks AIRL(s,a) outperforms or is within one standard deviation of EAIRL (e.g. on Half Cheetah, Ant, Swimmer, Pendulum).\n\nResponse: The paper attempts to solve two separate problems, i.e., 1) policy learning and 2) reward learning. For instance, GAIL only solves the policy learning problem and does not recover a reward function. 
Likewise, AIRL (s, a) can learn a policy (see Table 2) but fails to recover reward function (see Table 1) as it performs poorly on the transfer learning tasks. \n\n6: Our experimentation highlights the importance of modeling discriminator/reward functions.. as a function of both state and action\". AIRL(s) is better on both the pointmass and crippled-ant task than AIRL(s,a). Can the authors clarify?\n\nResponse: Please refer to section 5 for details. We highlight the importance of modeling rewards as a function of states and actions in both reward and policy learning problems.\nPolicy learning:\nThe results show that AIRL with state-only rewards, AIRL(s), fails to learn a policy whereas EAIRL, GAIL, and AIRL that include state-action reward/discriminator formulation successfully recover the policies (see Table 2). Hence, our empirical results show that it is crucial to model reward/discriminator as a function of state-action as otherwise, adversarial imitation learning fails to retrieve policy from expert data. \nReward learning:\nThe results in Table 1 shows that AIRL with state-only rewards (AIRL(s)) does not recover the action dependent terms of the ground-truth reward function that penalizes high torques. Therefore, the agent shows aggressive behavior and flips over after few steps (see the accompanying video). The formulation of rewards as a function of both states and actions is crucial for action regularization in any locomotion or ambulation tasks that discourage actions with large magnitudes. This need for action regularization is well known in optimal control literature and limits the use cases of a state-only reward function in most practical, real-life applications.\n\n7: \"Our method leverages .. and therefore learns both reward and policy simultaneously\". Can the authors clarify in what sense the reward and policy is being learned simultaneously in EAIRL where it is not in AIRL?\n\nResponse: AIRL with state-action reward formulation (AIRL (s, a)) learns a policy but fails to recover a ground-truth reward function (see Table 1). To determine the reward function, AIRL restricts state-only reward formulation which might be suitable for learning the reward but fails to learn the expert-like behavior policy. Hence, AIRL requires state-only formulation for reward learning and state-action formulation for policy learning whereas our method requires only state-action formulation to learn both rewards and policies from expert demonstrations. \n\n8: In all the tables, the authors' approach is bolded as opposed to the best numbers. I would instead prefer that the authors bold the best numbers to avoid confusion.\nResponse: Modifications made. \n\n9: Typos\nResponse: All the typo errors are removed. \n", "We would like to thank our reviewer for positive feedback. We would like to satisfy the reviewer concerns about the paper as follow.\nIssue 1: Experiments more preliminary\nResponse: \nThe transfer learning tasks are challenging. In the case of crippled-ant (see Appendix B.1), the standard ant can move sideways whereas the crippled-ant must rotate to move forward. Similarly, in-case of point-mass (see Appendix B.2), the agent must take the opposite route compared to training environment to reach the target. These environments test our method for generalizability and ability to learn transferable/portable reward functions.\n---------\nIssue 2: Combines existing approaches, somewhat incremental\nResponse:\nWe agree that our method combines the existing approaches. 
However, the combination is not straightforward, and we combine the two approaches in a novel way. In (Mohamed & Rezende, 2015), the method uses variational information maximization to learn the empowerment. Once the empowerment is determined, it is used as an intrinsic motivation to train a reinforcement learning agent, and the results are presented in simple 2D environments. On the other hand, AIRL learns a disentangled reward by restricting itself to a state-only reward function, which is a major drawback of their method. Our method uses variational information maximization to learn a reward-shaping potential function (the empowerment) in parallel with learning the reward and policy from expert data, unlike (Mohamed & Rezende, 2015), where the empowerment is learned offline. As a result, our method successfully learns portable, near-optimal rewards without being restricted to state-only reward functions. Furthermore, AIRL (Fu et al., 2017) requires a state-only formulation for reward learning and a state-action formulation for policy learning, whereas our method requires only the state-action formulation to learn both rewards and policies from expert demonstrations.\n" ]
[ 6, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_HJlmHoR5tQ", "ryxBthEq1N", "BJlixw6ep7", "HJgv40ZF0m", "iclr_2019_HJlmHoR5tQ", "BklSaQRunm", "HJeiRvOqTm", "iclr_2019_HJlmHoR5tQ", "iclr_2019_HJlmHoR5tQ", "r1xjTdf_TX", "BJldKTtvpX", "BklSaQRunm", "SJdOg6R3Q", "rJgOYVPcnX" ]
iclr_2019_HJx9EhC9tQ
Reasoning About Physical Interactions with Object-Oriented Prediction and Planning
Object-based factorizations provide a useful level of abstraction for interacting with the world. Building explicit object representations, however, often requires supervisory signals that are difficult to obtain in practice. We present a paradigm for learning object-centric representations for physical scene understanding without direct supervision of object properties. Our model, Object-Oriented Prediction and Planning (O2P2), jointly learns a perception function to map from image observations to object representations, a pairwise physics interaction function to predict the time evolution of a collection of objects, and a rendering function to map objects back to pixels. For evaluation, we consider not only the accuracy of the physical predictions of the model, but also its utility for downstream tasks that require an actionable representation of intuitive physics. After training our model on an image prediction task, we can use its learned representations to build block towers more complicated than those observed during training.
accepted-poster-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion. - The problem is interesting and challenging - The proposed approach is novel and performs well. 2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision. - The clarity could be improved 3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately. Many concerns were clarified during the discussion period. One major concern had been the experimental evaluation. In particular, some reviewers felt that experiments on real images (rather than in simulation) was needed. To strengthen this aspect, the authors added new qualitative and quantitative results on a real-world experiment with a robot arm, under 10 different scenarios, showing good performance on this challenging task. Still, one reviewer was left unconvinced that the experimental evaluation was sufficient. 4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another. Consensus was not reached. The final decision is aligned with the positive reviews as the AC believes that the evaluation was adequate.
train
[ "Bkgyox9DnQ", "BJxwg4pjAX", "H1eh77VKR7", "SyxLdj-IhX", "H1x_6tMgAm", "rygjPOfgRX", "HJgCWEMxCm", "rkglZ8xo2Q", "H1e1DWaMqm", "B1exCohM97", "SkeuI3gb5m", "SJl7bdRy5Q" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "public", "public" ]
[ "Summary:\nThe paper presents a platform for predicting images of objects interacting with each other under the effect of gravitational forces. Given an image describing the initial arrangement of the objects in a scene, the proposed architecture first detects the objects and encode them using a perception module. A physics module then predicts the final arrangement of the object after moving under the effects of gravity. A rendering module takes as input the predicted final positions of objects and returns an image. The proposed architecture is trained by using pixel labels only, by reducing the gaps between the predicted rendered images and the images returned by the MuJuCo physics engine. This error's gradient is back-propagated to the physics and perception modules. The proposed platform is also used for planning object placements by sampling a large number of object shapes, orientations and colors, predicting the final configurations, and selecting initial placements that lead to final configurations that are as close as possible to given goal configurations using the L2 norm in the VGG features. Experiments performed in a simple blocks world show that the proposed approach is not only useful for prediction, but can also be used for planning object placements.\nClarity:\nThe paper is not very well written. The description of the architecture should be much more precise. Some details are given right before the conclusion, but they are still just numbers and leave a lot of questions unanswered. For instance, the perception module is explained in only a few line in subsection 2.1. Some concrete examples could help here. How are the object proposals defined? How are the objects encoded? What exactly is being encoded here? Is it the position and orientation? \nOriginality:\nThe proposed architecture seems novel, but there are many closely related works that are based on the same idea of decomposing the system into a perception, a physics simulation, and a rendering module. Just from the top of my head, I can think of the SE3-Nets. There is also a large body of work from the group of Josh Tanenbaum on similar problems of learning physics and rendering. I think this concept is not novel anymore and the expectations should be raised to real applications. \nSignificance:\nThe simplicity of the training process that is fully based on pixel labeling makes this work interesting. There are however some issues related to the experimental evaluation that remains unsatisfactory. First, all the experiments are performed on a single benchmark, we cannot easily draw conclusions about a given algorithm based on a single benchmark. Second, this is a toy benchmark that with physical interactions that are way less complex than interactions that happen between real objects. The objects are also not diverse enough in their appearances and textures. I wonder why the authors avoided collecting a dataset of real images of objects and using it to evaluate their algorithm instead of the toy artificial data. I also suspect that with 60k training images, you can easily overfit this task. How can this work generalize to real physical interactions? How can you capture mass and friction, for example?\nPlanning is based on sampling objects of different shapes and colors, do you assume the existence of such library in advance? \nThe baselines that are compared to are also not very appropriate. For instance, comparing to no physics does not add much information. 
We know that the objects will fall after they are dropped, so the \"no physics\" baseline will certainly perform badly. Comparisons to SAVP are also unfair because it requires previous frames, which are not provided here, and SAVP is typically used for predicting the very next frames and not the final arrangements of objects, as done here.\nIn summary: I think the authors are on something here and the idea is great. However, the paper needs to be made much clearer and more precise, and the experimental evaluation should be improved by performing experiments in a real-world environment. Otherwise, this paper will not have much impact. \n\nPost-rebuttal update:\nThe paper was substantially improved. New experiments using real objects have been included, this clearly demonstrates the merits of the proposed method in robotic object manipulation. ", "We would like to thank the reviewers and commenters for their feedback on our submission. Our revised draft incorporates many of their suggestions. Most importantly:\n\n1. We have run our model and planning procedure on a Sawyer robotic arm using real goal images. Results can be found at the following website: https://sites.google.com/view/object-models \nas well as in the new Section 3.4 and Appendix B of the revision. Our results, robot stacking of up to 9 shapes directly from real images, has not been demonstrated in prior work, regardless of the complexity of those shapes.\n\n2. We have given a more precise description of the planning procedure in Algorithm 1 on page 4.\n\nOther changes are discussed in the individual responses below. \n", "I thank the authors for the changes made to the document, which clarify some of my questions. \nI still think that the experimental part of the paper is too weak for a publication at ICLR at this point.\n", "edit: the authors nicely revised the submission, I think it is a very good paper. I increased my rating.\n\n-----\n\nThis paper presents a method that learns to reproduce 'block towers' from a given image. A perception model, a physics engine model, and a rendering engine are first trained together on pairs of images.\nThe perception model predicts a representation of the scene decomposed into objects; the physics engine predicts the object representation of a scene from an initial object representation; the rendering engine predicts an image given an object representation.\n\nEach training pair of images is made of the first image of a sequence when introducing an object into a scene, and of the last image of the sequence, after simulating the object's motion with a physics engine. The 3 parts of the pipeline (perception, physics, rendering) are trained together on this data.\n\nTo validate the learned pipeline, it is used to recreate scenes from reference images, by trying to introduce objects in an empty scene until the given scene can be reproduced. It outperforms a related pipeline that lacks a scene representation based on objects.\n\nThis is a very interesting paper, with new ideas:\n- The object-based scene representation makes a lot of sense, compared to the abstract representation used in recent work. 
\n- The training procedure, based on observing the result of an action, is interesting as the examples are easy to collect (except for the fact that the ground truth segmentation of the images is used as input, see below).\n\nHowever, there are several things that are swept 'under the carpet' in my opinion, and this should be fixed if the paper is accepted.\n\n* the input images are given in the form of a set of images, one image corresponding to the object segmentation. This is mentioned only once (briefly) in the middle of the paragraph for Section 2.1, while this should be mentioned in the introduction, as this makes the perception part easier. There is actually a comment in the discussion section and the authors promised to clarify this aspect, which should indeed be more detailed. For example, do the segments correspond to the full objects, or only the visible parts?\n\n* The training procedure is explained only in Section 4.1. Before reaching this part, the method remained very mysterious to me. The text in Section 4.1 should be moved much earlier in the paper, probably between current sections 2.3 and 2.4, and briefly explained in the introduction as well.\nThis training procedure is in fact fully supervised - which is fine with me: Supervision makes learning 'safer'. What is nice here is that the training examples can be collected easily - even if the system was not running in a simulation.\n\n* if I understand correctly the planning procedure, it proceeds as follows:\n- sampling 'actions' that introduce 1 object at a time (?)\n- for each sampled action, predicting the scene representation after the action is performed, by simulating it with the learned pipeline, \n- keeping the action that generates a scene representation close to the scene representation computed for the goal image of the scene.\n- performing the selected action in a simulator, and iterate until the number of performed actions is the same as the number of objects (which is assumed to be known).\n\n-> how do you compare the scene representation of the goal image and the predicted one before the scene is complete? Don't you need some robust distance instead of the MSE?\n-> are the actions really sampled randomly? How many actions do you need to sample for the examples given in the paper?\n\nI also have one question about the rendering engine: Why using the weighted average of the object images? Why not using the intensity of the object with the smallest predicted depth? It should generate sharper images. Does using the weighted average make the convergence easier?\n", "Thank you for your thorough feedback. We have uploaded a revision to make the model and planning procedure clearer. We will upload a second revision this coming week to include results on a Sawyer robot using real image inputs (qualitative results are given below in a video link).\n\n-- Use of segments in lieu of object property supervision\nWe have made more explicit the use of object segmentation in the Section 2 notation (where we describe the model and planning procedure). The segmentations correspond to only the visible parts of objects. We have clarified this in Section 3.1, where we describe data collection.\n\nWe have evaluated our approach on a Sawyer arm using physical blocks to demonstrate applicability to the real world (goo.gl/151BT1). 
Here we used simple color cues to segment the image observations.\n\n-- Planning procedure\nWe have added an algorithmic description on page 4 (Algorithm 1: Planning Procedure) to make this section clearer. To answer your question about comparing scenes with different number of objects: we match a proposed object to the goal object which minimizes L2 distance in the learned object representations. Some goal objects will be unaccounted for until the last step of the planning algorithm, when there is an action for each object in the goal image. \n\nWe have added details of the cross-entropy method (CEM) to Step 4 of Section 2.5. We sampled actions beginning from a uniform distribution and used CEM to update the sampling distribution. We used 5 CEM iterations with 1000 samples per iteration. Because all of the samples could be evaluated in batch-mode, there was little overhead to evaluating a large number of samples. \n\n-- Training procedure\nPer your suggestion, we have moved the training procedure to come after the model description and before the planning algorithm.\n\n-- Clarification on rendering module\nWe use a weighted average for composing individual object images so that the rendering process is fully differentiable. This design decision makes end-to-end training of the perception, physics, and rendering modules easier. \n", "Thank you for your thorough feedback. To address your comment about experiments in a real-world environment, we have tested our model on a Sawyer robot with real camera images. A representative video can be found here:\ngoo.gl/151BT1\nWe will update the paper to include these results this coming week. Additionally, we have already updated the paper to make the model and planning procedure clearer. Below, we describe some of these changes. \n\n-- Clarification on object encodings\nWe have explained more thoroughly that the object encodings are not supervised directly to have semantically meaningful components like position or orientation. As compared to most prior work on object-factorized representations, we do not assume access to ground truth properties for the objects. This is why the perception module cannot be trained independently; we have no supervision for its outputs. Instead, we train the perception, graphics, and physics modules jointly to reconstruct the current observations and predict the subsequent observation (Figure 2c). In this way, the object representations come to encode these attributes without direct supervision of such properties. Of course, learning representations via a reconstruction objective is not unique to our paper; what we show is that these representations can be sufficient for planning in physical understanding tasks.\n\n-- Relation to prior work\nThe most relevant works about learning physics and rendering you might be referring to are Neural Scene De-rendering (NSD) and Visual Scene De-animation (VDA). These works learn object encodings by direct supervision of properties like position and orientation. As discussed in the previous section, we weaken the requirement for ground-truth object properties, instead requiring only segments instead of attribute annotations. We previously cited VDA and have now added NSD along with a short description of this supervision difference. \n\nSE3-Nets used point cloud shape representations, whereas we use learned representations driven by an image prediction task. 
\n\n-- Generalization to real physical interactions\nWe have now demonstrated our model in the physical world (see video link given above). \n\n-- No-physics baseline, SAVP\nYes, it is not surprising that a model which did not predict physics did not perform well. We included this model as an ablation because we can better understand how our full model makes decisions by comparing it to the physics-ablated version, as in Figure 6. The SAVP baseline takes in a previous frame in the form of an object sample, similar to how our model views a sample by rendering an object mid-air, allowing for a head-to-head comparison to a black-box frame prediction approach. \n", "Thank you for your feedback and suggestions. We have updated the paper to make the planning algorithm clearer, give short descriptions of CEM and perceptual losses, and incorporate your terminology suggestions (‘rectangular cuboid’, ‘unary’, ‘binary’, etc). At the request of other reviewers, we have also tested our approach on a physical Sawyer robot. The following video gives a qualitative result analogous to Figure 4: \ngoo.gl/151BT1\nThese results will be included in the paper in a second revision this week. Below, we give more details about the current changes.\n\n-- Evaluation on downstream tasks \nDownstream task results were in the original submission (all Figures after 3 and Table 1); we have updated the paper to better differentiate between image prediction results in isolation and the use of our model’s predictions in a planning procedure to build towers. \n\nFigure 4 shows qualitative results on this building task, and Table 1 gives quantitative results. Figures 5 and 6 give some analysis of the procedure by which our model selects actions. Figure 7 briefly shows how our model can be adapted to other physics-based tasks: stacking to maximize height, and building a tower to make a particular block stable. \n\n-- Planning algorithm\nWe have added a more precise algorithmic description on page 4 to make the tower-building procedure clearer (Algorithm 1: Planning Procedure).\n\n-- Oracle models\nWe have added a sentence to the Table 1 caption to explain why O2P2 outperforms Oracle (pixels). The Oracle (pixels) model has access to the true physics simulator which generated the data, but not an object-factorized cost function. Instead, it uses pixel-wise L2 over the entire image (Section 3.2). The top row of Figure 4 is illustrative here: the first action taken by Oracle (pixels) was to drop the blue rectangular cuboid in the bottom left to account for both of the blue cubes in the target. Our model, despite having a worse physics predictor, performs better by virtue of its object factorization. \n\n-- Figure 4 clarification\nWe have updated the caption of Figure 4 and changed some text in the graphic. Figure 4 shows qualitative results on the tower building task described above. We show four goal images (outlined in green), and the towers built by each of five methods. This figure has a few utilities:\n 1. It illustrates what our model’s representations capture well for planning and what they do not. For example, most mistakes made by our model concern object colors. This suggests that object positions are more prominently represented by our model’s representations than color. \n 2. It shows why an object-factorization is still useful even if one has access to the “true” physics simulator (as discussed in the previous question).\n 3. 
It shows that the types of towers being built in the downstream task are not represented in the training set of the perception, graphics, and physics modules (depicted in Figure 3, where we show reconstruction and prediction results). The object-factorized predictions allow our model to generalize out of distribution more effectively than an object-agnostic video prediction model (Table 1). \n\n-- Reinforcement learning baseline\nWe have found that a PPO agent works poorly on this task, possibly due to the high dimensionality of the observation space (raw images). We will continue to try to get this baseline to work for the next revision, and would be happy to try out any other RL algorithms that the reviewer might suggest. ", "A method is proposed, which learns to reason on physical interactions of different objects (solids like cuboids, tetrahedrons etc.). Traditionally in related work the goal is to predict/forecast future observations, correctly predicting (and thus learning) physics. This is also the case in this paper, but the authors explicitly state that the target is to evaluate the learned model on downstream tasks requiring a physical understanding of the modelled environment.\n\nThe main contribution here lies in the fact that no supervision is used for object properties. Instead, a mask predictor is trained without supervision, directly connected to the rest of the model, ie. to the physics predictor and the output renderer. The method involves a planning phase, were different objects are dropped on the scene in the right order, targeting bottom objects first and top objects later. The premise here is that predicting the right order of the planning actions requires understanding the physics of the underlying scene.\n\nI particularly appreciated the fact, that object instance renderers are combined with a global renderer, which puts individual images together using predicted heatmaps for each object. With a particular parametrization, these heatmaps could be related to depth maps allowing correct depth ordering, but depth information has not been explicitly provided during training.\n\nImportant issues:\n\nOne of the biggest concerns is the presentation of the planning algorithm, and more importantly, a proper formalization of what is calculated, and thus a proper justification of this part. The whole algorithm is very vaguely described in a series of 4 items on page 4. It is intuitively almost clear how these steps are performed, but the exact details are vague. At several steps, calculated entities are “compared” to other entities, but it is never said what this comparison really results in. The procedure is reminiscent of particle filtering, in that states (here: actions) are sampled from a distribution and then evaluated through a likelihood function, resulting in resampling. However, whereas in particle filtering there is clear probabilistic formalization of all key quantities, in this paper we only have a couple of phrases which describe sampling and “comparisons” in a vague manner.\n\nSince the procedure performs planning by predicting a sequence of actions whose output at the end can be evaluated, thus translated into a reward, I would have also liked a discussion (or at least a remark) why reinforcement learning has not been considered here.\n\nI am also concerned by an overclaim of the paper. As opposed to what the paper states in various places, the authors really only evaluate the model on video prediction and not on other downstream tasks. 
A single downstream task is very briefly mentioned in the experimental section, but it is only very vaguely described, it is unclear what experiments have been performed and there is no evaluation whatsoever.\n\nOpen questions:\n\nWhy is the proposed method better than one of the oracles?\n\nMinor remarks:\n\nIt is unclear what we see in image 4, as there is only a single image for each case (=row) and method (=column). \n\nThe paper is not fully self-contained. Several important aspects are only referred to by citing work, e.g. CEM sampling and perceptual loss. These are concepts which are easy to explain and which do not take much space. They should be added to the paper.\n\nA threshold is mentioned in the evaluation section. A plot should be given showing the criterion as a function of this threshold, as is standard in, for instance, pose estimation literature.\n\nI encourage the authors to use the technical terms “unary terms” and “binary terms” in the equation in section 2.2. This is the way how the community referred to interactions in graphical models for relational reasoning long before deep learning showed up on the horizon, let’s be consistent with the past.\n\nI do not think that the physics module can be reasonable be called a “physics simulator” as has been done throughout the paper. It does not simulate physics, it predicts physics after learning, which is not a simulation.\n\nA cube has not been confused with a rectangle, as mentioned in the paper, but with a rectangular cuboid. A rectangle is a 2D shape, a rectangular cuboid is a 3D polyhedron.\n", "Thank you for your questions. We will include an appendix with more implementation details (currently in Section 5) in the next version. In the meantime, here we describe the reconstruction process in more depth. \n\n1. The perception network has four convolutional layers (32, 64, 128, 256 channels) with ReLU nonlinearities followed by a fully connected layer. It predicts a set of object representations given an image at t=0:\n\n o_0 = f_percept(I_0)\n\n2. The physics engine consists of a pairwise interaction MLP and single-object transition MLP, each with two hidden layers. It predicts object representations at the next timestep given an initial configuration: \n\n o_1 = f_physics(o_0)\n\n(To see f_physics broken down into separate terms for the two MLPs, see Section 2.2)\n\n3. The rendering engine has two networks, which we will call f_image and f_heatmap. For each object o_{t,i} in a set of objects o_t at timestep t, f_image predicts a three-channel image and f_heatmap predicts a single-channel heatmap. We render each object separately with f_image and then combine these images by a weighted averaging over objects, where the weights come from the negatives of the heatmaps (passed through a nonlinearity). More precisely, denoting the heatmaps at time t for all objects as\n\n\tH_t = softmax( -f_heatmap(o_t) ),\n\nthe j^th pixel of the predicted composite image is then:\n\n \\hat{I}_{t,j} = \\sum_i f_image( o_{t,i} )_j * H_{t,i,j},\n\nwhere H_{t,i,j} is the j^th pixel of the heatmap for the i^th object at time t. \n\nBoth networks have a single fully-connected layer followed by four deconvolutional layers with ReLU nonlinearities. f_image has (128, 64, 32, 3) channels and f_heatmap has (128, 64, 32, 1) channels. From here on, we will use f_render to describe this entire process:\n\n \\hat{I}_t = f_render(o_t)\n\nThe equations here risk making all of this seem more complicated than it really is. 
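For concreteness, a minimal NumPy-style sketch of this compositing step is given below (illustrative only — the array shapes, helper names, and NumPy framing are assumptions added for clarity, not the exact implementation; only the two equations above are taken as given).

    import numpy as np

    def softmax(x, axis=0):
        # numerically stable softmax over the object axis
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def composite(object_images, object_heatmaps):
        # object_images:   (N, H, W, 3) array, one rendering per object (f_image outputs; shapes assumed)
        # object_heatmaps: (N, H, W) array, one heatmap per object (f_heatmap outputs)
        weights = softmax(-object_heatmaps, axis=0)               # H_t = softmax(-f_heatmap(o_t)); sums to 1 over objects at each pixel
        return (object_images * weights[..., None]).sum(axis=0)   # I_hat_j = sum_i f_image(o_i)_j * H_{i,j}

Since every operation in this sketch is differentiable, gradients can flow from the composite image back into both the per-object images and the heatmaps.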
The high-level picture is that we need a way to produce a single image from a set of objects, so we render each object separately and then take a weighted average over the individual images in something that could be thought of as a soft depth pass. \n\n4. Reconstructing an image at the observed timestep then looks a lot like an auto-encoder:\n\n\t\\hat{I}_0 = f_render( f_percept(I_0) )\n\nReconstructing an image at the next timestep uses the physics engine in between:\n\n\t\\hat{I}_1 = f_render( f_physics( f_percept(I_0) ) )\n\t\nThese equations are reflected in the loss functions on page 6. (For example, the physics engine is only trained via the loss from reconstructing I_1, since it is not used in reconstructing I_0.) We used ground truth object segments in our experiments, which we discuss in the answer to the question on 10/02/2018. \n", "Thank you for your detailed feedback.\n\n1. This is a good point. We cite Neural Expectation Maximization (N-EM; Greff et al, 2018) when discussing disentangled object representations, but Relational NEM (R-NEM) is indeed more relevant because it incorporates physical interactions into the model. It is our understanding that the R-NEM code works only on binary images of 2D objects, whereas we consider color images of 3D objects. R-NEM focuses on disentangling objects completely unsupervised, so does not use object segments but is evaluated on simpler inputs. In comparison, we focus on using object representations for downstream tasks, so assume an accurate preprocessing step to give segments but use our object representations in contexts other than prediction (like block stacking). \n\nThese works tackle complementary pieces of the same larger problem, and one could imagine a full pipeline using something like R-NEM to discover segments to feed into our method for planning action sequences with learned object representations. We will add this discussion to the next version of the paper. \n\n2 Yes, we outline the segmented images in orange in Figure 2c because we are using ground truth object segments. We assume we have access to this preprocessing at both train and test time. We will make this more clear in the main text.\n\n3. The rendered scene has a few forward-facing lights at about two-thirds of the image’s height, so most objects appear a bit brighter before they are dropped. You can also see this happening in Figure 6. \n\n4. We train the model to reconstruct images at both t=0 and t=1 given the observation at t=0. The loss for the image at t=0 is equation (1) on page 6:\n\n L_2(\\hat{I}_0, I_0) + L_vgg(\\hat{I}_0, I_0),\n\nwhere L_vgg is a perceptual loss in the feature space of the VGG network. The analogous loss for t=1 is equation (2). As you mention, reconstructing the t=0 image essentially amounts to bypassing the physics engine. A more complete description is given in the last paragraph on page 5.\n\nPlease let us know if you have any follow-up questions.\n", "Dear authors, I think conceptually that this is a very nice paper and I like the choice of experiments. 
I have just a few comments:\n\n(1) The authors say that: “Existing works that have investigated the benefit of using objects have either assumed that an interface to an idealized object space already exists or that supervision is available to learn a mapping between raw inputs and relevant object properties (for instance, category, position, and orientation).”\n\nThe following paper is very relevant and they don’t make either of the assumptions that the authors state in their paper (quoted above). RELATIONAL NEURAL EXPECTATION MAXIMIZATION: UNSUPERVISED DISCOVERY OF OBJECTS AND THEIR INTERACTIONS - Steenkiste et al. ICLR 2018. Steenkiste et al. automatically learn to segment objects and predict the physics across multiple time steps. A detailed comparison between the authors' model and that of Steenkiste et al. would make the authors contributions more clear.\n\n(2) Could you please clarify if *ground truth* object segments are fed into the Perception model? If *ground truth* object segments are used, this should be made more clear. (The last line in the caption of Figure 2 is not sufficient to make this clear in the main text).\n\n(3) Very minor, but in Figure 2, the yellow triangle appears to change colour, to green.\n\n(4) How exactly is the model is Figure 2 trained? Is it trained to predict t=1 given t=0? If so how are reconstructions in Figure 3, for t=0 obtained? Is the physics engine bypassed to obtain a reconstruction for t=0? This is not clear. Is the reconstruction (error) for t=0 used to train the model? It is not clear what loss functions are used for training?\n\n(5) Figure 5 and section 4.3 are really nice!", "This is a very nice paper! However, I wish it included some more details of the implementation (perhaps a future revision could include an appendix?) For example, how did you get the region proposals/segmentation for each video frame? What exactly are the equations involved in the reconstruction process?" ]
[ 7, -1, -1, 9, -1, -1, -1, 5, -1, -1, -1, -1 ]
[ 4, -1, -1, 4, -1, -1, -1, 5, -1, -1, -1, -1 ]
[ "iclr_2019_HJx9EhC9tQ", "iclr_2019_HJx9EhC9tQ", "HJgCWEMxCm", "iclr_2019_HJx9EhC9tQ", "SyxLdj-IhX", "Bkgyox9DnQ", "rkglZ8xo2Q", "iclr_2019_HJx9EhC9tQ", "SJl7bdRy5Q", "SkeuI3gb5m", "iclr_2019_HJx9EhC9tQ", "iclr_2019_HJx9EhC9tQ" ]
iclr_2019_HJxB5sRcFQ
LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators
Layout is important for graphic design and scene generation. We propose a novel Generative Adversarial Network, called LayoutGAN, that synthesizes layouts by modeling geometric relations of different types of 2D elements. The generator of LayoutGAN takes as input a set of randomly-placed 2D graphic elements and uses self-attention modules to refine their labels and geometric parameters jointly to produce a realistic layout. Accurate alignment is critical for good layouts. We thus propose a novel differentiable wireframe rendering layer that maps the generated layout to a wireframe image, upon which a CNN-based discriminator is used to optimize the layouts in image space. We validate the effectiveness of LayoutGAN in various experiments including MNIST digit generation, document layout generation, clipart abstract scene generation and tangram graphic design.
accepted-poster-papers
Reviewers agree the paper should be accepted. See reviews below.
train
[ "Hye5G91CRQ", "S1eLur3cnQ", "r1emOpxi0m", "rJeEMnsXR7", "B1lzpiiQAX", "SJlQtjjm0X", "S1eY99iXAX", "rJxm6KDi2X", "HkeCYxhc27" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed rebuttal and for addressing my concerns and responding to my questions. \n- Specifically I found the additional analysis (both human evaluation and showing results from other baselines) on Clip-art scene generation satisfying. \n- I also found it helpful to look at DCGAN results for all experiments. \n- I also looked at the animation videos to demonstrate the movements of all the graphic elements and it helped me appreciate the the approach / design choices a bit more. \n\nOverall, I think the paper is greatly improved from the time of the submission and contains more exhaustive evaluation with existing work and shows application for a wide variety of tasks. Based on the rebuttal, I am updating my score to 7 (Good paper)", "\f\nSummary: This paper presents a novel GAN framework for generating graphic layouts which consists of a set of graphic elements which are geometrically and semantically related. The generator learns a function that maps input layout ( a random set of graphic elements denoted by their classes probabilities and and geometric parameters) and outputs the new contextually refined layout. The paper also explores two choices of discriminators: (1) relation based discriminator which directly extracts the relations among different graphic elements in the parameter space, and (2) wireframe rendering discriminator which maps graphic elements to 2D wireframe images using a differentiable layer followed by a CNN for learning the discriminator. The novel GAN framework is evaluated on several datasets such as MNIST, document layout comparison and clipart abstract scene generation \n\nPros:\n- The paper is trying to solve an interesting problem of layout generation. While a large body of work has focussed on pixel generation, this paper focuses on graphic layouts which can have a wide range of practical applications. \n- The paper presents a novel architecture by proposing a generator that outputs a graphic layout consisting of class probabilities and polygon keypoints. They also propose a novel discriminator consisting of a differentiable layer that takes the parameters of the output layout and generates a rasterized image representing the wireframe. This is quite neat as it allows to utilize a CNN for learning a discriminator for real / fake prediction. \n- Qualitative results are shown on a wide variety of datasets - from MNIST to clipart scene generation and tangram graphic design generation. I found the clipart scene and tangram graphic design generation experiments quite neat. \n\nCons:\n- While the paper presents a few qualitative results, the paper is missing any form of quantitative or human evaluation on clip-art scene generation or tangram graphic design generation. \n- The paper also doesn’t report results on simple baselines for generating graphic layouts. Why not have a simple regression based baseline for predicting polygon parameters? Or compare with the approach mentioned in [1]\n- Even for generating MNIST digits, the paper doesn’t report numbers on previous methods used for MNIST digit generation. \nInterestingly, only figure 4 shows results from a traditional GAN approach (DCGAN). Why not show the output on other datasets too? \n\nQuestions / Remarks:\n- Why is the input to the GAN not the desired graphic elements and pose the problem as just predicting the polygon keypoints for those graphic elements. I didn’t quite understand the motivation of choosing a random set of graphic elements and their class probabilities as input. 
\n - How does this work for the case of clip-art generation for example? The input to the gan is a list of all graphic elements (boy, girl glasses, hat, sun and tree) or a subset of these?\n - It is also not clear what role the class probabilities are playing this formulation. \n- In section 3.3.2, it’s mentioned that the target image consist of C channels assuming there are C semantic classes for each element. What do you mean by each graphic element having C semantic classes? Also in the formulation discusses in this section, there is no further mention of C. I wasn’t quite clear what the purpose of C channels is then. \n- I found Figure 3b quite interesting - it would have been nice if you expanded on that experiments and the observations you made a little more. \n\n[1] Deep Convolutional Priors for Indoor Scene Synthesis by Wang et al\n", "Thank you for the new experiments. I think this makes the paper stronger.", "Q: “My only complaint is that the most important use case of their GAN (Document Semantic Layout Generation) is tested on a synthetic dataset. It would have been nice to test it on a real life dataset.”\n \nA: We added a new experiment of mobile app layout generation by using the RICO dataset (http://interactionmining.org/rico). We showed the results in the appendix due to page limitation. Please see Section 6.6 in the uploaded version for more details. ", "Q: “Why is the input to the GAN not the desired graphic elements and pose the problem as just predicting the polygon keypoints for those graphic elements. I didn’t quite understand the motivation of choosing a random set of graphic elements and their class probabilities as input. How does this work for the case of clip-art generation for example? The input to the gan is a list of all graphic elements (boy, girl glasses, hat, sun and tree) or a subset of these?”\n \nA: Given a set of desired graphic elements, our LayoutGAN is actually able to predict their geometric parameters. We demonstrated its ability in the perturbation experiment in Figure 7. However, such synthesis process requires human priors in advance, i.e., the class of each graphic element desired by a reasonable layout. \nOur work goes beyond that. It synthesizes graphic layouts from a set of purely random graphic elements in terms of both geometric parameters and classes. The class of each input element is not predefined but randomly sampled from Uniform distribution, thus the class combination of all the input elements can be in various semantic forms. Take the Clipart experiment as an example, an input set of elements may contain all categories (boy, girl, glasses, hat, sun and tree) or a subset of these with duplicates (boy, girl, glasses, glasses, hat, hat, tree). The model should learn to figure out and adjust the spatial-semantic relations among all elements automatically, and to predict refined class probabilities (can be zero class vector to remove duplicates if necessary) along with geometric parameters for each element to form a reasonable layout. Predicting class probabilities together with geometric parameters greatly increases the flexibility and applicability of the LayoutGAN on different tasks.\n\nQ: “It is also not clear what role the class probabilities are playing this formulation. In section 3.3.2, it’s mentioned that the target image consist of C channels assuming there are C semantic classes for each element. What do you mean by each graphic element having C semantic classes? 
Also in the formulation discusses in this section, there is no further mention of C. I wasn’t quite clear what the purpose of C channels is then.” \n \nA: As a graphic element can be boy, girl, hat, glasses, etc, out of C possible classes, we use a C-dimensional vector to represent the probabilities of one element being a particular class. It serves two roles. 1) It allows the network to model the spatial-semantic relations among different elements, e.g. a hat should precisely appear on top of a boy’s head. 2) In the generation experiment, it allows the network to modify the class of each input random element to produce a layout that follow the ground truth class distribution. \nThe rendered image consists of C channels because we render wireframes that belong to a specific class with predicted probabilities onto a single channel (C equals to the total number of classes), upon which the CNN-based discriminator can be applied to optimize both the geometric parameters and class probabilities of all graphic elements coherently in a differentiable way. Note that if we render all the wireframes onto a single channel, then we lose the semantic information. In other words, the CNN discriminator would not be able to tell if a bounding box represents a hat or the sun.\n \nQ: “I found Figure 3b quite interesting - it would have been nice if you expanded on that experiments and the observations you made a little more.”\n\nA: Thanks. The colors was used to trace the points from initial random positions to final positions. Following your suggestion, we expanded this experiment on both MNIST and tangram generation by visualizing the displacements or the flows between initial and final positions of graphic elements. In particular, we made animation videos to demonstrate the movements of all the graphic elements. Please review them in the following anonymous link: https://sites.google.com/view/supp-videos-for-iclr-2019/home ", "Q: “While the paper presents a few qualitative results, the paper is missing any form of quantitative or human evaluation on clip-art scene generation or tangram graphic design generation”\n \nA: Thank you for your suggestions. This work is the first attempt to solve layout synthesis from random input for both Clipart scene generation and tangram graphic design (tangram data are collected and annotated by ourselves, we promise to release it upon acceptance). As no previous methods have focused on these problems, there is a lack of widely-accepted quantitative evaluation metrics for both tasks. To this end, we carried out a user study involving 20 respondents for a subjective evaluation of the generated Clipart abstract scenes. Please see Table 3 in the updated version.\n \nQ: “The paper also doesn’t report results on simple baselines for generating graphic layouts. Why not have a simple regression based baseline for predicting polygon parameters? Or compare with the approach mentioned in [1]”\n[1] Deep Convolutional Priors for Indoor Scene Synthesis by Wang et al\n\nA: Thank you for your suggestions. We have supplemented experiments of generating tangram graphic design sequentially as Wang et al [1] for comparison in the updated version. Specifically, Wang et al [1] generate indoor scenes iteratively by adding objects one-by-one. The choice of such sequential paradigm is partly because the rendering process from geometric parameters (object location) to indoor scene images is not differentiable. Similarly, we would have faced such a problem in our layout design. 
However, we propose a novel wireframe rendering layer to make the layout rendering process differentiable. Benefiting from it, we can predict a set of graphic elements simultaneously in an end-to-end network. But still, we can adopt the sequential paradigm in Wang et al [1] to our layout design problem by generating graphic elements one-by-one. However, we found such sequential synthesis process suffers from accumulated error, which validates the superiority of the proposed LayoutGAN. Please see Figure 8 for comparisons in the updated version.\n\nQ: “Even for generating MNIST digits, the paper doesn’t report numbers on previous methods used for MNIST digit generation. \n \nA: Our experiment on MNIST serves as sanity test. A 2D point, as the simplest geometric form, is not a desirable element representation for our approach, and we do not expect it to compete with other GANs applied to MNIST. We have reflected this in the updated version. \n\nQ: Interestingly, only figure 4 shows results from a traditional GAN approach (DCGAN). Why not show the output on other datasets too?”\n\nA: Thanks for the suggestion. We added experiments to apply DCGAN to both Clipart abstract scene generation and tangram graphic design task in the updated version. Please see Figure 5 and 8.\n", "Q: “Why not ask the generator to generate the rendering instead of class probabilities?”\n \nA: The generator produces geometric layout parameters together with class probabilities. Rendering a wireframe image from the layout parameters is then trivial. Rendering an application-specific layout, e.g., graphic design bitmap, is application-dependent and unnecessarily complex for modeling layout. Does this answer your question?\n", "Summary:\nThe paper proposed to use GAN to synthesize graphical layouts. The generator takes a random input and generates class probabilities and geometric parameters based on a self-attention module. The discriminator is based on a differentiable wireframe rendering component (proposed by the paper) to allow back propagation through the rendering module. I found the topic very interesting and the approach seems to make sense.\n\nQuality:\n+ The idea is very interesting and novel.\n\nClarity:\n+ The paper is clearly written and is easy to follow.\n\nOriginality:\n+ I believe the paper is novel. The differentiable wireframe rendering is new and very interesting.\n\nSignificance:\n+ I believe the paper has value to the community.\n- The evaluation of the task seems to be challenging (Inception score may not be appropriate) but since this is probably the first paper to generate layouts, I would not worry too much about the actual accuracy.\n\nQuestion:\nWhy not ask the generator to generate the rendering instead of class probabilities?", "\nThe authors present a GAN based framework for Graphic Layouts. Instead of considering a graphic layout as a collection of pixels, they treat it as a collection of primitive objects like polygons. The objective is to create an alignment of these objects that mimics some real data distribution.\n\nThe novelty is a differentiable wireframe rendering layer allowing the discriminator to judge alignment. They compare this with a relation based discriminator based on the point net architecture by Qi et al. The experimentation is thorough and demonstrates the importance of their model architecture compared to baseline methods. \n\nOverall, this is a well written paper that proposes and solves a novel problem. 
My only complaint is that the most important use case of their GAN (Document Semantic Layout Generation) is tested on a synthetic dataset. It would have been nice to test it on a real life dataset." ]
[ -1, 7, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "B1lzpiiQAX", "iclr_2019_HJxB5sRcFQ", "rJeEMnsXR7", "HkeCYxhc27", "S1eLur3cnQ", "S1eLur3cnQ", "rJxm6KDi2X", "iclr_2019_HJxB5sRcFQ", "iclr_2019_HJxB5sRcFQ" ]
iclr_2019_HJxeWnCcF7
Learning Mixed-Curvature Representations in Product Spaces
The quality of the representations achieved by embeddings is determined by how well the geometry of the embedding space matches the structure of the data. Euclidean space has been the workhorse for embeddings; recently hyperbolic and spherical spaces have gained popularity due to their ability to better embed new types of structured data---such as hierarchical data---but most data is not structured so uniformly. We address this problem by proposing learning embeddings in a product manifold combining multiple copies of these model spaces (spherical, hyperbolic, Euclidean), providing a space of heterogeneous curvature suitable for a wide variety of structures. We introduce a heuristic to estimate the sectional curvature of graph data and directly determine an appropriate signature---the number of component spaces and their dimensions---of the product manifold. Empirically, we jointly learn the curvature and the embedding in the product space via Riemannian optimization. We discuss how to define and compute intrinsic quantities such as means---a challenging notion for product manifolds---and provably learnable optimization functions. On a range of datasets and reconstruction tasks, our product space embeddings outperform single Euclidean or hyperbolic spaces used in previous works, reducing distortion by 32.55% on a Facebook social network dataset. We learn word embeddings and find that a product of hyperbolic spaces in 50 dimensions consistently improves on baseline Euclidean and hyperbolic embeddings, by 2.6 points in Spearman rank correlation on similarity tasks and 3.4 points on analogy accuracy.
accepted-poster-papers
This paper proposes a novel framework for tractably learning non-Euclidean embeddings that are product spaces formed by hyperbolic, spherical, and Euclidean components, providing a heterogeneous mix of curvature properties. On several datasets, these product space embeddings outperform single Euclidean or hyperbolic spaces. The reviewers unanimously recommend acceptance.
train
[ "ryxIIdud37", "H1exAN8qaQ", "B1la9NI96Q", "S1lIwN896m", "SygOB4LcTX", "ryljZN8caX", "r1gDfZ8ca7", "Byxc9eI9am", "Hklpflm6h7", "r1x-tAv3jm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nPage 2: What are p_i, i=1,2,...,n, their set T and \\mathcal{P}?\n\nWhat is | | used to compute distortion between a and b?\n\nPlease fix the definition of the Riemannian manifold, such that M is not just any manifold, but should be a smooth manifold or a particular differentiable manifold. Please update your definition more precisely, by checking page 328 in J.M. Lee, Introduction to Smooth Manifolds, 2012, or Page 38 in do Cormo, Riemannian Geometry, 1992.\n\nPlease define \\mathcal{P} in equation (1).\n\nDefine K used in the definition of the hyperboloid more precisely.\n\nPlease provide proofs of these statements for product of manifolds with nonnegative and nonpositive curvatures: “In particular, the squared distance in the product decomposes via (1). In other words, dP is simply the l2 norm of the component distances dMi.”\n\nPlease explain what you mean by “without the need for optimization” in “These distances provide simple and interpretable embedding spaces using P, enabling us to introduce combinatorial constructions that allow for embeddings without the need for optimization.” In addition, how can you compute geodesic etc. if you use l1 distance for the embedded space?\n\nBy equation (2), the paper focuses on embedding graphs, which is indeed the main goal of the paper. Therefore, first, the novelty and claims of the paper should be revised for graph embedding. Second, three particular spaces are considered in this work, which are the sphere, hyperbolic manifold, and Euclidean space. Therefore, you cannot simply state your novelty for a general class of product spaces. Thus, the title, novelty, claims and other parts of the paper should be revised and updated according to the particular input and output spaces of embeddings considered in the paper. \n\nPlease explain how you compute the metric tensor g_P and apply the Riemannian correction (multiply by the inverse of the metric tensor g_P) to determine the Riemannian gradient in the Algorithm 1, more precisely. \n\nStep (9) of the Algorithm 1 is either wrong, or you compute v_i without projecting the Riemannian gradient. Please check your theoretical/experimental results and code according to this step.\n\nWhat is h_i used in the Algorithm 1? Can we suppose that it is the ith component of h?\n\nIn step (6) and step (8), do you project individual components of the Riemannian gradient to the product manifold? Since their dimensions are different, how do you perform these projections, since definitions of the projections given on Page 5 cannot be applied? Please check your theoretical/experimental results and code accordingly.\n\nPlease define exp_{x^(t)_i}(vi) and Exp(U) more precisely. I suppose that they denote exponential maps.\n\nHow do you initialize x^(0) randomly?\n\nThe notation is pretty confusing and ambiguous. First, does x belong to an embedded Riemannian manifold P or a point on the graph, which will be embedded? According to equation (2), they are on the graph and they will be embedded. According to Algorithm 1, x^0 belongs to P, which is a Riemannian manifold as defined before. So, if x^(0) belongs to P, then L is already defined from P to R (in input of the Algorithm 1). Thereby, gradient \\nabla L(x) is already a Riemannian gradient, not the Euclidean gradient, while you claim that \\nabla L(x) is the Euclidean gradient in the text.\n\nOverall, Algorithm 1 just performs a projection of Riemannian or Euclidean gradient \\nabla L(x) onto a point v_i for each ith individual manifold. 
Then, each v_i is projected back to a point on an individual component of the product manifold by an exponential map. \n\nWhat do you mean by “sectional curvature, which is a function of a point p and two directions x; y from p”? Are x and y not points on a manifold?\n \nYou define \\xi_G(m;b,c) for curvature estimation for a graph G. However, the goal was to map G to a Riemannian manifold. Then, do you also consider that G is itself a Riemannian manifold, or a submanifold?\n\nWhat is P in the statement “the components of\nthe points in P” in Lemma 2?\n\nWhat is \\epsilon in Lemma 2?\n\nHow do you optimize positive w_i, i=1,2,...,n?\n\nWhat is the “gradient descent” refered to in Lemma 2?\n\nPlease provide computational complexity and running time of the methods.\n\nPlease define \\mathbb{I}_r.\n\nAt the third line of the first equation of the proof of Lemma 1, there is no x_2. Is this equation correct?\n\nIf at least of two of x1, y1, x2 and y2 are linearly dependents, then how does the result of Lemma 1 change?\n\nStatements and results given in Lemma 1 are confusing. According to the result, e.g. for K=1, curvature of product manifold of sphere S and Euclidean space E is 1, and that of E and hyperbolic H is 0. Then, could you please explain this result for the product of S, E and H, that is, explain the statement “The last case (one negative, one positive space) follows along the same lines.”? If the curvature of the product manifold is non-negative, then does it mean that the curvature of H is ignored in the computations?\n\nWhat is \\gamma more precisely? Is it a distribution or density function? If it is, then what does (\\gamma+1)/2 denote?\n\nThe statements related to use of Algorithm 1 and SGD to optimize equation (2) are confusing. Please explain how you employed them together in detail.\n\nCould you please clarify estimation of K_1 and K_2, if they are unknown. More precisely, the following statements are not clear;\n\n- “Furthermore, without knowing K1, K2 a priori, an estimate for these curvatures can be found by matching the distribution of sectional curvature from Algorithm 2 to the empirical curvature computed from Algorithm 3. In particular, Algorithm 2 can be used to generate distributions, and K1, K2 can then be found by matching moments.” Please explain how in more detail? What is matching moments?\n\n- “we find the distribution via sampling (Algorithm 3) in the calculations for Table 3, before being fed into Algorithm 2 to estimate Ki” How do you estimation K_1 and K_2 using Algorithm 3?\n\n- Please define, “random (V)”, “random neighbor m” and “\\delta_K/s” used in Algorithm 3 more precisely.\n", "We welcome the reviewer's detailed questions and suggestions on the technical presentation of our paper, and we appreciate the opportunity to improve it. To the best of our understanding, many of the reviewer's questions are addressed in the submitted draft, or pertain to standard notation and arguments. 
Nevertheless, we respond to the reviewer’s comments in detail below, clarifying ideas or pointing out specific lines where questions are answered.\n\nWe sincerely hope that our response clarifies any potential notational confusions, and we look forward to further engaging in a substantial discussion on the overall merits of our work.\n\nAll pages and lines referenced refer to the original submission.\n", "- Page 2: What are p_i, i=1,2,...,n, their set T and \\mathcal{P}?\n\nThis refers to an arbitrary set T containing points p_1,...,p_n on a manifold P, for which we wish to define a mean.\n\n\n- What is | | used to compute distortion between a and b?\n\nAbsolute value\n\n\n- Please fix the definition of the Riemannian manifold, such that M is not just any manifold, but should be a smooth manifold or a particular differentiable manifold. Please update your definition more precisely, by checking page 328 in J.M. Lee, Introduction to Smooth Manifolds, 2012, or Page 38 in do Cormo, Riemannian Geometry, 1992.\n\nYes, it is a smooth manifold, as specified in the first line of the “Product Manifolds” paragraph.\n\n\n- Please define \\mathcal{P} in equation (1).\n\n\\mathcal{P} is a product manifold.\n\n\n- Define K used in the definition of the hyperboloid more precisely.\n\nK is an arbitrary constant that indexes the curvature. This is described in the first paragraph of section “Learning the curvature”.\n\n\n- Please provide proofs of these statements for product of manifolds with nonnegative and nonpositive curvatures: “In particular, the squared distance in the product decomposes via (1). In other words, dP is simply the l2 norm of the component distances dMi.”\n\nThe given statement is a standard fact about products of Riemannian manifolds: some classical references are [Levy] and [Ficken], although the result is stated directly in, e.g, [TS, pg. 81, eq. (4.19)]. Here is a sketch of the proof: first, the Levi-Civita connection on the manifold decomposes along the product components [DoCarmo Ex. 6.1]. This implies that the acceleration is 0 iff it is 0 in each component; in other words, geodesics in the product manifold decompose into geodesics in each of the factors. The distance function’s decomposition follows from the additivity of the Riemannian metric, i.e. |\\dot{\\gamma}(t)| = \\sqrt{\\dot{\\gamma_1}(t)^2 + \\dot{\\gamma_2}(t)^2}.\n\n\n- Please explain what you mean by “without the need for optimization” in… In addition, how can you compute geodesic etc. if you use l1 distance for the embedded space?\n\n* We are referring to embedding algorithms that do not require optimizing a loss function via, for example, gradient descent. This concept is detailed in Appendix C.3. For example, the second paragraph on page 19 shows how to embed a cycle by explicitly writing down the coordinates of the points, with no optimization. Similarly, for hyperbolic space, the combinatorial construction previously studied in [Sarkar, SDGR] embeds trees in hyperbolic space without optimization.\n* Additionally, it is explicitly mentioned in the first line of the corresponding paragraph that the alternative distances proposed are meant to “ignore the Riemannian structure”, because many common applications of embeddings such as link prediction do not actually require Riemannian manifold structure, or related notions such as geodesics. 
Conversely, the motivation for the application in Section 4.2 is to show a task where manifold structure and geodesics are actually required, where the (Riemannian) product is effective.\n\n\n- By equation (2), the paper focuses on embedding graphs, which is indeed the main goal of the paper. Therefore, first, the novelty and claims of the paper should be revised for graph embedding. Second, three particular spaces are considered in this work, which are the sphere, hyperbolic manifold, and Euclidean space. Therefore, you cannot simply state your novelty for a general class of product spaces. Thus, the title, novelty, claims and other parts of the paper should be revised and updated according to the particular input and output spaces of embeddings considered in the paper.\n\nOur embedding technique is not limited to graphs, and indeed we perform word embeddings into product manifolds as described in Section 4.2. Graphs, however, are used as a standard metric for non-Euclidean embeddings [NK1, SDGR, NK2], and so we evaluate our approach on a variety of graphs in Section 4.1. The language of graphs is also convenient for stating some of our results, but not necessary, as described in Footnote 1.\n\nThe three particular spaces are the standard spaces of constant curvature, which has been considered in previous work. Our claimed novelty is in combining these using the Riemannian product construction to perform efficient embeddings into mixed-curvature spaces, as stated in the abstract (3rd sentence), introduction (3rd paragraph), and many other places throughout.\n\n\n- Please explain how you compute the metric tensor g_P and apply the Riemannian correction (multiply by the inverse of the metric tensor g_P) to determine the Riemannian gradient in the Algorithm 1, more precisely. \n\nThis is standard, as in [NK1,NK2, WL]. The only place it is necessary for us is for the hyperbolic components in Step (9).", "- Step (9) of the Algorithm 1 is either wrong, or you compute v_i without projecting the Riemannian gradient. Please check your theoretical/experimental results and code according to this step.\n\nThere is a typo; the RHS should have v_i instead of h_i.\n\n\n- What is h_i used in the Algorithm 1? Can we suppose that it is the ith component of h?\n\nh_i refers to the coordinates corresponding to the i-th component or factor.\n\n\n- In step (6) and step (8), do you project individual components of the Riemannian gradient to the product manifold? Since their dimensions are different, how do you perform these projections, since definitions of the projections given on Page 5 cannot be applied? Please check your theoretical/experimental results and code accordingly.\n\nEach projection is within its component; the text mentions each component is handled independently. A subscript i has been added to the RHS of steps (6),(8).\n\n\n- Please define exp_{x^(t)_i}(vi) and Exp(U) more precisely. I suppose that they denote exponential maps.\n\nExp denotes the exponential map as defined in Section 2. The image Exp(U) refers to the standard notation f(S) := {f(s) : s \\in S} where S is a set.\n\n\n- How do you initialize x^(0) randomly?\n\nThe initialization scheme depends on the application. An example of a standard initialization selects each coordinate of x^(0) either uniform or Gaussian with std on the order of 1e-2 to 1e-3 [NK1, LW], which is what we also use in our empirical evaluation. We have clarified this in Appendix D.\n\n- The notation is pretty confusing and ambiguous. 
First, does x belong to an embedded Riemannian manifold P or a point on the graph, which will be embedded? According to equation (2), they are on the graph and they will be embedded. According to Algorithm 1, x^0 belongs to P, which is a Riemannian manifold as defined before. So, if x^(0) belongs to P, then L is already defined from P to R (in input of the Algorithm 1). Thereby, gradient \\nabla L(x) is already a Riemannian gradient, not the Euclidean gradient, while you claim that \\nabla L(x) is the Euclidean gradient in the text.\n\nx is the manifold point to be optimized. The notation \\nabla L(x) is defined to be the Euclidean gradient at the bottom of page 4 of the initial submission. Note that this is the gradient of the embedding into ambient space; this is standard as in [NK2, WL].\n\n\n- Overall, Algorithm 1 just performs a projection of Riemannian or Euclidean gradient \\nabla L(x) onto a point v_i for each ith individual manifold. Then, each v_i is projected back to a point on an individual component of the product manifold by an exponential map.\n\nThat is correct.\n\n- What do you mean by “sectional curvature, which is a function of a point p and two directions x; y from p”? Are x and y not points on a manifold?\n\nAs mentioned earlier in the section, sectional curvature is a function of a point p and two directions (i.e. tangent vectors) u,v. However, tangent vectors can be identified with points on the manifold via geodesics (i.e. through Exp). The way our discrete curvature estimation is described in this section is analogous to other discrete curvature analogs [B]. For example, the Ricci curvature is defined for a point p and a tangent vector u, and the coarse Ricci curvature is defined for a node p and neighbor x [Ollivier2].\n\n\n- You define \\xi_G(m;b,c) for curvature estimation for a graph G. However, the goal was to map G to a Riemannian manifold. Then, do you also consider that G is itself a Riemannian manifold, or a submanifold?\n\nG is a graph and does not have manifold structure. The goal of \\xi is to provide a discrete analog of curvature which satisfies similar properties to curvature and facilitates choosing an appropriate Riemannian manifold to embed G into. There are other similar notions of discrete curvature on graphs, for example the Forman-Ricci [WSJ] and Ollivier-Ricci [Ollivier1] curvatures.\n\n\n- What is P in the statement “the components of the points in P” in Lemma 2?\n\nIt is the product manifold. We have changed it to \\mathcal{P}.\n\n\n- What is \\epsilon in Lemma 2?\n\n\\epsilon refers to a desired tolerance within which to compute the solution, in this case the mean. This is also explicitly mentioned in the last line of the second to last paragraph of Section 1. This is standard notation for gradient descent-based rates.\n", "- How do you optimize positive w_i, i=1,2,...,n?\n\nBy convention, the weights w_i are constants independent of the optimization. For example, to compute the standard Euclidean mean one would take w_i = 1/n for all i.\n\n\n- What is the “gradient descent” refered to in Lemma 2?\n\nThe usual Riemannian gradient descent, since it is a manifold.\n\n\n- Please provide computational complexity and running time of the methods.\n\nThe complexity of the Karcher mean algorithm is O(nr log epsilon^(-1)), as described on Page 2, PP 3, line 4. The convergence rate of RSGD is standard [ZS]: it converges to a stationary point with rate O(c/t), where c is a constant and t is the number of iterations. 
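As a concrete illustration of two points discussed in these replies (the squared distance in the product decomposing into the sum of squared factor distances, and Algorithm 1 updating each factor by projecting the ambient gradient and then applying that factor's exponential map), here is a minimal numerical sketch for unit-curvature spherical and hyperboloid factors. The helper names are assumptions made for this example and are not taken from the paper's code:

import numpy as np

# Closed-form factor distances (unit curvature).
def dist_sphere(x, y):
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def dist_hyperboloid(x, y):
    # Minkowski form <x, y> = -x_0 y_0 + sum_i x_i y_i on the hyperboloid model.
    mink = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return np.arccosh(np.clip(-mink, 1.0, None))

def dist_euclidean(x, y):
    return np.linalg.norm(x - y)

def dist_product(xs, ys, factor_dists):
    # d_P is the l2 norm of the per-factor distances.
    d = [f(xi, yi) for f, xi, yi in zip(factor_dists, xs, ys)]
    return np.sqrt(sum(di ** 2 for di in d))

# One RSGD-style step on a single unit-sphere factor.
def rsgd_step_sphere(x, euclidean_grad, lr):
    v = -lr * euclidean_grad
    v = v - np.dot(x, v) * x          # project onto the tangent space at x
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)  # Exp_x(v)

A product-manifold step would apply the analogous projection and exponential map to each factor's coordinate block independently, with the additional metric correction on hyperbolic factors mentioned in the reply above.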
Algorithms 2 and 3 find good estimates of the corresponding distributions in a small number (~10^4) of samples; each sample requires constant time for both algorithms.\n\n\n- Please define \\mathbb{I}_r.\n\nThis is standard notation for the r x r identity matrix, but we have explicitly defined it now.\n\n\n- At the third line of the first equation of the proof of Lemma 1, there is no x_2. Is this equation correct?\n\nThe second R_1(x_1, y_1)x_1 should be R_2(x_2, y_2)x_2, which follows from directly applying equation (5) to the previous line.\n\n\n- If at least of two of x1, y1, x2 and y2 are linearly dependents, then how does the result of Lemma 1 change?\n\nThe result does not change.\n\n\n- Statements and results given in Lemma 1 are confusing. According to the result, e.g. for K=1, curvature of product manifold of sphere S and Euclidean space E is 1, and that of E and hyperbolic H is 0. Then, could you please explain this result for the product of S, E and H, that is, explain the statement “The last case (one negative, one positive space) follows along the same lines.”? If the curvature of the product manifold is non-negative, then does it mean that the curvature of H is ignored in the computations?\n\nIn the case of a product of E and H, the sectional curvature ranges in [-1,0]. The line “and similarly for K_1, K_2 non-positive” implies that in the non-positive case we have K(u,v) \\in [min(K_1, K_2), 0], since everything is negated.\n\n\n- What is \\gamma more precisely? Is it a distribution or density function? If it is, then what does (\\gamma+1)/2 denote?\n\n\\gamma is a random variable which is distributed as the dot product of two uniformly random unit vectors, as defined on the bottom of page 16. Hence (\\gamma+1)/2 is a well-defined random variable.\n\n\n- The statements related to use of Algorithm 1 and SGD to optimize equation (2) are confusing. Please explain how you employed them together in detail.\n\nEquation (2) is a loss function from \\mathcal{P}^n to \\mathbb{R} where the embeddings x_i are variables, and can thus be optimized using RSGD (Algorithm 1) on each point simultaneously. This is the same approach taken in previous works [NK1, SDGR, NK2] for the case of single space embeddings.\n\n\n- On estimation of K_1, K_2 and matching moments\n\nAlgorithm 2 and 3 both produce distributions. Moment matching (or the method of moments) is a standard term referring to parameter estimation via equating the moments of distributions. More details have been added to the revised draft.\n\n\n- Please define, “random (V)”, “random neighbor m” and “\\delta_K/s” used in Algorithm 3 more precisely.\n\nWe have clarified that the random sampling is uniform. \\delta_K refers to the delta function.\n", "[B] Bauer et al. Modern Approaches to Discrete Curvature. Lecture Notes in Mathematics\n[Ficken] Ficken, “The Riemannian and Affine Differential Geometry of Product-Spaces”, Annals of Math., 1939.\n[LW] Leimeister and Wilson. Skip-gram word embeddings in hyperbolic space.\n[Levy] Levy, \"Symmetric Tensors of The Second Order Whose Covariant Derivatives Vanish\", Annals of Math., 1926.\n[NK1] Nickel and Kiela. Poincaré embeddings for learning hierarchical representations.\n[NK2] Nickel and Kiela. Learning continuous hierarchies in the Lorentz model of hyperbolic geometry.\n[Ollivier1] Ollivier. Ricci curvature of Markov chains on metric spaces.\n[Ollivier2] Ollivier. A visual introduction to Riemannian curvatures and some discrete generalizations.\n[SDGR] Sala, De Sa, Gu, Ré. 
Representation tradeoffs for hyperbolic embeddings.\n[Sarkar] Sarkar. Low distortion Delaunay embedding of trees in hyperbolic plane.\n[TS] Turaga and Srivastava, Riemannian Computing in Computer Vision, Springer 2016.\n[WSJ] Weber, Saucan, and Jost. Characterizing complex networks with Forman-Ricci curvature and associated geometric flows.\n[WL] Wilson and Leimeister. Gradient descent in hyperbolic space.\n[ZS] Zhang and Sra. First-order methods for geodesically convex optimization.\n", "We appreciate the reviewer’s thoughtful feedback on our work.\n\n- On the definition of K\n\nK is a constant that parametrizes the curvature of the model spaces (hyperbolic, Euclidean, and spherical); for any constant K, there is a corresponding space with curvature K. In our notation, \\mathbb{E}^d has curvature 0, \\mathbb{S}^d_K has curvature K, and \\mathbb{H}^d_K has curvature -K.\n\n\n- On the use of the signature estimation\n\nTable 2 does not use Algorithms 2 and 3, instead using Algorithm 1 with a variety of signatures to show the interaction between signature and dataset. For every experiment, the curvatures are initialized to -1,0, or 1 for H, E, and S components resp., and learned using the method described in Section 3.1; this is what is reported in the Best model. These details have been clarified in Appendix D.\n\nAs the reviewer has correctly observed, Algorithm 1 can be initialized with the estimated signature from Algorithms 2 and 3, which saves on hyperparameter searching and computation time. Table 3 shows that this method would indeed choose the best signature among the two-component options.\n\n\n- On comparison vs ISOMAP\n\nWe thank the reviewer for pointing out ISOMAP, a non-linear dimensionality reduction algorithm. We ran an experiment to compare against our proposed techniques. We first embedded the graphs from 4.1 into a higher (100) dimensional Euclidean space, than used ISOMAP to reduce the dimension to 10 in order to compare the average distortion against the product manifolds from Section 4.1. We saw a d_avg for PhD's/Facebook/Power Graph/Cities 0.4085 / 2.2295 / 0.4863 / 0.3711. We hypothesize that while ISOMAP can be good for dimensionality reduction for an already-good Euclidean embedding (with many dimensions), it does not perform as well as our technique for situations when the higher-dimensional Euclidean embeddings themselves have non-zero distortion---nor can it capture the mixed-curvature manifolds our approach offers.\n\n\n- On the link between the different contributions \n\nThe operational flow is the following. We start with the data to be embedded. We \n\n(i) seek an appropriate space to embed it in (in order to get a high-quality representation). To find what this embedding space should be, we estimate the signature (Section 3.2). More concretely, we use Algorithm 3 to estimate the distribution of discrete curvature of the data and Algorithm 2 to find a matching product manifold. This yields the \"signature\", i.e., a the number of factors and each factor's type and dimension for our product manifold.\n\nWe have now selected an embedding space, and we \n\n(ii) perform the embedding. This is done via Algorithm 1(RSGD) in Section 3.1.\n\nNow we have an embedding. There are many further tasks to be done with these representations. Perhaps the most fundamental is to take the mean of the representations for a subset of the data. 
Since our embeddings are into a product manifold, this requires a slightly more sophisticated approach; we\n\n(iii) compute this mean via the Karcher mean detailed in Section 3.3.\n\n\n- On the complexity of learning the product space and the limited data sample regime\n\nThis is an excellent point. We point out that (1) Optimization in the sphere and hyperboloid has the same complexity up to a constant as in Euclidean space, so that the complexity of our product manifold proposal is roughly the same as using SGD to produce typical embeddings, as we simply use R-SGD on the factor spaces. (2) The heuristic for choosing a space is very cheap (i.e., Algorithms 2 and 3) compared to the main embedding procedure, and is better suited for simple products anyways, avoiding the sample complexity issue of a large search space. Indeed, we do not seek to embed into higher dimensional spaces: our approach shows good results with few dimensions in a product space.\n", "We appreciate the reviewer’s positive comments about our work.", "This paper proposes a new method to embed a graph onto a product of spherical/Euclidean/hyperbolic manifolds. The key is to use sectional curvature estimations to determine proper signature, i.e., all component manifolds, and then optimize over these manifolds. The results are validated on various synthetic and real graphs. The proposed idea is new, nontrivial, and is well supported by experimental evidence.", "The paper proposes a dimensionality reduction method that embeds data into a product manifold of spherical, Euclidean, and hyperbolic manifolds. The proposed algorithm is based on matching the geodesic distances on the product manifold to graph distances. I find the proposed method quite interesting and think that it might be promising in data analysis problems. Here are a few issues that would be good to clarify:\n\n- Could you please formally define K in page 3?\n\n- I find the estimation of the signature very interesting. However, I am confused about how the curvature calculation process is (or can be) integrated into the embedding method proposed in Algorithm 1. How exactly does the sectional curvature estimation find use in the current results? Is the “Best model” reported in Table 2 determined via the sectional curvature estimation method? If yes, it would be good to see also the Davg and mAP figures of the best model in Table 2 for comparison.\n\n- I think it would also be good to compare the results in Table 2 to some standard dimensionality reduction algorithms like ISOMAP, for instance in terms of Davg. Does the proposed approach bring advantage over such algorithms that try to match the distances in the learnt domain with the geodesic distances in the original graph?\n\n- As a general comment, my feeling about this paper is that the link between the different contributions does not stand out so clearly. In particular, how are the embedding algorithm in Section 3.1, the signature estimation algorithm in Section 3.2, and the Karcher mean discussed in Section 3.3 related? Can all these ideas find use in an overall representation learning framework? \n\n- In the experimental results in page 7, it is argued that the product space does not perform worse than the optimal single constant curvature spaces. The figures in the experimental results seem to support this. However, I am wondering whether the complexity of learning the product space should also play a role in deciding in what kind of space the data should be embedded in. 
In particular, in a setting with limited availability of data samples, I guess the sample error might get too high if one tries to learn a very high dimensional product space. \n\n\nTypos: \n\nPage 3: Note the “analogy” to Euclidean products\nPage 7 and Table 1: I guess “ring of cycles” should have been “ring of trees” instead\nPage 13: Ganea et al formulates “basic basic” machine learning tools …" ]
[ 7, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2019_HJxeWnCcF7", "ryxIIdud37", "ryxIIdud37", "ryxIIdud37", "ryxIIdud37", "ryxIIdud37", "r1x-tAv3jm", "Hklpflm6h7", "iclr_2019_HJxeWnCcF7", "iclr_2019_HJxeWnCcF7" ]
iclr_2019_HJxwDiActX
StrokeNet: A Neural Painting Environment
We've seen tremendous success of image generating models these years. Generating images through a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes. To imitate human drawing, interactions between the environment and the agent is required to allow trials. However, the environment is usually non-differentiable, leading to slow convergence and massive computation. In this paper we try to address the discrete nature of software environment with an intermediate, differentiable simulation. We present StrokeNet, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment. With this approach, our agent was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner. Our primary contribution is the neural simulation of a real-world environment. Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software.
accepted-poster-papers
The paper proposes a novel differential way to output brush strokes, taking a few ideas from model-based learning. The method is efficient in that one can train it in an unsupervised manner and does not require paired data. The strengths of the paper are the qualitative results that demonstrate nice interpolations among other things, on a number of datasets (esp. post-rebuttal). The weaknesses of the paper are the writing (which I think is relatively easy to improve if the authors make an honest effort) and some of the quantitative evaluation. I would encourage the authors to get in touch with the SPIRAL paper authors in order to get access to the SPIRAL generated MNIST test data and then perhaps the classification metric could be updated. In summary, from the discussion, the major points of contention were the somewhat lacking initial evaluation (which was fixed to a large extent) and the quality of writing (which could be fixed more). I believe the submission is genuinely novel, interesting (esp. the usage of world model-like techniques) and valuable for the ICLR audience so I recommend acceptance.
train
[ "SkgawVChyE", "rJlM3uThJE", "SyxgZc6hkV", "Hyg0dv2n2Q", "Syx3a2S507", "BJg-3v4v3X", "SJeRjh4SkN", "SkgLeTScA7", "B1l5K2rcAQ", "H1x_SWqKn7", "HkeU4TB9CQ" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author" ]
[ "Thank you very much for your kind revision! Again we really appreciate your constructive suggestions that helped us make this a complete work!", "Thanks again for your constructive advice and kind revision.\n\nRegarding the classification metrics of SPIRAL, we did not have access to SPIRAL generated MNIST test data, and the results presented in their paper weren't enough for evaluation. We also considered reproducing the SPIRAL experiment, however, since our computation resource was quite limited, training a SPIRAL agent would be virtually impossible. Thus we were only able to compare several ablated models in the experiment. The curves of SPIRAL in Figure 11 is an excerpt from their paper.\n\nAs for the writing and Figure 11, after the notification of final acceptance or rejection on Dec. 22, we will update the paper to fix those grammatical and stylistic issues and upload to Arxiv.", "Thank you for your revision!\n\nFor future work, we will conduct more complete experiments to provide better evaluation of our model. We will also provide more detailed discussion on our approach in the future version of the paper.", "Revision:\n\nThe addition of new datasets and the qualitative demonstration of latent space interpolations and algebra are quite convincing. Interpolations from raster-based generative models such as the original VAE tend to be blurry and not semantic. The interpolations in this paper do a good job of demonstrating the usefulness of structure.\n\nThe classification metric is reasonable, but there is no comparison with SPIRAL, and only a comparison with ablated versions of the StrokeNet agent. I see no reason why the comparison with SPIRAL was removed for this metric.\n\nFigure 11 does a good job of showing the usefulness of gradients over reinforcement learning, but should have a better x range so that one of the curves doesn't just become a vertical line, which is bad for stylistic reasons.\n\nThe writing has improved, but still has stylistic and grammatical issues. A few examples, \"there’re\", \"the network could be more aware of what it’s exactly doing\", \"discriminator loss given its popularity and mightiness to achieve adversarial learning\". A full enumeration would be out of scope of this review. I encourage the authors to iterate more on the writing, and get the paper proofread by more people.\n\nIn summary, the paper's quality has significantly improved, but some presentation issues keep it from being a great paper. The idea presented in the paper is however interesting and timely and deserves to be shared with the wider generative models community, which makes me lean towards an accept.\n\nOriginal Review:\n\nThis paper deals with the problem of strokes-based image generation (in contrast to raster-based). The authors define strokes as a list of coordinates and pressure values along with the color and brush radius of a stroke. Then the authors investigate whether an agent can learn to produce the stroke corresponding to a given target image. The authors show that they were able to do so for the MNIST and OMNIGLOT datasets. This is done by first training an encoder-decoder pair of neural networks where the latent variable is the stroke, and the encoder and decoder have specific structure which takes advantage of the known stroke structure of the latent variable.\n\nThe paper contains no quantitative evaluation, either with existing methods or with any baselines. No ablations are conducted to understand which techniques provide value and which don't. 
The paper does present some qualitative examples of rendered strokes but it's not clear whether these are from the training set or an unseen test set. It's not clear whether the model is generalizing or not.\n\nThe writing is also very unclear. I had to fill in the blanks a lot. It isn't clear what the objective of the paper is. Why are we generating strokes? What use is the software for rendering images from strokes? Is it differentiable? Apparently not. The authors talk about differentiable rendering engines, but ultimately we learn that a learnt neural network decoder is the differentiable renderer.\n\nTo improve this paper and make it acceptable, I recommend the following:\n\n1. Improve the presentation so that it's very clear what's being contributed. Instead of writing the chronological story of what you did, instead you should explain the problem, explain why current solutions are lacking, and then present your own solutions, and then quantify the improvements from your solution.\n\n2. Avoid casual language such as \"Reason may be\", \"The agent is just a plain\", \"since neural nets are famouse for their ability to approximate all sorts of functions\".\n\n3. Show that strokes-based generation enables capabilities that raster-based generation doesn't. For instance, you could show that the agent is able to systematically generalize to very different types of images. I'd also recommend presenting results on datasets more complex than MNIST and OMNIGLOT.", "Thank you for your reviews and suggestions. We updated the paper with better readability and many clarifications. \n\nIndeed, it’s difficult to provide quantitative analysis and comparisons of this type of generative model considering the limited research done on this topic. In the new version of the paper, we trained a classifier on MNIST to classify generated digits in Section 5.4. The accuracy reflects the quality of the agent. The classifier and the agent are tested using the test-set of MNIST, so neither models have seen the data before. We also added comparison to other methods in that section.\n\nWe also added two more datasets to the experiment in Section 5.2, so we can see that the model does have the ability to generalize to different types of data.\n\nAs for the differentiability, we also added a discussion in Section 1. In short, when implementing a painting software, we treat the image as a large matrix and index the pixels by integers to calculate new color values for certain pixels. This indexing process is discrete and non-differentiable. While in our neural version of environment, this is done by an MLP, which makes the process differentiable.\n\nFor your suggestions, we made the following improvements:\n\n1.\tWe edited unnecessary parts to the appendix so that we can explain the problems in greater details. We compared our method with SPIRAL to show improved efficiency in Section 5.4. We trained a recognizer to classify the images generated by our agent to show quantitative results.\n\n2.\tWe rewrote many parts of the paper so language is more formal.\n\n3.\tWe extended the architecture and experimented with more complex datasets: QuickDraw and KanjiVG, so that we can show the model is able to generalize to different datasets.", "The paper proposes to use a differentiable drawing environment to synthesize images and provides information about some initial experiments. 
\n\nNot yet great about this paper: \n - the paper feels premature: There is a nice idea, but restricting the drawing environment to be \n - Some of the choices in the paper are a bit surprising, e.g. the lines in the drawing method are restricted to be at most 16 points long. If you look at real drawing data (e.g. the quickdraw dataset: https://quickdraw.withgoogle.com/data) you will find that users draw much longer lines typically. \nEDIT: the new version of the paper is much better but still feels like a bit incomplete. I personally would prefer a more complete evaluation and discussion of the proposed method. \n - the entire evaluation of this paper is purely qualitative (and that is not quite very convincing either). I feel it would be important for this paper to add some quantitative measure of quality. E.g. train an MNIST recognizer synthesized data and compare that to a recognizer trained on the original MNIST data. \n - a proper discussion of how the proposed environment is different from the environment proposed by Ganin et al (Deepmind's SPIRAL) \n\nMinor comments: \n - abstract: why is it like \"dreaming\" -> I do agree with the rest of that statement, but I don't see the connection to dreaming\n - abstract: \"upper agent\" -> is entirely unclear here. \n - abstract: the footnote at the end of the abstract is at a strange location\n - introduction: and could thus -> and can thus \n - introduction: second paragraph - it would be good to add some citations to this paragraph. \n - resulted image-> resulting image\n - the sentence: \"We can generate....data is cheap\" - is quite unclear to me at this time. Most of it becomes clearer later in the paper - but I feel it would be good to put this into proper context here (or not mention it)\n - we obtained -> we obtain\n - called a generator -> call a generator \n - the entire last paragraph on the first page is completely unclear to me when reading it here. \n - equations 1, 2: it's unclear whether coordinates are absolute or relative coordinates. \n- fig 1: it's very confusing that the generator, that is described first is represented at the right. \n - sec 3.2 - first line: wrong figure reference - you refer to fig 2 - but probably mean fig 1\n - page 3 bottom: by appending the encoded color and radius data we have a feature with shape 64x64xn -> I don't quite see how this is true. The image was 64x64 -> and I don't quite understand why you have a color/radius for each pixel. \n - sec 3.3 - it seem sthat there is a partial sentence missing \n - sec 3.4 - is it relevant to the rest of the paper that the web application exists (and how it was implemented). \n - fig 2 / fig 3: these figures are very hard to read. Maybe inverting the images would help. Also fig 3 has very little value. ", "The authors have addressed most of my comments. I still feel the paper is a bit too early and a more thorough evaluation and explanation would be preferable over the current version. \nI however feel that the current version is acceptable as is and will adjust my review accordingly.", "Many thanks to your detailed advice and patient review! With your help we made this paper more full-fledged.\n\nAs for your major concerns, we made the following improvements:\n\n1.\tWe extended the architecture with a simple recurrent structure and implemented a blending algorithm to enable multiple-stroke drawing. 
We would like to address several issues here:\n\n1)\tQ: “Couldn't the encoder just output the stroke in a format that contains the pen-down / pen-up event, like the stroke format suggested in [2]?”\nA: Indeed. That’s partly what the pressure parameters in the actions are intended for. However, the agent didn’t develop the trick of zero pressure.\n\n2)\tQ: “Why you only allowed 16 points since most datasets contain sequences longer than 16?”\nA: This is a commonly asked question so we added discussion in Section 3.1. Basically, with the power of Catmull-Rom spline, many sampled points in those datasets could be considered redundant. Most strokes we used in writing and drawing are nice and smooth, and we can vectorize them with a few control points and spline algorithms. In other words, those strokes are scale-invariant, so even for long strokes, we can represent them with a few points. In our setup, we found that 16 points offer powerful enough capability to fit various curves.\n\n2.\tThis is actually a great idea! However, if we use software like [6] to convert dataset like MNIST to vectors, for digits drawn with thick pen, we would yield the contour of the digits, which is not the sequence how the digit is written. Meanwhile, our agent learns to control the size of the brush to draw digits. There’re limitations to our methods though, discussed in Section 6. Our agent avoids to draw intersecting lines, e.g, when writing “8” it’s actually writing “3” with closed endpoints.\n\n3.\tWe added latent space interpolation and latent variable arithmetic for the MNIST agent. We really appreciate this suggestion.\n\n4.\tWe added experiments with QuickDraw and KanjiVG using the recurrent version of StrokeNet. For KanjiVG, we found the agent is doodling instead of writing, which resulted in utterly different stroke orders than humans, we compared the stroke orders in Section 6.\n\nFor minor points:\na)\tExperiment results are presented in PNG, while diagrams are already exported to PDF.\nb)\tWe edited most part of the paper so that the language style is more appropriate.\nc)\tNext time we will upload the code to anonymous repository. This time, however, the authors made sure in advance so that the github account doesn’t leak any identity information.\n", "We thank our reviewers for their valuable feedback. We’ve updated our paper with several major improvements:\n1.\tWe extended our StrokeNet with a simple recurrent structure, which allowed us to evaluate the model on more complex datasets: QuickDraw and KanjiVG, in Section 5.2.\n2.\tWe trained a classifier on MNIST and tested it on generated digits to provide quantitative analysis of our agent in Section 5.4.\n3.\tWe compared our approach to reinforcement learning approaches like SPIRAL in Section 5.4.\n4.\tWe transformed our agent into a VAE and did latent space interpolation in Section 5.3.\n5.\tWe improved our writing style for better readability.\n", "Revision:\n\nThe authors have taken my advice and addressed my concerns wholeheartedly. It is clear to me that they have taken efforts to make notable progress during the rebuttal period. 
Summary of their improvements:\n\n- They have extended their methodology to handle multiple strokes\n- The model has been converted to a latent-space generative model (similar to Sketch-RNN, where the latent space is from a seq2seq VAE, and SPIRAL where the latent space is used by an adversarial framework)\n- They have run additional experiments on a diverse set of datasets (now including Kanji and QuickDraw), in addition to Omniglot and MNIST.\n- The newer version is better written, and I like that they are also honest enough to admit the limitations of their model rather than hide them.\n\nI think this work is a great companion to existing work such as Sketch-RNN and SPIRAL. As mentioned in my original review, the main advantage of this is the ability to train with very limited compute resources, due to the model-based learning inspired by model-based RL work (they cited some work on world models). Taking important concepts from various different (sub) research areas and synthesizing them into this nice work should be an inspiration to the broader community. The release of their code to reproduce results of all the experiments will also facilitate future research into this exciting topic of vector-drawing models.\n\nI have revised my score to 8, since I believe this to be at least in the better half of accepted papers at ICLR based on my experience of publishing and attending the conference in the past few years. I hope the other reviewers can have some time to reevaluate the revision.\n\nOriginal review:\n\nSummary: they propose a differentiable learning algorithm that can output a brush stroke that can approximate a pixel image input, such as MNIST or Omniglot. Unlike sketch-pix2seq[3] (which is a pixel input -> sketch output model based on sketch-rnn[2]), their method trains in an unsupervised manner and does not require paired image/stroke data. They do this via training a \"world model\" to approximate brush painting software and emulate it. Since this emulation model is differentiable, they can easily train an algorithm to output a stroke to approximate the drawing via back propagation, and avoid using RL and the costly compute of earlier works such as [1].\n\nThe main strength of this paper is the original thought that went into it. From reading the paper, my guess is the authors came from a background that is not pure ML research (for instance, they are experts in Javascript, WebGL, and their writing style is easy to read), and it's great to see new ideas come into our field. While research from big labs [1] has the advantage of access to massive compute so that they can run large scale RL experiments to train an agent to \"sketch\" something that looks like MNIST or Omniglot, the authors probably had limited resources, and had to be more creative to come up with a solution to do the same thing that trains in a couple of hours using a single P40 GPU. Unlike [1] that used an actual software rendering package that is controlled by a stroke-drawing agent, their creative approach here is to train a generator network to learn to approximate a painting package they had built, and then freeze the weights of this generator to efficiently train an agent to draw. The results for MNIST and Omniglot look comparable to [1] but are achieved with far fewer resources. 
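To make the training scheme described in the paragraph above concrete, the following is a minimal, hedged PyTorch-style sketch of the idea: a learned renderer emulates the painting software, its weights are frozen, and gradients flow through it into the stroke-producing agent. The module names and the plain pixel loss are assumptions for illustration, not the paper's actual architecture or objective:

import torch

def train_agent(agent, frozen_renderer, images, lr=1e-4, steps=1000):
    # The learned renderer approximates the (non-differentiable) painting
    # software; freezing it lets gradients reach the agent.
    for p in frozen_renderer.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(agent.parameters(), lr=lr)
    for step in range(steps):
        target = images[step % len(images)]        # target image tensor, e.g. (1, 1, H, W)
        stroke = agent(target)                     # predicted stroke parameters
        canvas = frozen_renderer(stroke)           # differentiable "painting" of the stroke
        loss = torch.mean((canvas - target) ** 2)  # simple pixel reconstruction loss
        opt.zero_grad()
        loss.backward()                            # gradients flow through the frozen renderer into the agent
        opt.step()
    return agent

Once trained this way, the agent's stroke outputs can be replayed in the real painting software, which is the transfer result the paper reports.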
I find this work refreshing, and I think it can be potentially much more impactful than [1] since people can actually use it with limited compute resources, and without using RL.\n\nThat being said, things are not all rosy, and I feel there are things that need to be done for this work to be ready for publication in a good venue like ICLR. Below are a few of my suggestions that I hope will help the authors improve their work, for either this conference, or if it gets rejected, I encourage the authors to try the next conference with these improvements:\n\n1) multiple strokes, longe strokes. I don't think having a model that can output only a single stroke is scalable to other (simple) datasets such pixel versions of KangiVG [4] or QuickDraw [5]. The authors mentioned the need for an RNN, but couldn't the encoder just output the stroke in a format that contains the pen-down / pen-up event, like the stroke format suggested in [2]? Maybe, maybe not, but in either case, for this work to matter, multiple stroke generation is needed. Most datasets are also longer than 16 points, so you will need to show that your method works for say 80-120 points for this method to be comparable to existing work. If you can't scale up 16 points, would like to see a detailed discussion as to why.\n\n2) While I like this method and approach, to play devil's advocate, what if I simply use an off the shelf bmp-to-svg converter that is fast and efficient (like [6]), and just build a set of stroke data from a dataset of pixel data, and train a sketch-rnn type model described in [3] to convert from pixel to stroke? What does this method offer that my description fails to offer? Would like to see some discussion there.\n\n3) I'll give a hint for as to what I think for (2). I think the value in this method is that it can be converted to a full generative model with latent variables (like a VAE, GAN, sketch-rnn) where you can feed in a random vector (gaussian or uniform), and get a sketch as an output, and do things like interpolate between two sketches. Correct me if I'm wrong, but I don't think the encoder here in the first figure outputs an embedding that has a Gaussian prior (like a VAE), so it fails to be a generative model (check out [1], even that is a latent variable model). I think the model can be easily converted to one though to address this issue, and I strongly encourage the authors to try enforcing a Gaussian prior to an embedding space (that can fit right between the 16x16x128 average pooling op to the fully connected 1024 sized layer), and show results where we can interpolate between two latent variables and see how the vector sketches are interpolated. This has also been done in [2]. If the authors need space, I suggest putting the loss diagrams near the end into the appendix, since those are not too interesting to look at.\n\n4) As mentioned earlier, I would love to see experimental results on [4] KangiVG and [5] QuickDraw datasets, even subsets of them. An interesting result would be to compare the stroke order of this algorithm with the natural stroke order for human doodles / Chinese characters.\n\nMinor points:\n\na) The figures look like they are bitmap, pixel images, but for a paper advocating stroke/vector images, I recommend exporting the diagrams in SVG format and convert them to PDF so they like crisp in the paper.\n\nb) Write style: There are some terms like \"huge\" dataset that is subjective and relative. 
While I'm happy about the writing style of this paper, maybe some reviewers who are more academic types might not like it and have a negative bias against this work. If things don't work out this time, I recommend the authors asking some friends who have published (successfully) at good ML conferences to proof read this paper for content and style.\n\nc) It's great to see that the implementation is open sourced, and put it on github. Next time, I recommend uploading it to an anonymous github profile/repo, although personally (and for the record, in case area chairs are looking), I don't mind at all in this case, and I don't think the author's github address revealed any real identity (I haven't tried digging deeper). Some other reviewers / area chairs might not like to see a github link that is not anonymized though.\n\nSo in the end, even though I really like this paper, I can only give a score of 6 (edit: this has since been revised upward to 8). If the authors are able to address points 1-4, please do what you can in the next few weeks and give it your best shot. I'll look at the paper again and will revise the score upwards by a point or two if I think the improvements are there. If not, and this work ends up getting rejected, please consider improving the work later on and submitting to the next venue. Good luck!\n\n[1] SPIRAL https://arxiv.org/abs/1804.01118\n[2] sketch-rnn https://arxiv.org/abs/1704.03477\n[3] sketch-pix2seq https://arxiv.org/abs/1709.04121\n[4] http://kanjivg.tagaini.net/\n[5] https://quickdraw.withgoogle.com/data\n[6] https://vectormagic.com/\n", "Thank you for your patient instructions. We followed your advice and made many improvements.\n\n1.\tIndeed, the idea was premature. In the updated version of the paper, we extended the architecture and evaluated our model on various datasets.\n\n2.\tThis is a commonly asked question so we added discussion in Section 3.1. We’ve extended our architecture to generate more complex pictures with multiple strokes. Basically, with the power of Catmull-Rom spline, many sampled points in those datasets could be considered redundant. Most strokes we used in writing and drawing are nice and smooth, and we can vectorize them with a few control points and spline algorithms. In other words, those strokes are scale-invariant, so even for long strokes, we can represent them with a few points. In our setup, we found that 16 points offer powerful enough capability to fit various curves.\n\n3.\tWe trained a recognizer on the original MNIST dataset and tested it on the generated digits in Section 5.4. The close accuracy reflects the quality of the agent model quantitatively.\n\n4.\tThe major difference between our environment and the one used by SPIRAL is that ours uses Catmull-Rom spline while SPIRAL uses Bezier curve. A Bezier curve doesn’t pass through its control points while a Catmull-Rom spline does. Also, the brush rendering algorithm is different depending on what type of brushes the experiments used. From the perspective of training the agent, the nuance between the environment doesn’t affect too much.\n\nMinor points:\nWe followed your comments, edited the paper, and moved unnecessary parts to the appendix to avoid confusion. \n\nRegarding the shape of the feature, n points are transformed to n 64x64 feature maps by an MLP, then every neighboring pair of feature maps are added together, which reduces the number of feature maps to n – 1. 
Then we concatenate the feature of brush data, which is also 64x64, and finally we yield 64x64xn feature maps.\nFor the color and radius, we don’t have a color and radius for each pixel, but we do have color and radius for each stroke, as well as the interpolated points along the spline, as shown in Figure 3. For such points and their resulting circle discs, the surrounding pixel values depend on the color and radius.\n" ]
[ -1, -1, -1, 7, -1, 6, -1, -1, -1, 8, -1 ]
[ -1, -1, -1, 4, -1, 4, -1, -1, -1, 5, -1 ]
[ "H1x_SWqKn7", "Hyg0dv2n2Q", "SJeRjh4SkN", "iclr_2019_HJxwDiActX", "Hyg0dv2n2Q", "iclr_2019_HJxwDiActX", "HkeU4TB9CQ", "H1x_SWqKn7", "iclr_2019_HJxwDiActX", "iclr_2019_HJxwDiActX", "BJg-3v4v3X" ]
iclr_2019_HJxyAjRcFX
Harmonizing Maximum Likelihood with GANs for Multimodal Conditional Generation
Recent advances in conditional image generation tasks, such as image-to-image translation and image inpainting, are largely attributed to the success of conditional GAN models, which are often optimized by the joint use of the GAN loss with the reconstruction loss. However, we reveal that this training recipe shared by almost all existing methods causes one critical side effect: lack of diversity in output samples. In order to accomplish both training stability and multimodal output generation, we propose novel training schemes with a new set of losses named moment reconstruction losses that simply replace the reconstruction loss. We show that our approach is applicable to any conditional generation task by performing thorough experiments on image-to-image translation, super-resolution and image inpainting using the Cityscapes and CelebA datasets. Quantitative evaluations also confirm that our methods achieve great diversity in outputs while retaining or even improving the visual fidelity of generated samples.
accepted-poster-papers
The paper presents new loss functions (which replace the reconstruction part) for the training of conditional GANs. Theoretical considerations and an empirical analysis show that the proposed loss can better handle multimodality of the target distribution than reconstruction based losses while being competitive in terms of image quality.
train
[ "HkgbB4TxeV", "r1gdodFyx4", "Skg_k4wklV", "rJxtHml92m", "Hyxkbojj2Q", "B1gG_obrAQ", "B1l6yj-H0m", "H1ljacZB0X", "Byer85ZHAQ", "B1e85wUE67", "S1gPNubqn7" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "We are deeply grateful to reviewer3 for a quick reply that reveals the detailed ground for the decision. Now we can understand the review much better to offer more focused answers to the concerns raised by reviewer3.\n\n1. Novelty\n===================================\nAccording to reviewer3’s clarification, per-pixel mean and variance prediction is the core of our methods, and thus our methods don’t have enough novelty compared to the cited papers.\n\nAlthough some of our methods involve mean and variance prediction, the key idea of our methods is matching the moments of the sample distribution to the maximum likelihood estimates of the real moments. As such, MLMM_1 and MCMLE_1, for example, do not use the variance prediction but achieve great diversity and quality. \n\nNote that our methods suggest two simple modifications to existing conditional GANs as final recipes; thus it would not be surprising that some previous work used similar techniques in other applications. However, we would like to emphasize that our methods are novel in the context of conditional GANs and mode collapse of GANs. \n\n\n2. Theoretical results\n===================================\nWe would like to clarify that the proof that reviewer3 looks for is in section 4.4 not in section 3.2. During the rebuttal period, we reorganized section 3.2 and section 4.4 to reflect reviewer3’s comments and to streamline the logic. In the current draft, section 3.2 contains the proof about the conflict between the reconstruction loss and the GAN loss, while section 4.4 proves that our approach does not suffer from the same problem.\n", "Addressing the concerns that my updated review was imprecise or unhelpful and point (3) about the authors' rebuttal being ignored, I hope that the following points make it clear that the rebuttal was carefully considered in making my decision.\n\n1. Regarding novelty\n\nMy initial concern was regarding the novelty of the proposed method and not that of the criticism of the use of reconstruction loss in conditional GANs. The authors responded in the rebuttal that their paper has significant novelties in (1) formal criticism of the use of reconstruction loss and (2) their proposed method in the context of conditional generation tasks. Regarding the concern that I did not leave any comment on this response, my concern was regarding point (2), as the papers I cited, though not directly applied to conditional generation, demonstrate that the proposed method lacks novelty. However, the authors have stressed that I have ignored (1). This point was never a concern for me as I do agree that such a criticism of reconstruction loss has not been shown in prior work.\n\nMy claim that the paper lacks novelty is specifically due to prior work such as CodeSLAM [1] using per-pixel mean and variance predictions. Regarding the point made in the authors' initial response that it is novel to combine these well-established ideas as a solution to loss of multimodality in conditional generation - it is indeed novel to combine or re-use existing ideas to new domains but in this case, the main approach proposed in the paper is solely a result of such re-use this significantly reduces the overall novelty of the paper.\n\n\n2. Regarding theoretical results\n\n“Our methods are designed not to interrupt the GAN optimization and we proved it”\nCorrect me if I’m wrong but I do not see any theoretical proof in the current revision of the paper showing that the proposed method does not interrupt GAN optimization. 
Section 3.2 in the current version seems to be the only theoretical proof which is criticizing the use reconstruction loss with GAN loss.\n\n“reviewer3 seems to take the lack of proof that our model prevents mode collapse as a serious flaw in our work”\nFor correctness, I was not looking for a specific proof about the proposed work preventing mode collapse. The question I would like to ask is - Given that the paper has shown that the optimizing reconstruction loss directly with GAN loss will lead to inevitable mode collapse, what is the proof that the proposed approach will not suffer the same fate? I agree that your proposed approach does not directly optimize the reconstruction loss, which gives intuition that the moment-matching approaches should not suffer from the same drawbacks. However, this does not constitute a theoretical result backing the claim that the proposed method will not interrupt GAN optimization.\n\nComparing with Diversity-Sensitive Conditional Generative Adversarial Networks: I agree that their approach is vastly different, but the merits of that paper are completely different. Their proposed approach is novel and their analysis shows various perspectives of why their approach is effective. This again comes back to my point that this paper lacks theoretical analysis proving that the proposed method is effective.\n\n”In contrast, we point out that the reconstruction loss conflicts with GANs in a way that reduces the output variance and proposes alternatives without such problem. Thus, we prove the problem of reconstruction loss and that our methods do not conflict with the GAN objective.”\nMy response to this point is the same as what I have stated earlier. I do not see how the second sentence follows from the first - the proof in Section 3.2 does not say anything about whether the proposed approach has any guarantees against mode collapse or conflicts with the GAN objective.\n\n\n3. Conclusion\n\nI wholeheartedly agree that the reviewers and authors must communicate in a precise and constructive way. I am happy to make my decision process transparent and continue this discussion.\n\n\n[1] Bloesch, M., Czarnowski, J., Clark, R., Leutenegger, S., & Davison, A. J. (2018). CodeSLAM-Learning a Compact, Optimisable Representation for Dense Visual SLAM. CVPR 2018.", "We thank again the reviewer for the comments. \nHowever, we have the impression that some critics are unfair, imprecise and unhelpful; thus, hardly acceptable for us. Please see below why. \n\n\n1. Novelty\n===================================\nReviewer3 raised again the concern about novelty in the updated review. \n\n\nIn our rebuttal, we clarified that our work is the first to analyze why the use of reconstruction loss leads to the mode collapse (lose of multimodality) in conditional GANs. Our work is also the first to propose alternatives to the reconstruction loss which greatly improve the multimodality of conditional GANs without losing the visual fidelity of the output samples.\n\nReviewer3 did not leave any comment on this clarification and failed to mention any specific works that undermine our novelty and how closely they are related to our work. In the initial review, reviewer3 referred several papers about variance prediction; however, these papers have no relation with conditional GANs or mode collapse. \n\nWe sincerely ask reviewer3 to be specific and detailed on the claim that our work lacks novelty with proper ground.\n\n\n2. 
Theoretical results\n===================================\n“Proving that the proposed method is actually effective in what is designed to do”\nAccording to the modified review, reviewer3 seems to take the lack of proof that our model prevents mode collapse as a serious flaw in our work.\nHowever, we think reviewer3 largely misunderstood the key to our paper. Our methods have no multimodality-enhancing mechanism; instead, GANs are responsible for multimodality. Our methods are designed not to interrupt the GAN optimization and we proved it. The methods simply offer training stability without interference. Thus, the multimodality observed in our methods is inherent from GANs, and we pointed out that it is suppressed by the reconstruction loss in existing conditional GANs.\n\nCompared with a parallel submission to ICLR 2019 below, it becomes more obvious that we provide necessary proofs.\nDiversity-Sensitive Conditional Generative Adversarial Networks (https://openreview.net/forum?id=rJliMh09F7)\n\nBoth papers share the same goal: multimodal generation in conditional GANs. However, the approaches are vastly different. Unlike our work, they add a regularization term to the loss while keeping the reconstruction loss. Their regularization term directly forces the model to generate diverse outputs. In this case, the proof that it facilitates diversity is necessary, so they present it. In contrast, we point out that the reconstruction loss conflicts with GANs in a way that reduces the output variance and proposes alternatives without such problem. Thus, we prove the problem of reconstruction loss and that our methods do not conflict with the GAN objective.\n\n\n3. Suggestion for better reviewing process \n===================================\nWe carefully prepared for the rebuttal to answer to the initial review critics. However, we feel that our rebuttal is completely ignored because we cannot find in the updated review which specific question is not answered by our rebuttal and why our clarification cannot be the answer to the original review questions. We strongly believe the communications between authors and reviewers should be precise, specific and helpful to one another. \n", "The paper proposes a modification to the traditional conditional GAN objective (which minimizes GAN loss as well as either L1 or L2 pixel-wise reconstruction losses) in order to promote diverse, multimodal generation of images. The modification involves replacing the L1/L2 reconstruction loss -- which predicts the first moment of a pixel-wise gaussian/laplace (respectively) likelihood model assuming a constant spherical covariance matrix -- with a new objective that matches the first and second moments of a pixel-wise gaussian/laplace likelihood model with diagonal covariance matrix. Two models are proposed for matching the first and second moments - the first one involves using a separate network to predict the moments from data which are then used to match the generator’s empirical estimates of the moments (using K samples of generated images). The second involves directly matching the empirical moment estimates using monte carlo.\n\nThe paper makes use of a well-established idea - modeling pixel-wise image likelihood with a diagonal covariance matrix i.e. heteroscedastic variance (which, as explained in [1], is a way to learn data-dependent aleatoric uncertainty). Following [1], the usage of first and second moment prediction is also prevalent in recent deep generative models (for example, [2]) i.e. 
image likelihood models predict the per-pixel mean and variance in the L2 likelihood case, for optimizing Equation 4 from the paper. Recent work has also attempted to go beyond the assumption of a diagonal covariance matrix (for example, in [3] a band-diagonal covariance matrix is estimated). Hence, the only novel idea in the paper seems to be the method for matching the empirical estimates of the first and second moments over K samples. The motivation for doing this makes intuitive sense since diversity in generation is desired, which is also demonstrated in the results.\n\nSection specific comments:\n- The loss of modality of reconstruction loss (section 3.2) seems like something which doesn’t require the extent of mathematical and empirical detail presented in the paper. Several of the cited works already mention the pitfalls of using reconstruction loss.\n\n- The analyses in section 4.4 are sound in derivation but not so much in the conclusions drawn. It is not clear that the lack of existence of a generator that is an optimal solution to the GAN and L2 loss (individually) implies that any learnt generator using GAN + L2 loss is suboptimal. More explanation on this part would help.\n\nThe paper is well written, presents a simple idea, complete with experiments for comparing diversity with competing methods. Some theoretical analyses do no directly support the proposition - e.g. sections 3.2 and 4.4 in my specific comments above. Hence, the claim that the proposed method prevents mode collapse (training stability) and gives diverse multi-modal predictions is supported by experiments and intuition for the method, but not so much theoretically. However, the major weakness of the paper is the lack of novelty of the core idea.\n\n=== Update after rebuttal:\nHaving read through the other reviews and the author's rebuttal, I am unsatisfied with the rebuttal and I do not recommend accepting the paper. My rating has decreased accordingly.\n\nThe reasons for my recommendation, after discussion with other reviews, are -- (1) lack of novelty and (2) weak theoretical results (some justification of which was stated in my initial review above). Elaborating more on the second point, I would like to mention some points which came up during the discussion with other reviewers: The theoretical result which states that not using reconstruction loss given that multi-modal outputs are desired is a weaker result than proving that the proposed method is actually effective in what it is designed to do. There are empirical results to back that claim, but I strongly believe that the theoretical results fall short and feel out of place in the overall justification for the proposed method. This, along with my earlier point of lack of novelty are the basis for my decision.\n\n\nReferences:\n[1] Kendall, Alex, and Yarin Gal. \"What uncertainties do we need in bayesian deep learning for computer vision?.\" Advances in neural information processing systems. 2017.\n[2] Bloesch, M., Czarnowski, J., Clark, R., Leutenegger, S., & Davison, A. J. (2018). CodeSLAM-Learning a Compact, Optimisable Representation for Dense Visual SLAM. CVPR 2018.\n[3] Dorta, G., Vicente, S., Agapito, L., Campbell, N. D., & Simpson, I. (2018, February). Structured Uncertainty Prediction Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.", "The paper describes an alternative to L1/L2 errors (wrt output and one ground-truth example) that are used to augment adversarial losses when training conditional GANs. 
While these augmented losses are often needed to stabilize and guide GAN training, the authors argue that they also bias the optimization of the generator towards mode collapse. To address this, the method proposes two kinds of alternate losses--both of which essentially generate multiple sample outputs from the same input, fit these with a Gaussian distribution by computing the generating sample mean and variance, and try to maximize the likelihood of the true training output under this distribution. The paper provides theoretical and empirical analysis to show that the proposed approach leads to generators that produce samples that are both diverse and high-quality.\n\nI think this is a good paper and solves an important problem---where one usually had to sacrifice diversity to obtain stable training by adding a reconstruction loss. I recommend acceptance.\n\nAn interesting ablation experiment might be to see what happens when one no longer includes the GAN loss and trains only with the MLMM or MCMLE losses, and compare this to training with only the L1/L2 losses. The other thing I'd like the authors to comment on are the potential shortcomings of using a simple un-correlated Gaussian to model the sample distributions. It seems that such a distribution may not capture the fact that multiple dimensions of the output (i.e., multiple pixel intensities) are not independent conditioned on the input. Perhaps, it may be worth exploring whether Gaussians with general co-variance matrices, or independent in some de-correlated space (learned from say simply the set of outputs) may increase the efficacy of these losses.\n\n====Post-rebuttal\n\nI've read the other reviews and retain my positive impression of the paper. I also appreciate that the authors have conducted additional experiments based on my (non-binding) suggestions---and the results are indeed interesting. I am upgrading my score accordingly.", "\n1. The scope of our proof\n===================================\nThat’s a great point. We have to make it clear in the draft. Our proof is confined to conditional GAN models with no explicit latent variable. Since the explicit latent variables provide the model with a vehicle that can represent variability and multimodality, our argument in section 4.4 may not be applicable to the models that explicitly encode latent variables. We add this discussion to the end of section 4.4.\n\n2. BicycleGAN\n===================================\nBicycleGAN has been applied to image-to-image translation, but not to image inpainting and super-resolution. Thus, we cannot find any standard implementation (or learned parameters) of BicycleGAN for the two tasks, which was the main reason why we did not report its results on the two tasks - image inpainting and super-resolution. \n", "We are sincerely grateful for Reviewer 3’s thoughtful review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Novelty\n===================================\nWe believe that our work has significant novelties as follows:\n\n(1) To the best of our knowledge, our work is the first to formally criticize the use of reconstruction loss in conditional GANs. We also connect this problem to mode collapse (lose of multimodality). Of the prior works in conditional generation tasks, several papers empirically mention the loss of stochasticity in conditional GANs. However, they fail to analyze why this happens or propose what solutions can solve this problem. 
On the other hand, we reveal that the GAN loss and the reconstruction loss cannot coexist in harmony, and propose a solution to overcome this problem.\n\n(2) We propose alternatives to the reconstruction loss to greatly improve the multimodality of conditional GANs. As Reviewer 3 pointed out, the components of our methods, MLE and moment matching, are well-established ideas. However, it is novel to combine them as a solution to the loss of multimodality in conditional generation. Furthermore, we think the simplicity of our methods is not a weakness but a strength, which makes our methods easily applicable to a wide range of conditional generation tasks.\n\n2. Specific comments on organization and drawn conclusions\n===================================\nWe reorganize section 3.2 and 4.4 to reflect Reviewer 3’s suggestion. Specifically, we simplify section 3.2 and move some content about reconstruction loss from 4.4 to 3.2. \n\nWe agree with Reviewer 3 that the conclusion of section 4.4 may be rather over-stated. Our proof says that any generator cannot be optimal to both GAN and L2 loss simultaneously. It does not prove the generator is underperforming or suboptimal. Therefore, we remove the term ‘suboptimal’ and tone down the overall argument.\n\nWe also cite the papers that Reviewer 3 suggested.\n", "We thank Reviewer 2 for positive and constructive reviews. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Convergence speed\n===================================\nWe observe that our methods need more training steps (about 1.5x) to generate high-quality images compared to that with the reconstruction loss. It might be obvious because our methods train the model to generate a much wider range of outputs. We add some comments to Appendix B.1 regarding the convergence speed.\n\n2. Training stability\n===================================\nMLMM is similar to the reconstruction loss in terms of training stability. Encouragingly, our methods stably work with a large range of hyperparameter \\lambda. For example, the loss coefficient of MLMM is settable across several orders of magnitude (from tens to thousands) with similar results. However, as noted in the paper, MCMLE is unstable compared to MLMM.\n\n3. Why only MLMM_1 is not compared\n===================================\nDue to many combinations between our methods and tasks, we had to choose only a few of our methods for human evaluation. Although MLMM_1 and MLMM_{1/2} attained similar performance for all three tasks, we chose MLMM_{1/2} as the ‘default’ method because it better implements our idea - matching more statistics (i.e. not only means but also variances). \n", "We thank Reviewer 1 for your encouraging and constructive comments. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Ablation experiments\n===================================\nWe carry out the ablation experiments and present the results in appendix G (page 22). The results are indeed interesting. When trained with MLMM_1 or MCMLE_1 only, the outputs are indistinguishable from those with the reconstruction loss only, since there is no variation-inducing term to generate diverse output. In the case of MLMM_{1/2} and MCMLE_{1/2}, the model shows high variation in the output. However, the patterns of the variations differ greatly. 
Specifically, MLMM_{1/2} shows variations in low-frequency while MCMLE_{1/2} shows those in high-frequency.\n\nWe also add experiments of using GAN loss, MLMM_{1/2} loss, and reconstruction loss altogether. Whiling fixing the coefficient of GAN loss and MLMM loss to 1 and 10 respectively, we gradually increase the coefficient of reconstruction loss from 0 to 100. We find that the output variation decreases as the reconstruction loss increases. Interestingly, the sample quality is high when the reconstruction loss is absolutely zero or dominated by the MLMM loss. In contrast, the samples show poor quality when the reconstruction coefficient is 1 or 10. It seems that either method can assist the GAN loss to find visually appealing local optima but the joint use of them leads to a troublesome behavior.\n\n2. Shortcomings of un-correlated Gaussian\n===================================\nThis is a very interesting and profound question that may need to be further investigated in the future work. In summary, we believe that incorporating more statistics is not guaranteed to improve the performance, and un-correlated Gaussian may not be a bad choice.\n\nAn ideal GAN loss can match with any kind of statistics since it minimizes the JS divergence between sample distribution and real distribution. In this sense, additional loss term should be regarded as a ‘guidance’ signal, while the key player is still the GAN loss. However, it is unclear whether a tighter guidance necessarily yields better outputs.\n\nRegarding the tightness of guidance, the loss terms can be ordered as follows:\nMLMM_1 = MCMLE_1 < MLMM_{1/2} = MCMLE_{1/2} < general covariance Gaussian.\n\nInterestingly, our qualitative evaluations show that MLMM_1 and MCMLE_1 generate comparable or even better outputs compared to MLMM_{1/2} and MCMLE_{1/2}. That is, matching means could be enough to guide GAN training in many cases. Adding more statistics may be helpful in some cases, but generally may not improve the performance. Moreover, we should consider the errors arising from the statistics prediction because a wrong estimation of statistics can even misguide the GAN training. \n\nPlease see blue fonts in section 5.2 of the newly uploaded draft to check how our paper is updated.\n", "Hi, \nI think this is an interesting work for improving the diversity of cGAN.\nBut I have some questions:\n1. The analysis in section 4.4 give a proof to mode collapse of some cGANs, such as pix2pix or UNIT. But the proof is not supported to the model that encode the laten representation to help generate images (Var(y|x,c)=0 is ok in this case), such as BicycleGAN or MUNIT. Right?\n\n2. The diversity scores in Table 1.(a) are remarkable. It will be interesting if you can present more comparisons with BicycleGAN in different tasks.", "This paper analyzes the model collapse problems on training conditional GANs and attribute it to the mismatch between GAN loss and reconstruction loss. This paper also proposes new types of reconstruction loss by measuring higher statistics for better multimodal conditional generation.\n\nPros:\n1.\tThe analysis in Sec 4.4 is insightful, which partially explains the success of MLMM and MCMLE over previous method in generating diverse conditional outputs.\n2.\tThe paper is well written and easy to follow.\n\nCons:\nAnalysis on the experiments is a little insufficient, as shown below.\n\nI have some questions (and suggestions) about experiments. 
\n1.\tHow is the training process affected by changing the reconstruction loss (e.g., how does the training curve change)? Do MLMM and MCMLE converge slower or faster than the original ones? What about training stability? \n2.\tWhy is only MLMM_1 not compared with other methods on SRGAN-celebA and GLCIC-A? From the pix2pix cases it seems that Gaussian MLMM_1 performs much better than MLMM_{1/2}.\n" ]
[ -1, -1, -1, 4, 8, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, -1, 5, 4, -1, -1, -1, -1, -1, 3 ]
[ "r1gdodFyx4", "Skg_k4wklV", "rJxtHml92m", "iclr_2019_HJxyAjRcFX", "iclr_2019_HJxyAjRcFX", "B1e85wUE67", "rJxtHml92m", "S1gPNubqn7", "Hyxkbojj2Q", "iclr_2019_HJxyAjRcFX", "iclr_2019_HJxyAjRcFX" ]
iclr_2019_HJz05o0qK7
Measuring Compositionality in Representation Learning
Many machine learning algorithms represent input data with vector embeddings or discrete codes. When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the inputs’ learned representations. While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces. We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives. We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization.
accepted-poster-papers
This paper presents a method for measuring the degree to which some representation for a composed object effectively represents the pieces from which it is composed. All three authors found this to be an important topic for study, and found the paper to be a limited but original and important step toward studying this topic. However, two reviewers expressed serious concerns about clarity, and were not fully satisfied with the revisions made so far. I'm recommending acceptance, but I ask the authors to further revise the paper (especially the introduction) to make sure it includes a blunt and straightforward presentation of the problem under study and the way TRE addresses it. I'm also somewhat concerned at R2's mention of a potential confound in one experiment. The paper has been updated with what appears to be a fix, though, and R2 has not yet responded, so I'm presuming that this issue has been resolved. I also ask the authors to release code shortly upon de-anonymization, as promised.
train
[ "HkepfJbY3m", "HygfJZNy0m", "BylIvb4JCm", "SJekSWNyCQ", "SJxgMW4JCQ", "Hye-eZd93m", "S1xUg9jcnX" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper tackles a very interesting problem about representations, especially of the connectionist kind -- how do we know if the learned representations capture the compositional structure present in the inputs, and tries to come up with a systematic framework to answer that question. The framework assumes the presence of an oracle that can give us the true compositional structure. Then the author try to answer some refreshing questions about the dynamics of learning and compositionality while citing some interesting background reading.\n\nHowever, I’m a bit torn about the experiments. On the one hand, I like the pedagogical nature of the experiments. They are small and should be easy to reproduce. On the other hand, all of them seem to be fairly similar kinds of composition with very few attributes (mostly bigrams). So whether the intuitions hold for more complex compositional structures is hard to say.\n\nNevertheless, it’s a well written paper and is a helpful first step towards studying the problem of compositionality in vector representations.\n\n\nMinor points\nPg 3 “_grammar_ for composing meanings *where* licensed by derivations” seems incorrect. \nFigure 5: seems quite noisy to make the linear relationship claim\n\nEDIT: I still think the compositions under consideration are the simpler ones. Still with the new experiments the coverage seems nicer. Given the authors plan to release their source code, I expect there will be an opportunity for the rest of the community to build on these, to test TRE's efficacy on more complex compositions. I updated my scores to reflect the change.", "Dear reviewers,\n\nThank you all for your detailed feedback! We are glad that you found our submission to be an \"interesting\" and \"pedagogical\" study of a \"fundamental question\". All of the reviews touched on a similar set of points, so we're addressing most of them in this top-level comment and will reply to individual reviews about more specific questions. \n\nFirst off: we've re-worked the communication experiments in section 7 in response to reviewer feedback. A new paper draft containing these changes has been uploaded. Briefly, the experiment now features:\n\n- a more complicated set of referents (with real tree structures of the form <<obj1:attr1, obj1:attr2>, <obj2:attr1, obj2:attr2>>)\n- a more interesting composition function (which is learned, non-commutative and sensitive to string indices) \n- as suggested by R2, a more challenging task for the listener (which is now required to generate referents rather than simply recognize them)\n\nThe high-level experimental conclusions have remained essentially the same---the only real difference is that some of the correlations are stronger than observed in the initial experiments.\n\nWe really appreciate the suggestions that led to these changes, and believe the new experiments better exercise all the pieces of the TRE framework. We hope they also address the main points raised in all the reviews, namely:\n\n- Choice of composition and distance functions: as mentioned, the new version of Sec. 7 has a new (learned, non-commutative) composition function, and continues to use l1 distance for similarity. This means the four example applications in the paper now feature 3 kinds of composition function (addition, the learned linear operation of S.7, and the general class considered in S.6), and 3 kinds of distance function (cosine similarity, l1 distance, and the general class in S.6). 
Between this and the fact that experiments cover examples from computer vision, natural language processing, and multiagent reinforcement learning, we believe we have provided fairly comprehensive evidence for the generality of TRE.\n\n- Necessity of pre-selecting a specific composition function: First, we emphasize that this choice is also a feature of all previous work that has attempted to analyze compositionality---both in learned representations and in the natural language processing literature (in the latter case commiting to particular functional forms with some free parameters). Moreover, the point of our Remark 2 is that some pre-commitment to a restricted composition function is essentially inevitable: if we allow the evaluation procedure to select an arbitrary composition function, the result will be trivial.\n\n- Presentation of the approach: We are grateful for all the presentation suggestions. R2's summary of what they would like to be better stated is similar to the statement in the introduction that \"the core of our proposal is to treat a set of primitive meaning representations D0 as hidden, and optimize over them to find an explicitly compositional model that approximates the true model as well as possible\". We have updated the text of the paper to reinforce this point in several other places.", "Thanks for your review---we hope the new experiments address your concerns about the generalization to new kinds of composition functions and deeper trees. Regarding Fig. 5---the relationship is indeed noisy, but as discussed in the paper the correlation is measurable and statistically significant to a high degree. ", "- Thank you for the citation suggestion---as mentioned in the response to R1, we've expanded the related work section to talk about this general line of NLP work, and mentioned in a couple of other places where learned \"NLP\"-style composition functions could be used. One could implement the specific model from B&Z in our framework (modulo some rank constraints) by taking the composition function to be bilinear with a particular choice of tensor.\n\n- Re section 4: the intuition is that there's lots of information in the input images that is not part of the compositional analysis and not relevant for the eventual classification (e.g. the saturation of the pixel in the top-left corner of the image). A good representation will abstract out this irrelevant information, thus reducing MI between the representation and the input image. The experiments in this section suggest that this process of abstraction also results in more compositional representations.\n\n- Re section 5: the single-word representations are also included in the representation dataset X---that is, the model needs to find primitives that are both close to single-word representations and compose properly.\n\n- If time permits, we will try to add extra topographic similarity experiments to S.7. ", "Thanks for the suggested Fyshe cite---we've realized that the related work section should really spend more time on the (large collection) of NLP papers about learning composition functions to predict phrase representations. We've updated the paper accordingly.", "Edit and a further question: Reading again Section 7, I'm wondering whether the the high generalization is possible due to the fact that at test time only one of the two candidates is unseen, and the other is seen. 
Having *both* candidates be unseen makes the problem significantly harder since the only way for the listener to get it right is to associate the message with the right candidate, rather than relying on some other strategy like whether the message is novel (thus it's the seen candidate) or new (thus it's the unseen candidate). As such, I don't think I can fully trust your conclusions due to this potential confounder. \n--------------------------------------------------------------\n\nThe authors propose a measure of compositionality in representations. Given instances of data x annotated with semantic primitives, the authors learn a vector for each of the primitives such that the addition of the vectors of the primitives is very close (in terms of cosine) to the latent representation z of the input x. The authors find that this measure correlates with the mutual information between the input x and z, approximates the human judgments of compositionality on a language dataset and finally present a study on the relation between the proposed measure and generalization performance, concluding that their measure correlates with generalization error as well as absolute test accuracy.\n\nThis is an interesting study and attacks a very fundamental question; tracking compositionality in representations could pave the way towards representations that facilitate transfer learning and better generalization. While the paper is very clear with respect to results, I found the presentation of the proposed measure overly confusing (and somewhat more exaggerated than what is really going on). \n\nThe authors start with a very clean example that can potentially facilitate clarifying in a visual way the process of obtaining the measure. However, I feel that clarity is being traded-off for formality. It needs several reads to really distill the idea that essentially the authors are simply learning vectors of primitives that when added should resemble the representation of the input. Moreover, the name of the measure is a bit misleading and not justified by the experiments and the data. The authors do not deal with trees in any of the examples, but rather with a set of primitives (apparent in the use of addition as a composition function which, being commutative, does not allow for word order and similar deep syntactic properties). \n\nNow, onto the measure. I like the idea of learning basis vectors from the representations and constraining them to follow the primitive semantics. Of course, this constrains quite a bit the form of compositionality that the authors are searching for. \nThe idea of additive semantics has been explored in NLP, however it's mostly applicable for primitives with intersective semantics (e.g., a white towel is something that is both white and a towel). Do the authors think that this restricts their experiments (especially the natural language ones)? What about other composition techniques found in the literature of compositional semantics (e.g., by Baroni and Zamparelli, 2010)? \nThis would be good to clarify. Moreover, given the simplicity of the datasets in the current study, wouldn't a reasonable baseline be to obtain the basis vector of blue by averaging all the latent representations of blue? Similarly, how sensitive are conclusions with respect to different composition functions?\n\nSection 4 is potentially very interesting, but I don't seem to understand why it's good news that TRE correlates with I(x;\theta). Low TRE indicates a high degree of compositionality. 
I suspect that low MI means that input and latent representation are somewhat independent but I don't see the connection to compositional components. Can the authors clarify?\n\nSection 5 is a nice addition. The authors mention that they learn word and phrase representations. Where are the word representations used? My understanding is that you derive basis word representations by using SGD and the phrase vectors and compute TRE with these. If this is the case, an interesting experiment would be to report how similar the induced basis vectors are (either some first-order or second-order similarity) to the pre-trained ones.\n\nSection 8 presents results on discrete representations. Since this is the experiment most similar to the recent work that uses topographic similarity (and since the authors already prime the reader at section 7 about relation between the 2 measures), it would be interesting to see the empirical relation between TRE and topographic and its relation to generalization and absolute performance. \n\nBaroni and Zamparelli (2010) Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space\n", "This paper describes a framework - Tree Reconstruction Error (TRE) - for assessing compositionality of representations by comparing the learned outputs against those of the closest compositional approximation. The paper demonstrates the use of this framework to assess the role of compositionality in a hypothetical compression phase of representation learning, compares the correspondence of TRE with human judgments of compositionality of bigrams, provides an explanation of the relationship of the metric to topographic similarity, and uses the framework to draw conclusions about the role of compositionality in model generalization.\n\nOverall I think this is a solid paper, with an interesting and reasonable approach to quantifying compositionality, and a fairly compelling set of results. The reported experiments cover reasonable ground in terms of questions relevant to compositionality (relationship to representation compression, generalization), and I appreciate the comparison to human judgments, which lends credibility to applicability of the framework. The results are generally intuitive and reasonable enough to be credible as indicators of how compositionality relates to aspects of learning, while providing some potential insight. The paper is clearly written, and to my knowledge the approach is novel.\n\nI would say the main limitation to the conclusions that can be drawn from these experiments lies in the necessity of committing to a particular composition operator, of which the authors have selected very simple ones without comparing to others. There is nothing obviously unreasonable about the choices of composition operator, but it seems that the conclusions drawn cannot be construed to apply to compositionality as a general concept, but rather to compositionality when defined by these particular operators. Similar limitations apply to the fact that the tests have been run on very specific tasks - it is not clear how these conclusions would generalize to other tasks.\n\nDespite this limitation, I'm inclined to say that the introduction of the framework is a solid contribution, and the results presented are interesting. 
I think this is a reasonable paper to accept for publication.\n\nMinor comment:\np8 typo: \"training and accuracies\"\n\n------\n\nReviewer 2 makes a good point that the presentation of the framework could be much clearer, currently obscuring the central role of learning the primitive representations. This is something that would benefit from revision. Reviewer 2's comments also remind me that, from a perspective of learning composition-ready primitives, Fyshe et al. (2015) is a relevant reference here, as it similarly learns primitive (word) representations to be compatible with a chosen composition function. \n\nBeyond issues of presentation, it seems that we are all in agreement that the paper's takeaways would also benefit from an increase in the scope of the experiments. I'm happy to adjust my score to reflect this.\n\nReference:\nFyshe et al. (2015) A compositional and interpretable semantic space.\n" ]
[ 7, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_HJz05o0qK7", "iclr_2019_HJz05o0qK7", "HkepfJbY3m", "Hye-eZd93m", "S1xUg9jcnX", "iclr_2019_HJz05o0qK7", "iclr_2019_HJz05o0qK7" ]
iclr_2019_HJz6tiCqYm
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.
accepted-poster-papers
The reviewers have all recommended accepting this paper, and thus I am as well. Based on the reviews and the selectivity of the single track for oral presentations, I am only recommending acceptance as a poster.
train
[ "BygwXHNOlV", "rJxW-nzJyN", "ryxahIfEgE", "SJefbjX1e4", "S1l6EPphkE", "rylvrBOmkV", "SJladTB7y4", "rklBbUGA0m", "rJxnuikRA7", "SJgvBRYaR7", "BkxNgFD9RX", "B1xEWdwc0Q", "rye6cKP5Cm", "Skg0XKPqR7", "r1xJYOvcR7", "B1xfnvvc0m", "SklrTPkU07", "rkl_eOltTX", "S1xvCfDD6X", "HkgLx3DzpQ", "ryeoWVTch7" ]
[ "author", "author", "public", "official_reviewer", "author", "public", "official_reviewer", "author", "public", "public", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "public", "official_reviewer" ]
[ "Thank you for your interest! Since this task is not adversarial in nature, we do not intend to continually modify the corruptions to subvert new approaches, much like how CIFAR-10 did not continually change to make classification harder for every new architecture and method. Improved generalization to unseen corruptions suggests improved corruption robustness. However if necessary we are open to updating the benchmark, but we will first see whether the research community experiments in this setting.", "A parallel submission proposes to train classifiers on stylized ImageNet images. The aim is to make classifiers rely less on texture and more on shape. https://openreview.net/pdf?id=Bygh9j09KX\n\nWe have found that this method indeed improves corruption robustness. A ResNet-50 obtains an mCE of 76.70%, while a ResNet-50 trained on both ImageNet images and stylized ImageNet images has an mCE of 69.32% (with general improvements noise, blur, weather, and digital categories).", "Hi, it’s an interesting work!\n\nI would like to ask the authors how to ensure that the benchmarks are sufficiently representative to evaluate the robustness of models.\n\nWill the benchmarks be updated in the future as new adversarial attacks (Corruptions or Perturbations) emerge?\n\n\n", "This work already cites many previous fragility studies, both from robustness to random corruptions/perturbations and with respect to worst-case corruptions, some works which include robustness to translations. Based on a quick reading of the proposed additional citations it is unclear to me what these works add on top of what is already cited. I have no strong opinion either way whether additional citations are added, I leave it up to the authors or other reviewers to decide what is best for the proper context of this work.", "Please excuse the delayed response, as we were at NeurIPS.\n\nThe original poster sent an e-mail many months ago including numerous links to many papers, including several of their own. We conclude this because we received only one e-mail with citation suggestions. In consequence, we cited two of the papers authored by the person sending the e-mail, giving a sentence description for each citation. Several months later, the email sender posted the comment above. The only link which appeared in both the e-mail and in the comment above is Engstrom et al. (which is under review). The Fawzi et al. and Kanbak et al. papers are new to us. These may be added to the \"ConvNet Fragility Studies\" section. We think it is a reasonable suggestion to spend more time discussing other ConvNet perturbation fragility findings, although we do already cite works which mention translation instability (such as the parallel work of Azulay & Weiss, 2018).", "If I am interpreting the comment correctly, the author seems to be saying that they have cited the sender of the email's other papers, but does not see the need to cite any of the papers listed above. \n\nThis is a bit confusing as our comments are not an attempt to \"extort\" citations, but rather an effort to put this work in the right context. The fact that the authors cite other (admittedly less relevant) papers of the email sender does not render the suggested work less relevant.\n\nTo reiterate, I believe that all these works are very relevant to the subject of the above paper. 
If the authors do not want to cite these papers, that is okay - however one would expect them to at least explain in OpenReview why or give a brief comparison.", "It sounds as if the authors agree with the suggestion, although I am not completely sure. However, I would like to emphasize that if they did not agree, then it would be up to the reviewers to determine whether adding these citations was important. Without investigating further, I have no position either way.\n\nBut, in general, our obligation as scientists is to cite other work when doing so benefits the reader. We should exercise our own taste in what we cite and avoid citing things that we do not think enhance the experience of the reader. \n\nAuthors: please don't hesitate to ask reviewers+AC to weigh-in if you are even in doubt about the importance of adding a particular citation.", "We would be happy to expand the related works further in future revisions of this draft. We have cited the sender of the e-mail from \"a long time ago\" twice in the current draft, but we can add more in a future revision.", "I would like to point out that this submission is missing a discussion of some very relevant prior work. That work already evaluates the robustness of ML classifiers to naturally occurring transformations such as rotations and translations. Specifically:\n\n• Fawzi et al. (2015) [https://arxiv.org/abs/1507.06535] compute the minimum transformation (composed of rotations, translations, scaling, etc.) needed to cause a misclassification for a wide variety of models. They find that it is relatively small, and make several observations about the relative robustness of different classifiers.\n\n• Engstrom et al. (2017) [https://arxiv.org/abs/1712.02779] fix a range of rotations and translations and compute that worst-case accuracy of models over this space. They also find models to be relatively non-robust and propose methods for improving it.\n\n• Kanbak et al. (2018) [https://arxiv.org/abs/1711.09115] develop a first-order method to find such worst-case transformations fast. They show that this method can then be used to perform adversarial training and improve the model's robustness.\n\n```The authors were already notified about existence of some of this prior work a long time ago, but still seem to dismiss it.", "Thanks for the quick response. Also, I really appreciated the additional section at the end of the paper where you talk about the robustness enhancement attempts, it is good to know not just what worked but also what did not work and why. ", "Thank you for your interest in this topic and your analysis of our paper.\n\n“I think it might be more realistic to allow training on a subset of the corruptions.”\nResearchers could train on various other corruptions, such as film grain, adversarial noise, HSV noise, uniform noise,\nhigh-pass filtering, median blur, spherical camera distortions, pincushion distortions, out-of-distribution object occlusions, stylized images ( https://openreview.net/forum?id=Bygh9j09KX ), lens scratches, image quilting, color quantization, etc. We have updated the text to make it clearer that researchers can train on more than just cropped and flipped images, but we still do not want researchers training on the test corruptions. 
In the paper we experimented with uniform noise data augmentation in the stability training experiment and found minor perturbation robustness gains, but not with Gaussian noise with a large standard deviation.\n\nThank you for pointing out that the brief Stone comment requires much more context. For that reason we have removed the citation. Essentially, if f is a model and f^\\hat is an approximation, and if input x is d-dimensional, then if we want | f(x) - f^\\hat (x) | < epsilon, then in some scenarios the number of samples necessary is ~ epsilon^{-d}. Other context is on slide 10 of https://github.com/joanbruna/MathsDL-spring18/blob/master/lectures/lecture1.pdf\n\n“l infinity perturbations on small images”\nThanks to your suggestion, we have changed this to “perturbations on small images.” We kept the word “small” as the images often have side length 32 pixels. We removed “l_infinity” since that method has had some success for perturbations which are small in an l_2 sense.", "Thank you for your interest in this topic and making us aware of your work. An earlier draft of our work appeared months before the time of the ICLR submission deadline, and we have added all citations to your traffic sign recognition work and your parallel works.", "We thank you for your careful analysis of our paper.\n\n“Question: Why do authors do not recommend training on the new datasets?”\nWe do not suggest this as the datasets are corrupted or perturbed forms of clean ImageNet validation images, and that training on these specific corruptions would no longer provide a test of generalization ability to novel forms of corruptions. Researchers could train on various other corruptions, such as film grain, adversarial noise, HSV noise, uniform noise, high-pass filtering, median blur, spherical camera distortions, pincushion distortions, out-of-distribution object occlusions, stylized images ( https://openreview.net/forum?id=Bygh9j09KX ), lens scratches, image quilting, color quantization, etc.\n\n“Are there other useful adversarial defenses?”\nDifferent adversarial training schemes can degrade accuracy so much that they performed worse on these benchmarks. Many other adversarial defenses which do not use train on adversarial or benign noise have been shown not to provide robustness on noise corruptions (see the thorough work of https://openreview.net/pdf?id=S1xoy3CcYX Figure 3). In the coming month, we intend to explore more combinations of techniques to increase robustness, such as the combinations you suggest. In the appendix we explicate four attempts which did not lead to added robustness.", "Noises such as those from gradients or uniform noise are perfectly acceptable forms of augmentation for this task. In the stability training experiment, we observed only minor gains in perturbation robustness when training with uniform noise, but perhaps training with more severe uniform noise could improve corruption robustness. In the revised paper, we make it clearer that training with other forms of data augmentation is acceptable. Please forgive this confusion.", "We thank you for taking time to review our work.", "We should like to thank all of the reviewers and commenters for their constructive comments and kind reception. Independent from their comments, we have created CIFAR-10-C and CIFAR-10-P which could be adequate for rapid experimentation. Also in the revised version is a new appendix where we briefly analyze a different notion of robustness separate from our main contributions. 
We will respond to each reviewer’s comments individually.", "I would like to thank the authors for focusing on such a critical issue in a comprehensive manner. Algorithmic solutions behind the core technologies have to be robust even under challenging conditions in order for such technologies to be effective and useful in our daily lives. With more and more studies similar to the submitted ICLR work, we can identify the weaknesses and strengths of existing algorithms to develop more reliable perception systems. One of the main contributions of the submitted work is based on the common corruptions and perturbations not worst-case adversarial perturbations. With a similar mindset, we have introduced three datasets, two for traffic signs (CURE-TSR [2], CURE-TSD [3]) and one for generic objects (CURE-OR [1]) to investigate the robustness of recognition/detection systems under challenging conditions corresponding to adversaries that can naturally occur in real-world environments and systems. The controlled challenging conditions in the CURE-OR [1] dataset include underexposure, overexposure, blur, contrast, dirty lens, image noise, resizing, and loss of color information. And the controlled conditions in the CURE-TSR [2] and CURE-TSD [3] datasets include rain, snow, haze, shadow, underexposure, overexposure, blur, dirtiness, loss of color information, sensor and codec errors. Based on the similarities between introduced datasets and conducted studies, including aforementioned studies in the literature analysis of the submitted paper can be helpful to reflect recent related work. Looking forward to authors’ upcoming studies, thanks. \n\n[1] D. Temel*, J. Lee*, and G. AlRegib, “CURE-OR: Challenging unreal and real environments for object recognition,” IEEE International Conference on Machine Learning and Applications, Orlando, Florida, USA, December 2018, (*: equal contribution). https://arxiv.org/abs/1810.08293\n[2] D. Temel, G. Kwon*, M. Prabhushankar*, and G. AlRegib, “CURE-TSR: Challenging unreal and real environments for traffic sign recognition,” Advances in Neural Information Processing Systems (NIPS) Workshop on Machine Learning for Intelligent Transportation Systems, Long Beach, U.S., December 2017, (*: equal contribution).https://arxiv.org/abs/1712.02463\n[3] D. Temel and G. AlRegib, “Traffic Signs in the Wild: Highlights from the IEEE Video and Image Processing Cup 2017 Student Competition [SP Competitions],” in IEEE Signal Processing Magazine, vol. 35, no. 2, pp. 154-161, March 2018.https://arxiv.org/abs/1810.06169", "This paper introduces two benchmarks for image classifier robustness, ImageNet-C and Image-P. The benchmarks cover two important cases in classifier robustness which are ignored by most current researchers. The authors' evaluations also show that current deep learning methods have wide room for improvement. To our best knowledge, this is the first work that provides systematically a common benchmarks for the deep learning community. The reviewer believes that these two benchmarks can play an important role in the research of image classifier robustness.", "This paper introduces new benchmarks for measuring the robustness of computer vision models to various image corruptions. In contrast with the popular notion of “adversarial robustness”, instead of measuring robustness to small, worst-case perturbations this benchmark measures robustness in the average case, where the corruptions are larger and more likely to be encountered at deployment time. 
The first benchmark “Imagenet-C” consists of 15 commonly occurring image corruptions, ranging from additive noise, simulated weather corruptions, to digital corruptions arising from compression artifacts. Each corruption type has several levels of severity and overall corruption score is measured by improved robustness over a baseline model (in this case AlexNet). The second benchmark “Imagenet-P” measures the consistency of model predictions in a sequence of slightly perturbed image frames. These image sequences are produced by gradually varying an image corruption (e.g. gradually blurring an image). The stability of model predictions is measured by changes in the order of the top-5 predictions of the model. More stable models should not change their prediction to minute distortions in the image. Extensive experiments are run to benchmark recent architecture developments on this new benchmark. It’s found that more recent architectures are more robust on this benchmark, although this gained robustness is largely due to the architectures being more accurate overall. Some techniques for increasing model robustness are explored, including a recent adversarial defense “Adversarial Logit Pairing”, this method was shown to greatly increase robustness on the proposed benchmark. The authors recommend future work benchmark performance on this suite of common corruptions without training on this corruptions directly, and cite prior work which has found that training on one corruption type typically does not generalize to other corruption types. Thus the benchmark is a method for measuring model performance to “unknown” corruptions which should be expected during test time.\n\nIn my opinion this is an important contribution which could change how we measure the robustness of our models. Adversarial robustness is a closely related and popular metric but it is extremely difficult to measure and reported values of adversarial robustness are continuously being falsified [1,2,3]. In contrast, this benchmark provides a standardized and computationally tractable benchmark for measuring the robustness of neural networks to image corruptions. The proposed image corruptions are also more realistic, and better model the types of corruptions computer vision models are likely to encounter during deployment. I hope that future papers will consider this benchmark when measuring and improving neural network robustness. It remains to be seen how difficult the proposed benchmark will be, but the authors perform experiments on a number of baselines and show that it is non-trivial and interesting. At a minimum, solving this benchmark is a necessary step towards robust vision classifiers. \n\nAlthough I agree with the author’s recommendation that future works not train on all of the Imagenet-C corruptions, I think it might be more realistic to allow training on a subset of the corruptions. The reason why I mention this is it’s unclear whether or not adversarial training should be considered as performing data augmentation on some of these corruptions, it certainly is doing some form of data augmentation. Concurrent work [4] has run experiments on a resnet-50 for Imagenet and found that Gaussian data augmentation with large enough sigma (e.g. sigma = .4 when image pixels are on a [0,1] scale) does improve robustness to pepper noise and Gaussian blurring, with improvements comparable to that of adversarial training. Have the authors tried Gaussian data augmentation to see if it improves robustness to the other corruptions? 
I think this is an important baseline to compare with adversarial training or ALP.\n\nFew specific comments/typos:\n\nPage 2 “l infinity perturbations on small images”\n\nThe (Stone, 1982) reference is interesting, but it’s not clear to me that their main result has implications for adversarial robustness. Can the authors clarify how to map the L_p norm in function space of ||T_n - T(theta) || to the traditional notion of adversarial robustness?\n\n1. https://arxiv.org/pdf/1705.07263.pdf\n2. https://arxiv.org/pdf/1802.00420.pdf\n3. https://arxiv.org/pdf/1607.04311.pdf\n4. https://openreview.net/forum?id=S1xoy3CcYX&noteId=BklKxJBF57", "You've shown that ALP performs so well on this benchmark, but ALP performs some form of data augmentation by training on worst-case perturbations. Therefore, its unclear whether or not this satisfies the recommendation that future work not train on the Imagenet-C corruptions. Have you compared the ALP model with simply performing Gaussian data augmentation? Some recent adversarial defense works have reported that Gaussian data augmentation improves small perturbation robustness.", "Summary: This paper observes that a major flaw in common image-classification networks is their lack of robustness to common corruptions and perturbations. The authors develop and publish two variants of the ImageNet validation dataset, one for corruptions and one for perturbations. They then propose metrics for evaluating several common networks on their new datasets and find that robustness has not improved much from AlexNet to ResNet. They do, however, find several ways to improve performance including using larger networks, using ResNeXt, and using adversarial logit pairing.\n\nQuality: The datasets and metrics are very thoroughly treated, and are the key contribution of the paper. Some questions: What happens if you combine ResNeXt with ALP or histogram equalization? Or any other combinations? Is ALP equally beneficial across all networks? Are there other useful adversarial defenses?\n\nClarity: The novel validation sets and reasoning for them are well-explained, as are the evaluation metrics. Some explanation of adversarial logit pairing would be welcome, and some intuition (or speculation) as to why it is so effective at improving robustness.\n\nOriginality: Although adversarial robustness is a relatively popular subject, I am not aware of any other work presenting datasets of corrupted/perturbed images.\n\nSignificance: The paper highlights a significant weakness in many image-classification networks, provides a benchmark, and identifies ways to improve robustness. It would be improved by more thorough testing, but that is less important than the dataset, metrics and basic benchmarking provided.\n\nQuestion: Why do authors do not recommend training on the new datasets? " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 9, -1, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, -1, 4 ]
[ "ryxahIfEgE", "iclr_2019_HJz6tiCqYm", "iclr_2019_HJz6tiCqYm", "rylvrBOmkV", "SJladTB7y4", "SJladTB7y4", "rklBbUGA0m", "rJxnuikRA7", "iclr_2019_HJz6tiCqYm", "B1xEWdwc0Q", "S1xvCfDD6X", "SklrTPkU07", "ryeoWVTch7", "HkgLx3DzpQ", "rkl_eOltTX", "iclr_2019_HJz6tiCqYm", "iclr_2019_HJz6tiCqYm", "iclr_2019_HJz6tiCqYm", "iclr_2019_HJz6tiCqYm", "iclr_2019_HJz6tiCqYm", "iclr_2019_HJz6tiCqYm" ]
iclr_2019_Hk4dFjR5K7
ADef: an Iterative Algorithm to Construct Adversarial Deformations
While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.
accepted-poster-papers
The submission proposes a method to construct adversarial attacks based on deforming an input image rather than adding small perturbations. Although deformations can also be characterized by the difference of the original and deformed image, it is qualitatively and quantitatively different as a small deformation can result in a large difference. On the positive side, this paper proposes an interesting form of adversarial attack, whose success can give additional insights on the forms of existing adversarial attacks. The experiments on MNIST and ImageNet are reasonably comprehensive and allow interesting interpretation of how the image deforms to allow the attack. The paper is also praised for its clarity, and cleaner formulation compared to Xiao et al. (see below). Additional experiments during the rebuttal phase partially answered reviewer concerns, and provided more information e.g. about the effect of the smoothness of the deformation. There were some concerns that the paper primarily presents one idea, and perhaps missed an opportunity for deeper analysis (R1). R2 would have appreciated more analysis on how to defend against the attack. A controversial point is the relation / novelty with respect to Xiao et al., ICLR 2018. As e.g. pointed out by R1: "The paper originates from a document provably written in late 2017, which is before the deposit on arXiv of another article (by different authors, early 2018) which was later accepted to ICLR 2018 [Xiao and al.]. This remark is important in that it changes my rating of the paper (being more indulgent with papers proposing new ideas, as otherwise the novelty is rather low compared to [Xiao and al.])." On balance, all three reviewers recommended acceptance of the paper. Regarding novelty over Xiao et al., even ignoring the arguable precedence of the current submission, the formulation is cleaner and will likely advance the analysis of adversarial attacks.
val
[ "Bkg6rXVVAm", "B1xYmy1-Cm", "SJgaSNc92X", "BJeklt6saX", "Byx15QgopQ", "SkgnsKp9am", "rkeZVY6cT7", "r1l_-cA-am", "SJxk8F0bT7", "rye0ucE1am", "Ske8yLon37" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for the reconsideration. We are happy to engage in further discussion, and your question is interesting indeed. Our method focuses on finding an exact solution to Equation (5). This equation has many solutions, and in Equation (7) we choose the one that minimizes the l^2 norm of the vector field. One could replace the vector field update in (7) with a vector field satisfying (5) with minimal T-norm. We have been able to produce smaller vector fields in this way, however, in our experience the resulting vector fields tend to be visually less satisfying. This has to do with the vector directions being determined only by the signs of the entries from Equation (6).\n\nIn this context, we would like to point out that it is not so clear when a deformation is too large, and thus it is difficult to decide on a box-constraint. We decided to reject vector fields of T-norm greater than 3 in order to have a credible, quantifiable measure of the success rate of ADef. In general, smooth deformations of size less than 3 should not change the semantic meaning of an MNIST image, but this condition can be relaxed immensely with higher dimension. Nevertheless, we chose to use the same criterion for ImageNet, simply because the vast majority of deformations fall below this very low threshold of 3. With this in mind, we think the entire distribution of the vector field norms are of interest, and hence we include Appendix A.", "Thank you for your detailed answer to all of my remarks.\nI appreciate in particular the supplementary experiments with more regular deformations (higher sigma), though the results are difficult to interpret.\nI am still a bit disappointed by answer 4 (in that I guess it is pretty straightforward to adapt the algorithm to make it easier to analyze), and by the lack of additional discussions or interpretations of results in general in the main paper.\nI somehow agree with answer 12.\nOverall I will keep the same rating.\n", "The paper introduces an iterative method to generate deformed images for adversarial attack. The core idea is to perturb the correctly classified image by iteratively applying small deformations, which are estimated based on a first-order approximation step, until the image is misclassified. Experimental results on several benchmark datasets (MNIST, ImageNet) and commonly used deep nets (CNN, ResNet, Inception) are reported to show the power of adversarial deformations. \n\nThe idea of gradually adding deformations based on gradient information is somewhat interesting, and novel as far as the reviewer knows about. The method is clearly presented and the results are mostly easy to access. However, the intuition behind the proposal does not make strong sense to the reviewer: since the main focus of this work is on model attack, why not directly (iteratively or not) adding random image deformations to fool the system? Particularly, the first-order approximation strategy (as shown in Eq.4 and Eq.5) is quite confusing. On one side (see Eq.4), the deformation \\tau should be small enough in scale to make an accurate approximation. On the other side (see Eq. 5), \\tau is required to be sufficiently large in order to generate misclassification. Such seemingly conflicting rules for estimating the deformation makes the proposed method less rigorous in math. \nAs another downside, the related adversarial training procedure is not fully addressed. 
The authors briefly discussed this point in the experiment section and provided a few numerical results in Table 2. These results, as acknowledged by the authors, do not well support the effectiveness of deformation adversarial attack and defense. In the meanwhile, the mentioned adversarial training framework follows straightforwardly from PGD (Madry et al. 2018), and thus the novelty of this contribution is also weak. More importantly, it is not clear at all, both in theory and algorithm, whether the advocated gradual deformation attack and defense can be unified inside a joint min-max/max-min learning formulation, as what PGD is rooted from.\n\nPros: \n\n- The way of constructing deformation adversarial is interesting and novel\n- The paper is mostly clearly organized and presented.\n\nCons:\n\n- The motivation of approach is questionable. \n- The related adversarial training problem remains largely unaddressed.\n- Numerical study shows some promise in adversarial attack, but is not supportive to the related defense capability. ", "Thank you for providing very detailed author response and paper revision. I found my main concerns reasonably clarified in this feedback, especially the concern on adversarial training. However, I am still a bit worried about solving Equation (5) in a way of unconstrained least-squares, which clearly does not guarantee to generate small deformation \\tau as required for accurate first-order approximation. It is good to know from the response that in most cases in practice very small vector fields can be found to approximately satisfy Equation (5). My question (for discussion) here is: if this is the case, then why not imposing a box-constraint on \\tau so as to explicitly enforce small deformation when solving (5)? I believe using such a box-constrained quadratic program could make the entire framework more rigorous in reasoning. Nevertheless, given that my major concerns are satisfactorily addressed and in view of the positive opinions from fellow reviewer, I will not be opposed to accepting the paper with further modifications. In the meanwhile, I also would like to hear from the author(s) about the above discussion point. ", "We thank all reviewers for their helpful input. We have now updated our submission accordingly. The revised version includes a short appendix which explores the effect of using a wider Gaussian filter for smoothing the deforming vector fields. Other changes are minor clarifications that are detailed in our responses to the reviewers.", "9. We are fully aware that this abuse of notation may create confusion. However, we reckon that most readers will prefer this formulation than a more correct, but more involved, derivation. As mentioned in the introduction, the interested reader may find all mathematical details in [1], now in Appendix C (to maintain anonymity). In particular, section C.1.2 contains the rigorous derivation, where the derivative and the gradient are kept distinct.\n\n10. Many thanks for your comments on the link between the choice of the metric and the prior implicitly put in the deformations. We feel that this aspect requires a thorough analysis, which would go much beyond the scope of this paper, but would certainly be a very interesting topic for future research.\n\n11. Thank you for the reference to Fawzi and Frossard [2], which should be cited as relevant work in the introduction. We will add the following at the very end of the introduction: “[...] 
performance of existing DNNs, and Fawzi & Frossard (2015), in which the authors introduce a method to measure the invariance of classifiers to geometric transformations.”\n\n12. We do not think it is clear that a model trained against intensity perturbations is expected to perform better against deformations. It is true that a deformed image, y = x^\\tau, can be written as a sum of the original image and a perturbation, y = x + r. However, the perturbation r is large in general (using the l^infinity norm), while adversarial training only takes into account small perturbations.\n\n[1] Author Anonymous. Adversarial perturbations and deformations for convolutional neural networks, 2017.\n[2] A. Fawzi and P. Frossard. Manitest: Are classifiers really invariant? BMVC2015.", "Dear Reviewer 1, thank you for the detailed feedback. Its positive nature is much appreciated. In the following, we hope we can address each of your concern.\n\nWe believe that the idea of moving from perturbations to deformations is substantial. In addition, the derivation of ADef is much less straightforward than the derivation of algorithms for adversarial perturbations because of the nonlinearity of the problem, as is evident from the mathematical details included in Appendix C. To address your specific comments on the discussion in the paper:\n\n1. Your comment “one can see on MNIST the parts of the numbers that the adversarial attack is trying to delete/create” is an interesting observation. We would be happy to point this out in the text. It would most appropriately fit in Appendix B.1, with the images that show this effect. We will add to appendix B.1: “Observe that in some cases, features resembling the target class have appeared in the deformed image. For example, the top part of the 4 in the fifth column of figure 8 has been curved slightly to more resemble a 9.”\n\n2. Showing quantitative results for varying width of the Gaussian filter is certainly of interest. We are preparing a short section on this issue, to be included in the appendices. These changes will be implemented as soon as possible, and the anonymized document updated.\n\n3. One could come up with a variation of ADef to induce high confidence adversarial deformations. However, we consider targeted ADef to be the more important variant of our algorithm. In order not to complicate the presentation, we choose to discuss only the latter.\n\n4. It should be stated explicitly in the paper that we use bilinear interpolation in our implementation. We do not expect the choice of interpolation to be of much consequence for high dimensional images, and including a study of this choice runs the risk of obscuring the main purpose of the paper. We will change the MNIST and ImageNet paragraphs on page 5 as follows: “It performs smoothing by a Gaussian filter of standard deviation 1/2, uses bilinear interpolation to obtain intermediate pixel intensities, and it overshoots [...]” and “It employs a Gaussian filter of standard deviation 1, bilinear interpolation, and an overshoot [...]”.\n\n5. The question of convergence is interesting. However, we believe that the clear mathematical motivation and the effectiveness of ADef against existing models justifies the formulation of the algorithm sufficiently. 
Given the changes in the objective function you have noted, and potential nonlinearity in the update step we do not expect proof of convergence to be straightforward, and such an investigation might unnecessarily complicate the simple message of the paper: there exist small adversarial deformations, and an efficient way to find them.\n\n6. We agree that the overshoot factor (1+\\eta) is not very elegant, and that is why we only invoke it when ADef converges to a decision boundary. In practice, it works well to avoid ambiguous predictions. Likewise, in practice \\tau* = \\sum_i \\tau_i is a helpful proxy to quickly estimate the total vector field, and relies on the (observed) facts that the individual steps are small and that few iterations are needed.\n\n7. Thank you for pointing out the phrasing of the remark at the end of section 2.3. It should be understood as \"provided that \\nabla f is moderate\", and will be fixed in the revision.\n\n8. We do not have timing of the experiments, but it is indeed reasonable, and large scale experiments on high dimensional images are not prohibitive. We would like to point out that our code is available online, albeit not on an anonymous platform.\n\n", "We thank Reviewer 3 for the feedback. We would like to respond to the main points raised.\n\nWith our configuration of smoothing and interpolation, we observed ADef to be very effective. We agree that quantitative experiments with different interpolation schemes would not have improved our contribution. However, experiments with different levels of smoothing may be of interest, and we will add them to the appendix soon (in addition to the example shown in Figure 2).\n\nTo clarify the interpretation of Table 1, we have now included in the caption a reference to Equation 3, which defines the T-norm of vector fields. In the case of Inception, an average deforming vector field displaces all pixels by less than 0.59 pixels. That is only 0.59/299 = 0.2% of the image width.\n\nWe agree that it is interesting that training with PGD provides better protection against ADef, than training with ADef. We find it quite remarkable that PGD training, which only considers small additive perturbations, can (to a degree) resist deformations, which in general correspond to large additive perturbations. However, we remark that it is unclear what the effect of ADef training is in terms of the geometry of the input space, while it is clear from Madry et al. [1] how PGD training has regularizing effects.\n\n[1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu. Towards deep learning models resistant to adversarial attacks. ICLR2018.\n", "Thank you, Reviewer 2. Your critique revolves around two issues, motivation of algorithm and adversarial training, both of which we believe can be addressed by the following clarification.\n\nMotivation of algorithm:\n1. Deforming images by small random vector fields does not in general induce misclassification. This mirrors the fact that adding a random noise mask to an image usually does not change the predicted label (see for example [3]). For an adversarial attack to be effective, the image transformation has to be selected carefully.\n2. Unless the derivative of the classifier in question is very small, Eq. 5 does not require the vector field \\tau to be large. On the contrary, the experiments show that in the overwhelming majority of cases we find very small vector fields that satisfy this condition. 
In the derivation of the algorithm, we do not guarantee the existence of adversarial deformations. We simply say that if there exists a suitably small vector field that satisfies Eq. 5, then (by Eq. 4) the corresponding deformed image will be misclassified. In Eq. 7, we give a formula for a vector field that satisfies Eq. 5, and turns out to be small most of the time. This is a common line of reasoning in the literature of adversarial attacks: moving iteratively in the direction of the classifier’s gradient increases the chances of finding small adversarial transformations. We specifically point to the DeepFool attack for perturbations by Moosavi-Dezfooli et al. [2], which inspired our construction of deformations.\n\nAdversarial training:\n1. When evaluating the effectiveness of a new adversarial attack, one may wonder if models that are specifically designed to resist adversarial examples are also robust to the new attack. To our knowledge, the PGD adversarial training procedure of Madry et al. [1] is the best defense against adversarial examples on MNIST. We show that this defense method is not sufficient to achieve robustness to adversarial deformations. \n2. A natural question is whether this kind of adversarial training can be adapted to resist adversarial deformations. As a first step in that direction, we show that doing this in the most straightforward way, by replacing PGD with ADef in the training loop, does not yield better results. We do not propose this as a novel method for adversarial training. The current work focuses on introducing the new attack method, and it is beyond the scope of the paper to formulate rigorously a defense strategy to combat it.\n\n[1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu. Towards deep learning models resistant to adversarial attacks. ICLR2018. \n[2] S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: a simple and accurate method to fool deep neural networks. CVPR2016. \n[3] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, R. Fergus. Intriguing properties of neural networks. arXiv:1312.6199.\n", "The paper proposes a new way to construct adversarial examples: do not change the intensity of the input image directly, but deform the image plane (i.e. compose the image with Id + tau where tau is a small amplitude vector field).\n\nThe paper originates from a document provably written in late 2017, which is before the deposit on arXiv of another article (by different authors, early 2018) which was later accepted to ICLR 2018 [Xiao and al.]. This remark is important in that it changes my rating of the paper (being more indulgent with papers proposing new ideas, as otherwise the novelty is rather low compared to [Xiao and al.]).\n\nPros:\n- the paper is well written, very easy to read, well explained (and better formalized than [Xiao and al.]);\n- the idea of deforming images is new (if we forget about [Xiao and al.]) and simple;\n- experiments show what such a technique can achieve on MNIST and ImageNet. 
Interestingly, one can see on MNIST the parts of the numbers that the adversarial attack is trying to delete/create.\n\nCons:\n- the paper is a bit weak, in that it is not very dense, and in that there is not much more content than the initial idea;\n- for instance, more discussions about the results obtained could have been appreciated (such as my remark above about MNIST);\n- for instance, a study of the impact of the regularization would have been interesting (how does the sigma of the Gaussian smoothing affect the type of adversarial attacks obtained and their performance -- is it possible to fool the network with [very] smooth deformations?);\n- for instance, what about generating adversarial examples for which the network would be fully (wrongly) confident? (instead of just borderline unsure); etc.\n- The interpolation scheme (how is defined the intensity I(x,y) for a non-integer location (x,y) within the image I) is rather important (linear interpolation, etc.) and should be at least mentioned in the main paper, and at best studied (it might impact the gradient descent path and the results);\n- question: does the algorithm converge? could there be a proof of this? This is not obvious, as the objective potentially changes with time (selection of the current m best indices k of |F_k - F_l|). Also, the final overshoot factor (1+eta) is not very elegant, and not guaranteed to perform well if tau* starts being not small compared to the second derivative (i.e. g''.tau^2 not small) while I guess that for image intensities, spatial derivatives can be very high if no intensity smoothing scheme is used.\n- note: the approximation tau* = sum_i tau_i (section 2.3) does not stand in the case of non-small deformations.\n- still in section 2.3, I do not understand the statement \"given that \\nabla f is moderate\": where does this property come from? or is \"given\" meant to be understood as \"provided...\" (i.e. under the assumption that...)?\n- computational times could have been given (though I guess they are reasonable).\n\nOther remarks:\n- suggestion: I find the \"slight abuse of notation\" (of confusing the derivative with the gradient) a bit annoying and suggest to use a different symbol, such as \\nabla g. This could be useful in particular in the following perspective:\n- Mathematical side note: the \"gradient\" of a functional is not a uniquely-defined object in that it depends on the metric chosen in the tangent space. More clearly: the space of small deformations tau comes with an inner product (here L2, but one could choose another one), and the gradient \\nabla g obtained depends on this inner product choice M, even though the derivative Dg is the same (they are related by Dg(tau) = < \\nabla_M g | tau >_M for any tau). The choice of the metric can then be seen as a prior over desired gradient descent paths. In the paper, the deformation fields get smoothed by a Gaussian filter at some point (eq. 7), in order to be smoother: this can be interpreted as a prior (gradient descent paths should be made of smooth deformations) and as an associated inner product change (there do exist a metric M such that the gradient for that metric is \\nabla_M g = S \\nabla_L2 g). It is possible to favor other kind of deformations (not just smooth ones, but for instance rigid ones, etc. [and by the way this could make the link with \"Manitest: Are classifiers really invariant?\" by Fawzi and Frossard, BMVC 2015, who observe that a rigid motion can affect the classifier output]). 
If interested, you can check \"Generalized Gradients: Priors on Minimization Flows\" by Charpiat et al. for general inner products on deformations (in particular favoring rigid motion), and \"Sobolev active contours\" by Sundaramoorthi et al. for inner products more dedicated to smoothing (such as with the H1 norm).\n- Note: about the remark in section 3.2: deformation-induced transformations are a subset of all possible transformations of the image (which are all representable with intensity changes), so it is expected that a training against attacks on the intensity performs better than a training against attacks on spatial deformations.\n\n", "In this paper, the authors proposed a new attack using deformation. The results are quite realistic to the naked eyes (at least for the example shown). The idea is quite simple, generate small displacement and resample (interpolate) image until the label flips.\n\n- I think this is a good contribution, It is a kind of attack we should consider.\n- One thing which is good to consider is the type of interpolation. I believe the success rate would be different for linear versus say B-spline interpolation. Also, the width of the smoothing applied to the deformation field has an impact. The algorithm is straightforward, there is no reason to experiment with those.\n\n- It is useful to report pixel displacement in Table 1. The reported values are not intuitive, the **average** displacement for Inception-v3 is 0.59. Here is my back of envelope conversion of 0.59 which is probably off:\n\n299 (# pixels of the smaller axis 299 for the Inception) x 1/2 (image are centered) x 0.59 = 88 pixels\n\nThis is huge! I think I am calculating something incorrectly because in Fig3,4 those displacements are not big. \n\n- The results of Table 2 is interesting. Why a networked trained with PGD is more robust to ADef attack that a network trained adversarially with Adef?\n\n\n\n\nMinor:\n- The paper is a bit nationally convoluted for no good reason, the general idea is straightforward. \n" ]
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "BJeklt6saX", "SkgnsKp9am", "iclr_2019_Hk4dFjR5K7", "SJxk8F0bT7", "iclr_2019_Hk4dFjR5K7", "rkeZVY6cT7", "rye0ucE1am", "Ske8yLon37", "SJgaSNc92X", "iclr_2019_Hk4dFjR5K7", "iclr_2019_Hk4dFjR5K7" ]
iclr_2019_Hk4fpoA5Km
Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.
accepted-poster-papers
This work highlights the problem of biased rewards present in common adversarial imitation learning implementations, and proposes adding absorbing states to fix the issue. This is combined with an off-policy training algorithm, yielding significantly improved sample efficiency, whose benefits are convincingly shown empirically. The paper is well written and clearly presents the contributions. Questions were satisfactorily answered during discussion, and resulted in an improved submission, a paper that all reviewers now agree is worth presenting at ICLR.
train
[ "S1gpuXlih7", "S1gnoccmC7", "HkeHJf0GRQ", "SJxux2BMCm", "H1gEMY4eAQ", "BkgkROElAm", "SJeQPruhpX", "B1gGZfAq6Q", "rkeAEjMcT7", "Byebk4m5Tm", "rJgV29zq67", "SygDehUmpQ", "rJePR4gGT7", "rJgH5Yg0h7", "S1gzKHFunm", "BJxqUpTA3Q", "HyebhMXa2X", "ByeHNeho37", "BkxfyEhmhm", "S1gOUczL3m", "H1xlBqzU2X", "BygoS4eHnQ", "B1l_dvkgn7", "Byl5uav0oX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "public", "author", "public", "public", "author", "author", "public", "author", "public" ]
[ "The paper suggests to use TD3 to compute an off-policy update instead of the TRPO/PPO updates in GAIL/AIRL in order to increase sample efficiency.\nThe paper further discusses the problem of implicit step penalties and survival bias caused by absorbing states, when using the upper-bounded/lower-bounded reward functions log(D) and -(1-log(D)) respectively. To tackle these problem, the paper proposes to explicit add a unique absorbing state at the end of each trajectory, such that its rewards can be learned as well.\n\nPro:\nThe paper is well written and clearly presented. \n\nUsing a more sample efficient RL method for the policy update is sensible and turned out effective in the experiments.\n\nProperly handling simulator resets in MDPs is a well known problem in reinforcement learning that I think is insufficiently discussed in the context of IRL.\n\n\nCons:\nThe contributions seem rather small.\na) Replacing the policy update is trivial, since the rl methods are used as black-box modules for the discussed AIL methods. \n\nb) Using importance weighting to reuse old trajectories for the discriminator update hardly counts as a contribution either--especially when the importance weights are simply omitted in practice. I also think that the reported problems due to the high variance have not been sufficiently investigated. There should be a better solution than just pretending that the replay buffer corresponds to roll-outs of the current policy. Would it maybe help to use self-normalized importance weights? The paper does also not analyze how such assumption/approximation affects the theoretical guarantees.\n\nc) The problem with absorbing states is in my opinion the most interesting contribution of the paper. However, the discussion is rather shallow and I do not think that the illustrative example is very convincing. Section 4.1.1. argues that for the given policy roll-out, the discriminator reward puts more reward on the policy trajectory than the expert trajectory. However, it is neither surprising nor problematic that the discriminator reward does not produce the desired behavior during learning. By assigning more cumulative reward for s2_a1->s1 than for s2_a2->g, the policy would (after a few more updates) choose the latter action much less frequently than with probability 0.5 and the corresponding reward would grow towards infinity until at some point Q(s2,a2) > Q(s2,a1)--when the policy would match the expert exactly. The illustrative example also uses more policy-labeled transitions than agent-labeled ones for learning the classifier, which may also be problematic. The paper further argues that a strictly positive reward function always rewards a policy for avoiding absorbing states, which I think is not true in general. A strictly positive reward function can still produce arbitrary large reward for any action that reaches an absorbing state. Hence, the immediate reward for choosing such action can be made larger than the discounted future reward when not ending the episode (for any gamma < 1). Even for state-only reward functions the problem does not persist when reseting the environment after reaching the absorbing state such that the training trajectories contain states that are only reached if the simulator gets reset. Hence, I am not convinced that adding a special absorbing state to the trajectory is necessary if the simulation reset is correctly implemented. This may be different for resets due to time limits that can not be predicted by the last state-action tuple. 
However, issues relating to time limits are not addressed in the paper. I also think that it is strange that the direct way of computing the return for the terminal state is much less stable than recursively computing it and think that the paper should include a convincing explanation.\n\n---------------\nUpdate 21.11.2018\n\nI think my initial assessment was too positive. During the rebuttal, I noticed that the discussion of reward bias was not only shallow but also wrong in some aspects and very misleading, because problems arising from hacky implementations of some RL toolboxes were discussed as theoretical shortcoming of AIL algorithms. Hence, I think the initial submission should be clearly rejected. However, the authors submitted a revised version that presents the root of the observed problem much more accurately. I think that the revised version is substantially better than the original submission. However, I think that my initial rating is still valid (better: became valid), because the main issues that I raised for the initial submission still apply to the current revision, namely:\n- The technical contributions are minor.\n- The theoretical discussion (in particular regarding absorbing states) is quite shallow.\n\nThe merits of the paper are:\n- Good results due to off-policy learning\n- Raising awareness and providing a fix for a common pitfall \n\nI think that the problems arising from incorrectly treated absorbing states needs to be discussed more profoundly. \nSome suggestions: \n\nSection 3.1\n\"As we discuss in detail in Section 4.2 [...]\"\nI think this should refer to section 4.1. Also the discussion should in section 4.1 should be a bit more detailed. How do common implementations implicitly assign zero rewards? Which implementations are affected? Which papers published inferior results due to this bug? I think it is also important to note, that absorbing states are hidden from the algorithm and that the reward function is thus only applied to non-absorbing states.\n\n\"We will demonstrate empirically in Section 4.1 [...]\"\nThe demonstration is currently missing. I think it would be nice to illustrate the problem on a simple example. The original example might actually work, as shown by the code example of the rebuttal, however the explanation was not convincing. Maybe it would be easier to argue with a simpler algorithm (e.g MaxEnt-IRL, potentially projecting the rewards to positive values)?\n\nSection 3.1 seems to focus too much on resets that are caused by time limits. Such resets are inherently different from terminal states such as falling down in locomotion tasks, because they can not be modelled with the given MDP formulation unless time is considered part of the state. Indeed, I think that for infinite horizon MDPs without time-awareness, time limits can not be modelled using absorbing states (I think the RL book misses to mention that time needs to be part of the state such that the policy remains Markovian, which is a bit misleading). Instead those resets are often handled by returning an estimate of the future return (bootstrapping). This treatment of time limits is already part of the TD3 implementation and as far as I understood not the focus of the paper. Instead section 3.1. 
should focus on resets caused by task failure/completion, which can actually be modelled with absorbing states, because the agent will always transition to the absorbing state when a terminal state is reached which is in line with Markovian dynamics.\n\nSection 4.2 should also add a few more details. Did I understand correctly, that when computing the return R_T the sum is indeed finite and stopped after a fixed horizon? If yes, this should be reflected in the equation, and the horizon should be mentioned in the paper. The paper should also better explain how the proposed fix enables the algorithm to learn the reward of the absorbing state. For example, section 4.2. does not even mention that the state s_a was added as part of the solution. \n\n\n-------------\nUpdate 22.11.2018\nBy highlighting the difference between termination due to time-limits and termination due to task completion, and by better describing how the proposed fix addresses the problem of reward bias that is present in common AIL implementations, the newest revision further improves the submission. \nI think that the submission can get accepted and I adapted my rating accordingly.\n\nMinor:\nConclusion should also squeeze in somehow that the reward biases are caused by the implementations.\nTypo in 4.2: \"Thus, when sample[sic] from the replay buffer AIL algorithms will be able to see absorbing states there[sic]\nwere previous hidden, [...]\"\n", "We again would like to emphasize that we appreciate your patience and valuable feedback that helps us to improve our submission.\n\nWe have updated the paper to try to address your suggestions. In particular:\n\n1) As per your suggestion, we extended the last paragraph of Section 3.1 in order to clarify our discussion on episode termination because of time limits. We believe that it adds clarity to the paper because it discusses termination states in more detail. It also explains the difference between absorbing states and rollout breaks. For a detailed discussion on implementation specific biases in algorithms (GAIL/AIRL) please refer section 4.1. \n\n2) In Section 4.1, we enumerate papers affected by this problem, with specific instances. For each paper that we cite in this section, we consider the official implementations provided by the authors. In the same section, we further elaborate on how exactly these algorithms are affected by the issue.\n\n3) In section 4.2, we assume infinite horizon for R_T since the series converges ( assuming reward bounded by r_max, the series is bounded by gamma/(1-gamma) r_max and thus can be computed either analytically or will converge in the limit using TD updates, section 3.1 now also includes a clarification of this point). We also extended Section 4.2 to clarify how absorbing states can be used by the AIL algorithms and how the corresponding transitions affect estimations of returns. Please see the second paragraph of Section 4.2.\n\n4) Regarding revisiting the illustrative example, we agree that the same reasoning might apply to Inverse RL algorithms in general and we appreciate your suggestion regarding the analysis of this simple example for MaxEnt-IRL. We unfortunately will not be able to add such an experiment before the end of the revision period, but we have added some discussion in Section 4 that the basic principle applies also to other IRL methods. This can be considered as an interesting direction of future work. 
We will attempt to add a better illustrative example in the final version (we just have not had time to do so), and will make sure to update the reviewers about it.\n\n5) We fixed the wrongly referenced section. Thanks for catching this. \n\nWe hope that this revision of our submission will address your concerns.\n", "Thanks for the quick revision. The submission is much better now.\nI updated my review to take the revised version into account, however, I did not feel comfortable adapting my rating quite yet (please refer to the review for an explanation).\nI encourage you to further revise the submission.", "Thank you for your detailed and encouraging response. \n\nWe have updated the paper to try to address your suggestions. We hope that this revised version more appropriately positions the contribution and draws a clear distinction between MDP formulation and algorithm, as per your suggestion. In particular:\n\n1. We now make it clear that the correct handling of absorbing states is something that should be applied to any inverse reinforcement learning or reward learning algorithm, whether adversarial or otherwise, and is independent of the DAC algorithm in that sense.\n\n2. We have added the suggested citation and other papers that discuss time limits (Pardo et al: https://arxiv.org/abs/1712.00378, Tucker et al: https://arxiv.org/abs/1802.10031 ) in the related work section.\n\n3. In Section 3, we've added a discussion of time limits in MDPs, as well as a discussion of how temporal difference methods can handle infinite-horizon tasks with finite-horizon rollouts (which is what DAC does also). Please see the last paragraph of the section.\n\n4. As per your suggestion, we have removed the illustrative example in Section 4.1.1. While we do believe that an example would help illustrate the issue to the reader, we understand your reservations against the illustrative example. We would like to attempt to add a better illustrative example in the final version (we just have not had time to do so), but we will be sure to make an additional post about it if we do, to confirm that it is satisfactory.\n\n5. For the sake of clarity, we removed the last paragraph from section 4.2 that discusses our choice of the implementation of bootstrapping for the terminal states.\n\nWe appreciate your patience, and would appreciate it if you took another look at the paper and let us know if this has addressed your concerns.\n", "\"We propose a new algorithm, which we call Discriminator-Actor-Critic (DAC) (see Figure 1), that is\ncompatible with both the popular GAIL and AIRL frameworks, incorporates explicit terminal state\nhandling, an off-policy discriminator and an off-policy actor-critic reinforcement learning algorithm\"\n- Don't say that DAC incorporates terminal state handling. Rather write something like\n\n\"We propose a new algorithm, which we call Discriminator-Actor-Critic (DAC) (see Figure 1), that extends GAIL and AIRL by replacing the policy update by the more sample efficient TD3 algorithm. Furthermore, our implementation of DAC includes a proper handling of terminal states that can be straightforwardly transferred to other inverse reinforcement learning algorithms. 
We show in ablative experiments, that our off-policy inverse reinforcement learning approach requires approximately an order of magnitude fewer policy roll-outs than the state of the art, and that proper handling of terminal states is crucial for matching expert demonstrations in the presence of absorbing states.\"\n\nEnd of Introduction:\n\"• Identify, and propose solutions for the problem of bias in discriminator-based reward estimation in imitation learning.\"\n- As far as I can tell, there is no bias is discriminator-based reward estimation. I don't think that the proposed solution has to do with discriminators at all, but would affect any IRL algorithm and all those IL algorithm that use RL in the loop. Change this point to something like \"Identify early termination of policy roll-outs in commonly used reinforcement learning toolboxes as cause of reward bias in the context of inverse reinforcement learning and propose a solution that allows to correctly match expert demonstrations in the presence of absorbing states.\"\n\nRelated work should discuss prior work related to incorrectly handling absorbing states (in RL or IRL). However, I don't think that there is much published literature about fixing implementation hacks. \nI'm aware of a paper at last ICML [1] (that was previously rejected for ICLR due to lack of novelty), that discussed problems relating to time-limits in infinite horizon formulations which might be worth mentioning. \n\nSection 3 needs to explain how exactly the baselines implementation breaks IRL algorithms for absorbing states. The last paragraph of section 3.1. is not at all sufficient to communicate the root of the problem to the reader. Explain why the break in the roll-out violates the MDP formulation (which is assumed by the discussed algorithm) and that the learned reward function is thus not applied to the MDP. Add a new section (after 3.1 or 3.2) that is at least as detailed as my last comment.\n\nSection 4.1. also needs to be rewritten completely. There is no bias for the different reward formulations. Rather, applying IRL/IL algorithms without sufficient care to rl toolboxes that use hacky implementations can lead to different problems for different reward formulations.\n\nAlso section 4.1.1. still discusses the problem as if there was an inherent bias depending on reward formulation. Furthermore, I already pointed out several problems and errors related to the illustrative example (e.g. analysing an intermediate state of the algorithm, rather than a fixed-point). Maybe you could prove for your code example that AIRL does not converge and show a plot that compares the averages trajectory length for the buggy implementation with my naive fix.\n\nSection 4.2. seems like the main technical contribution. The last paragraph still looks fishy to me and the reported problem of using the analytically derived return seems to result from an assumed infinite horizon formulation. I think that the MDP formulation used for handling absorbing states seems to assume (potentially very large) finite horizons and hence, R_T should at least theoretically depend on the current time step. Given that both equations are analytically equivalent, one equation can not be more stable than the other. 
When, however, the explicit summation is performed until a given horizon is reached, whereas the closed form solution assumes an infinite horizon, the returned values differ and the closed form solution is simply not sound.\n\n[1] Time Limits in Reinforcement Learning, Fabio Pardo, Arash Tavakoli, Vitaly Levdik, Petar Kormushev,\nProceedings of the 35th International Conference on Machine Learning, PMLR 80:4045-4054, 2018. ", "I agree that communicating an idea that is relevant and important for a large subset of the community can justify publishing a research paper--even if the technical contribution is marginal. However the submission communicates an idea that in my opinion is just wrong. Namely, the submission communicates the idea that existing methods for IRL can not handle absorbing states and that learning reward functions that are always positive/negative can lead to an implicit bias. This is plain wrong and not helping the research community at all. Communicating this idea is not important but can be very harmful, especially when it is published at a conference like ICLR. I don't want to review papers next year that propose fixed offsets in order to enable their reward functions to produce both signs (and the like). I know that we need to sell our stuff and I'm fine with calling a TD3 replacement an all new algorithm. But, discussing a fix for a hack in a toolbox as an algorithmic enhancement just can not work out. The initial submission did not even give a hint that the bias is only caused by hacky implementations of the MDP, but pretended that it results from shortcomings of the algorithms. I agree that the the revised version is much better by admitting that it only applies to specific implementations. However, in order to clearly communicate the actual idea it is not sufficient to add one small paragraph, because the original, harmful narrative pervades the whole paper. I propose a number of modification (split over two comments due to character limits) to the paper, that I think are necessary to communicate to the reader how the improved performance was reached. The contribution of the revised version could be just enough to push it over the acceptance threshold.\n\nIntroduction:\n\"[...] 2) bias in the reward function formulation and improper handling of environment terminal states introduces implicit rewards priors that can either improve\nor degrade policy performance.\"\n- This still makes the impression that both AIL methods are biased and can not handle absorbing states correctly.\n \n\"In this work we will also illustrate how the specific form of AIL reward function used has a large\nimpact on agent performance for episodic environments. For instance, as we will show, a strictly\npositive reward function prevents the agent from solving tasks in a minimal number of steps and a\nstrictly negative reward function is not able to emulate a survival bonus. Therefore, one must have\nsome knowledge of the true environment reward and incorporate such priors to choose a suitable\nreward function for successful application of GAIL and AIRL. We will discuss these issues in formal\ndetail, and present a simple - yet effective - solution that drastically improves policy performance\nfor episodic environments; we explicitly handle absorbing state transitions by learning the reward\nassociated with these states\"\n- This paragraph needs to be completely rewritten. The form of the reward function (whether it is strictly positive or negative) does in theory not matter at all. 
It is completely fine to learn a reward function that only produces positive/negative values. Don't make the impression, that IRL researchers should start looking for ways to learn reward functions that can produce both signs. Furthermore, from a theoretical perspective GAIL and AIRL already explicitly learn rewards associated with absorbing states. This paragraph should clearly state that commonly used implementations of the roll-outs are not in line with the MDP-formulation which may be fine for RL but can lead to problems with IRL approaches. You may already want to point to the \"break\"-statement and state that it prevents the learned reward function from being applied to absorbing states. Although it is interesting to show how strictly positive/negative reward functions are affected by such implementations and it is nice to discuss these effects in the paper (maybe not in the introduction) and confirm them in the experiment, don't discuss the sign of the reward as the central problem. Also make sure to state, that you propose a different way of implementing the MDPs that allows early termination while fixing this problem. It is in my opinion crucial to discuss the problem and the solution in the context of implementing policy roll-outs / absorbing states. Make sure to show that this is a relevant problem that affects multiple toolboxes and that algorithms were incorrectly evaluated due to this issue - put in some references to undermine your claim that numerous work treat absorbing states incorrectly.", "Thank you for your detailed response. We generally agree with the technical side of your description: MDPs with absorbing states require the absorbing states to be handled properly for IRL. This is in essence the point of this portion of our paper. We also agree that addressing this is not so much a new algorithm as it is a fix to the MDP. We have edited the paper to reflect this and clarify this point, please see the difference between the last revision and original submission (the abstract, sections 3.1 and 4). The fact that we test our solution by extending two different prior methods (GAIL and AIRL) reflects the generality of the solution.\n\nHowever, we respectfully disagree that this solution is obvious or trivial. Environments with absorbing states in the MuJoCo locomotion benchmark tasks have been used as benchmarks for imitation learning and IRL in one form or another for over two years. In this time, no one has corrected this issue, or even noted that this issue exists, and numerous works incorrectly treat absorbing states, resulting in results that are not an accurate reflection of the performance of these algorithms, as detailed in Section 5.2 and Figures 5,6 and 7 of our paper. This issue is severe, it is making it difficult to evaluate IRL and imitation algorithms, and as far as we can tell, most of the community is unaware of it. We believe that our paper will raise awareness of this issue and facilitate the development and evaluation of better IRL algorithms in the future. With your help, we have clarified this point further in our current paper. 
The purpose of a research paper is to communicate an idea that is relevant and important to a large subset of the community, and we believe that our paper does this.\n", "Let me elaborate on why I think that the failure of the existing methods to match the expert is caused solely by an incorrect implementation of the MDP and are not shortcomings of the actual algorithms.\n\nRoll-outs in an MDPs either have a fixed length (finite horizon) or an infinite length (infinite horizon). Variable length trajectory can be simulated by introducing absorbing states in a finite horizon formulation as mentions in section 3.1. of the submission. The infinite horizon case can be approximated using a large horizon and a time-dependent reward function for the discounting. However, in either case the absorbing states need to be treated in the same way as any other state in the MDP. Importantly, these states do not end the episode prematurely but just prevent the agent from entering any non-absorbing state and yield the same value for each policy. In reinforcement learning, we can typically stop the episode and return the Q-Value (which happens to equal the immediate reward, if the constant rewards of absorbing is assumed to be zero) which allows for more efficient implementations. However, it is important to note that the reward function is then only evaluated on the non-absorbing states and the rewards for absorbing states are implicitly assumed to be zero. Hence, when implementing policy roll-outs with a \"break\" one needs to be aware that the specified reward function does not correspond to the actual reward function of the MDP but affects only a subset of the possible state-action pairs (as those states of the MDP that we call \"absorbing\" will not be affected). This is well known in reinforcement learning, and even exploited by specifying constant offsets in the reward function for survival bonus / time penalty which would be useless if the specified reward function would be the actual reward function of the MDP.\n\nUsing such implementation of an environment which is targeted at reinforcement learning and using it for inverse reinforcement learning \nis incorrect, because IRL algorithms are typically derived for learning the reward function for the whole MDP and not for a subset of the MDP. How can we expect an algorithm to learn the correct constant offset of a reward function (which does affect the optimal policy in the given implementation) using a formulation that implies that an offset does not affect the optimal behaviour? \n\nTo summarize: The failure of GAIL and AIRL of matching expert demonstrations for some RL toolkits with absorbing states is caused by implementation hacks that are fine for RL problems and specific reward functions, but not for IRL. Indeed, the convergence problem of the code example can be solved simply by implementing the MDP in the way it is defined in section 3.1.--using a (discounted) fixed horizon and absorbing states. 
My code can be found at https://colab.research.google.com/drive/11w0McKxg7AA6ueTQNbfTtYAyVrSKgU2z\nThe only changes were\n- adding the missing state (s_g) and transition (sg->sg) to the MDP\n- removing the break in the roll-out\n- using a full expert trajectory as demonstration (including absorbing transitions) \n- solving some numerical issues when both expert and agent have probability 0 for choosing a given action.\nThe algorithm converges reliably to a policy that produces average trajectory lengths of 2.0\n\nOf course, the solution of the paper is a bit more elegant since it avoids to simulate the whole trajectory, but the effect should be the same. It is important to raise awareness of such pitfalls, but I do not think that it is enough to write an ICLR paper about--especially if it is discussed as an algorithmic improvement, when the algorithms are just fine. \n\nAlso in conjunction with the other minor contributions (using all trajectories for training the discriminator--without any theoretical justification, and using a more sample efficient policy update), I don't think that the contributions of the submission are sufficient.", "We thank the reviewer for the positive and constructive feedback.\n\nWe have extended the section 5.1 of the manuscript as suggested by the reviewer.\n\nBelow are detailed answers for the reviewer’s concerns: \n\n1) To simplify the exposition we omitted the entropy penalty as it does not contribute meaningfully to the algorithm performance in our experimentation. Similar findings were observed in the GAIL paper, where the authors disregarded the entropy coefficient for every tested environment, except for the Reacher environment.\n\n2) We added the performance of a random policy to the graph to be consistent with the original GAIL paper. We believe that it improves readability of the plot by providing necessary scaling.\n\n3) We already started working on additional experimentation as requested. We will update the manuscript as soon as we gather these results.\n\n4) We observed the same effect of having absorbing states in the Kuka arm tasks (Fig. 6), as in the MuJoCo environments. Also, we evaluated absorbing states within the AIRL framework for Walker-2D and Hopper environments (Fig. 7). We demonstrate that proper handling of absorbing states is critical for effectively imitating the expert policy. \n\nIn addition, we updated the paper to accommodate the minor suggestions proposed by the reviewer.\n", "We thank the reviewer for the detailed and constructive feedback. We address the above mentioned points and add some additional experiments, as detailed below.\n\nc) “By assigning more cumulative reward for s2_a1->s1 than for s2_a2->g, the policy would (after a few more updates) choose the latter action much less frequently than with probability 0.5 and the corresponding reward would grow towards infinity until at some point Q(s2,a2) > Q(s2,a1)--when the policy would match the expert exactly.”\n“The paper further argues that a strictly positive reward function always rewards a policy for avoiding absorbing states, which I think is not true in general. A strictly positive reward function can still produce arbitrary large reward for any action that reaches an absorbing state.”\n\n> This is a good point, and we will discuss this situation in more detail in the final paper. However, we do not believe that this directly applies to adversarial learning algorithms, such as the ones studied in our paper. 
We provide discussion as well as a numerical example below, which will be included in the paper. \n\nThe aforementioned situation can only happen in the limit, but the next discriminator update will return the policy to the previous state, in which it is more advantageous to take a loop, according to the GAIL reward definition. Therefore, the original formulation of the algorithm does not converge in this case. In contrast, learning rewards for the absorbing states will resolve this issue. \n\nMoreover, the example provided by the reviewer assumes that we can fix the reward function at some point of training and then train the policy to optimality according to this reward function; while devising a scheme to early terminate learning of the reward function is possible, it is not specified by the dynamic reward learning mechanisms of the GAIL algorithm, which alternates updates between the policy and the discriminator. Please see a simple script that illustrates the example (anonymous link):\nhttps://colab.research.google.com/drive/1gV56NLik367nslwK7iJzs8WTe5tD-BO5\n\nThis specific toy example will be included into our open source release.\n\n“Hence, I am not convinced that adding a special absorbing state to the trajectory is necessary if the simulation reset is correctly implemented.”\n\n> Could you please clarify what do you mean by a correct implementation of simulation resets?\n\n“I also think that it is strange that the direct way of computing the return for the terminal state is much less stable than recursively computing it and think that the paper should include a convincing explanation.”\n\n> We think that it is less stable to analytically compute the returns for absorbing states as it introduces a high variance for TD updates of the value network due to the fact that we bootstrapped for all states. The issue is well known and usually solved by using target networks (see https://www.nature.com/articles/nature14236).\n\n“This may be different for resets due to time limits that can not be predicted by the last state-action tuple. However, issues relating to time limits are not addressed in the paper”\n\n> Although some of the benchmark tasks do have an episodic time limit, an off-policy RL algorithm can still calculate a (discounted) target value at the last time step in such environments, which is what our implementation of TD3 actually does. Please see the original implementation of TD3 for more details:\nhttps://github.com/sfujim/TD3/blob/master/main.py#L123\n\na) We note that this does make a substantial difference in terms of sample efficiency over prior work on adversarial IL, as shown in Figure 4 -- we believe that such substantial improvements in efficiency are of interest to the ICLR community, though it is not the sole contribution of our paper.\n\nb) We did use normalized importance weights, but unfortunately did not find that the resulting method performed well, while simply omitted importance weights achieved good performance. We think that the naive way of estimating importance weights increases variance of updates. We will analyze this further in the final version, but for now we would emphasize that this is not the primary contribution of the work, but only a technical detail that we discussed for completeness.\n", "We thank the reviewer for the feedback and appreciate the strong recommendation.\n", "Thank you for your comments.\n\n1. Yes, that’s correct (using TD3 algorithm). 
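To make the absorbing-state handling discussed in this thread concrete, here is a minimal numpy sketch (not the authors' code): it pads states with an extra indicator flag, routes true terminations through an explicit absorbing state with a self-loop and a zero action, and bootstraps the TD target through the discriminator-derived reward log D - log(1 - D). The linear "discriminator" and "critic", the indicator-flag convention, and all function names are illustrative assumptions, not the actual DAC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 3, 2
# Canonical absorbing state: zero features plus an indicator flag set to 1.
ABSORBING_STATE = np.concatenate([np.zeros(STATE_DIM), [1.0]])

def pad(s):
    # Regular states carry an indicator flag of 0; only the absorbing state has 1.
    return np.concatenate([s, [0.0]])

# Toy stand-ins (NOT the authors' networks): a linear discriminator and target critic.
W_d = rng.normal(size=STATE_DIM + 1 + ACTION_DIM)
W_q = rng.normal(size=STATE_DIM + 1 + ACTION_DIM)

def disc(s, a):
    # D(s, a) in (0, 1); a neural network in the actual method.
    return 1.0 / (1.0 + np.exp(-(W_d @ np.concatenate([s, a]))))

def reward(s, a):
    # Reward recovered from the discriminator: r = log D - log(1 - D).
    d = disc(s, a)
    return float(np.log(d) - np.log(1.0 - d))

def q_target(s, a):
    # Target critic Q(s', a'); a delayed-weight neural network in the actual method.
    return float(W_q @ np.concatenate([s, a]))

def store_transition(buffer, s, a, s_next, terminated):
    # On a true termination, route the transition into the absorbing state and also add
    # the absorbing self-loop, so the reward of the absorbing state is learned instead of
    # being implicitly fixed to zero by cutting the rollout short.
    if terminated:
        buffer.append((pad(s), a, ABSORBING_STATE))
        buffer.append((ABSORBING_STATE, np.zeros(ACTION_DIM), ABSORBING_STATE))
    else:
        buffer.append((pad(s), a, pad(s_next)))

def td_target(s, a, s_next, policy_action, gamma=0.99):
    # y = r(s, a) + gamma * Q_target(s', a'); by convention a' is the zero action
    # whenever s' is the absorbing state.
    a_next = np.zeros(ACTION_DIM) if s_next[-1] == 1.0 else policy_action
    return reward(s, a) + gamma * q_target(s_next, a_next)

buffer = []
s, a, s_next = rng.normal(size=STATE_DIM), rng.normal(size=ACTION_DIM), rng.normal(size=STATE_DIM)
store_transition(buffer, s, a, s_next, terminated=True)
for (ss, aa, sn) in buffer:
    print(round(td_target(ss, aa, sn, policy_action=np.zeros(ACTION_DIM)), 3))
```

The point of the sketch is only that the absorbing state's learned reward enters the bootstrapped target instead of being implicitly treated as zero when a rollout is terminated early.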
For the target part it’s s’ and action is produced by the action target network: ||logD(s_a,・)-log(1-D(s_a,・)) + γQ_theta_target(s’, A_target(s’),・) -Q_theta(s, a,・) ||**2.\n2. We used zero actions for the absorbing states.\n3. No, we investigated it only with off-policy case. For the off-policy version of your second question, see Figures 6 and 7. However, the part related to absorbing states is independent of off-policy training.\n", "I enjoyed reading your submission, and I am now trying to add absorbing state to AIRL.\nI have 3 questions. First and second questions are about how to learn Q_theta(s_a,・) and third is about ablation study.\n\nThree questions are below.\n\n1. I think that the target of Q_theta(s_a, ・) is logD(s_a,・)-log(1-D(s_a,・)) + γQ_theta(s_a,・). Is this right?\n\n2. What did you use as action at absorbing states for calculating D(s_a,・) or Q_theta(s_a,・)? You use random value?\n\n3. Did you investigate the effect of only absorbing states on on-policy GAIL or AIRL ? Did GAIL+absorbing states or AIRL + absorbing states work better than GAIL or AIRL?\n\nThank you!!", "The authors find 2 issues with Adversarial Imitation Learning-style algorithms: I) implicit bias in the reward functions and II) despite abilities of coping with little data, high interaction with the environment is required. The authors suggest \"Discriminator-Actor-Critic\" - an off-policy Reinforcement Learning reducing complexity up to 10 and being unbiased, hence very flexible. \n\nSeveral standard tasks, a robotic, and a VR task are used to show-case the effectiveness by a working implementation in TensorFlow Eager.\n\nThe paper is well written, and there is practically no criticism.\n\n", "This paper investigates two issues regarding Adversarial Imitation Learning. They identify a bias in commonly used reward functions and provide a solution to this. Furthermore they suggest to improve sample efficiency by introducing a off-policy algorithm dubbed \"Discriminator-Actor-Critic\". They key point here being that they propose a replay buffer to sample transitions from. \n\nIt is well written and easy to follow. The authors are able to position their work well into the existing literature and pointing the differences out. \n\nPros:\n\t* Well written\n\t* Motivation is clear\n\t* Example on biased reward functions \n\t* Experiments are carefully designed and thorough\nCons:\n\t* The analysis of the results in section 5.1 is a bit short\n\nQuestions:\n\t* You provide a pseudo code of you method in the appendix where you give the loss function. I assume this corresponds to Eq. 2. Did you omit the entropy penalty or did you not use that termin during learning?\n\n\t* What's the point of plotting the reward of a random policy? It seems your using it as a lower bound making it zero. I think it would benefit the plots if you just mention it instead of plotting the line and having an extra legend\n\n\t* In Fig. 4 you show results for DAC, TRPO, and PPO for the HalfCheetah environment in 25M steps. Could you also provide this for the remaining environments?\n\n\t* Is it possible to show results of the effect of absorbing states on the Mujoco environments?\n\nMinor suggestions:\nIn Eq. (1) it is not clear what is meant by pi_E. From context we can assume that E stands for expert policy. Maybe add that. Figures 1 and 2 are not referenced in the text and their respective caption is very short. Please reference them accordingly and maybe add a bit of information. 
In section 4.1.1 you reference figure 4.1 but i think your talking about figure 3.", "It doesn't seem that the reviewer has put any efforts in appreciating or criticising the paper and has merely summarised the paper in a few lines.\nPlease provide proper analysis for your acceptance decision and rating\n\n", "Thank you for your comments!\n\nSince our algorithm uses TD3 (https://arxiv.org/pdf/1802.09477.pdf), we highly recommend to use the original implementation of the algorithm (https://github.com/sfujim/TD3). Our reimplementation of TD3 reproduces the results reported in the original paper. Reproducing results with SAC might be harder since SAC requires tuning a temperature hyperparameter that might require additional efforts in combination with reward learning.\n\n1) We used the batch size equal to 100. We kept all transitions in the replay buffer.\n2) That’s correct. For HalfCheetah, after performing 1K updates of the discriminator we performed 1K updates of TD3. During early stage of development we tried the aforementioned suggestion of simultaneously updating the discriminator and policy, and it produced worse results.\n3) Yes, we will include it in the appendix.\n4) We used gradient penalty described in https://arxiv.org/abs/1704.00028 and implemented in TensorFlow https://www.tensorflow.org/api_docs/python/tf/contrib/gan/losses/wargs/wasserstein_gradient_penalty with a coefficient equal to 10.\n\nAn additional note regarding reproducing results. Please take into account, that depending on when you subsample trajectories to match the original GAIL setup, you need to use importance weights. Specifically, if you first subsample expert trajectories taking every Nth transition, and then add absorbing states to the subsampled trajectories, you will need to use importance weight 1/N for the expert absorbing states while training the discriminator. We will explicitly mention this detail in next version of the submission.\n\nWe would like to emphasize that upon publishing the paper we are going to open source our implementation.\n\nFeel free to request any additional information. We will be glad to provide everything to help you to reproduce our results.", "I enjoyed reading your submission and was trying to reproduce some of your results. I am using Soft Actor-Critic with somewhat different model sizes than yours and had some practical questions that could help me be more effective. I was wondering:\n\n- What is the batch size you use for the updates? How large is your replay buffer size (do you store all previous trajectories)?\n- In your algorithm box, in the update section, it says \"for i = 1, ..., |\\tau|\". Does this mean that for example for halfcheetah environment you do 1000 updates every time you generate a trajectory?\n- Would you be able to also include the numeric reward value your experts achieve on the tasks?\n- Could you elaborate on the specific form of gradient penalty you use and the coefficient of the gradient penalty term?\n\nAnd, one separate question: Have you also tried simultaneously updating the discriminator and policy instead of the alternating scheme shown in the algorithm box?\n\nThank you!\n", "There is another paper which has also combined off-policy training with imitation learning. \nThe only significant contribution of this paper then seems to be unbiased rewards. \nI think the authors should provide more rigorous analysis of what exact effects the absorbing state introduces.\nhttps://arxiv.org/pdf/1809.02064.pdf\n\n", "Thank you for sharing the link. 
The arxiv paper linked is concurrent work. As such our off-policy algorithm was novel at time of release and remains a primary contribution of this work. We will add this paper to the related work section as a concurrent work in the next update.\n\nThe requested ablation study is already presented in Fig. 6 and Fig.7 where we compare adversarial imitation learning approaches with and without the absorbing states. Due to the bias present in the original reward, the baseline without absorbing state information fails to learn a good policy. We derive why this happens in Section 4.1. \n\nAlso, we would like to emphasize that our paper is not limited to off-policy training but also addresses other issues of adversarial imitation learning algorithms. We first identify the problem of biased rewards, which we then experimentally validate across GAIL and AIRL (note that the other paper is centered around GAIL, and not adversarial imitation learning in general). Following that we introduce absorbing states as a fix for this issue, while empirically validating that our proposed solution solves tasks which are unsolvable by AIRL.", "Thank you again for your comments.\n\n1. We did not have sufficient time to collate these results before the deadline, but we will add them to the appendix for a future revision.\n\n2. In Fig. 7, we run the absorbing state versus non-absorbing state experiments on the more standard Hopper and Walker2D environments. We understand those experiments are with AIRL algorithm and it will be more comprehensive if we ran the same experiment with GAIL algorithm and environments from Fig. 4. However, we were constrained by the page limits and chose to show how our fix to the reward bias not only works across different adversarial algorithms (GAIL in Fig. 6 and AIRL in Fig. 7) but also works on demonstrations collected from humans on a Kuka arm. We will add the figures for the experiments you mentioned in the comment to the next version of the paper.", "I appreciate the authors' response to the comments, and it did address some of my concerns. However, I still have some questions:\n\n1. Could the authors provide the comparisons among DAC, GAIL w/ PPO, and GAIL w/ TRPO for 25M steps for all the 5 tasks (in Fig.4)?\n\n2. Why the authors only evaluate such no absorbing experiments on KUKA tasks? Could the authors provide the results of this baseline on the 5 tasks used in Fig.4? ", "Thank you for your comments.\n\nAt the moment, we plot results only for 1 million steps. In the original implementation of GAIL, the authors use 25M steps to report the results. With 25M steps we are able to replicate results reported in the original GAIL paper. We do have one example of how the methods compare to each other when trained for 25M steps in our submission. This can be seen in the top left sub-plot in Figure 4. We will add the plots with 25M steps in the next update of the paper.\n\nWe perform ablation experiments and visualize the results in Figure 6. The ‘no absorbing’ baseline corresponds to off-policy GAIL while the red line corresponds to DAC. Thanks for pointing this out. We will add a clarification in the text to make the comparison clearer.\n", "I found there is a significant gap between the performances of GAIL reported by the authors and stated in the original GAIL paper(https://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning-supplemental.zip). 
Since the authors emphasized that they use the original implementation (https://www.github.com/openai/imitation), these empirical results could be doubtful. Can the authors comment on that?\n\nAnother comment is about the sufficiency of the experiments. Since DAC is a combination of an improved adversarial reward-learning mechanism and off-policy training, ablation studies are needed to clarify which part actually accounts for the improvement in performance or training efficiency. Moreover, I think GAIL with off-policy training should also be a baseline to further validate whether the unbiased reward learning introduced by the authors could eliminate the sub-optimality." ]
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_Hk4fpoA5Km", "S1gpuXlih7", "SJxux2BMCm", "H1gEMY4eAQ", "BkgkROElAm", "SJeQPruhpX", "B1gGZfAq6Q", "Byebk4m5Tm", "S1gzKHFunm", "S1gpuXlih7", "rJgH5Yg0h7", "rJePR4gGT7", "iclr_2019_Hk4fpoA5Km", "iclr_2019_Hk4fpoA5Km", "iclr_2019_Hk4fpoA5Km", "rJgH5Yg0h7", "ByeHNeho37", "iclr_2019_Hk4fpoA5Km", "iclr_2019_Hk4fpoA5Km", "BkxfyEhmhm", "BygoS4eHnQ", "B1l_dvkgn7", "Byl5uav0oX", "iclr_2019_Hk4fpoA5Km" ]
iclr_2019_HkG3e205K7
Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives
Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by (Kingma & Welling 2013, Rezende et al. 2014). These approaches maximize a variational lower bound on the intractable log likelihood of the observed data. Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases. Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases (Rainforth et al. 2018, Le et al. 2018). Roeder et al. (2017) propose an improved gradient estimator; however, they are unable to show that it is unbiased. We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick. The doubly reparameterized gradient (DReG) estimator does not suffer as the number of samples increases, resolving the previously raised issues. The same idea can be used to improve many recently introduced training techniques for latent variable models. In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep update (RWS) (Bornschein & Bengio 2014), and the jackknife variational inference (JVI) gradient (Nowozin 2018). Finally, we show that this computationally efficient, drop-in estimator translates to improved performance for all three objectives on several modeling tasks.
accepted-poster-papers
The paper is well written and easy to follow. The experiments are adequate to justify the usefulness of an identity for improving existing multi-Monte-Carlo-sample based gradient estimators for deep generative models. The originality and significance are acceptable, as discussed below. The proposed doubly reparameterized gradient estimators are built on an important identity shown in Equation (5). This identity appears straightforward to derive by applying both score-function gradient and reparameterization gradient to the same objective function, which is expressed as an expectation. The AC suspects that this identity might have already appeared in previous publications / implementations, though not being claimed as an important contribution / being explicitly discussed. While that identity may not be claimed as the original contribution of the paper if that suspicion is true, the paper makes another useful contribution in applying that identity to the right problem: improving three distinct training algorithms for deep generative models. The doubly reparameterized versions of IWAE and reweighted wake-sleep (RWS) further show how IWAE and RWS are related to each other and how they can be combined for potentially further improved performance. The AC believes that the paper makes enough contributions by well presenting the identity in (5) and applying it to the right problems.
train
[ "rJlMEiAnam", "rygSDcRhaQ", "SJluWc0npX", "HyelqKC2pX", "Hkx5PKC3hX", "Hyle7iIF27", "S1gF1q36j7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have updated the manuscript based on reviewer feedback. Apart from clarifying edits, we have rewritten the derivation in Appendix 8.1 and included a plot of variance for several values of K as Appendix Figure 8.", "Recent work on reparameterizing mixture distributions has shown that the necessary gradients can be computed with the implicit reparameterization trick (Graves 2016, Jankowiak & Obermeyer 2018; Jankowiak & Karaletsos 2018; Figurnov et al. 2018). Using this approach to reparameterize the mixture, DReGs readily apply when q is a Gaussian mixture model. We mention this explicitly in the text now.\n\nEq. 6 explicitly characterizes the bias in STL. There is no reason to believe this term analytically vanishes, and we confirm numerically that it is non-zero in the toy Gaussian example. We believe this is sufficient to support our claim of bias.\n\nWe present the K ELBO results in these plots to be consistent with previous work (Rainforth et al. 2018). We agree that it can be misleading for the reasons you indicated, so we now explicitly call this out in the maintext.\n\nYes, the color assignment is the same. We note this in the caption for both figures now.", "Thank you for the helpful suggestions.\n\n1. Thank you for pointing out this source of confusion. The correctness of the proof is related to the fact that \\frac{\\partial}{\\partial \\phi} g(\\phi, \\tilde{\\phi}) |_{\\tilde{\\phi} = \\phi} != \\frac{\\partial}{\\partial \\phi} g(\\phi, \\phi). On the left hand side the derivative is taken first, which results in a function of \\phi and \\tilde{\\phi}, which we then evaluate. As you note, this is not equivalent to setting \\tilde{\\phi} = \\phi, and then taking the derivative. We want the former. Following your suggestion, we have completely rewritten the proof to avoid this confusing step.\n\n2. We used the trace of the Covariance matrix (normalized by the number of parameters) to summarize the variance, and we implemented this by maintaining exponential moving average statistics. SNR was computed as the mean of the estimator divided by the standard deviation (as in Rainforth et al. 2018). We added this information as footnotes in the maintext.\n\n3. We have added a plot of the variance of the gradient estimator as K changes (Appendix Fig. 8). We found that as K increases, for IWAE and JVI, the variance of the doubly reparameterized gradient estimator slowly decreases relative to the variance of the original gradient estimator. On the other hand for RWS, we found that as K increases, the variance of the doubly reparameterized gradient estimator gradually increases relative to the variance of the original gradient estimator. However, we emphasize that in all cases, the variance of the doubly reparameterized gradient estimator was less than the variance of the original gradient estimator.\n\n4. Yes, intuitively, the right hand side directly takes advantage of the gradient of f whereas the left hand side ends up computing something akin to finite differences. We have added a sentence explaining this intuition in the maintext.\n", "Thank you for checking the derivations. We appreciate the positive comments.", "This paper applies a reparameterization trick to estimate the gradients objectives encountered in variational autoencoder based frameworks with continuous latent variables. Especially the authors use this double reparameterization trick on Importance Weighted Auto-Encoder (IWAE) and Reweighted Wake-Sleep (RWS) methods. 
Compared to IWAE, the developed method's SNR does not go to zero with increasing the number of particles.\n\nOverall, I think the idea is nice and the results are encouraging. I checked all the derivations, and they seem to be correct. Thus I recommend this paper to be accepted in its current form.", "The paper observes the gradient of multiple objective such as IWAE, RWS, JVI are in the form of some “reward” multiplied with score function which can be calculated with one more reparameterization step to reduce the variance. The whole paper is written in a clean way and the method is effective.\n\nI have following comments/questions:\n\n1. The conclusion in Eq(5) is correct but the derivation in Sec. 8.1. may be arguable. Writing \\phi and \\tilde{\\phi} at the first place sets the partial derivative of \\tilde{\\phi} to \\phi as 0. But the choice of \\tilde{\\phi} in the end is chosen as \\phi. If plugging \\phi to \\tilde{\\phi}, the derivation will change. The better way may be calculating both the reparameterization and reinforce gradient without redefining a \\tilde{\\phi}.\n\n2. How does the variance of gradient calculated where the gradient is a vector? And how does the SNR defined in the experiments?\n\n3. How does the variance reduction from DReG changes with different value of K?\n\n4. Is there any more detailed analysis or intuition why the right hand side of Eq(5) has lower variance than the left hand side?", "Overall:\nThis paper works on improving the gradient estimator of the ELBO. Author experimentally found that the estimator of the existing work(STL) is biased and proposed to reduce the bias by using the technique like REINFORCE.\nThe problem author focused on is unique and the solution is simple, experiments show that proposed method seems promising.\n\nClarity:\nThe paper is clearly written in the sense that the motivation of research is clear, the derivation of the proposed method is easy to understand.\n\nSignificance:\nI think this kind of research makes the variational inference more useful, so this work is significant. But I cannot tell the proposed method is really useful, so I gave this score.\nThe reason I doubt the reason is that as I written in the below, the original STL can handle the mixture of Gaussians as the latent variable but the proposed method cannot. So I do not know which is better and whether I should use this method or use the original STL with flexible posterior distribution to tighten the evidence lower bound. I think additional experiments are needed. I know that motivation is a bit different for STL and proposed method but some comparisons are needed.\n\nQuestion and minor comments:\nIn the original paper of STL, the author pointed out that by freezing the gradient of variational parameters to drop the score function term, we can utilize the flexible variational families like the mixture of Gaussians.\nIn this work, since we do not freeze the variational parameters, we cannot utilize the mixture of Gaussians as in the STL. IWAE improves the lower bound by increasing the samples, but we can also improve the bound by specifying the flexible posteriors like the mixture of Gaussians in STL.\nFaced on this, I wonder which strategy is better to tighten the lower bound, should we use the STL with the mixture of Gaussians or use the proposed method? 
\nTo clarify the usefulness of this method, I think additional experimental comparisons are needed.\n\nAbout the motivation of the paper, I think it might be better to move Fig. 1 about the bias to the introduction and clearly state that the authors found that STL is biased \"experimentally\".\n\nThe following are minor comments.\nIn experiment 6.1, I am not sure why the authors present the result of the K ELBO estimator in the plot of bias and variance.\nI think the authors want to point out that when K=1, STL is unbiased with respect to the 1 ELBO, but when K>1, it is biased with respect to the IWAE estimator.\nHowever, since the objectives of the K ELBO and IWAE are different, this may be misleading, so it should be noted in the paper.\n\nIn Figure 3, in the left figure, what does each color mean? Is the color assignment the same as in the middle figure?\n(Same for Figure 4)" ]
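As a side note on the identity discussed in the reviews above (the score-function term rewritten via a second application of the reparameterization trick), the following toy numpy check illustrates why the reparameterized form tends to have lower variance: for a 1-D Gaussian q = N(mu, sigma^2) and f(z) = z^2, both Monte Carlo forms estimate d/dmu E_q[f(z)] = 2*mu, but their variances differ by an order of magnitude. This is only an illustrative sketch under these toy assumptions, not the paper's estimator or experiments.

```python
import numpy as np

# q(z) = N(mu, sigma^2), reparameterized as z = mu + sigma * eps, and f(z) = z^2,
# so the exact gradient is d/dmu E_q[f(z)] = d/dmu (mu^2 + sigma^2) = 2 * mu.
mu, sigma, n = 1.5, 1.0, 200_000
rng = np.random.default_rng(0)
eps = rng.normal(size=n)
z = mu + sigma * eps

# Score-function form of the same gradient: f(z) * d/dmu log q(z) = f(z) * (z - mu) / sigma^2.
score_form = (z ** 2) * (z - mu) / sigma ** 2

# Reparameterized form: f'(z) * dz/dmu = 2 * z * 1.
reparam_form = 2.0 * z

print("exact gradient       :", 2.0 * mu)
print("score-function form  : mean %.3f, variance %.1f" % (score_form.mean(), score_form.var()))
print("reparameterized form : mean %.3f, variance %.1f" % (reparam_form.mean(), reparam_form.var()))
```

Both estimators are unbiased for this simple integrand; the variance gap is what motivates replacing the score-function term wherever it appears.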
[ -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, 3, 5, 4 ]
[ "iclr_2019_HkG3e205K7", "S1gF1q36j7", "Hyle7iIF27", "Hkx5PKC3hX", "iclr_2019_HkG3e205K7", "iclr_2019_HkG3e205K7", "iclr_2019_HkG3e205K7" ]
iclr_2019_HkNGYjR9FX
Learning Recurrent Binary/Ternary Weights
Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processing with limited hardware resources. To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs. As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption. On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) and gated recurrent units (GRUs) on various sequential tasks including sequence classification and language modeling. We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights at runtime. On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights. Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12x memory saving and 10x inference speedup compared to the full-precision hardware implementation.
accepted-poster-papers
This work proposes a simple but useful way to train RNNs with binary/ternary weights to improve memory and power efficiency. The paper presents a sequence of experiments on various benchmarks and demonstrates a significant reduction in memory size with only a minor decrease in accuracy. The authors' rebuttal addressed the reviewers' concerns nicely.
train
[ "Syxc_KqB1V", "BkelhUWx1E", "SJxm71x3CX", "r1eCUUH0oQ", "Byx7WRFtAX", "rkgP1CFYR7", "BylZ06Ft0X", "HyxUw6KKC7", "rye5HTKF0m", "BkeyGhFFRQ", "SyxrxntK0X", "rkgP3jYYAX", "HJlRfyx_aQ", "HyxKWlpFh7" ]
[ "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We sincerely thank the reader for careful reading of our manuscript and code. Below we respond to each comment of yours in detail. \n---------------------------------------------------- \nComment: 1. what is the optimizer used for word-level language modeling on PTB data set? The submitted paper does not mention what kind of optimizer is used. After checking the code, I found that it is vanilla SGD, but scaled by a \"norm_factor\", which is the squared 2 norm of the gradient. Could the authors clarify it in the paper why this particular scaling parameter is chosen? Moreover, if the particular scaling is crucial to the performance, for fair comparison, the baseline methods compared should also use this kind of optimization. \n\nOur response: Thank you for the comment. Due to the limited space, we had to defer the details of training settings to Appendix C in the paper. In Appendix C.2, we have mentioned that we only use vanilla SGD while clipping the gradient norm at 0.25. We also used this setting not only for our models but also for the baseline models reported in Table 3 for a fair comparison. Moreover, the Alternating LSTM (Xu et al. (2018)) was trained with the same setting (please see Section 5 of the Alternating paper). Since all the models reported in Table 3 were trained with the same setting, we believe that we have constructed a fair comparison. It is also worth mentioning that while we used SGD with the norm clipping method for this task, our method is not limited to only this setting. In fact, we have trained our models using different optimizers such as Adam and achieved comparable perplexity values. For instance, our medium LSTM with ternary weights trained with the Adam optimizer yields a perplexity value of 90. However, we agree that vanilla SGD with the norm clipping method works better, which explains why other works in literature (such as Xu et al. (2018) and Zaremba et al. (2014)) use this setting. \n---------------------------------------------------- \nComment: 2. What is the sequence length used for character-level language modeling on text8 dataset? The paper says it is 180, but the released code shows it is 200. Which one is correct? and does this cause a significant difference in the final performance? What sequence length is used for the compared baseline methods? \n\nOur response: Thank you for the comment. All the results reported in Table 2 were obtained using the sequence length of 180. However, since we have been using the code for our other works, we simply forgot to restore the original settings that we used for this paper. Based on your comment, we have updated the code with the original settings that we used to obtain the results reported in Table 2. \n---------------------------------------------------- \nComment: 3. This paper used sequence length 35 for the word-level language modeling task on Penn Treebank. However, the Alternating LSTM (Xu et al. (2018)) uses 30. It is known that usually the larger sequence length, the better performance. For this aspect, the comparison may not be fair. \n\nOur response: Thank you for the comment. For this task, we adopted the model introduced by Zaremba et al. (2014) (please see https://arxiv.org/pdf/1409.2329.pdf) as our baseline which uses the sequence length of 35. Using the sequence length of 35 is a common choice for this task. For example, all the models reported in Table 4 in https://arxiv.org/pdf/1707.05589.pdf use the same sequence length. 
We also believe that learning longer-term dependencies is more desirable and challenging when using LSTMs. As a result, we followed the same trend. \n\nIn Table 3, we showed that our ternary models match their performance with their baseline while there is a large gap between the 2-bit Alternating LSTM model and its baseline (please see Table 1 in (Xu et al. (2018))). Regardless of the sequence length, our model can match its performance with the baseline while the Alternating model fails to do so when using 2 bits for the representation of weights. Additionally, based on your comment, we have also trained our small LSTM models with the sequence length of 30, and obtained perplexity values of 92.4 and 90.7 for the small binary and ternary LSTM models, respectively. In fact, the obtained results for the sequence length of 30 (i.e., 92.4 for the binary model and 90.7 for the ternary model) are very similar to the results obtained for the sequence length of 35 (i.e., 92.2 for the binary model and 90.7 for the ternary model). ", "Thank the authors for the simple and effective methodology for binary and ternary quantization in LSTMs. However, the experiment settings are not very explicitly stated in the paper. I tried out the released code online and have some questions about the experiment settings, which could be important for fair evaluation for the efficacy of the proposed method. Could the authors clarify a bit about inconsistency or the implicit part in the experiment settings?\n\n1. what is the optimizer used for word-level language modeling on PTB data set?\nThe submitted paper does not mention what kind of optimizer is used. After checking the code, I found that it is vanilla SGD, but scaled by a \"norm_factor\", which is the squared 2 norm of the gradient. Could the authors clarify it in the paper why this particular scaling parameter is chosen? Moreover, if the particular scaling is crucial to the performance, for fair comparison, the baseline methods compared should also use this kind of optimization.\n\n2. What is the sequence length used for character-level language modeling on text8 dataset? The paper says it is 180, but the released code shows it is 200. Which one is correct? and does this cause a significant difference in the final performance? What sequence length is used for the compared baseline methods?\n\n3. This paper used sequence length 35 for the word-level language modeling task on Penn Treebank. However, the Alternating LSTM (Xu et al. (2018)) uses 30. It is known that usually the larger sequence length, the better performance. For this aspect, the comparison may not be fair.", "I think the authors for the in-depth response and revision. I increased my score from 5 to 7.", "* Summary\nThis paper proposes batch normalization for learning RNNs with binary or ternary weights instead of full-precision weights. 
Experiments are carried out on character-level and word-level language modeling, as well as sequential MNIST and question answering.\n\n\n* Strengths\n- I liked the variety of tasks used evaluations (sequential MNIST, language modeling, question answering).\n- Encouraging results on specialized hardware implementation.\n\n\n* Weaknesses\n- Using batch normalization on existing binarization/ternarization techniques is a bit of an incremental contribution.\n- All test perplexities for word-level language models in table 3 underperform compared to current vanilla LSTMs for that task (see Table 4 in https://arxiv.org/pdf/1707.05589.pdf), suggesting that the baseline LSTM used in this paper is not strong enough.\n- Results on question answering are not convincing -- BinaryConnect has the same size while achieving substantially higher accuracy (94.66% vs 40.78%). This is nowhere discussed and the paper's major claims \"binaryconnect method fails\" and \"our method [...] outperforms all the existing quantization methods\" seem unfounded (Section 5.5).\n- In the introduction, I am lacking a distinction between improvements w.r.t. training vs inference time. As far as I understand, quantization methods only help at reducing memory footprint or computation time during inference/test but not during training. This should be clarified.\n- In the introduction on page 2 is argued that the proposed method \"eliminates the need for multiplications\" -- I do not see how this is possible. Maybe what you meant is that it eliminates the need for full-precision multiplications by replacing them with multiplications with binary/ternary matrices? \n- The notation is quite confusing. For starters, in Section 2 you mention \"a fixed scaling factor A\" and I would encourage you to indicate scalars by lower-case letters, vectors by boldface lower-case letters and matrices by boldface upper-case letters. Moreover, it is unclear when calculations are approximate. For instance, in Eq. 1 I believe you need to replace \"=\" with \"\\approx\". Likewise for the equation in the next to last line on page 2. Lastly, while Eq. 2 seems to be a common way to write down LSTM equations, it is abusive notation.\n\n\n* Minor Comments\n- Abstract: What is ASIC? It is not referenced in Section 6.\n- Introduction: What is the justification for calling RNNs over-parameterized? This seems to depend on the task. \n- Introduction; contributions: Here, I would like to see a distinction between gains during training vs test time.\n- Section 3.2 comes out of nowhere. You might want to already mention why are introducing batch normalization at this point.\n- The boldfacing in Table 1, 2 and 3 is misleading. I understand this is done to highlight the proposed method, but I think commonly boldfacing is used to highlight the best results.\n- Figure 2b. What is your hypothesis why BPC actually goes down the longer the sequence is?\n- Algorithm 1, line 14: Using the cross-entropy is a specific choice dependent on the task. My understanding is your approach can work with any differentiable downstream loss?", "We sincerely thank the reviewer for careful reading of our manuscript and many insightful comments and suggestions towards improving our paper. Below we respond to each comment of yours in detail. 
\n\n--------------------------------------------------------- \n\nMajor Comments (Weaknesses): \n\n--------------------------------------------------------- \n\nReviewer comment: Using batch normalization on existing binarization/ternarization techniques is a bit of an incremental contribution. \n\nOur response: We agree with the reviewer that the idea may sound simple, but we believe it can be counted as a strength of our paper due to its effectiveness for the binarization/ternarization process. While most of the existing binarization/ternarization methods are specialized for a specific temporal task, we have shown that our method can perform equally good over different temporal tasks and outperform the existing quantization methods in terms of prediction accuracy. We believe our method has paved the way for hardware designers to exploit LSTMs with binarized/ternarized weights which require less implementation cost for embedded systems. \n\n--------------------------------------------------------- \n\nReviewer comment: All test perplexities for word-level language models in table 3 underperform compared to current vanilla LSTMs for that task (see Table 4 in https://arxiv.org/pdf/1707.05589.pdf), suggesting that the baseline LSTM used in this paper is not strong enough. \n\nOur response: Based on the reviewer’s comment, we adopted the large LSTM model (Zaremba et al. (2014)) in Table 4 of the mentioned paper as a baseline and performed our binarization/ternarization method for this baseline. The simulation results are reported in Table 3 of the revised manuscript and show that our binarized and ternarized models outperform the baseline model in terms of perplexity. More precisely, our binarized and ternarized models respectively achieve perplexity values of 76.5 and 75.3 while the perplexity value of the baseline model is 78.5. \n\n--------------------------------------------------------- \n\nReviewer comment: Results on question answering are not convincing -- BinaryConnect has the same size while achieving substantially higher accuracy (94.66% vs 40.78%). This is nowhere discussed and the paper's major claims \"binaryconnect method fails\" and \"our method [...] outperforms all the existing quantization methods\" seem unfounded (Section 5.5). \n\nOur response: Thank you for the comment. In the submitted manuscript, we had reported the test error rate as opposed to accuracy of different methods for the question answering task in Table 5. However, we agree with the reviewer that since Table 4 reported accuracy and Table 5 reported error rate, there was an inconsistency between Table 4 and Table 5 that could cause confusion. Based on the reviewer’s comment, we have now summarized the results of Table 5 in terms of accuracy rate in the revised manuscript to make them consistent with the results of the sequential image classification task in Table 4. According to Table 5 of the revised manuscript, BinaryConnect completely fails to learn the question answering task while our method yields a similar accuracy rate to its full-precision baseline. We showed that our method can binarize/ternarize weights of a more realistic RNN application. \n\n---------------------------------------------------------", "Reviewer comment: In the introduction, I am lacking a distinction between improvements w.r.t. training vs inference time. As far as I understand, quantization methods only help at reducing memory footprint or computation time during inference/test but not during training. This should be clarified. 
\n\nOur response: Based on your comment, we have clearly mentioned that our method only helps at reducing memory footprint and computation time during inference in the introduction section of the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: In the introduction on page 2 is argued that the proposed method \"eliminates the need for multiplications\" -- I do not see how this is possible. Maybe what you meant is that it eliminates the need for full-precision multiplications by replacing them with multiplications with binary/ternary matrices? \n\nOur response: Thank you for raising this. Yes, we meant that it eliminates the need for full-precision multiplication by replacing them with multiplications with binary/ternary matrices. It is worth mentioning that a multiplication with one binary or ternary multiplicand is implemented with a very simple hardware circuit, much smaller and more power efficient than a full-precision or multi-bit multiplier. We have revised the statement in the manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: The notation is quite confusing. For starters, in Section 2 you mention \"a fixed scaling factor A\" and I would encourage you to indicate scalars by lower-case letters, vectors by boldface lower-case letters and matrices by boldface upper-case letters. Moreover, it is unclear when calculations are approximate. For instance, in Eq. 1 I believe you need to replace \"=\" with \"\\approx\". Likewise for the equation in the next to last line on page 2. Lastly, while Eq. 2 seems to be a common way to write down LSTM equations, it is abusive notation. \n\nOur response: Based on your comment, all the notations have been addressed in the revised manuscript.", "--------------------------------------------------------- \n\nMinor Comments: \n\n--------------------------------------------------------- \n\nReviewer comment: Abstract: What is ASIC? It is not referenced in Section 6. \n\nOur response: Application-Specific Integrated Circuit (ASIC) is a common term used as an integrated circuit customized for a particular use. We have now defined this abbreviation in Section 6 of the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: Introduction: What is the justification for calling RNNs over-parameterized? This seems to depend on the task. \n\nOur response: We agree with the reviewer that calling all RNNs over-parameterized is not precise, as it highly depends on the task and dimensions of inputs/outputs/state vectors of RNNs. Based on your comment, we have changed the sentence to “..., RNNs are typically over-parameterized ...” in the introduction section of the revised manuscript. It is worth mentioning that it has been shown in literature that most networks’ parameters can be pruned or quantized without any performance degradation, suggesting that neural networks are typically over-parameterized. \n\n--------------------------------------------------------- \n\nReviewer comment: Introduction; contributions: Here, I would like to see a distinction between gains during training vs test time. \n\nOur response: Based on your comment, we have clearly mentioned in the revised manuscript (see the first bullet point of Section 1 and the last sentence of Section 4) that using binarized/ternarized weights is only beneficial for inference. 
\n\n--------------------------------------------------------- \n\nReviewer comment: Section 3.2 comes out of nowhere. You might want to already mention why are introducing batch normalization at this point. \n\nOur response: We completely agree with the reviewer on this point. Based on your comment, we have merged Section 3.2 of the submitted version with Section 4. We now introduce batch normalization right after explaining why we are motivated to use it in the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: The boldfacing in Table 1, 2 and 3 is misleading. I understand this is done to highlight the proposed method, but I think commonly boldfacing is used to highlight the best results. \n\nOur response: Based on your comment, we have only highlighted the best results in all the tables of the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: Figure 2b. What is your hypothesis why BPC actually goes down the longer the sequence is? \n\nOur response: We believe that the models learn to focus only on information relevant to the generation of the next target character. The prediction accuracy of the models improves as the sequence length increases since longer sequences provide more information from the past to generate the next target character. We have added the above discussion to Section 5.5 of the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: Algorithm 1, line 14: Using the cross-entropy is a specific choice dependent on the task. My understanding is your approach can work with any differentiable downstream loss? \n\nOur response: Thank you for raising this. We have also used our method with other loss functions, and it is not limited to only cross entropy. We have addressed the issue in the revised manuscript. ", "We sincerely thank the reviewer for careful reading of our manuscript and many insightful comments and suggestions towards improving our paper. Below we respond to each comment of yours in detail. \n\n--------------------------------------------------------- \n\nMajor Comments (Weaknesses): \n\n--------------------------------------------------------- \n\nReviewer comment: little understanding is provided into _why_ covariance shift occurs/ why batch normalisation is so useful. The method works, but the authors could elaborate more on this, given that this is the core argument motivating the chosen method. \n\nOur response: The main motivation of using batch normalization for the binarization/ternarization process relies on an observation of distribution of an LSTM gates/states trained with the BinaryConnect method. More precisely, we first trained an LSTM using the BinaryConnect for 50 epochs and illustrated the distribution of gates/states (see Fig. 4 of the revised manuscript). Comparing the distribution curves of the BinaryConnect model with its full-precision counterpart shows that the BinaryConnect method makes LSTM ineffective. In fact, the LSTM gates that are supposed to control the flow of information fail to function properly. For instance, the output gate o and the input gate i tend to let all information through while these gates in the full-precision model behaves differently. To explore the cause of this problem, we performed the second experiment: we measured the distribution of the input gate i before its non-linear function applied during different training iterations. 
We observed that the binarization process changes the distribution and pushes it towards positive values (see Fig. 5 in the revised manuscript) during the training process. Motivated by these observations, we decided to use batch normalization as it provides more robustness to the network. Based on the reviewer’s comment, we have added the above discussion to Section 4 of the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: some statements are too bold/vague, e.g. page 3: “a binary/ternary model that can perform all temporal tasks” \n\nOur response: Thank you for raising this issue. We have revised the bold/vague statements in the manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: unclear: by adapting a probabilistic formulation / sampling quantised weights, some variance is introduced. Does it matter for predictions (which should now also be stochastic)? How large is this variance? Even if negligible, it is not obvious and should be addressed. \n\nOur response: In fact, the variance that the stochastic binarization/ternarization process introduces in the prediction accuracy is very small. For instance, we measured the distribution of the prediction accuracy on the Penn Treebank corpus when using the stochastic ternarization process over 10000 samples, as shown in Fig. 1 (b) of the revised manuscript. This curve shows that the variance imposed by the stochastic process has a negligible effect on the prediction accuracy. Based on the reviewer’s comment, we have added the above discussion to Section 5.5 of the revised manuscript.", "--------------------------------------------------------- \n\nOther Questions / Comments \n\n--------------------------------------------------------- \n\nReviewer comment: How dependent is the method on the batch size chosen? This is in particular relevant as smaller batches might yield poor empirical estimates for mean/var. What happens at batch size 1? Are predictions poorer for smaller batches? \n\nOur response: Based on your comment, we have investigated the effect of using different batch sizes on the prediction accuracy of our binarized/ternarized models (see Section 5.5 of the revised manuscript). To this end, we trained an LSTM of size 1000 over a sequence length of 100 and different batch sizes to perform the character-level language modeling task on the Penn Treebank corpus. The simulation results show that batch normalization cannot be used with a batch size of 1, as the output vector will be all zeros. Moreover, using batch sizes slightly larger than 1 leads to a high variance in the estimates of the statistics of the unnormalized vector, resulting in a lower prediction accuracy than the baseline model (without batch normalization), as shown in Figure 3 of the revised manuscript. On the other hand, the prediction accuracy of our binarized/ternarized models improves as the batch size increases, while the prediction accuracy of the baseline model decreases. \n\n--------------------------------------------------------- \n\nReviewer comment: Section 2, second line — detail: case w_{i,j}=0 is not covered \n\nOur response: Thank you for raising this. We have fixed the typo in the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: equation (5): total probability mass does not add up to 1 \n\nOur response: For the stochastic ternarization process, we sample from the [0, 1] interval depending on the weight sign. 
In case of a positive sign, the probability of getting +1 is equal to the absolute value of the normalized weight and the probability of getting 0 is 1-P(w = 1), which adds up to 1. Similarly, in case of a negative sign, the probability of getting -1 is equal to the absolute value of the normalized weight and the probability of getting 0 is 1-P(w = -1), which also adds up to 1. \n\n--------------------------------------------------------- \n\nReviewer comment: a direct comparison with models from previous work would have been interesting, where these previous methods also rely on batch normalisation \n\nOur response: Unfortunately, to the best of our knowledge, we could not find any other methods that rely on batch normalization. However, we tried our best to compare our method with other existing quantization methods. \n\n--------------------------------------------------------- \n\nReviewer comment: as I understand, the main contribution is in the inference (forward pass), not in training. It is somewhat misleading when the authors speak about “the proposed training algorithm” or “we introduced a training algorithm” \n\nOur response: We completely agree with the reviewer. We have revised the misleading statements in the manuscript based on your comment. \n\n--------------------------------------------------------- \n\nReviewer comment: unclear: last sentence before section 6. \n\nOur response: Thank you for raising this. We have rephrased the sentence in the revised manuscript. \n\n--------------------------------------------------------- ", "We sincerely thank the reviewer for careful reading of our manuscript and many insightful comments and suggestions towards improving our paper. Below we respond to each comment of yours in detail. \n\n--------------------------------------------------------- \n\nMajor Comments (Weaknesses): \n\n--------------------------------------------------------- \n\nReviewer comment: While the application of batch normalization demonstrates good results, having more compelling results on why covariate shift is such a problem in LSTMs would be helpful. Is this methodology applicable to other recurrent layers like RNNs and GRUs? \n\nOur response: The main motivation for using batch normalization in the binarization/ternarization process relies on an observation of the distribution of the LSTM gates/states when trained with the BinaryConnect method. More precisely, we first trained an LSTM using BinaryConnect for 50 epochs and illustrated the distribution of gates/states (see Fig. 4 of the revised manuscript). Comparing the distribution curves of the BinaryConnect model with its full-precision counterpart shows that the BinaryConnect method makes the LSTM ineffective. In fact, the LSTM gates that are supposed to control the flow of information fail to function properly. For instance, the output gate o and the input gate i tend to let all information through, while these gates in the full-precision model behave differently. To explore the cause of this problem, we performed a second experiment: we measured the distribution of the input gate i before its non-linear function is applied during different training iterations. We observed that the binarization process changes the distribution and pushes it towards positive values (see Fig. 5 in the revised manuscript) during the training process. Motivated by these observations, we decided to use batch normalization as it provides more robustness to the network. 
Based on the reviewer’s comment, we have added the above discussion to Section 4 of the revised manuscript. \n\nTo also show the applicability of our method to GRUs, we repeated the character-level language modeling task performed in Section 5.1 while using GRUs instead of LSTMs on the Penn Treebank, War & Peace and Linux Kernel corpora. We also adopted the same network configurations and settings used in Section 5.1 for each of the aforementioned corpora. Table 6 summarizes the performance of our binarized/ternarized models. The simulation results show that our method can successfully binarize/ternarize the recurrent weights of GRUs. Based on the reviewer’s comment, we have added the above discussion to Section 5.5 of the revised manuscript. \n\n--------------------------------------------------------- \n\nReviewer comment: Does applying batch normalization across layer boundaries or at the end of each time-step help? This may incur lower overhead during inference and training time compared to applying batch normalization to the output of each matrix vector product (inputs and hidden-states). \n\nOur response: Since the immediate impact of the binarization/ternarization process is on the value of gates in LSTM, it works the best when batch normalization is applied right after the vector-matrix multiplications. \n\n---------------------------------------------------------", "Reviewer comment: Does training with batch-normalization add additional complexity to the training process? I imagine current DL framework do not efficiently parallelize applying batch normalization on both input and hidden matrix vector products. \n\nOur response: Yes, it adds additional complexity and makes the training process slightly slow. However, since we target embedded devices requiring real-time inference process, the additional complexity in the training process is a worthwhile tradeoff when considering the gain that batch normalization provides for hardware implementations (i.e., having binary/ternary weights which requires less hardware cost). Based on the reviewer’s comment, we have added a discussion stating that batch normalization introduces additional complexity to the training process. \n\n--------------------------------------------------------- \n\nReviewer comment: It would be nice to have more intuition on what execution time overheads batch-normalization applies during inference on a CPU or GPU. That is, without a hardware accelerator what are the run-time costs, if any. \n\nOur response: Batch normalization by itself makes the inference computations slower (by a factor of ~1.3 in our simulation) on a CPU or GPU platform. On the other hand, binarized/ternarized weights can be exploited to speed up the computations. For instance, XNOR-Net paper (https://arxiv.org/abs/1603.05279) has shown that using binarized weights can speed up the computations by a factor of 2 on a CPU platform. Moreover, using binarized/ternarized weights saves memory and reduces the memory access and consequently power consumption. Unfortunately, due to the limited time that we had for the rebuttal period, we could not measure the inference time of our method on a CPU or GPU platform. \n\n--------------------------------------------------------- \n\nReviewer comment: The hardware implementation could have much more detail. First, where are the area and power savings coming from. It would be nice to have a breakdown of on-chip SRAM for weights and activations vs. required DRAM memory. 
Similarly having a breakdown of power in terms of on-chip memory, off-chip memory, and compute would be helpful. \n\nOur response: In fact, the power and area savings come from replacing the full-precision multipliers with binary/ternary multipliers (i.e., multiplexers). Table 6 only reports the implementation results of the computational core excluding the DRAM that was used to store all the weights and activations. Depending on the target application, the size of DRAM could be different. It is worth mentioning that the main computational core builds on a large array of Multiply-and-Accumulate units working in parallel. The intermediate results are stored into registers, and the computed activations are written into the DRAM. The breakdown of storage required to store the weights for each task is reported in Table 1 to 5 in Section 5. Based on your comment, we have added the above discussion to the paper. \n\n--------------------------------------------------------- ", "Reviewer comment: The hardware accelerator baseline assumes a 12-bit weight and activation quantization. Is this the best that can be achieved without sacrificing accuracy compared to floating point representation? Does adding batch normalization to intermediate matrix-vector products increase the required bit width for activations to preserve accuracy? \n\nOur response: According to our simulation results, both the baseline and the proposed models require 12 bits for activations without incurring any accuracy degradation. The weights of the baseline model also require 12 bits for a fixed-point representation without incurring any performance degradation. Similar results have also been reported in ESE paper (see https://dl.acm.org/citation.cfm?id=3021745). Based on your comment, we have added the above discussion to Section 6 of the revised manuscript. \n\n--------------------------------------------------------- \n\nOther comments \n\n--------------------------------------------------------- \n\nReviewer comment: Preceding section 3.2 there no real discussion on batch normalization and covariate shift which are central to the work’s contribution. It would be nice to include this in the introduction to guide the reader. \n\nOur response: We completely agree with the reviewer. We have now merged Section 3.2 with Section 4 in the revised manuscript to make a more coherent statement. \n\n--------------------------------------------------------- \n\nReviewer comment: It is unclear why DaDianNao was chosen as the baseline hardware implementation as opposed to other hardware accelerator implementations such as TPU like dataflows or the open-source NVDLA. \n\nOur response: We agree with the reviewers that TPU and NVDLA are among the best accelerators reported to-date. However, we believe that DaDianNao is also one of the best accelerators and the reason for that is twofold. First, DaDianNao is designed for energy-efficiency: it can process neural computations ~656x faster and is ~184x more energy efficient than GPUs (see https://ieeexplore.ieee.org/document/7480791). Second, some hardware techniques can be adopted on top of DaDianNao to further speed up the computations. For instance, in Cambricon-X paper (see https://ieeexplore.ieee.org/document/7783723), it was shown that sparsity among both activations and weights can be exploited on top of the DaDianNao’s dataflow to skip the noncontributory computations with zeros and speed up the process. 
Similarly, Cnvlutin’s paper (see https://ieeexplore.ieee.org/document/7551378) uses the DaDianNao’s architecture to skip the noncontributory computations of zero-valued activations. We believe that similar techniques can be also exploited to skip the noncontributory computations of zero-valued weights of ternarized RNNs as a future work. Based on the reviewer’s comment, we have added the above discussion to the revised manuscript. ", "The paper proposes a method to achieve binary and ternary quantization for recurrent networks. The key contribution is applying batch normalization to both input matrix vector and hidden matrix vector products within recurrent layers in order to preserve accuracy. The authors demonstrate accuracy benefits on a variety of datasets including language modeling (character and word level), MNIST sequence, and question answering. A hardware implementation based on DaDianNao is provided as well.\n\nStrengths\n- The authors propose a relatively simple and easy to understand methodology for achieving aggressive binary and ternary quantization.\n- The authors present compelling accuracy benefits on a range of datasets.\n\nWeaknesses / Questions\n- While the application of batch normalization demonstrates good results, having more compelling results on why covariate shift is such a problem in LSTMs would be helpful. Is this methodology applicable to other recurrent layers like RNNs and GRUs? \n- Does applying batch normalization across layer boundaries or at the end of each time-step help? This may incur lower overhead during inference and training time compared to applying batch normalization to the output of each matrix vector product (inputs and hidden-states). \n- Does training with batch-normalization add additional complexity to the training process? I imagine current DL framework do not efficiently parallelize applying batch normalization on both input and hidden matrix vector products.\n- It would be nice to have more intuition on what execution time overheads batch-normalization applies during inference on a CPU or GPU. That is, without a hardware accelerator what are the run-time costs, if any.\n- The hardware implementation could have much more detail. First, where are the area and power savings coming from. It would be nice to have a breakdown of on-chip SRAM for weights and activations vs. required DRAM memory. Similarly having a breakdown of power in terms of on-chip memory, off-chip memory, and compute would be helpful. \n- The hardware accelerator baseline assumes a 12-bit weight and activation quantization. Is this the best that can be achieved without sacrificing accuracy compared to floating point representation? Does adding batch normalization to intermediate matrix-vector products increase the required bit width for activations to preserve accuracy?\n\nOther comments\n- Preceding section 3.2 there no real discussion on batch normalization and covariate shift which are central to the work’s contribution. It would be nice to include this in the introduction to guide the reader.\n- It is unclear why DaDianNao was chosen as the baseline hardware implementation as opposed to other hardware accelerator implementations such as TPU like dataflows or the open-source NVDLA. \n", "This work proposes a method for reducing memory requirements in RNN models via binary / ternary quantisation. 
The authors argue that the difficulty of binarising RNNs is due to a covariate shift, and address it with stochastic quantised weights and batch normalisation.\nThe proposed RNN is tested on 6 sequence modelling tasks/datasets and shows drastic memory improvements compared to full-precision RNNs, with almost no loss in test performance.\nBased on the more efficient RNN cell, the authors furthermore describe a more efficient hardware implementation, compared to an implementation of the full-precision RNN.\n\nThe core message I took away from this work is: “One can get away with stochastic binarised weights in a forward pass by compensating for it with batch normalisation”.\n\nStrengths:\n- substantial number of experiments (6 datasets), different domains\n- surprisingly simple methodological fix \n- substantial literature review\n- it has been argued that char-level / pixel-level RNNs present somewhat artificial tasks — even better that the authors test for a more realistic RNN application (Reading Comprehension) with an actually previously published model.\n\nWeaknesses:\n- little understanding is provided into _why_ covariance shift occurs/ why batch normalisation is so useful. The method works, but the authors could elaborate more on this, given that this is the core argument motivating the chosen method.\n- some statements are too bold/vague, e.g. page 3: “a binary/ternary model that can perform all temporal tasks”\n- unclear: by adapting a probabilistic formulation / sampling quantised weights, some variance is introduced. Does it matter for predictions (which should now also be stochastic)? How large is this variance? Even if negligible, it is not obvious and should be addressed.\n\n\nOther Questions / Comments\n- How dependent is the method on the batch size chosen? This is in particular relevant as smaller batches might yield poor empirical estimates for mean/var. What happens at batch size 1? Are predictions poorer for smaller batches?\n- Section 2, second line — detail: case w_{i,j}=0 is not covered\n- equation (5): total probability mass does not add up to 1\n- a direct comparison with models from previous work would have been interesting, where these previous methods also rely on batch normalisation\n- as I understand, the main contribution is in the inference (forward pass), not in training. It is somewhat misleading when the authors speak about “the proposed training algorithm” or “we introduced a training algorithm”\n- unclear: last sentence before section 6.\n\n\n\n" ]
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "BkelhUWx1E", "iclr_2019_HkNGYjR9FX", "BylZ06Ft0X", "iclr_2019_HkNGYjR9FX", "r1eCUUH0oQ", "r1eCUUH0oQ", "r1eCUUH0oQ", "HyxKWlpFh7", "HyxKWlpFh7", "HJlRfyx_aQ", "HJlRfyx_aQ", "HJlRfyx_aQ", "iclr_2019_HkNGYjR9FX", "iclr_2019_HkNGYjR9FX" ]
iclr_2019_Hke-JhA9Y7
Learning concise representations for regression by evolving networks of trees
We propose and study a method for learning interpretable representations for the task of regression. Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions. Differentiable features are trained via gradient descent, and the performance of features in a linear model is used to weight the rate of change among subcomponents of each representation. The search process maintains an archive of representations with accuracy-complexity trade-offs to assist in generalization and interpretation. We compare several stochastic optimization approaches within this framework. We benchmark these variants on 100 open-source regression problems in comparison to state-of-the-art machine learning approaches. Our main finding is that this approach produces the highest average test scores across problems while producing representations that are orders of magnitude smaller than the next best performing method (gradient boosting). We also report a negative result in which attempts to directly optimize the disentanglement of the representation result in more highly correlated features.
accepted-poster-papers
The reviewers all feel that the paper should be accepted to the conference. The main strengths that they noted were the quality of writing, the wide applicability of the proposed method and the strength of the empirical evaluation. It's nice to see experiments across a large number of problems (100), with corresponding code, where baselines were hyperparameter tuned as well. This helps to give some assurance that the method will generalize to new problems and datasets. Some weaknesses noted by the reviewers were computational cost (the method is significantly slower than the baselines) and they weren't entirely convinced that having more concise representations would directly lead to the claimed interpretability of the approach. Nevertheless, they found it would make for a solid contribution to the conference.
train
[ "BJgNl0PKnX", "SyxS9k3-Am", "ryxU37agA7", "rkgGkod5pQ", "S1lEd9Oqa7", "Skl9kud9p7", "rJeoSzM9h7", "SkgGi8lYnX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a genetic algorithm that maintains an archive of representations that are iteratively evolved and selected by comparing validation error. Each representation is constructed as a syntax tree consists of elements that are common in neural network architectures. The experimental results showed that their algorithm is competitive to the state-of-the-art while achieving much smaller model size.\n\nComments:\n1. I think this paper lacks technical novelty. I'm going to focus on experimental result in the following two questions.\n2. FEAT is a typical genetic algorithm that converges slowly. In the appendix, one can verify that FEAT converges at least 10x slower than XGBoost. Can FEAT achieve lower error than XGBoost when they use the same amount of time? \nCan the authors provide a convergence plot of their algorithm (i.e. real time vs test error)?\n3. From Figure 3 it seems that the proposed algorithm is competitive to XGBoost, and the model size is much smaller than XGBoost. Have the authors tried to post-processing the model generated by XGBoost? How's the performance compare?", "2. We expanded the parameter space for XGBoost to give it a larger computational budget. This larger budget compensates for the fact that fitting a single model using XGBoost is quicker than with Feat. The extra tuning made the XGBoost results slightly better; in the pdfdiff for Figure 3 of the revision, one can see a slight improvement in the boxplot for XGBoost. However, XGBoost's accuracy was still not significantly different than Feat over all problems (p = 1.0). Interestingly, the new XGBoost results did significantly outperform MLP, unlike the original results. ", "1. I should say I'm biased since the techniques that the authors used actually sounds familiar to me. I'll take this into consideration.\n\n2. Why was the parameter expansion necessary? Does it reduce the error?\n\n3. This addresses my question. Thanks.", "We thank the reviewer for their comments, and address a few minor points below. \n\n1) \"only very limited hyperparameter tuning for the other methods was performed\"\n\n - We have extended the hyperparameter space for XGBoost, the closest competitor, in our revision. Hopefully this addresses the reviewer's concern.\n\n2) The reviewer correctly points out that size is only a proxy for interpretability in this experiment. We do not have a better way to assess lebility outside of an application with expert analysis. Nevertheless, simpler models are generally (but not always) easier to interpret. Our goal with the illustrative example is to show this, and we state similar caveats as the reviewer has suggested. \n\n3) Regarding adjustment of weights, weights are only adjusted for features that are composed of differentiable operators because this is a limitation of the chain rule with gradient descent. It is important to note that all of the floating point operators we considered were differentiable; the only non-differentiable nodes were boolean operators, which don't include weights. It would also be possible to use another method to tune the weights such as stochastic hillclimbing, although previous symbolic regression research on this subject tends to favor gradient descent for weight tuning weights [1,2]. Hopefully this addresses the reviewer's question; if not we are happy to clarify further. \n\n[1] Kommenda, M. et. al. (2013, July). Effects of constant optimization by nonlinear least squares minimization in symbolic regression. 
In Proceedings of the 15th annual conference companion on Genetic and evolutionary computation (pp. 1121-1128). ACM.\n[2] Topchy, A., & Punch, W. F. (2001, July). Faster genetic programming based on local gradient search of numeric leaf values. In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (pp. 155-162). Morgan Kaufmann Publishers Inc..\n", "We thank the reviewer for the critiques, which have led to some improvements to our experiment and hopefully more convincing analysis. \n\n1. It is hard for us to respond to the reviewer's contention that our work lacks technical novelty without more specific critiques. However, we will restate what is novel here. \n\nFirst, FEAT represents models in the population as sets of syntax trees/equations. This representation is novel both in neural network literature and genetic algorithm literature. Second, we use the feedback of model weights to guide variation probabilities; to our knowledge this is a new approach. FEAT also uses multiple type representations, meaning it can learn boolean and continuous functions in the same representation, something we believe to be novel as well. Finally, the composition of syntax trees using NN activation functions along with other operations is rarely seen in GA/GP literature, much less the edge-based encoding of weights. Taken as a whole, there are several novel technical aspects of the algorithm. \n\nIn addition to the methodological aspects, few if any previous works in neural architecture search / neuroevolution focus on regression with the goal of intelligibility. In this regard we believe our results are novel and important: by establishing a new state-of-the-art, they point to a new area of application for this field of research. \n\n2. We completely agree with the reviewer's point that FEAT converges more slowly than XGBoost. We should expect a randomized, population-based heuristic search method to be slower than a greedy, single-model heuristic-based method. To address this point, we have added text to the experiments and discussion, and reworked the XGBoost analysis . \n\nOur stated goal is to produce simplest possible models without sacrificing accuracy, and we contend that our method achieves this. Although computation time suffers as a result, we believe it is reasonable to consider a 60 minute cutoff for optimization time on every problem, some of which contain millions of samples. \n\nThe reviewer also asks whether FEAT can achieve lower error than XGBoost given the same amount of time. Based on the reviewer's comments we have expanded the hyperparameter space for XGBoost in our revision, from 9 hyperparameter combinations to 1925. This extension results in wallclock runtimes closer to those of FEAT and MLP. Under these conditions, the accuracy comparisons do not change much. We still see no significant differences between FEAT and XGBoost in terms of accuracy.\n\n3. To address the reviewer's suggestion regarding complexity, we have generated our XGBoost results in this revision with a pruning step after tree construction. We have also optimized the minimum split loss criterion (gamma) that controls the amount of pruning. Under these conditions, we observe very similar size comparisons as before. \n\nWe hope the updated manuscript addresses the reviewer's concerns.", "We thank the reviewer for their positive comments. We agree with the reviewer's assessment of the tradeoff between interpretability and computational cost. 
Many applications with interpretability as a main focus can stand the additional burden (in this case, 60 minutes maximum). It is also worth noting that this method is parallelizable, although that functionality has not been exploited in our benchmarking. \n\nBased on the reviewer's comments and other comments, we have made the following changes:\n\n- we explicitly mention the termination criteria in the experiments section and the computation times in the results\n - a discussion of the tradeoff of computational cost has been added to the discussion\n - we have added a validation loss terminal criterion (a.k.a. early stopping) to Feat to improve the runtimes a bit\n\nThanks for the helpful comments. ", "# Summary\nThe paper presents a method for learning network architectures for regression tasks. The focus is on learning interpretable representations of networks by enforcing a concise structure made from simple functions and logical operators. The method is evaluated on a very large number of regression tasks (99 problems) and is found to yield very competitive performance.\n\n# Quality\nThe quality of the paper is high. The method is described in detail and differences to previous work are clearly stated. Competing methods have been evaluated in a fair way with reasonable hyperparameter tuning.\n\nIt is very good to see a focus on interpretability. The proposed method is computationally heavy, as can be seen from figure 7 in the appendix, but I see the interpretability as the main benefit of the method. Since many applications, for which interpretability is key, can bear the additional computational cost, I would not consider this a major drawback. However, it would be fair to mention this point in the main paper.\n\n# Clarity\nThe paper reads well and is nicely structured. The figures and illustrations are easy to read and understand.\n\n# Originality\nThe paper builds on a large corpus of previous research, but the novelties are clearly outlined in section 3. However, the presented method is very far from my own field of research, so I find it difficult to judge exactly how novel it is.\n\n# Significance\nThe proposed method should be interesting to a wide cross-disciplinary audience and the paper is clearly solid work. The focus on interpretability fits well with the current trends in machine learning. However, the method is far from my area of expertise, so I find it difficult to judge the significance.\n", "The paper proposes a method for learning regression models through evolutionary\nalgorithms that promise to be more interpretable than other models while\nachieving similar or higher performance. The authors evaluate their approach on\n99 datasets from OpenML, demonstrating very promising performance.\n\nThe authors take a very interesting approach to modeling regression problems by\nconstructing complex algebraic expressions from simple building blocks with\ngenetic programming. In particular, they aim to keep the constructed expression\nas small as possible to be able to interpret it easier. The evaluation is\nthorough and convincing, demonstrating very good results.\n\nThe presented results show that the new method beats the performance of existing\nmethods; however, as only very limited hyperparameter tuning for the other\nmethods was performed, it is unclear to what extent this will hold true in\ngeneral. 
As the main focus of the paper is on the increased interpretability of\nthe learned models, this is only a minor flaw though.\n\nThe interpretability of the final models is measured in terms of their size.\nWhile this is a reasonable proxy that is easy to measure, the question remains\nto what extent the models are really interpretable by humans. This is definitely\nsomething that should be explored in future work, as a small-size model does not\nnecessarily imply that humans can understand it easily, especially as the\ngenerated algebraic expressions can be complex even for small trees.\n\nThe description of the proposed method could be improved; in particular it was\nunclear to this reviewer why the features needed to be differentiable and what\nthe benefit of this was (i.e. why was this the most appropriate way of adjusting\nweights).\n\nIn summary, the paper should be accepted." ]
[ 6, -1, -1, -1, -1, -1, 7, 8 ]
[ 3, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2019_Hke-JhA9Y7", "ryxU37agA7", "S1lEd9Oqa7", "SkgGi8lYnX", "BJgNl0PKnX", "rJeoSzM9h7", "iclr_2019_Hke-JhA9Y7", "iclr_2019_Hke-JhA9Y7" ]
iclr_2019_Hke20iA9Y7
Efficient Training on Very Large Corpora via Gramian Estimation
We study the problem of learning similarity functions over very large corpora using neural network embedding models. These models are typically trained using SGD with random sampling of unobserved pairs, with a sample size that grows quadratically with the corpus size, making it expensive to scale. We propose new efficient methods to train these models without having to sample unobserved pairs. Inspired by matrix factorization, our approach relies on adding a global quadratic penalty and expressing this term as the inner-product of two generalized Gramians. We show that the gradient of this term can be efficiently computed by maintaining estimates of the Gramians, and develop variance reduction schemes to improve the quality of the estimates. We conduct large-scale experiments that show a significant improvement both in training time and generalization performance compared to sampling methods.
accepted-poster-papers
This paper presents methods to scale learning of embedding models estimated using neural networks. The main idea is to work with Gram matrices whose sizes depend on the length of the embedding. Building upon existing works like the SAG algorithm, the paper proposes two new stochastic methods for learning using stochastic estimates of Gram matrices. Reviewers find the paper interesting and useful, although they have given many suggestions to improve the presentation and experiments. For this reason, I recommend accepting this paper. A small note: the SAG algorithm was originally proposed in 2013. The paper only cites the 2017 version. Please include the 2013 version as well.
train
[ "rJgsqcLt3m", "SJxICZhjam", "rkxc_enipX", "ryltke3iT7", "BJgu7y3o67", "ryx3glNinm", "r1gf2j2w27", "r1gepzTgnm", "BygKKCEuiX", "r1xBJUGgo7", "H1ejQiWgsX", "SJl1I_Zgi7", "rJx8ypbkjQ", "rkeCV3lK5m", "rke9t6tEcm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "public", "public", "public" ]
[ "This paper proposes an efficient algorithm to learn neural embedding models with a dot-product structure over very large corpora. The main method is to reformulate the objective function in terms of generalized Gramiam matrices, and maintain estimates of those matrices in the training process. The algorithm uses less time and achieves significantly better quality than sampling based methods. \n\n1. About the experiments, it seems the sample size for sampling based experiments is not discussed. The number of noise samples have a large influence on the performance of the models. In figure 2, different sampling strategies are discussed. It would be cool if we can also see how the sampling size affects the estimation error. \n\n2. If we just look at the sampling based methods, in figure 2a, uniform sampling’s Gramian estimates is the worst. But the MAP of uniform sampling on validation set for all three datasets are not the worst. Do you have any comments?\n\n3. wheter an edge -> whether an edge.\n", "We would like to thank all reviewers for their careful reading and helpful suggestions. We have uploaded a revision of the paper with the following changes:\n- We added a new section to the appendix (Appendix C) discussing how to adapt the methods to a non-uniform weight matrix.\n- We added Appendix E.1 to relate the gradient estimation error to the Gramian estimation error, with a numerical experiment (Figure 6) showing the effect of our methods on gradient estimates.\n- We added a comment to the conclusion to emphasize that our experiments were focused on problems with very large vocabulary size.\n- We rearranged the introduction, and improved transitions between sections.\n- We added comments to the numerical experiments (Section 4 and Appendix E) highlighting the effect of the batch size and of the sampling distribution.\n\nWe thank the reviewers again for their time and helpful comments.\n", "Thank you for your review and your helpful suggestions.\n\nWe updated the organization following the reviewer's suggestions, by reorganizing the introduction and improving the transitions between sections. We also added a comment about our choice of hyper-parameters: in the main experiments of Section 4, the hyper-parameters were cross-validated using the baseline. The effect of some of the hyper-parameters is further studied in the appendix: the effect of the batch size and learning rate is studied in Appendix D.2 (now Appendix E.4 in the revision), and the effect of the penalty coefficient λ is illustrated in Appendix C (now Appendix D in the revision). We did not include these results in the main body of the paper for space constraints, and to keep the message focused, but we added a note to Section 4 pointing to the appendix for further details on the effect of the various hyper-parameters.\n", "Thank you for your review and your helpful suggestions.\n\n1) On the effect of sample size: we agree that the sample size directly affects the performance of these methods. We investigated this effect in Appendix D.2 (which is now Appendix E.4 in the revision), where we ran the same experiment on Wikipedia English with batch sizes 128, 512 (Tables 3 and 4), and compared the results to batch size 1024 (Table 2). We simultaneously varied the learning rate to understand its effect as well, but focusing on the effect of batch size only, we can observe that\n(i) the performance of all methods increases with the batch size (at least in the 128-1024 range). 
\n(ii) the relative improvement of our methods (compared to the baseline) is larger for smaller batch sizes: the relative improvement is 19.5% for 1024, 26.7% for 512, and 29.1% for 128.\nOf course, one cannot increase the batch size indefinitely as there are hard limits on memory size, and the key advantage of our methods is in problems where sampling-based methods give poor estimates even with the largest feasible batch size.\nThe effect of the batch size can also be seen to some extent in Figure 2.a, where we show the quality of the Gramian estimates for batch size 128 and 1024. The figure suggests that the quality improves, for all methods, with larger batch sizes, and that SOGram with batch size 128 has a comparable estimation quality to the baseline with batch size 1024.\n\n2) The reviewer raises an interesting point. We have observed in our experiments that for a fixed sampling distribution, improving the Gramian estimates generally leads to better MAP, but we cannot draw conclusions when the sampling distribution changes. One possible explanation is that the sampling distribution affects both the quality of the Gramian estimates, and the frequency at which the item embeddings are updated. In particular, tail items are sampled more often under uniform sampling than under the other distributions, and updating their embeddings more frequently may contribute to improving the MAP. We added a comment (Appendix E.2 in the revision) to highlight this observation.", "Thank you for your assessment and your helpful suggestions.\n\nRegarding evaluation: since the focus of the paper is on the design of an efficient optimization method, we wanted to choose an experiment where (i) the evaluation metric is aligned with the optimization objective, and (ii) the vocabulary size is very large (on the order of 10^6 or more), making traditional sampling-based methods inefficient, because they would require too many samples to achieve high model quality. This is why we chose the Wikipedia dataset, which is, to our knowledge, one of the few publicly available datasets of this scale. It also offers different subsets of varying scale, which allowed us to illustrate the effect of the problem size, suggesting that the benefit of the Gramian-based methods increases with vocabulary size. We added a note to the revision to comment on our choice.\nWe also agree that it will be beneficial to evaluate these method on other applications such as more traditional natural language tasks, and this is something we intend to pursue in future work.", "Summary of the paper:\n\nThis work presents a novel method for similarity function learning using non-linear model. The main problem with the similarity function learning models is the pairwise component of the loss function which grows quadratically with the training set. The existing stochastic approximations which are agnostic to training set size have high variance and this in-turn results in poor convergence and generalisation. This paper presents a new stochastic approximation of the pairwise loss with reduced variance. This is achieved by exploiting the dot-product structure of the least-squares loss and is computationally efficient provided the embedding dimensions are small. The core idea is to rewrite the least-squares as the matrix dot product of two PSD matrices (Grammian). The Grammian matrix is the sum of the outer-product of embeddings along the training samples. 
The authors present two algorithms for training the model: 1) SAGram: by maintaining a cache of all embedding vectors of training points (O(nk) space), whenever a point is encountered its cache entry is replaced with its embedding vector. 2) SOGram: This algorithm keeps a moving average of the Grammian estimate to reduce the variance. Experimental results show that this approach reduces the variance in the Grammian estimates, resulting in faster convergence and better generalisation.\n\nReview:\n\nThe paper is well written with a clear contribution to the problem of similarity learning. My only complaint is that I think the evaluation is a bit weak and does not support the claim that it is applicable to all kinds of problems, e.g. nlp and recommender systems. This task in Wikipedia does not seem to be standard (kind of arbitrary) — there are some recommendation results in the appendix but I think they should have been in the main paper.\n\nOverall interesting but I would recommend evaluating on standard similarity learning for nlp and other tasks (perhaps more than one)\n\nThere are specific similarity evaluation sets for word embeddings. They can be found in the following papers: https://arxiv.org/pdf/1301.3781.pdf \nhttp://www.aclweb.org/anthology/D15-1036", "This paper proposes a method for estimating non-linear similarities between items using Gramian estimation. This is achieved by having two separate neural networks defined for each item to be compared, which are then combined via a dot product. The proposed innovation in this paper is to use Gramian estimation for the penalty parameter of the optimization which allows for the non-linear case. Two algorithms are proposed which allow for estimation in the stochastic / online setting. Experiments are presented which appear to show good performance on some standard benchmark tasks. \n\nOverall, I think this is an interesting set of ideas for an important problem. I have two reservations. First, the organization of the paper needs to be addressed in order to aid readability. The paper often jumps across sections without giving motivation or connecting language. This will limit the audience of the paper and the work. Second (and more importantly), I found the experiments to be slightly underwhelming. The hyperparameters (batch size, learning rate) and architecture don’t have any rationale attached to them. It is also not entirely clear whether the chosen comparison methods fully constitute the current state of the art. Nonetheless, I think this is an interesting idea and strong work with compelling results. \n\nEditorial comments:\n\nThe organization of this paper leaves something to be desired. The introduction ends very abruptly, and then appears to begin again after the related work section. From what I can tell the first three sections all constitute the introduction and should be merged with appropriate edits to make the narrative clear.\n\n“where x and y are nodes in a graph and the similarity is wheter an edge” → typo and sentence ends prematurely. \n", "1) For observed pairs, one can use arbitrary weights 𝑤_𝑖𝑗 . For the unobserved data, in our problem setting, the set of all possible pairs (i, j) is too large to specify an arbitrary weight matrix (say if the vocabulary size is 10^7 or more, the full weight matrix would have more than 10^14 entries). In such situations one needs to provide a concise description of this weight matrix. 
One such representation is the sum of a sparse + low-rank component, and our methods handle this case: the sparse component can be optimized directly, and the low-rank component can be optimized using our Gramian estimation methods. The previous answer describes the rank-1 case where 𝑤_𝑖𝑗 = 𝑎_𝑖 𝑏_𝑗 , and the same argument generalizes to the low-rank case (for a rank-r weight matrix, one needs to maintain 2*r Gramians).\n\n2) In a retrieval setting with a very large corpus, the dot product structure can be the only viable option, as scoring all candidates in linear time is prohibitively expensive, while maximum inner-product search can be approximated in sublinear time. As mentioned above, even in models that don't have the dot product structure, our method applies to the global orthogonal regularizer in any embedding layer.\nWe believe our methods are applicable to industrial settings. Our experiments suggest that the relative improvement (w.r.t. existing sampling based methods) grows with the corpus size (see Table 2), so we expect to see large improvements in applications with very large corpora. As for comparing different model classes (neural embedding models vs. factorization machines), this is outside the scope of the paper; our focus is instead on developing efficient optimization methods for the neural embedding model class.", "thanks for your explanation. no doubt, this is an excellent work. I just read the answer for my first question (will read others later). When talking about the weight setting, I mean a_ij which involves both users and items, not only user-specific or item-specific weights. a_ij is very common and it seems that it cannot always be rewritten as a_i b_j. Does the algorithm apply in this setting? \n2 I still think the dot product structure in fig1 is not that popular recently, kind of a bit popular when deep learning was just in the starting stage. Do you find this structure much better than basic factorization machines (just a digression). \nBtw: what do you think about applying this algorithm in industry :)\n", "Thank you for your comments. We will discuss each point below.\n\n1) We agree that it is often a good idea to use non-uniform weights (as well as non-uniform sampling distributions), and the proposed methods support these variants. We did not discuss non-uniform weights to avoid overloading the presentation, but we can certainly add a section to the appendix. As discussed in our previous comment, if we define the penalty as 1/𝑛^2 ∑_𝑖 ∑_𝑗 𝑎_𝑖 𝑏_𝑗 ⟨𝑢_𝑖, 𝑣_𝑗⟩^2 (where, in a recommendation setting, 𝑎_𝑖 is a user-specific weight and 𝑏_𝑗 is an item-specific weight), then this expression is equivalent to ⟨𝐺^𝑢, 𝐺^𝑣⟩ where 𝐺^𝑢, 𝐺^𝑣 are weighted Gram matrices, defined by 𝐺^𝑢 = 1/𝑛 ∑_𝑖 𝑎_𝑖 𝑢_𝑖⊗𝑢_𝑖 and similarly for 𝐺^𝑣. The same methods (SAGram, SOGram) can be applied to the weighted Gramians.\n\n2) The dot product structure remains important in recent literature, e.g. [1, 2, 3], especially in retrieval settings where one needs to score a large corpus, as finding the top-k items in a dot product model is efficient (see literature on Maximum Inner Product Search, e.g. [4, 5] and references therein). In addition to such models, our methods can also apply to arbitrary models using the Global Orthogonal regularizer described in [6]. The effect of the regularizer is to spread out the distribution of embeddings, which can improve generalization. 
We show in Appendix C that this regularizer can be written using Gramians, thus one can apply SOGram or SAGram to such models.\n\n3) On the choice of loss function: the loss on observed pairs (the function ℓ in our notation) is not limited to square loss, and could be logistic loss for example. The penalty function on all pairs, (𝑔 in our notation) is a quadratic function. It can be extended to a larger family (the spherical family discussed in [7]), but this is beyond the scope of this paper.\n\n4) On the derivation of the Gramian formulation: we gave a concise derivation in Section 2.2 due to space limitations, but we can expand here and give some intuition. The penalty term 𝑔 is a double-sum 1/𝑛^2 ∑_𝑖 ∑_𝑗 ⟨𝑢_𝑖, 𝑣_𝑗⟩^2 . If we focus on the contribution of a single left embedding 𝑢_𝑖 , we can observe that this is a quadratic function 𝑢 ↦ ∑_𝑗 ⟨𝑢, 𝑣_𝑗⟩^2 . Importantly, this is the same quadratic function that applies to all the 𝑢_𝑖 (independent of 𝑖 ). A quadratic function on ℝ^𝑑 can be represented compactly using a 𝑘×𝑘 matrix, and this is exactly the role of the Gramian 𝐺^𝑣, and because the same function applies to all 𝑢_𝑖, we can maintain a single estimate and reuse it across batches (unlike sampling-based methods that recompute the estimate at each step). There is additional discussion in Appendix C on the interpretation of this term.\n\n5) On the choice of the weight 𝜆: as mentioned in the experiments, this is a hyper-parameter that we tuned using cross-validation. Intuitively, a larger 𝜆 puts more emphasis on penalizing deviations from the prior, while a lower 𝜆 emphasizes fitting the observations. We have experiments in Appendix C that explore this effect, e.g. the impact on the embedding distribution in Figure 4, and the impact on precision in Figure 5.\n\n6) In eq (1), 𝑛 denotes the number of observed pairs (size of the training set). To simplify, we also define the Gramians as a sum over training examples, although in a recommendation setting, this can be rewritten as a sum over distinct users and distinct items. More precisely, if we let S be the set of users, and 𝑓_s the fraction of training examples which involve user s, then 𝐺^𝑢=1/𝑛 ∑_𝑖 𝑢_𝑖⊗𝑢_𝑖 = ∑_s∈S 𝑓_s 𝑢_s⊗𝑢_s.\n\n7) We plan to open-source our TensorFlow implementation in the near future.\n\n[1] P. Neculoiu, M. Versteegh and M. Rotaru. Learning Text Similarity with Siamese Recurrent Networks. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016.\n[2] M. Volkovs, G. Yu, T. Poutanen. DropoutNet: Addressing Cold Start in Recommender Systems. NIPS 2017.\n[3] P. Covington, J. Adams, E. Sargin. Deep Neural Networks for YouTube Recommendations. Proceedings of the 10th ACM Conference on Recommender Systems (RecSys 2016).\n[4] B. Neyshabur and N. Srebro. On symmetric and asymmetric lshs for inner product search. ICML 2015.\n[5] A. Shrivastava and P. Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). NIPS 2014.\n[6] X. Zhang, F. X. Yu, S. Kumar, and S. Chang. Learning spread-out local feature descriptors. In IEEE International Conference on Computer Vision (ICCV 2017).\n[7] P. Vincent, A. de Brebisson, and X. Bouthillier. Efficient exact gradient update for training deep networks with very large sparse targets. 
In NIPS 2015.", "Thank you for your comments, we will discuss each point below.\n\n1) The dot-product structure is important in many applications, especially in retrieval with very large corpora (since it allows efficient scoring using maximum-inner product search techniques [1, 2]). In addition to dot-product models, our methods can also be useful in more general architectures when used jointly with the Global Orthogonal regularizer proposed in [3], which \"spreads-out\" the embeddings by pushing the embedding distribution towards the uniform distribution. This was shown to improve generalization performance. In the last paragraph of Appendix C, we show that the Global Orthogonal regularizer can be written in terms of Gramians, thus our methods can be used in such models.\n\n2) Using non-uniform weights can be important, and it is supported by the methods we propose. They also support the use of a non-uniform sampling distribution, and non-uniform prior (as discussed in Appendix B). For non-uniform weights, if we define the weight of a left item i to be 𝑎_𝑖 and the weight of a right item 𝑗 to be 𝑏_𝑗 , and define the penalty term as 1/𝑛^2 ∑_𝑖 ∑_𝑗 𝑎_𝑖 𝑏_𝑗 ⟨𝑢_𝑖, 𝑣_𝑗⟩^2, then one can show, using the same argument as in Section 2.2, that this is equal to the matrix inner-product ⟨𝐺^𝑢, 𝐺^𝑣⟩ where 𝐺^𝑢, 𝐺^𝑣 are now weighted Gram matrices given by 𝐺^𝑢 = 1/𝑛 ∑_𝑖 𝑎_𝑖 𝑢_𝑖⊗𝑢_𝑖 and similarly for 𝐺^𝑣 . One can then apply SAGram/SOGram to the weighted Gramians.\n\n3) It is our intention to open-source our TensorFlow implementation in the near future.\n\n[1] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015).\n[2] Anshumali Shrivastava and Ping Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014).\n[3] Xu Zhang, Felix X. Yu, Sanjiv Kumar, and Shih-Fu Chang. Learning spread-out local feature descriptors. In IEEE International Conference on Computer Vision (ICCV 2017).", "Thank you for your comments and for the suggestion.\nFirst, one can make a formal connection between the quality of Gramian estimates and the quality of gradient estimates.\nThe prior term can be written as 1/𝑛 ∑_i ⟨𝑢_𝑖, 𝐺^𝑣 𝑢_𝑖⟩ , thus the partial derivative w.r.t. 𝑢_𝑖 is ∇_𝑢𝑖 𝑔 = 2/𝑛 𝐺^𝑣 𝑢_𝑖 . If the Gramian 𝐺^𝑣 is approximated by Ĝ^𝑣 , then the gradient estimation error is 2/𝑛 ∑_i ‖(𝐺^𝑣− Ĝ^𝑣) 𝑢_𝑖‖^2 = 2/𝑛 ∑_i ⟨(𝐺^𝑣 − Ĝ^𝑣)𝑢_𝑖,(𝐺^𝑣 − Ĝ^𝑣)𝑢_𝑖⟩ which is equal to 2⟨(𝐺^𝑣 − Ĝ^𝑣),(𝐺^𝑣 − Ĝ^𝑣)𝐺^𝑢⟩ , in other words, the estimation error of the right gradient is the \"𝐺^𝑢 -weighted\" Frobenius norm of the left Gramian error.\nWe generated these plots as suggested, on Wikipedia simple, and we observe the same trend for the gradient estimation errors as the Gramian estimation error in Figure 2.a. We will include this experiment in the updated version of the paper during rebuttal. Thanks again for the suggestion.", ".\nI have several questions:\n\nfirst, is using whole data or whole unobserved data necessary? Using whole data is better than sampling methods? I think it may depend, for some relative dense data such as in nlp-word embedding task, particularly for large word corpus, using whole data performs much worse than the sampling methods. The performance of whole data based models is largely determined by the weighting of the unobserved or negative examples. 
for example in [Bayer et al., 2017], they only use a constant weight and compare with very simple baseline. The model is not applicable for models with weights that associate with both users and items in the recommendation scenario. it is unknown whether whole data based method can beat state-of-the-art. do authors agree?\n\n2 The model structure is limited to the dot product structure. Although it is a very popular structure in previous literature , it is not the case for deep models. A simple dot product structure is limited in modeling complicated relations. The common way is to add a full-connected layer on top of dot product. it seems that the current model does not support this popular structure.\n\n3 the current optimization method is limited to least square loss? what about logistic loss for classification\n\n4 The mathematical derivation in section2.2 is very hard to follow. Can you give some motivations and a little bit more details. \n\n5 what about the negative weighting design in equation 1?\n6 eq.(1) is not clear ? why the first term is \\sum_i^n as the number of observed examples should be much larger than n\nwhy the second term is \\sum_i^n\\sum_i^n, e,g, in recommender system, the number of user and items are different.\n6 will you release the code if it is accepted. The mathematics are kinda very hard to follow for most readers. Do you think the algorithm is good to be used in industry?", "Definitely, it's a good paper.\nSampling-based methods has dominated the main trend for many years, through BPR in recommendation field and negative sampling in word embedding. Some previous research proposed to train from whole data while their methods only focused on shadow linear models like matrix factorization. This paper proposed to extend the framework of learning from whole data to deep learning based embeddings by using Gramian estimates.\nSeveral questions:\n1. Although the proposed scheme can get rid of sampling, the final layer must be an inner product. Will it limit the performance of the model?\n2.The hyperparameter lambda is defined as the weight for negative samples. Is it reasonable to assign a uniform weight for all samples?\n3.Could you please public the code for one of your evaluation tasks? ", "This paper studies the problem of learning embeddings on large corpora, and proposes to replace the commonly used sampling mechanism by an online Gramian estimate. It seems like the proposed Gramian estimate allows a lot of information reuse (which is otherwise lost in baseline sampling methods) and hence improves the training. \n\nI liked the idea of maintaining an estimate of an important (and relatively small-sized) quantity to allow information reuse, and I think it has the potential to be generalized into similar types of problems as well.\n\nA question about the experiment: in Section 4.2 it is shown that the maintained Gramian estimates are indeed better than sampling estimates. Perhaps a similar test can be done on the gradients, and hopefully the stochastic gradients given by the Gramian estimate are indeed closer to the full gradient, compared with the baseline sampling methods?" ]
[ 8, -1, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_Hke20iA9Y7", "iclr_2019_Hke20iA9Y7", "r1gf2j2w27", "rJgsqcLt3m", "ryx3glNinm", "iclr_2019_Hke20iA9Y7", "iclr_2019_Hke20iA9Y7", "BygKKCEuiX", "r1xBJUGgo7", "rJx8ypbkjQ", "rkeCV3lK5m", "rke9t6tEcm", "iclr_2019_Hke20iA9Y7", "iclr_2019_Hke20iA9Y7", "iclr_2019_Hke20iA9Y7" ]
iclr_2019_Hke4l2AcKQ
MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders
Variational Autoencoder (VAE), a simple and effective deep generative model, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. However, recent studies demonstrate that, when equipped with expressive generative distributions (aka. decoders), VAE suffers from learning uninformative latent representations with the observation called KL Varnishing, in which case VAE collapses into an unconditional generative model. In this work, we introduce mutual posterior-divergence regularization, a novel regularization that is able to control the geometry of the latent space to accomplish meaningful representation learning, while achieving comparable or superior capability of density estimation. Experiments on three image benchmark datasets demonstrate that, when equipped with powerful decoders, our model performs well both on density estimation and representation learning.
accepted-poster-papers
This paper proposes a solution for the well-known problem of posterior collapse in VAEs: a phenomenon where the posteriors fail to diverge from the prior, which tends to happen in situations where the decoder is overly flexible. A downside of the proposed method is the introduction of hyper-parameters controlling the degree of regularization. The empirical results show improvements on various baselines. The paper proposes the addition of a regularization term that penalizes pairwise similarity of posteriors in latent space. The reviewers agree that the paper is clearly written and that the method is reasonably motivated. The experiments are also sufficiently convincing.
train
[ "HkxNWckD0Q", "SJec11dj3X", "ByxYlNP4C7", "HyeQs7536Q", "r1eryMXiTX", "BJgo0o0NnX", "HklIzEowaQ", "H1l3IfiwaQ", "Byl31zowTX", "rJgA-rlU37", "BylPAIJZ97", "BJlD3jFg9X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Thanks to the authors for the work in addressing my questions and comments.\n\n2. That’s interesting to know, makes sense indeed. I would explicitly indicate this in your “Measure of Smoothness.” section then, as this does not come across in the current text.\nThe new figure in Appendix B.1.3 is interesting to see, but does not seem to indicate such a drastic effect, which I guess might be due to t-SNE “fixing it”, but I am not sure what would be the best way to showcase this effect. \n\n5. Yes sorry what I meant by “latent traversals” is something akin to the single unit clamping done in Beta-VAE (Higgins et al 2017, https://openreview.net/forum?id=Sy2fzU9gl). In your case, given you have latents with 32 dimensions this is harder to do easily, hence interpolations might be interesting to see indeed.\n\nI think the updated results seem to make the model stronger and show visible improvements on VLAE. \nI am still a bit unclear on the exact characteristics of the latent space learnt and I’m looking forward to see more work in that direction. \n\nHence the paper does seem good enough in its current state, so I’d recommend publication as a poster (keeping my score, increasing my confidence).\n", "This paper proposes changes to the ELBO loss used to train VAEs, to avoid posterior collapse. They motivate their additional components rather differently than what has been done in the literature so far, which I found quite interesting.\nThey compare against appropriate baselines, on MNIST and OMNIGLOT, in a complete way.\n\nOverall, I really enjoyed this paper, which proposed a novel way to regularise posteriors to force them to encode information. However, I have some reservations (see below), and looking squarely at the results, they do not seem to improve over existing models in a significant manner as of now.\n\nCritics:\n1.\tThe main idea of the paper, in introducing a measure of diversity, was well explained, and is well supported in its connection to the Mutual Information maximization framing. One relevant citation for that is Esmaeili et al. 2018, which breaks the ELBO into its components even further, and might help shed light on the exact components that this new paper are introducing. E.g. how would MAE fit in their Table A.2?\n2.\tOn the contrary, the requirement to add a “Measure of Smoothness” was less clear and justified. Figure 1 was hard to understand (a better caption might help), and overall looking at the results, it is even unclear if having L_smooth is required at all?\n
Its effect in Tables 1, 2 and 3 looks marginal at best?
\nGiven that it is not theoretically supported at all, it may be interesting to understand why and when it really helps.\n3.\tOne question that came up is “how much variance does the L_diverse term has”? If you’re using a single minibatch to get this MC estimate, I’m unsure how accurate it will be. Did changing M affect the results?\n4.\tL_diverse ends up being a symmetric version of the MI. What would happen if that was a Jensen-Shannon Divergence instead? This would be a more principled way to symmetrically compare q(z|x) and q(z).\n5.\tOne aspect that was quite lacking from the paper is an actual exploration of the latent space obtained. 
The authors claim that their losses would control the geometry of the latents and provide smooth, diverse and well-behaved representations. Is that the case?\n
Can you perform latent traversals, or look at what information is represented by different latents?
 \nThis could actually lend support to using both new terms in your loss.\n6.\tReconstructions on MNIST by VLAE seem rather worst than what can be seen in the original publication of Chen et al. 2017? Considering that the re-implementation seems just as good in Table 1 and 3, is this discrepancy surprising?\n7.\tFigure 2 would be easier to read by moving the columns apart (i.e. 3 blocks of 3 columns).\n\nOverall, I think this is an interesting paper which deserves to be shown at ICLR, but I would like to understand if L_smooth is really needed, and why results are not much better than VLAE.\n\nTypos:\n-\tKL Varnishing -> vanishing surely?\n-\tDevergence -> divergence\n", "The changes to the paper look great, thanks for your updates. They do not, however, change my basic opinion of the paper and so I will maintain my score as is.", "Thank you for upgrading your score!\nWe really appreciate your suggestion to evaluate learned representations with simple non-linear classifiers.\nWe are performing experiments with SVM using non-linear kernels and will update results soon.\n", "Thank you for your clarifications and the additional experiments. As a result of these, I have increased my score by one point.\n\nI agree with your comments on the importance of learning interpretable and disentangled representation. However notice that this can also be achieved learning simple non-Euclidean spaces, that may require however a simple but non-linear classifiers (e.g. 1-layer neural network with a small number of hidden units, non-linear SVM).", "In this paper the authors present mutual posterior divergence regularization, a data-dependent regularization for the ELBO that enforces diversity and smoothness of the variational posteriors. The experiments show the effectiveness of the model for density estimation and representation learning.\nThis is an interesting paper dealing with the important issues of fully exploiting the stochastic part of VAE models and avoiding inactive latent units in the presence of very expressive decoders. The paper reads well and is well motivated. \n\nThe authors claim that their method is \"encouraging the learned variational posteriors to be diverse\". While it is important to have models that can use well the latent space, the constraints that are encoded seem too strong. If two data points are very similar, why should there be a term encouraging their posterior approximation to be different? In this case, their true posteriors will be in fact be similar, so it seems counter-intuitive to force their approximations to be different.\n\nThe numerical results seem promising, but I think they could be further improved and made more convincing.\n- For the density estimation experiments, while there is an improvement in terms of NLL thanks to the new regularizer, it is not clear which is the additional computational burden. How much longer does it takes to train the model when computing all the regularization terms in the experiments with batch size 100? \n- I am not completely convinced by the claims on the ability of the regularizer to improve the learned representations. K-means implicitly assumes that the data manifold is Euclidean. However, as shown for example by [Arvanitidis et al. Latent space oddity: on the curvature of deep generative models, ICLR 2018] and other authors, the latent manifold of VAEs is not Euclidean, and curved riemannian manifolds should be used when computing distances and performing clustering. 
Applying k-means in the high dimensional latent spaces of ResNet VAE and VLAE does not seem therefore a good idea.\nOne possible reason why your MAE model may perform better in the unsupervised clustering of table 2 is that the terms added to the elbo by the regularizer may force the space to be more Euclidean (e.g. the squared difference term in the Gaussian KL) and therefore more suitable for k-means. \n- The semi-supervised classification experiment is definitely better to assess the representation learning capabilities, but KNN suffers with the same issues with the Euclidean distance as in the k-means experiments, and the linear classifier may not be flexible enough for non-euclidean and non-linear manifolds. Have you tried any other non-linear classifiers?\n- Comparisons with other methods that aim at making the model learn better representation (such as the kl-annealing of the beta-vae) would be useful.\n- The lack of improvements on the natural image task is a bit concerning for the generalizability of the results.\n\nTypos and minor comments:\n- devergence -> divergence in introduction\n- assistant -> assistance in 2.3\n- the items (1) and (2) in 3.1 are not very clear\n- set -> sets in 3.2\n- achieving -> achieve below theorem 1\n- cluatering -> clustering in table 2", "Thank you for the insightful comments! \n-- For your questions and concerns about the results on CIFAR-10, please see this post:\nhttps://openreview.net/forum?id=Hke4l2AcKQ&noteId=BylQ2fjL6X \nwhere we show stronger performance of our model.\n\n-- For your questions about the motivation of our method:\n “encouraging the learned variational posteriors to be diverse” is the motivation of L_diversity. If we only have L_diversity in our regularization method, it is, as in your comment, counter-intuitive for similar data points. However, by adding the smoothness term L_smooth, we expect that the model itself is able to learn how to balance diversity and smoothness to capture both diverse patterns in different data points and shared patterns in similar ones. And our experimental results show that these two regularization terms together help achieve stronger performance.\n\n-- For your questions about additional computational burden:\nIn order to train the model with large batch size, like 100, it requires more memory. But the computation of all the regularization terms is relatively efficient comparing to the computation of other parts of the objective. And the model converges as fast as that without the regularization.\n\n-- We really appreciate your comments about the evaluation of the learned representations. \nWe agree that the latent manifold of VAEs may not be Euclidean.\nHowever, as discussed in our paper and previous works, good latent representations need to capture global structured information and disentangle the underlying causal factors, tease apart the underlying dependencies of the data, so that it becomes easier to understand, to classify, or to perform other tasks. Evaluating learned representations with unsupervised or semi-supervised methods with limited capacity is a reasonable way and has been widely adopted by previous works. From this perspective, it might be an important advantage of our method if our regularizer can force the space to be more Euclidean, because the learned representations are easier to be interpreted and utilized. 
Flexible classifiers might favor representations by just memorizing the data, thus not providing fair evaluation of the learned representations.\n", "Thank you for the insightful comments! \n\nFor your questions and concerns about the results on CIFAR-10 with more expressive decoders, please see this post:\nhttps://openreview.net/forum?id=Hke4l2AcKQ&noteId=BylQ2fjL6X\nwhere we show stronger performance with more expressive decoders for our model.\n\nFor your specific questions, \n1 & 2. We appreciate your suggestion to perform ablation experiments for the two terms in our regularizer. Actually, both of the regularization terms play important roles. Without L_smooth, the model will easily place different posteriors into isolated points far away from each other, obtaining L_diversity close to zero, and the model performance on both density estimation and representation learning is worse than original VLAE without the regularization. Moreover, removing the L_smooth term, the training of the model becomes unstable.\n\n3. Thanks for your suggestion, we have added samples from VLAE in the updated version.\n\n4. Thanks for your comment, we have revised the paper to fix the grammatical mistakes.\n", "Thank you for the insightful comments! \nFor your questions:\n1. Thanks for pointing out the related work. We cited Esmaeili’s paper in our updated version. Actually, MAE does not fit anyone in their Table A.2. If we also decompose our objective in the same, our objective is, if we use the original form of MPD and ignore L_sommth, term (1) + (2) + (4’), where (4’) is a modified version of (4).\nThe original (4) is KL(q(z) || p(z)) = E_q(z} [log q(z) - log p(z)], while (4’) is E_{p(x) q(z)} [log q(z|x) - log p(z)]\n\n2. In our experiments, L_smooth plays a very important role. If we remove it, the model will easily place different posteriors into isolated points far away from each other, obtaining L_diversity close to zero. This phenomenon becomes more serious when a more powerful prior is applied, like auto-regressive flow. The unsupervised clustering and semi-supervised classification experiments justified the necessity of L_smooth. We also visualized the latent spaces with different settings in Appendix B.1.3, which might be helpful to understand the effects of the two regularization terms.\n\nFrom the theoretical perspective, we have not provided rigorous support of L_smooth and will leave it to future work.\n\n3. In order to better approximate L_diversity, we used large batch size in our experiments. For binary images, we use batch size 100. For natural images, due to memory limits, we use 64. The details are provided in Appendix. In practice, we found that these batch sizes provide stable estimation of L_diversity.\n\n4. As we discussed in the paper, one advantage of our regularization method is that L_diversity is computationally efficient. Previous works such as InfoVAE and AAE also has considered the Jensen-Shannon Divergence. But directly optimizing it is intractable, and they applied adversarial learning methods.\n\n5. We plan to show the reconstruction results with linearly interpolated z-vectors in another updated version. We appreciate your suggestions if there are better ways of investigating the latent space in terms of \"latent travelsals\".\n\n6. The possible reason that VLAE obtained worse reconstruction than the original paper is that in our experiments, we used more powerful decoders with more layers and receptive fields. 
We want to test the performance of our regularizer with sufficiently expressive decoders. With more powerful decoders, our reimplementation of VLAE achieved better NLL but worse reconstruction, showing that VLAE suffers the KL varnishing issue with stronger decoders.\n\n7. Thanks for your suggestion! We will make figure 2 easier to understand and update the revised version later.\n", "This paper presents a new regularization technique for VAEs similar in motivation and form to the work on InfoVAE. The basic intuition is to encourage different training samples to occupy different parts of z-space, by maximizing the expected KL divergence between pairwise posteriors, which they call Mutual Posterior-Divergence (MPD). They show that this objective is a symmetric version (sum of the forward and reverse KL) of the Mutual Info regularization used by the InfoVAE. In practice however, they do not actually use this objective. They use a different regularization which is based on the MPD loss but they say is more stable because it's always greater than zero, and ensures that all latent dimensions are used. In addition to the MPD based term, they also add another term which encouraging the pairwise KL-divergences to have a low standard-deviation, to encourge more even spreading over the z-space rather than the clumpy distribution that they observed with only the MPD based term.\n\nThey show state of the art results on MNIST and Omniglot, improving over the VLAE. But on natural data (CIFAR10), their results are worse than VLAE. \n\nPros:\n\t1. The technique has a nice intuitive (but not particularly novel) motivation which is kinda-sorta theoretically motivated if you squint at it hard enough.\n\t2. The results on the simple datasets are solid and encouraging.\n\nCons:\n\t1. The practical implementation is a bit ad-hoc and requires turn two additional hyper parameters (like most regularization techniques).\n\t2. The basic motivation and observations are the same as InfoVAE, so it's not completely novel.\n\t3. The CIFAR10 results are bit concerning, and one can't help but wondering if the technique really only helps when the data has simpler shared structure.\n\nOverall: I think the idea is interesting enough, and the results encouraging enough to be just above the bar for acceptance at ICLR.\n\nI have the following question for the authors:\n\n\t1. Why do you use the truncated pixelcnn on CIFAR10? Did you try it with the more expressive decoder (as was used on the binary images) and got worse results? or is there some other justification for this difference?\n\nI would have like to see the following modifications to the paper:\n\n\t1. The paper essentially presents two related but separate regularization techniques. It would be nice to have ablation results to show how each of these perform on their own.\n\t2. Bonus points for showing results which combine VLAE (which already has a form of the MPD regularization) with the smoothness regularization.\n\t3. It would be nice to see samples from VLVAE in Figure 3 next to the MAE samples to more easily compare them directly.\n\t4. There are many grammatical and English mistakes. 
The paper is still quite readably, but please make sure the paper is proofread by a native English speaker.\n", "Thanks for pointing out the related work missed in the paper.\nWe will cite and compare with it in our revised version.\n\nWe really appreciate your comments about the notation used in the appendix.\nWe will revise it.", " This work is closely related to the following work:\n R. D. Hjelm et al, \"Learning deep representations by mutual information estimation and maximization\", https://arxiv.org/abs/1808.06670.\n\n I would suggest the authors cite the latest work and compare the performance between the two methods. \n\n By the way, in the appendix, it mentions that the KL divergence is equal to H(.,.) - H(.), where H(.,.) denotes the relative entropy. Note that relative entropy is actually the KL divergence. Please use a proper name to define H(.,.). The different information measures can be found in \n 1. Cover and Thomas, \"Elements of Information Theory\".\n 2. Raymond Yeung, \"Information Theory and Network Coding\"\n 3. Robert Gallager, \"Information Theory and Reliable Communication\"" ]
[ -1, 7, -1, -1, -1, 6, -1, -1, -1, 6, -1, -1 ]
[ -1, 5, -1, -1, -1, 4, -1, -1, -1, 4, -1, -1 ]
[ "Byl31zowTX", "iclr_2019_Hke4l2AcKQ", "H1l3IfiwaQ", "r1eryMXiTX", "HklIzEowaQ", "iclr_2019_Hke4l2AcKQ", "BJgo0o0NnX", "rJgA-rlU37", "SJec11dj3X", "iclr_2019_Hke4l2AcKQ", "BJlD3jFg9X", "iclr_2019_Hke4l2AcKQ" ]
iclr_2019_HkeGhoA5FX
Residual Non-local Attention Networks for Image Restoration
In this paper, we propose a residual non-local attention network for high-quality image restoration. Without considering the uneven distribution of information in the corrupted images, previous methods are restricted by local convolutional operation and equal treatment of spatial- and channel-wise features. To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts. Specifically, we design trunk branch and (non-)local mask branch in each (non-)local attention block. The trunk branch is used to extract hierarchical features. Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions. The local mask branch concentrates on more local structures with convolutional operations, while non-local attention considers more about long-range dependencies in the whole feature map. Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhance the representation ability of the network. Our proposed method can be generalized for various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution. Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods quantitatively and visually.
accepted-poster-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion. - strong qualitative and quantitative results - a good ablative analysis of the proposed method. 2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision. - clarity could be improved (and was much improved in the revision). - somewhat limited novelty. 3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately. No major points of contention. 4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another. The reviewers reached a consensus that the paper should be accepted.
train
[ "BJeBKz333m", "B1lVIRTKAX", "H1llV0aYAX", "SJlreA6FCQ", "BylyOpaKRX", "B1gPVaaYAQ", "Bygp3i6YA7", "B1lKz511pQ", "BJx1ow5K2X" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a convolutional neural network architecture that includes blocks for local and non-local attention mechanisms, which are claimed to be responsible for achieving excellent results in four image restoration applications.\n\n\n# Results\nThe strongest point of the paper is that the quantitative and qualitative image restoration results appear to be very good, although they seem almost a bit too good.\n\n\n# Novelty\nI'm not sure about the novelty of the paper, but I suspect it to be rather incremental. The paper says \"To the best of our knowledge, this is the first time to consider residual non-local attention for image restoration problems.\" Does that mean non-local attention (in a very similar way) has already been used, just not in a residual fashion? If so, that would not constitute much novelty. I have to admit that I'm not familiar with the related work on attention, but I did not understand *why* the results of the proposed method are supposed to be much better than that of previous work.\n\n\n# Clarity\nI think the paper is not self-contained enough, since it seems to implicitly assume substantial background knowledge on attention mechanisms in CNNs. \n\nFurthermore, the introduction of the paper identifies three problems with existing CNNs that I don't necessarily fully agree with. None of these supposed problems are backed up by (experimental) evidence.\n\nI don't think it is sufficient to just show superior results than previous methods. It is also important to disentangle why the results are better. However, the presented ablation experiments are not very illuminating to me.\n\nThe attempts at explaining what the novel attention blocks do and why they lead to superior results are very vague to me. Maybe they are understandable in the context of related work, but I found many statements, such as the following, devoid of meaning:\n- \"Without considering the uneven distribution of information in the corrupted images, [...]\"\n- \"However, in this paper, we mainly focus on learning non-local attention to better guide feature extraction in trunk branch.\"\n- \"We only incorporate residual non-local attention block in low-level and high-level feature space. This is mainly because a few non-local modules can well offer non-local ability to the network for image restoration.\"\n- \"The key point in mask branch is how to grasp information of larger scope, namely larger receptive field size, so that it’s possible to obtain more sophisticated attention map.\"\n\n\n# Experiments\n- The experimental results are the best part of the paper. However, it would've been nice to include some qualitative results in the main paper.\n- The proposed RNAN model is trained on a big dataset (800 images with ~2 million pixels each). Are the competing methods trained on datasets of similar size? If not, this could be a major reason for improved performance of RNAN over competing methods. At least in the appendix, RNAN and FFDNet are compared more fairly since they are trained with the same/similar data.\n- The qualitative examples in the appendix mostly show close-ups/details of very structured regions (mostly stripy patterns). Please also show some other regions without self-similar structures.\n\n\n# Misc\n- Residual non-local attention learning (section 3.3) was not clear to me.\n- The word \"trunk\" is used without definition or explanation.\n- Fig. 
2 caption is too short, please expand.\n\n# Update (2018-11-29)\nGiven the substantial author feedback, I'm willing to raise my score.", "Q3-8: - The proposed RNAN model is trained on a big dataset (800 images with ~2 million pixels each). Are the competing methods trained on datasets of similar size? If not, this could be a major reason for improved performance of RNAN over competing methods. At least in the appendix, RNAN and FFDNet are compared more fairly since they are trained with the same/similar data.\nA3-8: First, for image super-resolution, EDSR and our RNAN used DIV2K 800 images for training. SRMDNF and D-DBPN used DIV2K 800 images and Flickr2K 2650 images for training, much more images than ours. Our ANAN obtains better results, while using similar or smaller training set and much less network parameters than those of EDSR and D-DBPN.\nSecond, for image denoising, demosaicing, and compression artifacts reduction, the compared methods use smaller training size. It’s hard to use their official released code to retrain their models with DIV2K 800 images mainly for two reasons. One is that it’s very hard to preprocess data with their codes for DIV2K training data. Second, some of the compared methods (e.g., MemNet) would need large-memory GPU (e.g., Nvidia P40 with 24G memory to train MemNet) and very long training time (e.g., 5 days to train MemNet). \nHowever, to make fair comparisons, we retrain our RNAN with smaller dataset and show the results in Table 8. As we can see, our RNAN still achieves better results, even using smaller training data (e.g., for denoising, we use BSD400, FFDNet uses BSD400+, which has 5144 more images than BSD400). It should also be noted that we only train our network about 2 hours, being far away from well-trained. While, other compared methods would have to take much longer training time. For example, MemNet trains for about 5 days, almost 60 times longer than ours.\n\nQ3-9: - The qualitative examples in the appendix mostly show close-ups/details of very structured regions (mostly stripy patterns). Please also show some other regions without self-similar structures.\nA3-9: First, our RNAN obtains pretty good results for regions with self-similar structures. This comparison also demonstrates the effectiveness of our proposed residual non-local attention network. Thanks for the suggestions, we further add more qualitative results without self-similar structures in the revised paper.\n\nQ3-10: # Misc\n- Residual non-local attention learning (section 3.3) was not clear to me.\n- The word \"trunk\" is used without definition or explanation.\n- Fig. 2 caption is too short, please expand.\nA3-10: Thanks for pointing them out. We have revised the paper to make it better understandable and easy to follow. The word “trunk” mainly means main body to extract features, just being distinguished with mask branch. Moreover, we show it in the Fig. 2. We also expand the caption of Fig. 2.", "Q3-3: # Clarity\nI think the paper is not self-contained enough, since it seems to implicitly assume substantial background knowledge on attention mechanisms in CNNs. \nA3-3: Due to the limited space, we only included key references about attention mechanisms in the previous paper. Thanks for the reviewer’s suggestions, in the revised paper, we add more descriptions about attention mechanisms.\n\nQ3-4: Furthermore, the introduction of the paper identifies three problems with existing CNNs that I don't necessarily fully agree with. 
None of these supposed problems are backed up by (experimental) evidence.\nA3-4: For the first issue, Zhang et al. [R2] investigated that larger patch size contributes more for image denoising to make better use of receptive field size, especially when the noise level is high. In this paper, we use non-local attention to make full use of all the pixels of the inputs simultaneously. We compared DnCNN in [R2] to show the effectiveness of our method.\nFor the second issue, we provide analyses about previous methods, which didn’t use non-local attention for image restoration and lacked discriminative ability according to the specific noisy content. We also provide visual results to demonstrate our analyses. For example, to denoise the kodim11 in Fig. 4, all the previous methods cannot recover the line above the boat. They take the tiny line as a part of plain sky and just remove it. However, our RNAN could keep the line and remove the noise by distinctively treat the line and sky. \nFor the third issue, previous methods seldomly take the features distinctively in channel-wise or spatial-wise. Namely, they take the feature maps equally, which lacks flexibility in the real cases. Instead, we learn non-local mixed attention to guide the network training and obtain stronger representational ability. We support this claim with the ablation study and comparisons with other methods quantitatively and qualitatively.\n[R2] Zhang, Kai, et al. \"Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising.\" TIP 2017.\n\nQ3-5: I don't think it is sufficient to just show superior results than previous methods. It is also important to disentangle why the results are better. However, the presented ablation experiments are not very illuminating to me.\nA3-5: Please refer to A3-2 for the reasons and analyses why our results are better. On the other hand, the ablation study is used to verify the effects of each proposed component. It also serves as a guidance for us to decide the final network structure.\n\nQ3-6: The attempts at explaining what the novel attention blocks do and why they lead to superior results are very vague to me. Maybe they are understandable in the context of related work, but I found many statements, such as the following, devoid of meaning:\n- \"Without considering the uneven distribution of information in the corrupted images, [...]\"\n- \"However, in this paper, we mainly focus on learning non-local attention to better guide feature extraction in trunk branch.\"\n- \"We only incorporate residual non-local attention block in low-level and high-level feature space. This is mainly because a few non-local modules can well offer non-local ability to the network for image restoration.\"\n- \"The key point in mask branch is how to grasp information of larger scope, namely larger receptive field size, so that it’s possible to obtain more sophisticated attention map.\"\nA3-6: We summarize our main contribution as three-fold and corresponding brief explanations at the end of Introduction. We also try our best to revise the, aiming to make it better understandable to readers.\n\nQ3-7: # Experiments\n- The experimental results are the best part of the paper. However, it would've been nice to include some qualitative results in the main paper.\nA3-7: Due to the limited space, we didn’t include some qualitative results in the main body of the paper. 
Thanks for the reviewer’s suggestion, we add some qualitative results in the main body of the revised one.", "We thank Reviewer3 for his/her valuable comments. We will release the code and pretrained model reproducing the results in the paper soon. Our responses are as follows:\n\nQ3-1: # Results\nThe strongest point of the paper is that the quantitative and qualitative image restoration results appear to be very good, although they seem almost a bit too good.\nA3-1: We mainly show the effectiveness of our idea and don’t pursue higher performance. We were surprising to find that our current model has achieved much better performance than most previous methods in image restoration. Actually, in our later research, we further obtained better results based on the idea in this paper. Anyway, we will release the train/test codes and pretrained models soon, which reproduce the exact results in this paper. \n\nQ3-2: # Novelty\nI'm not sure about the novelty of the paper, but I suspect it to be rather incremental. The paper says \"To the best of our knowledge, this is the first time to consider residual non-local attention for image restoration problems.\" Does that mean non-local attention (in a very similar way) has already been used, just not in a residual fashion? If so, that would not constitute much novelty. I have to admit that I'm not familiar with the related work on attention, but I did not understand *why* the results of the proposed method are supposed to be much better than that of previous work.\nA3-2: Non-local attention was NOT used for image restoration in terms of papers in CVPR/ICCV/ECCV/NIPS/ICML/ICLR. We are the first to investigate non-local attention for image denoising, demosaicing, compression artifact reduction, and super-resolution simultaneously. The reasons why we propose residual non-local attention learning (in Section 3.3 of the main paper) are mainly as follows:\n(1) It is a proper way to incorporate non-local attention into the network and contribute to the image restoration performance.\n(2) It allows us to train very deep networks by preserving more low-level features, being more suitable for image restoration. \n(3) It allows the network to pursue better representational ability. We demonstrate its effectiveness in both the main paper and our response to Reviewer2.\nThe reasons why our proposed method achieves much better results than that of previous works are as follows:\n(1) Our residual non-local attention network is an effective network structure for high-quality image restoration. No matter we use small training data (e.g., Table 8 in main paper) or DIV2K (e.g., Table 6 in the main paper), our method achieves better results than most compared ones. Let’s take image super-resolution as an example, even though some other methods have larger number of network parameters (e.g., EDSR and D-DBPN), our method still achieves better performance.\n(2) Our proposed residual attention learning allows we train very deep network, achieve stronger representation ability. We’re the first to investigate such a deep network for image denoising, demosiacing, and compression artifacts reduction.\n(3) Our proposed method is powerful enough to further take advantage of larger training set (e.g., DIV2K). As we show Table 8 in the main paper, for small training data, we only train our network about 2 hours, being far away from well-trained. While, other compared methods would have to take much longer training time. 
For example, MemNet (Tai et al., 2017) trains for about 5 days, almost 60 times longer than ours. ", "Q1-3: - The contribution of the non-local operation is not clear to me. For example, how does the global information (i.e., long-range dependencies between pixels) help to solve image denoising tasks such as image denoising?\nA1-3: Zhang et al. [R2] investigated that larger patch size contributes more for image denoising to make better use of receptive field size, especially when the noise level is high. Similar observation could also be found in image super-resolution [R3]. Although large patch size makes better use of larger receptive field size, previous methods are restricted by local convolutional operation and equal treatment of spatial and channel-wise features. \nIn this paper, we use non-local attention to make full use of all the pixels of the inputs simultaneously. Namely, all the positions are considered to obtain better attention maps. Such non-local mixed attention enhances the network with distinguished power for noise and image content. For example, to denoise the kodim11 in Fig. 4, all the previous methods cannot recover the line above the boat. They take the tiny line as a part of plain sky and just remove it. However, our RNAN could keep the line and remove the noise by distinctively treat the line and sky with non-local mixed attention.\n[R2] Zhang, Kai, et al. \"Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising.\" TIP 2017.\n[R3] Wang, Xintao, et al. \"ESRGAN: Enhanced super-resolution generative adversarial networks.\" ECCVW 2018.\n\nQ1-4: Overall, the technical contribution of the proposed method is not so high, but the proposed method is valuable and promising if we focus on the performance.\nA1-4: As Reviewer2 said ‘However, up to some point all the new ConvNet designs can be seen as incremental developments of the older ones, yet they are needed for the progress of the field.’, we have to admit that too many CNN based works focus on performance. What’s more, some works by famous companies need hundreds and thousands of high-performance GPUs, use tons of data, and take tens of days to train their networks. Although they achieve very impressive results based on existing network structures, researchers (e.g., students in most universities) without so much resource cannot even run their released codes. Such works consumes so much resource that it becomes undoable for researchers with limited resources. However, such kinds of works are not challenged or blamed with their ‘novelty’ very much and there tends to be more and more such very-large-resource-consuming works.\nIn contrast, in this work, we design a compact yet effective network for image restoration. We conduct extensive experiments to demonstrate the positive contributions of each component and the effectiveness of the idea. We are the first to investigate non-local attention in image restoration tasks. Although we can make more complex network structures to achieve more ‘novelty’ and better performance, we didn’t. In fact, in our later works, we obtained much better results based on the idea in this paper. \nWe want to inspire other researchers to investigate more about non-local attention for the large community, image restoration, with limited resource. All the experiments can be done with one regular GPU (e.g., 12G memory). The results are also reproducible, as we will release the train/test codes and pretrained models. ", "We thank Reviewer1 for his/her valuable comments. 
We will release the code and pretrained model reproducing the results in the paper soon. Our responses are as follows:\n\nQ1-1: - Cons\n - It would be better to provide the state-of-the-art method[1] in the super-resolution task. \n [1] Y. Zhang et al., Image Super-Resolution Using Very Deep Residual Channel Attention Networks, ECCV, 2018.\nA1-1: Thanks for the suggestion. RCAN [1] is very powerful and shows great performance gains over previous SR methods. We include the RCAN [1] for comparison in the revised paper. It should be noted that RCAN mainly focus on much deeper network design and channel attention. Our network depth is much shallower than that of RCAN. Our RNAN mainly focus on investigating residual non-local attention and its application for image restoration. We believe that our RNAN could also contribute to RCAN to obtain better performance.\n\nQ1-2: - The technical contribution of the proposed method is not high, because the proposed method seems to be just using existing methods.\nA1-2: Our main principle of network design is to make it ‘Compact yet work’. This work mainly focuses on investigating the usage of residual local and non-local attention for image restoration. Based on some existing concepts (e.g., residual block, non-local network), we conduct extensive experiments to obtain such a compact network structure and demonstrate its effectiveness. We mainly show the effectiveness of our idea and don’t pursue higher performance by further refining the network modules. We believe that more and more related works could be done to further improve such a compact network.", "We thank Reviewer2 for his/her valuable comments and approval for our work. We will release the code and pretrained model reproducing the results in the paper soon. Our responses are as follows:\n\nQ2-1: The main weakness of the paper is the limited novelty, as the proposed design builds upon existing ideas and concepts. However, up to some point all the new ConvNet designs can be seen as incremental developments of the older ones, yet they are needed for the progress of the field.\nA2-1: Our main principle of network design is to make it ‘Compact yet work’. This work mainly focuses on investigating the usage of residual local and non-local attention for image restoration. Based on some existing concepts (e.g., residual block, non-local network), we conduct extensive experiments to obtain such a compact network structure and demonstrate its effectiveness. We mainly show the effectiveness of our idea and don’t pursue higher performance by further refining the network modules. We believe that more and more related works could be done to further improve such a compact network.\n\nQ2-2: Inclusion of more related works, such as: \nTimofte et al., \"NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results\", CVPRW 2018\nWang et al., \"A fully progressive approach to single-image super-resolution\", CVPRW 2018\nAgustsson et al., NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study, CVPRW 2017\nBlau et al., \"2018 PIRM Challenge on Perceptual Image Super-resolution\", ECCVW 2018\nZhang et al., \"Image Super-Resolution Using Very Deep Residual Channel Attention Networks\", ECCV 2018\nA2-2: The NTIRE and PIRM challenges and recent related works really contribute to the image restoration community very much. 
We have included those valuable works and given corresponding analyses in the revised paper.\n\nQ2-3: Why not using dilated convolutions instead of or complementary with the mask branch or other design choices from this paper?\nA2-3: First of all, we investigated the usage of dilated convolutions in mask branch before and found that it didn’t make obvious difference. Dilated convolution may be a good choice to obtain spatial attention, as done in BAM [R1]. While, in this paper, we target to obtain non-local mixed attention, including channel and spatial attention simultaneously.\n[R1] Park, Jongchan, et al. \"BAM: bottleneck attention module.\" BMVC 2018.\nFurthermore, we provide more experiments using dilated convolutions in mask branch to demonstrate our above claims. Here we give a brief introduction to the experiments. As dilated convolutions are good at obtaining larger receptive field size, we remove all the non-local blocks in our network. We divide the experiments as 4 cases.\nCase-1: we replace the mask branch with two dilated convolutions and remove our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (7) for attention learning.\nCase-2: we replace the mask branch with two dilated convolutions and keep our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (8) for attention learning.\nCase-3: we add two dilated convolutions in the previous mask branch and remove our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (7) for attention learning.\nCase-4: we add two dilated convolutions in the previous mask branch and keep our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (8) for attention learning.\nWe test the performance on Set5 for color image denoising with noise level=30. To save training time, we set path size as 48, block number as 7. The performance comparisons (in terms of PSNR (dB) within 200 epochs) are as follows:\nCase-1: 31.486 dB; Case-2: 31.508 dB; Case-3: 31.535 dB; Case-4: 31.552; RNAN: 31.602 dB. \nCompare Case-1 and -2, or Case-3 and -4, we can see that our proposed residual attention learning is more suitable for image restoration and contributes to the performance.\nCompare Case-2 and RNAN, we find that mix attention works better than simple spatial attention.\nCompare Case-4 and RNAN, we find that non-local block helps to learn better attention by taking long-range dependencies between pixels than that with dilated convolutions.", "The authors propose a residual non-local attention net (RNAN) which combines local and non-local blocks to form a deep CNN architecture with application to image restoration.\n\nThe paper has a compact description, provides sufficient details, and including the appendix has an excellent experimental validation.\n\nThe proposed approach provides top results on several image restoration tasks: image denoising, demosaicing, compression artifacts reduction, and single image super-resolution.\n\nThe main weakness of the paper is the limited novelty, as the proposed design builds upon existing ideas and concepts. 
However, up to some point all the new ConvNet designs can be seen as incremental developments of the older ones, yet they are needed for the progress of the field.\n\nI would suggest to the authors the inclusion of related works such as:\nTimofte et al., \"NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results\", CVPRW 2018\nWang et al., \"A fully progressive approach to single-image super-resolution\", CVPRW 2018\nNote that DIV2K dataset was introduced in:\nAgustsson et al., NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study, CVPRW 2017\n\nalso, the more recent related works:\nBlau et al., \"2018 PIRM Challenge on Perceptual Image Super-resolution\", ECCVW 2018\nZhang et al., \"Image Super-Resolution Using Very Deep Residual Channel Attention Networks\", ECCV 2018\n\nAlso, I would like a response from the authors on the following:\nWhy not using dilated convolutions instead of or complementary with the mask branch or other design choices from this paper?\n", "- Summary\nThis paper proposes a residual non-local attention network for image restoration. Specifically, the proposed method has local and non-local attention blocks to extract features which capture long-range dependencies. The local and non-local blocks consist of trunk branch and (non-) local mask branch. The proposed method is evaluated on image denoising, demosaicing, compression artifacts reduction, and super-resolution.\n\n- Pros\n - The proposed method shows better performance than existing image restoration methods.\n - The effect of each proposed technique such as the mask branch and the non-local block is appropriately evaluated.\n\n- Cons\n - It would be better to provide the state-of-the-art method[1] in the super-resolution task. \n [1] Y. Zhang et al., Image Super-Resolution Using Very Deep Residual Channel Attention Networks, ECCV, 2018.\n - The technical contribution of the proposed method is not high, because the proposed method seems to be just using existing methods.\n - The contribution of the non-local operation is not clear to me. For example, how does the global information (i.e., long-range dependencies between pixels) help to solve image denoising tasks such as image denoising?\n\nOverall, the technical contribution of the proposed method is not so high, but the proposed method is valuable and promising if we focus on the performance.\n" ]
[ 7, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2019_HkeGhoA5FX", "BJeBKz333m", "BJeBKz333m", "BJeBKz333m", "BJx1ow5K2X", "BJx1ow5K2X", "B1lKz511pQ", "iclr_2019_HkeGhoA5FX", "iclr_2019_HkeGhoA5FX" ]
iclr_2019_HkeoOo09YX
Meta-Learning For Stochastic Gradient MCMC
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model, even a simple modification of the underlying dynamical system requires significant physical intuition. This paper presents the first meta-learning algorithm that allows automated design for the underlying continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of energy landscapes. Experiments validate the proposed approach on Bayesian fully connected neural network, Bayesian convolutional neural network and Bayesian recurrent neural network tasks, showing that the learned sampler outperforms generic, hand-designed SG-MCMC algorithms, and generalizes to different datasets and larger architectures.
accepted-poster-papers
This paper proposes to use meta-learning to design MCMC sampling distributions based on Hamiltonian dynamics, aiming to mix faster on set of problems that are related to the training problems. The reviewers agree that the paper is well-written and the ideas are interesting and novel. The main weaknesses of the paper are that (1) there is not a clear case for using this method over SG-HMC, and (2) there are many design choices that are not validated. The authors revised the paper to address some aspects of the latter concern, but are encouraged to add additional revisions to clarify the points brought up by the reviewers. Despite the weaknesses, the reviewers all agree that the paper exceeds the bar for acceptance. I also recommend accept.
train
[ "BylJFZFmy4", "SkxGLPbKCX", "Skl2LzvOA7", "B1g0UDdF6Q", "Skgd58dF6Q", "rJgWaN_KTX", "BkxAJlS5n7", "r1xb6mtK3X", "rkxTfzKFhQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for the elaboration and clarification", "I have read your response and thank you for your clarifications.\n\nThank you for the precision regarding Stein's; I do agree it is not a significant part of your work but I was wondering if this could be a particularly fragile one. The fact that it does not seem sensitive to kernel choice/hyperparameters seem to be a good indication.\n\nI do not change my initial assessment of the paper as I still think the empirical gains are not very convincing; especially in light of a 1.3-1.5x slowdown.", "Thank you for your reviews, we have revised our submission based on some of the reviews below:\n\n1. We add some comments on using RBF kernel for Stein gradient estimator in the Appendix C, explaining some of the concerns raised by the reviewers.\n\n2. We re-plot the Figure 6\n\n3. 'Particles' are now replaced by 'Samples' to avoid any confusions.\n\nBest,\n\nPaper383 Author", "We thank the reviewer for his/her valuable time for detailed reviews on our submission. Here are our responses for the concerns raised by Reviewer 2:\n\nQ1: Concerns about Stein gradient estimator.\n\nA1: The main objective of this paper is to propose a meta-learning algorithm for SG-MCMC, and the investigation for better kernels is not our main focus. We used RBF kernel in our experiments, and empirically we didn’t find the objective to be very sensitive to the hyper-parameters of this kernel choice. \n\nWe expect a better kernel choice would improve the performance of the meta-sampler even further. However, the choice of kernel for Stein discrepancy remains an open challenge. The typical choices are the RBF kernel with median heuristics [Liu et al, 2016, Chwialkowski et al, 2016, Li & Turner, 2018; Shi et al., 2018] or the IMQ kernel [Gorham & Mackey, 2017, Chen, et al., 2018; Li, 2017]. In this paper, we adopt the settings of RBF kernel, but other kernels can be easily applied. We thank you for pointing out this and we will add discussions in revision. \n\nQ2: Weak results\n\nA2: Our results indeed show significant improvement when compared with SG-MCMC literature. E.g. on MNIST, even with much bigger neural nets, hand-designed SG-MCMC samplers often show around 0.2% improvements over baselines, see Figure 4 in [Chen, et al., 2014] and table 2 in [ Li, et al, 2016].\n\nFor Cifar-10 experiments, we use the same settings as [Luo, et al., 2017] which is the current state-of-the-art SG-MCMC sampler. Their approach improved performance over SGHMC by 1.3% (over our SGHMC baseline), and our approach improved over SGHMC by 0.5%. However, their method uses 5 augmented variables with very carefully engineered dynamics design, while our approach only uses 1 augmented variable (momentum), and the dynamics are learned from data. Thus, we argue that by using the same augmentation as SGHMC, this performance increase is significant. The meta-learning idea is also applicable to improve the sampler of [Luo et al. 2017] which we leave to future work.\n\nQ3: Computation costs.\n\nA3: We report the plots in terms of epochs, making the metric consistent with other SG-MCMC papers. For wall clock time concern, in MNIST experiments the meta sampler took around 1.3-1.5x time when compared with SGHMC (see the last paragraph in appendix B). \n\nAs an important side note, to the best of our knowledge, none of the existing meta-learning optimisation papers has reported results in wall clock time. 
We presume these meta-learned optimisers can be much slower than Adam in real time, however this cost can be amortised using parallel computing. Our proposed sampler can be easily parallelized too, and in such case convergence speed in terms of number of iterations/epochs is more important.\n\nReference:\n\nChen et al. 2016. \"Stochastic gradient hamiltonian monte carlo.\" ICML. 2014.\n\nLi, et al. 2016. \"Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks.\" AAAI 2016.\n\nLiu et al, 2016. “A Kernelized Stein Discrepancy for Goodness-of-fit Tests and Model Evaluation”. ICML 2016.\n\nChwialkowski et al, 2016. “A kernel test of goodness of fit”. ICML 2016.\n\nLiu & Wang, 2016. “Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm”. NIPS 2016.\n\nGorham & Mackey, 2017. “Measuring Sample Quality with Kernels”. ICML 2017\n\nLi, 2017. \"Approximate Gradient Descent for Training Implicit Generative Models.\" NIPS 2017 Bayesian deep learning workshop.\n\nLuo, et al. 2017. \"Thermostat-assisted Continuous-tempered Hamiltonian Monte Carlo for Multimodal Posterior Sampling on Large Datasets.\" arXiv preprint arXiv:1711.11511 .\n\nLi & Turner, 2018. \"Gradient estimators for implicit models.\" ICLR 2018.\n\nShi et al., 2018. \"A Spectral Approach to Gradient Estimation for Implicit Distributions.\" ICML 2018.\n\nChen et al. 2018. \"Stein points.\" ICML 2018.\n", "We thank the reviewer for his/her valuable time for detailed reviews on our submission. Here are our responses for the concerns raised by Reviewer 3:\n\nQ1: “the proposed procedure is quite complicated”\n\nA1: In theory the designing choice of D and Q can be arbitrary and the resulting sampler is still valid due to the completeness results of Ma et al. In practice, however, there are several constraints when concerning scalability. Here \\theta has dimensionality around 15,000 even for the smallest MLP we tested (1 hidden layer with 20 hidden units). Therefore, after momentum variable augmentation, if full rank matrices are used then the Q and D matrices will have around 30000^2 (O(d^2)) entries. Furthermore, computing D^{1/2} has O(d^3) cost, in this example it would be 30000^3. In sum these high costs make full rank matrix design prohibitive, and we resort to diagonal matrices for better scalability. \n\nQ2: “choice of meta learning objective is not obvious”\n\nA2: The proposed two losses target different aspects of the MCMC chain. The cross-chain objective encourages q_t to approach the exact posterior faster. By definition, if q_t = p then the sampler has converged to the stationary distribution. However, cross-chain loss does not reflect mixing properties within a single chain, even when the chain is initialised by samples from p. Therefore we developed the in-chain loss to improve single chain mixing, which encourages the underlying distribution of in-chain samples to have high entropy. \n\nThe choice of divergence in use is a bit tricky. First in MCMC the q distribution is implicitly defined via parallel chain simulation and/or thinning. On the other hand, we can only evaluate the exact posterior (up to a constant) at a given input \\theta. Furthermore evaluating p on the full dataset can be costly, what we have in practice is an unbiased estimate of log p(\\theta|D) with mini-batches. 
These two observations motivate the usage of KL[q||p], where the intractable gradient problem for H[q] is addressed by the Stein gradient estimator (see answers to the next question).\n\nWe do not use GAN-based approaches (as done in some of the citations) since in this case we do not have “real data” from the exact posterior, and \\theta is at least 15,000 dimensions in our smallest MLP. As for integral probability metrics, the only possible choice is kernelized Stein discrepancy (KSD) [Liu et al, 2016, Chwialkowski et al, 2016]. But as shown in [Liu & Wang, 2016], minimising KSD is equivalent to minimising the norm of the gradient of KL in a unit ball of an RKHS. This means the two divergences are closely related, applications of KSD to our task can be an interesting future direction.\n\nQ3: “the use of the Stein gradient estimator is known sometimes to be problematic”\n\nA3: The main objective of this paper is to propose a meta-learning algorithm for SG-MCMC, and the investigation for better kernels is not our main focus. We used RBF kernel in our experiments, and empirically we didn’t find the objective to be very sensitive to the hyper-parameters of this kernel choice. \n\nWe expect a better kernel choice would improve the performance of the meta-sampler even further. However, the choice of kernel for Stein discrepancy remains an open challenge. The typical choices are the RBF kernel with median heuristics [Liu et al, 2016, Chwialkowski et al, 2016, Li & Turner, 2018; Shi et al., 2018] or the IMQ kernel [Gorham & Mackey, 2017, Chen, et al., 2018; Li, 2017]. In this paper, we adopt the settings of RBF kernel, but other kernels can be easily applied. We thank you for pointing out this and we will add discussions in revision. \n\nReference:\n\nLiu et al, 2016. “A Kernelized Stein Discrepancy for Goodness-of-fit Tests and Model Evaluation”. ICML 2016.\n\nChwialkowski et al, 2016. “A kernel test of goodness of fit”. ICML 2016.\n\nLiu & Wang, 2016. “Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm”. NIPS 2016.\n\nGorham & Mackey, 2017. “Measuring Sample Quality with Kernels”. ICML 2017\n\nLi, 2017. \"Approximate Gradient Descent for Training Implicit Generative Models.\" NIPS 2017 Bayesian deep learning workshop.\n\nLi & Turner, 2018. \"Gradient estimators for implicit models.\" ICLR 2018.\n\nShi et al., 2018. \"A Spectral Approach to Gradient Estimation for Implicit Distributions.\" ICML 2018.\n\nChen et al. 2018. \"Stein points.\" ICML 2018.\n", "We thank the reviewer for his/her valuable time for detailed reviews on our submission. Here are our responses for the concerns raised by Reviewer 1:\n\nQ1: “the phrase 'Q_f is responsible for the acceleration of \\theta' is not really instructive” \n\nA1: The Q_f value should not be viewed as the inverse mass matrix, otherwise Q_f should appear at the kinetic energy term inside Hamiltonian term H, where the kinetic term will be p^T Q_f p / 2. The resulting update rule will be different from Eq. 7. Additionally, the Q matrix should satisfy the anti-symmetry property which is not required for inverse mass matrix. \nIn fact, from the first line in Eq.7, we noticed that Q_f is also responsible for scaling of driven forces for momentum p (appears at the \\nabla_{\\pmb{\\theta}}\\pmb{\\tilde{U}}), similarly as in the second line of Eq.7. Thus, we think Q_f controls both the scaling of momentum p and \\pmb{\\theta}. 
Therefore, we conclude it as the acceleration force for \\pmb{\\theta}\n\nQ2: “how the stochastic estimate \\tilde{U}(\\theta) in equation (10) is computed”\n\nA2: The energy function is estimated using the current mini-batch training data. To be specific, at the end of time t, we have a set of K \\theta samples from K parallel chains, and mini-batch observed data with batch size M drawn from the training data set, we use these mini-batch data to estimate U using Eq. 4 for each \\theta_k. \n\nQ3: ”how the correlation between the chains due to thinning for the in-chain loss affects the results”\n\nA3: The chains are run completely in parallel, thus there is no correlation between chains. As for the entropy term in in-chain loss, the gradient estimator took samples inside each chain, and the back-propagation is through each single chain. \n\nQ4: “Did you tune the SGHMC method in Figure 2, as well?” \n\nA4: We use the same step size for both SGHMC and our meta sampler, and that step size is set to be very small in order to reflect the behaviour of the continuous dynamics. We agree that SGHMC with carefully tuned hyper-parameters can perform well, however the point of meta learning is exactly to avoid laborious hyper-parameter tuning by humans, and instead to learn them automatically from data. Indeed this advantage is clearly demonstrated by this synthetic example.\n\nQ6: “How was the tuning of the baseline methods performed?”\n\nA6: To test a meta-learning algorithm, we need to define both the training task and the test taks. Each task contains its own training/validation/test datasets. We train the meta-sampler on the training task. For evaluation, we tune the hyper-parameters of both the baseline samplers and the meta-sampler on the **validation set** of the test task, and report the performances on the **test set** of the test task.\n\nQ7: “Are the results in Figure 3 based on single runs?”\n\nA7: Results in Figure 3 shows both the mean and standard error over 10 runs as in table 1. However the standard errors are too small compared to the magnitude of the classifier error, so it may be not so clear in the figure 3. Will improve this on revision.\n", "In the paper \"Meta-Learning for Stochastic Gradient MCMC\", the authors present a meta-learning approach to automatically design MCMC sampler based on Hamiltonian dynamics to mix faster on problems similar to the training problems. The approach is evaluated on simple multidimensional Gaussians, and Bayesian neural networks (including fully connected, convolutional, and recurrent networks).\n\nMCMC samplers in general, and Hamiltonian Monte Carlo sampler in particular, are very powerful tools to perform Bayesian inference in high-dimensional spaces. Combined with stochastic gradients, methods like Stochastic Gradient MCMC (SGMCMC), or Stochastic Gradient Langevin Dynamics (SGLD) have been successfully used to apply these methods in the large data regime, where only noisy estimates of the gradients are feasible. Even though, many different samplers exists, and they are provably correct (meaning they converge to the correct distribution), fast mixing and low auto-correlation within the chain can heavily depend on the problem at hand and the hyperparameters of the sampler used. 
The work presented here, uses the general framework for SG-MCMC samplers of Ma et al., parametrizes it with a neural network and learns its weights on representative training problems.\n\nThe paper is well written, although occasional minor mistakes and typos can be found.\nIt seems however, that the method is still quite laborious and some care needs to be taken to train the meta-sampler.\nThe overall narrative is easy to follow, but could benefit from more detail in certain parts. In general, I argue for acceptance of the paper, but have the following questions/comments:\n\n- Below Eq. (7), an interpretation of the parametrizations Q_f and D_f is given. I greatly appreciate this, but the phrase 'Q_f is\nresponsible for the acceleration of \\theta' is not really instructive. By definition, the change in \\theta is mostly driven by the momentum p. Therefor, Q_f looks like an inverse mass (at least in the second line of (7)), but maybe that is not a very helpful analogy either.\n- at the beginning of section 3.2, the term 'particles' is used. While I am fully aware of what that is supposed to mean, a reader less familiar with the topic could be confused, because there is no explanation of it.\n- It is unclear to me how the stochastic estimate \\tilde{U}(\\theta) in equation (10) is computed exactly. Is it estimated using the current mini-batch at time t, or is it estimated using a 'holdout-test set'?\n- I was wondering how the correlation between the chains due to thinning for the In-chain loss affects the results. The text, does not address this at all.\n- The experiments are very thorough and I appreciate the comparison to the tuned baselines, but I am missing some details in the paper:\n (a) Did you tune the SGHMC method in Figure 2, as well? It is not mentioned in the text, and the sample path looks very volatile, which could indicate a poor combination of step length and noise.\n (b) How was the tuning of the base line methods performed?\n (c) Are the results in Figure 3 based on single runs, or do you show the mean over 10 independent runs (as in table 1).\n- The insets in Figure 6 are helpful, but I think you could shrink the 'outer y axis' and have the inset in the top right corner instead. That way, the zoomed-out plot would show more details on its own.\n", "TITLE\nMeta-learning for stochastic gradient mcmc\n\nREVIEW SUMMARY\nA wonderful paper with many great ideas and insights. Main weakness is the complexity of the algorithms and many design choices wich are well argued for but not theoretically or empirically well founded. \n\nPAPER SUMMARY\nThe main idea (based on the result of Ma et al.'s \"complete recipe for stochastic gradient mcmc\") is to parameterize the diffusion and curl matrices by neural networks and (meta-)learn/optimize an sg-mcmc algorithm. \n\nQUALITY\nThe technical quality of the paper appears to be good. Due to the complexity of the algorithm and lack of access to authors code at review time, it is not feasible for me to validate empirical results.\n\nMy main critisism of this work is that the proposed procedure is quite complicated, and there are a lot of steps and design choices that are made in the paper which are not backed up by theory or experiment. For example, the structure and parametrization of D and Q. I would like to have seen e.g. empirical results on full matrices compared to the particular \"diagonal\" struture used, to give an idea of how much we loose by that design choise. 
Similarly, the choice of meta learning objective is not (to me at least) obvious, and this could be examined further. Also, the use of the Stein gradient estimator is known sometimes to be problematic (maybe particularly with an rbf kernel) but this is not explored.\n\nAll in all, the paper leaves me wanting more, but of course there is only so much space in a conference paper. My conclusion here is that I recommend that the paper is published as it is, and I hope the authors will continue their work in future research (as also outlined in the paper). \n\nCLARITY\nThe paper is clear and well written, notation is consistent, and everything is fairly easy to follow.\n\nORIGINALITY\nThe idea of meta learning sg-mcmc is not something I have seen before, so to my knowledge the idea is original. \n\nSIGNIFICANCE\nI think the whole line of research in which this paper falls has a very high potential, and i strongly welcome any new results. This paper develops new interesting ideas of broad interest.\n", "This paper proposes a novel method to perform meta-learning for stochastic gradient MCMC. They utilize a general family of SDEs that guarantees preservation of the target density with somewhat loose constraint on the drift and diffusion functions (from Ma et al. (2015)). Then, they propose learning these functions on a set of training tasks and evaluating on unseen, different tasks, in a meta-learning fashion.\n\nThis paper is well written and easy to follow. They do a very good job presenting the motivation for their work as well as seminal work in SG-MCMC. The idea is fairly natural, especially in light of recent success of meta-learning and learning optimizers. They do a thorough survey of related work and also do a good job presenting their method in context of very modern work on MCMC and SG-MCMC.\n\nI am not completely convinced by the meta-training objective; both losses seem natural but quite intractable to compute in practice. The use of Stein indicates that the kernel must probably be *very* carefully crafted and given that the whole method relies on this objective, it seems like this could be a breaking point. I am also curious to know how you diagnostic/evaluate the choice of these kernels.\n\nIn terms of evaluation, the experimental results are not the most convincing given that across the board, they are (except in one case) in 4 case within 0.2% of SGHMC and in the two others, within 0.5% and 0.8% respectively. This seems a bit weak, especially considering the compute invested both at training time and for each SG-MCMC step (i.e. getting the outputs from the neural networks vs simply doing HMC). Is there really a case for using the method over SG-HMC? I would have also very much liked to see a run-time evaluation." ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "Skgd58dF6Q", "B1g0UDdF6Q", "iclr_2019_HkeoOo09YX", "rkxTfzKFhQ", "r1xb6mtK3X", "BkxAJlS5n7", "iclr_2019_HkeoOo09YX", "iclr_2019_HkeoOo09YX", "iclr_2019_HkeoOo09YX" ]
iclr_2019_HkezXnA9YX
Systematic Generalization: What Is Required and Can It Be Learned?
Numerous models for grounded language understanding have been recently proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic and that it is highly sensitive to the module layout, i.e. to how exactly the modules are connected. We furthermore investigate if modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn inappropriate layouts or parametrizations that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
accepted-poster-papers
This paper generated a lot of discussion. Paper presents an empirical evaluation of generalization in models for visual reasoning. All reviewers generally agree that it presents a thorough evaluation with a good set of questions. The only remaining concerns of R3 (the sole negative vote) were lack of surprise in findings and lingering questions of whether these results generalize to realistic settings. The former suffers from hindsight bias and tends to be an unreliable indicator of the impact of a paper. The latter is an open question and should be worked on, but in the opinion of the AC, does not preclude publication of this manuscript. These experiments are well done and deserve to be published. If the findings don't generalize to more complex settings, we will let the noisy process of science correct our understanding in the future.
train
[ "BklMFEOy1V", "Byg9M9Vyy4", "Hkei5tV1JE", "B1xq62y0CX", "Hyere82c2m", "SylZPzWcA7", "BJe48Db6Tm", "rkeU3aB86X", "H1xbP6B8TX", "S1xJ92HIaX", "rJe-UgPqnX", "rylJdHwn2Q" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the response, and sorry for the slow reply!\n\nAfter reading the response and revised paper, I'm leaving my review score unchanged, because I think my main concerns still stand. I didn't find the results surprising, and I don't see evidence that these results would generalize to more complex tasks. I think if the paper is only reporting experiments on a toy task, it would need to uncover something really interesting. That said, I would encourage the authors to keep working on this exciting topic.\n\n> Reading prior work on visual reasoning may lead a researcher to conclude, roughly speaking, that NMNs are a lost cause, since a variety of generic models perform comparably or better. In contrast, our rigorous investigation highlights their strong generalization capabilities and relates them to the specific design of NMNs.\n\nI don't find this argument convincing. For example, we could easily design a rule-based system that would show very strong generalization abilities on your task. However, that would not persuade me that it rule-based methods are not a lost cause for visual reasoning. I would really like to see some evidence that your results would generalize to more realistic tasks.\n\n> Notably, chain-structured NMNs were used in the literature prior to this work (e.g. in the model of Jonshon et al multiple filter_...[...] modules are often chained), so the fact that tree-structured NMNs show much stronger generalization was not obvious prior to this investigation and should be of a high interest to the research community.\nAs mentioned by another reviewer, “Neural Compositional Denotational Semantics for Question Answering” shows systematic generalization with tree structured NMNs, and goes much further with structure learning. I think you should at least explain how your results relate to this paper.\n\n> We are not sure if we fully understand the question “Could you somehow test for if a given trained model will show systematic generalization?” that R3 asked. \nSorry that this was unclear. I was wondering if you could test for this property without actually running on test data (maybe it converges faster, or the norm of the weights is lower; I have no idea). Knowing that might help us to regularize models properly during training.\n\n> All these experiments are repeated at least 5 times each, like you suggested in your review, although it’s worth noting that results the original version of the paper also reported results after multiple runs. \n\nBy \"large numbers of runs\", I was thinking more like thousands than five (I don't know if that is computational practical). The question I was curious about is whether these models will ever find the right solution, or perhaps if they even have an inductive bias against finding it. This would be very helpful to know.", "Dear Reviewer 3,\n\nWe thank you again for your informative review that you wrote before the revision period. In our response and the revised version of the paper we tried our best to address your concerns. We would highly appreciate to get some feedback from you regarding the changes that we have made and the arguments that we have presented. In particular, we report that NMN-Chains (with a lot of inductive bias built-in and also used in prior work such as Johnson et al. 2017) generalize poorly compared to even generic modules, and that layout/parameterization induction often fails to converge to the correct solution. We believe both these findings are quite surprising. 
We also report new experiments with the MAC model, including a hyperparameter search, a comparison against end-to-end NMNs, and a qualitative exploration of the failure modes of this model. All these experiments are repeated at least 5 times each, like you suggested in your review, although it’s worth noting that the original version of the paper also reported results after multiple runs. \n\nWe would highly appreciate a response on our newest revision and suggestions on how it could be improved. If you still think the paper is uninteresting or not well executed, could you then suggest what specifically it is lacking?\n\nWe are sincerely hoping to hear from you. ", "Dear Reviewer 2,\n\nThank you once again for the thoughtful and thorough review that you wrote before the revision period. Our understanding of your review is that overall, you find the paper interesting and useful, but certain presentation and evaluation decisions, as well as the fact that we use a new dataset, did not allow you to recommend it stronger. Since then we have improved the paper a lot by incorporating a lot of your suggestions, including but not limited to reporting mean performance on at least 5 runs in all experiments, comparing MAC and Attention N2NMN, investigating different versions of the MAC model. We have also argued extensively why we think our decision to build SQOOP from scratch, rather than rely on Blender or ShapeWorld’s rendering, will not have any negative consequences on our research field. 
They also show that current end-to-end approaches for inducing model layout or learning model parametrization fail to generalize better than generic models.\n\nPros:\n- The conclusions of the paper regarding the generalization ability of neural modular networks is timely given the widespread interest in these class of algorithms. \n- Additionally, they present interesting observations regarding how sensitive NMNs are to the layout of models. Experimental evidence (albeit on specific type of question) of this behaviour will be helpful for the community and hopefully motivate them to incorporate regularizers or priors that steer the learning towards better layouts. \n- The authors provide a nice summary of all the models analyzed in Section 3.1 and Section 3.2. \n\nCons:\n- While the results on SQOOP dataset are interesting, it would have been very exciting to see results on other synthetic datasets. Specifically, there are two datasets which are more complex and uses templated language to generate synthetic datasets similar to this paper:\n - CLEVR environment or a modification of that dataset to reflect the form of systematic the authors are studying in the paper. \n - Abstract Scenes VQA dataset introduced in“Yin and Yang: Balancing and Answering Binary Visual Questions” by Zhang and Goyal et al. They provide a balanced dataset in which there are a pairs of scenes for every question, such that the answer to the question is “yes” for one scene, and “no” for the other for the exact same question. \n- Perhaps because the authors study a very specific kind of question, they limit their analysis to only three modules and two structures (tree & chain). However, in the most general setting NMN will form a DAG and it would have been interesting to see what form of DAGs generalize better than other. \n- It is not clear to me how the analysis done in this paper will generalize to other more complex datasets where the network layout NMN might be more complex, the number of modules and type of modules might also be more. Because, the results are only shown on one dataset, it is harder to see how one might extend this work to other form of questions on slightly harder datasets. \n\nOther Questions / Remarks:\n- Given that the accuracy drop is very significant moving from NMN-Tree to NMN-Chain, is there an explanation for this drop? \n- While the authors mention multiple times that #rhs/#lhs = 1 and 2 are more challenging than #rhs/#lhs=18, they do not sufficiently explain why this is the case anywhere in the paper. \n- Small typo in the last line of section 4.3 on page 7. It should say: This is in stark contrast with “NMN-Tree” …..\n- Small typo in the “Layout induction” paragraph, line 6 on Page 7: … and for $p_0(tree) = 0.1$ and when we use the Find module \n\n", "We are happy to present a new, substantially improved revision of the paper. We have polished our experimental setup (see details in the end of the message), performed many additional experiments as requested by the reviewers and improved presentation of the results.\n\nMost important changes in the revision include:\n\n1) We report means and standard deviation for at least 5 (and at least 10 in some comparisons due to variance in performance) runs of each of the models. 
We switched to reporting error rates instead of accuracies in all tables in order to make our results easier to understand.\n2) Performance of MAC baseline has somewhat improved, compared to what we reported in the original submission, but this model is still far from solving SQOOP for #rhs/lhs of 1, 2, 4, 8, and it fails sometimes even on #rhs/lhs=18. We performed an ablation study of MAC as requested by R2 and R3, in which we varied the number of hidden units, the number of modules and the level of weight decay (see Appendix B). Results for all hyperparameters settings that we tried are still hopelessly far from systematic generalization of the kind exhibited by NMN-Tree, although on average MAC models with 256 hidden units performed somewhat better (barely statistically significantly) than the default version with 128 hidden units that we used in our experiments. We also now report qualitative analysis of rare (3 out of 15) cases when MAC does generalize, showing that this is likely to be due to a lucky initialization. \n3) As suggested by R1, we added a DAG-like NMN-Chain-Shortcut model to the comparison. We found that its generalization performance is in between those of NMN-Chain and NMN-Tree and is in general quite similar to the performance of generic models. \n4) We present additional results for NMN-Chain, showing that it does not generalize even when #rhs/lhs=18! We find this drastic lack of generalization highly surprising and not at all easily predictable without performing our study. \n5) We performed an analysis of the responses produced by an NMN-Chain model to answer R1’s question as to why it performs so much worse than NMN-Tree. Our analysis has shown that there is almost no agreement in test set responses of several NMN-Chain models, allowing us to conclude that NMN-Chain essentially predicts randomly on the test set.\n6) The results of layout induction experiments have somewhat improved, without major changes to the conclusions.\n7) Perhaps the most significant changes have occured in our parametrization induction results. We found that Attention N2NMN may generalize quite well (9 times out of 10) even for #rhs/lhs=2, and most unexpectedly, even when attention weights are not very sharp. The results on #rhs/lhs=1 have remained the same. Our new results suggest that Attention N2NMN lends itself to systematic generalization more than MAC, supporting the hypothesis expressed by R2.\n\nOther changes include:\n1) We cite “Neural Compositional Denotational Semantics for Question Answering”, as suggested by R2.\n2) We state explicitly in the text that our Find module outputs feature maps instead of attention maps, somewhat differently from the original Find modules from Hu et al.\n3) Appendix A with training details has been added. \n4) Appendix B with some qualitative analysis about why some MAC runs generalized successfully and others failed. We also report an attempt to hard-code control scores (as requested by R2) in MAC but that did not improve performance.\n5) We explain the motivation for the dataset generation procedure more clearly in Section 2, and also follow a suggestion by R3 and explain better why lower rhs/lhs is harder for generalization.\n\nWe thank all reviewers for their valuable suggestions that allowed us to greatly improve the paper. We believe that the revised paper should be of a high interest for anyone working on language understanding, and we sincerely hope that reviewers will consider revisiting their evaluations.\n\nP. S. 
The changes in the results were caused by the following improvements in the experimental setup:\n1) We disabled the weight decay of 0.00001 that was the default in the codebase on top of which we start our project. This change allows for rare convergence to systematic solutions on the #rhs/lhs=1 split for MAC (3/15 runs). .\n2) We found that the publicly available codebase for the FiLM model had redundant biases before batch normalization, and removing this redundancy has stabilised training on NMNs with Find module, including Attention NMNs.\n3) In our preliminary experiments we set the learning rate for structural parameters to be higher than the one used for regular weights (0.001 vs 0.0001). To simplify our setup, we reran all experiments with the same learning rate for all parameters. ", "We thank Reviewer 2 (R2) for their excellent and thorough review and for raising several particularly interesting points about modeling and evaluation. \n\nWhile we do agree with the reviewer’s concerns that the proliferation of synthetic datasets may be counterproductive, we chose to create SQOOP instead of directly using existing datasets to keep things simple. R2 suggests that we could’ve defined new objects out of (color, shape) tuples. We believe though, that even if we used Blender (CLEVR) or ShapeWorld rendering to build a dataset for out studies, this would not make further experimentation any simpler, because even though the rendering would be the same, this would still constitute a new dataset. The entire code for generating SQOOP is merely 550 lines, and comes with an extremely simple set of command line arguments. This is to be contrasted with ~9500 lines of code in ShapeWorld codebase, which aims to be universally usable, and hence is highly convoluted. Furthermore, in order to help researchers avoid the burden of “downloading and re-running a huge amount of code”, we will release our codebase that contains implementations of all the models used in this study and comes with ready-to-use CLEVR and SQOOP bindings. \n\nWe thank R2 for their thoughtful suggestion to consider splits other then the one with heldout right-hand sides (rhs). We fully agree that other options exist, for example a split where different lhsand rhs objects are used for each relation, and that investigating such options would be interesting. At the same time, we do not think that these extra experiments would radically change the conclusions, and we note that even in the current form our paper hits ICLR page limit. Our specific split was chosen based on the following considerations: we wanted to uniformly select a subset of object pairs for training set questions, in a way that guarantees that all objects word are seen at the training time. If we sampled a certain percentage of object pairs for training questions randomly, it could happen that certain words just never occur in the training set. Hence, we came up with the idea of having a fixed number of rhs objects for each lhs object. We note that this very split can also be seen as allowing a random (possibly zero) number of lhs objects for each rhs object, exhibiting sparsity on the lhs like R2 suggested. We will better explain the considerations above in the upcoming paper revision.\n\nApart from the above points of R2, we fully agree with their suggested changes and experiments and will incorporate almost all of these in the updated version of the paper. 
\n\n1) We follow R2’s suggestions and improved the presentation in Table-1: we will report means and standard deviations for 5 runs for all our models. \n2) CNN+LSTM and RelNet baselines are being re-run with higher #rhs/lhs.\n3) We have run experiments with varying number of MAC cells (3,6,12,24) and found that using 12 cells performed best (and as well as using 24 cells). We believe that this has to do with lucky control score initializations. This, along with some new interesting qualitative investigations about the nature of control parameters that result in successful generalization, will be elaborated on in our updated manuscript. \n4) In our initial experiments, we found that conceptually simpler homogenous NMNs (of the form proposed by Johnson et al.) are already sufficient to solve even the hardest version of SQOOP. Hence, we chose to focus our study on this, arguably, more generic approach, and we adapted the Find module from (Hu et al) to output a full feature map, instead of an attention map. We believe it is highly interesting to include such a model in comparison, as Residual and Find represent two very distinct paradigms of conditioning modules on their language inputs. We agree that extra studies of NMNs with attention bottlenecks would be a interesting direction of the future work, but we also think that our paper is quite complete without this investigation and has enough interesting findings.\n5) We will report performance of all baseline models on the #rhs/lhs=18 version of our dataset as well.\n6) We also fully agree with R2’s excellent observation about the nature of supervision in MAC vs hard-coded parameter NMN models. We are now running MAC experiments with hardcoded control attention where the control scores are hard-coded such that some of the modules focus entirely on the LHS object and some focus entirely on the RHS object. This particular hard-coding strategy was a result of our qualitative understanding of successful learnt attention for MAC. We will elaborate on this in the paper.\n7) We agree with R2’s comment that studying seq2seq learning in our setting would add an interesting new dimension to this work, and this is something we’ll consider for future work. \n8) We also note R2’s feedback on strong language, presentation issues and a missing citation and will improve the paper in these aspects.", "We would like to conclude our response by replying to the higher-level concern of R1 that the findings of our study may not “generalize to other more complex datasets where the network layout NMN might be more complex, the number of modules and type of modules might also be more”. While we fully agree that more complex datasets with more complex questions would bring new challenges, these are ones we purposely put aside (such as the general unavailability of ground-truth layouts for vanilla NMN, the need to consider an exponentially large set of possible layouts for Stochastic N2NMN, etc.) We believe that it is highly valuable for the research community to know what happens in the simple ideal case of SQOOP, where we can precisely test our specific generalization criterion. This knowledge (e.g. the superiority of trees to chains, the sensitivity of layout induction to initialization, the emergence of spurious parameterization in end-to-end learning), will guide researchers in choosing, designing and troubleshooting their models, as they now know what to expect modulo the optimization challenges that they may face. 
The field of language understanding with deep learning is not easily amenable to mathematical theoretical investigations and, with that in mind, rigorous minimalistic studies like ours are arguably very important. To some extent, they play the role of the former: they inform researcher intuition and lay a solid foundation for scientific dialogue. We purposely traded breadth for depth in our investigations, and we will go even deeper in the additional experiments that the upcoming revision will contain. We believe that the total of our results makes a complete conference paper. All that said, we would welcome specific suggestions of additional experiments that we could carry out in order to better validate our claims.\n\nWe hope that this response has clarified to R1 what our paper was insufficiently clear about. A new revision with additional experiments and fixed typos will soon be uploaded to OpenReview, and we hope that R1 takes this response and the changes that we will make to the paper into account.\n", "We thank Reviewer 1 (R1) for their review and for asking interesting questions that helped us to understand where our paper may have been unclear. In our response below we will try our best to better explain our motivation for building and using SQOOP, as well as address R1’s other questions and concerns. \n\nA key concern that R1 expressed in their review is that we perform our study on the new SQOOP dataset, instead of using an available one (for example CLEVR or Abstract Scenes VQA). Though we appreciate the concern (it has spurred us to rethink and rephrase how we justify SQOOP) we still believe that the SQOOP dataset is the best choice for precisely testing our ideas. We kindly invite R1 to consider the following arguments in favor of doing so:\n\nThe goal of our study was to perform a thorough investigation of systematic generalization of language understanding models. To that end, we wanted a setup that is as simple as possible, while still being challenging by testing the ability to extend the relational reasoning learned to unseen combinations of seen words. We therefore choose to focus on simplest relational questions of the form XRY, as they also allow us to factor out challenges of discrete optimization in choosing the right module layout (required for Stochastic N2NMN). The simplicity is also useful because most models get to 100% accuracy on the training set of SQOOP, which allowed us to put aside any remaining optimization challenges and just focus our study on systematic generalization. \nIn contrast, we find that the popular CLEVR dataset does not satisfy our requirements and if we did modify it sufficiently, we believe that it would only differ from SQOOP in the actual rendering and would not affect our conclusions. Though visually more complex, CLEVR has only 3 object types: cylinder, sphere and cube. Therefore, it would only allow for 3x4x3=36 different XRY relational questions. This is arguably not enough to sufficiently represent real world situations, and would definitely hinder our experiments. Specifically, we would not be able to sufficiently vary the difficulty of our generalization challenge when allowing 1,2,4,8 or 18 possible right hand-side objects in the questions (we clarify why splits with lower #rhs/lhs are more difficult than those with higher #rhs/lhs later in this response). Hence, we did not find the original CLEVR readily appropriate for our study. 
We could, in theory, introduce new object types to CLEVR and rerender a new dataset in 3D using Blender (the renderer that was used to create CLEVR) with different lighting conditions and partial occlusions. Though enticing, we believe that such a 3D version of SQOOP would lead to exactly same conclusions, because the vision required to recognize the objects in the scene would still be rather trivial. \nThe Ying and Yang dataset is clearly a valuable resource (and we thank the reviewer for the pointer), but we do not think it is readily suitable for the kind of study that we aim to perform. The dataset, to the best of our understanding, uses crowd-sourced questions (as the questions are taken from Abstract VQA dataset, whose captions were entered by a human, according to the original VQA paper https://arxiv.org/pdf/1505.00468v6.pdf). Using crowd-sourced questions would not allow us to control our experiments at the level of precision that we wanted to achieve (e.g. we would not know the ground-truth layouts, it would be harder to construct splits of varying difficulty, etc.). As well, Abstract VQA contains only 50k scenes, and from our experience with SQOOP we know that this number would be not sufficient to rule out overfitting to training images as a factor. \n\nWe thank R1 for their constructive suggestion to consider NMNs that form a DAG. We are currently investigating a chain-structured NMN with shortcuts from the output of the stem to each of the modules, and we will soon report these additional results in the upcoming revision of the paper. We hope that these results, combined with further qualitative investigations we are conducting, will answer the legitimate question of R1 as to why Chain-NMN performs so much worse than Tree-NMN.\n\nWe acknowledge that the text of the paper can be improved to explain better why splits with lower #rhs/lhs are generally harder than those with higher #rhs/lhs, and we thank R1 for pointing this out. Our reasoning is that lower #rhs/lhs are harder because the training admits more spurious solutions in them. In such spurious regimes models adapt to the specific lhs-rhs combinations from the training and can not generalize to unseen lhs-rhs combinations (i.e. generalizing from questions about “A” in relation with “B” to “A” in relation to “D” (as in #rhs/lhs=1) is more difficult than generalizing from questions about “A” in relation to “B” and “C” to the same “A” in relation to “D” (as in #rhs/lhs=2). We will update the paper to be more explicit in explaining these considerations. \n", "We thank Reviewer 3 (R3) for their review and for clearly articulating their concerns regarding the paper. In our response below, we will clarify the design and results of our experiments as well as argue why we believe that these results should be of interest and are not, indeed, that predictable.\n\nR3 asked why training performance of many models is 100% when they do not generalize and suggested us to perform a large number of training runs to see if occasionally the right solution is found. First, we agree that from the point of view of training there are many equally good solutions, and in fact, this is the main and the only challenge of SQOOP. We designed the task with the goal of testing which models are more likely to converge to the right solution, with which they can handle all possible combinations of objects, despite being trained only on a small subset of objects. 
We argued extensively in the introduction that such an ability to find the systematic solution despite other alternatives being available is highly desirable for language understanding approaches. We fully agree with R3 that in investigations of whether or not a particular model converges to the right solution repeating every experiment several times is absolutely necessary, and we would like to emphasize that we did repeat each experiment 3, 5, or 10 times (see “details” in Table 1 and the paragraph “Parametrization Induction” on page 8). In most cases we saw a consistent success or consistent failure, one exception being the parametrization induction results, where 4 out of 10 runs were successful (see Table 4, row 1 for the mean and the confidence interval). We hope that 3 takes this fact into account, and we will furthermore improve on the current level of rigor in the upcoming revision by repeating each experiment at least 5 times. \n\nWe are not sure if we fully understand the question “Could you somehow test for if a given trained model will show systematic generalization?” that R3 asked. We test the systematic generalization of a model by evaluating it on all SQOOP questions that were not present in the training set. We hope that this answers the question of R3 and we would be happy to engage in a further discussion regarding this and make edits to the paper if necessary. \n\nWe thank R3 for the suggestion to investigate the influence of model size and regularization on systematic generalization. It is indeed a very appropriate question in the context of our study, however, we note that there exists a wide variety of regularization methods and trying them all (and all their combinations) would be infeasible. In the upcoming update of the paper we will report results of an on-going ablation study for the MAC model, in which we vary the module size, the number of modules and experiment with weight decay. We would welcome any other specific experiment requests R3 may have.\n\nFinally, we would like to discuss the significance of our investigation and its results. While we agree that the results that we report may not shock the reader (although perhaps hindsight bias plays a role in what people find surprising or not after reading an article) we find them highly interesting and not at all easily predictable. Reading prior work on visual reasoning may lead a researcher to conclude, roughly speaking, that NMNs are a lost cause, since a variety of generic models perform comparably or better. In contrast, our rigorous investigation highlights their strong generalization capabilities and relates them to the specific design of NMNs. Notably, chain-structured NMNs were used in the literature prior to this work (e.g. in the model of Jonshon et al multiple filter_...[...] modules are often chained), so the fact that tree-structured NMNs show much stronger generalization was not obvious prior to this investigation and should be of a high interest to the research community. Last but not least, an important part of our investigation (which the review does not discuss) is the systematic generalization analysis of popular end-to-end NMN versions, that shows how making NMNs more end-to-end makes them more susceptible to finding spurious solutions. As we argued in our conclusion, these findings should be of a highest importance to researchers working on end-to-end NMNs, which is a very popular research direction nowadays. 
\n\nWe conclude our response by announcing that an updated version of the paper, that among others incorporates valuable suggestions by R3, will soon be uploaded to OpenReview. We are currently performing a lot of additional experiments, the results of which will make our investigation even more rigorous and complete. We sincerely hope that R3 takes into account the arguments we have made here and the new results that we will publish soon and reevaluates our paper more positively. \n\n", "This paper presents a targeted empirical evaluation of generalization in models\nfor visual reasoning. The paper focuses on the specific problem of recognizing\n(object, relation, object) triples in synthetic scenes featuring letters and\nnumbers, and evaluates models' ability to generalize to the full distribution of\nsuch triples after observing a subset that is sparse in the third argument. It\nis found that (1) NMNs with full layout supervision generalize better than other\nstate-of-the art visual reasoning models (FiLM, MAC, RelNet), but (2) without\nsupervised layouts, NMNs perform little better than chance, and without\nsupervised question attentions, NMNs perform better than the other models but\nfail to achieve perfect generalization.\n\nSTRENGTHS\n- thorough analysis with a good set of questions\n\nWEAKNESSES\n- some peculiar evaluation and presentation decisions\n- introduces *yet another* synthetic visual reasoning dataset rather than\n reusing existing ones\n\nI think this paper would have been stronger if it investigated a slightly\nbroader notion of generalization and had some additional modeling comparisons.\nHowever, I found it interesting and think it successfully addresses the set of\nquestions it sets out to answer. If it is accepted, there are a few things that\ncan be done to improve the experiments.\n\nMODELING AND EVALUATION\n\n- Regarding the dataset: the proliferation of synthetic reasoning datasets is\n annoying because it makes it difficult to compare results without downloading\n and re-running a huge amount of code. (The authors have, to their credit, done\n so for this paper.) I think all the experiments here could have been performed\n successfully using either the CLEVR or ShapeWorld rendering engines: while the\n authors note that they require a \"large number of different objects\", this\n could have been handled by treating e.g. \"red circle\" and \"red square\" as\n distinct atomic primitives in questions---the fact that redness is a useful\n feature in both cases is no different from the fact that a horizontal stroke\n detector is useful for lots of letters.\n\n- I don't understand the motivation behind holding out everything on the\n right-hand side. For models that can't tell that the two are symmetric, why\n not introduce sparsity everwhere---hold out some LHSs and relations?\n \n- Table 1 test accuracies: arbitrarily reporting \"best of 3\" for some model /\n dataset pairs and \"confidence interval of 5\" for others is extremely\n unhelpful: it would be best to report (mean / max / stderr) for 5. Also, it's\n never stated which convidence interval is reported.\n\n- Table 1 baselines: why not run Conv+LSTM and RelNet with easier #rhs/lhs data?\n\n- How many MAC cells are used? This can have significant performance\n implications. 
I think if you used their code out of the box you'll wind up\n with way bigger structures than you need for this task.\n\n- I'm not sure how faithful the `find` module used here is to the one in the\n literature, and one of the interesting claims in this work is that module\n implementation details matter! The various Hu papers use an attentional\n parameterization; the use of a ReLU and full convolution in Eq. 14 suggest\n that that one here can pass around more general feature maps. This is fine but\n the distinction should be made explicit, and it would be interesting to see\n additional comparisons to an NMN with purely attentional bottlenecks.\n\n- Why do all the experiments after 4.3 use #rhs/lhs of 18? If it was 8 it would\n be possible to make more direct comparisons to the other baseline models.\n\n- The comparison to MAC in 4.2 is unfair in the following sense: the NMN\n effectively gets supervised textual attentions if the right parameters are\n always plugged into the right models, while the MAC model has to figure out\n attentions from scratch. A different way of structuring things would be to\n give the MAC model supervised parameterizations in 4.2, and then move the\n current MAC experiment to 4.3 (since it's doing something analogous to\n \"parameterization induction\".\n \n- The top-right number in Table 4---particularly the fact that it beats MAC and\n sequential NMNs under the same supervision condition---is one of the most\n interesting results in this paper. Most of the work on relaxing supervision\n for NMNs has focused on (1) inducing new question-specific discrete structures\n from scratch (N2NMN) or (2) finding fixed sequential structures that work well\n in general (SNMN and perhaps MAC). The result this paper suggests an\n alternative, which is finding good fixed tree-shaped structures but continuing\n to do soft parameterization like N2NMN.\n\n- The \"sharpness ratio\" is not super easy to interpret---can't you just report\n something standard like entropy? Fig 4 is unnecessary---just report the means.\n\n- One direction that isn't explored here is the use of Johnson- or Hu-style\n offline learning of a model to map from \"sentences\" to \"logical forms\". To the\n extent that NMNs with ground-truth logical forms get 100% accuracy, this turns\n the generalization problem studied here into a purely symbolic one of the kind\n studied in Lake & Baroni 18. Would be interesting to know whether this makes\n things harder (b/c no grounding signal) or easier (b/c seq2seq learning is\n easier.)\n\nPRESENTATION\n\n- Basically all of the tables in this paper are in the wrong place. Move them\n closer to the first metnion---otherwise they're confusing.\n\n- It's conventional in this conference format to put all figure captions below\n the figures they describe. The mix of above and below here makes it hard to\n attach captions to figures.\n\n- Some of the language about how novel the idea of studying generalization in\n these models is a bit strong. The CoGenT split of the CLEVR dataset is aimed\n at answering similar questions. The original Andreas et al CVPR paper (which btw\n appears to have 2 bib entries) also studied generalization to structurally\n novel inputs, and Hu et al. 17 notes that the latent-variable version of this\n model with no supervision is hard to train.\n\nMISCELLANEOUS\n\n- Last sentence before 4.4: \"NMN-Chain\" should be \"NMN-Tree\"?\n\n- Recent paper with a better structure-induction technique:\n https://arxiv.org/abs/1808.09942. 
Worth citing (or comparing if you have\n time!)", "The paper explores how well different visual reasoning models can learn systematic generalization on a simple binary task. They create a simple synthetic dataset, involving asking if particular types of objects are in a spatial relation to others. To test generalization, they lower the ratio of observed combinations of objects in the training data. The authors show the result that tree structured neural module networks generalize very well, but other strong visual reasoning approaches do not. They also explore whether appropriate structures can be learned. I think this is a very interesting area to explore, and the paper is clearly written and presented.\n\nAs the authors admit, the main result is not especially surprising. I think everyone agrees that we can design models that show particular kinds of generalization by carefully building inductive bias into the architecture, and that it's easy to make these work on the right toy data. However, on less restricted data, more general architectures seem to show better generalization (even if it is not systematic). What I really want this paper to explore is when and why this happens. Even on synthetic data, when do or don't we see generalization (systematic or otherwise) from NMNs/MAC/FiLM? MAC in particular seems to have an inductive bias that might make some forms of systematic generalization possible. It might be the case that their version of NMN can only really do well on this specific task, which would be less interesting.\n\nAll the models show very high training accuracy, even if they do not show systematic generalization. That suggests that from the point of view of training, there are many equally good solutions, which suggests a number of interesting questions. If you did large numbers of training runs, would the models occasionally find the right solution? Could you somehow test for if a given trained model will show systematic generalization? Is there any way to help the models find the \"right\" (or better) solutions - e.g. adding regularization, or changing the model size? \n\nOverall, I do think the paper has makes a contribution in experimentally showing a setting where tree-structured NMNs can show better systematic generalization than other visual reasoning approaches. However, I feel like the main result is a bit too predictable, and for acceptance I'd like to see a much more detailed exploration of the questions around systematic generalization.\n\n" ]
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 4 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 4 ]
[ "S1xJ92HIaX", "rylJdHwn2Q", "rJe-UgPqnX", "rkeU3aB86X", "iclr_2019_HkezXnA9YX", "iclr_2019_HkezXnA9YX", "rJe-UgPqnX", "Hyere82c2m", "Hyere82c2m", "rylJdHwn2Q", "iclr_2019_HkezXnA9YX", "iclr_2019_HkezXnA9YX" ]
iclr_2019_Hkf2_sC5FX
Efficient Lifelong Learning with A-GEM
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost. Towards this end, we first introduce a new and a more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC (Kirkpatrick et al., 2016) and other regularization-based methods. Finally, we show that all algorithms including A-GEM can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency
accepted-poster-papers
Pros:\n- Great work on getting rid of the need for QP and the corresponding proof of the update rule\n- Mostly clear writing\n- Good experimental results on relevant datasets\n- Introduction of a more reasonable evaluation methodology for continual learning\n\nCons:\n- The model is arguably a little incremental over GEM.\n\nIn the end I think all the reviewers agree though that the practical value of a considerably more efficient and easy to implement approach largely outweighs this concern. I think this is a good contribution in this area and I recommend acceptance.
train
[ "H1ltuen11V", "rklWHLB9hQ", "BygF0mRAC7", "HJe92m0RCm", "HkeVfvnnhX", "rkepkT_u07", "B1lVPuVcpQ", "rJev5wEq6m", "r1epCSNc67", "BylvMvH5hm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for your detailed rebuttal and revisions to the paper. I do agree that you have addressed my primary concerns and clarified some areas of confusion for me about the paper. I have updated my score in favor of acceptance after the revisions. ", "This paper proposes a variant of GEM called A-GEM that substantially improves the computational characteristics of GEM while achieving quite similar performance. To me the most interesting insight of this work is the proof that an inner product between gradients can suffice instead of needing to solve the quadratic program in GEM – which I have found to be a major limitation of the original algorithm. The additional experiments using task descriptors to enable zero shot learning are also interesting. Moreover, the discussion of the new evaluation protocol and metrics make sense with further clarification from the authors. Overall, I agree with the other reviewers that this paper makes a clear and practical contribution worthy of acceptance. \n", "May we ask the reviewer, if we were able to address the main concerns that the reviewer had through our rebuttal and revision of the paper? Are there any further issues that the reviewer wants us to address? If so, we would appreciate your feedback and further discussion. ", "May we ask the reviewer, if we were able to address the main concerns that the reviewer had through our rebuttal and revision of the paper? Are there any further issues that the reviewer wants us to address? If so, we would appreciate your feedback and further discussion. ", "The paper is well-written, with the main points supported by experiments. The modifications to GEM are a clear computational improvement.\n\nOne complaint: the \"A\" in A-GEM could stand for \"averaging\" (over all task losses) or \"approximating\" (the loss gradient with a sample). Both ideas are good. However, the paper does not address the question: how well does GEM do when it uses a stochastic approximation to each task loss? (Note I'm not talking about S-GEM, which randomly samples a task constraint; rather, approximate each task's constraint by sampling that task's examples).\n\nAnother complaint: reported experimental results lack any associated idea of uncertainty, confidence interval, empirical variation, etc. Therefore it is unclear whether observed differences are meaningful.", "I find your response very satisfying and highly professional. Consequently I am upgrading my review.", "We thank the reviewer for providing the feedback on the draft. Following is our response to the the questions asked by the reviewer:\n\nScattered Discourse: \nOur motivation for working on lifelong learning is mostly based on the unprecedented opportunity to learn more quickly new tasks given the experience accumulated in the past. A major reason why catastrophic forgetting is bad is that it prevents the learner from quickly adapting to new tasks that are similar to old tasks.\nThe focus of this work is then on sample and computational efficiency in LLL. It is important to impose the restrictions of learning from few examples in a single pass (and to cross-validate on a different set of tasks to properly assess generalization in this single pass setting) as we really aim at models that learn quickly without iterating multiple times over the same data. Moreover, it is important to be able to measure how quickly one learns, and to improve efficiency of existing algorithms (A-GEM and compositional task descriptors). 
The current evaluation framework that other works borrow from supervised learning (multiple passes over the data and cross-validate on the same tasks as used for testing) is often misleading, as the methodology (training protocol and metrics) is inadequate for evaluating continual learning algorithms. With this work, we hope to convince the research community to adopt our proposed training/evaluation protocol and to also consider sample/computational and memory efficiency in their metrics.\nWe hope the reviewer can find the revised paper more coherent and clear in this respect.\n\n1, 2: The reviewer is correct in saying that the use of compositional task descriptors and joint embedding models are not specific to A-GEM. In fact, we apply the joint embedding model also to the baseline methods and show improvements on those as well (see fig. 2 and fig. 4). The reason why we introduce them in this work is because a) there may be applications where an agent is given some sort of *description* of the task to perform, and b) since we focus on efficient learning (meaning, learning quickly from few examples), compositional task descriptors enable the learner to perform well at 0-/few-shot learning (see new fig. 5).\n\n3: \na) Training/ Evaluation Protocol: To the best of our knowledge, standard practice in LLL is to perform several passes over the data of each task, and several passes over the whole stream of tasks to set hyper-parameters, and then report error on the test set. This evaluation protocol is not adequate because the point of LLL is to quickly learn new tasks, and doing multiple passes over the data defeats the original purpose. Moreover, the prevalent protocol greatly puts the baseline, which simply finetunes parameters from the previous task without any regularization, at disadvantage. The more the passes are done over the data of a given task, the more the model will forget. Therefore, the conclusions drawn from using the “supervised learning” protocol in a LLL setting can be highly misleading, while using the proposed methodology takes us closer to our goal to fairly assess algorithms in the continual learning setting.\n\nb) LCA: In the few shot learning literature, people specify the number of examples they will be given at test time, and use $Z_b$ as defined in eq. 4, which is the average accuracy after seeing $b$ minibatches (or a certain number of examples). LCA is the area under the $Z_b$ curve. LCA is a better metric because it also contains information of the values of $Z_j$ for $j <= b$. If $b$ is relatively large, all methods produce similar average accuracy. LCA enables us to distinguish those models that have learned fast, because their 0 or few shot accuracy is higher. Since we care about how quickly a model learns, LCA is a useful metric to assess sample efficiency. \n\nc) Measuring performance on few examples: If by measuring performance on few examples, the reviewer mean reporting $Z_b$ numbers (Eq.4) and not taking the area under the $Z_b$ curve, then we would like to highlight that area under the curve (LCA) is capturing the learner's performance up to the $b$-th minibatch, giving the average profile of the complete few-shot region. $Z_b$, on the other hand, would only give the performance at the current mini-batch and will not have the path information. \n\n4: Adding error bars: As suggested, we have added the uncertainty estimates measured across multiple runs and seeds in the updated draft. Please take a look at Figs 1, 2 and Tabs. 4, 5, 6. 
Our conclusions are confirmed.\nRegarding running GEM with task descriptors, we have shown that GEM and A-GEM have similar performance on MNIST and CIFAR. We did not run it on CUB and AWA because GEM is too computationally expensive to run on larger models.\n", "We thank the reviewer for providing the feedback on the draft. Here is our response to the questions asked by the reviewer:\n\nClarity About Section 5: We have updated the Section 5 of the paper and tried to add additional details about the model. Here are some clarifications:\n\n1 - Matrix Description t^k: The matrix description is not learnt. It is composed from class attributes. \nFor instance, in CUB each class is described by 312 attributes. If the current task has 10 classes, then the task descriptor is a matrix of size 10x312. The task descriptor is the same for all samples belonging to that task. As noted in the section, each input example consists of (x^k, y^k, t^k).\n\n2 - Variable size of the attribute matrix: Let A be the number of attribute per class (the same across all classes) and C^k the number of classes in task k, then the input task descriptor has size C^k \\times A. Module \\psi_{\\omega} is simply a matrix of size A \\times D embedding each attribute. By multiplying the input task descriptor with this embedding matrix, we obtain a matrix of size C^k \\times D, embedding each class descriptor. The joint embedding model scores each class by computing a dot product between the image features and the class embeddings (each row of the above matrix), and it turns this scores into probability values using a softmax, as shown in eq. 13.\n\nIf the model extracts good image features that reveal the underlying attributes, it can now perform 0 shot learning on unseen classes (as long as their constituent attributes have already been learned for other tasks albeit in different combinations).\n\nIn the rewrite of Section 5, we have clarified this point and the corresponding notation.\n \n3 - Functions used to represent \\psi_{\\omega}: We use a lookup table whose parameter matrix has size A \\times D. \n\n4 - Confusion between C and C_k: The reviewer is correct. It should be C_k. We have corrected this in the updated draft.\n5 - Eq. 12: The reviewer is correct. We have corrected the equation in the updated draft. \n\nEffect of Representative Sampling on the Performance: [1] showed that using the existing LLL setups and benchmarks, more sophisticated strategies to populate the memory do not have an appreciable impact. We did try herding-based sampling [2] and got an improved performance of 1-2%. We leave further exploration to future work.\n\nT^{CV} \\ll T: In the updated draft, the AWA-10 experiments has been replaced with AWA-20. So, now we have 20 tasks for all the datasets. While, comparatively, 3 may not be much less than 20, in general, the idea is to use a small and separate subset of tasks for the cross-validation which will not be used for further training and evaluation. This allows us to conform to our stricter definition of LLL setting. \n\nLegends of Figs 4 and 5: We have fixed the legend in the updated draft. A-GEM is the one with the dashed line. 
\n\n[1] RWalk: Riemannian walk for incremental learning: Understanding forgetting and intransigence, ECCV2018.\n[2] Incremental Classifier and Representation Learner: CVPR 2016\n\n*Additional comment*: \nWhile A-GEM is an important contribution of this paper as it makes the original GEM algorithm much more practical, we believe that the introduction of the new evaluation protocol, new metric and extension using compositional task descriptors are also significant contributions. \n\nLifelong learning setting entails learning more quickly given the experience accumulated in the past. One reason why catastrophic forgetting is bad is that it prevents the learner from quickly adapting to new tasks that are similar to old tasks.\n\nSince the focus should be on sample and computational efficiency, in this work we considered learning from few examples in a single pass, and cross-validating on a different set of tasks to satisfy that requirement. The metric, the additional efficiency achieved by the use of task descriptors and the new A-GEM algorithm are then part of the same effort to make lifelong learning methods and evaluation protocol more realistic. The current evaluation framework that other works borrow from supervised learning (multiple passes over the data and cross-validate on the same tasks as used for testing) is often misleading. We hope to convince the research community to adopt our proposed training/evaluation protocol and to also consider sample/computational and memory efficiency in their metrics.\n", "We thank the reviewer for providing the feedback on the draft. Here is our response to the questions asked by the reviewer:\n\nQ1: We tried the version of GEM where each task loss is approximated by the few examples in the memory for that task as suggested by the reviewer. This approximation yielded slightly better numbers than original GEM:\n\nMethod | DataSet | Average Acc | Forgetting \n--------------------------------------------------------------------------------------\nApprox-GEM | MNIST | 90.1 (+-0.6) | 0.06 (+-0.01)\n | CIFAR | 61.8 (+-0.5) | 0.06 (+- 0.01)\n--------------------------------------------------------------------------------------\nGEM | MNIST | 89.5 (+- 0.5) | 0.06 (+- 0.004) \n | CIFAR | 61.2 (+-0.8) | 0.06 (+- 0.01)\n--------------------------------------------------------------------------------------\nA-GEM | MNIST | 89.1 (+-0.14) | 0.06 (+-0.001)\n(this paper) | CIFAR | 62.9 (+-2.2) | 0.07 (+- 0.02)\n\nHowever, note that this approximation only makes gradient computation more efficient (although not as much on modern GPUs), but the crux of the computation which is due to the inner optimization problem has the same memory and time complexity as the original GEM; overall, this stochastic version of GEM has very similar run time as the original GEM algorithm. Instead, the proposed A-GEM has much lower time (about 100 times faster) and memory cost (about 10 times lower) while achieving similar performance, as highlighted in the Section 6.1 of the paper. \n\nQ2: As suggested by the reviewer, we have added the uncertainty estimates in the updated draft. As you can see from the updated Figs 1, 2 and Tabs. 4, 5, and 6, conclusions do not change. \n\n*Additional comment*: \nWhile A-GEM is an important contribution of this paper as it makes the original GEM algorithm much more practical, we believe that the introduction of the new evaluation protocol, new metric and extension using compositional task descriptors are also significant contributions. 
\n\nThe lifelong learning setting entails learning more quickly given the experience accumulated in the past. One reason why catastrophic forgetting is bad is that it prevents the learner from quickly adapting to new tasks that are similar to old tasks.\n\nSince the focus should be on sample and computational efficiency, in this work we considered learning from few examples in a single pass, and cross-validating on a different set of tasks to satisfy that requirement. The metric, the additional efficiency achieved by the use of task descriptors and the new A-GEM algorithm are then part of the same effort to make lifelong learning methods and evaluation protocol more realistic. The current evaluation framework that other works borrow from supervised learning (multiple passes over the data and cross-validate on the same tasks as used for testing) is often misleading. We hope to convince the research community to adopt our proposed training/evaluation protocol and to also consider sample/computational and memory efficiency in their metrics.", "Summary of the paper:\n\nThis paper focuses on the problem of lifelong learning for multi-task\nneural networks. The goal is to learn in a computationally and memory\nefficient manner new tasks as they are encountered while at the same\ntime remembering how to solve previously seen tasks with a focus on\nhaving only one training pass through all the training data. The paper\nbuilds on the GEM method introduced in the paper \"Gradient episodic\nmemory for continuum learning\", NIPS 2017.\n\nThe main novelty over the original GEM paper is that A-GEM simplifies\nthe constraints on what constitutes a feasible update step during its\nSGD training so that GEM's QP problem is replaced by a couple of\ninner-products (and thus makes A-GEM much more computationally\nefficient). This simplification also means that only one gradient\nvector (the average gradient computed from the individual gradients of\nthe task loss of the previously seen tasks) has to be stored at each\nupdate as opposed to GEM where each task specific gradient vector has\nto be stored. Thus the memory requirement of A-GEM is much less than\nthat of GEM and is independent of the number of already learnt tasks.\n\nThe paper then presents experimental evidence that A-GEM does run much\nfaster and uses less memory and results in performance similar to the\noriginal GEM strategy. The latter point is important as the simplified\nA-GEM algorithm - which adjusts the network's parameters to improve\nperformance on the current task while ensuring the average performance\non the previously seen tasks should not decrease - does not guarantee\nas stringently as GEM that the network does not forget how to perform\nall the previous tasks.\n\nThe paper also introduces an extra performance metric\n called the \"Learning Curve Area\" which measures how quickly a new\n task is learnt when it is presented with new material.\n\n\nPros and Cons of the paper:\n\n+/- The paper presents a simple intuitive extension to the original GEM\npaper that is much more computationally efficient and is thus more\nsuited and feasible for real lifelong learning applications. And it\nshows that performance exceeds other methods that have similar\ncomputational demands. 
The paper can be viewed as somewhat incremental\nbut the increment is probably crucial for any real-world practical\napplication.\n\n+ The validity of the approach is demonstrated experimentally on\n standard datasets in the field.\n\n\n- Some of the presentation of the material is somewhat vague, in\n particular section 5. In this section a joint embedding model is\n described that helps facilitate zero-shot learning. However, not\n enough detail is given to fully understand or appreciate this\n contribution, see below for details.\n\n\nRationale for my evaluation:\n\nThe method is somewhat incremental, however, this increment could be\nquite practically important. The presentation is lacking in some regard and would benefit\n from some re-working i.e. section 5. \n \n\nUnclear in the paper:\n\nSection 5 describing the \"Joint Embedding Model Using Compositional Task descriptors\" is very sparse on detail. Here are some of the details that I feel are missing:\n- In the experiments how is the matrix description (via attributes) of the different tasks $t^k$ learnt/discovered?\n- The size of this attribute matrix is able to vary from one task to the next. How does the function $\\psi_{\\omega}$ deal with this problem?\n- What functions are used in the experiments to represent $\\psi_{\\omega}$?\n- In the second last line of paragraph 2 should $C$ be $C_k$? If it should be $C$ how is $C$ chosen?\n- In equation (12) should the $c$th column of $\\psi_{\\omega}$ be extracted as opposed to the $k$th column?\n\nRepresentative labelled samples from each task are stored in memory\nand these are used when learning for a new task. The system\nhas a fixed memory so when a new task is added then the number of\nimages stored for each task has to be reduced. Then uniform sampling is\nused to randomly decide which images to keep. Could this selection\nprocess be improved upon and would any such improvement have any large\nimpact on performance?\n\nTypos and minor errors spotted:\n\nIn the third paragraph of section 2 it is stated that $T^{CV} \\ll T$; in the\nexperiments performed this is not the case. I don't think 3 is much less\nthan 10 or 20.\n\nIn figures 4 and 5 it is not entirely clear which curves correspond to\nA-GEM and A-GEM-JE from the legend. In the legend the dashed line with\nthe triangle looks the same as the non-dashed line with the triangle. I am\npresuming A-GEM is the non-dashed line, but only because that makes\nthings consistent with the previous figures." ]
[ -1, 7, -1, -1, 7, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "HJe92m0RCm", "iclr_2019_Hkf2_sC5FX", "rJev5wEq6m", "B1lVPuVcpQ", "iclr_2019_Hkf2_sC5FX", "r1epCSNc67", "rklWHLB9hQ", "BylvMvH5hm", "HkeVfvnnhX", "iclr_2019_Hkf2_sC5FX" ]
iclr_2019_HkfPSh05K7
Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering
This paper introduces a new framework for open-domain question answering in which the retriever and the reader iteratively interact with each other. The framework is agnostic to the architecture of the machine reading model provided it has access to the token-level hidden representations of the reader. The retriever uses fast nearest neighbor search that allows it to scale to corpora containing millions of paragraphs. A gated recurrent unit updates the query at each step conditioned on the state of the reader and the reformulated query is used to re-rank the paragraphs by the retriever. We conduct analysis and show that iterative interaction helps in retrieving informative paragraphs from the corpus. Finally, we show that our multi-step-reasoning framework brings consistent improvement when applied to two widely used reader architectures (DrQA and BiDAF) on various large open-domain datasets --- TriviaQA-unfiltered, Quasar-T, SearchQA, and SQuAD-open (code and pretrained models are available at https://github.com/rajarshd/Multi-Step-Reasoning).
accepted-poster-papers
pros:\n- novel idea for multi-step QA which rewrites the query in embedding space\n- good comparison with related work\n- reasonable evaluation and improved results\n\ncons: There were concerns about missing training details, insufficient evaluation, and presentation. These have been largely addressed in revision and I am recommending acceptance.
train
[ "SkxCvQKjyE", "H1eqAZ8iJV", "SJeVzbUoyN", "rJeN1G8oA7", "ryliayIs0X", "BklS0hsYs7", "Sygq4coqA7", "SJl8P-Fc07", "r1xQXGYqA7", "SyxjKXK90m", "HJgTj6OcRm", "BklH_p_qRm", "H1lQ7Ar52X", "BylrP5Q92m" ]
[ "public", "author", "public", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "That would be really helpful! Thanks for your update!", "Thanks for your comment!. Right now the link is intentionally anonymized. We will release the code once the decision on the paper is finalized. Thank you for your interest!", "This paper is very interesting and we're are doing follow-up research. Could the authors update their link to their source code? The current link doesn't seem to work. Thanks a lot!", "Thank you for your insightful comments which helped make the paper a lot better.", "Thanks for the authors for updating the paper. The updated paper have more clear comparisons with other models, with more & stronger experiments with the additional dataset. Also, the model is claimed to perform multi-step interaction rather than multi-step reasoning, which clearly resolves my initial concern. The analysis, especially ablations in varying number of iterations, was helpful to understand how their framework benefits. I believe these make the paper stronger along with its initial novelty in the framework. In this regard, I vote for acceptance.", "This paper introduces a new framework to interactively interact document retriever and reader for open-domain question answering. While retriever-reader framework was often used for open-domain QA, this bi-directional interaction between the retriever and the reader is novel and effective because\n1) If the retriever fails to retrieve the right document at the first step, the reader can give a signal to the retriever so that the retriever can recover its mistake at the next step\n2) The idea of `reader state` from the reader to the retriever is new\n3) The retriever use question-independent representation of paragraphs, which does not require different representation depending on the question and makes the framework easily scalable.\n\nStrengths\n1) The idea of multi-step & bi-directional interaction between the retriever and the reader is novel enough (as mentioned above). The paper contains enough literature studies on existing retriever-reader framework in open-domain setting, and clearly demonstrates how their framework is different from them.\n2) The authors run the experiments on 4 different dataset, which supports the argument about the framework’s effectiveness.\n\nWeakness\n1) The authors seem to highlight multi-step `reasoning`, while it is not `reasoning` in my opinion. Multi-step reasoning refers to the task which you need evidence from different documents, and/or you need to find first evident to find the second evidence from a different document. I don’t think the dataset here are not multi-step reasoning dataset, and the authors seem not to claim it either. Therefore, I recommend using another term (maybe `multi-step interaction`?) instead of `multi-step reasoning`.\n2) While the idea of multi-step interaction and how it benefits the overall performance is interesting, the analysis is not enough. Figure 3 in the paper does not have enough description — for example, I got the left example means step 2 recovers the mistake from step 1, but what does the right example mean?\n\nQuestions on result comparison\n1) On TriviaQA (both open and full), the authors mentioned the result is on hidden test set — did you submit it to the leaderboard? I don’t see the same numbers on the TriviaQA leaderboard. 
Also, the authors claim they are SOTA on TriviaQA, but there are higher numbers on the leaderboard (which are submitted prior to the ICLR deadline).\n2) There are other published papers with higher result on Quasar-T, SearchQA and TriviaQA (such as https://aclanthology.info/papers/P18-1161/p18-1161 and https://arxiv.org/abs/1805.08092) which the authors did not compare with.\n3) In Section 4.2, is there a reason for the specific comparison to AQA (5th line), though AQA is not SOTA on SearchQA? I don’t think it means latent space is better than natural language space. They are totally different model and the only intersection is they contains interaction between two submodules.\n4) In Section 5, the authors mentioned their framework outperforms previous SOTA by 15% margin on TriviaQA, but what is that? I don’t see 15% margin in Table 2.\n\nMarginal comments:\n1) If I understood correctly, `TriviaQA-open` and `TriviaQA-full` in the paper are officially called `TriviaQA-full` and `open-domain TriviaQA`. How about changing the term for readers to better understand the task? Also, in Section 4, the authors said TriviaQA-open is larger than web/wiki setting, but to my knowledge, this setting is part of the wiki setting.\n2) It would be great if the authors make the capitalization consistent. e.g. EM, Quasar-T, BiDAF. Also, the authors can use EM instead of `exact match` after they mentioned EM refers to exact match in Section 4.2.\n\nOverall comment\nThe idea in the paper is interesting, and their model and experiments are concrete. My only worries is that the terms in the paper are confusing and performance comparison are weak. I would like to update the score when the authors update the paper.\n\n\nUpdate 11/27/2018\nThanks for the authors for updating the paper. The updated paper have more clear comparisons with other models, with more & stronger experiments with the additional dataset. Also, the model is claimed to perform multi-step interaction rather than multi-step reasoning, which clearly resolves my initial concern. The analysis, especially ablations in varying number of iterations, was helpful to understand how their framework benefits. I believe these make the paper stronger along with its initial novelty in the framework. In this regard, I vote for acceptance.", "We thank you for your helpful reviews. We have significantly updated the writing of the paper to hopefully address all confusion and we’ve also updated the results section of the paper for better comparison. In a nutshell, we have added a section on retriever performance demonstrating the scalability of our approach (sec 4.1). We have improved results for our experiments with BiDAF reader and we have also added new results on the open-domain version of the SQuAD dataset.\n\n> In the general sense, the architecture can be seen as a specific case of a memory network. Indeed, the multi-reasoner step can be seen as the controller update step of a memory network type of inference. The retriever is the attention module and the reader as the final step between the controller state and the answer prediction.\n\nWe agree with you and think its a valid way of viewing our framework. We have updated and cited memory networks in our paper (Sec 4) . 
However, we would like to point out that most memory network architectures are based on soft-attention, but in our case the retriever actually makes a “hard selection” of the top-k paragraphs and hence for the same reason, we have to train it via reinforcement learning.\n\n> The authors claim the method is generic, however, the footnote in section 2.3 mentioned explicitly that the so-called state of the reader assumes the presence of a multi-rnn passage encoding. Furthermore, this section 2.3 gives very little detailed about the \"reinforcement learning\" algorithms used to train the reasoning module.\n\nWe agree with you and based on your comments we have made this absolutely clear in the paper. Our method needs access to the internal token level representation of the reader model in order to construct the current state. The current API of machine reading models only return the span boundaries of the answer, but for our method, it needs to return the internal state as well. What we wanted to convey is, our model does not depend/need any neural architecture re-designing to an existing reader model. To show the same, we experimented and showed improvements with two popular and widely used reader architectures - DrQA and BiDAF.\nRegarding results of BiDAF -- During submission we ran out of time and hence we could not tune the BiDAF model. But now the results of BiDAF have improved a lot and as can be seen from (Table 2, row 9), the results of BiDAF are comparable to that of DrQA. \nWe have also significantly updated the model section of our paper to include more details about methods and training (Sec 2 & 3) with details about our policy gradient methods and training procedure.\n\n> Finally, the experimental section, while giving encouraging results on several datasets could also have been used on QAngaroo dataset to assess the multi-hop capabilities of the approach. \n\nWe did not consider QAngaroo for the following reasons -- (a) The question in QAngaroo are based on knowledge base relations and are not natural language questions. This makes the dataset a little synthetic in nature and we were unsure if our query reformulation strategy would work in this synthetic setting. (b) In this paper, we have tried to focus on datasets for open domain settings where the number of paragraphs per query is large (upto millions). QAngaroo on the other hand is quite small in that respect (avg of 13.7 paragraphs per question). We were unsure, that in this small setting, if we would see significant gains by doing query reformulation. \n\nWe have shown the effectiveness of our model in 4 large scale datasets including new results on SQuAD-open since submission. We sincerely hope, we will not be penalized for not showing the effectiveness of our model on enough number of datasets.\n\n> Furthermore, very little details are provided regarding the reformulation mechanism and its possible interpretability.\n\nWe have significantly updated this section of the paper. We have added a whole new section (Sec 5.3) with detailed analysis of the effect of query reformulation. In Table 4, we quantitatively measure if the iterative interaction between the retriever and reader is able to retrieve better context for the reader.\n\n", "We thank you for your very useful and detailed review. We have significantly updated the writing of the paper to hopefully address all confusion and we’ve also updated the results section of the paper for better comparison. 
In a nutshell, we have added a section on retriever performance demonstrating the scalability of our approach (sec 5.1). We have improved results for our experiments with BiDAF reader and we have also added new results on the open-domain version of the SQuAD dataset. Below we address your concerns point-by-point.\n\n1. The authors seem to highlight multi-step `reasoning`, while it is not `reasoning` in my opinion. Multi-step reasoning refers to the task which you need evidence from different documents, and/or you need to find first evident to find the second evidence from a different document. I don’t think the dataset here are not multi-step reasoning dataset, and the authors seem not to claim it either. Therefore, I recommend using another term (maybe `multi-step interaction`?) instead of `multi-step reasoning`.\n\nAfter much discussion among us, we have arrived to an agreement with your comment. We have renamed the title of the paper to “Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering”.\nWe believe that our framework that supports retriever-reader interaction would be a starting point to build models for multi-hop reasoning but the current datasets do not explicitly need models with such inductive bias. There has been some very recent efforts in this direction such as HotpotQA -- but this dataset was very recently released (after the ICLR submission deadline).\n\n2. While the idea of multi-step interaction and how it benefits the overall performance is interesting, the analysis is not enough. Figure 3 in the paper does not have enough description — for example, I got the left example means step 2 recovers the mistake from step 1, but what does the right example mean?\n\nWe have significantly updated this section of the paper with much more analysis. We have included a new section on analysis of results (Sec 4.3) in which we quantitatively measure if the iterative interaction between the retriever and the reader is able to retrieve better context for the reader. We have also updated Figure 2 to report the results of our model for steps = {1, 3, 5, 7} for SearchQA, Qusar-T and TriviaQA-unfiltered.\nTo answer your specific question about the second example from figure 3, after the query reformulation the new paragraph that was added also has the right answer string, i.e. the total occurrence of the correct answer span increased after the reformulation step. Since we sum up the scores of spans, this led to the overall increase in the score of the right answer span (Demeter, in Figure 3) to be the maximum. We have explained this in the text of the paper.\n\n3. On TriviaQA (both open and full), the authors mentioned the result is on hidden test set — did you submit it to the leaderboard? I don’t see the same numbers on the TriviaQA leaderboard. Also, the authors claim they are SOTA on TriviaQA, but there are higher numbers on the leaderboard (which are submitted prior to the ICLR deadline).\n\nWe apologize for the confusion about this experiment. Ours and the reported baseline results are on the “TriviaQA-unfiltered” dataset (unfiltered version in http://nlp.cs.washington.edu/triviaqa/), for which there is no official leaderboard. The unfiltered version is built for open-domain QA. The evidence for each question in this setting are top 10 documents returned by Bing search results along with the Wikipedia pages of entities in the question. 
In the web setting, each question is associated with only one web document and in the Wikipedia setting, each question is associated with the wiki pages of entities in the question (1.78 wiki pages per query on avg.) Thus, the unfiltered setting has much more number of paragraphs than the individual web/wiki setting. Moreover, there is no guarantee that every document in the evidence will contain the answer making this setting even more challenging. However we did submit our model predictions to the TriviaQA admin who emailed us back the result on the hidden test set and to the best of our knowledge, we achieve the highest result on this setting of TriviaQA. We have updated the paper by naming this experiment TriviaQA-unfiltered and have clarified other details.\n", "Response to Reviewer 2 (continued from before)\n4. There are other published papers with higher result on Quasar-T, SearchQA and TriviaQA (such as https://aclanthology.info/papers/P18-1161/p18-1161 and https://arxiv.org/abs/1805.08092) which the authors did not compare with.\n\nWork by (Min, Zhong, Socher, Ziong, 2018) has results on TriviaQA-wikipedia setting. Our results are on the unfiltered setting of TriviaQA as we mentioned in the previous response, hence the results are not comparable. However, their results on SQuAD-open is comparable to our new experiments on SQuAD and we have added it in Table 2.\nWe also have results of DS-QA (Lin, Ji, Liu, Sun, 2018) in Table 2. They indeed have better results than us on SearchQA and we outperform them in TriviaQA-unfiltered. We tried to reproduce their results on Quasar-T with their code base and shared hyperparameter setting, but we could not reproduce it. However, for fairness, we have reported both their reported scores and our scores in the latest version of the paper. \n\n5. In Section 5.2, is there a reason for the specific comparison to AQA (5th line), though AQA is not SOTA on SearchQA? I don’t think it means latent space is better than natural language space. They are totally different model and the only intersection is they contains interaction between two submodules.\n\nActive Question Answering (AQA) propose a model in which an query reformulation agent sits between an user and a black box “QA” system. The agent probes the reader model (BiDAF) with (N=20) reformulations of the initial natural language query and aggregates the returned evidence to yield the best answer. The reformulation is done by a seq2seq model. In our method, the query reformulation is done by a gated recurrent unit to the initial query vector and this update is conditioned on the current state of the reader. By using the same reader architecture (BiDAF) in our experiments, we find significant improvements on SearchQA and other datasets.\nWe have updated the paper to make this distinction very clear. We only wanted to convey that our strategy of query reformulation yield better empirical results than the query reformulation strategy adopted by AQA. However we do agree with you that there is no specific reason to compare this in the experiment section and we have removed it from there and added more relevant results.\n\n6. In Section 5, the authors mentioned their framework outperforms previous SOTA by 15% margin on TriviaQA, but what is that? I don’t see 15% margin in Table 2.\n\nThis is a miscalculation and was a huge oversight from our part. The relative increase from the previous best result is 9.5% (61.66 - 56.3)/56.3. 
We mistakenly calculated the improvement from results of R^3 which is a 14.98% (61.66 - 53.7)/53.7 relative increase. We have fixed it. \n\nIf I understood correctly, `TriviaQA-open` and `TriviaQA-full` in the paper are officially called `TriviaQA-full` and `open-domain TriviaQA`. How about changing the term for readers to better understand the task? Also, in Section 4, the authors said TriviaQA-open is larger than web/wiki setting, but to my knowledge, this setting is part of the wiki setting.\n\nThanks for the suggestion. Yes we agree, the naming convention we chose was confusing. `TriviaQA-full` is better known as TriviaQA-unfiltered, so we adopted that name. And for the experiment with 1.6M paragraphs per query, we have renamed it to TriviaQA-open, as per your suggestion.\n\nIt would be great if the authors make the capitalization consistent. e.g. EM, Quasar-T, BiDAF. Also, the authors can use EM instead of `exact match` after they mentioned EM refers to exact match in Section 5.2.\nWe have fixed this, thanks!\n\n\n\n", "Based on the insightful feedback from our reviewers, we’ve updated our paper. Below we summarize the general changes.\n\t\nWriting and analysis of results: We have significantly improved the writing of our paper, especially the model (Sec 2, Sec 3) and the experiments section (Sec 5). We have added the details of our training methodology (e.g. details of reinforcement learning and various hyperparameters). In the experiments section, we have included a new section on analysis of results (Sec 5.3) in which we quantitatively measure if the iterative interaction between the retriever and reader is able to retrieve better context for the reader (Table 4)\n\nPerformance of paragraph retriever: We have added a new section on the performance of the paragraph retriever (Sec 4.1). We show that our retriever architecture based on fast nearest neighbor search can scale to corpus containing millions of paragraphs where as retrievers of current best-performing models cannot scale to that size.\n\nNew BiDAF results: During initial submission we ran out of time and could not tune our implementation of the BiDAF model. But since, the results of BiDAF have improved a lot and are comparable to that of DrQA (Table 2).\n\nNew results on SQuAD-open: We have also added new results on another popular dataset -- the open domain setting of SQuAD. Following the setting of Chen et al., (2017), we were able to demonstrate that our framework of multi-step-interaction improves the exact match performance of a base DrQA model from 27.1 to 31.9.\n\nChange in title:. Following the comment by reviewer 2, we have renamed the title of the paper to “Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering”.\nWe believe that our framework that supports retriever-reader interaction would be a starting point to build models for multi-hop “reasoning” but the current datasets do not explicitly need models with such inductive bias. Hence it will be more appropriate for our work to have this title. \n", "Response to Reviewer 1 (continued from before)\n\nMoreover, for TriviaQA their results and the cited baselines seem to all perform well below to current top models for the task (cf. https://competitions.codalab.org/competitions/17208#results).\n\nWe apologize for the confusion about this experiment. Ours and the reported baseline results are on the “TriviaQA-unfiltered” dataset (unfiltered version in http://nlp.cs.washington.edu/triviaqa/), for which there is no official leaderboard. 
The unfiltered version is built for open-domain QA. The evidence for each question in this setting are top 10 documents returned by Bing search results along with the Wikipedia pages of entities in the question. In the web setting, each question is associated with only one web document and in the Wikipedia setting, each question is associated with the wiki pages of entities in the question (1.78 wiki pages per query on avg.) Thus, the unfiltered setting has much more number of paragraphs than the individual web/wiki setting. Moreover, there is no guarantee that every document in the evidence will contain the answer making this setting even more challenging. However, we did submit our model predictions to the TriviaQA admin who emailed us back the result on the hidden test set. We have updated the paper by naming this experiment TriviaQA-unfiltered and have clarified other details.\n\nI would also like to see a better analysis of how the number of steps helped increase F1 for different models and datasets. The presentation should include a table with number of steps and F1 for different step numbers they tried. (Figure 2 is lacking here.)\n\nWe have included a detailed result in figure 2 where we note the results of our model for steps = {1, 3, 5, 7} for SearchQA, Qusar-T and TriviaQA-unfiltered. The key takeaway from the result is that multi-step interaction uniformly increases the performance across all the datasets.\n\nIn the text, the authors claim that their result shows that natural language is inferior to 'rich embedding spaces'. They base this on a comparison with the AQA model. There are two problems with this claim: 1) The two approaches 'reformulate' for different purposes, retrieval and machine reading, so they are not directly comparable. 2) Both approaches use a 'black box' machine reading model, but the authors use DrQA as the base model while AQA uses BiDAF. Indeed, since the authors have an implementation of their model that uses BiDAF, an additional comparison based on matched machine reading models would be interesting.\n\nWe have now reported the results of our method with a BiDAF reader on SearchQA (row 9, table 2) and have shown that our method outperforms AQA by a significant margin when both the model uses the same reader architecture (BiDAF).\n\nActive Question Answering (AQA) propose a model in which an query reformulation agent sits between an user and a black box “QA” system. The agent probes the reader model (BiDAF) with (N=20) reformulations of the initial natural language query and aggregates the returned evidence to yield the best answer. The reformulation module is trained end to end using policy gradients to maximize the F1 of the reader. In our method as well, the query reformulation is done to the initial query vector to maximize the F1 of the reader. In other words, both methods are reformulating to improve retrieval. By using the same reader architecture (BiDAF) in our experiments, we find significant improvements on SearchQA. We have updated the paper to make this distinction very clear. \n", "We sincerely thank you for your insightful comments and we’re glad that you found our approach interesting. Based on your comments, we have significantly improved the writing of the paper with more details and have added more evaluation. Below we address your concerns point-by-point.\n\n- I find some of the description of the models, methods and training is lacking detail. For example, their should be more detail on how REINFORCE was implemented; e.g. 
was a baseline used?\n\nWe have significantly updated the model section of our paper to include more details about methods and training (Sec 2 & 3). To answer your specific question about use of variance reduction baseline with REINFORCE -- In question answering settings, it has been noted by previous work such as Shen et al., (2017) that common variance reduction techniques don’t work well. We also tried experimenting with a commonly used baseline - the average reward in a mini-batch, but found that it significantly degrades the final performance.\n\nI am not sure about the claim that their method is agnostic to the choice of machine reader, given that the model needs access to internal states of the reader and their limited results on BiDAF.\n\nWe agree with you and based on your comments we have made this absolutely clear in the paper. Our method needs access to the internal token level representation of the reader model in order to construct the current state. The current API of machine reading models only return the span boundaries of the answer, but for our method, it needs to return the internal state as well. What we wanted to convey is, our model does not depend/need any neural architecture re-designing to an existing reader model. To show the same, we experimented and showed improvements with two popular and widely used reader architectures - DrQA and BiDAF.\nRegarding results of BiDAF -- During submission we ran out of time and hence we could not tune the BiDAF model. But now the results of BiDAF have improved a lot and as can be seen from (Table 2, row 9), the results of BiDAF are comparable to that of DrQA. \n\nIt is not clear to me which retrieval method was used for each of the baselines in Table 2.\n\nWe report the best performance for each of our baseline that is publicly available. Most of the results for the baseline (except DS-QA) are taken as reported in the R^3 paper. We briefly describe the retrieval method used by the baselines below:\n(a) R^3 and DS-QA, like us, has a trained retriever module. R^3 retriever is based on the Match-LSTM model and DS-QA is based on DrQA model (more details in the respective papers). However, their retrievers compute query dependent para representation and hence don’t scale as we experimentally demonstrate in Fig 2.\n(b) AQA, GA and BiDAF lack an explicit retriever module. They concatenate all paragraphs in the context and feed it to their respective machine reading module. Since the reader has to find the answer from possible very large context (because of concatenation), these models have lower performance as can be seen from Table 2.\n\nWhy does Table 2 not contain the numbers obtained by the DrQA model (both using the retrieval method from the DrQA method and their method without reinforcement learning)? That would make their improvements clear.\n\nThanks for suggesting this experiment! We ran the experiment and results are in (Table 2, row 7). We trained a DrQA baseline model and the results indeed suggest that multi-step reasoning give uniform boost in performance across all datasets.", "The paper proposes a multi-document extractive machine reading model and algorithm. The model is composed of 3 distinct parts. First, the document retriever and the document reader that are states of the art modules. 
Then, the paper proposes to use a \"multi-step-reasoner\" which learns to reformulate the question into its latent space wrt its current value and the \"state\" of the machine reader.\n\nIn the general sense, the architecture can be seen as a specific case of a memory network. Indeed, the multi-reasoner step can be seen as the controller update step of a memory network type of inference. The retriever is the attention module and the reader as the final step between the controller state and the answer prediction.\n\nThe authors claim the method is generic, however, the footnote in section 2.3 mentioned explicitly that the so-called state of the reader assumes the presence of a multi-rnn passage encoding. Furthermore, this section 2.3 gives very little detailed about the \"reinforcement learning\" algorithms used to train the reasoning module.\n\nFinally, the experimental section, while giving encouraging results on several datasets could also have been used on QAngaroo dataset to assess the multi-hop capabilities of the approach. Furthermore, very little details are provided regarding the reformulation mechanism and its possible interpretability.", "The authors improve a retriever-reader architecture for open-domain QA by iteratively retrieving passages and tuning the retriever with reinforcement learning. They first learn vector representations of both the question and context, and then iteratively change the vector representation of the question to improve results. I think this is a very interesting idea and the paper is generally well written.\n\nI find some of the description of the models, methods and training is lacking detail. For example, there should be more detail on how REINFORCE was implemented; e.g. was a baseline used?\n\nI am not sure about the claim that their method is agnostic to the choice of machine reader, given that the model needs access to internal states of the reader and their limited results on BiDAF.\n\nThe presentation of the results left a few open questions for me:\n\n - It is not clear to me which retrieval method was used for each of the baselines in Table 2.\n - Why does Table 2 not contain the numbers obtained by the DrQA model (both using the retrieval method from the DrQA method and their method without reinforcement learning)? That would make their improvements clear.\n - Moreover, for TriviaQA their results and the cited baselines seem to all perform well below the current top models for the task (cf. https://competitions.codalab.org/competitions/17208#results).\n - I would also like to see a better analysis of how the number of steps helped increase F1 for different models and datasets. The presentation should include a table with number of steps and F1 for different step numbers they tried. (Figure 2 is lacking here.)\n - In the text, the authors claim that their result shows that natural language is inferior to 'rich embedding spaces'. They base this on a comparison with the AQA model. There are two problems with this claim: 1) The two approaches 'reformulate' for different purposes, retrieval and machine reading, so they are not directly comparable. 2) Both approaches use a 'black box' machine reading model, but the authors use DrQA as the base model while AQA uses BiDAF. Indeed, since the authors have an implementation of their model that uses BiDAF, an additional comparison based on matched machine reading models would be interesting.\n- Generally, it would be great to see more detailed results for their BiDAF-based model as well.\n" ]
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "H1eqAZ8iJV", "SJeVzbUoyN", "iclr_2019_HkfPSh05K7", "ryliayIs0X", "r1xQXGYqA7", "iclr_2019_HkfPSh05K7", "H1lQ7Ar52X", "BklS0hsYs7", "SJl8P-Fc07", "iclr_2019_HkfPSh05K7", "BklH_p_qRm", "BylrP5Q92m", "iclr_2019_HkfPSh05K7", "iclr_2019_HkfPSh05K7" ]
iclr_2019_HkfYOoCcYX
Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network
Weight pruning has been introduced as an efficient model compression technique. Even though pruning removes significant amount of weights in a network, memory requirement reduction was limited since conventional sparse matrix formats require significant amount of memory to store index-related information. Moreover, computations associated with such sparse matrix formats are slow because sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently. As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested. Decoding non-zero weights, however, is still sequential in Viterbi-based pruning. In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix. The proposed sparse matrix is constructed by combining pruning and weight quantization. For the latest RNN models on PTB and WikiText-2 corpus, LSTM parameter storage requirement is compressed 19x using the proposed sparse matrix format compared to the baseline model. Compressed weight and indices can be reconstructed into a dense matrix fast using Viterbi encoders. Simulation results show that the proposed scheme can feed parameters to processing elements 20 % to 106 % faster than the case where the dense matrix values directly come from DRAM.
accepted-poster-papers
The authors propose an efficient scheme for encoding sparse matrices that allows weights to be compressed efficiently. At the same time, the proposed scheme allows for fast parallelizable decompression into a dense matrix using Viterbi-based pruning. The reviewers noted that the techniques address an important problem relevant to deploying neural networks on resource-constrained platforms, and although the work builds on previous work, it is important from a practical standpoint. The reviewers noted a number of concerns about the initial draft of this work related to the experimental methodology and the absence of a runtime comparison against the baseline, which the authors have since addressed in the revised draft. The reviewers were unanimous in recommending that the revision be accepted, and the authors are requested to incorporate the final changes that they said they would make in the camera-ready version.
train
[ "HJlNr-wCCm", "SyxPPFpDhm", "SJlvpFOh0X", "SJgAEEpDhQ", "H1gFqAPjAm", "Byl9RiviCX", "HJlAUngO2X", "r1gC_KvsAQ", "SygQztZjCX", "Hke8lgOq0m", "Hygh6aDcA7", "HygDJA3YRQ", "B1gIeQnYRQ", "SJxecAdFRX", "BJly3hutA7", "rygt-_dF0X" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank you for your kind response. We tried to address your requests as below:\n \n1. As suggested, we will add the information about the comparison between “Multi-bit quantization only” case and “Multi-bit-quantization + Viterbi-based binary code encoding\" case in the manuscript when we are allowed to update the manuscript.\n\t \n2. Thanks for the valuable comments. We actually initially considered applying the proposed method without “Don’t Care” elements on pruned networks. However, such a method has several issues and we decided to use the “Don’t Care” elements and the index matrix mask generated by the Viterbi-Pruning (1st step of the training process).\n\t \nFirst, applying proposed method on pruned networks without “Don’t Care” elements requires more bits per weight because at least two bits are required to represent each weight bit to express pruned bit as well as +1 and -1. Eg: +1,0(pruned bit),-1.\nSecond, if the pruned bits have errors in the initial try, they need to be retrained as well as the non-zero bits during the retraining. As a result, another pruning needs to be done in each retraining step to maintain the pruning rate. Repeating the pruning in every retraining step in the loop leads to slower compression process and worse accuracy. Note that pruning is not repeated in our method (Fig. 1)\nThird, using “Don’t Care” elements helps to find good Viterbi encoded weight because Viterbi Decompressor produces “0” and “1” with 50% probability each. If “Don’t Care” is not used, the number of “1” and “0” in the weight matrix can be significantly different depending on the sparsity of the pruned weight matrix so it is hard to find the “good” weight matrix from the Viterbi Decompressor outputs. \n\t \nTo verify, we conducted additional experiments on using the LSTM model on the PTB corpus applying the proposed method with and without “Don’t Care” elements. The test PPW of “without Don’t Care” case was 92.0, while the test PPW of “with Don’t Care” case was 84.6. We could observe that “Don’t Care” elements play significant role in improving the performance of the network compressed with proposed method. We will add this result to the manuscript to clarify that “Don’t Care” elements are required in the proposed method.  \n  \n3. We will update the manuscript as suggested. Thank you very much.\n", "This paper presents a new way to represent a dense matrix in a compact format. First, the method prunes a dense matrix based on the Viterbi-based pruning. Then, the pruned matrix is quantized with alternating multi-bit quantization. Finally, the binary vectors produced by the quantization algorithm are further compressed with the Viterbi-based algorithm. It spots the problem of each existing approach and solve the problems by combining each method. The combination is new and the result is encouraging.\n\nI find this paper is interesting and I like the strong results. It is an interesting combination of methods. However, the experiments are not enough to show that the proposed method is really needed to achieve the results. If these are answered well, I'd be happy to change my evaluation.\n\n1. The method should be compared with other combinations of components. At least, it should be compared with \"Multi-bit quantization only (Xu et al., 2018)\" and \"Multi-bit-quantization + Viterbi-based binary code encoding\".\n\n2. The experiments with \"Don't Care\" should go to the experiment section, and the end-to-end results should be present but not the ratio of incorrect bits.\n\n3. 
Similarly, the paper will become stronger if it has some experimental results that compare quantization methods. In Section 3.3., it mentions that the conventional k-bit quantization was tried and significant accuracy drops were observed. I feel that this is a kind of things which support the proposed method if it is properly assessed.\n\n4. When you say \"slow\" form something and propose a method to address it, I'd like to see some benchmark numbers. There is an experiment with simulation, but that does not seem to simulate the slow \"sequential sparse matrix decoding process\".\n\nMinor comments:\n\n* It was a bit hard to understand how a matrix is processed through the flowchart in Fig. 1 at first glance. It would help readers to understand it better if it has a corresponding figure which shows how a matrix is processed through the flowchart.", "Thank you for the revision. It looks better now.\n\n1. I'd suggest to put what you wrote in the manuscript. You could just have additional rows in the tables and your consideration. You need to say that the additional cost is small enough for the benefit with some numbers.\n\n2. The text says \"To verify the effectiveness of using the \"Don’t Care\" elements, we apply our proposed method on the original network and pruned one. \" Why don't you apply the proposed method on the pruned one with and without the \"Don't Care\" elements and compare the results?\n\n3. I'd recommend to put the numbers in the manuscript. It could be added as a footnote.\n\n4. Thank you for doing this.\n\n5. Thanks.", "Summary:\n\nThis paper addresses the computational aspects of Viterbi-based encoding for neural networks. \n\nIn usual Viterbi codes, input messages are encoded via a convolution with a codeword, and then decoded using a trellis. Now consider a codebook with n convolutional codes, of rate 1/k. Then a vector of length n is represented by inputing a message of length k and receiving n encoded bits. Then the memory footprint (in terms of messages) is reduced by rate k/n. This is the format that will be used to encode the row indices in a matrix, with n columns. (The value of each nonzero is stored separately.) However, it is clear that not all messages are possible, only those in the \"range space\" of my codes. (This part is previous work Lee 2018.) \n\nThe \"Double Viterbi\" (new contribution) refers to the storage of the nonzero values themselves. A weakness of CSR and CSC (carried over to the previous work) is that since each row may have a different number of nonzeros, then finding the value of any particular nonzero requires going through the list to find the right corresponding nonzero, a sequential task. Instead, m new Viterbi decompressers are included, where each row becomes (s_1*codeword_1 + s_2*codeword2 + ...) cdot mask, and the new scalar are the results of the linear combinations of the codewords. \n\nPros:\n - I think the work addressed here is important, and though the details are hard to parse and the new contributions seemingly small, it is important enough for practical performance. \n - The idea is theoretically sound and interesting.\n\nCons: \n - My biggest issue is that there is no clear evaluation of the runtime benefit of the second Viterbi decompressor. Compressability is evaluated, but that was already present in the previous work. 
Therefore the novel contribution of this paper over Lee 2018 is not clearly outlined.\n - It is extremely hard to follow what exactly is going on; I believe a few illustrative examples would help make the paper much clearer; in fact the idea is not that abstract. \n - Minor grammatical mistakes (missing \"a\" or \"the\" in front of some terms, suggest proofread.)\n\n", "Thank you very much for your constructive feedback and score revision. We appreciate it!", "Thanks. I missed that sentence in section 3.4. I have revised the score. The runtime issue that other reviewers had is a tough one. I will let other reviewers lead the discussion. Thanks for the great work.", "The paper proposes two additional steps to improve the compression of weights in deep neural networks. The first is to quantize the weights after pruning, and the second is to further encode the quantized weights.\n\nThere are several weaknesses in this paper. The first one is clarity. The paper is not very self-contained, and I have to constantly go back to Lee et al. and Xu et al. in order to read through the paper.\n\nThe paper can be made more mathematically precise. The input and output types of each block in Figure 1. should be clearly stated. For example, in Section 3.2, it can be made clear that the domain of the quantization function is the real and the codomain is a sequence k bits. Since the paper relies so heavily on Lee et al., the authors should make an effort to summarize the approach in a mathematically precise way.\n\nThe figures are almost useless, because the captions contain very little information. For example, the authors should at least say that the \"D\" in Figure 2. stands for delay, and the underline in Figure 4. indicates the bits that are not pruned. Many more can be said in all the figures.\n\nThe second weakness is experimental design. There are two conflicting qualities that need to be optimized--performance and compression rate. When optimizing the compression rate, it is important not to look at the test set error. If the compression rate is optimized on the test set, then the compressed model is nothing but a model overfit to the test set. The test set is typically small compared to the training set, so it is no surprise that the compression rate can be as high as 90%.\n\nOptimizing compression rates should be done on the training set with a separate development set. The test set should not used before the best compression scheme is selected. Both the results on the development set and on the test set should be reported for the validity of the experiments. I do not see these experimental settings mentioned anywhere in the paper, and this is very concerning. Lee et al. seem to make similar mistakes, and it is likely that their experimental design is also flawed.", "The language modeling and machine translation experiments also did not use test set in the training. The pruning was done on the training set and tuned on validation set. The Table A1,A2,A3,A4 already have separate accuracy data for validation error and test error.\nUnfortunately, we cannot upload the revised manuscript any more. We will update the manuscript as follows when we are allowed to revise it again. \nAt the bottom of Section 3.4 of the current manuscript, there is a sentence which we intended to state that we did not use the test set for hyperparameter tuning throughout the paper. 
The sentence is that “Note that entire training process used the training dataset and the validation dataset only to decide the best compressed weight data. The accuracy measurement for the test dataset was done only after training is finished so that any hyperparameter was not tuned on the test dataset.”. \nFor further clarification, we will add the following sentence right after it. \n“All the experiments in this paper followed the above training principle”.\nWe will also add the following sentences in appendix A.3 and A.4 to make it clear.\n“As described in Section 3.4, the compression is done on the training set and tuned on validation set so that any hyperparameter was not tuned on the test dataset during the process.” \n\nIn fact, the information you requested for Fig. 5 was already included in the figure itself. In the next revision, we will elaborate the information in the caption for better readability as follows. \n“In the figure, each circle indicates a state. A circle which is the source point of arrows indicates a current state and a circuit which is the sink point of arrows indicates a next state. The arrow indicates a transition from the current state to the next state. Depending on the input to VD, each current state can be switched to one of the two potential next states in the next clock. The number in a circle indicates the index for the state.”\nFor Fig. 2, we will add the following sentence in addition to the existing description in the caption.\n“the + symbol indicates an adder.” \nCaptions for Figs. 1, 3, 4, 6 have been already updated with additional information. We will be happy to elaborate further if the Reviewer has additional suggestions. ", "What about the LM and MT experiments? Is the pruning also done on a subset of the training set and tuned on the validation set? If so, please state them clearly in the paper. If not, the authors should redo those experiments as well.\n\nWhen revising the captions, please add useful descriptions not just more words. For example, please describe what the nodes, arrows, and numbers mean. Please don't just revise the captions in figure 5. Apply this to all captions.", "Thank you very much for your quick and positive response. Considering your suggestion, we updated our manuscript as follows:\n\nTo clarify what Fig. 6c is representing, we changed the label of Y axis of Fig. 6c to \"Parameter feeding rate\". Our data is describing the number of parameters fed to PEs in certain clocks - it is feeding rate, as you suggested.", "Thank you very much for your positive and rapid response. We appreciate it. As suggested, we made the following updates in the manuscript.\n\n1. We included the new numbers which we showed in the previous response in the manuscript. The unpruned results have been also added in all the tables. As a result, Table 1, Table 2, Table 3 have been updated. In particular, table 3 now shows the accuracy data for validation set and the test set separately.\n2. At the bottom of Section 3.4, we explicitly stated that we did not use the test set for training. \n3. In section 4.2, we explained that we randomly selected 5K validation set to avoid using the test set for training. \n4. In section 3.1, we added a brief sentence for describing Viterbi-pruning more. We agree that Viterbi-pruning deserves more description in the manuscript but please understand that the page limit still makes us to defer the most of detail to the appendix A.1. \n5. We added more information in the captions for Fig. 3, 4. 5. 
", "Thanks for the update. The paper now reads better. Since the Viterbi pruning is heavily used in the paper, I still think it deserves more text in section 3.1. I also think the captions can include more information, such as Figure 3, 4, and 5.\n\nThe numbers provided above are good and should be included in the paper. The authors should also explain clearly how the experiments are carried out. It would be great to provide the unpruned results in all tables.\n\nThe experimental methodology should be strictly followed for all experiments, i.e., using a data set from the training set (subsampled or not) to prune the network, tuning the hyperparameters on the validation set, and testing the models on the test set. It is especially important since the pipeline involves a re-training step. We do not tolerate any hyperparameter tuning on the test set, and any paper that does this should be rejected.\n\nThe numbers haven't been changed. I will raise my score once the experiments are done properly.", "Thanks for the revisions. Just to clarify, the plot (fig 6) is a rate, right? # param / xxx cycles? If not, it is a bit confusing; if so, that clarification would help, since rate is a more intuitive performance metric. Figure 1 is also great for added clarification.\n\nMy only remaining suggestion is, if available, to have some runtime comparisons as well, accompanying fig 6, just as a more visible improvement metric. Otherwise, I think just clarifying what exactly Fig 6 is plotting already strengthens the paper significantly. ", "Thank you very much for the constructive comments. We tried to strengthen our claims by adding more experimental data which the Reviewer requested.\n\n1. The proposed \"Multi-bit-quantization + Viterbi-based binary code encoding\" requires slightly larger memory footprint than \"Multi-bit quantization only ([4])\" because some of the Viterbi encoded bits have different indices from their corresponding quantization bits. Hence, the \"Multi-bit quantization only\" requires 10 % to 20 % smaller memory footprint than \"Multi-bit-quantization + Viterbi-based binary code encoding\" case. However, the main reason why we apply the Viterbi weight encoding is that parallel sparse-to-dense matrix conversion can be done by applying same Viterbi encoding process to the non-zero values and indices of the non-zero values in parallel. This parallel sparse-to-dense conversion makes the speed of feeding parameters to PEs 10 % to 40 % faster compared to [1] (Figure 6c).\n\n2. Per Reviewer’s suggestion, the experimental results for the effectiveness of \"Don’t Care\" term have been moved to Section 4.1.\n\n3. Per Reviewer's suggestion, we measured accuracy differences before and after Viterbi encoding for several quantization methods such as linear quantization ([2]), logarithmic quantization ([3]), and alternating quantization ([4]) methods with the same quantization bits (3-bit). The result shows that combination with alternating quantization and Viterbi weight encoding had only 2 % validation accuracy degradation after the Viterbi encoding was applied first right after the quantization and the accuracy was easily recovered with retraining. On the other hand, the combination with the other quantization methods and Viterbi weight encoding showed accuracy degradation as much as 71 %, which was too large to recover the accuracy with retraining. The accuracy difference mainly results from the uneven weight distribution. 
Because weights of neural networks usually are normally distributed, the composition ratio of '0' and '1' is not equal when the linear or logarithmic quantization is applied to the weights of neural networks. As we stated in the manuscript, Viterbi encoder tends to produce similar number of '0' and '1'. Therefore, we can conclude that under the same bit condition, alternating quantization method shows best accuracy and compatibility with our bit-by-bit Viterbi encoding scheme regardless of the type of neural networks.\n\n4. We conducted additional simulations to compare sparse matrix reconstruction speed of [1] and the proposed method. We used a random 512-by-512 size matrix with various pruning rate ranging from 75 % to 95 %. We conducted the simulations under the assumptions described in Figure 6c. The simulation results are shown in Figure 6c in updated manuscript. We could observe that the proposed method could feed 10 % to 40 % more nonzero weights and input activations to PEs in same 10000 cycles compared to [1]. Proposed method could also feed parameters to PEs 20 % to 106 % faster compared to baseline method, which reads dense weight and activation matrices directly from DRAM. The improvement in the proposed scheme mainly comes from the parallelized process of assigning non-zero values to their corresponding indices in the weight matrix. While preparing addition data for the rebuttal, we realized that our simulation model did not fully exploit the parallelized weight and index decoding process of the proposed method. After further optimization, we could observe that the parameter feeding rate of the proposed method increased compared to the reported data in original manuscript. Therefore, we updated Figure 7 in original manuscript to Figure 6c in updated manuscript according to the new data.\n\n5. We added the change of the exact weight representation at each process in Figure 1 to clarify the flowchart.\n\nReference\n[1] Dongsoo Lee, Daehyun Ahn, Taesu Kim, Pierce I. Chuang, and Jae-Joon Kim. Viterbi-based pruning for sparse matrix with fixed and high index compression ratio. International Conference on Learning Representations (ICLR), 2018.\n[2] Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 2849–2858. 2016.\n[3] Daisuke Miyashita, Edward H. Lee, and Boris Murmann. Convolutional Neural Networks using Logarithmic Data Representation. CoRR, abs/1603.01025, 2016. URL https://arxiv.org/abs/1603.01025.\n[4] Chen Xu, Jianqiang Yao, Zouchen Lin, Wenwu Qu, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. Alternating multi-bit quantization for recurrent neural networks. International Conference on Learning Representations (ICLR), 2018.\n", "Thank you very much for the positive comments. We added the more experimental data of runtime analysis to address the Reviewer's main concern.\n\nQ1. My biggest issue is that there is no clear evaluation of the runtime benefit of the second Viterbi decompressor. Compressability is evaluated, but that was already present in the previous work. Therefore the novel contribution of this paper over [1] is not clearly outlined.\n\nWe conducted additional simulations to evaluate the runtime benefit of the proposed method compared to that of the method in [1]. 
We generated random 512-by-512 matrices with pruning rate ranging from 70 % to 95 % and simulated the number of parameters fed to PEs in 10000 cycles. The assumptions used for the simulation and analysis data have been updated in Figure 6c of the revised manuscript. We could observe that proposed parallel weight decoding based on the second Viterbi decompressor allowed 10 % to 40 % more parameters to be fed to PEs than the previous design [1]. The proposed method outperformed both baseline method and [1] in all simulation results. Please note that the data described in Figure 6c has been updated from Figure 7, and our method shows better performance in new data compared to the data shown in the original manuscript. While preparing for the rebuttal, we realized that our simulation model did not fully exploit the parallelized weight and index decoding process of the proposed method. After further optimization, we could observe that the parameter feeding rate of the proposed method increased compared to the reported data in original manuscript. Therefore, we updated Figure 7 in original manuscript to Figure 6c in the updated manuscript according to the new data.\n\nQ2. It is extremely hard to follow what exactly is going on; I believe a few illustrative examples would help make the paper much clearer; in fact the idea is not that abstract.\n\nIn the revision, we added the more precise mathematical description of the input and output of each block in Figure 1 and showed the change of the exact weight representation at each process. We first prune weights in a neural network with the Viterbi-based pruning scheme [1], then we quantize the pruned weights with the alternating quantization method [2]. Our main contribution is the third process, which includes encoding each weight with the Viterbi algorithm, and retraining for the recovery of accuracy. With our proposed method, the sparse and encoded weights are reconstructed to a dense matrix as described in Figure 2. Figure 2 illustrates the purpose of our proposed scheme, which is the parallelization of the whole sparse-to-dense conversion process with the VDs while maintaining the high compression rate.\n\nQ3. Minor grammatical mistakes (missing \"a\" or \"the\" in front of some terms, suggest proofread.)\n\nThanks very much for the suggestions. We tried to fix grammatical mistakes as much as possible in the revision.\n\nReference\n[1] Dongsoo Lee, Daehyun Ahn, Taesu Kim, Pierce I. Chuang, and Jae-Joon Kim. Viterbi-based pruning for sparse matrix with fixed and high index compression ratio. International Conference on Learning Representations (ICLR), 2018.\n[2] Chen Xu, Jianqiang Yao, Zouchen Lin, Wenwu Qu, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. Alternating multi-bit quantization for recurrent neural networks. International Conference on Learning Representations (ICLR), 2018.", "Thank you very much for the comments. We believe that this response can help the Reviewer to be more convinced about the validness of our experiments; in particular, the validness of our retraining methodology.\n\nQ1. The paper is not very self-contained, and I have to constantly go back to [1] and [2] in order to read through the paper.\n\nIn the original manuscript, we had to limit the detailed information of the previous work due to the page limit. Based on the Reviewer’s comments, we added more description about the schemes we adopted from [1] and [2] in Appendix A.1 and A.2 of the revised manuscript.\n\nQ2. The input and output types of each block in Figure 1. 
should be clearly stated, and the figures are almost useless because the captions contain very little information.\n\nWe tried to add more information to the figures in the revision. First, in Figure 1, we added the more mathematically precise description of input and output of each block to show how the exact weight representation is changed at each process. We also added additional explanation for 'D' of Figure 2 in its caption. For the Figure 4, we added the description of the underlined numbers.\n\nQ3. Optimizing compression rates should be done on the training set with a separate development set. The test set should not used before the best compression scheme is selected. Both the results on the development set and on the test set should be reported for the validity of the experiments.\n\nThanks for pointing this out. We believe that this is the Reviewer 1's core question so would like to justify our results more in detail in this response and try to convince the Reviewer. We agree that optimizing compression rates should not use the test set before the best compression scheme is selected. In fact, in case of PTB and Wikitext-2 corpus, we already used the provided validation set and measured the test PPW only once after training (Table 2) in the original manuscript. From the Table 2, we can see that our proposed scheme maintains the accuracy of the uncompressed baseline network. On the other hand, the CIFAR-10 dataset does not include a separate validation set, so we had to use the test set in the retraining process. To avoid using the test set in the retraining process as the Reviewer pointed out, we randomly selected 5K validation images among the original 50K training images in CIFAR-10 dataset, and applied our scheme. Then, we observed the training and validation accuracy at each training epoch, and measured the test accuracy once after training. The accuracy results are as shown in the following table. Note the compression rates are the same as the data in Table 3 in the original manuscript.\n\n----------------------------------------------------------------------------------\nCompression scheme Validation Error (%) Test Error (%)\n------------------------------ --------------------------- ---------------------\n Baseline 11.5 12.2\n Pruning [1] 11.4 12.2\n VWM (Ours) 11.4 12.4\n----------------------------------------------------------------------------------\n\nThe test accuracy in the above table is about 1 % less than the accuracy which we reported in the originally submitted manuscript because the number of training data was decreased as part of the data set is used as a validation set. However, the results show that our proposed method does not make the network be overfitted to test data as the Reviewer doubted because the difference between the accuracy for validation set and test set are consistent with the values from the previous works. Note that even the uncompressed baseline network exhibits similar accuracy difference between the validation error and the test error compared with the compressed networks. Therefore, we believe that our proposed compression method does not suffer from the concerned overfitting problem regardless of the types of neural networks or dataset.\n\nReference\n[1] Dongsoo Lee, Daehyun Ahn, Taesu Kim, Pierce I. Chuang, and Jae-Joon Kim. Viterbi-based pruning for sparse matrix with fixed and high index compression ratio. 
International Conference on Learning Representations (ICLR), 2018.\n[2] Chen Xu, Jianqiang Yao, Zouchen Lin, Wenwu Qu, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. Alternating multi-bit quantization for recurrent neural networks. International Conference on Learning Representations (ICLR), 2018.\n" ]
[ -1, 6, -1, 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 2, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SJlvpFOh0X", "iclr_2019_HkfYOoCcYX", "SJxecAdFRX", "iclr_2019_HkfYOoCcYX", "Byl9RiviCX", "r1gC_KvsAQ", "iclr_2019_HkfYOoCcYX", "SygQztZjCX", "Hygh6aDcA7", "B1gIeQnYRQ", "HygDJA3YRQ", "rygt-_dF0X", "BJly3hutA7", "SyxPPFpDhm", "SJgAEEpDhQ", "HJlAUngO2X" ]
iclr_2019_Hkg4W2AcFm
Overcoming the Disentanglement vs Reconstruction Trade-off via Jacobian Supervision
A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables. This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them. In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a result, the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations. In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model. This is an order of magnitude more disentangled factors than state-of-the-art methods, while obtaining visually similar or superior results, and avoiding adversarial training.
accepted-poster-papers
The paper proposes a new way to tackle the trade-off between disentanglement and reconstruction, by training a teacher autoencoder that learns to disentangle and then distilling it into a student model. The distillation is encouraged with a loss term that constrains the Jacobian in an interesting way. The qualitative results with image manipulation are interesting and the general idea seems to be well-liked by the reviewers (and myself). The main weaknesses of the paper seem to be in the evaluation; disentanglement is, admittedly, not easy to measure. But overall the various ablation studies do show that the Jacobian regularization term improves meaningfully over Fader Networks. Given the quality of the results and the fact that this work moves the needle in an important (albeit hard to define) area of learning disentangled representations, I think it would be a good piece of work to present at ICLR, so I recommend acceptance.
train
[ "SyxL9vbI27", "H1gnj_npCm", "r1gqJD2u2X", "SyxiD3BG0Q", "rJlI1wrzAX", "HklRMVBzA7", "S1lLdzSG0X", "BJxo72kq3X", "rygfJ69oqm", "B1xizrM-q7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "This paper proposed a novel approach for learning disentangled representation from supervised data (x as the input image, y as different attributes), by learning an encoder E and a decoder D so that (1) D(E(x)) reconstructs the image, (2) E(D(x)) reconstruct the latent vector, in particular for the vectors that are constructed by mingling different portion of the latent vectors extracted from two training samples, (3) the Jacobian matrix matches and (4) the predicted latent vector matches with the provided attributes. In addition, the work also proposes to progressively add latent nodes to the network for training. The claim is that using this framework, one avoid GAN-style training (e.g., Fader network) which could be unstable and hard to tune. \n\nAlthough the idea is interesting, the experiments are lacking. While previous works (e.g., Fader network) has both qualitative (e.g., image quality when changing attribute values) and quantitative results (e.g., classification results of generated image with novel combination of attributes), this paper only shows visual comparison (Fig. 4 and Fig. 5), and its comparison with Fader network is a bit vague (e.g., it is not clear to me why Fig. 5(e) generated by proposed approach is “more natural” than Fig. 5(d), even if I check the updated version mentioned by the authors' comments). Also in the paper there are five hyperparameters (Eqn. 14) and the center claim is that using Jacobian loss is better. However, there is no ablation study to support the claim and/or the design choice. From my opinion, the paper should show the performance of supervised training of attributes, the effects of using Jacobian loss and/or cycle loss, the inception score of generated images, etc. \n\nI acknowledge the authors for their honesty in raising the issues of Fig. 4, and providing an updated version. ", "The authors have done a good work to improve their submission and addressed my concerns (e.g., Tab 1 and Appendix is good). I have increased the rating by 1. ", "Summary: The paper proposes a method to tackle the disentanglement-reconstruction tradeoff problem in many disentangling approaches. This is achieved by first training the teacher autoencoder (unsupervised or supervised) that learns to disentangle the factors of variation at the cost of poor reconstruction, and then distills these learned representations into a student model with extra latent dimensions, where these extra latents can be used to improve the reconstructions of the student autoencoder compared to the teacher autoencoder. The distillation of the learned representation is encouraged via a novel Jacobian loss term that encourages the change in reconstructions of the teacher and student to be similar when the latent representation changes. 
There is one experiment for progressive unsupervised disentangling (disentangling factor by factor) on MNIST data, and one experiment for semi-supervised disentangling on CelebA-HQ.\n\nPros:\n- I think the idea of progressively capturing factors of variation one by one is neat, and this appears to be one of the first successful attempts at this problem.\n- The distillation appears to work well on the MNIST data, and does indeed decrease the reconstruction loss of the student compared to the teacher.\n- The qualitative results on CelebA-HQ look strong (especially apparent in the video), with the clear advantage over Fader Networks being that the proposed model is a single model that can manipulate the 40 different attributes, whereas Fader Nets can only deal with at most 3 attributes per model.\n\nCons:\n- There are not enough quantitative results supporting the claim that the model is “effective at both disentangling and reconstruction.” The degree of disentanglement in the representations is only shown qualitatively via latent interpolation, and only for a single model. Such qualitative results are generally prone to cherry-picking and it is difficult to reliably compare different disentangling methods in this manner. This calls for quantitative measures of disentanglement. Had you used a dataset where you know the ground truth factors of variation (e.g. dSprites/2D Shapes data) for the unsupervised disentangling method, then the level of disentanglement in the learned representations could be quantified, and thus your method could be compared against unsupervised disentangling baselines. For the semi-supervised disentanglement example on CelebA, you could for example quantify how well the encoder predicts the different attributes (because there is ground truth here) e.g. report RMSE of the y_i’s on a held out test set with ground truth. A quantitative comparison with Fader Networks in this manner appears necessary. The qualitative comparison on a single face in Figure 5 is nowhere near sufficient.\n- There is quantitative evidence that the reconstruction loss decreases when training the student, but here it’s not clear whether this quantitative difference makes a qualitative difference in the reconstructions. Getting higher fidelity images is one of the motivations behind improving reconstructions, so It would be informative to compare the reconstructions of the teacher and the student on the same image.\n- In the CelebA experiments, the benefit of student training is not visible in the results. In Figure 5 you already show that the teacher model gives decent reconstructions, yet you don’t show the reconstruction for the student model (quantitatively you show that it improves in Figure 3b, but again it is worth checking if it makes a difference visually). Also it’s not clear whether Figure 4 are results from the student model or the teacher model. I’m guessing that they are from the student model.\n- These quantitative results could form the basis of doing ablation studies for each of the different losses in the additive loss (for both unsupervised & semi-supervised tasks). Because there are many components in the loss, with a hyperparameter for each, it would be helpful to know what losses the results are sensitive to for the sake of tuning hyperparameters. This would be especially useful should I wish to apply the proposed method to a different dataset.\n- I think the derivation of the Jacobian loss requires some more justification. 
The higher order terms in the Taylor expansion in (2) and (3) can only be ignored when ||y_2 - y_1|| is small compared to the coefficients, but there is no validation/justification regarding this.\n\nOther Qs/comments:\n- On page 5 in the last paragraph of section 3, you say that “After training of the student with d=1 is finished, we consider it as the new teacher”. Here do you append z to y when you form the new teacher?\n- On page 6 in the paragraph for prediction loss, you say “This allows the decoder to naturally …. of the attributes”. I guess you mean this allows the model to give realistic interpolations between y=-1 and 1?\n- bottom of page 6: “Here we could have used any random values in lieu of y_2” <- not sure I understand this?\n- typo: conditionnning -> conditioning\n- I would be inclined to boost the score up to 7 if the authors include some quantitative results along with more thorough comparisons to Fader Networks\n\n************ Revision ***********\nThe authors' updates include further quantitative comparisons to Fader Networks and ablation studies for the different types of losses, addressing the concerns I had in the review. Hence I have boosted up my score to 7.", "Thank you very much for your detailed review.\n\nWe answer each item below.\n\n> \"There are not enough quantitative results [...]\"\n\nWe added quantitative comparisons for both the unsupervised and supervised\ntasks. The quantitative measure consists in evaluating, via an external\nclassifier, how well the latent units condition the specified factor of\nvariation in the generated image. \n\nIn the MNIST example we measure how well the first two latent units can\nmanipulate the digit class in the images generated by the student models. The\nresults are presented in the new Table 1, showing that the student with Jacobian\nsupervision obtains a better trade-off between disentanglement and reconstruction.\n\nIn the facial attribute manipulation task we used a pre-trained attribute\nclassifier provided by the authors of Fader Networks. Using the classifier, we\nmeasure if by manipulating the latent unit corresponding to one attribute we can\nchange the presence or absence of that attribute in the generated image. We do\nthis for all attributes and for all images in the test set. The results are\nshown in Table 2 and Figure 4.\n\nFor comparison, we trained two Fader Networks models to manipulate all\nattributes. The training did not converge and the resulting manipulation and\nreconstruction performance is inferior to our method. Besides the quantitative\ncomparison, this can also be seen qualitatively in the new Figures 7 and 8.\n\n\n> \"[...] compare the reconstructions of the teacher and the student on the same\n image.\"\n\nWe added a new figure to the appendix showing a comparison between the\nreconstructions obtained by the teacher and by the student (new Figure 7). It\nshows that the student model is better at reconstructing fine image details. The\ncomparison also includes a Fader Networks model trained to manipulate multiple\nattributes, and show that its reconstruction is distorted.\n\n> \"Also it’s not clear whether Figure 4 are results from the student model or\n the teacher model. [...]\"\n\nSorry for this lack of clarity. Figure 4 shows results by the student model\ntrained with Jacobian supervision. We clarified this in the manuscript.\n\n> \"[...] 
ablation studies for each of the different losses [...]\"\n\nWe added ablation studies for both unsupervised and supervised tasks in the new\nsection A.3 in the appendix (page 14). Unless otherwise noted, the weighs of\nthe losses were found by evaluation on separate validation sets.\n\n> \"[...] The higher order terms in the Taylor expansion in (2) and (3)\n can only be ignored when ||y_2 - y_1|| is small [...]\"\n\nIndeed, because of the higher order terms, even assuming (5) and (6) hold, (7)\nis only an approximation. Note however that the norm of the approximation error\nin (7) is that of the difference between the higher order terms of the teacher\nand the student, namely ||o^T(||y_2-y_1||) - o^S(||y_2-y_1||)||. This might be\nlower than the individual higher order terms, especially if both decoders\nrespond similarly to variations in $y$. Currently, our justification is mainly\nempirical. We also considered weighing the loss by a factor reciprocal\nto ||y_2-y_1||, to give less importance to pairs of samples for\nwhich ||y_2-y_1|| is large. Another option we contemplated is, for the Jacobian\nsupervision, to consider a blurred version of the student, so that it has the\nlow resolution of the teacher. The formulation still holds and this would also\nmake (6) easier to enforce. In informal experiments we observed no significant\nadvantage w.r.t. the current approach, which is simpler. We\nleave these possible avenues of improvement as future work.\n\n> \"[...] you say that “After training of\n the student with d=1 is finished, we consider it as the new teacher”. Here do\n you append z to y when you form the new teacher?\"\n\nYes this is correct. We clarified this in the text. \n\n> On page 6 in the paragraph for prediction loss, you say “This allows the\n decoder to naturally …\" of the attributes”. I guess you mean this allows the\n model to give realistic interpolations between y=-1 and 1?\n\nWe intended to say that we do not require the prediction to be binary values, as\nif we used the cross-entropy loss, but any real value. Thus, the decoder can\nread the amount of attribute variation from this variable, and not only if the\nattribute is present or not.\n\n> \"[...] “Here we could have used any random values in lieu of y_2” [...]\"\n\nWe wanted to say that the $y$ part in the fabricated latent code could be\nrandom, but instead we sample it from the data (copy from another sample). \nWe clarified this in the text.\n\n> \"typo: conditionnning -> conditioning\"\n\nThank you.\n\n> \"I would be inclined to boost the score up to 7 if the authors include some\n quantitative results along with more thorough comparisons to Fader Networks\"\n\nThank you. We hope the additional quantitative and qualitative results can\nconvince you of the superior performance of our method with respect to Fader\nNetworks, for multiple attributes manipulation.\n", "Thank you very much for reviewing our work.\n\n\nTo address your main concern, we added quantitative comparisons by using external\nclassifiers to assess the conditioning of the disentangled factors. 
\n\nWe believe the new quantitative results strongly support our two main claims:\n1) Our model outperforms Fader Networks by achieving better reconstruction and\n multiple attribute manipulation.\n2) Once a disentangling teacher model has been obtained, the proposed Jacobian\n loss allows to add latent units that help improving the reconstruction while\n maintaining the disentangling.\n\n\nWe address each of your concerns below.\n\n\n> \"e.g., it is not clear to me why Fig. 5(e) generated by proposed approach is\n“more natural” than Fig. 5(d)\" \n\nWe realize that this is a very subjective remark so we removed this claim from\nthe image caption. The intent of Fig. 5 is to show that even for single\nattribute manipulation and reconstruction, our proposed method performs similar\nor better than Fader Networks. For multiple attributes, a Fader Network model\ndoes not converge and has a poorer reconstruction and attribute manipulation\nperformance. Besides the new quantitative results in Table 2 and Figure 4, this\nis also shown qualitatively in the new Figures 7 and 8 in the appendix.\n\n> \"Also in the paper there are five hyperparameters (Eqn. 14) and the center\nclaim is that using Jacobian loss is better. However, there is no ablation study\nto support the claim and/or the design choice.\"\n\nWe show quantitatively in the new Table 2 and Figure 4 that using the Jacobian\nsupervision performs better than the cycle-consistency loss, in terms of the\ndisentanglement versus reconstruction trade-off. To measure the disentangling\nperformance of the models, we manipulate the latent variables aiming to change\nthe presence or absence of each attribute, and check with an external classifier\nthat the attribute is indeed changed. We used a pre-trained classifier provided\nby the authors of Fader Networks.\n\n> \"From my opinion, the paper should show the performance of\nsupervised training of attributes, the effects of using Jacobian loss and/or\ncycle loss, the inception score of generated images, etc.\"\n\nWe included ablation studies in the appendix (new Section A.3, page 14). These\nshow the separate and combined use of Jacobian and cycle-consistency losses for\ntraining the student (Table 5). Their combination actually works OK. For the\nsake of simplicity we keep only the Jacobian loss, and the cycle-consistency\nloss is only used to train the disentangling by the teacher.\n\nNote that by using an external classifier, the measure we obtain is in some\nsense similar to an inception score.\n", "Thank you very much for reviewing our work.\n\nWe chose MNIST for the unsupervised disentangling experiment because the two\nprincipal factors of variation are related to the digit class and thus it served\nas a very good pedagogic example.\n\nTo address your first concern, we conducted further experiments for the\nunsupervised disentanglement on the Street View House Numbers (SVHN)\ndataset. The results are shown in the appendix (Section A.5, page 17). In this\ncase, the two principal factors are related to the shading of the digit image\nand not to the class. However, we found that later in the progressive discovery\nof factors of variation, the algorithm learns factors that are quite related to\nthe digit class (ninth and tenth factors). 
Then, the final student model is able\nto manipulate the class of the digit while approximately maintaining the style\nof the digit (Figure 11).\n\nTo address your second concern, we added quantitative experiments for the\nunsupervised example of Section 3 (new Table 1). These were obtained by using an\nexternal MNIST classifier to assess the digit class manipulation. The results\nshow that the Jacobian supervision indeed allows a more advantageous traversing\nof the disentanglement versus reconstruction trade-off.\n\nFinally, we also added quantitative results for the CelebA experiments, showing\nthe advantage of our method with respect to Fader Networks (new Table 2 and\nFigure 4).\n", "We thank the reviewers for their constructive comments which helped us\nto significantly improve our submission.\n\n\nWe did the following modifications to address the reviewers concerns:\n\n1) We addressed the lack of quantitative results, which was an\nimportant concern shared among all reviewers. By using external classifiers on\nthe generated images, we were able to assess the degree of disentangling and\nconditioning of the models and thus we were able to consistently quantify their\ntrade-off between disentanglement and reconstruction.\n\nWe believe the resulting quantitative results further support our approach. In\nparticular, we quantitatively demonstrate superior performance to Fader\nNetworks in the facial attribute manipulation task.\n\n2) We extended the unsupervised experiments by including results on the SVHN\n dataset (Section A.5 in the appendix, page 17).\n\n3) We added further qualitative comparison with Fader Networks on image\n reconstrucion and attributes manipulation (Section A.4 in the appendix, page\n 15).\n\n4) We added ablation studies for the different components in the loss functions\n (Section A.3 in the appendix, page 14).\n\n5) We replaced Figure 3(b) by a more informative graph showing the traversal of\n the disentanglement-reconstruction trade-off in the new Figure 4.\n\n\n\nBesides the modifications suggested by the reviewers, we also did the following\nchanges:\n\n6) We made minor modifications to the manuscript aiming to improve our\n exposition.\n\n7) We use a model with different hyperparameters in Figure 1 and we corrected\n the values of two hyperparameters in the model of Section 4.\n\n8) We added one missing reference (Burgess et al., 2018, NIPS workshops).\n\n9) We moved Table 3 to the appendix.\n", "The paper aims to learn an autoencoder that can be used to effectively encode the known attributes/ generative factors and this allows easy and controlled manipulation of the images while producing realistic images.\n\nTo achieve this, ordinarily, the encoder produces latent code with two components y and z where y are clamped to known attributes using supervised loss while z is unconstrained and mainly useful for good reconstruction. But his setup fails when z is sufficiently large as the decoder can learn to ignore y altogether. Smaller sized z leads to poor reconstruction.\n\nTo overcome this issue, the authors propose to employ a student teacher training paradigm. The teacher is trained such that the encoder only produces y and the decoder that only consumes y. This ensures good disentanglement but poor reconstruction. Subsequently, a student autoencoder is learned which has a much larger latent code and produces both y and z. 
The y component is mapped to the teacher encoder’s y component using Jacobian regularization.\n\nPositives:\nThe results of image manipulation using known attributes is quite impressive. The authors propose modifications to the Jacobian regularization as simple reconstruction losses for efficient training. The approach avoids adversarial training and thus is easier to train.\n\nNegatives:\nUnsupervised disentanglement results are only shown for MNIST. I am not convinced similar results for unsupervised disentanglement can be obtained on more complex datasets. Authors should include some results on this aspect or reduce the emphasis on unsupervised disentanglement. Also when studying this quantitative evaluation for disentanglement such as in beta-VAE will be nice to have.\n\nTypos:\npage 3: tobtain -> obtain\npage 5: conditionning -> conditioning ", "(1) In equation (1) y_i refers to an arbitrary dimension in the input space of the\ndecoders. Both T and S decoders have the same input space for the specified\nvariables, namely $\\mathds{R}^k$. In the paper we use the superscript when we\nwant to indicate the value was produced by one of the encoders.\n\n(2) Please refer to our answer to item (5) below for a quantitative\ncomparison. Yes, epoch 0 in Fig.1 (d) corresponds to the teacher. We will\nclarify it.\n\n(5) We quantified the level of disentanglement as follows: we evaluated how well\nthe first two hidden variables ($k$=2), maintain the encoding of the digit class\nin the student models. We take two images of different digits from the test set,\nfeed them to the encoder, swap their corresponding 2D subpart of the latent code\nand feed the fabricated latent codes to the decoder. We then run a pre-trained\nMNIST classifier in the generated image to see if the class was correctly\nswapped.\n\n| model | $d$ | recons. MSE | swaps OK |\n|---------------------------------+-----+---------------+-----------|\n| teacher | 0 | 3.66e-2 | 80.6% |\n| student w/ Jac. sup. (*) | 14 | 1.38e-2 | 57.2% |\n| student wo/ Jac. sup. | 14 | 1.12e-2 | 32.0% |\n| student wo/ Jac. sup | 10 | 1.40e-2 | 41.4% |\n|---------------------------------|------|--------------|------------|\n| random weights | 14 | 1.16e-1 | 9.8% |\n\nWe observe that at the same level of reconstruction performance (~1.4e-2), the\nstudent with Jacobian supervision maintains a better disentangling of the class\n(under this metric) than the student without it. We will include a figure\nshowing that the reconstruction-disentanglement trade-off traversed by varying\n$d$ is indeed more advantageous for our model. Note that the first two variables\ndo not encode perfectly the digit class. This advantage in the trade-off is much\nlarger in the application of Section 4.\n\n(*) Note: this model was trained with $\\lambda_{diff} = 0.1$ instead of $1.0$ as\nthe one currently in the paper. The figure will be updated for this model.\n\n(4) We evaluated the disentangling measure (described in (5)), on the\nMNIST test set, for the student with Jacobian supervision:\n\n| xcov weight | $d$ | recons. MSE | swaps OK |\n|-------------+-----+-------------+----------|\n| 1e-3 | 14 | 1.38e-2 | 57.2% |\n| 1e-2 | 14 | 1.46e-2 | 56.3% |\n| 1e-1 | 14 | 1.49e-2 | 56.6% |\n\n(6) Thank you for remarking this important point. 
In this paper we use the\nword disentangling to refer to both aspects:\n\na) each latent unit in the specified part is sensitive to one generative factor\nb) the value of each of these latent units conditions the generated output such\nthat it varies the corresponding generative factor\n\nWe will clarify this in the manuscript and revise the text to make sure it is\ncoherent.\n\n(7) See item (8)\n\n(8) We evaluated quantitatively how well the output is conditioned to the specified\nfactors, similarly to the procedure described in item (5). To do this, for each\nimage in the CelebA test set, we tried to flip each of the 32 disentangled\nattributes, one at a time (e.g. eyeglasses/no eyeglasses). We did the flipping\nby setting the latent variable y_i to sign(y_i)*-1*\\alpha, with \\alpha >0 a\nmultiplier to exaggerate the attribute, found in a separate validation set for\neach model (\\alpha=40 for all).\n\nTo verify that the attribute was indeed flipped in the generated image, we used\nan external classifier trained to predict each of the attributes. We used the\nclassifier provided by the authors of Lample et al. (2017), which was trained\ndirectly on the CelebA dataset.\n\nThe results are as follows:\n\n| model | $d$ | flips OK | recons. MSE |\n|------------------------------+--------+------------+---------------|\n| teacher | 2048 | 73.1% | 1.82e-3 |\n| student w/ Jac. sup. | 8192 | 72.2% | 1.08e-3 |\n| student wo/ Jac. sup. | 8192 | 42.7% | 1.04e-3 |\n|------------------------------+--------+------------+---------------|\n| Lample et al., 2017 | 2048 | 43.1% | 3.08e-3 |\n| random weights | 2048 | 20.2% | 1.01e-1 |\n\nAt approximately the same reconstruction performance, the student with Jacobian\nsupervision is significantly better at flipping attributes than the student\nwithout it. \n\nWe also trained a Fader Networks model (Lample et al., 2017) with the same\nhyperparameters and training epochs as our teacher model. The result suggests\nthat the adversarial discriminator acting on the latent code harms the\nreconstruction and that the conditionning is worse than with our teacher model.\n\n(9) We will add to the appendix the result of trying the same experiment as in\nFigure 4, but using the student model without Jacobian supervision. It will be\nclear from this experiment that the latter cannot effectively control most of\nthe attributes.\n", "Dear authors, this is an interesting paper but I have a few questions and concerns:\n\n(1) In equation (1) could you explain why y is used instead of y^S and y^T? Is y supposed to refer to some Oracle factors? And if so, it is not clear what assumption the authors are making later in the paper to relate y to y^S and y^T.\n\n(2) In Figure 1., the authors claim that the student obtains better reconstruction than the teacher, however is there any quantitative comparison? It is not clear if Figure 1.(d) is sufficient to show this? Does epoch 0 correspond to the teacher? If it does, it would be good to say this explicitly.\n\n(3) The derivation of Equation (7) is clear and very easy to follow. 
\n\n(4) Is it possible to quantify the contribution of L_{xcov} to the model?\n\n(5) The authors say that: \n`Once the student model is trained, it generates a better reconstructed image than the teacher model, thanks to the expanded latent code, while maintaining the conditionning of the output that the teacher had.’\n\nThe authors have not quantified the level of ‘conditionning’ (disentanglement) for either the student or the teacher, so it is not clear if this claim is well backed, or the extent to which this is true. It would be hard for other researchers to build on this work, without having methods to qualitatively compare models. Higgins et al. ICLR 2017 propose one method for measuring disentanglement.\n\n(6) A more serious concern is that the term disentanglement as defined in the abstract:\n\n`where a subset of the latent variables is constrained to correspond to specific factors'\n\n is not clear nor is it consistently used throughout the paper. When the authors disentangle MNIST, they appear to be searching for linear separability, and when they disentangle CelebA they appear to be trying to assign one factor of variation (attribute) to each unit of y^T. Additionally, the paper refers more to ‘conditionning’ than disentanglement, it would be nice to rectify or explain this discontinuity between the main body of the text and the title.\n\n(7) Reconstruction results in Figure 4. appear to be very good, however there is no quantitative evaluation nor comparison with other models.\n\n(8) Additionally, while most of the results in Figure 4. are visually pleasing, there are no quantitative results. From these visual results it is not clear how reliably (or consistently) the model is able to edit the correct attribute? \n\n(9) The authors say that:\n`In comparison, a student model with enlarged latent code but that continues with the training procedure as the teacher, without Jacobian supervision, achieves good reconstruction but loses the effective conditionning on the attributes.’\n\nThere are no quantitative (or qualitative) results to demonstrate that the disentanglement is worse in the `student model with enlarged latent code'." ]
[ 5, -1, 7, -1, -1, -1, -1, 7, -1, -1 ]
[ 3, -1, 4, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2019_Hkg4W2AcFm", "rJlI1wrzAX", "iclr_2019_Hkg4W2AcFm", "r1gqJD2u2X", "SyxL9vbI27", "BJxo72kq3X", "iclr_2019_Hkg4W2AcFm", "iclr_2019_Hkg4W2AcFm", "B1xizrM-q7", "iclr_2019_Hkg4W2AcFm" ]
iclr_2019_HkgEQnRqYQ
RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space
We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.
accepted-poster-papers
This paper proposes a knowledge graph completion approach that represents relations as rotations in a complex space; an idea that the reviewers found quite interesting and novel. The authors provide analysis to show how this model can capture symmetry/antisymmetry, inversions, and composition. The authors also introduce a separate contribution of self-adversarial negative sampling, which, combined with complex rotational embeddings, obtains state of the art results on the benchmarks for this task. The reviewers and the AC identified a number of potential weaknesses in the initial paper: (1) the evaluation only showed the final performance of the approach, and thus it was not clear how much benefit was obtained from adversarial sampling vs the scoring model, or further, how good the results would be for the baselines if the same sampling was used, (2) a missing citation and comparison to a closely related approach (TorusE), and (3) a number of presentation issues early on in the paper. The reviewers appreciated the authors' comments and the revision, which addressed all of the concerns by including (1) additional experiments comparing performance with and without self-adversarial sampling, and comparisons to TorusE, and (2) improved presentation. With the revision, the reviewers agreed that this is a worthy paper to include in the conference.
test
[ "HyeIocuQxN", "rJg9ItjCCm", "ByxtN4yRR7", "rkgm5j1aAQ", "H1e_OITis7", "B1gmLa420X", "Sye6N04n0m", "SJluiXkhRm", "Hkxa6j_oAQ", "H1lMravo07", "SJxD-H5t0m", "r1xF475K0m", "Bkxn0G5YR7", "Skl2_GqtAm", "BJx75b5FCX", "HJlFFR7167", "rkgeGrg_Tm", "HJlYlIhn2X", "HJlUq8jq3X", "HkxYw_tn3Q", "HyekBjPinm", "HJxnXzWVnX", "HklyVUAX2m", "rkerutP2tQ" ]
[ "public", "public", "public", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "public" ]
[ "This is a great paper with strong empirical performance!!\n\nI suppose you have also tried RotatE without self-adversarial training. Was it still better than all the other baselines (without self-adversarial training)? Or is it the combination of RotatE and self-adversarial that is crucial?\n\nI think it is also necessary to put extensive results of all the baselines with self-adversarial training on *ALL* the datasets. When proposing two complementary methods, it is crucial to clearly separate the contribution. To me, it is surprising that self-adversarial training alone can significantly boost the performance of all the methods, and the training strategy is already a great contribution.", "Thanks for your response. However, I think your example of the ComplEx model missed the point. Moreover, it is not a proof that ComplEx cannot model composition. In fact, the example has reasoning error. I can always start from picking r1 \\circ r2 != alpha r3 then picking x, y, z that satisfies <r1, x, \\bar{y}>, <r2, y, \\bar{z}>, and <r3, x, \\bar{z}>. One example does not make a proof.\n\nThe point is, as you have many strong claims, I expect to see the proofs, either mathematical proof, or clear empirical evidences. For example, showing ComplEx fails miserably on synthetic data with composition pattern.\n\nUpdate: We should focus on main points. Please justify your claim about composition pattern. Thanks again.", "Thanks for the great answer! It makes sense to me!\nProbably, when N is large (in 1-to-N relation), it is better for TransE-type model to down-weight the corresponding loss term, so that those N entities will not be forced to have very similar embeddings.\n\nAnother related question: how does the training loss behave for your model? Does it perfectly fit the training set?", "Thanks for your understanding! You are right! ‘ordinal’ is not sufficient in the case when the true triple comes earlier in the list, especially when the true triplet is put in the beginning of the list. The ConvKB’s updated new eval.py [1] suffers this problem by always putting the true triplet in the first position (see the codes below).\n\n#thus, insert the valid test triple again, to the beginning of the array\nnew_x_batch = np.insert(new_x_batch, 0, x_batch[i], axis=0)\nnew_y_batch = np.insert(new_y_batch, 0, y_batch[i], axis=0)\n\nIn this case, ‘ordinal’ is essentially equivalent to ‘min’, so it’s not sufficient. However, this problem can be easily addressed by randomly shuffling the list. \n\n[1] https://github.com/daiquocnguyen/ConvKB/commit/c7ee60526ee81b46c2b0075cca2e387b0dbc6e90\n", "# Summary\nThis paper presents a neural link prediction scoring function that can infer symmetry, anti-symmetry, inversion and composition patterns of relations in a knowledge base, whereas previous methods were only able to support a subset. The method achieves state of the art on FB15k-237, WN18RR and Countries benchmark knowledge bases. I think this will be interesting to the ICLR community. I particularly enjoyed the analysis of existing methods regarding the expressiveness of relational patterns mentioned above.\n\n# Strengths\n- Improvements over prior neural link prediction methods\n- Clearly written paper\n- Interesting analysis of existing neural link prediction methods\n\n# Weaknesses\n- As the authors not only propose a new scoring function for neural link prediction but also an adversarial sampling mechanism for negative data, I believe a more careful ablation study should have been carried out. 
There is an ablation study showing the impact of the negative sampling on the baseline TransE, as well as another ablation in the appendix demonstrating the impact of negative sampling on TransE and the proposed method, RotatE, for the FB15k-237. However, from Table 10 in the appendix, one can see that the two competing methods, TransE and RotatE, in fact, perform fairly similarly once both use adversarial sampling, so it still remains unclear whether the gains observed in Tables 4 and 5 are due to adversarial sampling or a better scoring function. Particularly, I want to see results of a stronger baseline, ComplEx, equipped with the adversarial sampling approach. Ideally, I would also like to see multiple repeats of the experiments to get a sense of the variance of the results (as it has been done for Countries in Table 6).\n\n# Minor Comments\n- Eq 5: Already introduce gamma (the fixed margin) here.\n- While I understand that this paper focuses on knowledge graph embeddings, I believe the large body of other relational AI approaches should be mention as some of them can also model symmetry, anti-symmetry, inversion and composition patterns of relations as well (though they might be less scalable and therefore of less practical relevance), e.g. the following come to mind:\n - Lao et al. (2011). Random walk inference and learning in a large scale knowledge base.\n - Neelakantan et al. (2015). Compositional vector space models for knowledge base completion.\n - Das et al. (2016). Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks. \n - Rocktaschel and Riedel (2017). End-to-end Differentiable Proving.\n - Yang et al. (2017). Differentiable Learning of Logical Rules for Knowledge Base Completion.\n- Table 6: How many repeats were used for estimating the standard deviation?\n\n\nUpdate: I thank the authors for their response and additional experiments. I am increasing my score to 7.", "We first would like to provide some theoretical analysis to show that the RotatE model can also somehow model the 1-to-N relations. Taking a 1-to-N relation r as an example, the triplets having the head entity x and relation r are denoted as: r(x, y1), r(x, y2) …. r(x, yn). When the optimization converges, one can easily find that the embeddings of y1, y2, …, yn will be evenly distributed on the surface of a hypercube (or a hypersphere in the case of L-2 norm) centered at rx. In other words, ||rx - y1|| = ||rx - y2|| = .. = ||rx - yn||. This phenomenon is the same as in semantic matching models, like ComplEx, where the scores <r,x,\bar{y1}>=<r,x,\bar{y2}>=..=<r,x,\bar{yn}>. Therefore, the RotatE model can somehow deal with 1-to-N relations just like ComplEx, as well as TransE.\n\nA more elegant and rigorous approach to model the 1-to-N, N-to-1, and N-to-N relations is to leverage a probabilistic framework to model the uncertainties of the entities, where each predicted entity is represented as a Gaussian distribution. This has been proved quite effective in [1]. Our RotatE model can easily leverage this framework to mitigate this issue. \n\nAnother thing to note is that the focus of this paper is to model and infer the different types of relation patterns, but not the 1-to-N, N-to-1, and N-to-N relationships. However, we will conduct further experiments to compare the performance of different methods (TransE, ComplEx and RotatE) on the 1-1, 1-to-N, N-to-1, and N-to-N relationships. 
\n\n[1] Shizhu He, Kang Liu, Guoliang Ji and Jun Zhao, Learning to Represent Knowledge Graphs with Gaussian Embedding\n", "Thanks for such a good question! We have provided some theoretical analysis to show that the RotatE model can also somehow model the 1-to-N relations. Please refer to our response to Reviewer2.", "Thanks a lot for the response and updating the paper. \n\nWhat is your response to the public comment above? \nhttps://openreview.net/forum?id=HkgEQnRqYQ&noteId=rkgeGrg_Tm\n\nSpecifically, if TransE and RotatE suffer from not being able to model 1-to-N, N-to-1, N-to-N relations, what is your take on why this is not reflected in the experimental results for RotatE? Is this a limitation of the used datasets?\n\n\n-- R2", "This will likely not be taken into account for the decision, so I don't want to discuss this too much. \n\nBut it is an important issue for the field, and I understand the concern raised by the authors: triples with the same score should get random (or ideally, max) ranking, not min. With min, the MRR ranking will be inflated, incorrectly, and benefits methods that tend to produce tied scores.\n\nI have a quick question for the authors though. Can you verify, and explain, why rankdata(results, method=’ordinal’) is not sufficient? Is it because the true triple comes earlier in the list (somehow)?", "1. \" results_with_id = rankdata(results, method='min') \": I used this last year because I simply want to give the valid test triple and its replicated triples a same rank (since I used a batch size).\n\n2. \" A simple ... wrong\": your example is not real since a model tends to give high scores to valid triples and low score to invalid triples. None of existing models can give a MRR score of 1. \n\nI have another question for you: Assume that a valid test triple and some of its corrupted triples have a same score. Why must you think it is wrong if assigning them a same rank? \n\n3. \" For the previous codes, We opened ... /pull/4). \": I keep to maintain my code and do not accept any pull request before/without opening an issue in my ConvKB github for a discussion. As I said in my previous reply, it could be much better if you created an open issue in my github with your official account.\n\n4. \" results_with_id = rankdata(results, method=’ordinal’) \": I just updated my code using \"ordinal\" and still get a same results (with a quick test using pre-trained TransE embeddings). You can check and test it for ConvKB. No bug.\n\n5. I will not discuss about the implementation of my evaluation further, here. If you still have other problems, you can create an open issue in my ConvKB github.\n\nThank you for your time and discussion. ", "Thanks for your comments!! The difference between RotatE and ComplEx can be summarized as follows:\n\n(1)ComplEx belongs to the semantic matching model while RotatE belongs to the distance-based model. Most of existing knowledge graph embedding models can be roughly classified into two categories: Translational(Transformational) Distance Models and Semantic Matching Models [1]. The former measure the plausibility of a fact as a translation(transformation) between two entities, while the latter measure the plausibility of facts by matching latent semantics of entities and relations. RotatE and ComplEx are in different categories. 
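For concreteness, here is a minimal numpy sketch of the two score functions being discussed (our illustration for this thread, not the authors' released code); it assumes a unit-modulus relation embedding for RotatE and, purely for simplicity, unit-modulus entity embeddings as well (RotatE itself does not constrain entity moduli):

import numpy as np

def rotate_score(h, r, t):
    # RotatE (distance-based): a triple is more plausible when ||h o r - t|| is small.
    return -np.linalg.norm(h * r - t)

def complex_score(h, r, t):
    # ComplEx (semantic matching): a triple is more plausible when Re(<h, r, conj(t)>) is large.
    return np.real(np.sum(h * r * np.conj(t)))

h = np.exp(1j * np.array([0.3, 1.2]))    # head embedding (2 complex dimensions, unit modulus here)
r = np.exp(1j * np.array([0.5, -0.7]))   # unit-modulus relation embedding, i.e. a pure rotation
t = h * r                                # tail obtained by rotating the head element-wise
print(rotate_score(h, r, t))             # -0.0 : exact match under the distance-based view
print(complex_score(h, r, t))            #  2.0 : sum over dimensions of |h_i|^2 * |r_i|^2
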
Actually, we can find that the relation between ComplEx and RotatE is in analogy to the relation between TransF [2] and TransE, where the former can be regarded as a slack version of the latter.\n\n(2) As a result, the biggest difference between ComplEx and RotatE addressed in this paper is that, the RotatE model can infer the composition pattern of relations, while the ComplEx model cannot. A simple counterexample could illustrate this point.\n\nLet’s assume r1(x, y), r2(y, z) and r3(x, z) hold, and then according to ComplEx we have\n\nRe(<r1, x, \\bar{y}>) > Re(<r1, x’, \\bar{y’}>)\nRe(<r2, y, \\bar{z}>) > Re(<r2, y’, \\bar{z’}>)\nRe(<r3, x, \\bar{z}>) > Re(<r3, x’, \\bar{z’}>)\n\nwhere r1(x’, y’), r2(y’,z’) and r3(x’, z’) are negative triplets.\n\nFrom the above equations, we can find that the ComplEx model does not model a bijection mapping from h to t via relation r. For example, let x=-1+i, y=1, z=1+i, r1=-1-0.8i, r2= 0.2+i, r3=-0.8-i, we have r1(x, y), r2(y, z) and r3(x, z) hold, because\n\n<r1, x, \\bar{y}> = 1.8 - 0.2i\n<r2, y, \\bar{z}> = 1.2 + 0.8i\n<r3, x, \\bar{z}> = 2 - 1.6i\n\nHowever, r1 * r2 = 0.6 - 1.16i, r3= - 0.8 - i do not show the supposed pattern r1 \\circ r2 = \\alpha r3 here.\n\nAs for the comparison with TransE, the rotation in the RotatE model is in the complex plane of each embedding vector element, as the same as TransE. This is different from the rotation is in the whole embedding space by matrix multiplication.\n\n“About experiments, for fair comparisons, results should be reported on common and standard settings, especially with and without new negative sampling method….”\n\nWe have added the results of TransE and ComplEx with the new adversarial negative sampling technique on three datasets in Table 8. \n\n“The authors should also address how they estimate/or approximate the softmax in Equation 4 of negative sampling method to scale to large datasets, because it is very costly due to the normalization term. ...”\n\np(h’_j , r, t’_j |{(h_i , r_i , t_i)}) is defined as the probability that we sample (h’_j , r, t’_j) from a sampled set {(h_i , r_i , t_i)}, so we calculate the softmax function only on the sampled triplets. This is very efficient.\n\n“ It's also not clear what $ f_r $ refers to in Equation 4.”\n\n $f_r$ is the score function introduced in Table 1, which equals to $- d_r$.\n\n[1] Knowledge Graph Embedding: A Survey of Approaches and Applications\n[2] Knowledge graph embedding by flexible translation\f", "Thanks for your verification for your model. We do agree that the implementation of your model is correct. However, what we pointed out is that your evaluation is problematic!!\n\nFor your updated eval.py, we find that you used the following code to get the rank for each triplets:\n\nresults_with_id = rankdata(results, method='min')\n\nwhere ‘min’ represents “The minimum of the ranks that would have been assigned to all the tied values is assigned to each value. (This is also referred to as “competition” ranking.)” according to the official document [1].\n\nHowever, such \"a specific ranking procedure\" tends to rank the true positive triplets in a high position, if there are many triplets with the same score.\n\nA simple example is that a model produce score=b for all triplets, then results_with_id = rankdata(results, method='min') will return the results that all the triplets are ranked in the first position. 
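To make this tie-handling point concrete, here is a minimal sketch using scipy (our illustration, not ConvKB's eval.py; the ranking direction does not matter here because every candidate receives the same score):

import numpy as np
from scipy.stats import rankdata

scores = np.zeros(100)                          # the true triple plus 99 corruptions, all scored b = 0
ranks_min = rankdata(scores, method='min')      # ties all receive the minimum rank: every entry is 1
ranks_avg = rankdata(scores, method='average')  # ties receive the average rank: every entry is 50.5
print(ranks_min[0])                             # 1.0  -> reciprocal rank 1.0 for the true triple
print(ranks_avg[0])                             # 50.5 -> reciprocal rank ~0.02, the expected value under ties
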
In other words, in this case MRR = 1, which is definitely wrong.\n\nMoreover, as mentioned in [2], we have fixed the bug in your previous codes and reported the true performance of your model on FB15k-237. We provided the checkpoint file, where you can check that MRR = 40 by your original eval.py, but 24 by our bug-fixed eval.py.\n\nAs for your updated codes, we suggest that you replace the “rankdata” part by:\n\nresults_with_id = rankdata(results, method=’ordinal’)\n\nwhere ‘ordinal’ represents “All values are given a distinct rank, corresponding to the order that the values occur in a.” according to the official document [1]. Although the results may be a little different from the results of our released bug-fixed eval.py [2] (we used quicksort ranking, following your implementation), it would also provide a valid evaluation for your model.\n\nFor the previous codes, We opened a pull request that fixes the bug (https://github.com/daiquocnguyen/ConvKB/pull/3), but it was closed. For your new codes, we also opened a pull request to fix the bug (https://github.com/daiquocnguyen/ConvKB/pull/4).\n\nFinally, we want to emphasize again that we did not intend any offence to your work. The truth is that we found a problem, and we want to make it right. \n\n[1]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html\n[2]: https://github.com/KnowledgeBaseCompleter/eval-ConvKB", "Thanks for your appreciation of our work and the great comments. We’ve revised the introduction part on the representations in the complex domain. \n\n“The optimization section does not mention how constraints are imposed.”\n\nSince each relation is modeled as a rotation in the complex vector space, we represent each relation r according to its polar form with its modulus as 1, i.e., \nRe(r) = cos(\theta) and Im(r) = sin(\theta), where \theta is the phase of relation r. With the polar form representation, the constraints can be easily satisfied.\n\n“In experiments, how does the effective number of parameters that are used to express representations compare when the representations are a complex vs a real number ….”\n\nIf the same number of dimensions is used for the real and imaginary parts of the complex embedding as for a real-valued embedding, the number of parameters for the complex embedding would be twice the number of parameters for the embeddings in the real space. To make a fair comparison, in the process of grid search for finding the optimum embedding dimension, we double the range of the search space for models represented in real space such as TransE. \n\n“Since the method is reported to beat several number of competitors, it is useful to provide the code.”\n\nYes, we will definitely release our code and share it with the entire community. ", "“Particularly, I want to see results of a stronger baseline, ComplEx, equipped with the adversarial sampling approach….”\n\nWe have added the experimental results of TransE and ComplEx on three datasets in our paper (Table 8). We can see that our proposed approach still outperforms ComplEx with the new adversarial approach, especially on the FB15k-237 and Countries data sets. The reason is that FB15k-237 and Countries contain many composition patterns, which cannot be modeled by ComplEx but can be effectively modeled by RotatE.\n\n“Ideally, I would also like to see multiple repeats of the experiments to get a sense of the variance of the results...”\n\nWe also added the variance of the results of our model on different data sets, which are summarized in Table 12 in the appendix. 
We can see that the variance of the results are very small, 0.001 at maximum. \n\n“Table 6: How many repeats were used for estimating the standard deviation?”\n\nOnly 3 are used. Since the variance are very small, the same results are obtained with more repeats.\n\n“While I understand that this paper focuses on knowledge graph embeddings, I believe the large body of other relational AI approaches should be mention….”\n\nWe have added some discussion on these methods in the related work section.", "Thanks for your appreciation to our work and your great comments on improving the paper. We have added the experimental results of TransE and ComplEx with self-adversarial negative sampling on three datasets in our paper (Table 8). We have also added the contribution of the self-adversarial negative sampling into both the abstract and introduction.\n\nRegarding TorusE, thanks again for bringing it to our attention, which we did not notice before. It is indeed relevant to our model, which is a concurrent work. We have discussed this model in the related work section. The difference between TorusE and RotatE can be summarized as below:\n\n(1) The TorusE model constraints the embedding of objects on a torus, and models relations as translations, while the RotatE model embeds objects on the entire complex vector space, and models relations as rotations.\n\n(2) The TorusE model requires embedding objects on a compact Lie group [2] while the RotatE model allows embedding objects on a non-compact Lie group, which has much more representation capacity. The TorusE model is actually very close to a special case of our model, i.e., pRotatE, which constraints the modulus of the head and entity embeddings fixed. As shown in Table 5, it is very important for modeling and inferring the composition patterns by embedding the entities on a non-compact Lie group. We can also compare the results of TorusE and RotatE on the FB15k and WN18 data sets (Table 3 in the TorusE paper and Table 4 in our paper), we can see that our RotatE model significantly outperforms TorusE on the two data sets.\n\n(3) The motivations of the TorusE paper and this paper are quite different. The TorusE paper aims to solve the regularization problem of TransE, while our paper focuses on inferring and modeling three important and popular relation patterns.\n\n[1] Ebisu, Takuma, and Ryutaro Ichise. \"Toruse: Knowledge graph embedding on a lie group.\" arXiv preprint arXiv:1711.05435 (2017).\"\n[2] https://en.wikipedia.org/wiki/Compact_group#Compact_Lie_groups", "The reported results are high, which raise my interest. But, it also raises attention to some important issues that need to be addressed.\n\nThe proposed model is very similar to the ComplEx embedding model [1]. In fact, in the ComplEx model, the score function is $ real(<r, h, \\bar{t}>) $, which includes the element-wise product between $ r \\circ h $. Because the ComplEx model uses complex-value embeddings, this product is essentially rotation in the complex plane, thus the same as the idea in this paper.\n\nThe authors should clarify and emphasize how their model could provide advantage over the ComplEx model, which is currently one of the SOTA. The authors should provide convincing theoretical arguments because many researches have shown that excessive hyper-parameter tuning and optimization techniques can change benchmark results a lot [2]. 
The authors also need to provide proof that the ComplEx model cannot model \"composition\" as in Table 2, given the two models are essentially similar.\n\nAdditionally, the comparison with TransE is ambiguous. The authors should make clear that the rotation is in the complex plane of each embedding vector element, thus different from rotation in the embedding space; and check that their arguments and analyses regarding TransE still stand.\n\nAbout experiments, for fair comparisons, results should be reported on common and standard settings. An example practice could be seen in [3].\n\nRef:\n[1] Trouillon, Theo, et al. Complex Embeddings for Simple Link Prediction. ICML 2016.\n[2] Kadlec, Rudolf, Ondrej Bajgar, and Jan Kleindienst. \"Knowledge base completion: Baselines strike back.\" arXiv preprint arXiv:1705.10744 (2017).\n[3] Lacroix, Timothée, Nicolas Usunier, and Guillaume Obozinski. \"Canonical Tensor Decomposition for Knowledge Base Completion.\" ICML 2018.", "This paper argues that the advantage of the proposed method against ComplEx is its ability to model composition. While this is true, the disadvantage of the TransE-type model (which includes RotatE) is its inability to deal with 1-to-N, N-to-1, N-to-N relations. It seems to me that the composition and modeling of these complicated relations are intrinsically at odds with each other. The author should make this clear, especially in Table 2; ComplEX can handle 1-to-N, N-to-1, N-to-N relations, while RotatE cannot.", "The authors propose to model the relations as a rotation in the complex vector space. They show that this way one can model symmetry/antisymmetry, inversion and composition. Another contribution is the so-called self-adversarial negative sampling.\n\nPros: The problem that they raise is important and the solution is relevant. The results considering the simplicity of the proposed model are impressive. The experiments, proof of lemmas and general overview are easy to follow, well-written and well-organized. The improvement given the negative sampling approach is also noteworthy.\n\nCons: Nevertheless, this approach is very similar to TorusE [1], since the element-wise rotation on the complex plane is somehow related to transformation on high-dimensional Torus. Therefore, it is expected from the authors to investigate the differences between these two approaches.\n\nSuggestions:\nAlso, it is important to note the result of ablation study on Table 10 in supplementary materials, since part of the improvement does not come only from how the authors model the relation but also from the negative sampling(which could improve the results of other works as well). Maybe it is even better if Table 10 is presented in the main paper. \nAnother suggestion is to mention the negative sampling contribution also in the abstract.\n\n\n[1] Ebisu, Takuma, and Ryutaro Ichise. \"Toruse: Knowledge graph embedding on a lie group.\" arXiv preprint arXiv:1711.05435 (2017).\"\n", "The paper proposes a method for graph embedding to be used for link prediction, in which each entity is represented as a vector in complex space and each relation is modeled as a rotation from the head entity to the tale entity. \nFrom the modeling perspective, the proposed model is rich as many type of relations can be modeled with it. In particular, symmetric and anti-symmetric relations can be modeled. It is also possible to model the inverse of a relation and the composition of two relations with this setup. 
Empirical evaluation demonstrates that method is effective and beats a number of well known competitors.\n\nThis is a solid work and could be of interest in the community. Modeling is elegant and experimental results are strong.\nI have not seen it proposed before.\n\n- The presentation of paper could be improved, in particular the first paragraph of page 2 where the representation in complex domain is introduced is hard to follow and could be improved by inserting formulations instead of merely text. \nIt would be nice to explicitly mention the number of real and imaginary dimensions of the complex vectors and provide explicit formulation for the Hadamard product on the complex domain, since the term elementwise could be ambiguous.\n- The optimization section does not mention how constraints are imposed. This is an important technicality and should be clarified.\n- In experiments, how does the effective number of parameters that are used to express representations compare when the representations are a complex vs a real number? Each complex number is presented with two parameters and each real number with one parameter. How is that taken into account in experiments\n- Since the method is reported to beat several number of competitors, it is useful to provide the code.\n\n \nBased on the results above, I vote for the paper to be accepted.\n", "How many valid test triples and their corrupted triples have the same score. And what are they and their ranks on WN18RR and FB15k237? You had mentioned \"equal to 0\" (the same score) in your first reply. It seems that you actually did not run my code before. I do not want to discuss our model in details as my code was based on Denny Britz's implementation for employing a CNN to text classification.\n\nThere is nothing called \"a specific ranking procedure\" in my evaluation. I do not know why you must pay much attention to \"replicating the valid test triples\". Again, this is straightforward and does not matter when ranking, because each valid test triple and its replicated triples have a same score and a same rank.\n\nAs I said in my first reply, it would be nice if you created an open issue in my ConvKB github for further discussions. So I could tell you that we also had another version to evaluate the model without replicating the valid test triples, for which the experimental results are still same for with and without replicating the triples. This obviously helps to save time for both of us. \n\nThe \"without replicating\" version ran slower than the version in the github, thus I did not update it last year. But now, I have just updated it to my ConvKB github. You can check and test it.\n\nYour approach and results are great. And you do not need to beat all scores on all datasets to have an accepted paper. I appreciate if you can also include our published results. ", "Hi Dai, \n Thanks for the verification. In the above comment, sorry we meant that many triplets have the same score, which equals to the bias of your model, i.e., b = tf.Variable(tf.constant(0.0, shape=[num_classes]), name=\"b\") in your model.py code. The reasons is that in many cases all the nonlinear RELU units are not activated. In addition, we found that this problem would only occur when the nonlinear activation Relu is used in the model. This explains why the evaluation of other models, including TransE, TransR, TransH and STransE, are correct. \n\nWe suggest you to re-evaluate your model without replicating the true triplets. 
We’ve fixed this bug in your code and put the updated code in https://github.com/KnowledgeBaseCompleter/eval-ConvKB . \n\nBy the way, we appreciate your work, which we find is really interesting. We did not intend any offence to your work. We hope we can push forward this exciting direction together. We look forward to your feedback.\n", "Disclose: I am the author of ConvKB. I had re-run my ConvKB implementation. And there is not a single triple having a score of 0 on FB15k-237.\n\nIt would be nice if you can create an open issue in my ConvKB github before discussing any information made in public.\n\nUpdate for a clarification:\n\nIt is important to note that our implementation can work with other score functions. Last year, I verified my “eval.py” implementation by using the same output vector and matrix embeddings produced by other models (such as TransE, TransR, TransH and STransE) to prove that our \"eval.py\" implementation is correct and can produce the exact same scores as produced by those models. \n\nFor each correct test triple, I just replicated this correct test triple several times to add to its set of corrupted triples, in order to work with a batch size (as shown in Lines 188-190 in “eval.py”). This is straightforward and does not matter when ranking the correct test triple. I thought that \"the same score\" you mentioned is actually for the correct test triple because of replicating. You should have a careful look at this point and then edit your comment above to have a reasonable reply.\n\nI just read your paper. This is nice work. Your experimental results are still great even if you add negative results from other papers.\n", "Thanks for pointing this out! We’re aware of the result of ConvKB, which achieves a very high MRR on FB15k (0.396). The reason that we did not compare with ConvKB [1] is that there is a bug in ConvKB’s evaluation.\n\nWe tried to reproduce their results from their published code [2], but found that ConvKB tends to assign the same score, i.e., 0, to many triplets. The reason is that the RELU activation function is used in the convolution layers, which tends to have very sparse output, i.e., the outputs of many neurons are zero. This brings a big problem in the evaluation.\n\nFor evaluation, given a query (h,r, ?), the goal is to identify the rank of the true positive triplets (h, r, t) among all the possible (h, r, t’) triplets. Since the scores of many triplets given by ConvKB equal to 0 (typo, should be \"the same score\" or \"bias\"), the true positive triplets and many other false triplets are all ranked in the first position at the same time. A reasonable solution would be to randomly pick a triplet among those triplets as the first ranked triplet, and so on. However, we find that a specific ranking procedure is used by ConvKB, which tends to rank the true positive triplets in a high position. As a result, the performance evaluated in this way is really high, which is not true in reality. We strongly suggest that the authors of ConvKB take a look at this issue and fix their results.\n\nFor the results of Reciprocal ComplEx-N3, thanks again for pointing this out, which we were not aware of before the submission. However, note that the focus of Reciprocal ComplEx-N3 and that of this paper are different. Our paper proposes a new distance function for learning knowledge graph embedding, and our proposed RotatE is able to infer three relation patterns including composition, symmetry/asymmetry, and inversion, which offers good model interpretability. 
The focus of Reciprocal ComplEx-N3, however, is on different regularization techniques, which could be potentially applied to our proposed RotatE model. For example, on the FB15k data set, the performance of RotatE increases from 0.797 to 0.815 with the N3-regularizer, which outperforms the performance of ComplEx-N3 on FB15k (0.80). We are still in the process of implementing the reciprocal setting for our RotatE model, which seems to be pretty effective according to [3]. \n\n[1] A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network\n[2] https://github.com/daiquocnguyen/ConvKB\n[3] Canonical Tensor Decomposition for Knowledge Base Completion\n", "You should mention the experimental results of ConvKB [1] and Reciprocal ComplEx-N3 [2]. Reciprocal ComplEx-N3 gives higher MRR and Hits@10 scores than yours on both FB15K and FB15k-237. ConvKB produces better scores than yours for MRR on FB15k-237 and MR on WN18RR.\n\n[1] A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. NAACL-HLT 2018.\n[2] Canonical Tensor Decomposition for Knowledge Base Completion. ICML-2018. Oral presentation." ]
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2019_HkgEQnRqYQ", "SJxD-H5t0m", "Sye6N04n0m", "Hkxa6j_oAQ", "iclr_2019_HkgEQnRqYQ", "SJluiXkhRm", "rkgeGrg_Tm", "Skl2_GqtAm", "H1lMravo07", "r1xF475K0m", "HJlFFR7167", "HkxYw_tn3Q", "HJlUq8jq3X", "H1e_OITis7", "HJlYlIhn2X", "iclr_2019_HkgEQnRqYQ", "HJlFFR7167", "iclr_2019_HkgEQnRqYQ", "iclr_2019_HkgEQnRqYQ", "HyekBjPinm", "HJxnXzWVnX", "HklyVUAX2m", "rkerutP2tQ", "iclr_2019_HkgEQnRqYQ" ]
iclr_2019_HkgSEnA5KQ
Guiding Policies with Language via Meta-Learning
Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.
accepted-poster-papers
The paper proposes a meta-learning approach to "language guided policy learning" where instructions are provided in the form of natural language instructions, rather than in the form of a reward function or through demonstration. A particularly interesting novel feature of the proposed approach is that it can seamlessly incorporate natural language corrections after an initial attempt to solve the task, opening up the direction towards natural instructions through interactive dialogue. The method is empirically shown to be able to learn to navigate environments and manipulate objects more sample efficiently (on test tasks) than approaches without instructions. The reviewers noted several potential weaknesses: while the problem setting was considered interesting, the empirical validation was seen to be limited. Reviewers noted that only one (simple) domain was studied, and it was unclear if results would hold up in more complex domains. They also note lack of comparison to baselines based on prior work (e.g., pre-training). The authors provided very detailed replies to the reviewer comments, and added very substantial new experiments, including an entire new domain and newly implemented baselines. Reviewers indicated that they are satisfied with the revisions. The AC reviewed the reviewer suggestions and revisions and notes that the additional experiments significantly improve the contribution of the paper. The resulting consensus is that the paper should be accepted. The AC would like to note that several figures are very small and unreadable when the paper is printed, e.g., figure 7, and suggests that the authors increase figure size (and font size within figures) to ensure legibility.
train
[ "Syx6sF9JnQ", "ByejewrEAX", "Hygq4Iy4R7", "BklVgmpgCQ", "S1edmre36m", "HyxheSl2a7", "S1gowVln67", "HJxXmVe3pm", "r1glU6XCnQ", "Hkx2E2HphX" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n\nUPDATE: I've increased my rating based on the authors' thorough responses and the updates they've made to the paper. However, I still have a concern over the static nature of the experimental environments.\n\n=====================\n\nThis paper proposes the use of iterative, linguistic corrections to guide (ie, condition and adjust) an RL policy. A major challenge in learning language-guided policies is grounding the language in environment states and agent actions. The authors tackle this challenge with a meta-learning approach.\n\nThe approach is fairly complex, blending imitation and supervised learning. It operates on a training set from a distribution of virtual pick-move-place tasks. The policy to be learned operates on this set and collects data, via something close to DAgger, for later supervised learning on the task distribution. The supervised-learning data comprises trajectories augmented with linguistic subgoal annotations, which are referred to as policy \"corrections.\" By ingesting its past trajectories and the correction information, the policy is meant to learn to solve the task and to ground the corrections at the same time, end-to-end. Correction annotations are derived from an expert policy.\n\nThe idea of guiding a policy through natural language and the requisite grounding of language in environment states and policy actions have been investigated previously: for example, by supervised pretraining on a language corpus, as in the cited work of Andreas et al. (2018). The alternative meta-learning approach proposed here is both well-motivated and original.\n\nGenerally, I found the paper clear and easy to read. The authors explain convincingly the utility of guiding policies through language, especially with respect to the standard mechanisms of reward functions (sparse, engineered) and demonstrations (expertise required). The paper is also persuasive on the utility of iterative, interactive correction versus a fully-specified language instruction given a priori. The meta-learning algorithm and training/test setup are both explained well, despite their complexity. On the other hand, most architectural details necessary to reproduce the work are missing, at least from the main text. This includes various tensor dimensions, the structure of the network for perceiving the state, etc.\n\nI like the proposed experimental setting. It enables meta-learning on sequential decision making problems in a partially observable environment, which seems useful to the research community at large. Ultimately, however, this paper's significance is not evident to me, mainly because the proposed method lacks thorough experimental validation. No standard baselines are evaluated on the task (with or without meta-learning), nor is a detailed analysis of the learned policies undertaken. The ablation study is useful, and a good start, but insufficient in my opinion. Unfortunately, the results are merely suggestive rather than convincing.\n\nSome things I'd like to see in an expanded results section before recommending this paper include:\n- Comparison to an RL baseline that attempts to learn the full task, without meta-training or language corrections.\n- Comparison to a baseline that learns from intermediate rewards. Instead of annotating data with corrections, you could provide +/- scalar rewards throughout each trajectory based on progress towards the goal (since you know the optimal policy). 
How effective might this be compared to using the corrections?\n- Comparison to a baseline that does some kind of pretraining on the language corrections, as in Andreas et al. (2018).\n- Quantification of how much meta-training data is required. What is the sample complexity like with/without language corrections?\n\nI also have concerns about the need for near-optimal agents on each task -- this seems very expensive and inefficient. The expert policy is obtained via RL on each individual task using \"ground truth\" rewards. It is not specified what these rewards are, nor is it stated how near to optimal the resulting policy is nor how this nearness affects the overall meta-learning process.\n\nIts unclear to me how the \"full information\" baseline processes and conditions on the full set of subgoals/corrections. Are they read as a single concatenated string converted to one vector by the bi-LSTM?\n\nThere also might be an issue with the experimental setup, unless I've misunderstood it. The authors state that \"the agent only needs 2 corrections where the first correction is the location of the goal object and the second is the location of the goal square.\" But if the specific rooms, indicated by colors, do not change location from task to task (and they appear not to from all the figures), then the agent can learn the room locations during meta-training and these two \"corrections\" tell it everything it needs to know to solve the task.\n\nPros:\n- Appealing, well-motivated idea for training policies via language.\n- Clear, pleasant writing and good communication of a complicated algorithm.\n- Good experimental setup that should be useful in other research (except for possible issue with static room locations).\n\nCons:\n- The need for a near-optimal policy for each task. \n- Overall complexity of the training process.\n- The so-called corrections are actually linguistic statements of subgoals computed from the optimal policy. There is much talk in the introduction of interactive policy correction by humans, which is an important goal and interesting problem, but the present paper does not actually investigate human interaction. This comes as a letdown after the loftiness of the introduction.\n- Various details needed for reproduction are lacking. Maybe they're in the supplementary material; if so, please state that in the main text.\n- Major lack of comparisons to alternative approaches.", "Based on your thorough responses and paper modifications, I'll revise my review.", "For the multi-room environment, the room colors do not change location from task to task. While two corrections could tell it everything it needs to know and we observe this in some cases, we see that the agent often fails to complete subgoals it has information on and still benefits from successive corrections after two (as seen in Table 1). An example of this can be seen in Appendix A.1. Here the agent is not perfect and is able to complete the task after receiving multiple corrections (sometimes the same correction twice). \n\nOur model is also able to handle more relative types of corrections where the agent cannot memorize absolute positions. In Section 7.4 we add different types of corrections such as (“you are in the wrong room” or “goal room is southwest.” The agent cannot just memorize the locations of each room and instead must map corrections to changes in behavior. 
\n\nThe second environment we have added, the robotic object relocation task, has relative corrections such as “Move a little up right” or “Push closer to the green block”. A fixed number of corrections cannot exactly specify the task and the agent must consider the correction in terms of its previous behavior to gradually move closer to the goal.\n", "I'm very impressed and mostly satisfied with the responses to my review. There remains one important, unanswered question, however, that I'd like to be addressed.\n\nIf the specific rooms, indicated by colors, do not change location from task to task (and they appear not to from all the figures), then the agent can learn the room locations during meta-training and the two \"corrections\" tell it everything it needs to know to solve the task. So: do colored rooms change location from task to task? I.e., is the blue room sometimes in the lower right and other times in the upper left, etc?", "“- Quantification of how much meta-training data is required. What is the sample complexity like with/without language corrections?”\n> We add these details to the paper in Appendix A.3.\n\nMeta-training: For the multi-room domain we meta-train on 1700 environments. Our method converges in 6 DAgger steps so it takes 30 corrections per environment for a total of 51,000 corrections. For the robotic object relocation domain, we train on 750 environments. Our method converges in 9 DAgger steps so it takes 45 corrections per environment for a total of 33750 corrections. \n\nMeta-testing: On new tasks, asymptotically RL is able to achieve better final performance than our method but takes orders of magnitudes more samples. In Figure 7 we plot the number of training trajectories used per test task. While LGPL only receives up to 5 trajectories for each test task, RL takes more than 1000 trajectories to reach similar levels of performance. \n\n“Its unclear to me how the \"full information\" baseline processes and conditions on the full set of subgoals/corrections. Are they read as a single concatenated string converted to one vector by the bi-LSTM?”\n> For the full information baseline, all the subgoals are concatenated and converted to one vector by a bi-LSTM.\n\n“I also have concerns about the need for near-optimal agents on each task -- this seems very expensive and inefficient.”\n> To ground the language corrections we need some form of supervision. Typically methods for grounding natural language instructions assume access to either a large corpus of supervised data (i.e expert behavior) or a reward function (Janner et al 2017, Misra et al 2017, Wang, Xiong, et al. 2018, Andreas et al) in order to train the model. In our setting, we similarly assume access to near optimal agents or a reward function (which we can use to train near optimal agents), which is used to learn the policy and language grounding, but only on the meta-training tasks. On unseen meta-test tasks, we can learn very quickly simply by using language corrections, without the need for reward functions or expert policies. \n\n“On the other hand, most architectural details necessary to reproduce the work are missing, at least from the main text.”\n> We have added architecture and training details (including reward functions) to the appendix A.3 and referenced them in the main text. We also intend to open source the code once the review decision is out.\n\n\n[1] Wang, Xin et al. 
“Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation.” CoRRabs/1803.07729 (2018)\n\n[2] Andreas, Jacob et al. “Learning with Latent Language.” NAACL-HLT (2018).\n\n[3] Misra, Dipendra Kumar et al. “Mapping Instructions and Visual Observations to Actions with Reinforcement Learning.” EMNLP (2017).\n\n[4] Janner, Michael et al “Representation Learning for Grounded Spatial Reasoning” TACL 2017", "Thank you for the detailed and constructive feedback. To address concerns about limited experimental evaluation, we have added a new environment that we call robotic object relocation, which involves a continuous state space and more relative corrections. The results for this environment are in the revised paper in Section 7.3. To address comments about comparisons, we have also added a number of additional comparisons, comparing LGPL to state of the art instruction following methods (Misra 2017, Table 1), pre-training with language (similar to Andreas 2018, Fig 7), using rewards instead of language corrections (Fig 7), and training from scratch via RL (Fig 7). Additionally, to provide a deeper understanding of the methods performance, we included a number of additional analyses on the methods extrapolation and generalization in Section 7.4. \n\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to either revise their rating of the paper, or request additional changes that would alleviate their concerns.\n\nFind below the responses to specific comments:\n\n“No standard baselines are evaluated on the task (with or without meta-learning), nor is a detailed analysis of the learned policies undertaken. “\n-> We have added additional comparisons and points of analysis to the updated paper. We compare with a strong instruction following method from the literature, Misra et al. (2017) (Table 1), as well as a number of other comparisons including all the comparisons that were requested (see detailed comments below) (Fig 7). \n\nWe have also added a number of new points of analysis. We analyze the performance of the method on stochastically chosen corrections instead of very well formed ones (Table 4). We analyze the extrapolation performance of the method to more corrections than training time (Table 3). We also analyze the performance of LGPL on tasks that are slightly out of distribution (Table 5). We would be happy to add additional analysis that the reviewer believes is important for the paper -- please let us know if we have addressed all of your concerns in this regard!\n\n“- Comparison to an RL baseline that attempts to learn the full task, without meta-training or language corrections.”\n> We have added a RL baseline that trains a separate policy per task using a dense reward (Section 7.3, Fig 7). The details of the reward functions and training algorithm can be found in the appendix A.3. The RL baseline is able to achieve better final performance but takes orders of magnitude more samples on the new tasks. Our method can obtain reasonable performance with just 5 samples on the test tasks. An important distinction to make is that this baseline also assumes access to the test task reward function; our method only uses the language corrections. Additional details can be found in Section 7.3, 7.4. \n\n“Comparison to a baseline that learns from intermediate rewards. 
Instead of annotating data with corrections, you could provide +/- scalar rewards”\n> We have added a baseline (Section 7.3, Fig 7) that uses intermediate rewards instead of language corrections, that we call Reward Guided Policy Learning (RGPL).The correction for a trajectory is the sum of rewards of that trajectory. RGPL performed worse than LGPL in both domains as seen in Fig 7. As seen from Fig 7 language corrections allow for more information to be transmitted over scalar rewards. Additional details for this comparison can be found in Section 7.2. \n\n“- Comparison to a baseline that does some kind of pretraining on the language corrections, as in Andreas et al. (2018).”\n> We have added a baseline (Section 7.3, Fig 7) that follows a pre-training paradigm similar to Andreas et al (2018) -- first pre-train a model on language instructions across many tasks and then finetune the model on new tasks using task-specific reward. Andreas et al. (2018) trains a learner with task-specific expert policies using DAgger. It then searches in the instruction space for the policy with the highest reward and then adapts the policy to individual tasks by fine tuning with RL. Since we can provide the exact instruction the policy needs, we do not perform the search in instruction space. We pretrain on the training tasks with DAgger and then finetune on test tasks with RL. This baseline is able to achieve slightly better final performance both domains but takes orders of magnitude more samples on the test tasks (>1000 trajectories vs 5 for our method). Details for this comparison can be found in Section 7.3. \n\n", "Thank you for the detailed and constructive feedback. We have made a number of changes to the paper to address this feedback - including new experimental domains, more comparisons and in-depth analysis of model behavior. We describe these further in responses to specific comments below:\n\n“Only one setting is studied”\n> To extend the experimental evaluation beyond a single domain, we have added a new environment that we call robotic object relocation and involves manipulating a robotic gripper to push blocks. This environment involves relative corrections and continuous state space and is described in Section 7.1.2. This environment shows our method can generalize to substantially different domains (continuous state space) as well as new kinds of corrections beyond subgoals. The results for this environment are in the revised paper in Section 7.3.\n\n“the task distribution seems not very complex.”\n> We specify the task distribution in Section 7.1. For the multi-room environment the training and test tasks are generated such that for any test task, its list of all five subgoals does not exist in the training set. There are 3240 possible lists of all five subgoals. We train on 1700 of these environments and reserve a separate set for testing. For the robotic object relocation environment, we generate tasks by sampling one of the 3 movable blocks to be pushed. We then randomly choose one of the 5 immovable blocks and sample a direction and distance from that block to get a goal location. We generate 1000 of these environments and train on 750 of them. \n\n“How the proposed model performs if the task is a little bit out of distribution? “\n->We have added another experiment (Section 7.4, table 5) where we hold out specific objects in the training set and test on these unseen objects in the test set. 
For example, the agent will not see green triangles during training, but will see other green objects and non-green triangles during training and must generalize to the unseen combination at test time. As seen from results in Section 7.4, our method does have a lower completion rate on these tasks but is still able to complete a high completion rate (0.75) and outperform the baselines.\n\nOther improvements: To further improve the experimental comparison, we have also added a number of additional comparisons, comparing to state of the art instruction following methods (Misra 2017, Table 1), pretraining with language (similar to Andreas 2018, Fig 7), using rewards instead of language corrections (Fig 7). We have also provided more analysis regarding the extrapolation and generalization of LGPL in Section 7.4.\n\n[1] Misra, Dipendra Kumar et al. “Mapping Instructions and Visual Observations to Actions with Reinforcement Learning.” EMNLP (2017).\n\n[2] Andreas, Jacob et al. “Learning with Latent Language.” NAACL-HLT (2018).\n", "Thank you for the detailed and constructive feedback. To address concerns about the experimental setup setup, we have added a new environment that we call robotic object relocation, which involves a continuous state space. Instead of subgoals the corrections here are more relative such as “move a little left”. The results for this environment are in the revised paper in Section 7.3. To address comments about comparisons, we have also added a number of additional comparisons, comparing LGPL to state of the art instruction following methods (Misra 2017, Table 1), pre-training with language (similar to Andreas 2018, Fig 7), using rewards instead of language corrections (Fig 7). To provide a deeper understanding of the methods performance, we have also included a number of additional analyses on extrapolation and generalization in Section 7.4. Please let us know if adding additional comparisons or analysis would be helpful!\n\nWe respond to specific comments below:\n\n“I am wondering how the method will be compared with a state-of-the-art method that focuses on following instructions”\n-> We have implemented and compared to state of the art instruction following methods (results in Section 7.2, 7.3) Misra et al. (2017), and pretraining based on language (Andreas et al 2018) which show strong results on instruction following. We find that Misra et al. (2017) performs a little worse than our full information oracle method on the multi-room domain when given all subgoals along with the instruction, and significantly worse when given just the instruction. On the object relocation domain, Misra et al. (2017) performs around the same as our instruction baseline. We would like to emphasize that our work is complementary to better instruction following methods/architectures, it provides us a way to incorporate additional corrections in scenarios where just instructions are misspecified/vague. The specific comparison suggested, Artzi and Zettlemoyer, needs a domain specific executor and a formal language over actions. This approach requires specific engineering for each task and it’s unclear how to create a deterministic executor for ours. We also note that recent state of the art work in instruction following [Andreas 2018], [Misra 17], [Wang 2018], [Janner 2017] do not compare to A+Z for their tasks. 
\n\n“Moreover, the current experiments does not convince the reviewer if the claims are true in a more realistic setup”\n-> We have now added an additional continuous state space environment, robotic object manipulation, and tested over more varied types of corrections, which demonstrates the applicability of our method to diverse task and correction setups. These results can be found in Section 7.4, and show that our method scales to different setups. \n\n“Moreover the authors need to compare their method in an environment that has been previously used for other domains with instructions”\n-> Our algorithm incorporates language corrections to improve agent behavior quickly on new tasks, when the instruction is vague or ambiguous. No other work to our knowledge studies this problem setting, so we made our own environments for this task - based on existing instruction following domains. Our minigrid environment is a partially observed navigation-based environment and shares structural similarities to existing navigation-based environments such as [Matterport 3D, SAIL, Pond world]. \n\n[1] Wang, Xin et al. “Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation.” CoRRabs/1803.07729 (2018)\n\n[2] Andreas, Jacob et al. “Learning with Latent Language.” NAACL-HLT (2018).\n\n[3] Misra, Dipendra Kumar et al. “Mapping Instructions and Visual Observations to Actions with Reinforcement Learning.” EMNLP (2017).", "This paper provides a meta learning framework that shows how to learn new tasks in an interactive setup. Each task is learned through a reinforcement learning setup, and then the task is being updated by observing new instructions. They evaluate the proposed method in a simulated setup, in which an agent is moving in a partially-observable environment. They show that the proposed interactive setup achieves better results than when the agent all the instructions are fully observable at the beginning. \n\nThe task setup is very interesting. However, the experiments are rather simplistic, and does not evaluate the full capability of the model. Moreover, the current experiments does not convince the reviewer if the claims are true in a more realistic setup. The authors compare the proposed method with one algorithm (their baseline) in which all the instructions are given at the beginning. I am wondering how the method will be compared with a state-of-the-art method that focuses on following instructions, e.g., Artzi and Zettlemoyer work. Moreover, the authors need to compare their method in an environment that has been previously used for other domains with instructions. ", "Summary:\nThis paper studies how to teach agents to complete tasks via natural language instructions in an iterative way, e.g., correct the behavior of agents. This is a very natural way to learn as humans. The basic idea is to learn a model that takes correction and history as inputs and output what action to take. This paper formulates this in meta-learning setting in which each task is drawn from a pre-designed task distribution and then the models are able to adapt to new tasks very fast. The proposed method is evaluated in a virtual environment where the task is to pick up a particular object in a room and bring it to a particular goal location in a different room. 
There are two baselines: 1) instruction only (missing information), 2) full information (not iterative); the proposed method outperforms 1) with a higher task completion rate and 2) with fewer corrections.\n\nStrength:\n- This paper addresses a very interesting problem in order to make agents learn in a more human-like way.\n\nComments:\n- Only one setting is studied. And, the task distribution seems not very complex.\n- How does the proposed model perform if the task is a little bit out of distribution?\n" ]
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_HkgSEnA5KQ", "Hygq4Iy4R7", "BklVgmpgCQ", "S1edmre36m", "HyxheSl2a7", "Syx6sF9JnQ", "Hkx2E2HphX", "r1glU6XCnQ", "iclr_2019_HkgSEnA5KQ", "iclr_2019_HkgSEnA5KQ" ]
iclr_2019_HkgTkhRcKQ
AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods
Adam has been shown to fail to converge to the optimal solution in certain cases. Researchers have recently proposed several algorithms to avoid the non-convergence issue of Adam, but their efficiency turns out to be unsatisfactory in practice. In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods. We argue that there exists an inappropriate correlation between the gradient gt and the second moment term vt in Adam (t is the timestep), which results in a large gradient being likely to have a small step size while a small gradient may have a large step size. We demonstrate that such unbalanced step sizes are the fundamental cause of the non-convergence of Adam, and we further prove that decorrelating vt and gt leads to an unbiased step size for each gradient, thus solving the non-convergence problem of Adam. Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates vt and gt by temporal shifting, i.e., using the temporally shifted gradient gt−n to calculate vt. The experimental results demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining competitive performance with Adam in terms of both training speed and generalization.
accepted-poster-papers
This paper proposes a new stochastic optimization scheme similar to Adam. The authors claim that Adam can be improved upon by decorrelating the second-moment estimate v_t from the gradient estimates g_t. This is done through the temporal decorrelation scheme, as well as block-wise sharing of the estimates v_t. The reviewers agree that the paper is sufficiently well-written, original and significant to be accepted for ICLR, although some points remain unclear after the reviews. The main disadvantage of the method is an increased computational cost (linear in 'n'; however, this might be negligible when sharing v_t across blocks).
train
[ "B1gVyyYq0X", "BklQd4otAQ", "SkxDrCgdnX", "r1gToxwR6X", "BJe4vgwCaX", "S1gWBeDCpQ", "r1gzGyM5hX", "S1guRoZwhQ", "H1gOA4Dc57", "Byg39Xum9Q", "S1eEIQx-9Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "Thank you so much for the comments and useful references. We will put our effort into the convergence analysis. Hopefully, we will have some convergence analysis in our final version. ", "I think the authors have further improved the paper, thus I have increased my score to 6. \n\nHowever, some further theoretical analysis should be made in the future work. \n\n\"We are sorry for the confusion. We mixed the general arguments and the counterexample-specific arguments together. According to the reviewers’ feedback, we have reorganized the analysis section, and now the analysis on counterexamples and the general arguments on the non-convergence of Adam are separated. We would appreciate if you could have a check on these reorganized arguments (Section 3.3). The general arguments are actually very sound. \"\n\nThe current Section 3.3 provides general claims only based on the common clues from specific counterexamples. \nThus, it is important to give convergence analysis of AdaShift in the general case. Although this part is missing in the current version, I am fine with it due to its novel idea. I list two references which provide the convergence analysis of Adam-type algorithms for nonconvex optimization. I hope these can help authors to build their own analysis.\n\n\nZhou, Dongruo, et al. \"On the convergence of adaptive gradient methods for nonconvex optimization.\" arXiv preprint arXiv:1808.05671 (2018).\n\nChen, Xiangyi, et al. \"On the convergence of a class of adam-type algorithms for non-convex optimization.\" arXiv preprint arXiv:1808.02941 (2018).\n\n", "In this paper, the authors found that decorrelating $v_t$ and $g_t$ fixes the non-convergence issue of Adam. Motivated by that, AdaShift that uses a temporal decorrelation technique is proposed. Empirical results demonstrate the superior performance of AdaShift compared to Adam and AMSGrad. My detailed comments are listed as below. \n\n1) Theorem 2-4 provides interesting insights on Adam. However, the obtained theoretical results rely on specific toy problems (6) and (13). In the paper, the authors mentioned that \"... apply the net update factor to study the behaviors of Adam using Equation 6 as an example. The argument will be extended to the stochastic online optimization problem and general cases.\" What did authors mean the general cases?\n\n2) The order of presenting Algorithm 1, 2 and Eq. (17) should be changed. I suggest to first present AdaShift (i.e., Eq. (17) or Algorithm 3 with both modified adaptive learning rate and moving average), and then elaborate on temporal decorrelation and others. AdaShift should be presented as a new Algorithm 1. In experiments, is there any result associated with the current Algorithm 1 and 2? If no, why not compare in experiments? One can think that Algorithm 1 and 2 are adaptive learning rate methods against adaptive gradient methods (e.g., Adam, AMSGrad). \n\n3) Is there any convergence rate analysis of AdamShift even in the convex setting?\n\n4) The empirical performance of AdamShift is impressive. Can authors mention more details on how to set the hyperparameters for AdamShift, AMSGrad, Adam, e.g., learning rate, \\beta 1, and \\beta 2? \n", "Thanks for your constructive feedback. \n\nQ: In my eyes, the limitations of the paper are that the example studied is a bit contrived and as a result, I am not sure how general the improvements. More generally, I am worried that the theoretical results and the intuitions backing the improvements are built only on one pathological example. 
Are there arguments to claim that this example is a prototype for a more general behavior? \n\n>> We mixed the general arguments for the non-convergence of Adam into these analyses of counterexamples. According to the reviewers' feedback, we realize that it is indeed confusing. We thus have reorganized the analysis section, and clearly separated the analysis on counterexamples and the general arguments on the non-convergence issue of Adam. Actually, ‘‘assigning relatively small step-size to large gradient and assigning relatively large step-size to small gradient’’ is the general behavior of Adam and traditional adaptive learning rate methods. Sometimes it causes non-convergence, and more generally, it just hampers the convergence. Please see the reorganized arguments in Section 3.3 for details. \n\nQ: With regards to the solution proposed, temporal decorrelation, I wonder how it interacts with the mini-batch side. With only a light understanding of the problem, it seems to me that large mini-batches will decrease the variance of the gradient estimates and hence increase the correlation of successive samples, breaking the assumptions of the method. \n\n>> The argument is thought-provoking. But it seems that, though decreasing the variance makes the difference between samples smaller, it does not change the independence. Assume that the gradients are independently sampled from a standard Gaussian N(0, 1). If the Gaussian is squeezed to N(0, 0.1), gradients sampled from the squeezed Gaussian are still independent of each other. Using our argument in the paper, we still reach the same conclusion: assuming the loss function is fixed, as long as these mini-batches are independently sampled, no matter the mini-batch size is large or small, their gradients are always independent. \n\nQ: The performance gain compared to Adam seems consistent. It would have been interesting to see Nadam in the comparisons. \n\n>> We have conducted a set of experiments for Nadam. The results are presented in Appendix K. Generally, we found Nadam shows quite similar performance as Adam. Please check Appendix K for details. \n\nQ: Ali Rahimi presented a very simple example of the poor performance of the Adam optimizer in his test-of-time award speech at NIPS this year. It seems like an excellent test for any optimizer that tries to be robust to ill-conditioning (as with Adam), though I suspect that the problem solved here is a different one than the problem raised by Rahimi's example. \n\n>> It is an interesting test and we have tested our algorithm with the code they provided. Our finding is somewhat weird: as long as the training is sufficiently long, SGD, Adam, and AdaShift basically converge in this problem, though the final performance of SGD is significantly better than Adam and AdaShift. \n\n>> We tend to believe this is a general issue of adaptive learning rate method when comparing with vanilla SGD. Because these adaptive learning rate methods are generally scale-invariance, i.e., the step-size in terms of g_t/sqrt(v_t) is basically around 1, which makes it hard to converge very well in such an ill-conditioning quadratic problem. SGD, in contrast, has a step-size g_t. As the training converges, SGD would have a decreasing step-size, making it much easier to converge better. To confirm our analysis, we train the same task with a decreasing learning rate, and we found that at the end of the training, Adam and AdaShfit both converge satisfactorily. 
\n\n>> Levenberg-Marquardt, which minimizes $(\\delta W_1, \\delta W_2)$ by solving least-squares, shows the fastest convergence. It indicates the possibility of better alternatives to gradient descent (backpropagation) based optimization, which deserves further investigations. \n", "Thanks for your constructive feedback. \n\nQ: However, the obtained theoretical results rely on specific toy problems (6) and (13). In the paper, the authors mentioned that \"... apply the net update factor to study the behaviors of Adam using Equation 6 as an example. The argument will be extended to the stochastic online optimization problem and general cases.\" What did the authors mean the general cases? \n\n>> We are sorry for the confusion. We mixed the general arguments and the counterexample-specific arguments together. According to the reviewers’ feedback, we have reorganized the analysis section, and now the analysis on counterexamples and the general arguments on the non-convergence of Adam are separated. We would appreciate if you could have a check on these reorganized arguments (Section 3.3). The general arguments are actually very sound. \n\nQ: The empirical performance of AdaShift is impressive. Can authors mention more details on how to set the hyperparameters for AdaShift, AMSGrad, Adam, e.g., learning rate, \\beta 1, and \\beta 2? \n\n>> In the revision, we have listed hyperparameter settings in each experiment in Appendix. We have also conducted a set of experiments on hyperparameter sensitivities of AdaShift, which are also included in Appendix. Please check these details in Appendix I of the new version of our paper. \n\nQ: I suggest to first present AdaShift (i.e., Eq. (17) or Algorithm 3 with both modified adaptive learning rate and moving average), and then elaborate on temporal decorrelation and others. AdaShift should be presented as a new Algorithm 1. \n\n>> Thanks a lot for this valuable suggestion. We have tried your suggestion and it looks much better. Please check it in the revised version. \n\nQ: Is there any convergence rate analysis of AdaShift even in the convex setting? \n\n>> Currently, we do not have convergence rate analysis for AdaShift. We will work on it and hope it will appear soon. \n", "Thanks for your constructive feedback. \n\nQ: Regarding content, the reviewer is quite dubious about the spatial decorrelation idea. Assuming shared moment estimation for blocks of parameters is definitely meaningful from an information perspective, and has indeed been used before, but it seems to have little to do with the 'decorrelation' idea. \n\n>> In our proposed algorithm, only the spatial elements of temporally-shifted gradient g_{t-n} are involved in the calculation of v_t. Based on the temporal independence assumption, g_{t-n} is independent of g_t, which naturally implies that all elements in g_{t-n} are independent of the elements in g_t. Thus, using the spatial elements in g_{t-n} does not break the independence assumption. We have revised the related sections and avoided the term ‘‘spatial independence’’ that is indeed confusing. \n\nQ: Regarding presentation, the reviewer's opinion is that the paper is too long. Too much space is spent discussing an interesting yet limited counterexample, on which 5 theorems (that are simple analytical derivations) are stated. This should be summarized (and its interesting argument stated more concisely), to the benefit of the actual algorithm presentation, that should appear in the main text (Algorithm 3). 
The spatial decorrelation method, that remains unclear to the reviewer, should be discussed more and validated more extensively. The current size of the paper is 10 pages, which is much above the ICLR average length. \n\n>> Thanks a lot for these constructive suggestions. We have rewritten related sections accordingly. The main changes are: (i) we have renamed the analytical derivations as lemmas and removed unnecessary details; (ii) we have reorganized the analysis section to make it more concise and clear; (iii) we have removed Algorithms 1 and 2, and directly presented Algorithm 3; (iv) we have made the arguments on the validity of using spatial elements much more clear. \n\nQ: The reviewer would be curious to see a comparison with temporal-only AdaShift in the experiment, as the block/max operator \\phi, to isolate the temporal and 'spatial' effect. \n\n>> We have added experiments on temporal-only AdaShift and spatial-only AdaShift. Some experiments on temporal-only AdaShift can be found in Figure 2 and Figure 3 in the experiments Section, and more results are included in Appendix J and K. \n\n>> Temporal-only AdaShift is actually not as stable as AdaShift. It works well in simple tasks, but it suffers from explosive gradient in complex systems: a neuron recovering from a vanishing gradient state is the typical failure case, where v_t is nearly zero. AdaShift with spatial operation, in contrast, does not suffer from this problem: the gradients of an entire block is relatively stable and won’t vanish. \n\n>> Spatial-only AdaShift turns out not to fit our assumption, but it is indeed a very interesting extension of Adam. Therefore, we have also conducted a set of experiments on spatial-only AdaShift. According to our initial investigations, ‘‘spatial-only AdaShift’’ shares a similar performance to Adam. Details are presented in Appendix J and K. ", "Summary\n------\n\nBased on an extensive argument acoordig to which Adam potential failures are due to the positive correlation between gradient and moment estimation, the authors propose Adashift, a method in which temporal shift (and more surprisingly 'spatial' shift, ie mixing of parameters) is used to ensure that moment estimation is less correlated with gradient, ensuring convergence of Adashift in pathological cases, without the efficiency cost of simpler method such as AMSGrad. An extensive analysis of a pathological counter example, introduced in Reddi et al. 2018 is analysed, before the algorithm presentation and experimental validation. Experiments shows that the algorithm has equivalent speed as Adam and sometimes false local minima, resulting in better training error, and potentially better test error.\n\nReview\n-------\n\nThe decorrelation idea is original and well motivated by an extensive analysis of a pathological examples. The experimental validation is thorough and convincing, and the paper is overall well written. \n\nRegarding content, the reviewer is quite dubious about the spatial decorrelation idea. ASsuming shared moment estimation for blocks of parameters is definitely meaningful from an information perspective, and has indeed been used before, but it seems to have little to do with the 'decorrelation' idea. The reviewer would be curious to see a comparison with temporal-only adashift in the experiment, as the block / max operator \\phi, to isolate the temporal and 'spatial' effect.\n\nRegarding presentation, the reviewer's opinion is that the paper is too long. 
Too much space is spent discussing an interesting yet limited counterexample, on which 5 theorems (that are simple analytical derivations) are stated. This should be summarized (and its interesting argument stated more concisely), to the benefit of the actual algorithm presentation, that should appear in the main text (algorithm 3). The spatial decorrelation method, that remains unclear to the reviewer, should be discussed more and validated more extensively. The current size of the paper is 10 pages, which is much above the ICLR average length.\n\nHowever, due to the novelty of the algorithm, the reviewer is in favor of accepting the paper, provided the authors can address the comments above.\n", "This manuscript contributes a new online gradient descent algorithm with adaptation to local curvature, in the style of the Adam optimizer, ie with a diagonal reweighting of the gradient that serves as an adaptive step size. First the authors identify a limitation of Adam: the adaptive step size decreases with the gradient magnitude. The paper is well written.\n\nThe strengths of the paper are a interesting theoretical analysis of convergence difficulties in ADAM, a proposal for an improvement, and nice empirical results that shows good benefits. In my eyes, the limitations of the paper are that the example studied is a bit contrived and as a results, I am not sure how general the improvements.\n\n# Specific comments and suggestions\n\nUnder the ambitious term \"theorem\", the results of theorem 2 and 3 limited to the example of failure given in eq 6. I would have been more humble, and called such analyses \"lemma\". Similarly, theorem 4 is an extension of this example to stochastic online settings. More generally, I am worried that the theoretical results and the intuitions backing the improvements are built only on one pathological example. Are there arguments to claim that this example is a prototype for a more general behavior?\n\n\nAli Rahimi presented a very simple example of poor perform of the Adam optimizer in his test-of-time award speech at NIPS this year (https://www.youtube.com/watch?v=Qi1Yry33TQE): a very ill-conditioned factorized linear model (product of two matrices that correspond to two different layers) with a square loss. It seems like an excellent test for any optimizer that tries to be robust to ill-conditioning (as with Adam), though I suspect that the problem solved here is a different one than the problem raised by Rahimi's example.\n\n\nWith regards to the solution proposed, temporal decorrelation, I wonder how it interacts with mini-batch side. With only a light understanding of the problem, it seems to me that large mini-batches will decrease the variance of the gradient estimates and hence increase the correlation of successive samples, breaking the assumptions of the method.\n\n\nUsing a shared scalar across the multiple dimensions implies that the direction of the step is now the same as that of the gradient. This is a strong departure compared to ADAM. It would be interesting to illustrate the two behaviors to optimize an ill-conditioned quadratic function, for which the gradient direction is not a very good choice.\n\n\nThe performance gain compared to ADAM seems consistent. It would have been interesting to see Nadam in the comparisons.\n\n\n\nI would like to congratulate the authors for sharing code.\n\nThere is a typo on the y label of figure 4 right.\n", "Thank you for your kind reply! I think my tone might be too serious before. 
Your paper is good and I just want to say you don't need that kind of little \"tricks\". :) Hope you can have a good result.", "Thanks for your interest in our paper and sorry for not releasing the code in time. The code is now accessible from the provided link. \n\nWe think publicizing the code should be done before the review process, rather than after paper acceptance. And from our perspective, releasing the code bears no relation to contribution, but the authors' duty. ", "You claim that \"The anonymous code is provided at http://bit.ly/2NDXX6x\", but there is nothing there.\nIt has been almost a week since the submission was closed. Do you plan to upload the code some days later but before the official reviewers start reading your paper? I don't like this behavior.\n\nSeriously, it should be regarded as some kinds of \"cheating\". You can successfully pretend that you had done everything before the submission if no one notices that. Reviewers may think that you've done more than others. It is unfair to other authors that honestly admit they haven't managed/refactored the code yet.\n\nI don't think whether you publish the code in the review period can strongly affect the result of acceptance. Honestly admitting that you haven't made your code really is much better than \"cheating\"." ]
[ -1, -1, 6, -1, -1, -1, 6, 9, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, -1, 4, 4, -1, -1, -1 ]
[ "BklQd4otAQ", "BJe4vgwCaX", "iclr_2019_HkgTkhRcKQ", "S1guRoZwhQ", "SkxDrCgdnX", "r1gzGyM5hX", "iclr_2019_HkgTkhRcKQ", "iclr_2019_HkgTkhRcKQ", "Byg39Xum9Q", "S1eEIQx-9Q", "iclr_2019_HkgTkhRcKQ" ]
iclr_2019_HkgYmhR9KX
AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking
Visual Active Tracking (VAT) aims at following a target object by autonomously controlling the motion system of a tracker given visual observations. Previous work has shown that the tracker can be trained in a simulator via reinforcement learning and deployed in real-world scenarios. However, during training, such a method requires manually specifying the moving path of the target object to be tracked, which cannot ensure the tracker’s generalization on the unseen object moving patterns. To learn a robust tracker for VAT, in this paper, we propose a novel adversarial RL method which adopts an Asymmetric Dueling mechanism, referred to as AD-VAT. In AD-VAT, both the tracker and the target are approximated by end-to-end neural networks, and are trained via RL in a dueling/competitive manner: i.e., the tracker intends to lockup the target, while the target tries to escape from the tracker. They are asymmetric in that the target is aware of the tracker, but not vice versa. Specifically, besides its own observation, the target is fed with the tracker’s observation and action, and learns to predict the tracker’s reward as an auxiliary task. We show that such an asymmetric dueling mechanism produces a stronger target, which in turn induces a more robust tracker. To stabilize the training, we also propose a novel partial zero-sum reward for the tracker/target. The experimental results, in both 2D and 3D environments, demonstrate that the proposed method leads to a faster convergence in training and yields more robust tracking behaviors in different testing scenarios. For supplementary videos, see: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS The code is available at https://github.com/zfw1226/active_tracking_rl
accepted-poster-papers
The paper presents an adversarial learning framework for active visual tracking, a tracking setup where the tracker has camera control in order to follow a target object. The paper builds upon Luo et al. 2018 and proposes jointly learning tracker and target policies (as opposed to the tracker policy alone). This automatically creates a curriculum of target trajectory difficulty, as opposed to the engineer designing the target trajectories. The paper further proposes a method for preventing the target from quickly outperforming the tracker and thus causing its policy to plateau. Experiments presented justify the problem formulation and design choices, and outperform Luo et al. The task considered is very important; active surveillance with drones is just one use case. A downside of the paper is that certain sentences have English mistakes, such as this one: "The authors learn a policy that maps raw-pixel observation to control signal straightly with a Conv-LSTM network. Not only can it save the effort in tuning an extra camera controller, but also does it outperform the..." However, overall the manuscript is well written, well structured, and easy to follow. The authors are encouraged to correct any remaining English mistakes in the manuscript.
train
[ "HkefcSaM6m", "rJlb9G2fp7", "ryl__Au-a7", "r1xYj34yp7", "Hkg8D92gTm", "rkxScidq27", "Hyxga8D52m" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have updated our paper during the rebuttal period, which could be summarized as below:\n\na) To emphasize our major contribution and clarify the non-trivial different with Luo et al. (2018), we've rewritten Abstract and modified the Introduction. \nb) We've modified Section 3.3. The motivation for the tracker-awareness is added. Explanations are given for why we cannot do a target-aware tracker. \nc) Supplementary videos are updated in: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS \n The videos contains: \n 1. Training the target and tracker jointly via AD-VAT (2D);\n 2. Testing the AD-VAT tracker in four testing environments (2D);\n 3. Using the learned target to attack the baseline trackers (2D);\n 4. Training the target and tracker via AD-VAT in DR Room (3D);\n 5. Testing the tracker in Realistic Environments (3D);\n 6. Passively testing tracker on real-world video clips.\nd) Appendix.A is modified for better explaining the partial zero-sum reward.\ne) Appendix.B is added. It visualizes the training process via drawing the position distribution in different training stages.\nf) Appendix.C is added. It provides evaluation results on video clips to demonstrate the potential of transferring the tracking ability to the real world.\ng) Table.1 is updated. We add the testing result that the adversarial target is tracked by the three trackers in two different maps, and update the average performance simultaneously. The results demonstrate that the target learned in AD-VAT could effectively challenge the two baseline trackers.", "Thanks for appreciating our partial-zero-sum idea. Our primary contribution is the adversary/dueling RL mechanism for training a robust tracker. To stabilize and accelerates the training, we devised the techniques of the partial-zero-sum and the asymmetrical target model. These two techniques are critical for a successful training, and we hope to see their applications to other domains involving adversary/dueling training.\n\nAs for the comments on \"real-world test and results\", we've taken a qualitative testing on some real-world video clips from VOT dataset [Kristan et al. (2016)]. In this evaluation, we feed the video clips to the tracker and observe the network output actions. In general, the results show that the output action is consistent with the position and scale of the target. For example, when the target moves from the image center to the left until disappearing, the tracker outputs actions ``move forward\", ``move forward-left\", and ``turn left\" sequentially. The testing demonstrates the potential of transferring the tracking ability to real-world. \n\nPlease see Appendix.C in our updated submission and watch the demo video here: https://youtu.be/jv-5HVg_Sf4 ", "Thanks for the review. Our feedback goes below.\n\nQ1: \"I think the contributions of this work is incremental compared with [Luo et al (2018)] in which the major difference is the partial zero sum reward structure is used and the observations and actions information from the tracker are incorporated into the target network\"\nA1: Our method is fundamentally different from Luo et al. (2018), please see our reply to R#1 (the Q2-A2) for detailed explanations. In short, the major difference is that we employ Multi-Agent RL to train both the tracker and the target object, while Luo et al. (2018) only train the tracker with Single-Agent RL (where they pre-define/hand-tune the moving path for the target object). 
Our method turns out better in the sense that it produces a stronger tracker via the proposed asymmetrical dueling training. \n\nThe Multi-Agent RL training in our VAT task is unstable and slow to converge. To address these issues, we derived the two techniques: the partial zero sum and the asymmetrical target object model.\n\n\nQ2: \"In addition, the explanation about importance of the tracker awareness to the target network seems not sufficient. The ancient Chinese proverb is not a good explanation. It would be better if some theoretical support can be provided for such design.\"\nA2: The tracker awareness mechanism for the target object is \"cheating\". This way, the target object would appear to be \"stronger\" than the tracker as it knows what the tracker knows. Such a treatment accelerates the training by inducing a reasonable curriculum to the tracker and finally helps training a much stronger and more generalizable tracker. Note we cannot apply this trick to the tracker as it cannot cheat when deploying. See also our reply to R#1 (Q3-A3).\n\nAs for the details of the tracker-aware model, it not only uses the observation and action of the tracker as extra input information but also employs an auxiliary task to predict the tracker's immediate reward. The auxiliary task could help the tracker learn a better representation for the adversarial policy to challenge the tracker.\n\n\nQ3: \"For active object tracking in real-world/3D environment, designing the reward function only based on the distance between the expected position and the tracked object position can not well reflect the tracker capacity. The scale changes of the target should also be considered when designing the reward function of the tracker. However, the proposed method does not consider the issue, and the evaluation using the reward function based on the position distance may not be sufficient.\"\nA3: The scale of a target object showing up in the tracker's image observation will be implied by the distance between tracker and object, which we've considered when designing the reward function. \n\nConsider a simple case of projecting a line in 3D space onto a camera plane. The length (l) of the line on the 2D image plane is derived by an equation as below:\n l = L*f/d, \nwhere L is the original length in 3D space, f is the distance between the 2D plane and the focal center, and d is the distance between the line and the focal center.\nIn the VAT problem,f depends on the intrinsic parameters of the camera model, which is fixed; L depends on the 3D model of the target object, which also could be regarded as constant. Thus, the scale of the object in the 2D image plane is impacted only by d, the distance between the target and the tracker. It is not difficult to derive that, the farther the distance d is, the smaller the target is observed. This suggests that the designed distance-based reward function has well considered the scale of the object.\n\nNote that calculating the scale of the target in an image is of high computational complexity. It requires to extract the object mask and calculate the area of the mask. In contrast, our distance-based reward is computationally cheap, thanks to the simulator's APIs by which we can easily access the tracker's and target's world coordinate in the bird view map.", "Thanks for the review. Our feedback goes below.\n\nQ1: \"Contrived task\"\nA1: Visual object tracking is widely recognized as an important task in Computer Vision. 
In this study, we propose a principled approach of how to train a robust tracker.\n\n\nQ2: \"The work is very incremental over Luo et al. (2018) \"End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning\", as the only two additions are extra observations o_t^{alpha} for the target, and a reward function that has a fudge factor when the target gets too far away\"\nA2: Our method is fundamentally different from Luo et al. (2018), as explained below.\n\nLuo et al. (2018) adopted pre-defined target object moving path, coded in hand-tuned scripts. Thus, only the tracker is trainable, and the settings are single-agent RL. \n\nIn our method, the target object is also implemented by a neural network, learning how to escape the tracker during training. Both the tracker and the target object are trained jointly in an adversary/dueling way, and the settings are multi-agent RL. \n\nWe show the advantage of our method over Luo et al. (2018). Note that the pre-defined target object moving path in Luo et al. (2018) can hurt the generalizability of the tracker. In reality, the target object can move in various hard patterns: Z-turn, U-turn, sudden stop, walk-towards-wall-then-turn, etc., which can pose non-trivial difficulties to the tracker during both training and deployment. Moreover, such moving patterns are difficult to be thoroughly covered and coded by the hand-tuning scripts as in Luo et al. (2018).\n\nThe trainable target object in our method, however, can learn the proper moving path in order to escape from the tracker solely by the adversary/dueling training, without hand-tuned path. The smart target object, in turn, induces a tracker that well follows the target no matter how wild the target object moves. Eventually, we obtain a much stronger tracker than that of Luo et al. (2018), achieving the very purpose of our study: to train a robust tracker for VAT task.\n\n\nQ3: \"Should not the asymmetrical relationship work the other way round, with the tracker knowing more about the target?\"\nA3: We should not do that. \n\nNote that the additional \"asymmetrical\" information is way of \"cheating\". As our goal is to train a tracker, we don't need to consider deploying a target object. Therefore, we can simply let the target object cheat during training by feeding to it the tracker's observation/reward/action. Such a \"peeking\" treatment accelerates the training and ultimately improves the tracker's training quality, as is shown in the submitted paper.\n\nThe tracker, however, is unable to \"cheat\" when deployed (e.g., in a real-world robot). It has to predict the action using its own observations. There is no way for the tracker to acquire the information (observation/reward/action) from a target object. \n\n\nQ4: \"The paper would have benefitted from a proper analysis of the trajectories taken by the adversarial target as opposed to the heuristic ones, ...\"\nA4: We have added to Appendix some texts for the analysis, see Appendix.B in the updated submission. The target object does show intriguing behaviors when escaping the tracker, see the supplementary videos available at https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS\n\n\nQ5: \"...and from comparison with non-RL state-of-the-art on tracking tasks.\"\nA5: Luo et al. (2018) had done the comparisons and shown their method improves over several representative non-RL trackers in the literature.\nOur method outperforms that of Luo et al. 
(2018).\n\n\nQ6: \"Citing Sun Tzu's \"Art of War\" (please use the correct citation format)...\"\nA6: We have fixed this in the updated submission.\n\n\nQ7: \"Further multi-agent tasks could also have been considered, such as capture the flag tasks as in \"Human-level performance in first-person multiplayer games with population-based deep reinforcement learning\"\"\nA7: The method developed in that paper is for playing the First Person Shooting game, where it has to ensure the fairness among the intra- and inter-team players. In our study, the primary goal is to train a tracker (player 1), permitting us to leverage the asymmetrical mechanism for the target object (player 2). This technique effectively improves the adversary/dueling training and eventually produces a strong tracker.", "This work aims to address the visual active tracking problem in which the tracker is automatically adjusted to follow the target. A training mechanism in which tracker and the target serve as mutual opponents is derived to learning the active tracker. Experimental evaluation in both 2D and 3D environments is conducted.\n\nI think the contributions of this work is incremental compared with [Luo et al (2018)] in which the major difference is the partial zero sum reward structure is used and the observations and actions information from the tracker are incorporated into the target network, while the network architecture is quite similar to [Luo et al (2018)].\nIn addition, the explanation about importance of the tracker awareness to the target network seems not sufficient. The ancient Chinese proverb is not a good explanation. It would be better if some theoretical support can be provided for such design.\n\nFor active object tracking in real-world/3D environment, designing the reward function only based on the distance between the expected position and the tracked object position can not well reflect the tracker capacity. The scale changes of the target should also be considered when designing the reward function of the tracker. However, the proposed method does not consider the issue, and the evaluation using the reward function based on the position distance may not be sufficient.\n", "This paper presents a simple multi-agent Deep RL task where a moving tracker tries to follow a moving target. The tracker receives, from its own perspective, partially observed visual information o_t^{alpha} about the target (e.g., an image that may show the target) and the target receives both observations from its own perspective o_t^{beta} and a copy of the information from the tracker's perspective. Both agents are standard convnet + LSTM neural architectures trained using A3C and are evaluated in 2D and 3D environments. The reward function is not completely zero-sum, as the tracked agent's reward vanishes when it gets too far from a reference point in the maze.\n\nThe work is very incremental over Luo et al (2018) \"End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning\", as the only two additions are extra observations o_t^{alpha} for the target, and a reward function that has a fudge factor when the target gets too far away. Citing Sun Tzu's \"Art of War\" (please use the correct citation format) is not convincing enough for adding the tracker's observations as inputs for the target agent. 
Should not the asymmetrical relationship work the other way round, with the tracker knowing more about the target?\n\nExperiments are conducted using two baselines for the target agent, one a random walk and another an agent that navigates to a target according to a shortest-path planning algorithm. The ablation study shows that the tracker-aware observations and a target reward structure that penalizes the target when it gets too far do help the tracker's performance, and that training the target agent helps the tracker agent achieve higher scores. The improvement is, however, quite small and the task is ad hoc.\n \nThe paper would have benefitted from a proper analysis of the trajectories taken by the adversarial target as opposed to the heuristic ones, and from a comparison with the non-RL state-of-the-art on tracking tasks. Further multi-agent tasks could also have been considered, such as capture-the-flag tasks as in \"Human-level performance in first-person multiplayer games with population-based deep reinforcement learning\".", "This is in a visual active tracking application. The paper proposes a novel reward function - \"partial zero sum\" - which only encourages the tracker-target competition when they are close and penalizes when they are too far.\n\nThis is a very interesting problem and I see why their contribution could improve the system performance. \n\nClarity: the paper is well-written. I also like how the author provides both formulas and a lot of details on the implementation of the end-to-end system. \n\nOriginality: Most of the components are pretty standard; however, I value the part that seems pretty novel to me - the \"partial zero-sum\" idea.\n\nEvaluation: the results obtained from the simulated environments in 2D and 3D are convincing. However, if 1) real-world tests and results and 2) a stronger baseline were provided, that would make this a stronger acceptance. " ]
[ -1, -1, -1, -1, 5, 4, 6 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2019_HkgYmhR9KX", "Hyxga8D52m", "Hkg8D92gTm", "rkxScidq27", "iclr_2019_HkgYmhR9KX", "iclr_2019_HkgYmhR9KX", "iclr_2019_HkgYmhR9KX" ]
iclr_2019_HkgqFiAcFm
Marginal Policy Gradients: A Unified Family of Estimators for Bounded Action Spaces with Applications
Many complex domains, such as robotics control and real-time strategy (RTS) games, require an agent to learn a continuous control. In the former, an agent learns a policy over R^d and in the latter, over a discrete set of actions each of which is parametrized by a continuous parameter. Such problems are naturally solved using policy based reinforcement learning (RL) methods, but unfortunately these often suffer from high variance leading to instability and slow convergence. Unnecessary variance is introduced whenever policies over bounded action spaces are modeled using distributions with unbounded support by applying a transformation T to the sampled action before execution in the environment. Recently, the variance reduced clipped action policy gradient (CAPG) was introduced for actions in bounded intervals, but to date no variance reduced methods exist when the action is a direction, something often seen in RTS games. To this end we introduce the angular policy gradient (APG), a stochastic policy gradient method for directional control. With the marginal policy gradients family of estimators we present a unified analysis of the variance reduction properties of APG and CAPG; our results provide a stronger guarantee than existing analyses for CAPG. Experimental results on a popular RTS game and a navigation task show that the APG estimator offers a substantial improvement over the standard policy gradient.
accepted-poster-papers
The paper introduces a new variance-reduced policy gradient method for directional and clipped action spaces, with provable guarantees that the gradient estimator has lower variance. The paper is clearly written and the theory is an important contribution. The experiments provide some preliminary evidence that the algorithm could be beneficial in practice.
train
[ "rkg6-yBhJ4", "Hyl_lXQF3X", "H1eabnIj67", "SJlQqs8spQ", "S1eUaqUjTQ", "SJgG0gkChX", "H1gSPxyC3Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "After reading the rebuttal and other reviews, I would keep my original scores and think this paper presents some simple (of clipping action spaces and marginalizing distributions to lower dimensions and to take gradients) but very useful results in reducing variance of RL methods with continuous action spaces. To me, this idea is worth publishing at ICLR.\n\nHowever, with limited effort/time spent on reviewing the theoretical results in Appendix, I unfortunately cannot justify the correctness of the theoretical results, nor argue whether this machinery is a must for proving this simple method.", "Summary\n\nThis paper derives a new policy gradient method for when continuous actions are transformed by a\nnormalization step, a process called angular policy gradients (APG). A generalization based on\na certain class of transformations is presented. The method is an instance of a \nRao-Blackwellization process and hence reduces variance.\n\n\nDetailed comments\n\nI enjoyed the concept and, while relatively niche, appreciated the work done here and do believe it has clear applications. I am not convinced that the measure theoretic perspective is always\nnecessary to convey the insights, although I appreciate the desire for technical correctness. Still,\nappealing to measure theory does reduces readership, and I encourage the authors to keep this in\nmind as they revise the text.\n\nGenerally speaking it seems like a lot of technicalities for a relatively simple result:\nmarginalizing a distribution onto a lower-dimensional surface.\n\nThe paper positions itself generally as dealing with arbitrary transformations T, but really is \nabout angular transformations (e.g. Definition 3.1). The generalization is relatively \nstraightforward and was not too surprising given the APG theory. The paper would gain in clarity\nif its scope was narrowed. \n\nIt's hard for me to judge of the experimental results of section 5.3, given that there are no other \nbenchmarks or provided reference paper. As a whole, I see APG as providing a minor benefit over PG.\n\nDef 4.4: \"a notion of Fisher information\" -- maybe \"variant\" is better than \"notion\", which implies there are different kinds of Fisher information \nDef 3.1 mu is overloaded: parameter or measure?\n4.4, law of total variation -- define \n\n\nOverall\n\nThis was a fun, albeit incremental paper. The method is unlikely to set new SOTA, but I appreciated\nthe appeal to measure theory to formalize some of the concepts.\n\n\nQuestions\n\nWhat does E_{pi|s} refer to in Eqn 4.1?\nCan you clarify what it means for the map T to be a sufficient statistic for theta? (Theorem 4.6)\nExperiment 5.1: Why would we expect APG with a 2d Gaussian to perform better than a 1d Gaussian\non the angle?\n\n\nSuggestions\n\nParagraph 2 of section 3 seems like the key to the whole paper -- I would make it more prominent.\nI would include a short 'measure theory' appendix or equivalent reference for the lay reader.\n\nI wonder if the paper's main aim is not actually to bring measure theory to the study of policy\ngradients, which would be a laudable goal in and of itself. ICLR may not in this case be the right\nvenue (nor are the current results substantial enough to justify this) but I do encourage authors to\nconsider this avenue, e.g. in a journal paper.\n\n= Revised after rebuttal =\n\nI thank the authors for their response. I think this work deserves to be published, in particular because it presents a reasonably straightforward result that others will benefit from. 
However, I do encourage further work to\n1) Provide stronger empirical results (these are not too convincing).\n2) Beware of overstating: the argument that the framework is broadly applicable is not that useful, given that it's a lot of work to derive closed-form marginalized estimators.\n", "Thank you for the time and effort spent reviewing our paper, and for the detailed suggestions. Below we repeat the questions/comments from the review and respond to each in turn.\n\n“The paper positions itself generally as dealing with arbitrary transformations T, but really is about angular transformations (e.g. Definition 3.1). The generalization is relatively straightforward and was not too surprising given the APG theory. The paper would gain in clarity if its scope was narrowed.”\n\nOur MPG framework not only supports the angular transformation but also covers the recently proposed clipped transformation in CAPG [Fujita and Maeda, 2018]. The theoretical result is tighter than the one in [Fujita and Maeda, 2018], and it supports general transformations instead of only clipped actions.\n\n\"I am not convinced that the measure theoretic perspective is always necessary to convey the insights, although I appreciate the desire for technical correctness.\" / \"Generally speaking it seems like a lot of technicalities for a relatively simple result: marginalizing a distribution onto a lower-dimensional surface.\"\n\nWe agree that the measure theoretic approach is not always necessary (indeed for angular actions, it is not needed), but it is necessary for a very common scenario -- clipped actions. Researchers and practitioners both almost always clip actions when using policy gradient algorithms for robotics control environments (read: MuJoCo tasks). Recently, a reduced variance method was introduced by Fujita and Maeda (2018) for clipped action spaces. Their algorithm is also a member of the marginal policy gradients family and our theoretical results for MPG significantly tighten the existing analysis of that algorithm. \n\n\n\"It's hard for me to judge of the experimental results of section 5.3, given that there are no other benchmarks or provided reference paper. As a whole, I see APG as providing a minor benefit over PG.\"\n\nFor the results in Section 5.3, the issue is that currently, there are no benchmark environments for directional control. We anticipate that in the future this may change (e.g. console and PC games often have directional controls).\n\n“What does E_{pi|s} refer to in Eqn 4.1?”\n\nThe expectation is taken with respect to the policy \\pi conditioned on the current state s (s here is arbitrary, but fixed). Stated differently, we are taking the expectation with respect to the distribution $\\pi(\\cdot | s,\\theta)$.\n\n“Can you clarify what it means for the map T to be a sufficient statistic for theta? (Theorem 4.6)”\n\nWe have now removed this part of the statement because we are no longer absolutely certain of its correctness, and because it is not used anywhere else in the paper.\n\n“Experiment 5.1: Why would we expect APG with a 2d Gaussian to perform better than a 1d Gaussian on the angle?”\n\nBecause using a 1D Gaussian requires either (1) clipping the angle to [0,2\\pi) before execution in the environment and making updates using the clipped output or (2) using the sampled angle for updates and perform the clipping in the environment. 
In the first case, this approach is asymmetric in that does not place similar probability on $\\mu_{\\theta}(s) - \\epsilon$ and $\\mu_{\\theta}(s) + \\epsilon$ for $\\mu_{\\theta}(s)$ near to $0$ and $2\\pi$. In the second case, this requires approximating a periodic function. We include both these reasons at the start of Section 3.\n\n\nLastly, thank you for the concrete suggestions:\n\"Def 4.4: \"a notion of Fisher information\" -- maybe \"variant\" is better than \"notion\", which implies there are different kinds of Fisher information \nDef 3.1 mu is overloaded: parameter or measure?\n4.4, law of total variation -- define \"\n\nWe have addressed these and uploaded a new draft to reflect the changes. For the last suggestion, we currently define the law of total variance(variation) in the preliminaries so we did not repeat the definition in Section 4.4. We now write \"law of total variance\" instead of \"law of total variation\" to avoid any ambiguity.", "Thank you for the time and effort spent reviewing our paper. We mostly agree with your characterization of our work, but we think there are two important points we perhaps did not sufficiently emphasize in our paper and that we would like to mention:\n\n(1) There are other existing tasks and algorithms that fall into the marginal policy gradients framework. For example, researchers and practitioners both almost always clip actions when using policy gradient algorithms for robotics control environments (read: MuJoCo tasks). Recently, a reduced variance method was introduced by Fujita and Maeda (2018) for clipped action spaces. Their algorithm is also a member of the marginal policy gradients family and our theoretical results for MPG significantly tighten the existing analysis of their algorithm.\n\n(2) To the best of our knowledge, our work is the first to apply such variance reduction techniques to RL.\n\nTo summarize, our work consists of two components: (a) a new algorithm for directional control and (b) a variance reduction framework that can be applied to directional action space and clipped action spaces. While directional action spaces are not very common at this time, clipped action spaces are extremely common. We also anticipate that in the future, many additional environments will be available that feature directional actions (many console or PC games, for example). For these reasons, we feel that our work is not incremental at all, and is actually quite novel.\n", "Thank you for the time and effort spent reviewing our paper. We are glad you liked the paper. We want to emphasize one point that we perhaps did not highlight enough in our paper: there are other existing algorithms that fall into the marginal policy gradients framework. Specifically, researchers and practitioners both almost always clip actions for use in robotics control environments (read: MuJoCo tasks). Recently, a reduced variance method was introduced by Fujita and Maeda (2018) for clipped action spaces. Their algorithm is also a member of the marginal policy gradients family and our theoretical results for MPG significantly tighten existing analyses of variance reduction that can be achieved for clipped actions.\n\nTo respond to your question, yes it is possible (e.g. the example given above), but their is no general procedure that we know of to derive such methods. 
Rather, this would be done on an action space by action space basis.", "In this paper the authors proposed a new policy gradient method, known as the angular policy gradient (APG), that aims to provide provably lower variance in the gradient estimate. Here they presented a stochastic policy gradient method for directional control. Under the set of parameterized Gaussian policies, they presented a unified analysis of the variance of APG and showed how it theoretically outperforms (in terms of having lower variance) other state-of-the-art methods. They further evaluated the APG algorithms on a grid-world navigation domain as well as the King of Glory task, and showed that the APG estimator significantly out-performs the standard policy gradient.\n\nIn general I think this paper addressed an important issue in policy gradient in terms of deriving a lower variance gradient estimate. In particular the authors showed that under the parameterized marginal distribution, such as the angular Gaussian distribution, the corresponding APG estimate has lower variance than that of CAPG. Furthermore, I also appreciate that they evaluated these results in realistic experiments such as the RTS game domains. \n\nMy only question is on the possibility of deriving realistic APG algorithms beyond the class of angular Gaussian policy. In terms of the layout of the paper, I would also recommend including the exact algorithm pseudo-code used in the main paper.\n", "This paper introduces policy gradient methods for RL where the policy must choose a direction (a.k.a., the navigation problem).\n\nMapping techniques from \"non-directional\" problems (where the action space is not a direction) and then projecting on the sphere is sub-optimal (the variance is too big). The authors propose to sample directly on the sphere, using the fact that the likelihood of an angular Gaussian r.v. has *almost* a closed form and its gradient can almost be computed, up to some normalization term (the integral which is constant in the standard Gaussian case).\n\n\nThis can be seen as a variance reduction technique.\n\nThe proofs are not too intricate, for someone used to variance reduction (yet computations must be made quite carefully).\n\n\nThe result is coherent, interesting from a theoretical point of view, and the experiments are somewhat convincing. The main drawback would be the rather incremental nature of that paper (basically sample before projecting is a bit better than projecting after sampling) and that this directional setting is quite limited...\n" ]
[ -1, 7, -1, -1, -1, 7, 6 ]
[ -1, 4, -1, -1, -1, 3, 3 ]
[ "S1eUaqUjTQ", "iclr_2019_HkgqFiAcFm", "Hyl_lXQF3X", "H1gSPxyC3Q", "SJgG0gkChX", "iclr_2019_HkgqFiAcFm", "iclr_2019_HkgqFiAcFm" ]
iclr_2019_Hkl5aoR5tm
On Self Modulation for Generative Adversarial Networks
Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 5%-35% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 (86%) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN.
accepted-poster-papers
This manuscript proposes an architectural improvement for generative adversarial networks that allows the intermediate layers of a generator to be modulated by the input noise vector using conditional batch normalization. The reviewers find the paper simple and well-supported by extensive experimental results. There were some concerns about the impact of such an empirical study. However, the strength and simplicity of the technique mean that the method could be of practical interest to the ICLR community.
train
[ "rkef957sRm", "Byl04qxA2X", "r1gBxvdwpm", "Bkgu7f_Dp7", "rylaP-uP6m", "HJgjezT1Tm", "rylkjAtu2m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "It appears that Reviewer 2 and I disagree with Reviewer 3 in terms of submission rating. I feel strongly about the submission being publication-worthy, and I would like to challenge Reviewer 2’s score.\n\nThere is ample room in a research conference for empirical contributions, provided the experimentation is carried out rigorously. To me, the bar for acceptance for this type of paper is 1) whether or not the results can be expected to generalize outside of the reported experimental setting, 2) whether the proposed approach has the potential to have an impact in the research community, and 3) whether the approach and results are communicated clearly to the target audience. In this instance, criteria 1) and 3) are easily met in my opinion: the breadth of model architectures, regularization techniques, and datasets used for evaluation makes me confident that the observed performance improvements are not a happy accident, and the paper writing was straightforward and easy to follow. For criterion 2), I am of the opinion that although the proposed self-modulation mechanism isn’t likely to drastically change the way we train and think of GANs, it is nevertheless a good addition to the set of architectural features that could facilitate GAN training.\n\nI feel that asking for a fundamental explanation of how self-modulation helps improve performance is an unreasonable bar to set for acceptance. Plenty of architectural features like dropout or batch normalization were poorly understood at the time they were first presented, yet in retrospect had a significant impact in the research community. Likewise, asking for the proposed approach to show an improvement for more than “only” 86% of the evaluation settings is unreasonably strict: I don’t find it surprising that there are instances in which self-modulation does not improve performance, and given these odds I would certainly try the approach on a new dataset and architecture combination.", "This paper proposes a Self-Modulation framework for the generator network in GANs, where middle layers are directly modulated as a function of the generator input z.\nSpecifically, the method is derived via batch normalization (BN), i.e. the learnable scale and shift parameters in BN are assumed to depend on z, through a small one-hidden layer MLP. This idea is something new, although quite straight-forward.\nExtensive experiments with varying losses, architectures, hyperparameter settings are conducted to show self-modulation improves baseline GAN performance.\n\nThe paper is mainly empirical, although the authors compute two diagnostic statistics to show the effect of the self-modulation method. It is still not clear why self-modulation stabilizes the generator towards small conditioning values.\n\nThe paper presents two loss functions at the beginning of section 3.1 - the non-saturating loss and the hinge loss. It should be pointed out that the D in the hinge loss represents a neural network output without range restriction, while the D in the non-saturating loss represents sigmoid output, limiting to take in [0,1]. It seems that the authors are not aware of this difference.\n\nIn addition to report the median scores, standard deviations should be reported.\n\n=========== comments after reading response ===========\n\nI do not see in the updated paper that this typo (in differentiating D in hinge loss and non-saturating loss) is corrected. 
\n\nThough fundamental understanding can happen asynchronously, I reserve my concern that such empirical method is not substantial enough to motivate acceptance in ICLR, especially considering that in (only) 124/144 (86%) of the studied settings, the results are improved. And there is no analysis of the failure settings.", "We would like to thank the reviewer for the time and useful feedback. Our response is given below.\n\n- The paper is mainly empirical, although the authors compute two diagnostic statistics to show the effect of the self-modulation method. It is still not clear why self-modulation stabilizes the generator towards small conditioning values.\n\nWe consider self-modulation as an architectural change in the line of changes such as residual connections or gating: simple, yet widely applicable and robust. As a first step, we provide a careful empirical evaluation of its benefits. While we have provided some diagnostics statistics, understanding deeply why this method helps will fuel interesting future research. Similar to residual connections, gating, dropout, and many other recent advances, more fundamental understanding will happen asynchronously and should not gate its adoption and usefulness for the community.\n\n- It should be pointed out that the D in the hinge loss represents a neural network output without range restriction, while the D in the non-saturating loss represents sigmoid output, limiting to take in [0,1]. It seems that the authors are not aware of this difference.\n\nWe are aware of this key difference and we apply the sigmoid function to scale the output of the discriminator to the [0,1] range for the non-saturating loss. Thanks for carefully reading our manuscript and noticing this typo which we will correct. \n\n- In addition to report the median scores, standard deviations should be reported.\n\nWe omitted standard errors simply to reduce clutter. The standard error of the median is within 3% in the majority of the settings and is presented in both Tables 5 and Table 6.\n", "We would like to thank the reviewer for the time and useful feedback. Our response is given below.\n\n- Interpretation of self-modulation model performs worse in the combination of spectral normalization and the SNDC architecture.\n\nOverall, self-modulation appears to yield the most consistent improvement for the deeper ResNet architecture, than the shallower, more poorly performing, SNDC architecture. Self-modulation doesn’t help in the SNDC/Spectral Norm setting on the Bedroom data, where the SNDC architecture appears to perform very poorly compared to ResNet. For the other three datasets, self-modulation helps in this setting though.\n\n- The ablation study shows that the impact is highest when modulation is applied to the last layer (if only one layer is modulated). It seems modulation on layer 4 comes in as a close second. I am curious about why that might be.\n\nFigure 4 in the Appendix contains the equivalent of Figure 2(c) for all datasets. Considering all datasets: (1) Adding self-modulation to all layers performs best. (2) In terms of median performance, adding it to the layer farthest from the input is the most effective. We believe that the apparent significance of layer 4 in Figure 2(c) is statistical noise.\n\n- I would like to see some more interpretation on why this method works.\n\nWe consider self-modulation as an architectural change in the line of changes such as residual connections or gating: simple, yet widely applicable and robust. 
As a first step, we provide a careful empirical evaluation of its benefits. While we have provided some diagnostics statistics, understanding deeply why this method helps will fuel interesting future research. Similar to residual connections, gating, dropout, and many other recent advances, more fundamental understanding will happen asynchronously and should not gate its adoption and usefulness for the community.\n\n- Did the authors inspect generated samples of the baseline and the proposed method? Is there a notable qualitative difference?\n\nA 10% change in FID is visually noticeable. However, we note that FID rewards both improvements in sample quality (precision) and mode coverage (recall), as discussed in Sec 5 of [1]. While we can easily assess the former by visual inspection, the latter is extremely challenging. Therefore, an improvement in FID may not always be easily visible, but may indicate a better generative model of the data.\n\n[1] https://arxiv.org/abs/1806.00035\n\n- Overall, the idea is simple, the explanation is clear and experimentation is extensive. I would like to see more commentary on why this method might have long-term impact (or not).\n\nWe view this contribution as a simple yet generic architecture modification which leads to performance improvements. Similarly to residual connections, we would like to see it used in GAN generator architectures, and more generally in decoder architectures in the long term.\n", "We would like to thank the reviewer for the time and useful feedback. Our response is given below.\n\n- Relationship to z-conditioning strategy in BigGAN.\n\nThanks for pointing out the connection to this concurrent submission. We will discuss the connections in the related work section. The main differences are as follows:\n1. BigGAN performs conditional generation, whilst we primarily focus on unconditional generation. BigGAN splits the latent vector z and concatenates it with the label embedding, whereas we transform z using a small MLP per layer, which is arguably more powerful. In the conditional case, we apply both additive and multiplicative interaction between the label and z, instead of concatenation as in BigGAN. \n2. Overall BigGAN focusses on scalability to demonstrate that one can train an impressive model for conditional generation. Instead, we focus on a single idea, and show that it can be applied very broadly. We provide a thorough empirical evaluation across critical design decisions in GANs and demonstrate that it is a robust and practically useful contribution.\n\n- Propagation of signal and ResNets.\n\nIndeed, ResNets provide a skip connection which helps signal propagation. Arguably, self-modulation has a similar effect. However, there are critical differences in these mechanisms which may explain the benefits of self-modulation in a resnet architecture:\n1. Self-modulation applies a channel-wise additive and multiplicative operation to each layer. In contrast, residual connections perform only an element-wise addition in the same spatial locality. As a result, channel-wise modulation allows trainable re-weighting of all feature maps, which is not the case for classic residual connections. \n2. The ResNet skip-connection is either an identity function or a learnable 1x1 convolution, both of which are linear. 
In self-modulation, the connection from z to each layer is a learnable non-linear function (MLP).\n\n- Reading Figure 2b, one could be tempted to draw a correlation between the complexity of the dataset and the gains achieved by self-modulation over the baseline (e.g., Bedroom shows less difference between the two approaches than ImageNet). Do the authors agree with that?\n\nYes, we notice more improvements on the harder, more diverse datasets. These datasets also have more headroom for improvement.\n", "Summary:\nThe manuscript proposes a modification of generators in GANs which improves performance under two popular metrics for multiple architectures, loss, benchmarks, regularizers, and hyperparameter settings. Using the conditional batch normalization mechanism, the input noise vector is allowed to modulate layers of the generator. As this modulation only depends on the noise vector, this technique does not require additional annotations. In addition to the extensive experimentation on different settings showing performance improvements, the authors also present an ablation study, that shows the impact of the method when applied to different layers.\n\nStrengths:\n- The idea is simple. The experimentation is extensive and results are convincing in that they show a clear improvement in performance using the method in a large majority of settings.\n- I also like the ablation study showing the impact of the method applied at different layers.\n\nRequests for clarification/additional information:\n- I might have missed that, but are the authors offering an interpretation of their observation that the performance of the self-modulation model performs worse in the combination of spectral normalization and the SNDC architecture?\n- The ablation study shows that the impact is highest when modulation is applied to the last layer (if only one layer is modulated). It seems modulation on layer 4 comes in as a close second. I am curious about why that might be.\n- I would like to see some more interpretation on why this method works.\n- Did the authors inspect generated samples of the baseline and the proposed method? Is there a notable qualitative difference?\n\nOverall, the idea is simple, the explanation is clear and experimentation is extensive. I would like to see more commentary on why this method might have long-term impact (or not).", "The paper examines an architectural feature in GAN generators -- self-modulation -- and presents empirical evidence supporting the claim that it helps improve modeling performance. The self-modulation mechanism itself is implemented via FiLM layers applied to all convolutional blocks in the generator and whose scaling and shifting parameters are predicted as a function of the noise vector z. Performance is measured in terms of Fréchet Inception Distance (FID) for models trained with and without self-modulation on a fairly comprehensive range of model architectures (DCGAN-based, ResNet-based), discriminator regularization techniques (gradient penalty, spectral normalization), and datasets (CIFAR10, CelebA-HQ, LSUN-Bedroom, ImageNet). The takeaway is that self-modulation is an architectural feature that helps improve modeling performance by a significant margin in most settings. 
An ablation study is also performed on the location where self-modulation is applied, showing that it is beneficial across all locations but has more impact towards the later layers of the generator.\n\nI am overall positive about the paper: the proposed idea is simple, but is well-explained and backed by rigorous evaluation. Here are the questions I would like the authors to discuss further:\n\n- The proposed approach is a fairly specific form of self-modulation. In general, I think of self-modulation as a way for the network to interact with itself, which can be a local interaction, like for squeeze-and-excitation blocks. In the case of this paper, the self-interaction allows the noise vector z to interact with various intermediate features across the generation process, which for me appears to be different than allowing intermediate features to interact with themselves. This form of noise injection at various levels of the generator is also close in spirit to what BigGAN employs, except that in the case of BigGAN different parts of the noise vector are used to influence different parts of the generator. Can you clarify how you view the relationship between the approaches mentioned above?\n- It’s interesting to me that the ResNet architecture performs better with self-modulation in all settings, considering that one possible explanation for why self-modulation is helpful is that it allows the “information” contained in the noise vector to better propagate to and influence different parts of the generator. ResNets also have this ability to “propagate” the noise signal more easily, but it appears that having a self-modulation mechanism on top of that is still beneficial. I’m curious to hear the authors’ thoughts in this.\n- Reading Figure 2b, one could be tempted to draw a correlation between the complexity of the dataset and the gains achieved by self-modulation over the baseline (e.g., Bedroom shows less difference between the two approaches than ImageNet). Do the authors agree with that?\n" ]
[ -1, 5, -1, -1, -1, 7, 7 ]
[ -1, 5, -1, -1, -1, 4, 4 ]
[ "Byl04qxA2X", "iclr_2019_Hkl5aoR5tm", "Byl04qxA2X", "HJgjezT1Tm", "rylkjAtu2m", "iclr_2019_Hkl5aoR5tm", "iclr_2019_Hkl5aoR5tm" ]
iclr_2019_HklKui0ct7
Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy
When learning from a batch of logged bandit feedback, the discrepancy between the policy to be learned and the off-policy training data imposes statistical and computational challenges. Unlike classical supervised learning and online learning settings, in batch contextual bandit learning, one only has access to a collection of logged feedback from the actions taken by a historical policy, and expect to learn a policy that takes good actions in possibly unseen contexts. Such a batch learning setting is ubiquitous in online and interactive systems, such as ad platforms and recommendation systems. Existing approaches based on inverse propensity weights, such as Inverse Propensity Scoring (IPS) and Policy Optimizer for Exponential Models (POEM), enjoy unbiasedness but often suffer from large mean squared error. In this work, we introduce a new approach named Maximum Likelihood Inverse Propensity Scoring (MLIPS) for batch learning from logged bandit feedback. Instead of using the given historical policy as the proposal in inverse propensity weights, we estimate a maximum likelihood surrogate policy based on the logged action-context pairs, and then use this surrogate policy as the proposal. We prove that MLIPS is asymptotically unbiased, and moreover, has a smaller nonasymptotic mean squared error than IPS. Such an error reduction phenomenon is somewhat surprising as the estimated surrogate policy is less accurate than the given historical policy. Results on multi-label classification problems and a large-scale ad placement dataset demonstrate the empirical effectiveness of MLIPS. Furthermore, the proposed surrogate policy technique is complementary to existing error reduction techniques, and when combined, is able to consistently boost the performance of several widely used approaches.
accepted-poster-papers
This is an interesting paper that shows how off-policy estimation (and optimization) can be improved by explicitly estimating the data logging policy. It is remarkable that the estimation variance can be reduced relative to using the original logging policy for IPW, although this result depends on the (somewhat impractical) assumption that the parametric form for the true logging policy is known. The reviewers unanimously recommended the paper be accepted. However, there remain criticisms of the theoretical analysis that the authors should take into account in preparing a final version (namely, motivating the assumptions needed to obtain the results, and providing stronger intuitions behind the reduced variance).
train
[ "ByeAL2huRX", "SJeq-2ZGRQ", "H1giFhZMC7", "BJx1SnZM0m", "H1eGHjd5hX", "HJxF1VhYhQ", "ByxKCFcOhQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have made a revision of our paper. We included 5-6 pages of extra details and proofs. The major changes are summarized as follows:\n\n(1). We highlighted a key fact (orthogonality between $\\Pi$ and $\\tilde{V} - V - \\Pi$) for a better understanding of the main theorem on page 6, before the interpretation of the main theorem.\n\n(2). We added some discussion on the possible model misspecification when fit the logging policy by function approximators.\n\n(3). We added more technical details to Appendix D (Application to Multinomial Logistic Regression). In specific, based on Assumption D.1, we added Lemma D.2, along with a proof, for non-singularity of the Fisher information matrix, a proof of Lemma D.3 (which is Lemma D.1 in the previous version) and proof sketches of Lemma D.4-D.5 (which are Lemmas D.2 and D.3 in the previous version).\n\n(4). We also included additional experiment results in Appendix E to address the statistical significance of our ML-based approach. Table 2 in Appendix E gives the standard deviation of performances of policy optimizations with different estimators (IPS, MLIPS, POEM and MLPOEM). The standard deviation of performances of ML-based techniques is much smaller than the improvements in all four bandit datasets, which means that our approach makes statistically significant improvements over the original methods.\n\nFinally, we include the link (https://www.crowdai.org/challenges/nips-17-workshop-criteo-ad-placement-challenge) for the NIPS’17 Criteo Ad Placement Challenge in Section 4.3.1. We placed the 3rd in that challenge with a 54.314 IPS. It is worth noting that our IPS_std is also very small, which indicates a good statistical strength of our approach.", "Thank you for your valuable comments. In the following we address the issues raised in the comments. Please find the corresponding revisions in our updated paper. \n\nEq 2.4 and Eq 3.3: Thanks for pointing them out. We have revised accordingly.\n\nEq 3.5: Thanks for pointing it out. We have made it clear what we mean by writing “d/d beta (S(x, a; beta*))” in the updated paper.\n\nAfter Eq 3.7: In the appendix of our updated paper, we have specified how these assumptions are satisfied by a logging policy that follows the multinomial logistic model. \n\nModel Misspecification: In practice, for logged bandit data, when the logging distribution is unavailable, we need to approximate it in order to at least calculate the propensity scores, i.e., $\\pi(a | x)/\\mu(a | x)$. There is a risk of model misspecification for those universal function approximators such as neural networks. However, the approximation error by neural network is usually very small and ,moreover, decreases with the number of layers and neurons, as shown by a number of recent works[1-3]. Such a diminishing approximation error should enter the Taylor expansions in Lemma A.1-A.2. We have included discussion on this problem in the Section 3.2 in the update paper.\n\nClipping Constant: We do not introduce extra clipping constant in our ML-based approach. However, our approach is orthogonal and compatible with those importance weighted estimators with clipping constants. For example, as we illustrate in our experiments, the performance of the POEM algorithm can be boosted by ML-based approach. The POEM algorithm is based on the propensity weight capping approach, which has a clipping constant M.\n\nLemma D.1: We have added a proof for the lemma in our update version, please see the proof Lemma D.3 in the updated paper. 
\n\nEq(D.3): We have added a lemma with proof of the non-singularity as well as upper and lower bounds for eigenvalues of Fisher information after the equation under some regularity conditions. Please see Lemma D.2 in the updated paper.\n\nAlternative Techniques: In fact, the POEM algorithm is a technique using a clipping constant to ensure no small propensities. The variance reduction made by POEM can be further boosted by our ML-based approach as shown in the experiments in Section 4.3.\n\n[1] Schmidt-Hieber, J., 2017. Nonparametric regression using deep neural networks with ReLU activation function. arXiv preprint arXiv:1708.06633.\n[2] Yarotsky, D., 2017. Error bounds for approximations with deep ReLU networks. Neural Networks, 94, pp.103-114.\n[3] Telgarsky, M., 2016. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485.", "Thanks for your valuable comments. In the following we address the issues raised in the comments. Please find the corresponding revisions in our updated paper.\n\nInsight of Theoretical Analysis: In the revised version, we have highlighted the insight behind the reduction of MSE after stating the main theorem.\nHere is a brief explanation of the intuition behind the analysis: The reason behind the reduction of MSE is due to the Eq(A.16) we derived in the proof of Theorem 3.9. Interpretation for the equation is that $(\\tilde{V} - V)^2$ and $\\Pi^2(r, x, a; \\beta^*)$ are orthogonal in expectation. Therefore, we have $(1/n)Var(\\Pi(r, x, a; \\beta^*))$ as the MSE reduction term. For the rest parts of the proof, as we perform a non-asymptotic analysis in this paper, many efforts are taken to present the concentration condition and bound the residual term $\\xi(n)$ to ensure that it doesn’t dominate the MSE reduction term.\n\nThe Assumptions: In the paper, we try to make our assumption as mild as possible to fit in more general cases. As a consequence, they become more abstract to be able to fit in more general cases. To make the presentation clearer, we will add more explanations and examples for the assumptions in the revision.\n1) We allow flexibility in these the Assumptions 3.3-3.4, i.e., those O(\\zeta)’s are not specified, which means they can depend on what kind of logging distribution $\\mu$ we are handling with. It is worth noting that as these two assumptions are made on population quantities, which do not scale with n, they do not affect our main theorem provided that the sample size n is reasonably large.\n2) Our intuition for the Condition 3.8 is that when the log data are i.i.d. samples, the Condition 3.8 can be verified by using Bernstein-type concentration inequalities. Those inequalities have exponentially decaying tails, which are stronger than the super polynomial decaying requirement made in Condition 3.8.\n\nTable 1: We have added some extra results on the four datasets in Table 1. These extra results are the standard deviation of the performances of policy optimization (not evaluation, which we already discussed in Section 4.2) by IPS & MLIPS and POEM & MLPOEM. From the table, we can see that our ML-based approach has much smaller standard deviations than the boosts in performances. So our approach boosts the performance of IPS and POEM significantly in policy optimization with high probability. 
For details, please check the Appendix E in the updated paper.\n\nAsymptotic Notions: For a population quantity $f$, the asymptotic notion $f = O(\\zeta)$ in Assumptions 3.3-3.4 mean that $f(d, model) \\leq C\\zeta(d, model)$, where $C$ is some positive constant that depend only on the dimension d and necessary regularity conditions for the model of logging distribution.\n\nCriteo Experiment: The training dataset was given while the test dataset was held out by the challenge organizer. During that challenge, we apply policy optimization over the training set, then give our optimized policy back to them. Then they evaluate the performance of our policy on the hold out testing dataset. The values we include in our paper are the rewards of the policies (the higher the better). The improvements made us rank among prize wining teams in that challenge, which convince us that the ML-based approach did achieve significant improvements in this case.", "Thank you for your valuable comments. In the following we address the issues raised in the comments. Please find the corresponding revisions in our updated paper. \n\nSection 3 (Knowledge of Logging Distribution): We have added some discussion over this issue in the updated paper.\nThroughout the theoretical analysis, we assume that the parametrization of the logging policy is known. Our theoretical results show that we can benefit from estimating the parameter within the parametrization family of distributions using MLE. Even when the true parameter $\\beta^*$ is known beforehand, this approach reduces MSE of the policy estimation.\nAs the reviewers have pointed out, the parametrization of the logging distribution may be misspecified in practice when we approximate using some model. However, when universal function approximators such as neural networks are used, the approximation error (bias) often diminishes with an increasing number of layers and neurons (see, for example, [1-3]). Such approximation error enters the Taylor expansions in Lemma A.1-A.2. \n\nSection 3.1: \n1)(Deterministic Reward): In fact, approaches based on propensity scores are unable to handle cases where the reward has endogenous randomness, i.e., the randomness depends on x and a. It complicates our notation a bit by adding an extra expectation nation on the reward if we allow exogenous randomness to the reward, i.e., the randomness is independent of x and a, but this does not change the arguments we make.\nThus, here we make the reward deterministic for the simplicity of analysis and for better understanding of the source of the improvement made by our ML-based approach, which is due to the use of MLE of logging distribution parameter in IPS.\n\n2)(Toy Example): We use the multinomial logistic regression as an example. The detailed proof is deferred to Appendix D. At the end of Appendix D, we quantify with a MSE reduction of order O(1/n) for multinomial logistic regression, while the term $\\xi(n)$ is of order $O((d/n)^{3/2})$, which is small compared to the reduction of MSE. Furthermore, as the MSE term itself is of order O(1/n) (the same order in n as the reduction term) when the samples are i.i.d., so we also illustrate that the MSE reduction does not vanish asymptotically for regression.\n\nTheorem 3.9: \n1) (Asymptotic Unbiasedness): The asymptotic unbiasedness is proved in Proof of Theorem 3.9 in the Appendix A. Please see Eq(A.3)-(A.7).\n2) (Structure of Proof): The proof is is decomposed to the following three parts. 
First, we introduce Lemma A.1 and A.2 to get Eq(A.3). Then, we analyze three terms in Eq(A.8) separately. Among the three terms, (i) is the most crucial one while the rest two terms are small terms compared to (i). Finally, we conclude our proof by applying the Condition 3.8 on tail behavior to Eq(A.26).\n\nMinor issues/typos: Thanks for pointing them out, we have revised accordingly in the updated paper.\n\n[1] Schmidt-Hieber, J., 2017. Nonparametric regression using deep neural networks with ReLU activation function. arXiv preprint arXiv:1708.06633.\n[2] Yarotsky, D., 2017. Error bounds for approximations with deep ReLU networks. Neural Networks, 94, pp.103-114.\n[3] Telgarsky, M., 2016. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485.", "The paper proposes to fit a model of the logging policy that generates bandit feedback data, and use this model's propensities when performing off-policy optimization. When the model is well-specified (i.e. the logging policy indeed lies within the parametric class of models we are fitting), and we use maximum likelihood estimation to fit the model, this approach can yield a lower error when evaluating a policy's performance using off-policy data. The paper then shows how this improved off-policy estimation can also yield better off-policy optimization, and demonstrate this in semi-synthetic experiments.\n\nSpecific Comments:\nEq2.4: Lambda is overloaded (context distribution vs. regularization hyper-parameter).\nEq3.3: E[.] is used before defining it (i.e., E[.] should be interpreted as E_(x,a)~mu(.|beta*) [.])\nEq3.5: I^-1(beta*) makes sense, but the second term E[ d/d beta (S(x,a; beta*)) ] uses a notation that needs to be introduced (you mean || (E[ d/d beta (S(x,a; beta)) ] |_at beta=beta* )^-1 ||).\n\nAfter Eq3.7: It will be instructive to specify some examples of logging policies mu which satisfy these assumptions (and how big the O(.) constants are for those examples).\nSection 3.2: In practical considerations, expected a discussion of how robust things are when the logging policy class is mis-specified (i.e. assuming there is a beta* such that mu(.|beta*) created the data is unlikely to be true).\nFor ML- approaches, was a clipping constant M still used? If so, was it crucial and why?\nLemma D.1: The lemmas in appendix should be accompanied by a proof. E.g. what is C_beta? I don't immediately see why D.3 suggests that the inverse of the Fisher matrix has bounded norm (for instance, if x=0 the inverse is undefined).\n\nGeneral Comments:\nClarity: Good. The paper is easy to follow. Some examples from the Appendix can be moved to the main text (especially to provide a firm grounding for the constants appearing in Section3.1)\nCorrectness: I did not step through Appendix A-C. In Appendix D, there was a questionable claim. The stated theorems in the main text are believable [not surprising that asymptotic bias vanishes when the logging policy model is well-specified].\nOriginality: This builds on several previous works on off-policy optimization in bandit settings, and proposes a simple addition to improve performance.\nSignificance: The paper seems to have missed an opportunity; it can be substantially stronger with a more careful study of when fitting the logging policy will help vs. hurt, and what kinds of regularization or alternatives to maximum likelihood estimation can yield similar improvements (e.g. regularizing propensities close to uniform, ensure no small propensities). 
", "Summary:\nThe paper considers the problem of learning from logged bandit feedback, and focuses on the problem of the ratio of the target policy and the logged policy (the basis of algorithms such as inverse propensity scoring). The paper proposes a surrogate policy to replace the logged policy with known parametrization, with a policy obtained by maximum likelihood estimation on the observed data. The authors present theoretical arguments that the variance of the value function estimate is reduced. Empirical experiments show that the surrogate policy can be used to improve IPS and POEM, and also works when the logging policy is unknown.\n\nThe paper analyses an important and interesting problem which is critical to many practical applications today. The proposed solution is modular, and the empirical experiments point to its usefulness. The theoretical analysis, while not fully explaining the proposed approach, provides comfort that there is reduced variance when using the maximum likelihood surrogate.\n\nOverall comments:\n- page 3, Section 3: It is unclear why the assumption that we know the logging policy, as well as its optimal parameter is a sensible one. In particular, the first paragraph seems to indicate that the surrogate policy some somehow the same parameterization and $\\hat{\\beta}$ is in the same space as $\\beta^*$, and just a different parameter. On one hand the authors seem to indicate that they know everything about the logging. On the other hand they seem to want to claim that not knowing the logging policy is ok. What happens when there is a model mismatch between the logging policy and the surrogate policy? Please expand on these two assumptions.\n- page 4, Section 3.1: It might be useful to have a toy example which exactly matches the requirements of Theorem 3.9, such that you can present empirical intuition about the terms in (3.13). In particular: what is the effect of assuming a deterministic reward? How does (3.14) grow? Why is the reduction of MSE greater than $\\xi(n)$?\n- Theorem 3.9: Please present the result that MLIPS is asympotically unbiased explicitly. Furthermore, the current proof of this main theorem should be structured better, so that it can be properly checked.\n\nMinor issues/typos:\n- page 3, above (3.1): In specific, we --> In particular, we\n- Figure 1: the legend is very confusing, making it totally unclear what the text is talking about. Please match text, caption and legend.\n- Section 4.3: please say that the data is the multilabel datasets of Swaminathan and Joachims in Table 1.\n", "This work is concerned with the problem of batch contextual bandits, in which a target contextual bandit policy is optimized on the data generated by a different logging policy. The main problem is to come up with a low-variance low-bias estimator for the value of the target policy. Many of the known techniques are based on an unbiased estimator known as inverse propensity scoring (IPS), which uses the distribution over actions of the logging policy, conditioned on the observed contexts. However, IPS suffers from large variance. The paper's idea is to do a maximum likelihood fit of a simple surrogate policy to the logged data, and then use the conditional distribution over actions of the surrogate policy to compute inverse propensity scores.\nThe theoretical results show that the bias of this estimator vanishes asymptotically, whereas the variance is smaller than IPS. 
Experiments using known/unknown logging policies on artificial/real-world bandit data show that the IPS scores computed with the proposed technique are empirically better than those computed directly using the logging policy. Moreover, the advantage increases when the distribution extracted from the surrogate policy is used to compute more sophisticated estimators than IPS.\n\nThe off-policy evaluation in contextual bandits is an important problem, and this paper appears to make some progress. However, the theoretical analysis is a bit disappointing, as it does not shed much light on the reasons why using a surrogate policy should help. Some additional discussion would add value to the paper.\n\nThe result about the decrease in variance depends on assumptions that are not clearly justified, and is expressed in terms of abstract quantities that hard to connect to concrete scenarios. In the end, one does not get many new insights from the theory.\n\nIn Assumptions 3.3-3-4, what is the variable w.r.t the asymptotic notations are understood? By that I mean, the variable n such that f(n) = O(g(n)).\n\nThe experiments are competent and quite elaborated. However, the statistical significance of the improvements in Table 1 is unclear.\n\nThe evaluation criterion for the Criteo experiment is unclear. As a consequence it is hard to appreciate the significance of the improvements in this case." ]
[ -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2019_HklKui0ct7", "H1eGHjd5hX", "ByxKCFcOhQ", "HJxF1VhYhQ", "iclr_2019_HklKui0ct7", "iclr_2019_HklKui0ct7", "iclr_2019_HklKui0ct7" ]
iclr_2019_HklSf3CqKm
Subgradient Descent Learns Orthogonal Dictionaries
This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem. We show that a subgradient descent algorithm, with random initialization, can recover orthogonal dictionaries on a natural nonsmooth, nonconvex L1 minimization formulation of the problem, under mild statistical assumption on the data. This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes. Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among other applications. Preliminary synthetic and real experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.
accepted-poster-papers
This paper studies nonsmooth and nonconvex optimization and provides a global analysis for orthogonal dictionary learning. The referees indicate that the analysis is highly nontrivial compared with existing work. The experiments fall a bit short and the relation to the loss landscape of neural networks could be described more clearly. The reviewers pointed out that the experiments section was too short. The revision included a few more experiments. The paper has a theoretical focus, and scores high ratings there. The confidence levels of the reviewers are relatively moderate, with only one confident reviewer. However, all five reviewers regard this paper positively, in particular the confident reviewer.
train
[ "r1g5pn1527", "SJgbLGX96Q", "rkg-4f75pQ", "rkgvlzm96Q", "H1gObZmc6m", "ByeePKFwaX", "r1gomAzrpQ", "Bkxa-CzHpX", "HJeARTGrpX", "BklN3TzB6X", "SkxX-tyrT7", "Byl9W4UmTQ", "H1lW-zgChm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a subgradient descent method to learn orthogonal, squared /complete n x n dictionaries under l1 norm regularization. The problem is interesting and relevant, and the paper, or at least the first part, is clear.\n\nThe most interesting property is that the solution does not depend on the dictionary initialization, unlike many other competing methods. \n\nThe experiments sections in disappointingly short. Could the authors play with real data? How does sparsity affect the results? How does it change with different sample complexities? Also, it would be nice to have a final conclusion section. I think the paper contains interesting material but, overall, it gives the impression that the authors rushed to submit the paper before the deadline!", "We have expanded our synthetic experiment section, added an experiment with real data, and added a conclusion section which discusses some connections to shallow neural nets. Please feel free to take a look at our revision.", "We have expanded the synthetic experiments in Section 5 and added a real data experiments in Appendix H. Please feel free to take a look at our revision.", "Thank you for your valuable feedback!\n\nWe have expanded our synthetic experiment section, added an experiment with real data, and added a conclusion section which discusses some connections to shallow neural nets. Please feel free to take a look at our revision.", "We have made a revision of our paper. The major changes are summarized as follows:\n\n(1) The synthetic experiment (Section 5) is slightly expanded with results on different sparsity (\\theta = 0.1, 0.3, 0.5). Recovery is easier when the sparsity is higher (i.e. \\theta is lower), but in all cases we get successful recovery when m >= O(n^2).\n\n(2) We added an experiment on real images (Appendix H), which shows that complete dictionaries offer a reasonable sparsifying basis for real image patches.\n\n(3) We have added a conclusion section (Section 6) with discussions of our contributions and future directions. ", "The paper provides a very nice analysis for the nonsmooth (l1) dictionary learning minimization in the case of orthogonal complete dictionaries and linearly sparse signals. They utilize a subgradient method and prove a non-trivial convergence result.\n\nThe theory provided is solid and expands on the earlier works of sun et al. for the nonsmooth case. Also interesting is the use a covering number argument with the d_E metric.\n\nA big plus of the method presented is that unlike previous methods the subgradient descent based scheme presented is independent of the initialization.\n\nDespite a solid theory developed, lack of numerical experiments reduces the quality of the paper. Additional experiments with random data to illustrate the theory would be beneficial and it would also be nice to find applications with real data.\n\nIn addition as mentioned in the abstract the authors suggest that the methods used in the paper may also aid in the analysis of shallow non-smooth neural networks but they need to continue and elaborate with more explicit connections.\n\nMinor typos near the end of the paper and perhaps missing few definitions and notation are also a small concern\n\nThe paper is a very nice work and still seems significant! Nonetheless, fixing the above will elevate the quality of the paper.\n", "Thank you for the positive feedback!\n\nWe are performing some more experiments as well as expanding the experiments section in more details. 
Please stay tuned and we will let you know when it’s done.", "Thank you for the thoughtful feedback! \n\nOur preliminary experiments do show the effect of sample complexity -- in particular, empirically the subgradient descent algorithm almost always succeed as long as m = O(n^2), which is even better than the O(n^4) suggested by our theory.\n\nWe are working on additional experiments comparing different sparsity, and real data experiments. (The experiments are indeed a bit time-consuming and would require days.) \n\nWe are also working on adding a conclusion section and revising the paper a bit. Please stay tuned and we will let you know when it’s done. ", "Thank you for the positive feedback! We respond to the specific questions in turn.\n\n“Challenge of extending SQW to non-smooth case” --- The high-level ideas of obtaining the two results are the same: characterizing the nice global landscape of the respective objectives on the sphere, and then designing specific optimization algorithms taking advantage of the particular landscapes. Characterization of the landscape is through the use of first-order (and second-order) derivatives. For our nonsmooth setting, we have to use the subdifferential to describe the first-order geometry, which involves dealing with set-valued functions and random sets (due to the randomness in the data assumption)---very different than dealing with the gradient and Hessian in the smooth calculus, as in SQW. Moreover, traditional argument of uniform convergence of random quantities to their expectation often relies on Lipschitz property of the quantities of interest. For random sets, the notion of concentration is unconventional, and the desired Lipschitz property also fails to hold. We introduce tools from random set theory and construct a novel concentration argument getting around the Lipschitz requirement. This in turn implies that the first-order geometry of the sample objective is close to the benign population objective, from which the algorithmic guarantee follows.\n\n“Potential generalizations” ---  We believe that our theory has the potential to generalize into the overcomplete case. There, a natural generalization of the orthogonality assumption is that the dictionary A is a well-conditioned tight frame (n x L “fat” matrix with orthonormal rows and suitably widespread columns in the n-dim space). Although the \"sparse vectors in a linear subspace\" intuition fails there, we would still expect the columns a_i of A minimize the population objective ||a^T Y||_1 = ||a^T A X||_1: due to the widespread nature of columns of A, a_i^T A would be an “approximately 1-sparse” vector (i.e., with one dominant entry and others having small magnitudes) and so vectors a_i^T AX are expected to be noisy versions of rows of X, which are the sparest (in a soft sense) vectors among all vectors of the form a^T AX. Figuring out the precise optimization landscape in that case would be of great interest.  ", "Thank you for the positive feedback! We respond to the questions in the following.\n\n“Extending to overcomplete DL” --- We believe that our theory has the potential to generalize into the overcomplete case. There, a natural generalization of the orthogonality assumption is that the dictionary A is a well-conditioned tight frame (n x L “fat” matrix with orthonormal rows and suitably widespread columns in the n-dim space). 
Although the \"sparse vectors in a linear subspace\" intuition fails there, we would still expect the columns a_i of A minimize the population objective ||a^T Y||_1 = ||a^T A X||_1: due to the widespread nature of columns of A, a_i^T A would be an “approximately 1-sparse” vector (i.e., with one dominant entry and others having small magnitudes) and so vectors a_i^T AX are expected to be noisy estimates of rows of X, which are the sparest (in a soft sense) vectors among all vectors of the form a^T AX. Figuring out the precise optimization landscape in that case would be of great interest. \n\n“Nonsmooth approach vs. (randomized) smoothing” --- We wonder whether you’re referring to the smoothed *objective*, or applying smoothing *algorithms* on our non-smooth objective. We will discuss both as follows.\n\nA smoothed objective was analyzed in Sun et al.‘15. Smoothing therein helped to make conventional calculus tools and expectation-concentration style argument readily applicable conceptually, but the smoothed objective and its low-order derivatives led to involved technical analysis---the smoothed objective loses the simplicity of the L1 function. This tends to be the case for several natural smoothing schemes. Also, L1 function is the regularizer people use in practical dictionary learning. This paper directly works with the non-smooth L1 objective and is able to obtain stronger results with a substantially cleaner argument, using unconventional yet highly accessible tools from nonsmooth analysis, set-valued analysis, and random set theory. \n\nSmoothing algorithms on non-smooth objective is an active area of ongoing research. For example, Jin et al. ‘18 showed that randomized smoothing algorithms succeed on minimizing non-smooth objectives as long as it is point-wise close to a smooth objective, which is often chosen to be its expected version. However, in our case, even the expected objective is non-smooth (see e.g. Section 3.1), so it is not readily applicable. Moreover, the result there is based on a zero-th order method, which is a conservative algorithmic choice when the (sub)gradient information is readily available---this is the case for us. In this paper, we are able to show the convergence of subgradient descent (i.e., a first-order method) directly on the non-smooth objective. It would be of interest to see whether first-order smoothing algorithms work as well.\n\n“Nonsmoothness in neural networks” --- It depends on what perspective we take. \n\nIf we are interested in the landscape (i.e. the global geometry of the loss function), then the nonsmoothness matters a lot as the nonsmooth points are scattered everywhere in the space, and if one initializes the model adversarially near the highly nonsmooth parts, intuitively the performance can be hurt by the nonsmoothness.\n\nHowever, if we are more interested in the trajectory of some particular algorithms (say, SGD), then maybe the non-smoothness won’t hurt a lot --- as long as nice properties on the trajectory can be established. Such a trajectory-specific analysis has been done recently in, e.g., Du et al. ‘18. Even in this kind of results, there is no formal theory or statement saying that the nonsmooth points won’t be encountered. \n\nBesides our work, there are other recent papers showing why nonsmoothness should and can be handled on a rigorous basis, e.g., Laurent & von Brecht ’17, Kakade & Lee ’18. \n\nReference:\nSun, J., Qu, Q., & Wright, J. (2015). 
Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture. arXiv preprint arXiv:1511.03607.\n\nJin, C., Liu, L. T., Ge, R., & Jordan, M. I. (2018). Minimizing Nonconvex Population Risk from Rough Empirical Risk. arXiv preprint arXiv:1803.09357.\n\nDu, S. S., Zhai, X., Poczos, B., & Singh, A. (2018). Gradient Descent Provably Optimizes Over-parameterized Neural Networks. arXiv preprint arXiv:1810.02054.\n\nLaurent, T., & von Brecht, J. (2017). The Multilinear Structure of ReLU Networks. arXiv preprint arXiv:1712.10132.\n\nKakade, S., & Lee, J. D. (2018). Provably Correct Automatic Subdifferentiation for Qualified Programs. arXiv preprint arXiv:1809.08530.", "This paper studies the dictionary learning problem via non-convex constrained l1 minimization. By using a subgradient descent algorithm with random initialization, they provide a non-trivial global convergence analysis for the problem. The result is interesting, as it does not depend on the complicated initializations used in other methods. \n\nThe paper could be better if the authors could provide more details and results on numerical experiments. This could be used to confirm the proved theoretical properties in practical algorithms. ", "This paper studies nonsmooth and nonconvex optimization and provides a global analysis for orthogonal dictionary learning. The analysis is highly nontrivial compared with existing work. Also, for dictionary learning, nonconvex $\\ell_1$ minimization is very important due to its robustness properties. \n\nI am wondering how extendable this approach is to overcomplete dictionary learning. It seems that an overcomplete dictionary would break the key observation of \"sparsest vector in the subspace\". \n\nIs it possible to circumvent the difficulty of nonsmoothness using (randomized) smoothing, and then apply the existing theory to the transformed objective? My knowledge is limited but this seems to be a more natural thing to try first. Could the authors compare this naive approach with the one proposed in the paper?\n\nAnother minor question is about the connection with training deep neural networks. It seems that in practical training algorithms we often ignore the fact that ReLU is nonsmooth since it only has one nonsmooth point — only with diminishing probability does it affect the dynamics of SGD, which makes subgradient descent seemingly unnecessary. Could the authors elaborate more on this connection?", "This paper is a direct follow-up on the Sun-Qu-Wright non-convex optimization view on the Spielman-Wang-Wright complete dictionary learning approach. In the latter paper the idea is to simply realize that with Y=AX, X being nxm sparse and A an nxn rotation, one has the property that for m large enough, the rows of X will be the sparsest elements of the subspace in R^m generated by the rows of Y. This leads to a natural non-convex optimization problem, whose local optima are hopefully the rows of X. This was proved in SWW for *very* sparse X, and then later improved in SQW to the linear sparsity scenario. 
The present paper refines this approach, and obtains slightly better sample complexity by studying the most natural non-convex problem (ell_1 regularization on the sphere).\n\n\nI am not an expert on SQW so it is hard to evaluate how difficult it was to extend their approach to the non-smooth case (which seems to be the main issue with ell_1 regularization compared to the surrogate loss of SQW).\n\n\nOverall I think this is a solid theoretical contribution, at least from the point of view of non-smooth non-convex optimization. I have some concerns about the model itself. Indeed *complete* dictionary learning seemed like an important first step in 2012 towards more general and realistic scenarios. It is unclear to this reviewer whether the insights gained for this complete scenario are actually useful more generally.\n
[ 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, 7, 7, 7 ]
[ 1, -1, -1, -1, -1, 2, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2019_HklSf3CqKm", "Bkxa-CzHpX", "r1gomAzrpQ", "ByeePKFwaX", "iclr_2019_HklSf3CqKm", "iclr_2019_HklSf3CqKm", "SkxX-tyrT7", "r1g5pn1527", "H1lW-zgChm", "Byl9W4UmTQ", "iclr_2019_HklSf3CqKm", "iclr_2019_HklSf3CqKm", "iclr_2019_HklSf3CqKm" ]
iclr_2019_HklY120cYm
ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech
In this work, we propose a new solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (van Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we introduce the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model.
accepted-poster-papers
The authors discuss an improved distillation scheme for parallel WaveNet using a Gaussian inverse autoregressive flow, which can be computed in closed-form, thus simplifying training. The work received favorable comments from the reviewers, along with a number of suggestions for improvement which have improved the draft considerably. The AC agrees with the reviewers that the work is a valuable contribution, particularly in the context of end-to-end neural text-to-speech systems.
val
[ "rJx4An_ZT7", "r1xoUWp8k4", "BygO0toz67", "HyxgTgJH14", "rygsCGE3CQ", "r1efUdzcRX", "HkxgRJx5Rm", "SJgSu_vK07", "rklM6VPtA7", "B1lEY-wcjX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "After reading other reviews and author comments, I have raised my rating to a 6. My main concerns remain (lack of significant contribution and lack of an ablation study with more comprehensive experiments). However, I'm not against the paper as an interesting finding in and of itself. It would be great if the authors (or interested members of the research community) may analyze how general-purpose their proposals are (e.g., of Gaussian base distribution) and how extensive the results are on TTS benchmarks.\n\n--\n\nOverall, I very much like the direction this paper pursues. However, the content doesn't substantiate their two claimed contributions. I highly recommend the authors either back up their claims in more detail, or center their work in terms of the result and less so about the ideas (which at the moment, are not convincing to use outside of this specific setup).\n\nThe authors propose two contributions:\n\n1. They build on parallel WaveNet which uses distillation by minimizing a KL divergence from a Logistic IAF as a student to a Mixture of Logistic AF as a teacher. Instead, they simply use Gaussians which has a closed-form KL divergence and makes training during distillation significantly simpler. Because of stability problems, they also add 1. a penalty term to discourage the original loss from dividing by a standard deviation close to zero; and 2. converting van den Oord et al. (2018)'s average power loss penalty to a frame-level loss penalty.\n\nTheir choice of Gaussians requires a restriction on the likelihood, and they show one result arguing the likelihood choice doesn't make much of a difference. This result comprises 4 human-evaluated numbers, with a fixed architecture and training hyperparameters of their choice. Unfortunately, I'm not convinced. Can the authors provide more compelling evidence? If the authors argue this is one of their main contributions, I find that lack of a more comprehensive empirical or theoretical study disconcerting.\n\nSimilarly, while I like that using Gaussian KLs makes the distillation objective in closed-form, there isn't evidence indicating the benefit. The one result (the 4 numbers above) are conflated by both the change in model as well as utilizing the closed-form loss. The same goes for their one result (2 numbers) comparing forward to reverse KL.\n\n2. They \"propose the first text-to-wave neural architecture for TTS, which can be trained from scratch in an end-to-end\nmanner.\" I'm not an expert on speech so I can't accurately assess the novelty here. However, it would be nice to show these results independent of the other proposed changes.\n\nWriting-wise, the paper was clear, although potentially too packed with background information. As a expert on generative models, most of Sections 1-3 are already well-known and could be made more concise by referencing past works for more details. They add various details (such as the architecture notes at the end of 3.1) which should be better placed elsewhere to tease out what the important changes are in this paper.", "Just realized it should be \"van den Oord\" not \"van Oord\" by the way. Apologies for not catching that sooner.", "This paper proposes some modifications to established procedures for neural speech synthesis and investigates their effect experimentally. 
The proposed modifications are mostly fairly straightforward conceptually, but appear to work well, and this reviewer feels the paper has huge value in its experimental contributions extending and clarifying certain aspects of WaveNet training and distillation. The paper is well-written and fairly concise, with a short-and-sweet experimental results section.\n\nMajor comments:\n\nThe conceptual novelty seems a little overstated in the abstract. For example, the value seems to not really be in the \"proposing\" a text-to-wave neural architecture for speech synthesis (which, aside from important experimental tweaks, is essentially Tacotron 2 training all parameters from scratch) but in showing that it works well experimentally. Conceptually the paper is extremely close to the parallel wavenet paper, the main differences being slightly different component distributions (Gaussian instead of logistic), a different set of loss terms in addition to the reverse KL, and joint training of the spectral synthesis and waveform synthesis parts of the model.\n\nIt would be super insightful to include log probabilities on the test set (everywhere MOS results have been reported) in the experimental results. This would help tease apart the effects of architecture inductive bias, different divergences, distillation, etc. One of the really nice things about flow-based models is the ability to compute the log probability tractably.\n\n\nMinor comments:\n\nPerhaps mention that teacher forcing is maximum likelihood in the introduction? Currently it almost sounds like the paper is contrasting teacher forcing for WaveNet (paragraph 2) and MLE (list item 1).\n\nAt the end of paragraph 3 in the introduction, it would be helpful to mention that the intractable KL divergence being referred to is the frame-level one-step-ahead predictions, not the entire sequence-level prediction. Also, for 1D distributions isn't taking a large number of samples quite effective in practice?\n\nIn introduction list item 3, suggest mentioning Tacotron 2 (Shen et al) and contrasting with the present work for clarity.\n\nIn section 3.1, it surprises me slightly that clipping at -7 is essential. It would be helpful to state what exactly goes wrong if this is not done. Does it lead to overfitting and so bad test log likelihoods? What effect is noticeable in the generated samples?\n\nEquation (6) is incorrect. It should be conditioned on < t, not <= t. Conditioning on z <= t would make x_t deterministic.\n\nEquation (7) is technically true as written, but only because all the distributions involved are deterministic. If <= t is replaced with < t (which based on the mistake in (6) is what I suspect the authors intended) then it is no longer true. This equation is not used anywhere as far as I can tell. It seems to me like the property that enables non-recursive-over-time (\"parallel\") sampling is (5), not (7). Incidentally, when multiple one-step-ahead samples are taken per frame for parallel wavenet, the samples viewed at the sequence level are highly correlated, and do not obey anything like (7), but it doesn't affect the correctness of the expected value.\n\nThe IAF doesn't really \"infers its output x at all time steps\". Maybe \"models\" instead of \"infers\"?\n\nLearning an \"IAF directly through maximum likelihood\" doesn't seem all that impractical. 
People train networks with recursive dependence such as RNNs (which is essentially what would be required to train certain forms of IAF with MLE) as opposed to non-recursive dependence such as CNNs all the time, after all. It seems like this claim depends on the details of the transform $f$.\n\nOut of interest, did the authors consider reversing the sequence being generated in time between successive IAF blocks? This would limit the ability to do low latency synthesis but might improve performance considerably.\n\nThe first paragraph in section 3.3 seems like it should probably be part of section 3.3.1 (it's not related to other losses such as spectrogram frame loss, for example). It would be helpful to state explicitly that: (a) the goal is to minimize the sequence-level reverse KL; (b) this can be approximated by taking a single sample z, but this may have high variance; (c) the variance of this estimate can be reduced by marginalizing over the one-step-ahead predictions for each frame; (d) parallel wavenet's mixture of logistics means it has to use a separate Monte Carlo sampling at the frame-level, whereas the proposed Gaussian allows this one-step-ahead marginalization to be performed analytically. This one-step-ahead marginalization is an example of Rao-Blackwellization.\n\nIt didn't seem clear from section 3.3 and 3.3.1 that parallel wavenet also uses the one-step-ahead marginalization trick to reduce the variance.\n\nIt might be helpful to mention that using the reverse KL would be expected to have mode-fitting behavior, making samples sound better but log probability on the test set worse.\n\nIt was not clear to me what difference or similarity was being demonstrated in Figure 1.\n\nSmall point, but \"Oord et al\" should be \"van Oord et al\" throughout (it's a surname).\n\nIn section 3.3.2, can the authors give any insight as to why training with reverse KL alone leads to whispering, and why adding the STFT term fixes this? (If it's only something that's been noticed empirically, \"will lead\" -> \"empirically we found\"?)\n\nI noticed quite a large qualitative perceptual difference between the student and teacher samples, particularly in the speech synthesis case (experiment 3), even though I think I'd rate the quality on a linear scale as fairly similar (in line with the MOS results). The teacher sounds noticeably \"harsher\" but \"clearer\" Do the authors have any insight as to why this perceptual difference occurs (if they also perceive a qualitative difference)? Is it probably a difference in inductive bias between an AF (which WaveNet can be seen as) and IAF?\n\nI found it fascinating that reverse KL and forward KL lead to roughly the same MOS for spectrum-to-waveform. I assumed reverse KL would be better due to its preference for high-quality samples due to mode fitting.\n\nOut of curiosity, what is responsible for the pops at the start of the spectrogram-conditioned distilled models? Also why are the synthesized samples shorter than the ground truth (less initial silence)?\n\n", "Thanks so much to the authors for thoroughly taking into account and properly integrating many of my suggestions. I personally feel like this is an even stronger paper now because of the changes.\n\nVery minor responses:\n\nI would indeed expect the log probabilities on the test set not to correlate very well with perceived sample quality, but that it would provide complementary information. 
Sample quality is measuring something like whether waveforms likely under the model are also likely in reality, whereas log probability is measuring something like whether waveforms likely in reality are also likely under the model. This is particularly relevant here since the original model is trained with KL divergence and the distilled model is trained with reverse KL, which optimize for different aspects of this trade-off. I would also expect log probability not to correlate with sample quality since a medium amount of statistical overfitting is very bad for log probability but often quite beneficial for sample quality.\n\nI think the \"one-step-ahead\" and \"per-time-step\" additions greatly clarify what is done and its differences to previous work. Thanks for including that.\n\nThe first paragraph in section 3.3.1 now seems super clear to me.\n\n\"Figure 1 implies a fast and persistent matching of $log \\sigma$ in teacher and student models\". That makes sense; thanks for the explanation. Perhaps consider also including the empirical histogram of the student log stdev when the unregularized reverse KL is used for distillation, to show the benefit of the proposed method.\n\nObviously it's good to keep the paper concise, but I personally think the point about qualitative difference in sample quality between teacher and student, and the point about high-frequency blurriness, are interesting and worth mentioning in the paper, even if we don't yet understand exactly why it occurs. Up to the authors whether they think this is insightful or not, though.", "Many thanks for your detailed response, and for addressing many of the issues.\n\nI remain positive about the paper, and I'd be very pleased if my review has helped in improving it. I think it's good work, and I wish it best of luck.", "\nThank you for your in-depth review. These comments are really helpful for improving our paper.\n \n- “I think the paper would be strengthened if the performance of sample-based KL distillation was added into Table 2, and if learning curves were reported that evaluate the amount of stabilization that an analytical KL may offer vs a sample-based KL.”\n* This is a good point. We tried to implement sample-based KL distillation with mixture of logistic distribution, but we haven’t produced high quality speech as provided by the original paper. We didn’t include the results in Table 2, as it seems like we are comparing with a straw man. It should be noted, at the time of this submission, implementing parallel WaveNet is still beyond the capabilities of open source community, even though it receives a lot of attention from TTS practitioners and is very valuable for TTS production. For example, there are some open discussion in the following repo: https://github.com/r9y9/wavenet_vocoder/issues/7 .\nIn addition, many thanks for your suggestion about learning curves. We will add the comparison of learning curves (analytical KL vs sample-based KL) in our final draft.\n\n- “It wasn't clear to me whether distillation happens at the same time as the autoregressive WaveNet is trained on data, or after it has been fully trained. I think the paper should make this clear.”\n* The distillation happens after the autoregressive WaveNet is fully trained. 
We have clarified this in our revision.\n\n- “However, section 3 contains several notational errors … q(x_t | z_{<=t}) is used in several places to mean the Gaussian conditional q(x_t | z_{<t}) … I believe that section 3, especially subsections 3.2 and 3.3.1, should be reworked to be made clearer, and the notation should be carefully revised.”\n* Many thanks for pointing it out. Yes, q(x_t | z_{<t}) is Gaussian, but q(x_t | z_{<=t}) is deterministic a.k.a. a delta distribution. The Eq. (7) is technically true, only because all the involved distributions q(z|x) and q(x_t | z_{<=t}) ) are deterministic (as pointed out by Reviewer 3). To avoid confusion, we have removed Eq. (7) and related description. Also, we have revised these notation errors in Section 3.\n\n- “I don't think the paper needs to span 9 pages. Section 3 is rather wordy, and should be compressed to the important points.”\n* We have shortened Section 3 in our revision. We will further shorten it in our final draft. \n\n- “The paper contains a substantial amount of significant work that I think is important to be communicated to the ICLR community, especially the text-to-speech community.”\n* We really appreciate your comment. \n\nWe have fixed the issues listed in “Nitpicks”. Many thanks for your detailed review. \n", "\nThank you for your review; the feedback is very helpful to improve our paper. \n\n- “Their choice of Gaussians requires a restriction on the likelihood, and they show one result arguing the likelihood choice doesn't make much of a difference. This result comprises 4 human-evaluated numbers, with a fixed architecture and training hyperparameters of their choice. Unfortunately, I'm not convinced. Can the authors provide more compelling evidence? If the authors argue this is one of their main contributions, I find that lack of a more comprehensive empirical or theoretical study disconcerting”\n* It is a standard practice to evaluate Mean Opinion Score (MOS) in speech synthesis. Although they are human-evaluated numbers from crowdsourcing platform (Mechanical Turk), they are much more indicative of the true goal (a.k.a. synthesizing high fidelity speech) than any other objective metrics, as long as the MOS evaluation follows good practices (e.g., crowdMOS). Also, we didn’t claim that autoregressive WaveNet with single Gaussian outperforms other options (e.g., softmax). Instead, we argue it is sufficient for modeling the raw waveform in WaveNet by providing competitive quality of synthesized speech as others options. \n\n- “Similarly, while I like that using Gaussian KLs makes the distillation objective in closed-form, there isn't evidence indicating the benefit.\"\n* This is a good point. One major benefit of Gaussian is the closed-form distillation objective, in contrast to parallel WaveNet. We tried to implement parallel WaveNet with Monte Carlos estimates of the KLD, but we haven’t produced high quality speech as provided by the original paper. We didn’t include this result, as it may impose the impression that we are comparing with a strawman. It should be noted, at the time of this submission, implementing parallel WaveNet is still beyond the capabilities of open source community, even though it attracts a lot of attention from TTS practitioners and is very valuable for TTS production. 
For example, here is some open discussion ( https://github.com/r9y9/wavenet_vocoder/issues/7 ).\n\n- “The one result (the 4 numbers above) are conflated by both the change in model as well as utilizing the closed-form loss.”\n* The results in Table 1 (4 numbers) don’t utilize the closed-form loss. They are MOS results using different output distributions for autoregressive WaveNet. Our conclusion from Table 1 is that Gaussian WaveNet can produce competitive quality of samples as other options.\n\n - “Writing-wise, the paper was clear, although potentially too packed with background information. As a expert on generative models, most of Sections 1-3 are already well-known and could be made more concise by referencing past works for more details. They add various details (such as the architecture notes at the end of 3.1) which should be better placed elsewhere to tease out what the important changes are in this paper.\"\n* Thanks for your nice suggestion. We have moved the architecture notes at the end of Section 3.1 to Appendix and experiment Section. We have also shortened Section 3 in our revision. Note that it is an application paper; a lot readers are from text-to-speech community. We referred past work for more details in Section 1-3, but we think a self-contained presentation with enough background information could be helpful to communicate with readers from different background. \n", "\n- “The first paragraph in section 3.3 seems like it should probably be part of section 3.3.1. It would be helpful to state explicitly that: (a) the goal is to minimize the sequence-level reverse KL; (b) this can be approximated by taking a single sample z, but this may have high variance; (c) the variance of this estimate can be reduced by marginalizing over the one-step-ahead predictions for each frame; (d) parallel wavenet's mixture of logistics means it has to use a separate Monte Carlo sampling at the frame-level, whereas the proposed Gaussian allows this one-step-ahead marginalization to be performed analytically.”\n* Many thanks for your great suggestion. We have reorganized Section 3.3 and stated (a)(b)(c)(d) explicitly in our revision. \n\n- “It was not clear to me what difference or similarity was being demonstrated in Figure 1.”\n* Figure 1 implies a fast and persistent matching of $log \\sigma$ in teacher and student models because of the proposed regularization term, which is crucial to avoid numerical issue. More importantly, monitoring the empirical histogram of $log \\sigma$ during distillation is very helpful for reproducing ClariNet, because a successful distillation process always exhibits the empirical histograms like Figure 1.\n\n- “Small point, but \"Oord et al\" should be \"van Oord et al\" throughout (it's a surname).”\n* Thanks for your correction. We have fixed it throughout the paper.\n\t\n- “In section 3.3.2, can the authors give any insight as to why training with reverse KL alone leads to whispering, and why adding the STFT term fixes this? (If it's only something that's been noticed empirically, \"will lead\" -> \"empirically we found\"?)”\n* This is a very good question. We only have some intuitions behind this empirical observation, but we recommend a new ICLR submission which gives an in-depth analysis and provides non-trivial insights on this problem ( https://openreview.net/forum?id=rygFmh0cKm ). Adding STFT term fixes the whispering, because it will raise the energy of synthesized voice. 
We have changed “will lead” to “especially we found” in the revision.\n\n- “I noticed quite a large qualitative perceptual difference between the student and teacher samples, particularly in the speech synthesis case (experiment 3), even though I think I'd rate the quality on a linear scale as fairly similar (in line with the MOS results). The teacher sounds noticeably \"harsher\" but \"clearer\" Do the authors have any insight as to why this perceptual difference occurs (if they also perceive a qualitative difference)? Is it probably a difference in inductive bias between an AF (which WaveNet can be seen as) and IAF?”\n* Yes, we also perceive this qualitative perceptual difference. When we visualize the spectrograms of student and teacher samples, we found that the high frequency bands of student samples tend to be more blurred than teacher’s. It implies that the AF may be better at modeling the high frequency details than the non-autoregressive IAF. \n\n- \"Out of curiosity, what is responsible for the pops at the start of the spectrogram-conditioned distilled models? Also, why are the synthesized samples shorter than the ground truth (less initial silence)? \"\n* The synthesized samples are shorter than the ground truth, because our data preprocessing pipeline chopped the initial and trailing silence. It is also responsible for the pops at the start of the synthesized audios, because the model didn’t see enough silence at the start of audios during training. We will remove this problematic operation and update all synthesized samples afterwards. \n\t\nMany thanks again for your in-depth review and very insightful suggestion.", "\nThank you so much for the detailed comments and suggestions; they are really helpful to improve the quality of our paper.\n\nMajor comments:\n\n- \"The value seems to not really be in the \"proposing\" a text-to-wave neural architecture for speech synthesis (which, aside from important experimental tweaks, is essentially Tacotron 2 training all parameters from scratch) but in showing that it works well experimentally.”\n* Except training all parameters from scratch, our text-to-wave architecture is different from previous Tacotron 2 or Deep Voice 3, because the WaveNet vocoder is conditioned on the hidden states instead of mel-spectrogram from the encoder-decoder architecture. This difference is crucial to the success of training from scratch. Actually, we tried to simply connect text-to-spectrogram model and a mel-spectrogram conditioned WaveNet and train all parameters from scratch, but it performs worse than the separate training pipeline like Tacotron 2. We will emphasize this difference in our paper. \n \n- \"It would be super insightful to include log probabilities on the test set (everywhere MOS results have been reported) in the experimental results.\"\n* Thanks for your nice suggestion. We will include the log probabilities results in our final draft. We also want share some preliminary observations here. We usually find that the test likelihood is not directly related to the quality of synthesized samples. For example, when we perform hyper-parameter search for autoregressive WaveNet, the validation likelihood is not reliable at all for selecting a “good” model that synthesizes high quality speech samples. This is probably the reason that test likelihood is not a common evaluation metric in speech synthesis community.\n\nMinor comments:\n\n- \"Perhaps mention that teacher forcing is maximum likelihood in the introduction? 
Currently it almost sounds like the paper is contrasting teacher forcing for WaveNet (paragraph 2) and MLE (list item 1). \"\n* Thanks for your suggestion. In contrast to the quantized surrogate loss for mixture of logistic distribution in Parallel WaveNet, we apply MLE for Gaussian. All autoregressive models are trained with teacher forcing. We have clarified it at list item 1 in our revision.\n\n- \"At the end of paragraph 3 in the introduction, it would be helpful to mention that the intractable KL divergence being referred to is the frame-level one-step-ahead predictions, not the entire sequence-level prediction. Also, for 1D distributions isn't taking a large number of samples quite effective in practice? “\n* Thanks for your suggestion. In our draft, frame-level refers to STFT frame, so we add “intractable per-time-step KL divergence” at the end of paragraph 3. In addition, we further clarify this point in Section 3.1 following your suggestion. Monte Carlo sampling can be effective for 1D distribution, but it is certainly less effective than closed-form computation and may require a large number of samples for highly peaked distributions, which is usually the case for WaveNet. In practice, a large number of samples may also raise out-of-memory issue.\n\n- “In introduction list item 3, suggest mentioning Tacotron 2 (Shen et al) and contrasting with the present work for clarity.”\n* Thanks for your suggestion; we have mentioned Tacotron 2 and compared it with our work in list item 3.\n\n- \"In section 3.1, it surprises me slightly that clipping at -7 is essential. It would be helpful to state what exactly goes wrong if this is not done.”\n* Yes, clipping is very important to avoid numerical problem (NaN) during training. When we track the NaN in our initial implementation, we found that $\\sigma$ can be very small at some time-steps, which may lead to numerical issues.\n\n- \"Equation (6) is incorrect. It should be conditioned on < t, not <= t. Conditioning on z <= t would make x_t deterministic. Equation (7) is technically true as written, but only because all the distributions involved are deterministic.\n* Many thanks for pointing it out. We have revised this notation error throughout the paper. To avoid confusion, we have also removed Eq. (7) and misleading description.\n\n- “The IAF doesn't really \"infers its output x at all time steps\". Maybe \"models\" instead of \"infers\"?”\n* Yes, we have changed it to “models” in revision.\n\n- \"Learning an \"IAF directly through maximum likelihood\" doesn't seem all that impractical.”\n* Yes, we agree on that. WaveRNN is a good example. We have moderated our text.\n\n- \"Out of interest, did the authors consider reversing the sequence being generated in time between successive IAF blocks?”\n* This is an excellent idea. We didn’t try it, but we will definitely try it afterwards.", "Paper summary:\n\nThe paper presents two distinct contributions in text-to-speech systems:\na) It describes a method for distilling a Gaussian WaveNet into a Gaussian Inverse Autoregressive Flow that uses an analytically computed KL between their conditionals.\nb) It presents a text-to-speech system that is trained end-to-end from text to waveforms.\n\nTechnical quality:\n\nThe distillation method presented in the paper is technically correct. 
The evaluation is based on Mean Opinion Score and seems to follow good practices.\n\nThe paper makes three claims:\na) A WaveNet with Gaussian conditionals can model speech waveforms equally well as WaveNets with other types of conditionals.\nb) Analytically computing KL divergence stabilizes distillation.\nc) A text-to-speech system trained end-to-end from text to waveforms outperforms one that has separately trained text-to-spectrogram and spectrogram-to-waveform subsystems.\n\nClaims (a) and (c) are clearly demonstrated in the experiments. However, there is nothing in the paper that substantiates claim (b). I think the paper would be strengthened if the performance of sample-based KL distillation was added into Table 2, and if learning curves were reported that evaluate the amount of stabilization that an analytical KL may offer vs a sample-based KL.\n\nFurther points about the experiments:\n- It wasn't clear to me whether distillation happens at the same time as the autoregressive WaveNet is trained on data, or after it has been fully trained. I think the paper should make this clear.\n- The paper says that distillation makes generation three orders of magnitude faster. I think it would be good if actual generation times (e.g. in seconds) were reported.\n\nClarity:\n\nThe paper is generally well-written. Sections 1 and 2 in particular are excellent.\n\nHowever, section 3 contains several notational errors and technical inaccuracies, that makes it rather confusing to read. In particular:\n- q(x_t | z_{<=t}) is used in several places to mean the Gaussian conditional q(x_t | z_{<t}) (e.g. in Eqs (6) and (7), and elsewhere). This is confusing, as q(x_t | z_{<=t}) is actually a delta distribution.\n- q(x | z) is used in several places to mean q(x) (e.g. in Eq. (7), in Alg. 1 and elsewhere). This is confusing, as q(x | z) is also a delta distribution.\nI believe that section 3, especially subsections 3.2 and 3.3.1, should be reworked to be made clearer, and the notation should be carefully revised.\n\nI don't think the paper needs to span 9 pages. Section 3 is rather wordy, and should be compressed to the important points.\n\nOriginality:\n\nDistilling a Gaussian autoregressive model to another Gaussian autoregressive model by matching their Gaussian conditionals with an analytical KL is rather straightforward, and, methodologically speaking, I wouldn't consider it an original contribution on its own. However, I think its application and demonstration in text-to-speech constitutes an original contribution.\n\nSignificance:\n\nThe paper contains a substantial amount of significant work that I think is important to be communicated to the ICLR community, especially the text-to-speech community.\n\nReview summary:\n\nPros:\n+ Substantial amount of good work.\n+ Significant improvement in text-to-speech end-to-end software.\n+ Generally well-written (with the exception of section 3 which needs work).\n\nCons:\n- Some more experiments would be good to substantiate the claim that analytical KL is better.\n- Notational errors and confusion in section 3.\n- Too wordy, no need for 9 pages.\n\nNitpicks:\n- As I said above, I wouldn't consider distillation of models with Gaussian conditionals using analytical KLs methodologically novel, so I think the phrase \"novel regularized KL divergence\" should be moderated.\n- Eq. 
(1) should contain theta on the left hand side too.\n- Page 3: \"at Appendix B\" --> \"in Appendix B\".\n- Page 4: In flows we don't just \"suppose z has the same dimension as x\"; rather, it's a necessary condition that must hold.\n- Footnote 5: It's unclear to me what it means to \"make the loss less sensitive\".\n- References: Real NVP, Fourier, Bayes, PixelCNN, WaveNet, VoiceLoop should be properly capitalized." ]
[ 6, -1, 9, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_HklY120cYm", "HyxgTgJH14", "iclr_2019_HklY120cYm", "rklM6VPtA7", "r1efUdzcRX", "B1lEY-wcjX", "rJx4An_ZT7", "BygO0toz67", "BygO0toz67", "iclr_2019_HklY120cYm" ]
iclr_2019_HkljioCcFQ
MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING
In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from O(2^T) to O(T^2). Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization.
accepted-poster-papers
The paper proposes a new attentional pooling mechanism that potentially addresses the issues of simple attention-based weighted averaging (where discriminative parts/frames might get disportionately high attentions). A nice contribution of the paper is to propose an alternative mechanism with theoretical proofs, and it also presents a method for fast recurrent computation. The experimental results show that the proposed attention mechanism improves over prior methods (e.g., STPN) on THUMOS14 and ActivityNet1.3 datasets. In terms of weaknesses: (1) the computational cost may be quite significant. (2) the proposed method should be evaluated over several tasks beyond activity recognition, but it’s unclear how it would work. The authors provided positive proof-of-concept results on weakly supervised object localization task, improving over CAM-based methods. However, CAM baseline is a reasonable but not the strongest method and the weakly-supervised object recognition/segmentation domains are much more competitive domains, so it's unclear if the proposed method would achieve the state-of-the-art by simply replacing the weighted-averaging-attentional-pooling with the proposed attention mechanism. In addition, the description on how to perform attentional pooling over images is not clearly described (it’s not clear how the 1D sequence-based recurrent attention method can be extended to 2-D cases). However, this would not be a reason to reject the paper. Finally, the paper’s presentation would need improvement. I would suggest that the authors give more intuitive explanations and rationale before going into technical details. The paper starts with Figure 1 which is not really well motivated/explained, so it could be moved to a later part. Overall, there are interesting technical contributions with positive results, but there are issues to be addressed.
train
[ "H1lBtyH214", "r1xzqeXtAm", "r1lyqwftCQ", "SkgHR0-FAQ", "HkgzxyE0hm", "Syec-wnqn7", "rkxNicb5nm" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I appreciate the updated results on weakly-supervised object localization on images. Overall, I think the paper has reasonable contributions. The improvement in THUMOS14 dataset over STPN is not significant, but the results on ActivityNet look promising and the results on weakly-supervised object localization are convincing to believe that the proposed method can be generally useful to address the challenge of weakly-supervised localization where the model focuses on the most discriminative regions. For these reasons, I maintain my review score as weakly accept.", "\nThanks very much for the valuable comments and suggestions. We have clarified some questions listed below:\n\nQ: Where the improvements come from? \nOur re-implementation is not being able to replicate the results of the original STPN. Many factors may influence the performance, such as the optical flow, the RGB I3D feature and Flow I3D feature extracted with the different toolbox. The same model implemented by PyTorch and TensorFlow may also have different performance. Actually, we use 400 snippets for the re-implementation of STPN in the paper (the same as the original STPN). For SPTN, using 20 snippets is worse than using 400 snippets. The result is shown as follows:\n\nIoU threshold from 0.1:0.1:0.9\nT=20 for STPN: 43.8, 35.5, 26.0, 18.5, 10.5, 6.3, 3.4, 1.6, 0.2;\nT=400 for STPN: 57.4, 48.7, 40.3, 29.5, 19.8, 11.4, 5.8, 1.7, 0.2;\n\nWe use 20 snippets for the proposed MAAN in the paper. The reason is that a small T can accelerate training, and the number of snippets has a less influence of the performance for MAAN compared with STPN. \n\nThere is a typo here that we actually reject classes whose video-level probabilities are below 0.1 instead of 0.01, which we have updated in the paper.\n\nActually, in order to reduce the influence of different factors and provide a fair comparison, we keep all the settings exactly the same between the proposed model and the re-implemented STPN model, as well as all the other compared baseline models. Under the same setting, we only change the feature aggregator in different models. The better qualitative and quantitative results compared with other baseline models empirically demonstrate that the proposed feature aggregator is the main reason for the improvement. In the paper, we also provide theoretical analysis and proof to understand and explain why the proposed model works.\n\n\nQ: Review of current literature.\nAlthough the model in “Tell Me Where to Look: Guided Attention Inference Network” is end-to-end trainable, it is essentially a two-stage architecture where the first stage is based on the CAM model. Our MAAN is a better alternative model for the CAM model by simply replacing the feature aggregator, which can also be served as the first stage of the model in “Tell Me Where to Look: Guided Attention Inference Network”. \nThanks for the suggestion of the recent work W-TALC for weakly-supervised action localization, which we have missed before. We have checked the paper carefully and found that the main idea of W-TALC is a Co-Activity Similarity. The assumption is that a video pair sharing the same action label should have similar feature representations and a video pair not sharing the same action label should have a large feature difference. We do appreciate the idea. But our work is a totally different but complete story from another perspective, where the assumption, theoretical derivation, experimental results are complete and can support each other. 
We think both works are beneficial to the community. Moreover, it is interesting to incorporate our method as a plug-in into these frameworks to boost the performance.\n\n\nQ: Results. \nThe quantitative results show the improvement of our methods compared with the baselines in the experiments. It means that our method can bring more true positives than false positives. We also show more qualitative results on the image object localization task (Appendix F in the updated paper). \n", "\nThanks very much for the valuable comments and suggestions. \n\nWe have applied the idea to the weakly-supervised image object localization task. As suggested, similar to CAM (Zhou et al. 2016), we plug the proposed MAA pooling method on top of the CNN feature map instead of global average pooling. Besides comparing with global average pooling, we have also compared with weighted average pooling. The specific experimental settings and results are shown in Appendix F in the updated paper. \n\nAs for the time complexity, we use 20 snippets in the training phase. At the test phase for localization, we forward each snippet i to the trained model and compute the p_i but not the lambda_i, as shown in Equation (14) (the proof is demonstrated in Proposition 2 in Section 2.2). Therefore, the time complexity at the test phase is indeed O(T), which can also be easily parallelized to O(1).\n\nWe have corrected the citation as suggested.\n", "\nThanks very much for the comments and suggestions on other localization tasks. \n\nActually, many works are based on models pre-trained on other datasets like ImageNet and Kinetics (Carreira and Zisserman 2017). The compared STPN model in this paper has also used the I3D model pre-trained on the Kinetics dataset. We compare the proposed MAAN with STPN and other baseline models under the exact same experimental settings. \n\nThe proposed feature aggregator can be used in other weakly-supervised learning tasks. For example, we have applied the proposed method to the weakly-supervised image object localization task. The experimental settings and results are shown in Appendix F in the updated paper.\n", "This paper considers the problem of weakly-supervised temporal action localization. It proposes a marginalized average attention network (MAAN) to suppress the effect of overestimating salient regions. Theoretically, this paper proves that the learned latent discriminative probabilities reduce the difference of responses between the most salient regions and the others. In addition, it develops a fast algorithm to reduce the complexity of constructing MAA to O(T^2). Experiments are conducted on THUMOS14 and ActivityNet 1.3.\n\nI like the theoretical part of this paper but have concerns about the experiments. More specifically, my doubts are\n\n- The I3D network models are not trained from scratch. The parameters are borrowed from (Carreira and Zisserman 2017), which in fact makes the attention averaging very easy. I don’t know whether the success is because the proposed MAAN is working or because the feature representation is very powerful.\n\n- If possible, I wish to see the success of the proposed method for other tasks, such as image caption generation, and machine translation. If the paper can show success in any such task, I would like to adjust my rating to above acceptance.\n\n", "Summary\nThis paper proposed a stochastic pooling method over the temporal dimension for the weakly-supervised video localization problem. 
The main motivation is to resolve a problem of discriminative attention that tends to focus on a few discriminative parts of the input data, which is not desirable for the purpose of dense labeling (i.e. localization). The proposed stochastic pooling method addressed this problem by aggregating all possible subsets of snippets, where each subset is constructed by sampling snippets from a learnable sampling distribution. The proposed method showed that such an approach learns smoother attention, both theoretically and empirically.\n\nClarity:\nThe paper is well written and easy to follow. The ideas and methods are clearly presented.\n\nOriginality and significance:\nThe proposed stochastic pooling is novel and demonstrated to be empirically useful. Given that the proposed method can be generally applicable to other tasks, I think the significance of the work is also reasonable. One suggestion is applying the idea to semantic segmentation, which also shares a similar problem setting but where the impact is easier to evaluate than on videos. Similar to (Zhou et al. 2016), you can plug the proposed pooling method on top of the CNN feature map instead of global average pooling, which might be doable with a more affordable computational cost since the number of hidden units for pooling is much smaller than the length of videos (N < T). \n\nOne downside of the proposed method is its computational complexity (O(T^2)). This is much higher than the one for other feedforward methods (O(T)), which can be easily parallelized (O(1)). This can be a big problem when we have to handle very long sequences too (increasing the length of snippets could be one alternative, but it is not desirable for localization in the end). Considering this disadvantage, the performance gain by the proposed method may not be considered attractive enough. \n\nExperiment:\nOverall, the experiment looks convincing to me. \n\nMinor comments:\nCitation error: Wrong citation: Nguyen et al. CVPR 2017 -> CVPR 2018\n", "In this paper the authors focus on the problem of weakly-supervised action localization. The authors state that a problem with weakly-supervised attention based methods is that they tend to focus on only the most salient regions, and they propose a solution to this which reduces the difference between the responses for the most salient regions and other regions. They do this by employing marginalized average aggregation: averaging a sampled subset of features in relation to their latent discriminative probability, then calculating the expectation over all possible subsets to produce a final aggregation.\n\nThe problem is interesting, especially noting that current attention methods suffer from paying attention to the most salient regions, therefore missing many action segments in action localization. The authors build upon an existing weakly-supervised action localization framework, having identified a weakness of it, and propose a solution. The work also pays attention to the algorithm's speed, which is practically useful. The experiments also compare to several other potential feature aggregators.\n\nHowever, there are several weaknesses of the current version of the paper:\n\n- In parts the paper feels overly complicated, particularly in the method (section 2). It would be good to see more intuitive explanations of the concepts introduced here. 
For instance, the authors state that c_i captures the contextual information from other video snippets; it would be good to see a figure with an example video and the behaviour of p_i and c_i as opposed to lambda_i. I found it difficult to map p_i, c_i to z and lambda used elsewhere.\n\n- The experimental evidence does not show where the improvement comes from. The authors manage to achieve a 4-5% improvement over STPN through their re-implementation of the algorithm, however they only have a ~2% improvement with their marginalized average attention on THUMOS. I would like to know the cause of the increase over the original STPN results: is it a case of not being able to replicate the results of STPN, or do the different parameter choices, such as the use of leaky ReLU, 20 snippets instead of 400, and only rejecting classes whose video-level probabilities are below 0.01 instead of 0.1, cause this big of an increase in results? There is also little evidence that the actual proposal (contextual information) is the reason for the reported improvement.\n\n- There seem to be several gaps in the review of current literature. Firstly, the authors refer to Wei et al. 2017 and Zhang et al. 2018b as works which erase the most salient regions to be able to explore regions other than the most salient. The authors state that the problem with these methods is that they are not end-to-end trainable; however, Li et al. 2018 'Tell Me Where to Look: Guided Attention Inference Network' proposes a method which erases regions and is trainable end-to-end. Secondly, the authors do not mention the recent work W-TALC which performs weakly-supervised action localization and outperforms STPN. It would be good to have a baseline against this method.\n\n- The qualitative results in this paper are confusing and not convincing. It is true that the MAAN's activation sequence shows peaks which correspond to the groundtruth and are not present in other methods. However, the MAAN activation sequence also shows several extra peaks not present in other methods and also not present in the groundtruth; therefore it looks like it is keener to predict the presence of the action, causing more true positives, but also more false positives. It would be good to see some discussion of these failure cases and/or more qualitative results. The current figure could be easily compressed by only showing one instance of the ground-truth instead of one next to each method.\n\nI like the idea of the paper; however, I am currently unconvinced by the results that this is the correct method to solve the problem.\n" ]
[ -1, -1, -1, -1, 5, 6, 3 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "r1lyqwftCQ", "rkxNicb5nm", "Syec-wnqn7", "HkgzxyE0hm", "iclr_2019_HkljioCcFQ", "iclr_2019_HkljioCcFQ", "iclr_2019_HkljioCcFQ" ]
iclr_2019_HkxKH2AcFm
Towards GAN Benchmarks Which Require Generalization
For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be ``won'' by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation, implement an example black-box metric based on these ideas, and validate experimentally that it can measure a notion of generalization.
accepted-poster-papers
The paper argues for a GAN evaluation metric that needs sufficiently large number of generated samples to evaluate. Authors propose a metric based on existing set of divergences computed with neural net representations. R2 and R3 appreciate the motivation behind the proposed method and the discussion in the paper to that end. The proposed NND based metric has some limitations as pointed out by R2/R3 and also acknowledged by the authors -- being biased towards GANs learned with the same NND metric; challenge in choosing the capacity of the metric neural network; being computationally expensive, etc. However, these points are discussed well in the paper, and R2 and R3 are in favor of accepting the paper (with R3 bumping their score up after the author response). R1's main concern is the lack of rigorous theoretical analysis of the proposed metric, which the AC agrees with, but is willing to overlook, given that it is nontrivial and most existing evaluation metrics in the literature also lack this. Overall, this is a borderline paper but falling on the accept side according to the AC.
val
[ "BkxqKQQ527", "SJxruT-rCm", "HkgTQTbBAm", "HJlDkTZHCQ", "H1gq7h-BAm", "Bkx0g3Yn27", "HJl4hUgtnX" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe paper looks at the problem of benchmarking models that unconditionally generate images. In particular they focus on GAN models and discuss the Inception Score (IS) and Fréchet Inception Distance (FID) metrics. The authors argue that a good benchmark should not have a trivial solution (e.g. memorising the dataset) and find that a necessary condition for such a metric is a large number of samples. They also find that for IS and FID , a GAN is outperformed by a model that memorises the dataset, while a method based on neural network divergences (NND) does not show the same behaviour. NND works by training a discriminative model to discriminate between samples of the generative model and samples from a held out test set. The poorer the discriminative model performs, the better the generative model is.\n\nThe authors show a range of results using a CNN based divergence: on PixelCNN++, GANs, overfitted GANs, WGAN-GP and conclude that it’s a better metric than IS/FID at the expense of requiring much more computation to evaluate. They also perform a test with limited compute and show that the results correlate well with a bigger dataset, but show some bias.\n\nReview:\nThe paper is well written, with a clear description of the properties a good benchmark should have, an analysis of the current solutions and their shortcomings and an extensive experimental evaluation of the CNN divergence metric. The authors also compared with non GAN methods and experimented with small datasets, both are not necessarily within scope but a welcome addition. The authors also open source their code.\n\nIn the section “Outperforming Memorization”, the authors mention a way to tune capacity of the “critic” network and influence its ability to overfit on the sample. This means that if someone wants to compare the generalisation and diversity of samples between GANs, they would need to train the exact same critic CNN to be able to make a comparison. However the authors do not provide any principled way to determine the right size of the \"critic\" network. In general, given evaluating the metric requires training a network from scratch, it will be very difficult to make this consistent. This makes the proposed benchmark more impractical to use than its alternatives.\n\nIn the section “training against the metric”, the authors mention that a main criticism is the fact that a GAN directly optimises for the NND loss. In table 3 we indeed see that this is the case, however the authors argue that perhaps the GAN is simply the better model. I am worried by the fact that both PixelCNN++ and IAF-VAE perform worse than the training set on this benchmark. It seems like this particular benchmark would then work well specifically for GANs, but would (still) not allow us to compare with models trained using maximum likelihood.\n\nIn conclusion, I think the paper is well written and the authors clearly make progress towards a dependable benchmark for GANs. The paper does not introduce any new method, but instead has a thorough analysis and discussion of current methods which is worthwhile by itself.\n\nNits:\nPage 7, second paragraph, fifth line, spurious “q”\n\n########\nRevision\n\nI would like to thank the authors for a thoughtful revision and response. I have updated my score to a 7 and think this paper is a worthy contribution to ICLR. The new drawback section is well written and informative.", "Thank you very much for the review. 
We'd like to respond as follows:\n\n> I personally feel that if sample generation is the only goal, then this trivial algorithm is perfectly fine because, statistically, the empirical distribution is in many, though not all, ways, a good estimator of the underlying true probability measure (this is the idea that is used in the statistical technique of Bootstrap for example). \n\nWe absolutely agree! We write in the final paragraph \"In our work we assume that our final task is not usefully solved by memorizing the training set, but for many tasks such memorization is a completely valid solution. If so, the evaluation should reflect this...\"\n\n> However the underlying goal in unsupervised learning problems where GANs are used is hardly sample generation. The GANs also output a whole function in the form of a generative network which converts random samples into samples from the underlying generating distribution. This generative network is arguably more important and more useful than just the samples that it generates. An evaluation scheme for GANs should focus on the generative network directly rather than on a set of its generating samples. \n\nWe agree that learning a generative network with a specific structure is a very important task in unsupervised learning. The argument that GAN research should be steered away from sample generation is certainly interesting. However without taking an opinion on that argument, we observe that a significant number of strong papers have been oriented at the final task of unconditional sample generation (e.g. https://arxiv.org/abs/1710.10196, ICLR 2017 oral). Since presumably this trend will continue, we believe that it’s valuable to work towards proper benchmarks for this task. And developing proper benchmarks requires a definition of the task which is nontrivial, i.e. for which training set memorization isn’t a perfect solution.\n\n\n> A measure D_CNN is proposed as a benchmark. It must be remarked that D_CNN is not even properly defined (for example, there is a function \\Delta in its definition but it is never explained what this function is).\n\nWe give a detailed specification of D_CNN in Appendix D, and we’re releasing code along with this paper which will serve as a canonical reference. However, we think of our D_CNN as an example instantiation of the idea of NNDs -- as such, we don’t think the specifics are relevant to most of our experiments or conclusions.\n\n> D_CNN is a variant of the existing notion of Neural Network Divergences. Only a numerical study (with no theory) is done to illustrate the utility of D_CNN for evaluating samples generated by GANs. The entire paper is very anecdotal with very little rigorous theory.\n\nWe see sections 2-4 of our paper as a unification and expansion of existing theory from the particular point of view of whether an evaluation metric requires a large sample to be evaluated and whether neural network divergences satisfy this property. We believe this is a useful contribution which stands apart from the empirical results we present in Section 5.", "Thanks for taking the time to write this review. We'd like to respond to your points as follows:\n\n> (...) experimented with small datasets, both are not necessarily within scope but a welcome addition\n\nWe'd like to clarify why we consider the small-test-set experiment to be a crucial contribution. 
We've updated the paper (sections 3.3 and 4.1) to explain that a small test set might be hazardous specifically for NNDs, which are designed to require a large sample to estimate. Without evidence that the small-sample estimates correlate very well with the large-sample estimates, we wouldn't effectively be able to use NNDs for evaluation except in settings where our test set is much larger than our training set.\n\n> if someone wants to compare the generalisation and diversity of samples between GANs, they would need to train the exact same critic CNN to be able to make a comparison. (...) In general, given evaluating the metric requires training a network from scratch, it will be very difficult to make this consistent.\n\nYou’re absolutely right that it’s very difficult to reproduce network training identically across implementations and hardware. We’ve added a discussion of this problem in a section titled “Drawbacks of NNDs for Evaluation”. In short, NND-based evaluation will likely require standardized open-source hardware-independent implementations. In general, we don’t claim to have complete solutions for these problems - instead, we present a framework and a path forward for evaluating generative models based on samples alone. However, we do note that for our specific metric, CNN Divergence, the variance across multiple training runs of the critic network is quite small, as outlined in Appendix E.\n\n> However the authors do not provide any principled way to determine the right size of the \"critic\" network.\n\nUltimately, the best critic size will depend on the downstream application of the generative model. Since this downstream task is usually not well-defined theoretically, determining the “right” critic size by theory is a very difficult task and it’s perhaps best left as an empirical choice. More generally, we avoid attempting to prescribe hyperparameters or define a specific evaluation procedure in this work.\n\n> In table 3 we indeed see that this is the case, however the authors argue that perhaps the GAN is simply the better model. \n\nThis is a very important point; thanks for raising it. To clarify, we don't mean to suggest that the GAN is the \"best\" model in any aboslute sense. Instead, it simply is the model that performs best in terms of the CNN divergence. We believe the CNN divergence is more sensitive to certain properties of the learned distribution than, for example, log likelihood. Whether this means the GAN is better or worse will depend on the intended use of the generative model. We discuss this in a few places in the paper:\n\nIn Section 4.1, “Training Against The Metric”, we argue that the NND’s tendency to “unfairly” favor models trained against it appears to be mild compared to metrics like the Inception Score, which very greatly favors models trained against it, even though those models produce samples which resemble pure noise.\nIn Appendix A (newly added), we summarize and highlight new evidence for past arguments against any universal notion of a “best” metric or model: in short, different metrics always tend to prefer different models.\n\nWe note that some studies (e.g. https://arxiv.org/abs/1705.05263, https://arxiv.org/abs/1705.08868) have considered the performance of models trained against an NND in terms of log-likelihood by using a flow-based (invertible) generator and found that GAN training performs very poorly in terms of likelihood. This is a similar point to the one we make here - a model trained against one class of objective (e.g. 
via maximum likelihood) might not be expected to perform well against another class of objectives.\n\n> I am worried by the fact that both PixelCNN++ and IAF-VAE perform worse than the training set on this benchmark.\n\nAny useful metric will exhibit some trade-off between, for example, sample quality and diversity. In terms of why PixelCNN++ and IAF-VAE perform worse than the training set under CNN divergence, CNN divergence likely \"prefers\" sample quality to diversity to the extent that it prefers a small, perfectly realistic sample (i.e. the training set). We note that while the PixelCNN++ and IAF-VAE are certainly effective generative models, samples from those models are clearly distinguishable from the training set. We’ve updated the paper with a detailed discussion of this topic in Appendix A.\n\n> Nits\n\nThanks for catching this! We’ve fixed it in an update.\n", "Thank you for the thoughtful review! We'd like to respond to one point in particular:\n\n> On the down side, I think the proposed DNN metric is not exactly useful. It would be a subset of the metric that an MMD would give and it would focus only in some properties of the images but not on the whole distribution. So, if this metric does not capture the relevant aspects of the problem the GAN is trying to imitate, it will fail to provide that metric that we are looking for.\n\nWe agree completely that NNDs have inductive biases which cause them to ignore certain properties of the distribution: for example, our “CNN divergence” is likely to ignore small spatial shifts in its inputs. However, we actually see this as an advantage: NNDs let us design metrics which are sensitive only to the properties that are important for the final task, and invariant to the rest. We think this point is best made by Theis et al. (https://arxiv.org/abs/1511.01844), who argue that evaluation metrics should reflect the downstream task, and Huang et al. (https://arxiv.org/abs/1708.02511), who argue theoretically and empirically that NNDs in particular are good losses for generative modeling *because* of their inductive biases. We addressed this briefly in our section titled “Perceptual Correlation”, but we think it definitely deserves a longer discussion -- so we’ve updated the paper with a separate section (Appendix A) which clarifies this point in detail with examples and references.\n\nConcerning the MMD in particular, we review past work on its use for model evaluation in section 4. We’ve updated that paragraph to add an important point: the MMD with a generic kernel tends not to be very discriminative in high dimensions. Reddi et al. (https://arxiv.org/abs/1406.2083) show that the power of a two-sample test based on the MMD decreases polynomially in high dimensions, for many types of distributions. NNDs, on the other hand, leverage the inductive biases of neural networks in order to produce a discriminative metric even in high dimensions.\n", "We’d like to thank all the reviewers for your thoughtful comments. We’ve made the following significant updates to our paper based on your feedback:\n\n- Clarified throughout that our goal is to present a promising approach and motivate future work, rather than directly to propose a benchmark. 
To that end, added section 4.1, \"Drawbacks of NNDs for Evaluation\".\n- Added a detailed discussion of the need for evaluation metrics tailored to a specific task in Appendix A, \"The Importance of Tradeoffs in Evaluation Metrics\".\n- Clarified the situation with bias from a small test set in sections 3.3 and 4.1.\n\nAdditionally, we've responded to your comments individually below.", "This paper is quite interesting as it tries to find a new metric for evaluating GANs. IS is a terrible metric, as memorization would achieve a high score and test log-likelihood cannot be evaluated. I like the long discussion at the beginning of the paper about what a metric for evaluating implicit generative models would need to be a valid and useful metric. This problem is of great importance for GANs, as proving that GANs solve the density estimation problem would be extremely hard, and making sure we are close to a good solution with any finite sample even more so (I am talking about non-trivial examples in high dimensions). It is clear that in order to make GANs, in particular, or implicit models, in general, useful, we need to find metrics that would allow us to achieve progress. This paper is a step in the direction of what is needed. In this sense I think the paper can be a good starting point for the discussion that we are not having right now, because we are too focused on making sure they converge, but not how they can be useful. \n\nOn the down side, I think the proposed DNN metric is not exactly useful. It would be a subset of the metric that an MMD would give and it would focus only on some properties of the images but not on the whole distribution. So, if this metric does not capture the relevant aspects of the problem the GAN is trying to imitate, it will fail to provide that metric that we are looking for. \n\nI would see this paper as a great workshop paper, in the sense of old-fashioned NIPS workshops in which new ideas were tested and discussed. But it is clearly not like the polished papers that we see in conferences these days. Bernhard Schoelkopf told me once, after receiving the ICML reviews, “People now focus more on reasons to reject a paper than in reason for accepting a paper.” (note that I am quoting from memory; the bad use of English is mine, not his). There are many reasons to reject this paper, but also some reasons to accept the paper. \n", "The paper aims to come up with a criterion for evaluating the quality of samples produced by a Generative Adversarial Network. The main goal is that the criterion should not reward trivial sample generation algorithms such as the one which generates samples uniformly at random from the samples in the training set. I personally feel that if sample generation is the only goal, then this trivial algorithm is perfectly fine because, statistically, the empirical distribution is in many, though not all, ways, a good estimator of the underlying true probability measure (this is the idea that is used in the statistical technique of Bootstrap for example). However the underlying goal in unsupervised learning problems where GANs are used is hardly sample generation. The GANs also output a whole function in the form of a generative network which converts random samples into samples from the underlying generating distribution. This generative network is arguably more important and more useful than just the samples that it generates. An evaluation scheme for GANs should focus on the generative network directly rather than on a set of its generating samples. 
\n\nEven if one were to regard the premise of the paper as valuable, the paper still does a poor job meeting its objective. A measure D_CNN is proposed as a benchmark. It must be remarked that D_CNN is not even properly defined (for example, there is a function \\Delta in its definition but it is never explained what this function is). D_CNN is a variant of the existing notion of Neural Network Divergences. Only a numerical study (with no theory) is done to illustrate the utility of D_CNN for evaluating samples generated by GANs. The entire paper is very anecdotal with very little rigorous theory. " ]
[ 7, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_HkxKH2AcFm", "HJl4hUgtnX", "BkxqKQQ527", "Bkx0g3Yn27", "iclr_2019_HkxKH2AcFm", "iclr_2019_HkxKH2AcFm", "iclr_2019_HkxKH2AcFm" ]
iclr_2019_HkxLXnAcFQ
A Closer Look at Few-shot Classification
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
accepted-poster-papers
This paper provides a number of interesting experiments for few-shot learning using the CUB and miniImagenet datasets. One of the especially intriguing experiments is the analysis of backbone depth in the architecture, as it relates to few-shot performance. The strong performance of the baseline and baseline++ are quite surprising. Overall the reviewers agree that this paper raises a number of questions about current few-shot learning approaches, especially how they relate to architecture and dataset characteristics. A few minor comments: - In table 1, matching nets are mistakenly attributed to Ravi and Larochelle. Should be Vinyals et al. - The notation for cosine similarity in section 3.2 is odd. It looks like you’re computing some cosine function of two vectors which doesn’t make sense. Please clarify this. - There are a few results that were promised after the revision deadline, please be sure to include these in the final draft.
train
[ "HklXEE2FeE", "B1gwuwZLeV", "S1lH_j94gN", "S1eOn4vQeV", "B1lauBoZlE", "Byl1iLIhRX", "HylrSpd9RQ", "S1ehsqeSAX", "BJlgV0er07", "rklgcRlSC7", "H1xDM6erA7", "B1l0Vper0X", "HyeAj7bFnX", "HJlAtk3vhm", "r1xNrc0Ts7" ]
[ "public", "author", "public", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the answers. \nI do appreciate this work. It provides rigorous experiments.\n\n", "Hi, thanks for your questions! We reply to the three questions below.\n\n1. Did the authors run your learning for matching networks, prototypical networks, maml, and relation networks with episodic training (sampled from N-classes and K-Shot every episode) from plain networks(conv and resnet)? Or did you train from baseline networks(pre-trained)? \n\nYes, we train all the networks (including matching networks, prototypical networks, MAML, and relation networks) with episodic training from the plain networks. All the networks were randomly initialized with He initialization, the standard initialization used in ResNet.\n\n2. What is the number of iteration here? is it the number of episode? or the number you learn the feature extractor (baseline)?\n \nYes, the number of iterations refers to the number of episodes. Thanks for pointing this out. We will use the number of episodes in the revised manuscript for clarity.\n\n3. In MAML paper, they stated that using 64 filters may cause overfitting, do the authors suffer the same thing as you change the backbone of MAML?\n\nWe do not see the overfitting effect from observing the validation loss. We believe that it is due to the data augmentation used in all our experiments.\n", "Hi,\n\nThis is a good insight for different backbones impacting the performance in few-shot classification.\n\nI want to verify several things here.\n\n1. Did the authors run your learning for matching networks, prototypical networks, maml, and relation networks with episodic training (sampled from N-classes and K-Shot every episode) from plain networks(conv and resnet)? Or did you train from baseline networks(pre-trained)? \n2. What is the number of iteration here? is it the number of episode? or the number you learn the feature extractor (baseline)?\n3. In MAML paper, they stated that using 64 filters may cause overfitting, do the authors suffer the same thing as you change the backbone of MAML?\n\nThanks in advance.", "Thank you, we will include them in the appendix of the revised manuscript.", "Thank the authors for the further response. The matrix of different settings seems informative. The authors are encouraged to include it in the paper. ", "Thanks R1 for the reply. Our goal of showing the cross-domain adaptation is to highlight the limitations of existing few-shot classification algorithms in handling domain shift. 
We believe that our unified experimental setup will facilitate future efforts along this direction.\n\nIn the following, we also provide a taxonomy of existing work on related topics based on the availability of labeled/unlabeled data in the target domain; we will add this table to the appendix of the camera-ready version to provide a more complete picture for the readers.\n\nDomain adaptation (DA): Evaluated on the *same* classes\n\n                                             Source domain            Target domain\n                             Domain shift    Labeled    Unlabeled     Labeled (few)    Unlabeled\nSupervised DA\n[Saenko et al., ECCV 2010;\n Motiian et al., NIPS 2017]       V             V           -              V               -\n\nSemi-supervised DA                V             V           -              V               V\n\nUnsupervised DA                   V             V           -              -               V\n\n\nFew-shot classification: Evaluated on the *novel* classes\n\n                                             Base class               Novel class\n                             Domain shift    Labeled    Unlabeled     Labeled (few)    Unlabeled\nFew-shot                          -             V           -              V               -\n\nCross-domain few-shot\n[Ours (third setting);\n Dong et al. ECML-KDD 2018]       V             V           -              V               -\n\nSemi-supervised few-shot\n[Ren et al. ICLR 18]              -             V           V              V               V", "I appreciate the authors' efforts in improving the experiments. Regarding the third setting (cross-domain adaptation), I still think it is not necessary to introduce it to few-shot learning, at least not now. Instead, it is probably better to focus on and try to advance the conventional problem setup for now. Moreover, as the authors point out, the third setting is related to several previously studied directions. I would recommend the authors to discuss those in the paper --- it is probably not a good idea to simply remove Motiian et al. (2017) in the revised PDF. ", "Thanks for your comments! Our responses are as follows:\nQ1: If a relatively simple modification could improve the baselines, are there simple modifications available to other meta-learning algorithms being investigated? \n\nA1: The simple modification we made for the baseline approach is to replace the softmax layer with a distance-based classifier. However, among the other meta-learning algorithms, only the MAML method is applicable to this modification. Both ProtoNet and MatchingNet already use distance-based classifiers in their algorithms. RelationNet has its own relation module, so it is not applicable for this modification. While MAML could adopt this strategy, we did not include it in our experiments since our primary goal is not to improve one specific method. \n\nQ2: If the other algorithms are not as good as they claimed, can you give any insights on why and what to improve?\n\nA2: \nMeta-learning algorithms for few-shot classification are not as good as claimed because of the following two aspects:\n\nFirst, in the CUB setting, the gap among the algorithms diminished when using a deeper backbone. That is, with a deeper feature backbone, the improvement from the different meta-learning algorithms becomes less significant. Our results suggest that deeper backbones and meta-learning algorithms both aim to reduce intra-class variation for improving few-shot classification accuracy. Consequently, when intra-class variation has been dramatically reduced using a deeper backbone, the contribution from meta-learning becomes less significant.\n\nSecond, in the CUB -> mini-ImageNet setting where a larger domain shift exists, the Baseline method outperforms all meta-learning algorithms. 
That is, existing meta-learning algorithms are not robust to larger domain shift. As discussed in section 4.4, while meta-learning methods learn to learn from the support set during the meta-training stage, all of the base support sets are still within the same dataset. Thus, these algorithms did not learn how to learn from a support set with large domain shift.\n\nWith our results, we encourage the community to tackle the challenge of potential domain shifts in the context of few-shot learning. We will release the source code and evaluation setting that will facilitate future research directions.\n", "Q4: In the Matching Nets paper, there is a good baseline classifier based on k-NNs. Do you know how does that one compares to Baseline and Baseline++ models if used with the same architecture for the feature extractor?\n\nA4: Here we show our 1-shot and 5-shot accuracy of Baseline and Baseline++ with the softmax and 1-NN classifier on the mini-ImageNet dataset with a Conv4 backbone. We only include the result of k = 1 with cosine distance to match the setting of Matching Nets paper.\n\n1-shot\n \t\t softmax\t\t 1-NN (cosine distance)\nBaseline\t 42.11% +- 0.71%\t44.18% +- 0.69%\nBaseline++ 48.24% +- 0.75%\t49.57% +- 0.73%\n\n5-shot\n \t\t softmax\t\t 1-NN (cosine distance)\nBaseline\t 62.53% +- 0.69%\t56.68% +- 0.67%\nBaseline++ 66.43% +- 0.63%\t61.93% +- 0.65%\n\nAs shown above, using 1-NN classifier has better performance than that of using the softmax classifier in 1-shot setting, but softmax classifier is better in 5-shot setting instead. We note that that the number presented here are not directly comparable to the results reported in the Matching Nets paper because we use a different “mini-ImageNet” separation. In this paper, we follow the data split provided by [Ravi et al. ICLR 2017], which is used in most few-shot papers. We have included the result in the appendix of the revised paper.\n\nQ5: The conclusion from the network depth experiments is that “gaps among different methods diminish as the backbone gets deeper”. However, in a 5-shot mini-ImageNet case, this is not what the plot shows. Quite the opposite: the gap increased. Did I misunderstand something? Could you please comment on that?\n\nA5: Sorry for the confusion. As addressed in 4.3, gaps among different methods diminish as the backbone gets deeper *in the CUB dataset*. In the mini-ImageNet dataset, the results are more complicated due to the domain difference. We further discuss this phenomenon in Section 4.4 and 4.5. We have clarified related texts in the revised paper. \n", "Thanks for your opinions! Our responses are as follow:\nQ1: Is there an overlap between CUB and mini-ImageNet? If so, then domain shift experiments might be too optimistic or even then it is not a big deal?\n\nA1: There are only 3 out of 64 base classes that are *birds* in the mini-ImageNet dataset. Furthermore, these three categories (house_finch, robin, toucan) are different from the 200 bird categories in CUB. Thus, a large domain shift still exists between the mini-ImageNet and the CUB dataset.\n\nQ2: The paper includes much redundant information which could go to the appendix in order to not weary the reader. For instance, everything related to Table 1. There is also some overlap between Section 2 and 3.3, while MAML, for instance, is still not well explained. Also, tables with too many numbers are difficult to read, e.g. Table 4. \n\nA2: Thanks for the comments. 
\nFirst, our purpose for showing Table 1 is two-fold: 1) it validates our reimplementation by comparing results from the reported numbers and 2) it shows that the implementations of the Baseline method in prior works are underestimated.\n\nSecond, we have included a more detailed description of MAML in the revised paper. \n\nThird, thanks for the suggestion. To improve the readability, we have modified Table 4 in the original paper to a figure (see Figure 5 in the revised paper). We include the detailed numbers in the appendix for reference.\n\nQ3: Many of the few-shot learning papers use Omniglot, so I think it would be a valuable addition to the appendix. Moreover, there exists a cross-domain scenario with Omniglot-> MNIST which I would also like to see in the appendix.\n\nA3: Thanks for the suggestions. We did not include Omniglot because its performance has been saturated in most of the recent work (~99%). We will add the results to the appendix in the camera-ready version for completeness. We agree that the Omniglot-> MNIST experiment will be a good addition to the paper. We will also add the results to the appendix in the camera-ready version.\n", "Q3: Another concern is that the same number of novel classes is used in the training and the testing stage. A more practical application of the learned meta model is to use it to handle different testing scenarios.\n\nA3: Thanks for pointing this out. As suggested, we conduct the experiments of 5-way meta-training and N-way meta-testing (where we vary the number of N to be 5, 10, and 20) to examine the effect of handling testing scenarios that are different from training. We compare the methods Baseline, Baseline++, MatchingNet, ProtoNet, and RelationNet. Note that we are unable to apply the MAML method as MAML learns the initialization for the classifier and can thus only be updated to classify the same number of classes.\n\nWe show the experimental results on mini-ImageNet with 5-shot meta-training as follows.\n\nBackbone: Conv4\t\t\t\n\t 5-way test\t 10-way test\t 20-way test\nBaseline\t 62.53% +- 0.69%\t 46.44% +- 0.41%\t 32.27% +- 0.24%\nBaseline++\t66.43% +- 0.63%\t *52.26% +- 0.40%*\t*38.03% +- 0.24%*\nMatchingNet\t63.48% +- 0.66%\t 47.61% +- 0.44%\t 33.97% +- 0.24%\nProtoNet\t64.24% +- 0.68%\t 48.77% +- 0.45%\t 34.58% +- 0.23%\nRelationNet\t*66.60% +- 0.69%*\t47.77% +- 0.43%\t 33.72% +- 0.22%\n\t\t\t\nBackbone: ResNet18\t\t\t\n\t 5-way test\t 10-way test\t 20-way test\nBaseline\t 74.27% +- 0.63%\t 55.00% +- 0.46%\t 42.03% +- 0.25%\nBaseline++\t *75.68% +- 0.63%*\t*63.40% +- 0.44%*\t *50.85% +- 0.25%*\nMatchingNet\t 68.88% +- 0.69%\t 52.27% +- 0.46%\t 36.78% +- 0.25%\nProtoNet\t 73.68% +- 0.65%\t 59.22% +- 0.44%\t 44.96% +- 0.26%\nRelationNet\t 69.83% +- 0.68%\t 53.88% +- 0.48%\t 39.17% +- 0.25%\n\nOur results show that for classification with a larger-way (e.g., 10 or 20-way) in the meta-testing stage, the proposed Baseline++ compares favorably against other methods in both shallow or deeper backbone settings.\n\nWe attribute the results to two reasons. \n1) To perform well in a larger N-way classification setting, one needs to further reduce the intra-class variation to avoid misclassification. 
Thus, in both shallow and deeper backbone settings, Baseline++ has better performance than Baseline.\n\n2) As meta-learning algorithms were trained to perform 5-way classification in the meta-training stage, the performance of these algorithms may drop significantly when increasing the N-way in the meta-testing stage because the tasks of 10-way or 20-way classification are harder than that of 5-way classification. \n\nOne may address this issue by performing a larger N-way classification in the meta-training stage (as suggested in [Snell et al. NIPS 2017]). However, this may encounter the issue of memory constraints. For example, to perform a 20-way classification with 5 support images and 15 query images in each class, we need a batch size of 400 (20 x (5 + 15)) that must fit in the GPUs. Without special hardware parallelization, the large batch size may prevent us from training models with deeper backbones such as ResNet. We have included the result in the appendix of the revised paper.\n\nQ4: It is misleading by the following: “Very recently, Motiian et al. (2017) addresses the few-shot domain adaptation problem.”...\n\nA4: Thanks for the correction. Indeed, both Saenko et al. and Gong et al. address the supervised domain adaptation problem with only a few labeled instances prior to [Motiian et al., NIPS 2017]. \n\nOn the other hand, we would like to point out another research direction. Very recently, the method in [Dong et al. ECML-PKDD 2018] addresses the few-shot problem where both the domain *and* the categories change. This work is more related to our setting, as we also consider novel category accuracy in few-shot classification under domain differences. We have corrected the statement in the revised paper.", "Thanks for your comments! Our responses are as follows:\nQ1: “Using validation set to determine the free parameters...”\n\nA1: Thank you for the comment. In our paper, we did use the validation set to select the best number of training iterations for meta-learning methods. Specifically, the exact iterations for experiments on the mini-ImageNet in the 5-shot setting with a four-layer ConvNet are:\n\n- ProtoNet: 24,600 iterations\n- MatchingNet: 35,300 iterations\n- RelationNet: 37,100 iterations\n- MAML: 36,700 iterations\n\nWe have clarified this in the revised paper. \n\nOn the other hand, we were not able to use the validation set for the Baseline and Baseline++. Note that the validation set for the few-shot problem is split by class, and does not split the data within a class. With these validation classes in the meta-training stage, one can validate how well the model can predict novel classes in the meta-testing stage. However, the Baseline and Baseline++ methods cannot predict validation classes, as they have a fixed softmax layer to predict base classes. On the other hand, for meta-learning methods, the class to predict is conditioned on the class in the support set. Thus, with the support set from the validation classes, meta-learning methods can predict the validation classes. As an alternative for Baseline and Baseline++, we directly train for 400 epochs. We observe convergence from the training curve in both the Baseline and Baseline++ methods.\n\nFor the learning rate and optimizer, we use Adam with an initial learning rate of 0.001 for all of the methods because the ProtoNet, RelationNet, and MAML methods all use the same setting as described in the respective papers. However, we cannot find information about the learning rate for MatchingNet. 
The learning rate of 0.001 is also given as a default hyper-parameter for Tensorflow and PyTorch. The results in Table 1 of our paper show that our results reproduce the performance presented in the original papers.\n\nFor other hyper-parameters such as the network depth in the backbone architecture, we have a detailed comparison as shown in Section 4.3 of the paper.\n\nQ2: The results of RelationNet are missing in Table 4.\n\nA2: Adapting RelationNet using training data in the support set (from novel classes) at the meta-testing stage is non-trivial. As the relation module in RelationNet takes convolution maps as input, we are not able to replace the relation module with a softmax layer as we do for the ProtoNet and MatchingNet. \n\nAs an alternative, at the meta-testing stage, we split the training data in the novel class into support and query data and use them to update the relation module. Specifically, we take the RelationNet with a ResNet-18 feature backbone. We randomly split the few training data in the novel class into 3 support and 2 query examples to finetune the relation module for 100 epochs. The results on CUB, mini-ImageNet and mini-ImageNet -> CUB are shown below.\n\n\t\t CUB\t\t\tmini-ImageNet\tmini-ImageNet -> CUB\noriginal\t 82.75% +- 0.58%\t69.83% +- 0.68%\t57.71% +- 0.73%\nadapted\t 83.17% +- 0.57%\t70.49% +- 0.68%\t58.54% +- 0.72%\n\nIn all three cases, adapting the relation module using the support data in the meta-testing stage improves the results. However, the improvement is somewhat marginal. We have included the additional results in the revised paper.\n", "The paper tried to propose a systematic/consistent way for evaluating meta-learning algorithms. I believe this is a great direction of research as the meta-learning community is growing quickly. However, my question is: if a relatively simple modification could improve the baselines, are there simple modifications available to other meta-learning algorithms being investigated? If the other algorithms are not as good as they claimed, can you give any insights on why and what to improve?", "This paper gives a nice overview of existing works on few-shot learning. It groups them into some intuitive categories and meanwhile distills a common framework (Figure 2) employed by the methods. Moreover, the authors selected four of them, along with two baselines, to experimentally compare their performances under a cleaned experiment protocol. \n\nThe experiments cover three few-shot learning scenarios respectively for generic object recognition, fine-grained classification, and cross-domain adaptation. While I do *not* think the third scenario is “more practical”, it is certainly nice to have it included in the experiments. \n\nThe experiment setup is unfortunately questionable. Since there is a validation set, one should use it to determine the free parameters (e.g., the number of epochs, learning rates, etc.). However, it seems like the same set of free parameters is used for different methods, making the comparison unfair because this set may favor some methods and yet hurt the others. \n\nThe results of RelationNet are missing in Table 4.\n\nAnother concern is that the same number of novel classes is used in the training and the testing stage. A more practical application of the learned meta model is to use it to handle different testing scenarios. There could be five novel classes in one scenario, 10 novel classes in another, and 100 in the third, etc. 
The number of labeled examples per class may also vary from one testing scenario to another. \n\nIt is misleading by the following: “Very recently, Motiian et al. (2017) addresses the few-shot domain adaptation problem.” There are a few variations in domain adaptation (DA). The learner has access to the fully labeled source domain and a small set of labeled target examples in supervised DA, to the source domain, a couple of labeled target examples, and many unlabeled target examples in semi-supervised DA, and to the source domain and many unlabeled target data points in the unsupervised DA. These have been studied long before Motiian et al. (2017), for instance in the works of Saenko et al. (2010) and Gong et al. (2013). \n\n[ref] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In European Conference on Computer Vision 2010 Sep 5 (pp. 213-226). Springer, Berlin, Heidelberg.\n\n[ref] Gong B, Grauman K, Sha F. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In International Conference on Machine Learning 2013 Feb 13 (pp. 222-230).\n\nOverall, the paper is well written and may serve as a nice survey of existing works on few-shot learning. The unified experiment setup can facilitate future research for fair comparisons, along with the three testing scenarios. However, I have some concerns as above about the experiment setups and hence also the conclusions. ", "There are a few things I like about the paper. \n\nFirstly, it makes interesting observations about the evaluation of the few-shot learning approaches, e.g. the underestimated baselines, and compares multiple methods in the same conditions. In fact, one of the reasons for accepting this paper would be to get a unified and, hopefully, well-written implementation of those methods. \n\nSecondly, I like the domain shift experiments, but I have the following question. The description of the CUB says that there is an overlap between CUB and ImageNet. Is there an overlap between CUB and mini-ImageNet? If so, then domain shift experiments might be too optimistic or even then it is not a big deal?\n\nOne thing I don’t like is that, in my opinion, the paper includes much redundant information which could go to the appendix in order to not weary the reader. For instance, everything related to Table 1. There is also some overlap between Section 2 and 3.3, while MAML, for instance, is still not well explained. Also, tables with too many numbers are difficult to read, e.g. Table 4. \n\n---- Other notes -----\n\nMany of the few-shot learning papers use Omniglot, so I think it would be a valuable addition to the appendix. Moreover, there exists a cross-domain scenario with Omniglot-> MNIST which I would also like to see in the appendix. \n\nIn the Matching Nets paper, there is a good baseline classifier based on k-NNs. Do you know how that one compares to Baseline and Baseline++ models if used with the same architecture for the feature extractor?\n\nThe conclusion from the network depth experiments is that “gaps among different methods diminish as the backbone gets deeper”. However, in a 5-shot mini-ImageNet case, this is not what the plot shows. Quite the opposite: the gap increased. Did I misunderstand something? Could you please comment on that?\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 4 ]
[ "B1gwuwZLeV", "S1lH_j94gN", "iclr_2019_HkxLXnAcFQ", "B1lauBoZlE", "Byl1iLIhRX", "HylrSpd9RQ", "H1xDM6erA7", "HyeAj7bFnX", "r1xNrc0Ts7", "r1xNrc0Ts7", "HJlAtk3vhm", "HJlAtk3vhm", "iclr_2019_HkxLXnAcFQ", "iclr_2019_HkxLXnAcFQ", "iclr_2019_HkxLXnAcFQ" ]
iclr_2019_HkxStoC5F7
Meta-Learning Probabilistic Inference for Prediction
This paper introduces a new framework for data efficient and versatile learning. Specifically: 1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. 2) We introduce Versa, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. Versa substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training. 3) We evaluate Versa on benchmark datasets where the method sets new state-of-the-art results, and can handle arbitrary number of shots, and for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view reconstruction task.
accepted-poster-papers
The paper proposes a decision-theoretic framework for meta-learning. The ideas and analysis are interesting and well-motivated, and the experiments are thorough. The primary concerns of the reviewers have been addressed in new revisions of the paper. The reviewers all agree that the paper should be accepted. Hence, I recommend acceptance.
train
[ "H1ghXV0kgN", "ByldnORAJV", "SygBAooCkE", "B1gDhDl0y4", "HyxhLmE6JN", "r1eb9eOnyN", "rJe6s67O6m", "r1xx7y4dTQ", "Hye5i0QOpQ", "rkxV4A7_am", "Byxh3JK9nQ", "r1llzOxFhX", "Syewq7hpoX" ]
[ "author", "public", "author", "public", "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the follow up questions.\n\nThe experiments with 1 task per batch yield the following result on 5-way 5-shot learning with Versa: (66.75 + / - 0.9)%. This is within the error bars of the current numbers in the paper, and above Prototypical networks trained and tested on 5-way (65.77 + / - 0.7)%. Further, this result was achieved quickly in response to this discussion without any tuning of the optimization hyper-parameters (learning rate etc.) to this new setting. Based on past experience with Versa, we are confident that this performance can be further improved. \n\nWe will similarly investigate the performance in the 1-shot setting, and report final numbers for the next iteration of the paper. We would be happy to include the numbers achieved with both Versa training procedures above, as well as the numbers achieved by both training procedures achieved by Proto-nets in Table 3, along with a thorough discussion about the differences induced between the two procedures in the case of Versa and Prototypical networks.\n\nWe do disagree with your statement that the two settings (5-way classification with 4 tasks per batch and 20-way classification) are the same. A 20-way classification task is not equivalent to four 5-way classification tasks, namely in the independence assumptions induced over the predictive distributions by the different normalizing constants. This is consistent with the fact that, in contradiction to what your claims would imply, Versa's performance on the meta-test set is not impacted by reducing the number of tasks per batch, while prototypical networks significantly benefits from higher-way training. \n\nFinally, several members of our team are away (or are about to depart) for the holidays. So, whilst we're very happy to discuss these matters further, we will only be able to do so after the Christmas break. We would be very happy for you to reach out to us directly at this time so that we can continue discussion in a more direct way with a quicker response time. Thank you again for your input, and happy holidays.", "Thank you for your detailed replies.\n\nFirst of all, please, provide the numbers. There are a lot of phrases like \"within error bars\", \"no substantial effect\". Please, be specific. While your model with 1 task per meta-batch might be within the error bars of your current submission, it might no longer be within the error-bars of current SOTA results on some of the benchmarks. \n\nMatching to MAML protocol is understandable, however, the Prototypical Networks are trained using a single episode (=single task per meta-batch) per shared model update. If you insist on your current training protocol, the Prototypical Networks need to be trained using the same number of episodes per single task-specific (in fact shared!) update as you use. You need to truly match Prototypical Networks protocol to the one you and MAML use. Otherwise report your numbers with a single task per meta-batch protocol, whether they are within the error-bars of your current submission or not. \n\nIt certainly makes sense to take into consideration the number of query samples used for a single task-specific model (in fact HEAVILY shared across different tasks). The more query samples you use, the more regularised is the training of the shared feature extractor that both VERSA and Prototypical Nets utilise. You use from 15 * 20 to 15 * 40 samples for miniImageNet while the Prototypical Nets from your table only use 15 * 5. 
This is the advantage you give yourself by comparing your multitask-per-meta-batch protocol to the single-task-per-meta-batch protocol in Prototypical Networks.\n\nIn addition to that, if we are talking about using truly comparable settings for all models, similarly to MAML and the Prototypical Nets, your feature extractor should consist of 4 convolutional blocks for miniImageNet dataset, not 5, and the number of features better also be similar. I understand that this might not change the relative positions, however, it should be interesting to see what would be the margin between the results in this case. This would help to evaluate the advantage of using additional network compared to the Prototypical Nets.\n\nAnother approach would be to ignore all those minor differences in the setups and objectives and use acknowledged peer-reviewed best results for all models and try to make your model as good as possible. While it is good to have an additional table that compares the models in the almost exact settings, different models might benefit from different tricks, so these tricks should not be ignored in final assessment of the model. Absence of the best result by the Prototypical Networks together with your claims about setting SOTA is certainly misleading. It is your job to demonstrate the best potential of your model, so there needs to be a comparison to the Prototypical Networks at their best, even if it requires using additional protocol (even though I still consider your protocol to be interpretable within the protocol of the Prototypical Nets, you repeated my explanation with your own words confirming the the difference is only in the objective). The lack of a valid comparison is certainly a drawback.\n\nIf one discovers they can use some valid trick to improve the results of their model, they should not avoid using it. The Prototypical Networks are learning a more complicated task during the training stage than any of the other models in the table while using the same amount of data, and they should not be punished for that. It is not that the size of their model was drastically different (in fact, it is the smallest since they only use the same feature extractor as you and MAML do) or that the quality of their model was increasing linearly with the larger number of the classes used during the training. There is a trade-off, they explored it and used for regularising their model, in the same way other people use different architectures or objectives.", "Thanks for the follow up question!\n \nExisting comparisons:\n- As is shown in Section 4, prototypical networks is a special case of Versa, allowing for a direct comparison of the two when the training procedure is equivalent.\n- Our training procedure (and that of other methods presented in Table 3) is equivalent to the \"matched training condition\" in prototypical nets (i.e., train and test \"way\" are the same).\n- Experimental protocols (and in particular, makeup of the mini-batches) were found not to have a substantial effect on the performance of Versa. The experimental protocols described in the paper (number of tasks per batch, size of test sets, etc') closely follow that of MAML. 
See below for more details.\n- For example, we ran Versa with a meta-training batch of 1 (directly equivalent to the prototypical network setup), and found the final accuracy is within error-bars of the result of the submission, and still above the errors bars of prototypical networks trained and tested on 5-way.\n- We therefore consider the comparisons presented to be direct and fair.\n\nComparisons using higher way training\n- We have not investigated the effects of training with a higher way than testing.\n- This changes the objective function due to the normalisation constant in the softmax e.g. this would have a single normaliser for all 20 classes if considered together, versus separate ones for each of the tasks if the classes were split into 4 tasks of 5 classes each.\n- This is the key difference between these two training conditions and is not something Versa currently exploits.\n- We agree that it would be interesting to test whether using this modified objective improved Versa and indeed whether the same idea could lead to improvements in other methods too.\n\nMore details on the training protocol\n- For both Omniglot and miniImageNet, experiments demonstrated that the performance of Versa on the meta-test set is not sensitive to the number of tasks per batch during training.\n- As such, the experimental protocol (number of tasks per batch and the number of query points per batch) for both miniImageNet and Omniglot were chosen to match the MAML protocol.\n- We also ran the experiments with meta-training batch of size 1, which is directly equivalent to the prototypical network setup. \n- The performance in these experiments was very similar to our best results (within error bars), and significantly better than what is reported by prototypical nets for the same setup.\n- In summary the final performance was not found to be particularly sensitive to these choices.\n- Arguably, this is to be expected as this is equivalent to selecting the size of the mini-batch in conventional learning. In particular, our model makes an independence assumption across tasks (given \\theta).\n- As prototypical networks are a special case of Versa (with the amortization network set to identity around the mean encoding), we expect similar findings to hold for this model.", "In your submission it is stated that during training you used 16 tasks per batch for all four Omniglot setups, 4 tasks per batch for 5-shot 5-way task on miniImageNet and 8 tasks per batch for 1-shot 5-way task on miniImageNet. Technically, if we take a look at the features right before the softmax, due to the context independence assumption between the posteriors of different classes there is no difference between running four parallel 5-way tasks and one 20-way task, as long as you don't update the weights while running these four tasks. So your learning procedure can be viewed as a 20-way classification with masked softmax. From this point of view it seems that the Prototypical Networks try to learn a more complicated 20-way task during the training stage while your model is trained on a \"simplified and masked\" 5-way task while having access to the same 20 classes (in the 5-way 5-shot miniImagenet setup, and it has access to a much larger number of classes and total number of query samples in all the other setups, more query samples that the Prototypical Networks have access to during a single update of the fully shared model). The difference is only in the losses, and it is okay (I guess?) 
to have different losses (your way of computing losses can be viewed as a particular, masked instance of their way). It seems that if we are talking about truly fair comparison to the results by Prototypical Networks that you mention in your current version of the paper, your model should be trained with a single task per meta-batch, at least so that the number of query samples used for a single model update was comparable (now it is 15 * 5 for Prototypical Networks and 15 * 20 for 5-way 5-shot and 15 * 40 for 5-way 1-shot VERSA models on miniImageNet), otherwise it is also not fair.\n\nFinally, there is a difference between the claims \"within the error bars for all standard benchmarks\" and \"sets new SOTA results by a certain margin over the pervious best\".", "Thank you for your question!\n\nThe prototypical networks paper proposes a number of different experimental protocols. One of these protocols trains on higher \"way\" than what is ultimately used for testing. This is detailed in table 5 in Appendix B of the prototypical networks paper. There you will see that the number you are quoting is achieved by training the system to perform 20 way classification, and then testing it on 5 way classification (final row of the table). This experimental protocol differs from that used by all other methods. When matching experimental protocols are used, i.e., training and testing on 5-way classification, prototypical networks achieve the numbers we quote in our ICLR submission (row 10 of their table). \n\nThe discrepancy between the experimental protocols was pointed out to us by readers of the arxiv version of our paper who suggested reporting numbers from the same experimental protocol. Hence the differing numbers between this submission and the arxiv version. You are correct in pointing out that in the unconstrained setting prototypical networks achieve better performance, but we are of the opinion that a fairer comparison is made when all methods use the same experimental protocol. The note in our submission saying \"The tables include results for only those approaches with comparable training procedures ...\" was intended to clarify this, but we will clarify this more explicitly in the final version if it is accepted. \n\nFinally, note that even the improved prototypical networks number (68.20 ± 0.66%) is within error bars of the Versa number (67.37 ± 0.86%).", "Could you please comment on why the results of Prototypical Nets mentioned in Table 3 of your submission are different (lower) from those reported by the authors of the model in their paper [1]? Especially this concerns the 5-shot 5-way miniImageNet setup where their result is 68,20 +/- 0.66% which contradicts your claim of getting state-of-the-art results on any of the standard few-shot learning benchmarks. This is especially strange since the arxiv version of your paper includes the correct numbers. Thank you in advance for your answer.\n\n[1] - Snell, Jake, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems. 2017.", "Dear Reviewers,\n\nMany thanks for your detailed comments and suggestions. We really appreciate the time and effort you have put into reading our paper. Your comments are both insightful and constructive, and we believe have contributed to improving the quality of our paper.\n\nWe have uploaded a revised version of the paper, incorporating your comments and suggestions. 
Below, we address each of your reviews individually.\n", "“The important aspect of the algorithm is the context independence assumption between posteriors of different classes for learning weights. … The idea sounds great, but I am skeptical of the justification behind the independence assumption which, as per its justifications sounds contrived and only empirical.”\n\nWe thank the reviewer for imploring us to think more carefully about this point. We share the concern that providing only an empirical justification for the context independent assumption is slightly troubling. We have therefore considered this more carefully, and have found that there is a principled justification of this design choice, which is best understood through the lens of density ratio estimation [i, ii]. \n\nResults from Density Ratio Estimation [i, ii] show that an optimal softmax classifier learns the ratio of the densities\n\n Softmax(y=k | x) = p(x | y=k) / Sum_j p(x | y=j)\n\nassuming equal a priori probability for each class. Our system follows his optimal form by setting:\n\n log p(\\tilde{x} | y=c) proportional h_theta ( \\tilde{x})^T w_c\n\nwhere w_c ~ q_phi (w | {x_n ; y_n=c} ) for each class in a given task. Here {(x_n, y_n)} are the few-shot training examples, and $\\tilde{x}$ is the test example. This argument states that under ideal conditions (i.e., we can perfectly estimate p(y=c | x) ), the context-independent assumption is correct, and further motivates our design.\n\nWe have amended the paper to include this argument (see Appendix B). We thank the reviewer for pointing to this important issue, and we hope that this alleviates some of their concerns.\n\n[i] - S. Mohamed. The Density Ratio Trick. The Spectator (Blog). 2018\n[ii] - M. Sugiyama, T. Suzuki, and T. Kanamori. Density ratio estimation in machine learning. 2012\n", "“It would have been good if some of the experiments could be moved into the main paper. … the structure and organization of the paper could be improved by moving some of the methodological details and experimental results in the appendix to the main paper.”\n\nWe agree that a significant portion of interesting content has been relegated to the appendix in our submission. Much of this, of course, has to do with space constraints. However, we have addressed this in the revised version in line with your suggestions by (i) moving the appendix containing the toy-data experimentation to the main body of the paper (see Section 5.1), and (ii) moving some methodological details from the appendix in to the experiments section (see Section 5).\n\n“It would have been good if there was some validation of the time-performance of the model as one motivation of meta-learning is rapid adaptation to a test-time task. “\n\nWe strongly agree that the issue of performance timing is of great interest, and it is useful and important to validate this experimentally. We were originally hesitant to add any timing results as code released with research papers is often optimized for correctness as opposed to speed. That said, we measured the test time performance of both MAML (as implemented in the authors' publicly available repository at https://github.com/cbfinn/maml) and Versa in 5-shot 5-way experiments on mini-ImageNet, using the same architectures for both. We found Versa to achieve 5x speed up compared to MAML, while achieving significantly better accuracy (see Table 3). We have amended the paper to include this experimental data (see Section 5.2 for details). 
We believe this data demonstrates the performance gains achieved by relieving the need for test time optimization procedures.\n", "“Which of the competitors (if any) use the same restricted model setup (inference only on the top-layer weights)?”\n\nTo the best of our knowledge, almost all the competing methods adapt the entire network for new tasks. We have amended the paper to clarify this point (see Section 5.2).\n\n“Do competitors without train/test split also get k_c + 15 points, or only k_c points?”\n\nTo the best of our knowledge, all methods we compare to use train/test splits, with the exception of the VI methods referenced in Table 1. The VI methods used the same number of observations at train time (i.e., the data available to all methods was identical).\n\n“The main paper does not really state what the model or the likelihood is [in the ShapeNet experiments]. From F.4 in the Appendix, this model does not have the form of your classification models, but psi is input at the bottom of the network. Also, the final layer has sigmoid activation. What likelihood do you use?”\n\nThe terseness of the ShapeNet model details was a result of space constraints. We have amended the paper to include additional explanatory details (see Section 3). You are correct in observing that psi plays a different role from the classification case, namely as an input to the image-generator. The likelihood we used is Gaussian, the sigmoid activation ensures that the mean is between 0 and 1, reflecting the constraints on pixel-intensities. Your observation that using top-layer weights would allow us to perform exact inference is very insightful. We decided to use an architecture that passed the latent parameters underlying each shape instance through multiple non-linearities, but it would be very interesting to compare to the simpler baseline that you suggest. As this is a significant undertaking, we will leave it to future work,\n\n“Real Bayesian inference would just see features h_theta(x) as inputs, not the x's. Why not simply feed features in then? … Be clear how it depends on theta (I think nothing is lost by feeding in the h_theta(x)).”\n\nThank you for suggesting this cleaner way of presenting our work. We agree with your observations on the input to the inference network. We have amended Fig. 2 accordingly, and have improved the descriptions in Section 3.\n\n“The marginal likelihood has an Occam's razor argument to prevent overfitting. Why would your criterion prevent overfitting?”\n\nThe mechanism preventing overfitting in our criterion is the meta train / test splits, which explicitly encourages the model to generalize from the training observations to the test data. Methods based on held-out sets, like cross validation, are known to favor models which are more complex than those favoured by Bayesian model comparison [i, ii]. However, as is empirically demonstrated in the experimental section, our proposed criterion consistently outperformed variational objectives.\n\n“It is quite worrying that the prior p(psi | theta) drops out of the method entirely. Can you comment more on that?”\n\nThis is a subtle point that we view as both a feature and a bug. It is a feature in the sense that a prior is learned implicitly through the sampling procedure (as is shown for example in the simple Gaussian experiment -- see Section 5.1). This can be compared to VI, for example, where the prior enters through a KL regularization term which often favours underfitting. 
It is a bug if, for example, the user has a priori knowledge about the parameters that they would like to leverage. In this case, it could be possible to use synthetic training data to incorporate such knowledge into the scheme. However, for the predictive purposes explored in this work, we did not find that the lack of prior posed an issue.\n\n\n[i] - C. E. Rasumessen and Z. Ghahramani. Occam’s razor. 2001.\n[ii] - I. Murray and Z. Ghahramani. A note on the evidence and Bayesian Occam’s razor. 2005.\n", "This paper proposes both a general meta-learning framework with approximate probabilistic inference, and implements an instance of it for few-shot learning. First, they propose Meta-Learning Probabilistic inference for Prediction (ML-PIP) which trains the meta-learner to minimize the KL-divergence between the approximate predictive distribution generated from it and predictive distribution for each class. Then, they use this framework to implement Versatile Amortized Inference, which they call VERSA. VERSA replaces the optimization for test time with efficient posterior inference, by generating distribution over task-specific parameters in a single forward pass. The authors validate VERSA against amortized and non-amortized variational inference which it outperforms. VERSA is also highly versatile as it can be trained with varying number of classes and shots.\n\nPros\n- The proposed general meta-learning framework that aims to learn the meta-learner that approximates the predictive distribution over multiple tasks is quite novel and makes sense.\n- VERSA obtains impressive performance on both benchmark datasets for few-shot learning and is versatile in terms of number of classes and shots.\n- The appendix section has in-depth analysis and additional experimental results which are quite helpful in understanding the paper.\n\nCons\n- The main paper feels quite empty, especially the experimental validation parts with limited number of baselines. It would have been good if some of the experiments could be moved into the main paper. Some experimental results such as Figure 4 on versatility does not add much insight to the main story and could be moved to appendix.\n- It would have been good if there was some validation of the time-performance of the model as one motivation of meta-learning is rapid adaptation to a test-time task. \n\nIn sum, since the proposed meta-learning probabilistic inference framework is novel and effective I vote for accepting the paper. However the structure and organization of the paper could be improved by moving some of the methodological details and experimental results in the appendix to the main paper. \n", "This paper presents two different sections:\n1. A generalized framework to describe a range of meta-learning algorithms.\n2. A meta-learning algorithm that allows few shot inference over new tasks without the need for retraining. The important aspect of the algorithm is the context independence assumption between posteriors of different classes for learning weights. This reduces the number of parameters to amortize during meta-training. More importantly, it makes it independent of the number of classes in a task, and effectively doing meta-training across class inference instead of each task. The idea sounds great, but I am skeptical of the justification behind the independence assumption which, as per its justifications sounds contrived and only empirical. 
\n\nOverall, I feel the paper makes some progress in important aspects of meta-learning.", "Summary:\nThis work tackles few-shot (or meta) learning from a probabilistic inference viewpoint. Compared to previous work, it uses a simpler setup, performing task-specific inference only for single-layer head models, and employs an objective based on predictive distributions on train/test splits for each task (rather than an approximation to log marginal likelihood). Inference is done amortized by a network, whose input is the task training split. The same network is used for parameters of each class (only feeding training points of that class), which allows an arbitrary number of classes per task. At test time, inference just requires forward passes through this network, attractive compared to non-amortized approaches which need optimization or gradients here.\n\nIt provides a clean, decision-theoretic derivation, and clarifies relationships to previous work. The experimental results are encouraging: the method achieves a new best on 5-way, 5-shot miniImageNet, despite the simple setup. In general, explanations in the main text could be more complete (see questions). I'd recommend shortening Section 4, which is pretty obvious.\n\n- Quality: Several interesting differences to prior work. Well-done experiments\n- Clarity: Clean derivation, easy to understand. Some details could be spelled out better\n- Originality: Several important novelties (predictive criterion, simple model setup, amortized inference network). Closely related to \"neural processes\" work, but this happened roughly at the same time\n- Significance: The few-shot learning results are competitive, in particular given they use a simpler model setup than most previous work. I am not an expert on these kind of experiments, but I found the comparisons fair and rather extensive\n\nInteresting about this work:\n- Clean Bayesian decision-theoretic viewpoint. Key question is of course whether\n an inference network of this simple structure (no correlations, sum combination\n of datapoints, same network for each class) can deliver a good approximation to\n the true posterior.\n- Different to previous work, task-specific inference is done only on the weights of\n single-layer head models (logistic regression models, with shared features).\n Highly encouraging that this is sufficient for state-of-the-art few-shot classification\n performance. The authors could be more clear about this point.\n- Simple and efficient amortized inference model, which along with the neural\n network features, is learned on all data jointly\n- Optimization criterion is based on predictive distributions on train/test splits, not\n on the log marginal likelihood. Has some odd consequences (question below),\n but clearly works better for few-shot classification\n\nExperiments:\n- 5.1: Convincing results, in particular given the simplicity of the model setup and\n the inference network. But some important points are not explained:\n - Which of the competitors (if any) use the same restricted model setup (inference\n only on the top-layer weights)? Clearly, MAML does not, right? Please state this\n explicitly.\n - For Versa, you use k_c training and 15 test points per task update during\n training. Do competitors without train/test split also get k_c + 15 points, or\n only k_c points? The former would be fair, the latter not so much.\n- 5.2: This seems a challenging problem, and both your numbers and reconstructions\n look better than the competitor. 
I cannot say more, based on the very brief\n explanations provided here.\n The main paper does not really state what the model or the likelihood is. From\n F.4 in the Appendix, this model does not have the form of your classification\n models, but psi is input at the bottom of the network. Also, the final layer has\n sigmoid activation. What likelihood do you use?\n One observation: If you used the same \"inference on final layer weights\" setup\n here, and Gaussian likelihood, you could compute the posterior over psi in closed\n form, no amortization needed. Would this setup apply to your problem?\n\nFurther questions:\n- Confused about the input to the inference network. Real Bayesian inference would\n just see features h_theta(x) as inputs, not the x's. Why not simply feed features in\n then?\n Please do improve the description of the inference network, this is a major\n novelty of this paper, and even the appendix is only understandable by reading\n other work as well. Be clear how it depends on theta (I think nothing is lost by\n feeding in the h_theta(x)).\n- The learning criterion based on predictive distributions on train/test splits seem\n to work better than ELBO-like criteria, for few-shot classification.\n But there are some worrying aspects. The marginal likelihood has an Occam's\n razor argument to prevent overfitting. Why would your criterion prevent overfitting?\n And it is quite worrying that the prior p(psi | theta) drops out of the method\n entirely. Can you comment more on that?\n\nSmall:\n- p(psi_t | tilde{x}_t, D_t, theta) should be p(psi_t | D_t, theta). Please avoid a more\n general notation early on, if you do not do it later on. This is confusing\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "ByldnORAJV", "SygBAooCkE", "B1gDhDl0y4", "HyxhLmE6JN", "r1eb9eOnyN", "iclr_2019_HkxStoC5F7", "iclr_2019_HkxStoC5F7", "r1llzOxFhX", "Byxh3JK9nQ", "Syewq7hpoX", "iclr_2019_HkxStoC5F7", "iclr_2019_HkxStoC5F7", "iclr_2019_HkxStoC5F7" ]
iclr_2019_HkxaFoC9KQ
Deep reinforcement learning with relational inductive biases
We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability. Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene. In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster-level on four. In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training. Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions. The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases. Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.
accepted-poster-papers
The paper presents a family of models for relational reasoning over structured representations. The experiments show good results in learning efficiency and generalization, in Box-World (grid world) and StarCraft 2 mini-games, trained through reinforcement learning (IMPALA/off-policy A2C). The final version would benefit from more qualitative and/or quantitative details in the experimental section, as noted by all reviewers. The reviewers all agreed that this is worthy of publication at ICLR 2019. For example, "The paper clearly demonstrates the utility of relational inductive biases in reinforcement learning." (R3)
test
[ "BJxkrbE5CX", "ByxqT6b9RQ", "BJxWAEv0nX", "ryemxVtKAQ", "HJeh45lmCm", "H1gQS1H367", "B1lNr-4nT7", "SJg6aYHuaQ", "Syg-myA9n7", "Bkee1fmqhX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the response. Most of my concerns are addressed. I think this work is a nice contribution to the community. ", "I believe the authors have addressed most of my comments and the revision has certainly improved the quality of the paper. I still think the overall contribution of the paper is very limited however I agree with the authors that it is indeed an important step towards generalizing RL approaches. In that light, I have adjusted my score and support this paper for acceptance.", "The goal of this paper is to enhance model-free deep reinforcement techniques with relational knowledge about the environment such that the agents can learn interpretable state representations which subsequently improves sample complexity and generalization ability of the approach. The relational knowledge works as an inductive bias for the reinforcement learning algorithm and provides better understanding of complex environment to the agents.\nTo achieve this, the authors focus on distributed advantage actor-critic algorithm and propose a shared relational network architecture for parameterizing the actor and critic network. The relational network contains a self-attention mechanism inspired from recent work in that area. Using these new modules, the authors conduct evaluation experiments on two different environment - synthetic Box World and real-world StarCraft-II minigames where they analyze the performance against non-relational counterparts, visualize the attention weights for interpretability and test on out-of-training tasks for generalizability.\n\nOverall, the paper is well written and provide good explanation of proposed method. The experimental evaluation adequately demonstrates superior performance in terms of task solvability (strong result) and generalizability (to some extent). The idea of introducing relational knowledge into deep reinforcement learning algorithm is novel and timely considering the usefulness of relational representations. However, there are several shortcomings that makes this paper weak:\n\n1.) While it is true that relational representations help to achieve more generalizable approach and some interpretability to learning mechanism, however comparing it to model-based approaches seems a stretch. While the authors themselves present this speculatively in conclusion, they do mention it in abstract and try to relate to model-based approaches. \n2.) The relational representation network using pairwise interaction itself is not novel and has been studied extensively. Similarly the self-attention mechanism used in this paper is already available. \n3. ) Further, the author chose a specific A2C algorithm to add their relational module. But how about other model-free algorithms? Is this network generalizable to any such algorithm? If yes, will they see similar boost in performance? A comparison/study on using this as general module for various model-free algorithms would make this work strong.\n4.) I have some concerns on generalizability claims fro Box World tasks. Currently, the tasks shown are either on levels that require a longer path of boxes than observed or using a key lock combination never used before. But this appears to be a very limited setting. What happens if one just changes the box with a gem between train and test? What happens if the colors of boxes are permuted while keeping the box as it is. I believe the input are parts of scene so how does change in configuration of the scene affect the model's performance?\n5.) 
What is the role of extra MLP g_theta after obtaining A?\n\nOverall it is very important that the authors present some more analysis on use of relational module to generalize across different algorithms or explain the limitations with it. Further it is not clear what are the contributions of the paper other than parameterizing the actor-critic networks with an already known relational and attention module.", "We have now submitted a revised version of the paper addressing the criticisms and suggestions from all 3 reviewers. We have also included the results of a new set of experiments using the relational module in combination with different RL algorithms (A3C and distributed DQN), which more clearly demonstrate its general applicability. These results are mentioned in the main text and summarized in Figure 7 in Appendix.", "Thanks for the thorough response --- I appreciate the additional clarifications being added to the text, and I completely understand that the resource-intensive nature of StarCraft makes some quantitative results difficult to obtain!", "Thank you for your review! Our goal was precisely to show the utility of relational inductive biases in RL and we are very pleased to know you found the evidence we presented compelling.\n\nRegarding your suggestions:\n\n1) Thank you for pointing this out. We agree that a mention to NerveNet is justified. We will include a sentence in the text comparing the approaches.\n\n2) We agree this is a relevant discussion point. As we mentioned in a separate response, using self-attention diminishes the impact of the quadratic complexity compared to other approaches -- e.g Relation Networks (Santoro et al. 2017). This is due to the quadratic computation being reduced to a single matrix multiplication (dot product). Having said this, your point is still a valid one. We are happy to include a discussion point mentioning the scalability challenges and highlight some possible approaches to mitigate this issue.\n\n3) While we agree that further quantitative detail would benefit the paper, due to the resource intensive nature of StarCraft, we were faced with a harder constraint on the number of hyperparameter and seeds that we could test in each experiment. That being said, we are now running additional tests and computing standard errors to address your points and provide more information about the performance gap between the agents.\n\nThank you for spotting the incorrect use of the word \"seeds\" in the caption of Figure 8. To clarify, we ran around 100 combinations of hyperparameters for each mini-game (which included 3 different seeds) as described in page 13. We then used the 10 best runs (not seeds), out of 100, to generate the plot. Regarding the drop in performance after the 10th best run, it follows a linear decay, akin to what we observe for the top 10 runs. We will update the text accordingly to make both points clear.", "Thank you for your thorough review and suggestions, we are grateful you appreciated the work! \n\nTo answer your points, one by one:\n\n> Presentation\n\nThank you for the suggestion. We will add details about each of the StarCraft mini-games in the text to give a better intuition about the task requirements.\n\n> Evaluation\n\n1) Indeed we ran experiments using the model described in Santoro et al, 2017 as the “relational component” in our agent. We observed that, while the agents were able to learn the task to a certain extent, the training was extremely slow in Box-World and prohibitive in StarCraft-II. 
We attribute this to the application of a relatively large MLP over each pair of entities (N^2 elements). In fact, this is one of the reasons that attracted us to the multi-head attention to begin with, for its ability to compute pairwise interactions very efficiently -- through a single matrix multiplication (inner product) -- and instead apply an MLP over the resulting N entities (rather than N^2). \n\n2) We generally agree with your comment. First, it is not obvious the degree to which real-world tasks require explicit relational reasoning. Second, more conventional models, e.g. ConvNets, are capable of a form of relational reasoning, in the sense that they learn the relationships between image patches. Regarding the first point, we have seen recently an increasing number of publications using similar mechanisms to achieve SOTA in a variety of real-world tasks, e.g. visual question answering (Malinowski et al, 2018), face recognition (Xie et al, 2018), translation (Vaswani et al, 2017). This suggests that indeed more explicit ways of comparing/relating different entities helps solving real-world tasks. Regarding the second point, our view is that a capacity to learn relations in a non-local manner (as expressed by Wang et al, 2017) -- i.e. irrespective of how proximal the entities being related are -- will be critical to achieve a satisfying level of generalization in our RL agents. Our results support this hypothesis, but we acknowledge that more work is needed using real-world applications to further establish this idea.\n\n> Novelty\n\nWe agree with you that the focus is not on the novelty of these components themselves, but instead on the combination of these for RL, together with careful analyses and evaluation. The sentence you mention might be misleading in that regard and so we propose to change it in the revised version of the paper.\n\n> Length of distractor branches\n\nYes, the length of the distractor branches still matters. In order for an agent not to take the wrong branch (with perfect confidence) it needs to know the consequences of opening the whole sequence of boxes along that branch before opening the first box in that branch. For that matter, it is irrelevant that the level terminates after the first wrong decision, except for the fact that it reduces the amount of time spent on a level that cannot be solved anymore.\n\n> Missing references\n\nThank you for the references. These are indeed related to our work and deserve to be mentioned. We will include them.\n\n\n", "Thank you for your review! We appreciate your suggestions to improve the submission.\n\nTo answer each of your points:\n\n1) We agree that a hard comparison between our approach and model-based planning cannot be made. Our attempt was to bring this as a point of discussion rather than making a strong claim about their parallels. We are happy to revise the text where this is mentioned in the direction of toning down the comparison and avoid confusion.\n\n2) We tried to be careful throughout the paper not to suggest that the novelty of this work lies on these two components: pairwise interactions and self-attention. Instead, and as mentioned by Reviewer 1, we argue that the combination of learnable representations of entities and self-attention in an RL setting is a significant innovation that has not been attempted before. This was a non-trivial effort, especially when applied to complex RL tasks such as StarCraft-II. 
Perhaps most importantly, however, it was not clear before that pairwise interactions themselves could allow for improved generalization.\n\nWe believe this work is a small but important step that moves us towards addressing some of the criticism deep RL has received (namely, an inability to flexibly generalize 'out-of-distribution') by focusing on entity and relation-centric representations, as used in more symbolic approaches.\n\n3) Thank you for the suggestion. We agree that showing that the results extend to other model-free algorithms would make the paper stronger. We tested an asynchronous advantage actor-critic (A3C) agent early on and the results were similar, but we will re-run these experiments now, alongside an off-policy value-based RL algorithm (DQN), to get exact numbers.\n\n4) We appreciate your concerns here. We would like to clarify that indeed the Box-World levels have the features that you propose. Every level is randomly generated in almost every aspect, assuring that: (1) the box containing the gem changes in every level; (2) the colors of the boxes are randomly shuffled in every level; (3) the spatial position of each box is randomly chosen in every level. This random generation of levels makes the problem very hard. In fact the number of possible combinations is so large that the agents we trained on this task never encounter the same level twice. An agent that solves the training levels to 100%, like the relational agent that we proposed, is capable of solving previously unseen levels without making a single mistake.\n\n5) We found that it was useful to include a shared non-linear transformation over the elements that resulted from the attention mechanism, itself only comprising a weighted sum of elements produced by a single linear transformation. Informally speaking, while the attention produces mixtures of entities, the extra non-linearity (g_theta MLP) gives the model the capacity to compute more complex relationships between the entities. This is analogous to what is done in Relation Networks, by Santoro et al. 2017, described as having the role of “infer[ing] the ways in which two objects are related”. We are happy to include a sentence in the text to provide this intuition.\n", "This work presents a quantitative and qualitative analysis and evaluation of the self-attention (Vaswani et al., 2017) mechanism combined with relation network (Santoro et al., 2017) in the context of model-free RL. Specifically, they evaluated the proposed relational agent and a control agent on two sets of tasks. The first one “Box-World” is a synthetic environment, which requires the agent to sequential find and use a set of keys in a simple “pixel world”. This simplifies the perceptual aspect and focuses on relational reasoning. The second one is a suite a StarCraft mini-games. The proposed relational agent significantly outperforms the control agent on the “Box-World” tasks and also showed better generalization to unseen tasks. Qualitative analysis of the attention showed some signs of relational reasoning. The result on StarCraft is less significant besides one task “Defeat Zerglings and Banelings\". The analysis and evaluation are solid and interesting. \n\nPresentation: \nThe paper is well written and easy to follow. The main ideas and experiment details are presented clearly (some details in appendix). 
\n\nOne suggestion is that it would help if there were some quantitative characteristics for each StarCraft task to help the readers understand the amount of relational reasoning required, for example, the total number of objects in the scene, the number of static and moving objects in the scene, etc. \n\nEvaluation:\nThe evaluation is solid and the qualitative analysis on the “Box-world” tasks is insightful. Two specific comments below:\n\n1. The idea is only compared against a non-relational “control agent”. It would be interesting to compare with other forms of relation networks, for example, the ones used in (Santoro et al, 2017). This could help evaluate the effectiveness of self-attention for capturing interactions. \n\n2. The difference between the relational and control agents is quite significant on the synthetic task but less so on the StarCraft tasks, which poses the question of what kinds of real-world tasks require relational reasoning, and what type of relational reasoning is already captured by a simple non-relational agent. \n\nQuestion about novelty:\n\nThis paper claims it presents “a new approach for representing and reasoning…”. However, the idea of transforming feature maps into “entity vectors” and the self-attention mechanism are already introduced, and the proposed approach is more like a combination of both. That being said, the analysis and evaluation of these ideas in RL are new and interesting. \n\nOne minor question: since a level will terminate immediately if a distractor box is opened, does the length of the distractor branches still matter? \n\nDespite the question about novelty, I think the analysis in the paper is solid and interesting. So I support the acceptance of this paper. \n\nMissing references: \nIn the conclusion section, several related approaches for complex reasoning are discussed. It might also be worth exploring the branch of work (Reed & Freitas, 2015; Neelakantan et al, 2015; Liang et al, 2016) that learns to perform multi-step reasoning by generating compositional programs over structured data like tables and knowledge graphs. \n\nReed, Scott, and Nando De Freitas. \"Neural programmer-interpreters.\" arXiv preprint arXiv:1511.06279 (2015).\nNeelakantan, Arvind, Quoc V. Le, and Ilya Sutskever. \"Neural programmer: Inducing latent programs with gradient descent.\" arXiv preprint arXiv:1511.04834 (2015).\nLiang, C., Berant, J., Le, Q., Forbus, K. D., & Lao, N. (2016). Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. arXiv preprint arXiv:1611.00020.\n\n\nTypo:\npage 1: \"using using sets...\"", "The authors present a deep reinforcement learning approach that uses a “self-attention”/“transformer”-style model to incorporate a strong relational inductive bias. Experiments are performed on a synthetic “BoxWorld” environment, which is specifically designed (in a compelling way) to emphasize the need for relational reasoning. The experiments on the BoxWorld environment clearly demonstrate the improvement gained by incorporating a relational inductive bias, including compelling results on generalization. Further experimental results are provided on the StarCraft minigames domain. While the results on StarCraft are more equivocal regarding the importance of the relational module, the authors do set a new state of the art, and the results are suggestive of the potential utility of relational inductive biases in more general RL settings.\n\nOverall, this is a well-written and compelling paper. 
The model is well-described, the BoxWorld results are compelling, and the performance on the StarCraft domain is also quite strong. The paper clearly demonstrates the utility of relational inductive biases in reinforcement learning.\n\nIn terms of areas for potential improvement:\n\n1) With regards to framing, a naive reader would probably get the impression that this is the first-ever work to consider a relational inductive bias in deep RL, which is not the case, as the NerveNet paper (Wang et al., 2018) also considers using a graph neural network for deep RL. There are clear differences between this work and NerveNet—most prominently, NerveNet only uses a relational inductive bias for the policy network by assuming that a graph-structured representation is known a priori for the agent. Nonetheless, NerveNet does also incorporate a relational inductive bias for deep RL and shows how this can lead to better generalization. Thus, this paper would be improved by properly positioning itself w.r.t. NerveNet and highlighting how it is different. \n\n2) As with other work using non-local neural networks (or fully-connected GNNs), there is the potential issue of scalability due to the need to consider all input pairs. A discussion of this issue would be very useful, as it is not clear how this approach could scale to domains with very large input spaces. \n\n3) Some details on the StarCraft experiments could be made more rigorous and quantitative. In particular, the following instances could benefit from more experimental details and/or clarifications: \n\nFigure 6: The performance of the control model and relational model seem very close. Any quantitative insight on this performance gap would improve the paper. For instance, is the gap between these two models significantly larger than the average gap between runs over two different random seeds? It would greatly strengthen the paper to clarify that quantitive aspect. \n\nPage 8: ”We observed that—at least for medium sized networks—some interesting generalization capabilities emerge, with the best seeds of the relational agent achieving better generalization scores in the test scenario” — While there is additional info in the appendix, without quantitative framing this statement is hard to appreciate. I would suggest more quantitive detail and rigorous statistical tests, e.g., something like “When examining the best 10 out of ??? seeds, the relational model achieved an average performance increase of ???% compared to the control model (p=???, Wilcoxon signed-rank test). However, when examining all seeds ???? was the case.” \n\nPage 8: “while the former adopted a \"land sweep strategy\", controlling many units as a group to cover the space, the latter managed to independently control several units simultaneously, suggesting a finer grained understanding of the game dynamics.” This is a great insight, and the paper would be greatly strengthened by some quantitive evidence to back it up (if possible). For instance, you could compute the average percentage of agents that are doing the same action at any point in time or within some distance from each other, etc. Adding these kinds of quantitative statistics to back up these qualitative insights would both strengthen the argument, while also making it more explicit how you are coming to these qualitative judgements. \n\nFigure 8 caption: “Colored bars indicate mean score of the ten best seeds” — how bad is the drop to the n-10 non-best seeds? 
And how many seeds were used in total?\n\nPage 13: “following Table 4 hyperparameter settings and 3 seeds” — if three seeds are used in these experiments, how are 10+?? seeds used for the generalization experiments? The main text implies that the same models for the “Collect Mineral Shards” were re-used, but it appears that many more models with different seeds were trained specifically for the generalization experiment. This should be clarified. Alternatively, it is possible that “seeds” refers to both random seeds and hyperparameter combinations, and it would improve the paper to clarify this. It is possible that I missed something here, but I think it highlights the need for further clarification. " ]
[ -1, -1, 6, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "B1lNr-4nT7", "SJg6aYHuaQ", "iclr_2019_HkxaFoC9KQ", "iclr_2019_HkxaFoC9KQ", "H1gQS1H367", "Bkee1fmqhX", "Syg-myA9n7", "BJxWAEv0nX", "iclr_2019_HkxaFoC9KQ", "iclr_2019_HkxaFoC9KQ" ]
iclr_2019_HkxjYoCqKX
Relaxed Quantization for Discretized Neural Networks
Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.
accepted-poster-papers
This paper proposes an effective method to train neural networks with quantized, reduced-precision weights and activations. It is a fairly straightforward idea that achieves good results, backed by solid empirical work. Reviewers have a consensus on acceptance.
train
[ "H1e-rddupm", "r1lmTPcDp7", "ByxwE2ROTm", "S1gcOCLUpX", "SJeTsd3tpX", "H1eXz1E_T7", "SJgl-zVD6m", "B1lLTW4P6X", "BylHD-NwTm", "ryxyLkND6Q", "rygmk1EDT7", "rJevdabGpQ", "BygITb1laQ", "SJgFk25qhQ", "rkgjbEeYnm" ]
[ "author", "public", "public", "public", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers and commenters,\n\nWe have updated the submission to include all of the discussed points, except for the learning curves for VGG as we are currently rerunning the experiments in order to track them. We will perform another update as soon as that is finished. \n\nPlease also note that we have updated Figure 4 that contains the Imagenet results. We updated our BOP count implementation to correctly take into account the 8bit input of the models that have a full precision first layer. This resulted in a lower BOP count for these models. Nevertheless we still observe that the RQ models lie on the Pareto frontier, hence the conclusions do not change.\n\nEDIT: We have uploaded a new version of the paper that contains the VGG learning curves in the appendix. ", "Dear authors,\n\nThx for your detailed answer, but I still have some doubts.\n\n1): You have clarified differences between this submission and [1] via analysis. But I don't think it is too hard to implement your approach using pre-activation ResNet-18 and keep the first and the last layer to full-precision. Without any results provided, I am still not sure to what extend these modifications can influence the performance. After all, a 7% gap on ImageNet is not small.\n\n2): You argue that it is not hardware friendly to use a separate quantization grid per channel. However, since you did not implement on any hardware device, your argument cannot convince me. In fact, a NIPS2018 paper [1] this year claims that \"heterogeneously binarized (mixed bitwise) systems yield FPGA- and ASIC-based implementations that are correspondingly more efficient in both circuit area and energy efficiency than their homogeneous counterparts.\" In this paper, each parameter/activation has different bitwise, but they have shown that it is still efficient to implement on hardware platforms. \n\nAnd if you can provide any results here that will be better. Thanks again for your patient answer.\n\nReference: \n[1]: \"Heterogeneous Bitwidth Binarization in Convolutional Neural Networks\", NIPS20`18.", "Dear authors, \n\nThanks for your answer and I learn a lot. But I find some (potential) mistakes in the BOP metric. \n\n1): The width $w$ and height $h$ of the feature map are not included in the layer complexity. And I am sure this should be fixed. \n\n2): Let us assume weights and activations are all binary (1-bit). Then the convolutional operations becomes XNOR and popcount, which are all bitwise operations. So according to BOPs, the bitwise popcount complexity (for a single output calculation) is n{k^2}(2 + {\\log _2}n{k^2}) . However, it doesn't make sense since this complexity holds for floating-point additions rather than bitwise operations. \n\nAnd could you check my claims? \n", "Dear authors and reviewers, \n\nPlease check the performance of the current state-of-the-art approaches [1, 2, 3] on ImageNet. For 4-bit Resnet-18, they can achieve near lossless results. For example, in LQ-Net [1], it only has 0.3% and 0.4% Top1 and Top5 accuracy drop, respectively. But in this paper, it has more than 7% Top-1 accuracy drop. Even uniform quantization approach DOREFA-Net performs much better than this submission.\nAnd I don't know why this submission just \"ignores\" these approaches?\n\nReferences (Only list three of them) :\n[1]: \"LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks\". ECCV2018. \n[2]: \"PACT: Parameterized Clipping Activation for Quantized Neural Networks\". 
https://arxiv.org/pdf/1805.06085\n[3]: \"DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients\", \n https://arxiv.org/abs/1606.06160.\n", "Dear anonymous commenter,\n\nOf course, you can find the answers below:\n\n1) In our computation of a model’s BOP count, we do take height and width of the feature maps into account. The formula as stated in [1] is given for “a single output calculation” and correspondingly by us multiplied for a whole layer’s BOP count. \n\n2) We merely aim to use the BOP count formula from [1] as a rough estimate of the actual BOPs of a given low-bit model, and not as an exact measure. Our aim was to have a sensible ranking of all the methods we compare. Indeed, for 1 bit weights and activations, the BOP approximation will be worse compared to fixed-point or floating-point networks. We would like to point out that the formula came with its own set of assumptions, which are stated at [1]. We agree that the BOP count is not a perfect measure of model complexity or execution speed, however it does serve as a normalizer for the purpose of comparison. Finally we recognize that execution speed might be identical or higher for example for a 4/4 bit model on a chip with a dedicated 4/4 instruction set compared to a 3/3 bit model on the same chip, due to suboptimal kernels. A similar conclusion could be drawn for a chip that does not possess a fixed-point instruction set when comparing fixed-point to floating-point models. That is to say the final execution speed/accuracy trade-off is very dependent on the targeted hardware and any measure that tries to generalize across different chips will either be very complex or always remain approximative. \n\n[1] Baskin, C., Schwartz, E., Zheltonozhskii, E., Liss, N., Giryes, R., Bronstein, A. M., & Mendelson, A. (2018). UNIQ: Uniform Noise Injection for the Quantization of Neural Networks. arXiv preprint arXiv:1804.10969.", "Dear anonymous commenter,\n\nThank you for your additional comment. We hope to address your doubts adequately:\n\n1) Indeed, implementing RQ for a pre-activation Resnet18 with the hyperparameters that you propose is feasible. Nevertheless, we also believe that it is not necessary: firstly, as we previously mentioned, the GBOP metric that we used in the submission “normalizes” against the choice of having a full precision first and last layer, therefore we can safely conclude that the 8/8 bit RQ model that quantizes everything is better, both BOP wise and accuracy wise, than the 4/4 bit LQ-Net model that does not quantize the first and last layer. Secondly, we chose to experiment with the standard ResNet18 architecture in order to be able to compare with the majority of the quantization literature. As a result, we do not believe that the experiments with the pre-activation ResNet18 will offer additional insight, besides allowing for a slightly more calibrated comparison against e.g LQ-Net or PACT. Instead, we believe that a completely different architecture (MobileNet) better complements our ResNet18 experiments.\n\nIn summary, we hope to have convinced you of the practical importance of quantizing the first and last layers. On the side of experiments provided, we believe to have produced significant evidence in favour of RQ. The code to reproduce our results as well as to do additional experiments is currently undergoing regulatory approval. 
Please stay tuned for our announcement and feel free to contact us with questions about your own re-implementation once the contact details are available.\n\n2) Thank you for the pointer to this work; we believe it provides interesting food for thought for future hardware choices. We base our argument not on a specific chipset, but argue for the properties of generally available chips on the market today. Examples of state-of-the-art chips that especially target fixed-point computations include: (Qualcomm) Hexacore 68X, (Intel) movidius, (ARM) NEON. In case the application warrants specialized hardware (ASIC) or FPGAs, there will always be highly efficient specialized solutions that might allow for different bit-precisions (or even mix fixed-point and floating-point representations [1]. However it becomes increasingly difficult to find a fair basis of speed/accuracy comparison when allowing for arbitrary hardware implementations and to account for the additional overhead of e.g. channel-wise grids. Again, we believe our experimental efforts to lay sufficient claim for the validity of RQ by comparing many works that use fixed-point shared grids. Any additional modifications such as mixed-precision, channelwise-grids or any of the other strategies referenced in our paper are orthogonal to our method and it is reasonable to believe that including them will benefit RQ as well.\n\n[1] Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations: https://arxiv.org/pdf/1703.03073.pdf\n\nEDIT: After fixing the BOP count metric (see general comment), the 4/4 bit LQ-Net BOP count lies between the 5/5 and 6/6 bit RQ models. In this case we observe that the accuracy of RQ is slightly worse than the LQ-net for an approximately same count of BOPs, which could be explained due to the non-uniform grid and channel-wise kernel quantization. \n", "Dear Reviewer 2, \n\nThank you for your review and comments for approval. \n\nWe will make sure to update the related work section with the work of Soudry et al. (2014). As for Williams (1992); to our understanding the focus of that paper was to introduce the unbiased score function estimator REINFORCE for the gradient of an expectation of a non-differentiable function. In this sense, Williams (1992) is more of a related work to the concrete / Gumbel-softmax approaches, rather than the stochastic rounding of Gupta et al. (2015). We will update the submission to include a brief discussion between the REINFORCE and concrete / Gumbel-softmax as choices for the fourth element of RQ. \n\nRegarding experiments on different tasks; we agree that it would be interesting to check performance on tasks that require more “precision”, such as regression. We chose classification for this submission, as this provides a large amount of literature to compare against, and leave the exploration of different tasks for future work.\n", "Dear Reviewer 1,\n\nThank you for your review and comments for approval.\n\nRegarding the bias of the local grid approximation; we mentioned in the main text that the local grid is constructed such that points that are within \\delta standard deviations from the mean are always part of it. For all of our experiments, we set \\delta = 3, which means that, roughly, only 2% of the probability mass of the logistic distribution is truncated. 
Unfortunately, due to lack of space we moved these experimental details about hyperparameters in the appendix.\n\nRegarding the regularization aspect; indeed we observed that for VGG, quantizing to 8/8 bits resulted in consistent improved test errors. We are definitely aware of [https://arxiv.org/abs/1804.05862] and believe that further research in this direction is a fruitful direction.", "Dear Reviewer 3,\n\nThank you for your review and comments for approval. \n\nAddressing the first point of training speed: training a neural network with the proposed method indeed imposes an additional burden in computing and sampling the categorical probabilities over the local grid for every weight and activation in the network. As such, this method introduces an overhead which is not present in methods that rely on deterministic rounding and the straight-through estimator for gradients. As for convergence speed, we will include an exemplary learning curve for the 32/32, 8/8 and 2/2 bit VGG in the appendix.\n\nAddressing your second point about non-uniform grids: as you have stated, this method can be easily extended to non-uniform grids. Doing so would only require evaluating the CDF of the continuous signal at different points on the real line. We have mentioned this possibility of non-uniform grids in the conclusion to our work. The reason for why we consider uniform grids only lies in that non-uniform grids, although more powerful, generally do not allow for a straightforward implementation in today’s low-bit hardware. We mention that we explicitly focus on uniform grids for this specific reason of hardware suitability. \n", "Dear anonymous commenter,\n\nAlthough the proposed relaxed quantization method shares some similarities with DARTS, this submission is by no means a duplicate. The similarities can be summarized in that both methods consider the computation of gradients through a non-differentiable selection mechanism. In our work, selection happens between grid-points. In DARTS, selection happens between choices of neural network architecture elements. Please note that in our work, we propose to use the relaxation of the categorical choice in order to draw samples, whereas in DARTS, the relaxation is performed by learning a weighted average. \n\nWe hope to have interpreted and answered your question appropriately. Please let us know if there are any remaining questions. \n", "Dear anonymous commenter,\n\nThank you for the interest in our work and for bringing [1, 2] into our attention. First of all, we would like to respectfully disagree with the comment of “very poor performance on Imagenet”. More specifically, we believe that there some important differences between e.g. [1] and this work that do not lend to a fair comparison.\n\nTo further elaborate, in [1] the authors propose a non-uniform quantization grid while arguing for it being compatible with bit operations. In our work we focus on uniform quantization grids because they lend themselves to straight-forward implementation on current hardware. The more powerful grid proposed in [1] is orthogonal to the contributions of this work and can be further employed to boost the performance of RQ. We will update the paper with an appropriate discussion.\n\nIt is also worth pointing out several subtleties w.r.t. 
the hyperparameters and details of the experiments in [1], that make a fair comparison difficult:\n\n - First of all, it seems that [1] used a modified pre-activation ResNet18 architecture (judging from the paper and publicly available code of LQ-net), which is different from the standard ResNet18 architecture that we and the other baselines employed (our ResNet18 was based on https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py). \n\n - Secondly, [1, 2, 3] did not quantize the first and last layer of the network; while this can allow for better performance in terms of top1-5 accuracy it also negatively affects the model efficiency, as the BOP count will be (much) higher than our 4/4 model. For example, on a ResNet-18 with 4/4 bits and no quantization of the first and last layer we get approximately 24 GBOPs extra (according to the metric we used in the submission) compared to an 8/8 bit model that quantizes all weights and activations. In this sense, the 8 bit RQ has better accuracy while also maintaining better efficiency than the 4 bit LQ-net. Similar arguments can be made for [2, 3]. \n\n - Thirdly, it seems that [1] also used a much more flexible quantization grid construction for the weights; it assumed a separate quantization grid per channel, rather than per an entire layer (as in this work). This further increases the flexibility of the quantization model but it does make hardware implementations more difficult and less efficient. Similarly as before, such a grid construction is easily applied to RQ and can similarly further improve performance.\n\nFinally, we did not compare against [3] as it did not provide any results for the architectures we compare against in this paper. Their imagenet results were obtained using a variant of the AlexNet architecture, whereas we compare on the more recent ResNet18 and MobileNet. After reading [1] however, we were made aware of the Resnet18 results presented in their git repo, so we will update the paper with those numbers. Similarly to [1, 2], not quantizing the first and last layer results into worse accuracy / efficiency trade-offs than RQ.", "Isn't this a duplicated submission as DARTS?\nhttps://openreview.net/forum?id=S1eYHoC5FX", "The authors proposes a unified and general way of training neural network with reduced precision quantized synaptic weights and activations. The use case where such a quantization can be of use is the deployment of neural network models on resource constrained devices, such as mobile phones and embedded devices.\n\nThe paper is very well organized and systematically illustrates and motivates the ingredients that allows the authors to achieve their goal: a quantization grid with learnable position and range, stochastic quantization due to noise, and relaxing the hard categorical quantization assignment to a concrete distribution.\nThe authors then validate their method on several architectures (LeNet-5, VGG7, Resnet and mobilnet) on several datasets (MNIST, CIFAR10 and ImageNet) demonstrating competitive results both in terms of precision reduction and accuracy. \n\nMinor comments:\n- It would be interesting to know whether training with the proposed relaxed quantization method is slower than with full-precision activations and weights. 
It would have been informative to show learning curves comparing learning speed in the two cases.\n- It seems that this work could be generalized in a relatively straight-forward way to a case in which the quantization grid is not uniform, but instead all quantization interval are being optimized independently. It would have been interesting if the authors discussed this scenario, or at least motivated why they only considered quantization on a regular grid.\n", "Quality:\nThe work is well done. Experiments cover a range of problems and a range of quantization resolutions. Related work section in, particular, I thought was very nicely done. Empirical results are strong. \n\nIn section 2.2, it bothers me that the amount of bias introduced by using the local grid approximation is never really assessed. How much probability mass is left out by truncating the Gumbel-softmax, in practice?\n\nClarity:\nWell presented. I believe I'd be able to implement this, as a practitioner. \n\nOriginality:\nNice to see the concrete approximation having an impact in the quantization space. \n\nSignificance:\nQuantization has obvious practical interest. The regularization aspect is striking (quantization yielded slightly improved test error on CIFAR-10; is that w/in the error bars?). A recent work [https://arxiv.org/abs/1804.05862] links model compressibility to generalization; while this work is more focused on activations, there is no reason that it couldn't be used for weights as well.\n\nNits:\ntop of pg 6 'reduced execution speeds' -> times, or increased exec speeds\n'sparcity' misspelled", "Summary\n=======\nThis paper introduces a method for learning neural networks with quantized weights and activations. The main idea is to stochastically – rather than deterministically – quantize values, and to replace the resulting categorical distribution over quantized values with a continuous relaxation (the \"concrete distribution\" or \"Gumbel-Softax distribution\"; Maddison et al., 2016; Jang et al., 2016). Good empirical performance is demonstrated for LeNet-5 applied to MNIST, VGG applied to CIFAR-10, and MobileNet and ResNet-18 applied to ImageNet.\n\nReview\n======\nRelevance:\nTraining non-differentiable neural networks is a challenging and important problem for several applications and a frequent topic at ICLR.\n\nNovelty:\nConceptually, the proposed approach seems like a straight-forward application/extension of existing methods, but I'm unaware of any paper which uses the concrete distribution for the express purpose of improved efficiency as in this paper. There is a thorough discussion of related work, although I was missing Williams (1992), who used stochastic rounding before Gupta et al. (2015), and Soudry et al. (2014), who introduced a Bayesian approach to deal with discrete weights and activations.\n\nResults:\nThe empirical work is thorough, achieving state-of-the-art results in several classification benchmarks. It would be interesting to see how well these methods perform in other tasks (e.g., compression or even regression), even though the literature on quantization seems to focus on classification.\n\nClarity:\nThe paper is well written and clear." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2019_HkxjYoCqKX", "rygmk1EDT7", "H1eXz1E_T7", "iclr_2019_HkxjYoCqKX", "ByxwE2ROTm", "r1lmTPcDp7", "rkgjbEeYnm", "SJgFk25qhQ", "BygITb1laQ", "rJevdabGpQ", "S1gcOCLUpX", "iclr_2019_HkxjYoCqKX", "iclr_2019_HkxjYoCqKX", "iclr_2019_HkxjYoCqKX", "iclr_2019_HkxjYoCqKX" ]
iclr_2019_HkzRQhR9YX
Tree-Structured Recurrent Switching Linear Dynamical Systems for Multi-Scale Modeling
Many real-world systems studied are governed by complex, nonlinear dynamics. By modeling these dynamics, we can gain insight into how these systems work, make predictions about how they will behave, and develop strategies for controlling them. While there are many methods for modeling nonlinear dynamical systems, existing techniques face a trade-off between offering interpretable descriptions and making accurate predictions. Here, we develop a class of models that aims to achieve both simultaneously, smoothly interpolating between simple descriptions and more complex, yet also more accurate models. Our probabilistic model achieves this multi-scale property through a hierarchy of locally linear dynamics that jointly approximate global nonlinear dynamics. We call it the tree-structured recurrent switching linear dynamical system. To fit this model, we present a fully-Bayesian sampling procedure using Polya-Gamma data augmentation to allow for fast and conjugate Gibbs sampling. Through a variety of synthetic and real examples, we show how these models outperform existing methods in both interpretability and predictive capability.
accepted-poster-papers
This paper presents a recurrent tree-structured linear dynamical system to model the dynamics of a complex nonlinear dynamical system. All reviewers agree that the paper is interesting and useful, and is likely to have an impact in the community. Some of the doubts that reviewers had were resolved after the rebuttal period. Overall, this is a good paper, and I recommend an acceptance.
train
[ "Hyx_x1UsyN", "rygVmgaR3Q", "SyetRg5a3Q", "S1emGeRIyE", "HJeZ36huRX", "ryxsWdw927", "r1eVWiHqRQ", "ryxq_63dRm", "B1eKkCndA7", "rke9PnnOCQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thanks to the authors for the detailed and sufficient contents added to the appendix. I am satisfied with the new proof provided by the authors and am willing to support the paper's acceptance. My score for the paper has also been updated accordingly.", "This paper introduces a probabilistic model to model nonlinear dynamic systems with multiple granularities. The nonlinearity is achieved by using multiple local linear approximations. The method is an extension of rSLDS (recurrent switching linear dynamical systems), which in turn is an extension of SLDS. \n\nPros:\n1. Introducing the tree structure is a neat way of extending the existing rSLDS model to multiscale scenarios. \n2. The paper is written clearly. The background is well illustrated and the idea rises naturally from there. The paper is also solid in the part describing the model. \nCon:\n1. From the rSLDS paper (https://arxiv.org/pdf/1610.08466.pdf), the authors there were experimenting with some settings similar to those used in this paper. However, I am not able to find any explicit comparison between TrSLDS and rSLDS in this work. I think it is needed, since TrSLDS itself is derived from rSLDS; it would be good to show explicitly the advantage of the new model.\n", "PAPER SUMMARY:\n\nThis paper introduces a probabilistic generative framework to model linear dynamical systems at multiple levels of resolution, where the entire complex, nonlinear dynamics is approximated via a hierarchy of local regimes of linear dynamics -- the global dynamic is then characterized as a switching process that switches between linear regimes in the hierarchy.\n\nNOVELTY & SIGNIFICANCE:\n\nThe key contributions of this paper are (a) the use of tree-structured stick breaking to partition the entire dynamic space into a hierarchy of linear regimes; (b) the design of a hierarchical prior that is compatible with the tree structure; and (c) the developed Bayesian inference framework for it in Section 4.\n\nBy exploiting the tree-structured stick breaking process (Adams et al., 2010), the proposed framework is able to partition the entire dynamic space into a hierarchy of switching linear regimes.\n\nThis allows the dynamics to be queried at multiple levels of resolution. This appears to be the key difference between the proposed framework and the previous work of (Linderman et al., 2017) on recurrent switching dynamical systems that partition the dynamic space sequentially at the same level of resolution.\n\nThis seems like a non-trivial extension to the previous work of (Linderman et al., 2017) & I tend to consider this a novel contribution. That said, the paper was also not positioned against existing literature on hierarchical switching linear dynamic systems (see below) & I find it hard to evaluate the significance of the proposed framework (which explains the borderline rating).\n\n\"A Hierarchical Switching Linear Dynamical System applied to the detection of sepsis in neonatal condition monitoring\", Ioan Stanculescu, Christopher K. I. Williams and Yvonne Freer. In Proceedings of the 30th Conference on Uncertainty in AI (UAI-14), pages 752-761.\n\nCould the authors please discuss the differences between the proposed work & (at least) the above? \n\nTECHNICAL SOUNDNESS:\n\nThe technical exposition makes sense to me. Please also discuss the processing complexity of the resulting TrSLDS framework. In exchange for the improved performance, how much slower is TrSLDS compared to rSLDS? 
I am interested to see this demonstrated in the empirical studies.\n\nCLARITY:\n\nThe paper is clearly written.\n\nEMPIRICAL RESULTS:\n\nThe experiments look interesting and are very extensive on both test domains. However, I do not understand why the authors decided not to compare with rSLDS using its benchmark? \n\nI find this somewhat sloppy and hope the authors would clarify this too. \n\n****\n\nPost-rebuttal update: The authors have made significant revision to their work, which sufficiently addressed all my concerns. I have upgraded my score accordingly and I am willing to support the acceptance of this paper.", "Thank you for the significant updates and detailed clarification. The revision has sufficiently addressed all my concerns. I have upgraded my score and am willing to support the acceptance of this paper.", "We thank the reviewer for her/his insightful review and for bringing up the prior work done by Stanculescu et al. 2014. In Stanculescu et al. 2014., they propose adding a layer to factorized SLDS where the top-level discrete latent variables determine the conditional distribution of z_t, with no dependence on x_{t-1}. While the tree-structured stick-breaking used in TrSLDS is also a hierarchy of discrete latent variables, the model proposed in Stanculescu et al. 2014., has no hierarchy of dynamics, preventing it from obtaining a multi-scale view of the dynamics. Stanculescu et al. (2014) also reference preceding work on hierarchical SLDS by Zoeter & Heskes (2003), the only example they found in the literature. In Zoeter & Heskes (2003), the authors construct a tree of SLDSs where an SLDS with K possible discrete states is first fit. An SLDS with M discrete states is then fit to each of the K clusters of points. This process continues iteratively, building a hierarchical collection of SLDSs that allow for a multi-scale, low-dimensional representation of the observed data. While similar in spirit to TrSLDS, there are key differences between the two models.\nFirst, it is through the tree-structured prior that TrSLDS obtains a multi-scale view of the dynamics, thus we only need to fit one instantiation of TrSLDS; in contrast, they fit a separate SLDS for each node in the tree, which is computationally expensive. There is also no explicit probabilistic connection between the dynamics of a parent and child in Zoeter & Heskes (2003). We also note that TrSLDS aims to learn a multi-scale view of the dynamics while Zoeter & Heskes (2003) focuses on smoothing, that is, they aim to learn a multi-scale view of the latent states corresponding to data but not suitable for forecasting. We have amended the manuscript to include a section discussing prior and related work.\n\n>>Technical Soundness\nThe rSLDS and the TrSLDS share the same linear time complexity for sampling the discrete and continuous states, and both models learn K-1 hyperplanes to weakly partition the space. Specifically, both models incur: an O(TK) cost for sampling the discrete states, which increases to O(TK^2) if we allow Markovian dependencies between discrete states; an O(TD^3) cost (D is the continuous state dimension) for sampling the continuous states, just like in a linear dynamical system; and an O(KD^3) cost for sampling the hyperplanes. The only additional cost of the TrSLDS stems from the hierarchical prior on state dynamics. Unlike the rSLDS, we impose a tree-structured prior on the dynamics to encourage similar dynamics between nearby nodes in the tree. 
Rather than sampling K dynamics parameters, we need to sample 2K-1. Since they are all related via a tree-structured Gaussian graphical model, the cost of an exact sample is O(KD^3) just as in the rSLDS, with the only difference being a constant factor of about 2. Thus, we obtain a multi-scale view of the underlying system with a negligible effect on the computational complexity. We have amended the manuscript to make this clear.\n\nWe also note that tree-structured stick-breaking utilized by TrSLDS is a strict generalization of the sequential stick-breaking used by rSLDS. We can recover sequential stick-breaking from tree-structured stick-breaking by enforcing the left node at each level in the tree to be a leaf node. Our experiments only considered balanced binary trees for simplicity, but an interesting avenue of future work is to learn the tree structure, perhaps through additional MCMC. Learning such discrete representations is highly non-trivial and demands further investigation outside this submission. We have amended the manuscript to make this connection explicit.\n\n>>Empirical Results\nThe Lorenz attractor in experiment 2 was also used as a benchmark for the rSLDS (Linderman et al., 2017; Fig. 4). The only difference is that Linderman et al. generated binary observations with a Bernoulli GLM emission model. For completeness, we ran TrSLDS on the synthetic NASCAR example used to test rSLDS in Linderman et al. (2017) to see if we could recover the dynamics and the discrete latent state assignments, and included the results in the appendix. We also note that we included another example in the appendix where the data is generated from an alternative version of the synthetic NASCAR example of Linderman et al. (2017) in which the underlying model is a TrSLDS, and compared both TrSLDS and rSLDS.\n", "The authors develop a tree structured extension to the recently proposed recurrent switching linear dynamical systems. Like switching linear dynamical systems (sLDS) the proposed models capture non-linear dynamics by switching between a collection of linear regimes. However, unlike SLDS, the transition between the regimes is a function of a latent tree as well as the preceding continuous latent state. Experiments on synthetic data as well as neural spike train data are presented to demonstrate the utility of the model.\n\nThe paper is clearly written and easy to read. The tree structured model (TrSLDS) is a sensible extension to rSLDS. While one wouldn’t expect TrSLDS to necessarily fit the data any better than rSLDS, the potential for recovering multi-scale, possibly more interpretable decompositions of the dynamic process is compelling. \n\nWhile the authors do provide some evidence of being able to recover such multi-scale structures, overall the experiments are underwhelming and somewhat sloppy. First, to understand whether the sampler is mixing well, it would be nice to include an experiment where the true dynamics and the entire latent structure (including the discrete states) are known, and then to examine how well this ground-truth structure is recovered. Second, for the results presented in section 5, how many iterations was the sampler run for? In the figures, what is being visualized: the last sample, the MAP sample, or something else? I am not sure what to make of the real data experiment in section 5.3. Wouldn’t rSLDS produce nearly identical results? What is TrSLDS buying us in this scenario? 
Do the higher levels of the tree capture interesting low resolution dynamics that are not shown for some reason? \n\nMy other big concern is scalability. To use a larger number of discrete states one would need deeper (or wider, if the binary requirement is relaxed) trees. How well does the sampler scale with the number of discrete states? How long did the sampler take for the various 4-state results presented in the paper? \n\nMinor:\na) There is a missing citation in the first para of Section 5. \nb) Details of message passing claimed to be in the supplement are missing.\n\n============\nThere are interesting ideas in this paper. However, the experimental section could better highlight the benefits afforded by the model, and scalability concerns need to be addressed.\n\n", "Thank you for the response and the significant updates to the paper. The rebuttal sufficiently addresses most of my concerns. Although I still have some concerns about scalability, they are not a showstopper for me and I am willing to support the acceptance of this revised paper. ", "We thank the reviewer for her/his insightful review. We note that we added a section to the Appendix comparing the computational complexity of TrSLDS and rSLDS, in which it states that the computational complexity of TrSLDS is of the same order as rSLDS; for specifics please refer to our response to AnonReviewer 1. Thus, we obtain a multi-scale view of the underlying system with a negligible effect on the computational complexity. We also note that tree-structured stick-breaking utilized by TrSLDS is a strict generalization of the sequential stick-breaking used by rSLDS; we have amended the manuscript to make this connection explicit.\n\nFor both synthetic experiments, the predictive power of TrSLDS and rSLDS (as well as SLDS and LDS) was compared using k-step R^2 (Figs 2 & 3). In both synthetic experiments, the predictive power of TrSLDS is at least as good as that of rSLDS. \n\nTo better highlight the differences between TrSLDS and rSLDS, we added two more examples in the appendix. The first example is the synthetic NASCAR from (https://arxiv.org/pdf/1610.08466.pdf) where the underlying model is indeed an rSLDS (Fig. 6). The space is partitioned into 4 sections using sequential stick-breaking, where the trajectories trace out oval tracks similar to a NASCAR track. TrSLDS was fit to see if it could recover the dynamics even though it relies on tree-structured stick-breaking. From Fig. 6, it is evident that TrSLDS can recover the dynamics and obtain a multi-scale view. The second example is a twist on the synthetic NASCAR, where the underlying model is a TrSLDS, i.e., the space is partitioned using tree-structured stick-breaking (Fig. 7). We ran TrSLDS and rSLDS and compared their predictive performance using k-step R^2. From Fig. 7, we can see that rSLDS could not adequately learn the vector field due to its reliance on sequential stick-breaking. This provides empirical evidence that the expressive power of TrSLDS subsumes that of rSLDS, as stated in our response to AnonReviewer 1.", "We thank the reviewer for her/his insightful review. We note that tree-structured stick-breaking utilized by TrSLDS is a strict generalization of the sequential stick-breaking used in rSLDS. As stated in the response to AnonReviewer1 above, we can recover sequential stick-breaking from tree-structured stick-breaking by enforcing the left node at each level in the tree to be a leaf node. 
We have amended the manuscript to make this connection explicit.\n\nSince tree-structured stick-breaking is a strict generalization of sequential stick-breaking, the expressive power of TrSLDS theoretically subsumes that of rSLDS. The question reduces to a comparison of tree structures; in our experiments, a comparison of right-branching trees to balanced binary trees. We emphasize this by including two new examples in the appendix in which the true dynamics and the entire latent structure are known; the first being the “synthetic Nascar” example used in Linderman et al. (2017), where the true model follows a right-branching tree, as in the standard rSLDS, to emphasize that we can effectively learn these dynamics with a tree-structured model. The second example is a twist on the synthetic Nascar where the underlying model is a TrSLDS and where we test both rSLDS and TrSLDS. In this example (Fig. 7), rSLDS fails due to the sequential nature of stick-breaking that cannot adequately capture the locally-linear dynamics.\n\n>>Experiment Section\nWe thank the reviewer for pointing out the missing information in the experiments section and have amended the manuscript with corrections. As stated above, we have included two more examples in the appendix to highlight not only the expressive power of TrSLDS, but also to show that the sampler is indeed mixing well. Concerning the real data experiment, we have amended the manuscript with more results from the analysis. The orientations were chosen to resemble a tree where orientations 140 and 150 have the same parent; the same is true for orientations 230 and 240. Thanks to the multi-scale nature of TrSLDS, the method is able to learn this relation and assigns the two groups to different subtrees. It then refines the dynamics by focusing on each of these two groups separately. \n\n>>Scalability\nThe computational complexity of TrSLDS is of the same order as rSLDS; for specifics please refer to our response to AnonReviewer 1.\n\nTo address the concerns regarding the samplers mixing speed as a function of number of discrete states, we fit a TrSLDS with K = 2, 4, 8 discrete states, keeping the amount of data used to train the model fixed, and plotted the log joint density as function of samples and included it in the appendix. From the plots, the sampler seems to converge to a mode of the posterior after about 150-250 samples for each of the various numbers of discrete states. Due to the nature of Gibbs sampling, we are limited to batch updates to each of the conditional posteriors. While scalability has not been an issue in our experiments, we will explore stochastic variational approaches in future work.\n\n>>Minor\nWe thank the reviewer for pointing out these minor mistakes and have corrected them in the amended manuscript.\n", "We thank all the reviewers for their suggestions and have amended the manuscript accordingly. Here is a summary of the changes:\n1) Added a paragraph discussing prior work on hierarchical extensions of SLDS.\n2) Added section describing the Polya-Gamma data augmentation scheme.\n3) Redid the experiments to better highlight the multi-scale nature of our algorithm. 
(Figs 2, 3, 4)\n4) Added a section in the appendix describing how to handle Bernoulli observations using Polya-Gamma data augmentation scheme.\n5) Added a section in the appendix providing details on the message-passing used in the sampling.\n6) To better highlight the differences between rSLDS and TrSLDS, two new experiments have been added to the appendix including a benchmark experiment from the original rSLDS paper. (Figs. 6 & 7)\n7) Added a section in the appendix discussing the computational complexity of fitting the model. We also show empirically how the time till convergence of the MCMC sampler changes as a function of discrete latent states by fitting three TrSLDS of varying number of leaf nodes and plot the log of the joint density. (Fig. 5)\n" ]
[ -1, 7, 7, -1, -1, 6, -1, -1, -1, -1 ]
[ -1, 2, 2, -1, -1, 4, -1, -1, -1, -1 ]
[ "ryxq_63dRm", "iclr_2019_HkzRQhR9YX", "iclr_2019_HkzRQhR9YX", "HJeZ36huRX", "SyetRg5a3Q", "iclr_2019_HkzRQhR9YX", "B1eKkCndA7", "rygVmgaR3Q", "ryxsWdw927", "iclr_2019_HkzRQhR9YX" ]
iclr_2019_HkzSQhCcK7
STCN: Stochastic Temporal Convolutional Networks
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs) while providing computational and modelling advantages due to inherent parallelism. However, currently, there remains a performance gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables. In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces. In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales. The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers. We show that the proposed architecture achieves state of the art log-likelihoods across several tasks. Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text.
accepted-poster-papers
The paper presents a generative model of sequences based on the VAE framework, where the generative model is given by a CNN with causal and dilated connections. The novelty of the method is limited; it mainly consists of bringing together the idea of causal and dilated convolutions and the VAE framework. However, knowing how well this performs is valuable to the community. The proposed method appears to have significant benefits, as shown in experiments. The result on MNIST is, however, so strong that it seems incorrect; more digging into this result, or source code, would have been better.
test
[ "SkgDcXX-x4", "Skl3BDFoA7", "SJghxFJ5hm", "HyxGRY_Mk4", "ByxYKYdz1E", "Syg22hUs2Q", "SJlBz456pX", "BJeA9X5TTQ", "r1lmRf5p6Q", "S1xUAx5667", "BkeTXrG6nm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "To better understand if the experimental improvements shown in our paper only stem from the hierarchical latent space or whether the synergy between the dilated CNNs and latent variable hierarchy is important, we ran additional experiments (as suggested by R1). We replaced the deterministic TCN blocks with LSTM cells and kept the latent structure intact, dubbed LadderRNN. We used TIMIT and IAM-OnDB for speech and handwriting datasets. The log-likelihood performance measured by ELBO is provided below:\n\n=======================================================\n                                    TIMIT      IAM-OnDB \n=======================================================\n 25x256-LadderRNN (Normal)          28207       1305 \n 25x256-LadderRNN-dense (Normal)    27413       1278 \n=======================================================\n 25x256-LadderRNN (GMM)             24839       1381 \n 25x256-LadderRNN-dense (GMM)       26240       1377 \n=======================================================\n 5x512-LadderRNN (Normal)           49770       1299 \n 5x512-LadderRNN-dense (Normal)     48612       1374 \n=======================================================\n 5x512-LadderRNN (GMM)              47179       1359 \n 5x512-LadderRNN-dense (GMM)        50113       1581 \n=======================================================\n 25x256-STCN (Normal)               64913       1327 \n 25x256-STCN-dense (Normal)         70294       1729 \n=======================================================\n 25x256-STCN (GMM)                  69195       1339 \n 25x256-STCN-dense (GMM)            71386       1796 \n=======================================================\n\nModels in the table have a similar number of trainable parameters. The most direct translation of the STCN architecture into an RNN counterpart has 25 stacked LSTM cells with 256 units each. Similar to STCN, we use 5 stochastic layers. Please note that stacking this many LSTM cells is unusual and resulted in instabilities during training. The performance is similar to vanilla RNNs. Hence, we didn’t observe a pattern of improvement with densely connected latent variables. The second LadderRNN configuration uses 5 stacked LSTM cells with 512 units and a one-to-one mapping with the stochastic layers. \n\nThese experiments show that the modular structure of our latent variable design does allow for the usage of different building blocks. Even when attached to LSTM cells, it boosts the log-likelihood performance (see 5x512-LadderRNN), in particular when used with dense connections. However, the empirical results suggest that the densely connected latent hierarchy interacts particularly well with dilated CNNs. We believe this is due to the hierarchical nature of both sides of the architecture. On both datasets STCN models achieved the best performance and presented significant improvements with the dense connections. This supports our contribution of a latent variable hierarchy, which models different aspects of information from the input time-series. \n", "The new updates are much improved, and the direct discussion of closely related work greatly relieves my concern in this area. Thank you for the updates and improvements.\n\nHowever, I cannot accept the MNIST STCN-dense number without extraordinary evidence (the level of which is frankly impossible to give in a double blind conference review). 
It would be a serious issue for any follow-on work, and without extremely strong (to the level of replication / rerunning the code and at least some days of digging) evidence, I cannot update my score due to this point alone.\n\nI *strongly* urge the authors to avoid this particular number (even leaving the pure STCN without dense connections seems fine), as the rest of the results seem quite solid and the contribution of the paper is meaningful - there is no need to have this controversy when the focus of the paper is not really MNIST modeling. Other papers with similarly radical improvements (~62 to far lower) have had to be withdrawn or reworked due to methodology concerns, and I would really not like to see the same thing here, when it isn't necessary for the message or concept of the paper.\n\nAs far as debug strategies if you really, really want to be confident in the result, you can multiply every contribution in the dense connections which is connected to the original input by 0 (this may be tricky), the number should fall back to something reasonable. If it breaks entirely, or if the number stays really low, these are both serious causes for concern. Adding huge amounts of noise on these connections should also force the model to fall back to alternate connections, and shouldn't break things utterly if it is a real scenario - it should fall back to something roughly like the standard STCN.\n\nWithout that particular number as an issue, I would definitely raise my score - the updates address most of my other concerns.", "The focus on novelty (mentioned in both the abstract, and conclusion as a direct claim) in the presentation hurts the paper overall. Without stronger comparison to other closely related work, and lack of citation to several closely related models, the claim of novelty isn't defined well enough to be useful. Describing what parts of this model are novel compared to e.g. Stochastic WaveNet or the conditional dilated convolutional decoder of \"Improved VAE for Text ...\" (linked below, among many others) would help strengthen the novelty claim, if the claim of novelty is needed or useful at all. Stochastic WaveNet in particular seems very closely related to this work, as does PixelVAE. In addition, use of autoregressive models conditioned on (non-variational, in some sense) latents have been shown in both VQ-VAE and ADA among others, so a discussion would help clarify the novelty claim.\n\nEmpirical results are strong, though (related to the novelty issue) there should be greater comparison both quantitatively and qualitatively to further work. In particular, many of the papers linked below show better empirical results on the same datasets. Though the results are not always directly comparable, a discussion of *why* would be useful - similar to how Z-forcing was included.\n\nIn the qualitative analysis, it would be good to see a more zoomed out view of the text (as in VRNN), since one of the implicit claims of the improvement from dense STCN is improved global coherence by direct connection to the \"global latents\". As it stands now the text samples are a bit too local to really tell. In addition, the VRNN samples look quite a bit different than what the authors present in their work - what implementation was used for the VRNN samples (they don't appear to be clips from the original paper)? 
\n\nOn the MNIST setting, there are many missing numbers in the table from related references (some included below), and the >= 60.25 number seems so surprising as to be (possibly) incorrect - more in-depth analysis of this particular result is needed. Overall the MNIST result needs more description and relation to other work, for both sequential and non-sequential models.\n\nThe writing is well-done overall, and the presented method and diagrams are clear. My primary concern is in relation to related work, clarification of the novelty claim, and more comparison to existing methods in the results tables. \n\nVariational Bi-LSTM https://arxiv.org/abs/1711.05717\n\nStochastic WaveNet https://arxiv.org/abs/1806.06116\n\nPixelVAE https://arxiv.org/abs/1611.05013\n\nFiltering Variational Objectives https://github.com/tensorflow/models/tree/master/research/fivo\n\nImproved Variational Autoencoders for Text Modeling using Dilated Convolutions https://arxiv.org/abs/1702.08139\n\nTemporal Sigmoid Belief Networks for Sequential Modeling http://papers.nips.cc/paper/5655-deep-temporal-sigmoid-belief-networks-for-sequence-modeling\n\nNeural Discrete Representation Learning (VQ-VAE) https://arxiv.org/abs/1711.00937\n\nThe challenge of realistic music generation: modelling raw audio at scale (ADA) https://arxiv.org/abs/1806.10474\n\nLearning hierarchical features from Generative Models https://arxiv.org/abs/1702.08396\n\nAvoiding Latent Variable Collapse with Generative Skip Models https://arxiv.org/abs/1807.04863\n\nEDIT: Updated score after second revisions and author responses", "If it is advised by the reviewer, we would be glad to improve Figure 2. We aimed to visualize dense connections and highlight the difference between STCN and STCN-dense models in Figure 2 as a graphical model. Figure 5 (in appendix section) could be used as a replacement of Figure 2.\n\n“... decision to omit dependencies from the distributions p and q at the top of page 5...” this is because we don’t follow standard conditioning procedure. In other words, the top-most layer is only conditioned on d_t^L while the lower layers (l+1) depend on d_t^l and z_t^l.\n\nWe will update Table 3 to the same convention used in other tables, i.e., NLL measured by ELBO.\n", "We are glad that the reviewer finds the paper much improved. Furthermore, we agree that the MNIST experiment is not important to convey the contribution of our work and hence we are happy to remove it since it does not add much in this context. Since discarding only the STCN-dense result only, would result in an incomplete experiment, we suggest to remove the whole MNIST experiment - guidance welcome. We also appreciate the debug suggestions. We will follow up on these.", "This paper introduces a new stochastic neural network architecture for sequence modeling. The model as depicted in figure 2 has a ladder-like sequence of deterministic convolutions bottom-up and stochastic Gaussian units top-down.\n\nI'm afraid I have a handful of questions about aspects of the architecture that I found confusing. I have a difficult time relating my understanding of the architecture described in figure 2 with the architecture shown in figure 1 and the description of the wavenet building blocks. My understanding of wavenet matches what is shown in the left of figure 1: the convolution layers d_t^l depend on the convolutional layers lower-down in the model, thus with each unit d^l having dependence which reaches further and further back in time as l increases. 
I don't understand how to reconcile this with the computation graph in figure 2, which proposes a model which is Markov! In figure 2, each d_{t-1}^l depends only on on the other d_{t-1} units and the value of x_{t-1}, which then (in the left diagram of figure 2) generate the following x_t, via the z_t^l. Where did the dilated convolutions go…? I thought at first this was just a simplification for the figure, but then in equation (4), there is d_t^l = Conv^{(l)}(d_t^{l-1}). Shouldn't this also depend on d_{t-1}^{l-1}…? or, where does the temporal information otherwise enter at all? The only indication I could find is in equation (13), which has a hidden unit defined as d_t^1 = Conv^{(1)}(x_{1:t}).\n\nAdding to my confusion, perhaps, is the way that the \"inference network\" and \"prior\" are described as separate models, but sharing parameters. It seems that, aside from the initial timesteps, there doesn't need to be any particular prior or inference network at all: there is simply a transition model from x_{t-1} to x_{t}, which would correspond to the Markov operator shown in the left and middle sections of figure 2. Why would you ever need the right third of figure 2? This is a model that estimates z_t given x_t. But, aside from at time 0, we already have a value x_{t-1}, and a model which we can use to estimate z_t given x_{t-1}…!\n\nWhat are the top-to-bottom functions f^{(l)} and f^{(o)}? Are these MLPs?\n\nI also was confused in the experiments by the >= and <= on the reported numbers. For example, in table 2, the text describes the values displayed as log-likelihoods, in which case the ELBO represents a lower bound. However, in that case, why is the bolded value the *lowest* log-likelihood? That would be the worst model, not the best — does table 2 actually show negative log-likelihoods, then? In which case, though, the numbers from the ELBO should be upper bounds, and the >= should be <=. Looking at figure 4, it seems like visually the STCN and VRNN have very good reconstructions, but the STCN-dense has visual artifacts; this would correspond with the numbers in table 2 being log-likelihoods (not negative), in which case I am confused only by the choice of which model to bold.\n\n\n\nUPDATE:\n\nThanks for the clarifications and edits. FWIW I still find the depiction of the architecture in Figure 2 to be incredibly misleading, as well as the decision to omit dependencies from the distributions p and q at the top of page 5, as well as the use in table 3 of \"ELBO\" to refer to a *negative* log likelihood.\n", "***Missing citations and novelty claim\nWe thank the reviewer for useful pointers to additional related papers. In the revised version, we added a more complete related work section. In particular, we discuss the most closely related Stochastic Wavenet paper in detail. While SWaveNet and ours combine TCNs with stochastic variables there are important differences in how this is achieved. Furthermore, we show that these design choices have implications in terms of modelling power and our architecture outperforms SWaveNet despite not having access to future information. Furthermore, we provide log-likelihood results from Variational Bi-LSTM and Stochastic Wavenet are inserted into the result table. In order to provide more evidence, we also include experiments on the Blizzard dataset. 
\n\nWe would like to emphasize that the main difference between our model and the models with autoregressive decoders (i.e., PixelVAE, Improved Variational Autoencoders for Text Modeling using Dilated Convolutions) is the sequential structure of our latent space. For every timestep x_t we have a corresponding latent variable z_t, similar to stochastic RNNs, which helps modeling the uncertainty in sequence data. We aim to combine TCNs with a powerful latent variable structure to better model sequence data rather than learning disentangled or interpretable representations. The updated results show that our design successfully preserves the modeling capacity of TCNs and representation power of latent variables.\n\n*** Handwriting sample figure.\nIn order to make a direct comparison, we include a new figure (similar to VRNN) comparing generated handwriting samples of VRNN, Stochastic Wavenet and STCN-dense. The original figure referred to by the reviewer is now in the Appendix.\n\n*** MNIST results\n(Also see the answer to R1) We include a new figure comparing the performance of STCN, STCN-dense and VRNN on single test samples from seq-MNIST. We find that STCN-dense makes very precise probability predictions for the pixel values as opposed to other models, this explains the drastic increase in likelihood performance. \nWe include a table providing KL loss per latent variable across the whole dataset. We also provide a comparison between SKIP-VAE (Avoiding Latent Variable Collapse with Generative Skip Models) and our model. It shows that STCN-dense effectively uses the latent space capacity (indicated by high KL values) and encodes the required information to reconstruct the input sequence. We also provide generated MNIST samples in order to show that the discrepancy between the prior and approximate posterior does not degrade generative modeling capacity.\nFinally, in our MNIST experiments, we followed Z-forcing paper’s instructions. See reply to R1 for details of the experimental protocol. \n", "*** Clarifications for figures and equations\nWe apologize for the confusion. As the reviewer mentions the dilated convolutional stacks d_t^l has dependency reaching further and further back in time. \nIn the original Fig. 2 we aimed to simplify the model details and show only a graphical model representation. The caption provides an explanation of the (updated) figure in the revised version. Moreover, the “Conv” equation (Eq. 2 in the revised version) is now a corrected to be a function of multiple time-steps, explicitly showing the hierarchy across time.\n\n***Details of the inference and generative networks\nThe difference between the prior and the approximate posterior, i.e., inference network are the respective input time-steps. The prior at time-step t is conditioned on all the input sequence until t-1, i.e., x_{1:t-1}. The inference network, on the other hand, is conditioned on the input until step t, i.e., x_{1:t}. \nAt sampling time, we only use the prior. In other words, the prior sample z_t (conditioned on x_{1:t-1}) is used to predict x_t. Here we follow the dynamic prior concept of Chung et al. (2015). During training of the model, the KL term in the objective encourages the prior to be predictive of the next step. 
\n\n*** f^{(l)} and f^{(o)} functions.\nf^{(l)} stands for neural network layers consisting of 1d convolution operations with filter size 1: Conv -> ReLu -> Conv -> ReLu which is then used to calculate mu and sigma of a Normal distribution.\n\nf^{(o)} corresponds the output layer of the model. Depending on the task we either use 1d Conv or Wavenet blocks. Network details are provided in the appendix of the revised paper.\n\n*** Clarification on MNIST results.\nThis was indeed a typo. We report negative log-likelihood performance, measured by ELBO. We correct this in the revised version.\nIn Fig. 4 (in the submitted version) we wanted to emphasize that STCN-dense can reconstruct the low-level details such as noisy pixels, which results in large improvement in the likelihood. We agree the STCN and VRNN provide smoothed and perceptually beautiful results. However, such enhancements lower the likelihood performance. Since the figure did not convey this clearly, we updated the figure in the revised version.\f\n\n***References\nJunyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980–2988, 2015.\n", "***\"significant challenges that the authors overcame in reaching the proposed method.\"\nThe goal of our work was to design a modular extension to the vanilla TCN, while improving the modelling capacity via the introduction of hierarchical stochastic variables. In particular, we did not want to modify deterministic TCN layers (as is the case for Stochastic WaveNet, Lai et al., 2016) since this may limit scalability, flexibility and may limit the maximum receptive field size. \nThese goals are motivated by findings from the initial phases of the project: \n1) Initial attempts involved standard hierarchical latent variable models, none outperformed the VRNN baseline. \n2) The precision-weighted update of approximate posterior, akin to LadderVAEs, significantly improved experimental results. \n3) As can be seen from our empirical results, the increasing receptive field of TCNs provides different information context to different latent variables. This enables our architectures to more efficiently leverage the latent space and partially prevents latent space collapse issues highlighted in the literature (Dieng et al., 2018, Zhao et al., 2017). The introduction of skip connections from every latent variable to the output layer directly in the STCN-dense variant seems to afford the network the most flexibility in terms of modelling different datasets (see p.8 & Tbl. 3 in the revised paper).\n\n*** Effectiveness of TCN and densely connected latent variables\nThanks for the interesting question. We agree that using multiple levels of the latent variables directly to make predictions is very effective. As we explain in the revised version of our submission, in STCN and STCN-dense models, the latent variables are provided with a different level of expressiveness. Hence, depending on the task and dataset, the model can focus on intermediate variables which have a different context. We think that this is an important aspect of our work, which can only be achieved by using the dilated CNNs. One can stack RNN cells similar to TCN blocks and use our densely connected latent space concept. In this scenario, the hierarchy would only be implicitly defined by the network architecture. 
However, since the receptive field size does not change throughout the hierarchy it is unclear whether the same effectiveness would be attained. Moreover, we note that combining our hierarchical stochastic variables with stacked LSTMs would inverse the effect on computational efficiency that we gain from the TCNs. \n\n***“MNIST performance\nYes, binarization of the MNIST is fixed in advance. We followed the procedure detailed in the Z-forcing paper closely. Naturally, we will release code and pre-processing scripts so that the results can be verified. Here is our experimental protocol:\n1) We used the binarized MNIST dataset of Larochelle and Murray (2011). It was downloaded from http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist/binarized_mnist_train.amat\n2) We trained all models without any further preprocessing or normalization. The first term of the ELBO, i.e., the reconstruction loss, is measured via binary cross-entropy. \nWe provide an in-depth analysis in the revised version, showing that the STCN-dense architecture makes very precise probability predictions, also for pixel values close to character discontinuities. This provides very accurate modeling of edges and in consequence, gives very good likelihood performance. See (new) Figure 4 in the revised version.\n\n*** Clarifications\nWe updated and clarified the Figure in the revised version. The generative model only relies on the prior. At sampling time, samples from the prior latent variables are used both in prediction of the observation and computation of the next layer’s latent variable. Therefore the generative model takes the input sequence until t-1, i.e., x_{1:t-1} in order to predict x_t.\n“The term \"kla\" appears in table 1, but it seems that it is otherwise not defined. I think this is the same term and meaning that appears in Goyal et al. (2017), but it should obviously be defined here.”\nYes. It stands for annealing of the weight of KL loss term. We now clarified the language in tables and captions. \n\n***References\nLai, G., Li, B., Zheng, G., & Yang, Y. (2018). Stochastic WaveNet: A Generative Latent Variable Model for Sequential Data. arXiv preprint arXiv:1806.06116.\nAdji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. Avoiding latent variable collapse with generative skip models. arXiv preprint arXiv:1807.04863, 2018.\nShengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from generative models. arXiv preprint arXiv:1702.08396, 2017.\nLarochelle, Hugo, and Iain Murray. The neural autoregressive distribution estimator. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 2011.", "We thank all reviewers for their constructive comments. Our work combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces. Based on the reviewer’s feedback we have prepared an updated revision of the paper. Furthermore, we will respond to each review in a detailed manner below.\n\nThe most important changes in the revised version can be summarized as follows:\n- We cleaned up the description of the background, method and improved the figures describing our model. 
\n- We include an extensive discussion of related work as suggested by R3 and include direct comparisons to the state-of-the-art, where possible.\n- During our experiments, we found that using separate \\theta and \\phi parameters for f^{l} is much more efficient, than to share the parameters of f^{l} (i.e., layers calculating mean and sigma of Normal distributions of the latent variables) for the prior and approximate posterior as suggested by Sønderby et al. (2016) and as was the case at submission time.\n- With this change implemented, we re-ran experiments and updated the tables in the paper. On IAM-OnDB, Deepwriting, TIMIT and MNIST we now report state-of-the-art log-likelihood results (even compared to additional models listed by R3). We also evaluate our model on the Blizzard dataset where only the Variational Bi-LSTM architecture is marginally better than STCN-dense (i.e., 17319 against 17128) but has access to future information.\n- We include additional results on MNIST and provide insights why STCN-dense gives a large improvement in terms of reconstruction.\n- We updated figures and equations throughout to improve clarity of presentation. \n\n-----\nCasper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in neural information processing systems, pp. 3738–3746, 2016.\n", "This paper presents a generative sequence model based on the dilated CNN\npopularized in models such as WaveNet. Inference is done via a hierarchical\nvariational approach based on the Variational Autoencoder (VAE). While VAE\napproach has previously been applied to sequence modeling (I believe the\nearliest being the VRNN of Chung et al (2015)), the innovation where is the\nintegration of a causal, dilated CNN in place of the more typical recurrent\nneural network. \n\nThe potential advantages of the use of the CNN in place of\nRNN is (1) faster training (through exploitation of parallel computing across\ntime-steps), and (2) potentially (arguably) better model performance. This\nsecond point is argued from the empirical results shown in the\nliterature. The disadvantage of the CNN approach presented here is that\nthese models still need to generate one sample at a time and since they are\ntypically much deeper than the RNNs, sample generation can be quite a bit\nslower.\n\nNovelty / Impact: This paper takes an existing model architecture (the\ncausal, dilated CNN) and applies it in the context of a variational\napproach to sequence modeling. It's not clear to me that there are any\nsignificant challenges that the authors overcame in reaching the proposed\nmethod. That said, it certainly useful for the community to know how the\nmodel performs.\n\nWriting: Overall the writing is fairly good though I felt that the model\ndescription could be made more clear by some streamlining -- with a single\npass through the generative model, inference model and learning. \n\nExperiments: The experiments demonstrate some evidence of the superiority\nof this model structure over existing causal, RNN-based models. One point\nthat can be drawn from the results is that a dense architecture that uses multiple levels of the\nlatent variable hierarchy directly to compute the data likelihood is\nquite effective. This observation doesn't really bear on the central message\nof the paper regarding the use of causal, dilated CNNs. \n\nThe evidence lower-bound of the STCN-dense model on MNIST is so good (low)\nthat it is rather suspicious. 
There are many ways to get a deceptively good\nresult in this task, and I wonder if all due care what taken. In\nparticular, was the binarization of the MNIST training samples fixed in\nadvance (as is standard) or were they re-binarized throughout training? \n\nDetailed comments:\n- The authors state \"In contrast to related architectures (e.g. (Gulrajani et\nal, 2016; Sonderby et al. 2016)), the latent variables at the upper layers\ncapture information at long-range time scales\" I believe that this is\nincorrect in that the model proposed in at least Gulrajani et al also \n\n- It also seems that there is an error in Figure 1 (left). I don't think\nthere should be an arrow between z^{2}_{t,q} and z^{1}_{t,p}. The presence\nof this link implies that the prior at time t would depend -- through\nhigher layers -- on the observation at t. This would no longer be a prior\nat that point. By extension you would also have a chain of dependencies\nfrom future observations to past observations. It seems like this issue is\nisolated to this figure as the equations and the model descriptions are\nconsistent with an interpretation of the model without this arrow (and\nincluding an arrow between z^{2}_{t,p} and z^{1}_{t,p}.\n\n- The term \"kla\" appears in table 1, but it seems that it is otherwise not\ndefined. I think this is the same term and meaning that appears in Goyal et\nal. (2017), but it should obviously be defined here.\n" ]
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, 5 ]
[ "BkeTXrG6nm", "SJlBz456pX", "iclr_2019_HkzSQhCcK7", "Syg22hUs2Q", "Skl3BDFoA7", "iclr_2019_HkzSQhCcK7", "SJghxFJ5hm", "Syg22hUs2Q", "BkeTXrG6nm", "iclr_2019_HkzSQhCcK7", "iclr_2019_HkzSQhCcK7" ]
iclr_2019_HyEtjoCqFX
Soft Q-Learning with Mutual-Information Regularization
We propose a reinforcement learning (RL) algorithm that uses mutual-information regularization to optimize a prior action distribution for better performance and exploration. Entropy-based regularization has previously been shown to improve both exploration and robustness in challenging sequential decision-making tasks. It does so by encouraging policies to put probability mass on all actions. However, entropy regularization might be undesirable when actions have significantly different importance. In this paper, we propose a theoretically motivated framework that dynamically weights the importance of actions by using the mutual-information. In particular, we express the RL problem as an inference problem where the prior probability distribution over actions is subject to optimization. We show that the prior optimization introduces a mutual-information regularizer in the RL objective. This regularizer encourages the policy to be close to a non-uniform distribution that assigns higher probability mass to more important actions. We empirically demonstrate that our method significantly improves over entropy regularization methods and unregularized methods.
accepted-poster-papers
The paper proposes a new RL algorithm (MIRL) in the control-as-inference framework that learns a state-independent action prior. A connection is provided to mutual information regularization. Compared to entropic regularization, this approach is expected to work better when actions have significantly different importance. The algorithm is shown to beat baselines in 11 out of 19 Atari games. The paper is well written. The derivation is novel, and the resulting algorithm is interesting and has good empirical results. A few concerns were raised in initial reviews, including certain questions about experiments and potential negative impacts of the use of nonuniform action priors in MIRL. The author responses and the new version were quite helpful, and all reviewers agree the paper is an interesting contribution. In a revised version, the authors are encouraged to (1) include a discussion of when MIRL might fail, and (2) improve the related work section to compare the proposed method to other entropy regularized RL (sometimes under a different name in the literature), for example the following recent works and the references therein: https://arxiv.org/abs/1705.07798 http://proceedings.mlr.press/v70/asadi17a.html http://papers.nips.cc/paper/6870-bridging-the-gap-between-value-and-policy-based-reinforcement-learning http://proceedings.mlr.press/v80/dai18c.html
train
[ "HJe0z7jRJE", "Hkg8ZQo0k4", "ryea6zsRy4", "rkeC9MsAkE", "SkeEX3Q6JV", "ryl__Ymjh7", "SkgnPit3k4", "SyejfjKhkN", "H1l9u2InyN", "Byx_Yunv37", "HJepxiJ507", "SJe43cJcR7", "BJgtvCRFCQ", "rJl62p0F0X", "B1lIeTW537", "Skgno2oEhX", "rJx6sNdCom", "rkxW2SgCo7", "HJxlECDns7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public", "author", "public" ]
[ "We are thankful to the reviewer for noticing the improvements and raising the score. \n", "We thank the reviewer for appreciating the improvements of the paper. \n\n\nThe attached link indeed shows a different epsilon value for evaluation (and other hyperparameters) used in this particular DQN implementation. An epsilon value for evaluation that differs from 0.05 was used in some of the previous literature (e.g. distributed DQN in Bellemare et al. 2017, prioritized double DQN in Schaul et al. 2016). However, earlier DQN papers do report an epsilon value of 0.05 for evaluation (original DQN in Mnih et al. 2015, double DQN in van Hasselt et al. 2016, prioritized DQN in Schaul et al. 2016). While an epsilon value of 0.01 might improve evaluation results, we feel a value of 0.05 is not unreasonable since we compare all methods under the same evaluation procedure. Additionally, we chose the other hyperparameters following the original DQN paper (Mnih et al. 2015).\n\n", "We thank the reviewer for the feedback leading to improvements of the paper. \n\nIn the final version, we will add a couple of additional sentences clarifying why a limit on information rate might be beneficial at initial stages of learning. In short, in prior work, it has been shown that the rate-distortion framework improves generalization in a supervised learning setting (Leibfried and Braun 2016). The intuition is that limits in transmission rate prevent overfitting on the training set. Similarly, in our work for the RL setting, limits in transmission rate prevent the agents from bootstrapping with a ‘harsh’ max-operator that would lead to overestimation and sample inefficiency; instead, they use a softened version less prone to overestimation, with an adaptive prior that additionally improves exploration. \n", "We thank the reviewer for raising the score and for the additional suggestions on analyzing potential limitations and drawbacks of our method. We will include a paragraph clarifying where our method might fail according to our pilot experiments, and perform additional experiments with a reward structure discouraging an infrequent action that is required to eventually succeed.", "I would like to thank the authors for their comments (both to mine and the other reviews) and the updated paper.\n\nThe changes improve the paper; correspondingly, I raised my score from 6 to 7.\n\nHowever, I still believe that more informative experiments about the limitations and drawbacks of the proposed method would highly increase the value to the community, as it would allow readers to better judge whether the method should be incorporated in their work and, more importantly, it could point towards further research opportunities to improve on the presented work.\nConsequently, I would strongly encourage the authors to incorporate such experiments in their CRC version if the paper gets accepted. \n(I don't believe the current gridworld experiment actually shows the limitations, as its reward structure doesn't discourage the infrequent action until _after_ the first and only reward was already found). ", "** Summary: **\n\nThe authors use the reformulation of RL as inference and propose to learn the prior policy. The novelty lies in learning a state-independent prior (instead of a state-dependent one) that can help exploration in the presence of universally unnecessary actions. 
They derive an equivalence to regularizing the mutual information between states and actions.\n\n** Quality: **\nThe paper is mathematically detailed and correct.\n\n** Clarity: **\nThe paper is sufficiently easy to follow and explains all the necessary background.\n\n** Originality & Significance: **\nThe paper proposes a novel idea: Using a learned state-independent prior as opposed to using a learned state-dependent prior. While not a big change in terms of mathematical theory, this could lead to positive and interesting results empirically for exploration. Indeed they show promising results on Atari games: It is easy to see how Atari games could benefit as they have up to 18 different actions, many of which are redundant. \n\nMy two main points where I think the paper could improve are:\n- More experimental results, in particular, how strong are the negative effects of MIRL if we have actions that are important, but have a lower probability in the stationary action distribution?\n- A related work section comparing their approach to the many recent similar papers in Maximum Entropy RL", "These answers address my questions.", "1. Great, the changes have improved clarity.\n\n2. The values are substantially different than previous work. See here for a summary of previous settings (https://github.com/google/dopamine/tree/master/baselines). This raises a red flag for the experiments.\n\n3. Great, appreciate the new experiments.", "Thank you for response.\n\nI think this mostly addresses the concerns I raised.\n\nI appreciate the additional information regarding the rate-distortion, although I'm not sure that this view is adding much over the more usual view (why limit the rate of information encoded by policy?).\n\nOverall, I think this is interesting work and now better addresses prior work.\n\nMy score was marginally positive, and I remain at this mostly due the idea being relatively straightforward and the gains being fairly marginal.", "The authors take the control-as-inference viewpoint and learn a state-independent prior (which is typically held fixed). They claim that this leads to better exploration when actions have different importance. They relate this objective to a mutual information constrained RL objective in a limiting case. They then propose a practical algorithm, MIRL and compare their algorithm against DQN and Soft Q-learning (SQL) on 19 Atari games and demonstrate improvements over both.\n\nGenerally I found the idea interesting and at a high level the deficiency of entropy regularization makes sense. However, I had great trouble understanding the reasoning behind their method and did not find the connection to mutual information helpful. Furthermore, I had a number of questions about the experiments. If the authors can clarify their motivation and reasoning and strengthen the experiments, I'd be happy to raise my score.\n\nIn Sec 3.1, why is it sensible to optimize the prior? Can the authors give intuition for maximizing \\log p(R = 1) wrt to the prior? This is critical for justifying their approach. Currently, the authors provide a connection to MI, but don't explain why this matters. Does it justify the method? What insight are we supposed to take away from that? \n\nThe experiments could be strengthened by addressing the following:\n* What was epsilon during training? Why was epsilon = 0.05 in evaluation? 
This is quite high compared to previous work, and it makes sense that this would degrade MIRLs performance less than DQN and SQL.\n* What is the performance of SQL if we use \\rho as the action selector in \\epsilon-greedy. This would help understand if the performance gains are due to the impact on the policy or due to the changes in the behavior policy.\n* Plotting beta over time\n* Comparing the action distributions for SQL and MIRL to understand the impact of the penalty. In general, a deeper analysis of the impact on the policy is important. \n* Are their environments we would expect MIRL to outperform SQL based on your theoretical understanding? Does it?\n* How many seeds were run per game?\n* How and why were the 19 games selected from the full set?\n\nComments:\n\nThe abstract claims state-of-the-art performance, however, what is actually shown is that MIRL outperforms DQN and SQL.\n\nWith a fixed prior, the action prior can be absorbed into the reward (e.g., Levine 2018), so it is of no loss of generality to assume a uniform prior.\n\nCould state that the stationary distribution is assumed to exist and be unique.\n\nIn Sec 3.1, why is the prior state independent?\n\nIn Sec 3.1, p(R = 1|\\tau) is defined to be proportional to exp(\\beta \\sum_t r_t). Is this well-specified? How would we compute the normalizing constant since p(R = 0 | \\tau) is not defined?\n\nThroughout, I suggest that the authors not use the phrases \"closed form\" and \"analytic\" for expressions that are in terms of intractable quantities. \n\nIt should be noted that Sec 3.2 Optimal policy for a fixed prior \\rho follows from Levine 2018 and others by transforming the fixed prior into a reward bonus.\n\nIn Sec 3.2, the last statement does not appear to be necessary for the next subsection. Remove or clarify?\n\nI believe that the connection to MI can be simplified. Plugging in the optimal \\rho into Eq 3, we can see that Eq 3 simplifies to \\max_\\pi E_q[ \\sum_t \\gamma^t r_t] - (1 - gamma)/\\beta MI_p(s, a) where p(s, a) = d^\\pi(s) * \\pi(a | s) and d^\\pi is the discounted state visitation distribution. Thus Eq 3 can be thought of as a lower bound on the MI regularized objective.\n\nIn Sec 4, the authors state the main difference between their soft operator and the typical soft operator. What other differences are there? Is that the only one?\n\nSec 5 references the wrong Haarnoja reference in the first paragraph.\n\nIn Sec 5, alpha_beta = 3 * 10^5. Is that correct?\n\n=====\n11/26\nAt this time, the authors have not responded to the reviews. I have read the other reviews and comments, and I'm not inclined to change my score.\n\n====\n12/7\nThe authors have addressed most of my concerns, so I have raised my score. I'm still concerned that the exploration epsilon is quite different than existing work (e.g., https://github.com/google/dopamine/tree/master/baselines).", "Comments:\nThe abstract claims state-of-the-art performance, however, what is actually shown is that MIRL outperforms DQN and SQL.\n\n---------->[Attenuated wording] We have adjusted the formulation regarding the performance in the paper. We outperform DQN and SQL, both recent and high-performing algorithms (though not the best algorithms on ATARI). 
Our normalized scores are also close to those reported in the recent state-of-the art RAINBOW paper, but we cannot make a direct comparison over different implementations and subsets of games.\n\n\n With a fixed prior, the action prior can be absorbed into the reward (e.g., Levine 2018), so it is of no loss of generality to assume a uniform prior.\n\n--------------->[Absorbing prior into reward] In case of a uniform prior that is unaffected in the course of training, this is possible. In our algorithm, the prior is adapted in the course of training. In this case, keeping the prior separate allows for overcoming the problem of non-stationarity in the reward function.\n\nCould state that the stationary distribution is assumed to exist and be unique.\n\n------------>[Unique stationary state distribution] We state now in the paper that the stationary distribution is assumed to exist and be unique.\n\n\nIn Sec 3.1, why is the prior state independent?\n---------->[State-independent prior] We base our formulation on the rate-distortion framework that generalizes entropy regularization by having optimal state independent priors. We provide some intuition for the one-step decision-making case in the background section.\n\n\nIn Sec 3.1, p(R = 1|\\tau) is defined to be proportional to exp(\\beta \\sum_t r_t). Is this well-specified? How would we compute the normalizing constant since p(R = 0 | \\tau) is not defined?\n\n----------->[Normalization constant] It is not required to compute the normalization constant explicitly since it would appear in Equation 5 as a constant that is unaffected by the optimization. More explicitly, the expectation of the log of the normalization constant of p(R=1|\\tau) w.r.t. q(\\tau) is just the log of the normalization constant of p(R=1|\\tau) without the expectation.\n\nThroughout, I suggest that the authors not use the phrases \"closed form\" and \"analytic\" for expressions that are in terms of intractable quantities.\n\n----------->[Wording] We modified the wording accordingly in the current version of the paper.\n\nIt should be noted that Sec 3.2 Optimal policy for a fixed prior \\rho follows from Levine 2018 and others by transforming the fixed prior into a reward bonus.\n\nIn Sec 3.2, the last statement does not appear to be necessary for the next subsection. Remove or clarify?\n---------->[[Clarity] We added some clarifications to this section.\n\nI believe that the connection to MI can be simplified. Plugging in the optimal \\rho into Eq 3, we can see that Eq 3 simplifies to \\max_\\pi E_q[ \\sum_t \\gamma^t r_t] - (1 - gamma)/\\beta MI_p(s, a) where p(s, a) = d^\\pi(s) * \\pi(a | s) and d^\\pi is the discounted state visitation distribution. Thus Eq 3 can be thought of as a lower bound on the MI regularized objective.\n----------->[On simplified connection to MI] We moved the connection to Mutual information for the case of gamma -> 1 to the appendix, and adopted another way to show this connection similar to what the reviewer has proposed.\n\nIn Sec 4, the authors state the main difference between their soft operator and the typical soft operator. What other differences are there? Is that the only one?\n------------>The two main differences are an adaptive prior and adaptive beta.\n\nSec 5 references the wrong Haarnoja reference in the first paragraph. In Sec 5, alpha_beta = 3 * 10^5. Is that correct?\n----------->We corrected this typo. 
It should be 3*10^-5.\n", "We are sorry for the delayed reply (the deadline was extended to the end of 26th November Anywhere on Earth time). We state the reviewers comments and denote with arrows ( ---------> ) our replies.\n\nThe authors take the control-as-inference viewpoint and learn a state-independent prior (which is typically held fixed). They claim that this leads to better exploration when actions have different importance. They relate this objective to a mutual information constrained RL objective in a limiting case. They then propose a practical algorithm, MIRL and compare their algorithm against DQN and Soft Q-learning (SQL) on 19 Atari games and demonstrate improvements over both.\n\nGenerally I found the idea interesting and at a high level the deficiency of entropy regularization makes sense. However, I had great trouble understanding the reasoning behind their method and did not find the connection to mutual information helpful. Furthermore, I had a number of questions about the experiments. If the authors can clarify their motivation and reasoning and strengthen the experiments, I'd be happy to raise my score.\n\nIn Sec 3.1, why is it sensible to optimize the prior? Can the authors give intuition for maximizing \\log p(R = 1) wrt to the prior? This is critical for justifying their approach. Currently, the authors provide a connection to MI, but don't explain why this matters. Does it justify the method? What insight are we supposed to take away from that?\n\n-------------> [On prior optimization and mutual-information] We extended the paper with an explanation on mutual information and rate distortion theory, in order to help with an intuitive understanding of why this prior can help learning. We also added a related work section to note that other algorithms have considered optimizing the ELBO with respect to both variational and prior policy. However, these approaches do not use the marginal prior or have any connection to mutual information but instead optimise the policy while staying close to the previous policy. Additionally, we moved the connection to Mutual information for the case of gamma -> 1 to the appendix, and adopted another way to show this connection similar to what the reviewer has proposed.\n\n\n\nThe experiments could be strengthened by addressing the following:\n* What was epsilon during training? Why was epsilon = 0.05 in evaluation? This is quite high compared to previous work, and it makes sense that this would degrade MIRLs performance less than DQN and SQL.\n\n----------->[Epsilon in training and evaluation] Epsilon during training was decayed from 1.0 to 0.1 over the first 10^6 steps of the experiment. We used a fixed evaluation epsilon of 0.05. This procedure is standard in the literature for ATARI, as introduced by the DQN paper (see e.g. Mnih et al, 2015 ). We understand that in later DQN papers (e.g. Rainbow) different values for these hyperparameters have been used but we feel our choice is not unreasonable.\n\n\n\n* What is the performance of SQL if we use \\rho as the action selector in \\epsilon-greedy. This would help understand if the performance gains are due to the impact on the policy or due to the changes in the behavior policy.\n\n----------->[On marginal exploration] We have run additional experiments combining SQL with marginal exploration. 
Using the marginal exploration helps SQL, but MIRL still achieves the best performance.\n\n* Plotting beta over time\n----------->[Plotting beta] We include the beta values evolving over time in the appendix. Additionally, we also include a more relevant term (beta x Qvalues).\n* Comparing the action distributions for SQL and MIRL to understand the impact of the penalty. In general, a deeper analysis of the impact on the policy is important.\n* Are their environments we would expect MIRL to outperform SQL based on your theoretical understanding? Does it? * How many seeds were run per game?\n----------->[Policy and grid world] Responding the previous two questions: We have added additional experiments and plots to the paper in an effort to provide more insight into the behavior of our method. These experiments include a simple grid world in which we expect MIRL to outperform SQL and a grid world in which we expect the prior to have negative effects (as suggested by another reviewer).\n* How and why were the 19 games selected from the full set?\n------------->[On other aspects] Due to computational constraints we were not able to run experiments on the full set of ATARI games. Therefore, we selected a subset of 20 random games, without prior experimentation on any of the games. We then evaluated our method using a single seed for every game. Data for experiments on 1 game were lost because of a cloud instance failure. \n\n\n", "We thank the reviewer for the comments. Below we attempt to address each of the points raised by the reviewer.\n\nBackground and related work:\n\nWe have expanded the paper with a section highlighting the connection between the rate distortion framework and the mutual information constraint. We hope that this connection can help providing some intuitive insight into why our method can improve performance.\n\nWe have also added a related work section more clearly positioning our work with respect to existing algorithms (such as MPO and DistRL).\n\nExperiments:\n\nWe have included a new set of experiments on a small tabular domain. While simple, we hope that this domain can provide more insight into the performance of the algorithm.\n\n\nDue to computational constraints we were not able to perform a complete search for optimal hyperparameter combinations in the Atari domain. Hyperparameter values were chosen by using values reported in the literature. Values for the new parameters introduced by MIRL were fixed by running a small number of exploratory experiments. Overall, we found the algorithm to be robust to changes in these values. All other hyperparameters were kept the same for all algorithms. \n\n\nWhile it is true that the prior does not converge in all of our ATARI experiments, we note that during the later stages of learning the plots do show a higher probability for subsets of actions. We have empirically observed that convergence of the prior can take a very long time, especially when the learner is still improving. We expect that, given enough time, the probabilities of the marginal policy will eventually settle. Additionally, in these experiments we used a non-decaying learning rate for the marginal policy. This means that we can expect some oscillation due to tracking behaviour of our approximation, while the policy and state distribution still change.\n", "We thank the reviewer for the comments. \n\nWe have updated the manuscript with additional experiments in a grid-world domain aimed at answering the reviewer’s concerns. 
The additional experiments are aimed at better understanding the behaviour of our mutual-information constraint. We demonstrate that our method clearly improves learning speed when there is a strong preference for a single action in the optimal policy. We also examine an example in which the optimal policy crucially depends on an action with low probability in the marginal distribution. While MIRL does not improve performance in this case, it does not exhibit negative effects. We show that the learnt policy overcomes the prior when necessary for performance. \n\nAdditionally, we have added a related work section that positions and compares our work to the existing literature on inference-based RL and maximum entropy RL in particular.\n", "This work introduces SoftQ with a learned, state-independent prior. One derivation of this objective follows standard approaches from an RL as inference to derive the ELBO objective.\n\nA more novel view derived here connects this objective with the rate-distortion problem to view the objective as an RL objective subject to a constraint on the mutual information between the state and action distribution.\n\nThey also outline a practical off-policy algorithm for optimizing this objective and compare it with Soft Q Learning (essentially, the same method but with a flat-prior) and DQN. They find that this results in small gains across most Atari games, with big gains for a few games.\n\nThis work is well-explained except in one-aspect. The rate-distortion view of the objective is not well-justified. In particular, why is it desirable in the context of RL to constrain this mutual information?\n\nEmpirical Deep RL performance is notoriously difficult to test (e.g. Henderson et al., 2017). The hyper-parameters are simply stated here, but no justification is given for how they are chosen / whether the baselines perform better under different choices. Given the gains compared with SoftQ are not that large, this information is important for understanding how much weight to place on the empirical result.\n\nThe fact that the prior does not converge in some environments (e.g. Seaquest) is noted, but it seems this bears further discussion.\n\nOverall it appears this work provides:\n- An algorithm for Soft Q learning with a learned independent prior\n- Moderate evidence for gains compared with a flat prior on Atari.\n- A connection with this approach and regularization by constraining the mutual information between state and action distributions.\n\nIt could be made a stronger piece of work by showing improvements in domains others than Atari, justifying the choice of regularization more. It would also benefit from positioning this work more clearly in relation to related approaches such as MPO (non-parametric state-dependent prior) and DistRL (state-dependent prior but shared across all games).", "Both our algorithm and MPO can be seen as optimizing the same evidence lower bound (ELBO). MPO proposes a general coordinate ascent type optimization in which the ELBO is updated in alternating steps, either with respect to the variational policy or the prior policy (while the other policy is kept fixed). Different design choices for the policies and optimization procedures give rise to different, but related algorithms. This approach is also common in variational inference based policy search and describes a large family of related policy search algorithms (see Deisendroth et al, 2013 for an overview.)\n\nOur algorithm follows recent soft Q-learning algorithms (e.g. 
Fox et al, 2016, Haarnoja et al. 2017). These algorithms consider the same ELBO, but omit the optimization with respect to the prior policy and only optimize the variational policy pi. This can be seen as an entropy-regularized version of standard Q-learning algorithms. When the prior is fixed to be a constant uninformative policy, this procedure reduces to max-entropy policy learning. The algorithm replaces the classic Bellman operator with a soft Bellman-operator to prevent deviations from a state-independent fixed prior policy. Several papers (e.g.Haarnoja et al 2017, Schulman et al 2017 ) have shown that these “softened” algorithms offer advantages over their unsoftened counterparts, in terms of exploration, generalization and composability. Our approach further improves on soft Q-learning (as shown in our Atari experiments) by allowing for optimizing the prior (while still being state-independent). As shown in the paper, this results in a mutual information constraint (rather than a max entropy constraint) on the resulting policy.\n\nSo while we follow the same general scheme as soft Q-learning, we do update our prior policy as in the MPO algorithm. However, contrary to MPO, we do not consider the alternating, coordinate descent style optimization. Rather than executing a separate prior maximization step, we solve the ELBO for the optimal prior in the special case of state-independent priors. We then directly estimate this optimal prior in our algorithm, instead of performing a gradient style update on the ELBO. While it is possible to consider the same class of state-independent priors with MPO, the way in which both algorithms optimize the ELBO will still be different. \n\nA modified MPO that uses a state-independent generative policy would converge to a solution that is penalized by an optimal marginal policy. However, since the parameter epsilon (that determines the deviation between the variational and the generative policy) is fixed and not scheduled in the course of training, the final solution is still constrained by the marginal policy which is sub-optimal because it is state-independent. This constraint would essentially limit the asymptotic performance of such a modified MPO. Of course, this could be alleviated by setting epsilon to a large value but this would correspond to an ordinary actor critic approach without any regularization in the policy.\n\nIf the prior policy in our algorithm is replaced by a state-dependent prior, the optimal solution for such a prior is the variational policy (i.e. pi) itself. This essentially would eliminate the KL-constraint and reduce our algorithm to standard Q-learning. Q-learning is known to suffer from sample-inefficiency caused by the hard max-operator in the target (this leads to overestimated q-values). This is exactly the problem that was been addressed by soft Q-learning with entropy regularization. \n\nDeisenroth, M. P., Neumann, G., & Peters, J. (2013). A survey on policy search for robotics. Foundations and Trends® in Robotics, 2(1–2), 1-142.\n\nSchulman, J., Chen, X., & Abbeel, P. (2017). Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440.\n\nHaarnoja, T., Tang, H., Abbeel, P., & Levine, S. (2017). Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165.\n\nFox, Roy, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. 
UAI (2016).", "Thank you very much for the reply.\n\nThen if MPO use a state-independent generative policy, it will reduce to the proposed algorithm?\nI understand that a learned state-independent generative policy is better than a uniform one. My question is that, why state-independent generative policy should be better than state-dependent generative policy as used by MPO?", "Thank you for your comment. \n\nFraming RL as an inference problem has been addressed before in the literature [1,2] and can be done in different ways. The difference between the variational inference formulation in MPO and our variational inference formulation is the following:\n- The policy of the generative model in our case is state-independent (similar to [1]) with the optimal solution being the marginal distribution over actions ([1] does not consider an optimal marginal distribution though). In contrast, in MPO the generative policy is state-dependent and given by the previous-round behavioural policy. \n\nImportantly, our specific choice of state-dependent variational policy and state-independent generative policy directly leads to a mutual information regularizer. Note that the mutual information is not any expected KL, but a specific expected KL under the assumption of an optimal marginal policy (which is exactly what we model). MPO does not have the notion of an optimal marginal policy (in the sense of a state-independent marginal policy) and therefore the expected KL in MPO is not a mutual information.\n\nIn our experimental section we empirically validate that our mutual information regularized objective leads to improvements over soft-q learning (see [1]) where the generative policy is also state-independent but not subject to optimization (but instead given by a uniform distribution). \n\nWe will clarify this point in a revised version of the manuscript.\n\n[1] Levine, S. Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review. arXiv 2018.\n[2] Neumann, G. Variational Inference for Policy Search in changing Situations. ICML 2011.", "Hello,\n\nThanks for the paper. I would like to point out a paper from ICLR2018 that shares similarities in both \n\n1- The derivations of RL objective from Inference perspective \n2- The resulting objective function for learning the prior \n\nplease see,\n\nMaximum a-Posteriori Policy Optimisaiton\nhttps://arxiv.org/pdf/1806.06920.pdf\n\nIn the paper above, the mutual information (Or expected KL ) regularized objective is derived in E-step (see equation 7). And the optimal solution is given in (8) when a non parametric variational distribution is used. \n\nIt would be useful if authors discuss the connections and differences.\n\nThank you,\n" ]
[ -1, -1, -1, -1, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "Byx_Yunv37", "SyejfjKhkN", "H1l9u2InyN", "SkeEX3Q6JV", "rJl62p0F0X", "iclr_2019_HyEtjoCqFX", "HJepxiJ507", "SJe43cJcR7", "BJgtvCRFCQ", "iclr_2019_HyEtjoCqFX", "SJe43cJcR7", "Byx_Yunv37", "B1lIeTW537", "ryl__Ymjh7", "iclr_2019_HyEtjoCqFX", "rJx6sNdCom", "rkxW2SgCo7", "HJxlECDns7", "iclr_2019_HyEtjoCqFX" ]
iclr_2019_HyGBdo0qFm
On the Turing Completeness of Modern Neural Network Architectures
Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of two of the most paradigmatic architectures exemplifying these mechanisms: the Transformer (Vaswani et al., 2017) and the Neural GPU (Kaiser & Sutskever, 2016). We show both models to be Turing complete exclusively based on their capacity to compute and access internal dense representations of the data. In particular, neither the Transformer nor the Neural GPU requires access to an external memory to become Turing complete. Our study also reveals some minimal sets of elements needed to obtain these completeness results.
accepted-poster-papers
This paper provides a theoretical analysis of the Turing completeness of popular neural network architectures, specifically Neural Transformers and the Neural GPU. The reviewers agreed that this paper provides a meaningful theoretical contribution and should be accepted to the conference. Work of a theoretical nature is, amongst other types of work, called for by the ICLR CFP, but is not a very popular category for submissions, nor is it an easy one. As such, I am happy to follow the reviewers' recommendation and support this paper.
train
[ "HkxVcSSeg4", "r1lxKiAJlV", "H1gfQ37Cy4", "BygjD2mA1E", "BJl39ddR37", "S1lRigk1y4", "SkxNcv55nm", "SkllClJky4", "rkxzZyk1kN", "SJedZxpFAX", "SyemDwc_AX", "SJgnoh3dp7", "Hkg1123_pX", "S1xAQn3dpX", "Syx12inOam", "r1ldhR0Knm", "Syla8U8ZcQ", "rJgJkiz-9X" ]
[ "author", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Thanks for your comment. \n\nWe believe that your doubt has been already clarified by the authors of the paper mentioned in your comment (\"Universal Transformers\"), and we thank the authors for their response. We just want to emphasize that our results only hold when unbounded precision is admitted, which is a standard assumption in the theoretical analysis of the computational power of neural networks (see, e.g., the Universal Approximation Theorem, or Turing Completeness for RNNs). As mentioned in the response provided by the authors of the Universal Transformers paper, when only bounded precision is allowed, then the model is no longer Turing complete. In fact, we formally prove in our paper that the latter holds even if one sees the Transformer as a seq-to-seq network that produces an arbitrary long output. We will include some further comments about this in the final version of our paper.\n", "We are the authors of the Universal Transformer paper ([1] above). As this comment is very similar to what was posted on that submission, please see our response there: https://openreview.net/forum?id=HyzdRiR9Y7&noteId=HyxfZDmCk4&noteId=rkginvfklN\n\nThe TLDR is that in this work the authors assume arbitrary-precision arithmetic, whereas in our case we focus on the fixed-precision setting and provide a fairly short and intuitive counterexample showing that the Transformer is not universal in that setting, whereas the Universal Transformer is (see our comment above). Our main focus in that work, however, is to show how this increased theoretical capacity leads to significant practical advantages by expanding the number of tasks the Transformer can solve, and by improving accuracies on multiple real-world sequence-to-sequence learning tasks such as MT.\n\n", "It's observed in [1] that Transformer is not universal. Also, the proofs in this paper are very technical without any intuitive explanation. The results seem very questionable. It is definitely necessary to address this concern before this paper can be accepted.\n\n[1] https://openreview.net/forum?id=HyzdRiR9Y7&noteId=HyxfZDmCk4", "It's observed in [1] that Transformer is not universal. Also, the proofs in this paper are very technical without any intuitive explanation. The results seem very questionable. \n\n[1] https://openreview.net/forum?id=HyzdRiR9Y7&noteId=HyxfZDmCk4", "The paper shows Turing completeness of two modern neural architectures, the Transformer and the Neural GPU. The paper is technically very heavy and gives very little insight and intuition behind the results. Right after surveying the previous work the paper starts stacking definitions and theorems without much explanations.\n\nWhile technical results are potentially quite strong I believe a major revision to the paper might be necessary in order to clarify the ideas. I would even suggest to split the paper into two, one about each architecture as in the current form it is quite long and difficult to follow. \n\nResults are claimed to hold without access to external memory, relying just on the network itself to represent the intermediate results of the computation. I am a bit confused by this statement -- what if the problem at hand is, say EXPSPACE-complete? Then the network would have to be of exponential size (or more generally of arbitrary size which is independent of the input). In this case the claim about not using external memory seems to be kind of vacuous as the network itself has unbounded size. 
The whole point of Turing-completeness is that the program size is independent of the input size so there seems to be some confusion here.\n", "Dear Reviewer 1,\n\nThanks again for you review. As you can see, the authors have written a detailed rebuttal to you and the other reviewers in separate post. Please take the time to consider it, and the other reviews, and respond if needed. I would appreciate it if you can review your own assessment of the paper, and, if you decide to stand by your score, present a short explanation of why you think the paper still falls short in light of the comments made by the authors.", "This paper presents interesting theoretical results on Turing completeness of the Transformer and Neural GPU architectures, as modern architectures based on attention and convolutions, under particular assumptions. The basis of proofs in the paper relies on Turing completeness of the seq2seq architecture, which is Turing complete since it contains Turing complete RNNs. Turing completeness of the Transformer and the Neural GPU is proven by showing they can simulate seq2seq architecture.\n\nThe Transformer, using additive hard attention and residual connections, is Turing complete in the case when positional encoding is used. Otherwise, if no positional encoding is used, the model is order-invariant which makes it not Turing complete.\n\nA version of the Neural GPU, dubbed Uniform Neural GPU is proven to be Turing complete. Moreover, the presented theoretical results are backed by a recent publication by Karlis and Liepins. Interestingly, Neural GPUs using circular convolutions are not Turing complete, while the ones using zero padding are.\n\nThe repercussion of the paper for similar architectures is the not just in the theoretical section but also in a set of discoveries of practical importance, like the importance of the use of residual connections, positional coding in Transformers, and zero padding in Neural GPUs.\n\nAlbeit the paper presents an original and significant theoretical progress and is well written, it is not fit for ICLR, primarily as the paper is impossible to review and verify without a thorough perusal and analysis of the appendix. Although the results and the proof sketches fit the body of the paper, the necessity of verifying proofs makes this paper 23 pages long and makes it a better fit for a journal and not a conference.", "Dear Area Chair,\n\nAs per my comment earlier, and given your comment saying the proof sketches are admissible, without the necessity to go through the main proof, I will increase my score. Essentially, that was my only issue with the paper.\n\nOther than that I still stick by what I've stated before - that the paper presents an original and significant theoretical progress with discoveries of practical importance, particularly as it fits well with related work corroborating said discoveries. As such, it should be welcomed to the community.", "Dear Reviewer 3,\n\nTo weigh in, the CFP for the conference calls for—amongst other things—\"theoretical issues in deep learning\", a category under which this work falls.\n\nTheoretical work requiring expansive proofs is indubitably better suited for journals for proper treatment. However, in a fast moving field, the role of conferences is to share work in this more preliminary form, provided suitable rigor has been applied in presenting and framing the work. 
In the case of theoretical work, this may mean that proof sketches are offered in lieu of proofs, especially in the main body of the paper, with further details to be included in the supplementary materials. This is perfectly acceptable, in my mind, as the role of the main body of the paper is to present the results and motivate them.\n\nIf you feel the paper has done so appropriately, and if you agree there is space for theoretical work at ICLR in line with what i have written and what is in the CFP, I invite you to reconsider your evaluation in light of your own appreciation for the papers contributions.\n\nAC", "We appreciate the comment. We empathize with your concern about the difficulty of checking long technical proofs in appendices, and in fact we often have to struggle with this ourselves as reviewers. Still, we decided to present our proofs in full detail so that they could be verified exhaustively by reviewers if needed. The only way we could do this was by presenting them as supplementary material. While we believe that in general the writing of the proof in the body of the paper and in the appendix is good and can be more or less easily followed, we will do our best to improve it further if the paper gets accepted.", "Please, do not get me wrong - I do think your paper is well written, that the insights from it are important for the theoretical understanding of architectures many researchers are using, and that the results from the paper can be of practical significance (and that they should entice other theoretical result).\n\nHowever, given that this is a theoretical paper, of a theoretical contribution, the proofs in it are not akin to 'companion code' to just back up the findings, nor should they be there for 'reproducibility purposes' - they are the core of the paper. Without a verification of these proofs, there is no contribution. This is my main concern. Proof sketches seem ok, the reasoning in the main body of the paper seems sound, but without proofs that have been verified, the conclusions are open to refutation. And the verification of the proofs requires detailed perusal of the appendix, which doesn't fit into the 11 page limit proposed by ICLR.\n\nI would leave the opinion of whether a thorough verification of the proofs is or is not warranted in this case to area chairs. In the case of latter, I support the paper.", "Responses to AnonReviewer1:\n\n\n** [comment] “Results are claimed to hold without access to external memory [...] what if the problem at hand is, say EXPSPACE-complete? Then the network would have to be of exponential size [...] The whole point of Turing-completeness is that the program size is independent of the input size so there seems to be some confusion here.”\n\n[response] As stated in the paper, Turing completeness for Transformer and Neural GPU is obtained by taking advantage of the internal representations used by both architectures. We prove that the Transformer and the Neural GPU can use the values in their internal activations to carry out the computations while having a network with a fixed number of neurons and connections. For the case of Neural GPUs we even restrict the architecture to ensure a fixed number of parameters (Uniform Neural GPUs). Thus our proof actually uses a “program size which is independent of the input size” as mentioned by the reviewer. 
The confusion might arise because of our assumption that internal representations are rational numbers with arbitrary precision; we are trading external memory by internal precision. This is a classical assumption in the study of the computational power of neural networks (e.g. Universal Approximation Theorem for FFNs and Turing Completeness for RNNs). We mention this property in the Introduction, in the Conclusions, and also when formally proving the results, but we will make it more explicit in the next version of the paper.\n\n** [comment] “The paper is technically very heavy [...] I believe a major revision to the paper might be necessary in order to clarify the ideas.”\n\n[response] It is true that the paper is a bit dense, but we prove a technically involved result. To be precise in our claims we needed to include all the definitions in the paper. Moreover, our formal definitions can be used in the future to prove more properties for these and similar architectures with theoretical and practical implications. Though technical, the two other reviewers explicitly mention that the paper is well written. \n\n** [comment] “The paper [...] gives very little insight and intuition behind the results.”\n\n[response] The main intuition in our results is that both architectures can effectively simulate an (Elman)RNN-seq2seq computation, which by Siegelmann and Sontag’s classical result [1] are Turing complete when internal representations are rational numbers of arbitrary precision. We mentioned this in the Introduction and in each proof sketch, but we will make it more explicit in the next version of the paper.\n\n**[comment] “I would even suggest to split the paper into two, one about each architecture”.\n\n[response] We wanted to have both architectures in the paper as they are two of the most popular architectures in use today, yet based on different paradigms; namely, self-attention mechanisms and convolution. We wanted to understand to what extent the use of these features could be exploited in order to show Turing completeness for the models. Moreover, the computational power of Transformers has been compared with that of Neural GPUs in the current literature, but both are only informally used. We wanted to provide a formal way of approaching this comparison.\n\n[1] Siegelmann and Sontag. On the computational power of neural nets. JCSS-95\n", "Responses to AnonReviewer3:\n\n** [comment] “Albeit the paper presents an original and significant theoretical progress and is well written, it is not fit for ICLR, primarily as the paper is impossible to review and verify without a thorough perusal and analysis of the appendix. Although the results and the proof sketches fit the body of the paper, the necessity of verifying proofs makes this paper 23 pages long and makes it a better fit for a journal and not a conference.”\n\n[response] We included the appendices to allow the interested reader to see the techniques used in our theoretical proof and potentially extend it or apply it to other architectures, to understand the full implications of the results, and to validate the results for themselves. We see the proofs in our appendix more as a “companion code to backup our findings” as one usually do for an experimental paper, and we include it mostly for reproducibility purposes. 
As we stated in the general comments, although submitting to a journal is an option, we do want to discuss the theoretical implications of our work face-to-face with people of the interested community without waiting for a long journal review process.\n", "Responses to AnonReviewer2:\n\n** [comment] “of the simplifications and approximations used for the proof, how much does that take the model away from what is used in practice?” \n\n[response] Most of our changes are actually simplifications, which means that models as used in practice can have even more space to simulate computations. Take for example the relationship between Uniform Neural GPUs that we use, and (regular) Neural GPUs that are used in practice. Uniform Neural GPUs have a number of parameters that cannot depend on the input size, while (regular) Neural GPUs have a number of parameters that depend linearly on the input size. Transformer on the other hand can use multiple heads per layer but we only use one head. For the case of the Transformer one difference is that we use additive attention in our proof while multiplicative attention is used in practice most of the time. A detailed comparison between both uses in terms of computational power is a good topic for future research.\n\n** [comment] “For example, the assumption of the piecewise linear sigmoid seems like a quite big change, as there are large regions of the space which now have zero gradients. If you run a real implementation of these models, with the normal sigmoid replaced by this one, does training still work? If not, what are the implications for the proof?”\n\n[response] This is a really interesting question. For the case of the Neural GPU, as we mention in the paper, there is a recent work by Freivalds and Liepins [2] showing that piece-wise linear activations dramatically increase the training performance. These activations (along other changes) allowed the learning of decimal multiplication from examples which was impossible with the original Neural GPU [2]. Thus having piecewise linear activations actually helps in practice. For the case of the Transformer more experimentation is needed to have a conclusive response. We will add some comments on this in the next version of the paper.\n\n** [comment] “[...] all floating points on a computer represent rationals, but it would be interesting to get a better understanding on how the lack of infinite precision rationals on real hardware affects the main results.”\n\n[response] This is similar to a comparison between a computer with bounded vs unbounded memory. With bounded memory a computer is, theoretically, just a finite state machine. Similarly, with rationals of bounded precision, a Transformer is computationally very weak. Actually, your question made us realize that from our results it follows that bounded precision Transformers cannot even simulate finite automaton (this is a corollary of Proposition 3.1 in our submission). We will add a discussion on this result since it will definitely improve the paper. Thank you for the comment.\n\n** [comment] “Does the proof rely on the input and output dimensionality being the same? Eg in the preliminaries, x_i and y_i are both d-dimensional - could this be changed?”\n\n[response] The short answer is “yes” it can be changed, as one can always pad the shorter with zeroes as a trick to make them of the same dimension. But having both of the same dimension is more of a practical concern of the architectures we use. 
For the case of the Transformer, the fact that the decoder puts attention over the output of the encoder, plus the use of residual connections in every layer, forces dimensions to coincide. For the case of the Neural GPU, input vectors are transformed without changing their dimensions, thus input and output vectors have naturally the same size.\n\n** [comment] “Circular convolution definition only appears to define the values directly adjacent to the border, would it be more appropriate to define S_{h+n, :, :} = S{n, :, :}?”\n\n[response] Yes, you are right. We will include this change in the next version, thank you.\n\n\n[2] Freivalds and Liepins. Improving the Neural GPU Architecture for Algorithm Learning. NAMPI-18 (workshop at ICML-18)", "We thank the reviewers for their comments. We first make some general comments and then answer directly to each reviewer. \n\nAll reviewers appear to agree that our technical results on the Turing completeness of the Transformer and the Neural GPU are potentially interesting/important. In two reviews, there is however a general question about the fit of our results for ICLR. One reviewer advised to go directly to a journal. We did consider submitting to a journal or to a theoretical conference, but we felt it important to discuss the computational properties of the Transformer and the Neural GPU directly with the community involved in their design, implementation, and practical use. We felt that submitting to ICLR would generate more impact noting that the Neural GPU was initially proposed at ICLR2016, and the Transformer architecture (proposed at NIPS2017) is used in several ICLR2018 papers (and also now ICLR2019 submissions). \n\nWe also observe that there is a need for more theoretical foundations with regards to the computational power of modern NN architectures at ICLR, and in particular about the two architectures that we study. Consider for example the following ICLR2019 submission: “Universal Transformers” (https://openreview.net/forum?id=HyzdRiR9Y7&noteId=HyzdRiR9Y7). Universal Transformers are networks that combine the parallelizability and ease of train of recently proposed feed-forward mechanisms based on self-attention, such as the Transformer, with the learning abilities of recurrent NNs. This is a strong paper, in our opinion, with a thorough experimental part and the potential for significant practical impact. Though it received three positive reviews, two reviewers would like to see a more thorough theoretical analysis of the proposed architecture (which is, admittedly, beyond the scope of the paper). One of the reviewers states “I miss a proof that the Universal Transformer is computationally equivalent to a Turing machine.” while the other states “I am having trouble understanding the universal aspect of the transformer”. Our paper brings light into this, by showing what are some of the minimal sets of features that make self-attention networks, in particular, the Transformer, Turing-complete. Moreover, in that paper, Neural GPUs are used as a yardstick to compare the computational power of the Transformer. Thus our paper presents a formal theoretical basis to address problems that are currently being discussed at ICLR. (We emphasize that we are not involved in any way with the “Universal Transformer” paper, and that we are not reviewers of it.) \n\nBelow we provide detailed responses to each one of the individual reviews. 
\n", "This paper seeks to answer the question of whether models which process sequences, but are not strictly classical RNNs, are Turing complete.\n\nThe authors present proofs that both the Transformer and Neural GPU are turing complete, under certain conditions. I do not consider myself qualified to properly verify the proof but it seems to be presented clearly. The authors note that the conditions involved are not how these models are used in the real world. Given the complex construction required for this more theoretically based proof, it seems reasonable that this should be published now, rather than waiting until the further work discussed in the final section is completed.\n\nI have a number of questions where if a brief answer is possible, this would enhance the manuscript. The main question is, of the simplifications and approximations used for the proof, how much does that take the model away from what is used in practice? For example, the assumption of the piecewise linear sigmoid seems like a quite big change, as there are large regions of the space which now have zero gradients. If you run a real implementation of these models, with the normal sigmoid replaced by this one, does training still work? If not, what are the implications for the proof?\n\nThe rational numbers assumption is interesting - again I wonder how this would affect the model in reality, obviously all floating points on a computer represent rationals, but it would be interesting to get a better understanding on how the lack of infinite precision rationals on real hardware affects the main results.\n\nDoes the proof rely on the input and output dimensionality being the same? Eg in the preliminaries, x_i and y_i are both d-dimensional - could this be changed?\n\nOverall this paper is novel and interesting, I have to give a slightly low confidence score because I'm unfamiliar with a lot of the background here (eg the Siegelamnn & Sontag work). The paper does seem concise and well written.\n\ntypos and minor points:\n\nCircular convolution definition only appears to define the values directly adjacent to the border, would it be more appropriate to define S_{h+n, :, :} = S{n, :, :}?\n\nparagraph above equation 5, 'vectores' -> 'vectors'", "Our proofs are based on having unbounded precision for internal representations (neuron values). For weights one can prove that fixed precision (actually very small) is enough.\n\nOur results say nothing about the computational power when fixed precision (like float32) is assumed for internal representations. We actually state the fixed-precision case as an interesting topic for future research.", "Do I understand correctly that your results only hold for weights in Q, meanining unbounded precision? I suppose it's not true with limited precision, like float32, or am I misunderstanding?" ]
[ -1, -1, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1 ]
[ -1, -1, -1, -1, 2, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1 ]
[ "H1gfQ37Cy4", "H1gfQ37Cy4", "iclr_2019_HyGBdo0qFm", "SkxNcv55nm", "iclr_2019_HyGBdo0qFm", "BJl39ddR37", "iclr_2019_HyGBdo0qFm", "rkxzZyk1kN", "SyemDwc_AX", "SyemDwc_AX", "Hkg1123_pX", "Syx12inOam", "Syx12inOam", "Syx12inOam", "iclr_2019_HyGBdo0qFm", "iclr_2019_HyGBdo0qFm", "rJgJkiz-9X", "iclr_2019_HyGBdo0qFm" ]
iclr_2019_HyGEM3C9KQ
Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control
The Differentiable Neural Computer (DNC) can learn algorithmic and question answering tasks. An analysis of its internal activation patterns reveals three problems: Most importantly, the lack of key-value separation makes the address distribution resulting from content-based look-up noisy and flat, since the value influences the score calculation, although only the key should. Second, DNC's de-allocation of memory results in aliasing, which is a problem for content-based look-up. Thirdly, chaining memory reads with the temporal linkage matrix exponentially degrades the quality of the address distribution. Our proposed fixes of these problems yield improved performance on arithmetic tasks, and also improve the mean error rate on the bAbI question answering dataset by 43%.
accepted-poster-papers
pros:\n- Identification of several interesting problems with the original DNC model: masked attention, erasion of de-allocated elements, and sharpened temporal links\n- An improved architecture which addresses the issues and shows improved performance on synthetic memory tasks and bAbI over the original model\n- Clear writing\n\ncons:\n- Does not really show this modified DNC can solve a task that the original DNC could not and the bAbI tasks are effectively solved anyway. It is still not clear whether the DNC even with these improvements will have much impact beyond these toy tasks.\n\nOverall the reviewers found this to be a solid paper with a useful analysis and I agree. I recommend acceptance.
train
[ "rJxK_pj6TQ", "rygRU5E5nm", "H1epWZcnpQ", "B1eovy5nam", "BklGVJ92pX", "SkxQaAFh6m", "Hkg0R50bpQ", "H1g7-dMz3m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for addressing the main concerns of my review, I have updated my score accordingly. ", "\nOverview: \nThis paper proposes modifications to the original Differentiable Neural Computer architecture in three ways. First by introducing a masked content-based addressing which dynamically induces a key-value separation. Second, by modifying the de-allocation system by also multiplying the memory contents by a retention vector before an update. Finally, the authors propose a modification in the link distribution, through renormalization. They provide some theoretical motivation and empirical evidence that it helps avoiding memory aliasing. \nThe authors test their approach in the some algorithm task from the DNC paper (Copy, Associative Recall and Key-Value Retrieval), and also in the bAbi dataset.\n\n\nStrengths: Overall I think the paper is well-written, and proposes simple adaptions to the DNC architecture which are theoretically grounded and could be effective for improving general performance. Although the experimental results seem promising when comparing the modified architecture to the original DNC, in my opinion there are a few fundamental problems in the empirical session (see weakness discussion bellow).\n\nWeaknesses: Not all model modifications are studied in all the algorithmic tasks. For example, in the associative recall and key-value retrieval only DNC and DNC + masking are studied. \n\nFor the bAbi task, although there is a significant improvement (43%) in the mean error rate compared to the original DNC, it's important to note that performance in this task has improved a lot since the DNC paper was release. Since this is the only non-toy task in the paper, in my opinion, the authors have to discuss current SOTA on it, and have to cite, for example the universal transformer[1], entnet[2], relational nets [3], among others architectures that shown recent advances on this benchmark. \nMoreover, the sparse DNC (Rae el at., 2016) is already a much better performant in this task. (mean error DNC: 16.7 \\pm 7.6, DNC-MD (this paper) 9.5 \\pm 1.6, sparse DNC 6.4 \\pm 2.5). Although the authors mention in the conclusion that it's future work to merge their proposed changes into the sparse DNC, it is hard to know how relevant the improvements are, knowing that there are much better baselines for this task.\nIt would also be good if besides the mean error rates, they reported best runs chosen by performance on the validation task, and number of the tasks solve (with < 5% error) as it is standard in this dataset.\n\n\nSmaller Notes. \n1) In the abstract, I find the message for motivating the masking from the sentence \"content based look-up results... which is not present in the key and need to be retrieved.\" hard to understand by itself. When I first read the abstract, I couldn't understand what the authors wanted to communicate with it. Later in 3.1 it became clear. \n\n2) page 3, beta in that equation is not defined\n\n3) First paragraph in page 5 uses definition of acronyms DNC-MS and DNC-MDS before they are defined.\n\n4) Table 1 difference between DNC and DNC (DM) is not clear. I am assuming it's the numbers reported in the paper, vs the author's implementation? \n\n5)In session 3.1-3.3, for completeness. I think it would be helpful to explicitly compare the equations from the original DNC paper with the new proposed ones. \n\n--------------\n\nPost rebuttal update: I think the authors have addressed my main concern points and I am updating my score accordingly. 
", "Following the suggestions of the reviewers, we updated our paper. We made the following changes:\n - Clarified the abstract\n - Added mean/std loss curves for the associative recall task for many models\n - Added mean/std error curves for the bAbI task in the appendix\n - Highlighted our modifications compared to DNC equations in Appendix A\n - Fixed missing definitions/variables/etc.\n", "Thank you for your thoughtful feedback!", "Thank you for your thoughtful and helpful comments. \n\nFollowing the suggestions, we added additional results for the associative recall task for many network variants. We also report mean and variance of losses for different seeds. This shows that masking improves performance on this task especially when combined with improved de-allocation, while sharpness enhancements negatively affect performance in this case. From the variance plots it can be seen that some seeds of DNC-M and DNC-MD converge significantly faster than plain DNC.\n\nIn our experimental section, we added requested references to methods performing better on bAbI, and point out that our goal is not to beat SOTA on bAbI, but to exhibit and overcome drawbacks of DNC.\n\nComparison to Sparse DNC is an interesting idea, and we are currently running experiments in this direction. We intend to make the results available in the near future.\n\nWe are unable to provide a fair comparison for the lowest bAbi scores, having reported 8 seeds compared to the 20 seeds reported by Graves et al. Indeed, the high variance of DNC (Table 1) suggests that it may benefit a lot from exploring additional seeds.\n\nWe incorporated all of the smaller notes, including a comparison to the original DNC equations in Appendix A.\n", "Thank you for your careful consideration and feedback. Following your request, we updated the paper to include mean learning curves for different models in Figure 6 in Appendix C. Our models converge faster than DNC. Some of them (especially DNC-MD) also have significantly lower variance than DNC.", "Summary:\n\nThis paper is built on the top of DNC model. Authors observe a list of issues with the DNC model: issues with deallocation scheme, issues with the blurring of forward and backward addressing, and issues in content-based addressing. Authors propose changes in the network architecture to solve all these three issues. With toy experiments, authors demonstrate the usefulness of the proposed modifications to DNC. The improvements are also seen in more realistic bAbI tasks.\n\nMajor Comments:\n\nThe paper is well written and easy to follow. The proposed improvements seem to result in very clear improvements. The proposed improvements also improve the convergence of the model. I do not have any major concerns about the paper. I think that contributions of the paper are good enough to accept the paper.\n\nI also appreciate that the authors have submitted the code to reproduce the results.\n\nI am curious to know if authors observe similar convergence gains in bAbI tasks as well. Can you please provide the mean learning curve for bAbI task for DNC vs proposed modifications?\n", "The authors propose three improvements to the DNC model: masked attention, erasion of de-allocated elements, and sharpened temporal links --- and show that this allows the model to solve synthetic memory tasks faster and with better precision. 
They also show the model performs better on average on bAbI than the original DNC.\n\nThe negatives are that the paper does not really show this modified DNC can solve a task that the original DNC could not. As the authors also admit, there have been other DNC improvements that have had more dramatic improvements on bAbI.\n\nI think the paper is particularly clearly written, and I would vote for it being accepted as it has implications beyond the DNC. The fact that masked attention works so much better than the standard cosine-weighted content-based attention is pretty interesting in itself. The insights (e.g. Figure 5) are interesting and show the study is not just trying to be a benchmark paper for some top-level results, but actually cares about understanding a problem and fixing it. Although most recent memory architectures do not seem to have incorporated the DNC's slightly complex memory de-allocation scheme, any resurgent work in this area would benefit from this study." ]
[ -1, 7, -1, -1, -1, -1, 8, 7 ]
[ -1, 5, -1, -1, -1, -1, 5, 5 ]
[ "BklGVJ92pX", "iclr_2019_HyGEM3C9KQ", "iclr_2019_HyGEM3C9KQ", "H1g7-dMz3m", "rygRU5E5nm", "Hkg0R50bpQ", "iclr_2019_HyGEM3C9KQ", "iclr_2019_HyGEM3C9KQ" ]
iclr_2019_HyGIdiRqtm
Evaluating Robustness of Neural Networks with Mixed Integer Programming
Neural networks trained only to optimize for training accuracy can often be fooled by adversarial examples --- slightly perturbed inputs misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional and residual networks with over 100,000 ReLUs --- several orders of magnitude more than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded l-∞ norm ε=0.1: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness to norm-bounded perturbations for the remainder. Across all robust training procedures and network architectures considered, and for both the MNIST and CIFAR-10 datasets, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.
accepted-poster-papers
The paper investigates mixed-integer linear programming methods for neural net robustness verification in presence of adversarial attckas. The paper addresses and important problem, is well-written, presents a novel approach and demonstrates empirical improvements; all reviewers agree that this is a solid contribution to the field.
train
[ "SylMABVPyE", "H1lIZjvUyE", "ByxMEVJ90Q", "HylmeSpd07", "BkgoiNTOCm", "B1xhqcM8CQ", "HJeQNsfI0Q", "rJxCCcM8CQ", "rJg-8cfU07", "SylXnSG8AQ", "rygywrfIRm", "Hkl4h-WLRX", "rJlzefhupX", "BygGl7Gva7", "r1eSMishhQ", "H1egVwcihm", "S1eVvi_9hm" ]
[ "public", "author", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the explanation and the additional experimental data. \n\nIt seems for the 6x100 undefended networks, the method does not really improve over state of the art, which on top of that is an incomplete verifier (the main benefit of complete verifiers is precision gain for smaller networks). The approach is also slow, e.g., it times out for around 46% of cases for eps=0.02. It appears it would timeout even more for the 9x200 network and may prove less than the incomplete verifier. For the 100K net, the standard Interval analysis seems sufficient for 99% of cases.\n\nOverall, I do like the direction, but the authors need to show a use case which clearly surpasses prior work on networks beyond 3x20.\n\n\n", "Thank you for your comments.\n\nVerifying undefended networks: We would like to begin by clarifying that the runtimes in Figure 1 and the rest of our submission are not directly comparable. Figure 1 presents results on determining the _closest_ adversarial example; the rest of our submission presents results on finding _some_ adversarial example among perturbations with l-infinity norm bound ε (or proving that no adversarial example exists among those perturbations). Determining the closest adversarial example is always expected to take more time since it is a strictly more difficult task.\n\nTo determine how well our verifier performs when determining the robustness of larger undefended networks to perturbations with bounded l-infinity norm ε, we verified the 6x100 network found here https://github.com/eth-sri/eran#experimental-results on a range of values of ε. The results are over the first 500 samples [1], and we report a timeout if solve time for a sample exceeds 120s.\n\n\t\t| Verified \t| Verified \t| Total\n ε \t| Robust \t| Vulnerable\t| Verified\n-----------\t| ---------------\t| ---------------\t| -----------\n 0.005\t| 0.966\t| 0.034\t| 1.000\n 0.010\t| 0.910\t| 0.046 \t| 0.956\n 0.015\t| 0.756\t| 0.056\t| 0.812\n 0.020\t| 0.466\t| 0.072\t| 0.538\n 0.025\t| 0.238\t| 0.082\t| 0.320\n 0.030\t| 0.080\t| 0.098\t| 0.178\n\nThese results are comparable to or better than the best results reported for this network (via DeepPoly).\n\nPresolve step: During the presolve step, a significant fraction of nodes only require interval arithmetic to compute bounds. (This is precisely what allows progressive bounds tightening to reduce the overall time spent computing bounds). \n\nIn terms of the impact of using LP to compute bounds, we find that it does not significantly affect _median_ solve times; however, it _does_ make a big impact for samples that are complex and take a long time to verify. For example, for the 100K network [2], 108 (out of 10000) samples take more than 120s when using only interval arithmetic to compute bounds. Of these samples, 33 can be resolved within 120s when using LP to compute bounds.\n\n[1] The standard error rate on the first 500 samples was 2.8%.\n[2] We refer to this network in our submission as the CIFAR-Resnet network.", "Thank you for your response. \n\nIt seems the method will not work well for undefended networks, but I am wondering whether there is a benefit over simple Interval analysis with defended networks?\n\nFor undefended, in Figure 1, the 3x20 undefended net can have a maximum of 60 unstable units but your verifier seems to take longer on this than on the 100K one with 1906 unstable units. 
Thus, for undefended nets, the method will likely not scale for the 9x200 net I mentioned.\n\nFor defended nets, Table 5 suggests the solver branches very rarely (not more than 3 times on 95% of the images) on the 100K net and thus even if there are 1906 branches, they are rarely explored which may explain your timings.\n\nDuring the presolve step, what percentage of nodes use only Interval arithmetic for their bounds computation? In particular, what would the results look like for the 100K net if one only uses simple Interval Arithmetic with no LP/MILP at all? \n\nThanks in advance.\n", "Thank you for your comment.\n\nWe have not encountered a network trained to be robust where many cases required more than 24 hours to solve. This is the case even for the LPd-RES CIFAR classifier, where the mean number of unstable units is 1906.15, but the mean solve time for this classifier is only 15.23s. [1] \n\nThere may be two other reasons why verification takes a long time for the network you selected.\n\nFirstly, as discussed in section H in the appendix on Sparsification and Verifiability, having parameter values close to zero can lead to significantly increased verification time [2]. Two fixes are possible if this is the case. A) Without access to the training procedure, Table 7 shows that it is possible to modify the network to significantly improve verifiability at a small cost to test error, by setting some fraction of weights to zero. B) Alternatively, with access to the training procedure, adopting a principled sparsification approach could improve verifiability even further at a lower cost to test error.\n\nSecondly, attempting to verify a network not trained to be robust, such as the 9x200 network available in the repository, can lead to significantly increased verification time. We have observed that regular training (without a robustness objective) leads to networks where almost all ReLUs are unstable, even for input domains of modest size (such as an l-∞ ball of radius 0.1); in contrast, all robust training procedures we had access to produced networks where a significant fraction of ReLUs were provably stable over input domains of the same size. Fortunately, when working with a network not trained to be robust, the distance to the closest adversarial example for any input sample is typically small. When this is the case, you can reduce solve times by first attempting to find an adversarial example within a smaller input domain (such as an l-∞ ball of radius 0.01), and searching over larger input domains only when no adversarial example can be found within the smaller input domain.\n\nA quick note on stability of ReLUs: for the robust networks we verified, very few ReLUs were always provably stable for all test samples; instead, the set of possibly unstable ReLUs changes significantly between test samples.\n\n[1] We have also updated the paper so that results on the numbers of ReLUs that are provably stable are reported for all networks (either in Table 3 or in Table 6). Thank you for suggesting this!\n[2] We found that networks trained simply to minimize cross-entropy loss exhibit this behavior.", "Thank you for your comments and suggestions! We have updated the submission with revisions based on them.", "(This is the second part of our response to the reviewer.)\n\nComparison to Wong et al. [b]: The reviewer mentions that the results in our submission do not outperform all of the latest results in terms of upper bounds on adversarial error on MNIST and CIFAR classifiers. 
In particular, the reviewer was interested to see a comparison on our results with those in Wong et al. at https://arxiv.org/pdf/1805.12514.pdf [3].\n\nDuring the discussion period, we were able to run our verifier on all but two of the networks [4] presented in Wong et al. Results are presented below.\n\n| \t| \t| \t| \t| Certified Bounds on Adv. Error \t| Mean \t|\n| \t| \t| \t| Test \t| Lower Bound\t| Upper Bound \t| Time\t|\n| Dataset\t| Net\t| ε \t| Error \t| PGD \t| Ours \t| SOA[5]\t| Ours \t| / s \t|\n|----------------\t|----------\t|----------\t|----------\t|----------------------\t|----------\t|----------\t|----------\t|\n| MNIST\t\t| Small\t| 0.1\t| 1.21%\t| 3.05%\t| 3.22%\t| 5.06%\t| 3.22%\t | 2.55 \t|\n| \t| Large\t| 0.1 \t| 1.19%\t| 2.62%\t| 2.73%\t| 4.45%\t| 2.74%\t| 46.33\t|\n| \t| Small\t| 0.3 \t|14.77%\t|24.99%\t|28.37%\t|43.79%\t|28.37%\t| 3.71\t|\n| \t| Large\t| 0.3 \t|11.16%\t|19.70%\t|24.12%\t|41.98%\t|24.19%\t| 98.79 \t|\n| CIFAR10\t| Small\t| 2/255 \t|39.14%\t|48.23%\t|49.84%\t|53.59%\t|50.20%\t| 22.41 \t|\n| \t| Small\t| 8/255 \t|72.40%\t|77.36%\t|78.71%\t|79.46%\t|78.71%\t| 0.91 \t|\n| \t| Large\t| 8/255 \t|80.99%\t|82.66%\t|83.54%\t|83.97%\t|83.55%\t| 6.01 \t|\n| \t| Resnet\t| 8/255 \t|72.93%\t|76.51%\t|77.29%\t|78.52%\t|77.60%\t| 15.23\t|\n\nFor all of the networks we verify, we improve upon the upper bound on adversarial error provided by the certificate in Wong et al., and also improve on the lower bound provided by PGD. We also have better overall results compared to Wong et al. over all single-model networks [6] for MNIST at ε=0.1 (2.74% vs. 3.67%), MNIST at ε=0.3 (24.19% vs. 43.10%), and CIFAR10 at ε=8/255 (77.29% vs 78.22%). We perform worse only for CIFAR10 at ε=2/255 (50.20% vs 46.11%); this is a result of us only being able to verify the `Small` network for CIFAR10 at ε=2/255, which has worse underlying robustness.\n\nFinally, in response to the reviewer's question: the \"restricted domain\" contribution is as described --- we use the tightest possible bounds on the perturbed input, combining the fact that the inputs to the classifier are normalized to a given range and that they are no more than ε away from the nominal input. Though simple, our results in Table 1 show that using this makes a large difference in the performance of our verifier.\n\n[3] While the paper was available before the ICLR deadline, none of the networks described (other than the `Resnet` model for CIFAR10 at ε=8/255) were available until the end of October, and we were thus unable to evaluate the performance of our verifier on these more robust networks in our initial submission. The networks are now available here: https://github.com/locuslab/convex_adversarial/tree/master/models_scaled\n[4] We were not able to verify the `Large` and `Resnet` networks for the CIFAR10 dataset at ε=2/255 due to memory issues in our implementation when determining upper and lower bounds.\n[5] We note that these SOA bounds are not the same as the robust single-model errors reported in https://arxiv.org/pdf/1805.12514.pdf, since the networks were trained with a different seed.\n[6] The full \"cascade\" of networks that Wong et al. present in Table 2 of their paper is not currently available for verification.\n\n[b] Eric Wong et al. \"Scaling provable adversarial defenses.\" https://arxiv.org/pdf/1805.12514.pdf", "Thank you for your comment. The formulation presented in Ehlers [c] for the ReLU does correspond to our formulation when the integer constraint on a is relaxed from a∈{0,1} to 0≤a≤1. 
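Concretely, for a single unstable ReLU y = max(x, 0) with asymmetric pre-activation bounds l < 0 < u and binary indicator a, the encoding in question can be written (in the standard big-M-with-asymmetric-bounds form) as y >= 0, y >= x, y <= u*a, y <= x - l*(1 - a); relaxing a to [0, 1] and eliminating it recovers exactly the triangle relaxation y >= 0, y >= x, y <= u*(x - l)/(u - l) discussed above. A short sanity check (an illustrative script, not part of our verifier code) confirms that with a binary the constraints pin y to the ReLU output:

```python
import numpy as np

def relu_feasible(x, y, a, l, u, tol=1e-9):
    # Mixed-integer encoding of y = max(x, 0) for an unstable unit with
    # asymmetric pre-activation bounds l < 0 < u; a is the binary indicator.
    return (y >= -tol and y >= x - tol and
            y <= u * a + tol and
            y <= x - l * (1.0 - a) + tol)

l, u = -0.7, 1.3
for x in np.linspace(l, u, 21):
    y, a = max(x, 0.0), 1.0 if x > 0 else 0.0
    assert relu_feasible(x, y, a, l, u)            # the true ReLU output is feasible
    assert not relu_feasible(x, y + 0.1, a, l, u)  # slack values of y are cut off
```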
We will update our submission to reflect this, but we believe that binarizing the formulation in Ehlers to obtain our formulation is not trivial. \n\nFurthermore, viewing things from a MIP perspective can be insightful: for example, for the maximum function, relaxing the integrality constraints on the indicator variables produces a set of linear constraints complementary to those presented for the maximum function in Ehlers that is tighter when the input values x_i are closer to their upper bounds u_i.\n\n[c] Rüdiger Ehlers. \"Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks.\" https://arxiv.org/pdf/1705.01320.pdf", "Thank you for your comment clarifying the point on asymmetric bounds.\n\nTo answer your question, when removing asymmetric bounds, we use M = max(-l, u) for all ReLUs, not just those that are unstable. In principle, a solver might still be able to identify all ReLUs that are stable, eliminating the associated binary variables from consideration. In practice, solves take significantly longer (mean of 0.08s vs 133.03s), and many more nodes are explored (mean of 2.05 vs. 1498.35), suggesting that not all these extraneous binary variables (added for stable ReLUs) are eliminated. ", "Thank you for your review; your comments will help us in revising the paper.\n\nComparison to Bunel et al. [a]: We consider the ideas in our paper and those in Bunel et al. to be complementary. Both our verifier and that of Bunel et al. rely on a branch-and-bound approach, and begin by solving an LP that corresponds to the MIP we formulate, but with all integrality constraints removed. In our work, branching occurs only when we split on an unstable ReLU, producing two sub-MIPs where that ReLU is fixed as active and inactive respectively. Bunel et al. observe that it is also possible to split on the _input domain_, producing two sub-MIPs where the input in each sub-MIP is restricted to be from a half of the input domain. Splitting on the input domain could be useful when tight bounds on the perturbed input are not available (as in the problems studied in the ACAS system mentioned by the reviewer), particular where the split selected tightens bounds sufficiently to significantly reduce the number of unstable ReLUs that need to be considered.\n\nWe have also reached out to the authors and are working on running their verifier on the networks for which we report results in this paper, and will provide an update as soon as one is available.\n\nSolver used: We understand the reviewer's concern about having to use a commercial solver like Gurobi. While we were unable to run a comparison on the SCIP solver suggested, we were able to run a comparison on the Cbc [1] and GLPK [2] solvers, two open-source mixed integer programming solvers. Verification is run on the MNIST classifier network LPd-CNN, with ε=0.1. The results are as follows:\n\n| \t\t| Adv. Error\t| Mean\t|\n| \t\t| Lower\t| Upper\t| Time \t|\n| Approach \t| Bound\t|Bound \t| / s \t|\n|----------------------\t|----------------------\t|----------\t|\n| Ours w/ Gurobi\t| 4.38%\t| 4.38%\t| 3.52 \t|\n| Ours w/ Cbc \t| 4.30%\t| 4.82%\t| 18.92 \t|\n| Ours w/ GLPK \t| 3.50%\t| 7.30%\t| 35.78 \t|\n| PGD / SOA \t| 4.11%\t| 5.82%\t| -- \t|\n\nWhen we use GLPK as the solver, our performance is significantly worse than when using Gurobi, with the solver timing out on almost 4% of samples. 
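For readers who want to experiment with yet another freely available backend, the same per-ReLU encoding can also be posed through SciPy's interface to the open-source HiGHS solver (scipy.optimize.milp, available in SciPy 1.9 and later). The toy one-neuron instance below is purely illustrative (made-up numbers, not one of the networks or solver runs reported above):

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy network: input xin in [x0 - eps, x0 + eps], one hidden unit y = ReLU(xin),
# output = y. Interval arithmetic gives the pre-activation bounds l, u.
x0, eps = 0.2, 0.5
l, u = x0 - eps, x0 + eps            # [-0.3, 0.7]: the unit is unstable

# Decision variables t = [xin, y, a]; minimise the output over the perturbation ball.
c = np.array([0.0, 1.0, 0.0])
constraints = [
    LinearConstraint([[-1.0, 1.0, 0.0]], 0.0, np.inf),   # y - xin       >= 0
    LinearConstraint([[0.0, 1.0, -u]], -np.inf, 0.0),    # y - u*a       <= 0
    LinearConstraint([[-1.0, 1.0, -l]], -np.inf, -l),    # y - xin - l*a <= -l
]
res = milp(c, constraints=constraints,
           integrality=np.array([0, 0, 1]),              # only a is integer (binary)
           bounds=Bounds([x0 - eps, 0.0, 0.0], [x0 + eps, u, 1.0]))
print(res.x, res.fun)    # worst-case (minimum) output over the input ball: 0.0 here
```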
While we time out on some samples with Cbc, our verifier still provides a lower bound better than PGD and an upper bound significantly better than the state-of-the-art for this network. The performance of our verifier is affected by the underlying MIP solver used, but we are still able to improve on existing bounds using non-commercial solvers.\n\nWe will add this table to the appendix of the paper.\n\n[1] Coin-or branch and cut (https://projects.coin-or.org/Cbc)\n[2] GNU Linear Programming Kit (https://www.gnu.org/software/glpk/). The results presented are estimates computed from 1,000 samples.\n\n[a] Rudy Bunel et al. \"A Unified View of Piecewise Linear Neural Network Verification.\" https://arxiv.org/pdf/1711.00455.pdf", "Thank you for your review. We are glad that you found our paper easy to read! \n\nAddressing the bottlenecks in the scalability of the MIP solver was key in making the verification problem tractable. We look forward to utilizing other ideas from the Operations Research community (such as computing cutting planes that exploit our knowledge of the structure of our network) to further improve performance.", "Thank you for your positive feedback!", "I have a question in regards to the > 100,000 neurons claim, which I hope the authors can clarify.\n\nI tried the method from your paper on the publicly available networks from https://github.com/eth-sri/eran and observed that in many cases the MILP solver does not finish even after 24 hours on networks much smaller than the 100,000 neurons network, say a 9x200 network. This usually happens when the number of unstable ReLU units (with both + and - values) by the presolve algorithm is > 1000. \n\nIndeed, as observed in your experimental section, the runtime of the MILP solver is determined by the number of unstable ReLU units and *not* the total number of ReLU units in the network. \n\nDoes it mean the approach will only work for networks where the presolve algorithm can determine that only a very small fraction of the ReLU units are unstable? Could you please report the number of unstable units on the > 100,000 network (they are not in the paper now)? \n\nThanks in advance.\n", "To be fair, the asymmetric bounds were first (to the best of my knowledge) used in https://arxiv.org/pdf/1705.01320.pdf. The formulation in this current paper is simply a binarized version of the same. It's a little surprising that the paper above is not cited in this context.", "To add a datapoint of information with regards to the \"Originality\" section of the review, especially discussing the \"Asymmetric bounds\" contribution:\n\nI'm one of the authors of the paper that is being asked to compare to. Our first version (https://arxiv.org/pdf/1711.00455v1.pdf on Arxiv in November 2017) didn't have asymmetric bounds and we included them in a subsequent update, after reading about the idea in a previous version of the paper under review (which we cite, and highlight the difference in appendix https://arxiv.org/pdf/1711.00455v3.pdf ). Asking the authors to discuss the difference with our use of asymmetric bounds is therefore difficult, because it's their idea which we made use of.\n\nWhile on the subject of asymmetric bounds, would it be possible to clarify what the results of the ablation study means? 
When removing asymmetric bounds and instead using M = max(-l, u), could you confirm that this is only done for ReLUs that are unstable?\n\n\n", "The authors perform a careful study of mixed integer linear programming approaches for verifying robustness of neural networks to adversarial perturbations. They propose three enhancements to MILP formulations of neural network verification: Asymmetric bounds, restricted domain and progressive bound tightening, which lead to significantly more scalable verification algorithms vis-a-vis prior work. They study the effectiveness of MILP solvers both in terms of verifying robustness (compared to other complete/incomplete verifiers) and generating adversarial attacks (compared to PGD attacks) and show that their approach compares favorable across a number of architectures on MNIST and CIFAR-10. They perform careful ablation studies to validate the importance of the \n\nQuality: The paper is very well written and organized. The problem is certainly of great interest to the deep learning community, given the difficulty of properly evaluating (and then improving) defenses against adversarial attacks. The experiments are done carefully with convincing ablation studies.\n\nClarity: The authors explain the relevant concepts carefully and all the experimental results are clearly written and explained.\n\nOriginality: The authors propose conceptually simple but practically significant enhancements to MILP formulations of neural network verification. However, the novelty wrt https://arxiv.org/pdf/1711.00455.pdf is not discussed carefully in my view (the asymmetric bounds were already studied in this paper, as well as a novel branch and bound strategy). The progressive bound tightening is a novel idea as far as I can see - however, the ablation experiments show that this idea is not significant in terms of performance improvement. In terms of experiments, the authors indeed obtain strong results on verified adversarial error rates and generate attacks that PGD is unable to - however, again the results do not outperform latest results (in terms of the best achievable upper bounds on verified error rates) available well before the ICLR deadline - https://arxiv.org/pdf/1805.12514.pdf . It would be great if the authors addressed these issues in a revised version of the paper.\n\nSignificance: The work does establish a strong algorithm for complete verification of neural networks along with several ideas that are critical to obtain strong performance with this approach. \n\nQuestion:\n1. I am unclear on the \"restricted domain\" contribution claimed in the paper - is this just exploiting the fact that the inputs to the classifier are normalized to a given range, in addition to being no more than eps away from the nominal input? \n\nCons\n1. The authors do not compare their approach to that of https://arxiv.org/pdf/1711.00455.pdf , both in terms of conceptual novelty and in terms of experimental results. In particular, it is not clear to me whether the authors' approach remains superior on domains where tight bounds on the neural networks inputs are not available, like the problems studied in the ACAS system in the ReLuPlex paper.\n\n2. The authors' MILP solution approach relies on having access to the state of the art commercial MILP solver Gurobi. While Gurobi is free for academic research use, for large scale neural network verification applications, this does restrict use of the approach (particularly due to limited licenses being available). 
It would be interesting to see a comparison that uses a freely available MILP solver (like scip.zib.de) to see how critical the approach's scalability depends on the quality of the MILP solver.\n\n3. The authors do not outperform the latest SOA numbers in terms of verified adversarial error rates on MNIST and CIFAR classifers. It would be good to see a comparison on results from https://arxiv.org/pdf/1711.00455.pdf (I believe the training code and trained networks are available online).", "This paper studies a Mixed Integer Linear Programming (MILP) approach to verifying the robustness of neural networks with ReLU activations. The main contribution of the paper is a progressive bound tightening approach that results in significantly faster MILP solving. This in turn allows for verifying the robustness of larger networks than previously studied, and even larger datasets such as CIFAR-10.\n\nThis paper is a solid contribution and should be accepted to ICLR. It is quite well-written, addresses an important problem using a principled method, and achieves strong experimental results that were previously elusive, despite the large body of work in adversarial learning. In particular, the paper has the following strengths:\n\n- Clarity: the paper is well-written and easy to read. Tables, figures and pseudocode are nice and easy to understand.\n- Methodology: the authors take care of a number of bottlenecks in the scalability of MIP solvers for the verification problem. This is the standard approach in the Operations Research (OR) community, and I am really glad to see it in an ICLR submission!\n- Results: the efficiency of the MIP on the tightened model, and the improvements in the bounds on the adversarial error as compared to very recent methods from the literature are both very strong points in favor of the paper.\n\nI do not have any further questions for the authors - good job!", "This paper presents a mixed integer programming technique for verification of piecewise linear neural networks. This work uses progressive bounds tightening approach to determine bounds for inputs to units. The authors also show that this technique speeds up the bound determination by orders of magnitude as compared to other complete and incomplete verifiers. They also compare the advercerial accuracies on MNIST and CIFAR and improve on the lower bounds as compared to PGD and upper bounds as compared to SOA. The paper is well written and presents a valuable technique for evaluating robustness of classifiers to adversarial attacks. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 1 ]
[ "H1lIZjvUyE", "ByxMEVJ90Q", "HylmeSpd07", "Hkl4h-WLRX", "iclr_2019_HyGIdiRqtm", "r1eSMishhQ", "rJlzefhupX", "BygGl7Gva7", "r1eSMishhQ", "H1egVwcihm", "S1eVvi_9hm", "iclr_2019_HyGIdiRqtm", "BygGl7Gva7", "r1eSMishhQ", "iclr_2019_HyGIdiRqtm", "iclr_2019_HyGIdiRqtm", "iclr_2019_HyGIdiRqtm" ]
iclr_2019_HyGcghRct7
Random mesh projectors for inverse problems
We propose a new learning-based approach to solve ill-posed inverse problems in imaging. We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements. This setting is common in geophysical imaging and remote sensing. We show that in this case the common approach to directly learn the mapping from the measured data to the reconstruction becomes unstable. Instead, we propose to first learn an ensemble of simpler mappings from the data to projections of the unknown image into random piecewise-constant subspaces. We then combine the projections to form a final reconstruction by solving a deconvolution-like problem. We show experimentally that the proposed method is more robust to measurement noise and corruptions not seen during training than a directly learned inverse.
accepted-poster-papers
This paper proposes a novel method of solving inverse problems that avoids direct inversion by first reconstructing various piecewise-constant projections of the unknown image (using a different CNN to learn each) and then combining them via optimization to solve the final inversion. Two of the reviewers requested more intuitions into why this two stage process would fight the inherent ambiguity. At the end of the discussion, two of the three reviewers are convinced by the derivations and empirical justification of the paper. The authors also have significantly improved the clarity of the manuscript throughout the discussion period. It would be interesting to see if there are any connections between such inversion via optimization with deep component analysis methods, e.g. “Deep Component Analysis via Alternating Direction Neural Networks ” of Murdock et al. , that train neural architectures to effectively carry out the second step of optimization, as opposed to learning a feedforward mapping.
train
[ "HyxbtBh0kE", "B1xEMHn0kV", "SklYOP5e67", "B1xxSs2nyE", "HJx2u7-s14", "HJxAoTvwCX", "BkgMpADPCQ", "Hkgez0wvAQ", "HJx7u3wPRm", "HJe46svvRX", "S1gHWHAjpm", "rklYC70opQ", "rkxR0Zh-0X", "SJg0kX0j6m", "BkeFsHCiam", "HygKVSRjTX", "HyxbyBRi6m", "Hke127Csa7", "r1lmQNCjTX", "BygsLNF62Q", "Sygyv3zq3X" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to read through all our responses. We are glad that you like our work.", "Thank you for taking the time to read through our responses and for the positive assessment. We definitely intend to add the suggested information to the final version. We were perhaps a bit conservative trying to avoid a “significant” change.", "Summary:\nGiven an inverse problem, we want to infer (x) s.t. Ax = y, but in situations where the number of observations are very sparse, and do not enable direct inversion. The paper tackles scenarios where 'x' is of the form of an image. The proposed approach is a learning based one which trains CNNs to infer x given y (actually an initial least square solution x_init is used instead of y).\n\nThe key insight is that instead of training to directly predict x, the paper proposes to predict different piecewise constant projections of x from x_init , with one CNN trained for each projection, each projection space defined from a random delaunay triangulation, with the hope that learning prediction for each projection is more sample efficient. The desired x is then optimized for given the predicted predicted projections.\n\nPros:\n- The proposed approach is interesting and novel - I've not previously seen the idea of predicting different picewise constant projections instead of directly predicting the desired output (although using random projections has been explored)\n- The presented results are quantitatively and qualitatively better compared to a direct prediction baseline\n- The paper is generally well written, and interesting to read\n\nCons:\nWhile the method is interesting, it is apriori unclear why this works, and why this has been only explored in context of linear inverse problems if it really does work.\n\n- Regarding limited demonstration: The central idea presented here is is generally applicable to any per-pixel regression task. Given this, I am not sure why this paper only explores it in the particular case of linear inversion and not other general tasks (e.g. depth prediction from a single image). Is there some limitation which would prevent such applications? If yes, a discussion would help. If not, it would be convincing to see such applications.\n\n- Regarding why it works: While learning a single projection maybe more sample efficient, learning all of them s.t. the obtained x is accurate may not be. Given this, I'm not entirely sure why the proposed approach is supposed to work. One hypothesis is that the different learned CNNs that each predict a piecewise projection are implicitly yielding an ensembling effect, and therefore a more fair baseline to compare would be a 'direct-ensemble' where many different (number = number of projections) direct CNNs (with different seeds etc.) are trained, and their predictions ensembled.\n\n\nOverall, while the paper is interesting to read and shows some nice results in a particular domain, it is unclear why the proposed approach should work in general and whether it is simply implicitly similar to an ensemble of predictors.", "I'd like to thank the authors for the revised version of the manuscript. I agree with the response that tackling linear inversion is of a more general interest than my initial review indicates, and is a good setting to study given the possibility of theoretical analysis. 
I also agree with the response to the other reviewer's concern that non-linearity is required for the inversion function, and I am also more positive about the presentation, as the approach is presented much more clearly in the revised version.\n\nI am updating my rating primarily based upon the additional visualizations presented in the response regarding the performance of a simple ensemble method, and qualitative results showing the proposed method does better empirically. However, I do not think these results and a corresponding discussion are currently in the revised manuscript, and the comparison to the simple ensemble method is purely qualitative - I strongly encourage the authors to incorporate these results/discussion in the final version, and also add a quantitative comparison to average predictions obtained via an ensemble.", "I read the reply to my review, the other reviews and the extended discussion on this paper. I am glad the OpenReview system is working well for this paper. \n\nMy current vote is on accepting this paper given that there is clearly extensive work put into it and it contains several interesting novel ideas. \n", "Let us now analyze your proposed reconstruction. We first look at the formula: \\hat v_k = pinv(A U_k)y, which corresponds to the expansion coefficients of the oblique projection in Appendix A. In general, how well the oblique projection \\hat z_k = U_k \\hat v_k approximates P_{S_k} x depends on the smallest principal angle between the subspaces R(A^*) and S_k. In the interesting case where this angle is close to pi/2 (i.e., where we are getting information about x in R(A^*)^\\perp = N(A)), the linear method fails spectacularly because the pseudoinverse explodes (please see further discussion below and numerical experiments).\n\n-- If S_k happens to lie completely within the nullspace of A, then the product A U_k is a zero matrix and pinv(A U_k) is also a zero matrix. Thus even if x in general has an arbitrarily large component in S_k, your estimate of this component will be zero.\n-- A more common case: If N(A) intersects S_k only trivially (only at the origin), but the smallest principal angle between the two subspaces is small (i.e., the smallest singular value sigma_min of A U_k is small), then pinv(A U_k) will be very large (in any norm) since 1 / sigma_min is large, and the point (P_S^oblique) x will diverge to infinity. To see this geometrically, imagine that S_\\lambda in Figure 8 is being rotated so that the angle between R(A^*) and S_\\lambda approaches pi/2. The oblique projection point will travel to infinity because the projection always takes place along the line orthogonal to R(A^*) (along the nullspace of A).\n\nA naive proposal to fix this by choosing subspaces so that R(A^*) and S_k are close is not useful because those subspaces give the same information as pinv(A). “Useful” subspaces reveal information about x in N(A) and those are precisely the ones that cause trouble. We want to choose the subspaces independently of A. \n\nAnother proposal could be to regularize the pinv by strategies such as Tikhonov regularization, but these methods will not reinstate the nullspace components because x does not live in any subspace, and the overall reconstruction would again be forced to be in a certain subspace, as explained in more detail in what follows.\n\nLet us see how this shows up in your suggested minimization \\min_x \\sum_k \\|P_{S_k} x - \\hat z_k \\|_2^2 (with squared norm). 
Any solution to this convex problem satisfies (by setting the gradient to zero):\n\n\\sum_k P_{S_k}^* (P_{S_k} \\hat x - \\hat z_k) = 0\n\nUsing the fact that P_{S_k} is an orthogonal projection, hence self-adjoint (P_{S_k} = P_{S_k}^*) and idempotent (P_{S_k}^2 = P_{S_k}), and that \\hat z_k already lives in S_k, we can write this as \\sum_k (P_{S_k} \\hat x - \\hat z_k) = 0, or (dividing both sides by the total number of subspaces so that we can think in terms of averages):\n\n(1/K) ( \\sum_k P_{S_k} ) \\hat x = (1/K) \\sum_k \\hat z_k \n\nFor a large enough number of random subspaces K, the matrix R = (1/K) ( \\sum_k P_{S_k} ) on the left-hand side becomes full rank. Since \\hat z_k = U_k pinv(A U_k) A x (up to noise), the right-hand side can be written\n\n(1/K) \\sum_k U_k pinv(A U_k) A x = {[ (1/K) \\sum_k U_k pinv(A U_k) ] A} x.\n\nThe row space of the matrix G = [ (1/K) \\sum_k U_k pinv(A U_k) ] A multiplying x on the rhs is the same as the row space of A, so G is low-rank (it is an oblique projection matrix on some subspace). This gives\n\n\\hat x = inv(R) G x,\n\nwhich can only be a good estimate if x is already in the range of inv(R) G x (a subspace). But again, x is not constrained to any particular subspace. \n\nAny linear reconstruction, no matter how regularized, can only produce results in a fixed subspace with dimension at most the number of rows of A (for any matrix B, rank(BA) is at most the number of rows in A, so its column space is a fixed low-dimensional subspace). The nullspace of A dictates what can and what cannot be recovered. On the other hand, our method can easily provide information in the nullspace because it explores non-linear correlations between the nullspace and range space components of x (via a manifold model).\n\nTo empirically support the mathematical fact that oblique projections and linear reconstruction can be arbitrarily bad, we simulate your proposed approach ( https://tinyurl.com/obliqueandprojnetfigure ). The code can be found at https://tinyurl.com/obliqueandprojnetcode . Note that the subspaces we use are the same that we used in the experiments in the manuscript.\n", "TL;DR: Your proposed method is indeed equivalent to a linear oblique projection which we described in Appendix A. The oblique projection into a subspace can become arbitrarily bad when the subspace which we want to project into is not aligned well with the range of A^* (adjoint of A). In this response we explain why this is the case both mathematically and via numerical experiments for which we share the code.\n\n(A side remark: we hope that the reviewer was able to read our responses to all their other previous comments in Parts 2, 3 and 4 of our first response. Due to the way OpenReview displays comments, it may have been unclear that those were parts of our response. )", ">> “Thank you for the clarification on how the method works; this cleared up some things. However, it's still not clear to me why this should work. I agree with the other reviewer that the ensemble hypothesis is one potential explanation, but the paper would be strengthened by more depth in this regard.”\n\nResponse: As we have explained in our response to the Reviewer 2 (part 2 of our response), while ensembling is a nice interpretation, the fact that a single network, a SubNet, performs just as well as many ProjNets shows that ensembling is not essential. The gist of our method is that projections are easier to estimate, but they require a nonlinear estimator. 
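To see the contrast in a few self-contained lines, the toy computation below (made-up dimensions, in the notation of our reply above; it is separate from the code at the links we shared) builds a subspace S that is almost inside N(A) and compares the linear oblique estimate with the true orthogonal projection P_S x:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, delta = 50, 10, 5, 1e-3        # ambient dim, #measurements, dim(S), misalignment
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = A @ x                               # noiseless measurements

# Basis of S: null-space directions of A plus a delta-sized row-space component,
# so S is almost (but not exactly) inside N(A).
_, _, Vt = np.linalg.svd(A)             # rows 0..m-1 span R(A^*), rows m.. span N(A)
U, _ = np.linalg.qr(Vt[m:m + k].T + delta * Vt[:k].T)

z_orth = U @ (U.T @ x)                  # target: orthogonal projection P_S x
z_obl = U @ (np.linalg.pinv(A @ U) @ y) # linear estimate: the oblique projection

print(np.linalg.norm(z_orth))           # O(1)
print(np.linalg.norm(z_obl - z_orth) / np.linalg.norm(z_orth))  # huge, roughly 1/delta
```

As delta shrinks, the oblique estimate blows up roughly like 1/delta while the target P_S x stays of order one; taking delta of order one instead makes the linear estimate accurate, but then S is essentially aligned with R(A^*) and carries little information beyond pinv(A)y.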
We elaborate this in detail in response to your comments below. Currently the 8 pages of the manuscript are tightly filled up with the problem description, mathematical motivations, intuitions, and numerical examples which show that the method beats strong baselines. We also added new explanations and figures in the appendix motivated by the reviewers’ comments. We feel that it would be challenging to add more material without removing some important parts and that further results on the various aspects of the method should be part of future publications. \n\n>> “It would also help to see some concreteness to some of the explanations. I find Appendix A and Figure 8 difficult to follow. Is \\cal R(A*) the range of the adjoint of A? I can't find this defined anywhere. Likewise, I can't find a concrete definition of P_S^oblique.”\n\nResponse: \\cal R(A*) is indeed the range of the adjoint of A. Thank you for pointing out this omission. We have added the definition to Appendix A and Figure 8. The definition of P_S^oblique follows from the definition of an oblique projection with a given range and nullspace (and matches your \\hat z_k below). We also added a few other clarifications in the Appendix which hopefully make it easier to follow.\n\n>> “Consider this comparison point. Let S_k be a random subspace and U_k be a basis spanning it. Then z_k := P_{S_k} x = U_k v_k for some coefficient vector v_k. Thus one estimator of z_k is simply \\hat z_k = U_k \\hat v_k where \\hat v_k = pinv(AU_k)y. From these \\hat z_k's I could then estimate x via\n\\min x \\sum_k \\|P_{S_k} x - \\hat z_k \\|_F\n\nI think the above approach is consistent with the spirit of your method, but based on linear estimators of z_k instead of CNNs. But this raises several questions:\n\n1. Is the \\hat z_k above (which I think might correspond to your P_S^oblique) consistent with your appendix A claim that the oblique projection can be arbitrarily bad? I find this difficult to interpret.” \n\nResponse: Indeed, your proposed \\hat z_k is the oblique projection which is denoted P_{S}^{oblique} x in Appendix A and Figure 8. Further, your reconstruction (where we squared the norm):\n\n\\min x \\sum_k \\|P_{S_k} x - \\hat z_k \\|_2^2\n\nis the same as our (2), without the regularizer and constraints. To see this equivalence, assume without loss of generality that the columns of U_k are orthonormal so that P_{S_k} = U_k U_k^T. Since \\| . \\|_2 is unitarily invariant, left multiplication by orthonormal U_k does not change it so we can write the minimization as \\min_x \\sum_k \\|U_k^T x - \\hat v_k \\|_2^2. Noting that \\hat v_k is our q_\\lambda and U_k our B_\\lambda, and stacking the terms in the sum, we get the data term in (2).\n\nAnd yes, \\hat z_k can be arbitrarily bad. Let us try again to explain why this is the case (both mathematically and with numerical examples (see link https://tinyurl.com/obliqueandprojnetfigure ). In what follows N(A) will denote the nullspace of matrix A, R(A^*) the range of matrix A^*, where ^* denotes the adjoint (which is the transpose for real matrices). Superscript ^\\perp denotes the orthogonal complement.\n\nFirst, we note that in underdetermined inverse problems y = Ax + n, the role of any regularizer is to provide information about x in the nullspace of A. The unknown vector x has a component along the nullspace of A and along its orthogonal complement, N(A)^\\perp = R(A^*). 
The component along the orthogonal complement of N(A) is simply pinv(A)*y which is the orthogonal projection of x into R(A^*). \n\nThe only situation where linear methods can provide this nullspace information is when x is constrained to a *known* subspace. In this case the reconstruction is given by the oblique projection U pinv(A U) y (where the columns of U span the subspace) and there is no need for random projections. But this is not useful for us, because our x does not live in any subspace, let alone a known one. It is well known that most interesting signal classes (natural images, biomedical images, seismic images, anything with singularities such as edges) are not efficiently modeled by subspaces. That is why modern methods rely on sparse models, low-rank models, manifold models, and other non-linear models.", ">> “If I observe only a subset of entries in a vector that lies in a known subspace, under some conditions I can identify the original location in the subspace.” \n\nResponse: That is certainly true, and something one could use when x lives in a known subspace. But our x does not live in a subspace (let alone a known one). Again, most interesting classes of images (natural, biomedical, seismic, textures) are not well-modeled by subspaces but rather by sparse models, manifold models, etc. \n\nMoreover, often only a class of models is known: we assume that x lives on a low-dimensional manifold, but we do not know which one (so we have to learn it implicitly from the data). A different example is that we might know that x is sparse in a dictionary, but without knowing the dictionary we need to learn it.\n\nEven if we simply assume x lives in a subspace, we still do not know which one, and learning the subspace is a nonlinear problem. If the subspace is known, the recovery is indeed linear. Moving towards sparse models, everything becomes nonlinear. To learn a sparse (union of subspaces) model, one has to do something like dictionary learning (again nonlinear). But in this case the recovery is nonlinear as well (l1 minimization or something similar). More general models with unclear algebraic characterization such as low-dimensional manifolds require more powerful learning structures.\n\nFinally, let us look at the conditions you mention. Consider an observation operator A which returns a few entries of x. Its nullspace N(A) is spanned by the canonical basis vectors corresponding to the unobserved entries. If x belongs to a subspace S which intersects N(A) (for example, it contains one of the said canonical basis vectors), then x cannot be reconstructed from Ax. Suppose that this is only approximately the case: S contains one of those canonical basis vectors perturbed by a tiny amount of noise. Then in principle x can be reconstructed, and the reconstruction operator is F = U pinv(A U) so that FA is an oblique projection. If y = Ax and x \\in S, clearly Fy = x. But pinv(AU) will have a very large singular value, so if y = Ax + noise, the noise will dominate the reconstruction. While this is different from what we do (our x is not at all in any subspace), it is another instance where oblique projections can be very bad, this time due to noise. To get an idea of how noise aggravates things, please take a look at the new results with added noise.\n\n>> “This fact is at the heart of low-rank matrix completion and it seems to contradict your claim about how difficult to can be to compute these projections. 
How do I interpret your claim in this setting?”\n\nResponse: In the light of the above discussion, we respectfully disagree. Low-rank matrix completion relies on the fact that we can identify the right low-dimensional subspace spanned by rank-1 matrices given sufficient measurements. If this subspace is already known then we agree with the reviewer’s example, but this is not the gist of low-rank matrix completion: it would only allow reconstructing special low-rank matrices that are linear combinations of some fixed rank-1 matrices.\n\nGenerally, in low-rank matrix completion, we do not know the low-rank matrix basis, which makes the problem nonlinear (analogously, we do not know the sparse support in sparse models). Identifying the basis of rank-1 matrices is analogous to support recovery with sparse priors. The algorithms for low-rank matrix recovery are therefore not linear: they use regularizers such as the nuclear norm optimized by nonlinear schemes such as iterative singular value thresholding. \n\nFurther, because of the particular structure of the measurement operator A here (“return some entries”), we need conditions on x (the matrix to recover) related to the above example of subspaces with many zero entries. In particular, if the matrix is at once low-rank and sparse, it will be problematic for entrywise observations which will return those zeros with significant positive probability. That is why the guarantees in low-rank matrix completion from a few entries assume that the matrix is not simultaneously sparse and low rank (see, e.g., Section 1.1.1. and the paragraph before Theorem 1.3 in [1] where this is formulated as “incoherence of row and column spaces with the standard basis”). Clearly, even with many near-zero entries in the matrix the recovery is unstable.\n\n[1] Candès, E.J. and Recht, B., 2009. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6), p.717.", ">> “2. What is wrong with my approach? Is there an example where it would fail spectacularly but your method would work? Why? How does it compare empirically to the proposed approach? In other words, within this general framework, what is the benefit of nonlinear estimates of the z_k's?”\n\nResponse: Per all the discussions above, your approach will indeed fail spectacularly whenever the random subspaces are not well-aligned with R(A^*). It will work for those that are well-aligned, but in these cases it will not be much more informative than a variation on the pseudoinverse. To further support this claim, we ran numerical experiments to simulate your proposed approach. We have added results from these experiments in the manuscript. If this is too big a change, we are happy to remove it. We provide a link to the results ( https://tinyurl.com/obliqueandprojnetfigure ) and code ( https://tinyurl.com/obliqueandprojnetcode ) if you would like to experiment yourself. Again, note that the subspaces we use are the ones we used in the experiments in the manuscript.\n\nAgain, the benefit of nonlinear estimates of the z_k’s is that they can exploit nonlinear correlations between the nullspace and the R(A^*) components of x, while linear estimators cannot. That is why our reconstructions in the new examples are much better than the linear ones.\n\n>> “3. In my (admittedly possibly suboptimal) linear approach, do we have any insight into the role of the different orthogonal projections and how performance scales with the number of projections? 
Perhaps this could provide insight into how the nonlinear version works.”\n\nResponse: Unfortunately, as we argue in this response (and empirical results provided), the linear method provides little insight into what the non-linear method is doing (beyond pointing to the need for nonlinearity). Since it cannot exploit interesting signal models, everything is dictated by the nullspace of A.\n\nWe agree with the reviewer that our approach opens up questions and research opportunities beyond the current manuscript and that various parts merit deeper study. We are excited by this research and intend to write about it in due time. But we also feel that the manuscript proposes a new, useful approach to regularization, that the discussions therein (strengthened by the previous and this round of reviewer’s comments) motivate the method well and provide mathematical intuitions for why the method does work. The 8-page manuscript and the appendix are tightly packed with problem description, mathematical motivations, intuitions, and numerical examples which show that the method outperforms strong baselines. It now has additional discussions about oblique vs orthogonal projections and the need for nonlinearity, and some additional numerical results in the appendices motivated by the reviewer’s concerns. Everything will be backed up by reproducible code (it already is but we cannot publish it due to anonymity). It would thus be very challenging to add significant new material to the current draft. We hope that the reviewer finds our explanations and this statement reasonable.\n\n>> “4. What is the role of TV regularization in the final estimation of x? I thought that the different subspace projections were providing a form of regularization, so I was surprised that additional regularization was required.”\n\nAs we discuss in the manuscript, the TV-norm regularization is not essential. In fact, for our SubNet (single network that estimates all subspace projections) reconstructions we do not use any regularization, as we already state in the experimental section of the manuscript (Section 4.1.1 Paragraph 3). We make this more explicit by adding a sentence when discussing Equation 2 in Section 3.2. Please note that in the original problem TV regularization did not give workable reconstructions (see Introduction, Figure 1 bottom row and Part 2 of the response to your initial review) so it is an example of how the reformulated inverse problem is better behaved. We use TV regularization for ProjNet reconstructions because we have coefficients for fewer subspaces (130 vs 350) than for SubNet which makes the problem slightly underdetermined. It is not essential for the method to work and even without it it outperforms the baseline as evidenced by SubNet reconstructions, but it does point to the possibility of using more sophisticated strategies in Stage 2, as noted by Reviewer 1.", ">> “The learning part of this algorithm is in step 2, where m different convolutional neural networks are used to learn m good projections. The projections correspond to computing a random Delaunay triangulation over the image domain and then computing pixel averages within each triangle. It's not clear exactly what the learning part is doing, i.e. what makes a \"good\" triangulation, why a CNN might accurately represent one, and what the shortcomings of truly random triangulations might be.”\n\nResponse: Again, we feel that there is a misunderstanding about the role of the networks in our method. 
In one of the subsequent comments the reviewer posits that \n\n>> “... the core idea at the heart of the paper is to speed up this reconstruction using a neural network by viewing the projection onto the mesh space as a set of special filter banks which can be learned.”\n\nIt seems that the reviewer’s interpretation is that the triangulation is computed by the network, together with the projection. But as we elaborate in Section 3.1.1, and as we explained when commenting on the reviewer’s summary above, the role of the network is to compute an orthogonal projection of x into a given random subspace. This is ensured by an explicit, non-trainable, projector added as the last layer (see Section 4.1.1, Paragraph 2). As such, we do use *truly random triangulations*. Nothing about these triangulations is being learned from the training data (see Section 3.1.1, Paragraph 3).\n\nAs detailed in the discussion above, computing these projections from y (or x0) is a nonlinear problem, and it requires a nonlinear computational structure (please see the new Appendix A). Since the normal operator corresponding to ray transforms is convolutional, we decided to use a CNN (assuming that our rays provide a somewhat constant coverage of the domain). As the reviewer points out, the CNNs are not natural structures to produce images that live exactly on triangulations. That is why we add an explicit, non-trainable projection layer (see Section 4.1.1, Paragraph 2, and Figure 11).\n\n>> “More specifically, for each projection the authors start with a random set of points in the image domain and compute a Delaunay triangulation. They average x0 in each of the Delaunay triangles. Then since the projection is constant on each triangle, the projection into the lower-dimensional space is given by the magnitude of the function over each of the triangular regions. Next they train a convolutional neural network to approximate the above projection. The do this m times. It's not clear why the neural network approximation is necessary or helpful.”\n\nResponse: We again wish to emphasize that it is not x0 that we project (this is just a linear operation), but rather we nonlinearly regress orthogonal low-dimensional projections of x which implies that the network models some aspects of the distribution of x. We agree with the reviewer that in the former case, the network would be superfluous. \n\n>> “The core novelty of this paper is the portion that uses a neural network to calculate a projection onto a random Delaunay triangulation. The idea of reconstructing images using random projections is not especially new, and much of the \"inverse-ness\" of the problem here is removed by first taking the pseudoinverse of the forward operator and applying it to the observations. Then the core idea at the heart of the paper is to speed up this reconstruction using a neural network by viewing the projection onto the mesh space as a set of special filter banks which can be learned.”\n\nResponse: While we agree with the reviewer that random projections are a known idea, as far as we know and as noted by Reviewer 2, this is the first work that attempts to regress the orthogonal projections of the target signal x into random subspaces. We believe that this contribution sets it apart from previous work, especially because computing these projections from measurements is a truly nonlinear problem unlike the more common fixed linear projections. 
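To see why a purely linear estimator cannot do this job in general, consider the following small NumPy sketch (an illustrative example with toy dimensions, not code or notation from the paper): the natural linear guess of P_S x that can be formed from the measurements, U pinv(AU) y, is an oblique projection and can miss a large part of the true orthogonal projection U U^T x whenever the random subspace is not aligned with the row space of A.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 200, 40, 10                              # signal size, number of measurements, subspace dimension
A = rng.standard_normal((M, N))                    # underdetermined linear forward operator
U, _ = np.linalg.qr(rng.standard_normal((N, K)))   # orthonormal basis of a random K-dim subspace S
x = rng.standard_normal(N)                         # a generic signal with no prior structure
y = A @ x                                          # noiseless measurements

p_true = U @ (U.T @ x)                             # orthogonal projection P_S x (uses x, so not available in practice)
p_lin = U @ (np.linalg.pinv(A @ U) @ y)            # linear estimate of P_S x formed from y alone (exact only when x lies in S)

rel_err = np.linalg.norm(p_lin - p_true) / np.linalg.norm(p_true)
print(f"relative error of the linear (oblique) estimate: {rel_err:.2f}")

With these toy dimensions the printed relative error typically comes out well above 1: the linear estimate is dominated by leakage from the component of x in the nullspace of A, which is exactly the part that a learned, nonlinear regressor is meant to recover by exploiting the prior on x.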
The reason to regress P_S x instead of x is that it is a more stable task, and a “clever” way to achieve randomization while at the same time controlling stability and hardness of learning. The role of the network is to approximate this nonlinear operator that maps y to projections of x, rather than to speed up a simple linear projection of x0.\n\nWe also respectfully disagree that much of the inverseness is removed by taking the pseudoinverse. In fact, this is one of our main contributions: we state in several places in the manuscript (for example Paragraph 3 of Introduction), that we work in a highly undersampled regime where the pseudoinverse (or any other simple regularizer for that matter) cannot do a reasonable job and the role of learning cannot be seen as denoising or artifact removal (see for example Figure 1 bottom row). This is also illustrated in Section 4 with the non-negative least squares reconstructions shown in Figures 6 and 7.\n", ">> “Regarding why it works: While learning a single projection maybe more sample efficient, learning all of them s.t. the obtained x is accurate may not be. Given this, I'm not entirely sure why the proposed approach is supposed to work. One hypothesis is that the different learned CNNs that each predict a piecewise projection are implicitly yielding an ensembling effect, and therefore a more fair baseline to compare would be a 'direct-ensemble' where many different (number = number of projections) direct CNNs (with different seeds etc.) are trained, and their predictions ensembled.”\n\nResponse: Recall that we are in a regime where we do not have access to a large ground-truth training dataset and the measurements are very sparse. For this reason, we cannot hope to get a method that reconstructs all the details of x. This is the motivation to split the problem in two stages: in the first stage we only estimate “stable” information, by learning a collection of nonlinear, but stable maps from y (or pinv(A)*y, or its non-negative least squares reconstruction) to projections of x. As shown experimentally, this strategy outperforms the baseline which uses the exact same number of measurements and training samples. In fact, all ProjNets are trained using half the number of samples as the baseline (we now make this more explicit in the manuscript).\n\nIn the second stage of computing x from the projections, in order to get a very accurate, detailed estimate, one would need to use more training samples, and those samples should correspond to ground truth images which we do not have. Furthermore, as Reviewer 1 suggests, this might involve new and better regularizers. \n\nWe agree with the reviewer’s hypothesis that the different learned CNNs are implicitly yielding an ensembling effect—that is a nice interpretation of the proposed method. However, because the direct inverse map from y to x is highly unstable, we design a randomization mechanism which is better behaved than just training neural networks with different seeds. The instability of the full inverse map y -> x (or x0 -> x) will result in large systematic errors that will not average out. To illustrate this, per reviewer’s suggestion, we trained ten new direct networks and repeated the erasure experiments (Figures 5b, 12, 13) for the case when p=1/8. 
If, for example, we consider the image in Figure 5b, we find that 9/10 direct network reconstructions look almost the same as the poor reconstruction shown in the manuscript (see: https://tinyurl.com/direct-new-seeds ), while one reconstruction looks a bit closer to the true x, but still quite wrong (much more so than the reconstructions from the ProjNets). Our randomization scheme operates by providing random, low-dimensional targets that are stable and have low variance so that the resulting estimates are close to their true values and the subsequent ensembling mechanism is deterministic (in the sense that it does not rely on “noise”). We stress again that the total number of training samples used to train all ProjNets, or the single SubNet is the same or smaller than that used to train the direct baseline.\n\nMoreover, we point out that we train two different architectures—one that requires a different network for each subspace (ProjNet) and one that works for any subspace (SubNet). The success of SubNet and the fact that it outperforms the direct baseline suggests that the important idea is indeed that of estimating low-dimensional projections.\n\nAnother important aspect of our choice of randomization is that it leads to interpretable, local measurements. These correspond to a new, equivalent forward operator B with favorable properties (see Section 3.2, 3.3 and Proposition 1). It would be hard to interpret the output of randomly initialized direct networks in a similar way (for example, it is not clear what we should expect the output distribution to be).\n\n[1] Stefanov, P. and Uhlmann, G., 2009. Linearizing non-linear inverse problems and an application to inverse backscattering. Journal of Functional Analysis, 256(9), pp.2842-2866.", "Thank you for the clarification on how the method works; this cleared up some things. However, it's still not clear to me why this should work. I agree with the other reviewer that the ensemble hypothesis is one potential explanation, but the paper would be strengthened by more depth in this regard.\n\nIt would also help to see some concreteness to some of the explanations. I find Appendix A and Figure 8 difficult to follow. Is \\cal R(A*) the range of the adjoint of A? I can't find this defined anywhere. Likewise, I can't find a concrete definition of P_S^oblique.\n\nConsider this comparison point. Let S_k be a random subspace and U_k be a basis spanning it. Then z_k := P_{S_k} x = U_k v_k for some coefficient vector v_k. Thus one estimator of z_k is simply \\hat z_k = U_k \\hat v_k where \\hat v_k = pinv(AU_k)y. From these \\hat z_k's I could then estimate x via\n\\min x \\sum_k \\|P_{S_k} x - \\hat z_k \\|_F\n\nI think the above approach is consistent with the spirit of your method, but based on linear estimators of z_k instead of CNNs. But this raises several questions:\n\n1. Is the \\hat z_k above (which I think might correspond to your P_S^oblique) consistent with your appendix A claim that the oblique projection can be arbitrarily bad? I find this difficult to interpret. If I observe only a subset of entries in a vector that lies in a known subspace, under some conditions I can identify the original location in the subspace. This fact is at the heart of low-rank matrix completion and it seems to contradict your claim about how difficult to can be to compute these projections. How do I interpret your claim in this setting? \n\n2. What is wrong with my approach? Is there an example where it would fail spectacularly but your method would work? 
Why? How does it compare empirically to the proposed approach? In other words, within this general framework, what is the benefit of nonlinear estimates of the z_k's?\n\n3. In my (admittedly possibly suboptimal) linear approach, do we have any insight into the role of the different orthogonal projections and how performance scales with the number of projections? Perhaps this could provide insight into how the nonlinear version works. \n\n4. What is the role of TV regularization in the final estimation of x? I thought that the different subspace projections were providing a form of regularization, so I was surprised that additional regularization was required. ", "We thank the reviewers for taking the time to read the paper and prepare their comments. All are informative and they made us aware of the parts of presentation that might have been confusing; we hope that our updates make the manuscript clearer.\n\nWith some of the comments, though, we have to respectfully disagree. We explain this in the responses to individual reviewers. Here we only summarize a few main points, before addressing the individual reviewers’ comments in detail.\n\n-- In our method we solve a linear inverse problem y = Ax + n which is very ill posed, without having access to ground truth training data. To do so, we train a non-linear regressor (a neural net) which maps y to orthogonal projections of x into random subspaces with an arbitrarily chosen training dataset. To simplify network structure, we precompute x0 which can be an application of a pseudoinverse of A to y, a non-negative least squares solution or some other simple estimator. Importantly, because the measurements are few and the problem is very ill posed, x0 is a very bad estimate of x.\n\n-- We do not project x0 into random subspaces as Reviewer 3 suggests—this is achieved by a simple linear operator and would be of limited interest. We rather compute *orthogonal* projections of x from x0. As we elaborate in the updated manuscript (see Appendix A) and in the response to Reviewer 3, this cannot be achieved by a linear operator and it requires training a nonlinear regressor (in our case, a neural network).\n\n-- The term “linear inverse problems” only implies that the forward operators are linear. In most interesting applications, the inverse operators are arbitrarily nonlinear. This is the case already with standard sparsity-based methods. In our case, since we do not know where x lives, the nonlinear modeling is achieved by learning. Many, if not most practical imaging problems have (approximately) linear forward operators: examples are synthetic aperture radar, seismic tomography, radio-interferometric astronomy, MRI, CT, etc. While certainly many are only approximately linear (or fully nonlinear), linearization techniques are at the core of both practical algorithms and theoretical analysis. The latter is true even for questions of uniqueness and stability as discussed beautifully in [1]. In this sense we are looking at a very important and large class of nonlinear operators to be learned, and we do not see our discussion of linear inverse problems as a harsh limitation. That said, our method could be applied to other problems such as depth sensing, as suggested by Reviewer 2, but the justification would require additional work. For example, the Lipschitz stability (which we have per [1]) would not be guaranteed. 
The fact that an inverse exists for the imaging tasks we consider is given by injectivity on \\mathcal{X}, which is a low-dimensional structure (a manifold) embedded in R^N. In the original manuscript this assumption was in a footnote which is now expanded into a short discussion in Section 3.1. We elaborate this further in the response to Reviewer 2.\n\n-- Our method can be interpreted as a randomization or an ensembling method. But unlike strategies such as randomizing the seed when training many neural networks to directly estimate x, which will be hampered by the instability of the problem and the fact that we do not have ground truth data, we use a particular randomization scheme where we randomize the learning target. That way we a) have a clear model for randomization which tells us exactly how to use the individual projection estimates, and b) make each individual member of the problem ensemble stable.\n\n[1] Stefanov, P. and Uhlmann, G., 2009. Linearizing non-linear inverse problems and an application to inverse backscattering. Journal of Functional Analysis, 256(9), pp.2842-2866.", ">> “The proposed method isn't bad, and the idea is interesting. But I can't help but wonder whether it works just because what we're doing is denoising the least squares reconstruction, and regression on many random projections might be pretty good for that. Unfortunately, the experiments don't help with developing a deeper understanding.” \n\nResponse: As we stress in the manuscript (Paragraph 3 of Introduction and Figure 1) we are precisely addressing the regime where the denoising or artifact removal paradigm fails. In Figure 1, we show that standard methods that would indeed correspond to denoising the least squares reconstruction, such as the TV-regularized least squares or non-negative least squares do not give a reasonable solution to our problem.\n\nWe feel the reviewer’s impression is based on their interpretation that we project x0 into random subspaces, but as we try to emphasize in our response, we are doing something very different. Estimating *orthogonal* projections of x (as opposed to x0) from few measurements cannot be interpreted as denoising, but rather as discovering different stable pieces of information about the conditional distribution of x which is supported on some a priori unknown low-dimensional structure, $\\mathcal{X}$, and the part of learning is to discover this structure (or rather, its projections into a set of random subspaces which is a simpler problem). We updated the manuscript to further emphasize this aspect in Section 3.1 and added Appendix A.\n\n[1] Jin, K.H., McCann, M.T., Froustey, E. and Unser, M., 2017. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9), pp.4509-4522.\n[2] Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D. and Ozcan, A., 2018. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light: Science & Applications, 7(2), p.17141.\n[3] Sinha, A., Lee, J., Li, S. and Barbastathis, G., 2017. Lensless computational imaging through deep learning. Optica, 4(9), pp.1117-1125.\n[4] Li, S., Deng, M., Lee, J., Sinha, A. and Barbastathis, G., 2018. Imaging through glass diffusers using densely connected convolutional networks. Optica, 5(7), pp.803-813.", ">> “At the heart of this paper is the idea that for an L-Lipschitz function f : R^k → R the sample complexity is O(L^k), so the authors want to use the random projections to essentially reduce L. 
However, the Cooper sample complexity bound scales with k like k^{1+k/2}, so the focus on the Lipschitz constant seems misguided. This isn't damning, but it seems like the piecewise-constant estimators are a sort of regularizer, and that's where we really get the benefits.”\n\nResponse: We apologize for using K in the manuscript while stating this result: this is unfortunate, especially because we later use K for subspace dimension (and in this case the reviewer is absolutely right). We are interested in the stability of the map from measurements y to the targets P_S x, so that the map f(y) operates on objects in R^M. Note that the number of measurements M (or k in the reviewer’s comment) is kept fixed. On the other hand, L changes because we are learning a simpler target.\n\nWe agree that the piecewise-constant estimators act as a regularizer in the sense of learning. They restrict the hypothesis class to “regular” or “simple” maps, and one standard way to quantify regularity is via the Lipschitz constant.\n\n>> “The authors only compare to another U-Net, and it's not entirely clear how they even trained that U-Net. It'd be nice to see if you get any benefit here from their method relative to other approaches in the literature, or if this is just better than inversion using a U-Net. Even how well a pseudoinverse does would be nice to see or TV-regularized least squares.”\n\nResponse: We describe the training of the U-Net (the direct baseline in the paper) in some detail in Section 4.1.1, which we now expanded to include information about the number of samples for training. We also now explicitly highlight that the training and test sets are entirely different in all experiments and for all networks. The U-Net that we use achieves state of the art results on a very long list of image recovery tasks [1,2,3,4], including tomographic problems that are similar to the one we experiment with. This suggests that it is a hard baseline to beat. Indeed, as the reviewer suggests, we already do show both the pseudoinverse in Figures 1, 6 and 7 and the TV-regularized least squares in bottom row of Figure 1. It can be observed that in all cases the U-Net, our baseline, outperforms them while the proposed method beats the U-Net.\n\n>> “Practically I'm quite concerned about their method requiring training 130 separate convolutional neural nets. The fact that all the different datasets give equal quality triangulations seems a bit odd, too. Is it possible that any network at all would be okay? Can we just reconstruct the image from regression on 130 randomly-initialized convolutional networks?”\n\nResponse: We agree that it is favorable to train fewer networks. However, we already do propose the SubNet (motivated exactly by this concern) which requires training only a single network (see Section 4.1.1 Paragraph 5), and which performs on par with the collection of ProjNets and better than the baseline. Note that we are using the same number of samples to train the SubNet and the direct baseline, and only half of those samples to train *all* the ProjNets. We now mention the number of samples explicitly in Section 4 under Robustness to Corruption.\n\nWe are not quite certain that we understand the comment about equal quality triangulations. The experiments on different datasets showcase that we can train on arbitrary image datasets and obtain comparable reconstructions. We reiterate that our networks are not computing triangulations, only projections into these triangular subspaces. 
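To make this distinction concrete, the projection itself can be sketched in a few lines (illustrative code with arbitrary sizes, not the implementation used in the paper): a random Delaunay mesh is fixed once, and projecting simply replaces every pixel of an image by the mean over its triangle; the networks are trained to regress these per-triangle averages of the unknown x from the measurements, not to produce the mesh.

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
H = W = 64                                                            # toy image size
pts = np.vstack([rng.uniform(0, H - 1, size=(60, 2)),
                 [[0, 0], [0, W - 1], [H - 1, 0], [H - 1, W - 1]]])   # add corners so the mesh covers the image
mesh = Delaunay(pts)

rows, cols = np.mgrid[0:H, 0:W]
labels = mesh.find_simplex(np.column_stack([rows.ravel(), cols.ravel()]))  # triangle index of every pixel

def project_onto_mesh(img, labels):
    # Orthogonal projection onto images that are constant on each triangle:
    # every pixel is replaced by the mean over its triangle.
    flat = img.ravel().astype(float)
    out = np.empty_like(flat)
    for t in np.unique(labels):
        mask = labels == t
        out[mask] = flat[mask].mean()
    return out.reshape(img.shape)

img = rng.random((H, W))
low_dim_target = project_onto_mesh(img, labels)   # the kind of target a ProjNet regresses, but from the measurements rather than from img

At training time a network sees only the measurements (or x0) and is asked to output this per-triangle average of the unknown x; that regression is where the learning enters.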
All triangulations are generated at random, independently of the datasets and the networks. \n\nThe reviewer’s idea of regression on 130 randomly-initialized convolutional networks is interesting and a possible avenue for further research. However, each network would approximate the same unstable, high variance map (see, for example, the response to Reviewer 2, and examples https://tinyurl.com/direct-new-seeds ). One important aspect of our randomization via random triangulations is that it gives interpretable, local measurements, equivalent to a new forward operator B with favorable properties (see the discussion in Section 3.2 and 3.3). It is not immediately clear how one would interpret the outputs of randomly initialized convolutional networks.\n", "The reviewer summarizes our method as\n\n>> “This paper describes a novel method for solving inverse problems in imaging.\n \nThe basic idea of this approach is use the following steps:\n1. initialize with nonnegative least squares solution to inverse problem (x0)\n2. compute m different projections of x0\n3. estimate x from the m different projections by solving \"reformuated\" inverse problem using TV regularization.”\n\nResponse: We have to respectfully disagree with this summary, especially because it informs the remainder of the reviewer’s comments. There seems to be a misunderstanding about Step 2 and many later comments appear to stem from it. Since this step is the crux of our proposed method, we begin by summarizing it here, with references to the relevant parts of the manuscript.\n\nInstead of computing m different projections of x0 as the reviewer suggests, we regress subspace projections of x, the true image (see Section 3.1.1, Paragraphs 3 and 4). To do so, we must train a nonlinear regressor, in our case a convolutional neural network. (The need for nonlinearity is explained below.) To make this point clearer in the manuscript, we updated Figure 2 to explicitly show that x0 is not fed into linear subspace projectors of itself, but rather used as data from which we estimate projections of x. Indeed, projecting x0 would not be very interesting since it would simply imply various linear ways of looking at x0 and the networks would not be doing any actual inversion or data modeling. \n\nAgain, what we actually do is that we compute *orthogonal* projections P_S x from y = Ax (or x0 = pinv(A)y or something similar) into a collection of subspaces {S_\\lambda}_{\\lambda=1}^{\\Lambda} (see Section 3.1.1, Paragraph 3). While projecting x0 is a simple linear operation, regressing projections of an unknown x from the measurement data y is not. To explain why we need nonlinear regressors, we added a new figure and a short discussion to the manuscript (please see the new Appendix A). For the reviewer’s convenience, we summarize the discussion here (although it might be easier to read in the typeset pdf version):\n\nSuppose that there exists a linear operator F \\in R^{N \\times M} which maps y (or pinv(A)y) to P_S x. The simplest requirement on such an F is consistency: if x already lives in the subspace S, then we would like to have F A x = x. Another way to write this is that for any x, not necessarily in S, we require FA FA x = FA x, which implies that FA = (FA)^2 is an idempotent operator. However, because range(F) = S \\neq range(A^*), it will in general not hold that (FA)^* = FA. 
This implies that FA is not an orthogonal projection, but rather an oblique one.\n\nAs we show in the new Figure 8 (Appendix A), this oblique projection can be an arbitrarily poor approximation of the actual orthogonal projection that we seek. The nullspace of this projection is precisely N(A) = range^\\perp(A^*). Similar conclusions can be drawn for any other (ad hoc) linear operator, which would not even be a projection.\n\nThere are various assumptions one can make to guarantee that the map from Ax to P_S x exists. We assume that the models live on a low-dimensional manifold (please see updated Section 3.1; this low-dimensional structure assumption has previously been a footnote), and that the measurements are in general position with respect to this manifold. Our future work involves making quantitative statements about this aspect of the method.\n", ">> “Pros:\n- The proposed approach is interesting and novel - I've not previously seen the idea of predicting different picewise constant projections instead of directly predicting the desired output (although using random projections has been explored)\n- The presented results are quantitatively and qualitatively better compared to a direct prediction baseline\n- The paper is generally well written, and interesting to read”\n\nResponse: We are glad that the reviewer found the paper interesting.\n\n>> “While the method is interesting, it is apriori unclear why this works, and why this has been only explored in context of linear inverse problems if it really does work.\"\n\n>> \"Regarding limited demonstration: The central idea presented here is is generally applicable to any per-pixel regression task. Given this, I am not sure why this paper only explores it in the particular case of linear inversion and not other general tasks (e.g. depth prediction from a single image). Is there some limitation which would prevent such applications? If yes, a discussion would help. If not, it would be convincing to see such applications.”\n\nResponse: While we agree with the reviewer that the central idea is more widely applicable, we wish to emphasize that what the reviewer calls a “particular case of linear inversion” covers a very large variety of practically relevant problems. The list includes super-resolution, deconvolution, computed tomography, inverse scattering, synthetic aperture radar, seismic tomography, radio-interferometric astronomy, and many other problems. \n\nImportantly, the fact that the forward problem is linear (which is why the corresponding inverse problems are unfortunately called linear) does not at all imply that the sought inverse map which we are trying to learn (the solution operator) is linear. The inverse map of interest will not be linear for anything but the simplest Tikhonov regularized solution (and variations thereof). For instance, if x is modeled as sparse in a dictionary, the inverse map is nonlinear even though the vast majority of inverse problems regularized by sparsity are “linear\". The entire field of compressive sensing is concerned with linear inverse problems. With general manifold models for x, such as the one assumed in the paper, we depart further from linear inverse maps. We now state this more explicitly in Section 3.1 and a new Appendix A. The ability to adapt to such nonlinear prior models is part of the reason why CNNs perform well on related problems. Additionally, these nonlinear inverses may be arbitrarily ill-posed, which calls for ever more sophisticated regularizers. 
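A standard textbook example makes the nonlinearity point explicit (a generic sketch, not specific to our method): even for a fixed linear forward operator A, the sparsity-regularized reconstruction is computed by a nonlinear map of the data, e.g. ISTA, whose soft-thresholding step already makes the solution operator y -> x_hat nonlinear.

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=300):
    # Minimize 0.5*||A x - y||^2 + lam*||x||_1; A is linear, the solver is not.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
# ista(A, y1 + y2) differs from ista(A, y1) + ista(A, y2) in general: the solution operator is nonlinear in y.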
In this sense, we are looking at a very large class of hard, practically relevant problems, whose solution operators are nonlinear.\n\nWhile nothing prevents practical application of our proposed method to problems such as single-image depth estimation, one benefit of studying linear inverse problems is that as soon as we are in finite dimensions (e.g., a low-dimensional manifold in R^N and a finite number of measurements), and the forward operator is injective, Lipschitz stability is guaranteed (refer added citation: [1]). Injectivity can be generically achieved with a sufficient number of measurements that depends only on the manifold dimension.\n\nIn applications such as depth estimation from a single image it is less straightforward to obtain similar guarantees. Namely, injectivity fails as one can easily construct cases where the same 2D depth map corresponds to multiple 2D images. So, while in practice our method might give good results, the justification would require additional work.", "We are glad that the reviewer enjoyed the paper. Indeed one of the main ideas put forward is the separation into information that can be stably (but nonlinearly) extracted from the measurements in this very ill-posed, no ground truth regime, and information that requires a stronger regularizing idea which kicks in at stage 2. We find it encouraging that the reviewer’s comments on improving stage 2 are quite similar to our ideas on extending this work (we now mention this in the concluding remarks). Further, we now provide an additional discussion of why the method can work and why nonlinear regressors are necessary in Appendix A and an updated Section 3.1, as an effort to address the comments of other reviewers.", "This paper proposes a novel method of solving ill-posed inverse problems and specifically focuses on geophysical imaging and remote sensing where high-res samples are rare and expensive. \nThe motivation is that previous inversion methods are often not stable since the problem is highly under-determined. To alleviate these problems, this paper proposes a novel idea: \ninstead of fully reconstructing in the original space, the authors create reconstructions in projected spaces. \nThe projected spaces they use have very low dimensions so the corresponding Lipschitz constant is small. \nThe specific low-dimensional reconstructions they obtain are piecewise constant images on random Delaunay trinagulations. This is theoretically motivated by classical work (Omohundro'89) and has the further advantage that the low-res reconstructions are interpretable. One can visually see how closely they capture the large shapes of the unknown image. \n\nThese low-dimensional reconstructions are subsequently combined in the second stage of the proposed algorithm, to get a high-resolution reconstruction. The important aspect is that the piecewise linear reconstructions are now treated as measurments which however are local in the pixel-space and hence lead to more stable reconstructions. \n\nThe problem of reconstruction from these piecewise constant projections is of independent interest. Improving this second stage of their algorithm, the authors would get a better result overall. For example I would recommend using Deep Image prior as an alternative technique of reconstructing a high-res image from multiple piecewise constant images, but this can be future work. \n\nOverall I like this paper. It contains a truly novel idea for an architecture in solving inverse problems. 
The two steps can be individually improved but the idea of separation is quite interesting and novel. \n\n", "This paper describes a novel method for solving inverse problems in imaging.\n\nThe basic idea of this approach is use the following steps:\n1. initialize with nonnegative least squares solution to inverse problem (x0)\n2. compute m different projections of x0\n3. estimate x from the m different projections by solving \"reformuated\" inverse problem using TV regularization.\n\nThe learning part of this algorithm is in step 2, where m different convolutional neural networks are used to learn m good projections. The projections correspond to computing a random Delaunay triangulation over the image domain and then computing pixel averages within each triangle. It's not clear exactly what the learning part is doing, i.e. what makes a \"good\" triangulation, why a CNN might accurately represent one, and what the shortcomings of truly random triangulations might be.\n\nMore specifically, for each projection the authors start with a random set of points in the image domain and compute a Delaunay triangulation. They average x0 in each of the Delaunay triangles. Then since the projection is constant on each triangle, the projection into the lower-dimensional space is given by the magnitude of the function over each of the triangular regions. Next they train a convolutional neural network to approximate the above projection. The do this m times. It's not clear why the neural network approximation is necessary or helpful. \n\nEmpirically, this method outperforms a straightforward use of a convolutional U-Net to invert the problem.\n\nThe core novelty of this paper is the portion that uses a neural network to calculate a projection onto a random Delaunay triangulation. The idea of reconstructing images using random projections is not especially new, and much of the \"inverse-ness\" of the problem here is removed by first taking the pseudoinverse of the forward operator and applying it to the observations. Then the core idea at the heart of the paper is to speed up this reconstruction using a neural network by viewing the projection onto the mesh space as a set of special filter banks which can be learned.\n\nAt the heart of this paper is the idea that for an L-Lipschitz function f : R^k → R the sample complexity\nis O(L^k), so the authors want to use the random projections to essentially reduce L. However, the Cooper sample complexity bound scales with k like k^{1+k/2}, so the focus on the Lipschitz constant seems misguided.\nThis isn't damning, but it seems like the piecewise-constant estimators are a sort of regularizer, and that's where we\nreally get the benefits.\n\nThe authors only compare to another U-Net, and it's not entirely clear how they even trained that U-Net. It'd be nice to see if you get any benefit here from their method relative to other approaches in the literature, or if this is just better than inversion using a U-Net. Even how well a pseudoinverse does would be nice to see or TV-regularized least squares.\n\nPractically I'm quite concerned about their method requiring training 130 separate convolutional neural\nnets. The fact that all the different datasets give equal quality triangulations seems a bit odd, too. Is\nit possible that any network at all would be okay? Can we just reconstruct the image from regression\non 130 randomly-initialized convolutional networks? \n\nThe proposed method isn't bad, and the idea is interesting. 
But I can't help but wonder whether it works just because what we're doing is denoising the least squares reconstruction, and regression on many random projections might be pretty good for that. Unfortunately, the experiments don't help with developing a deeper understanding. \n" ]
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "HJx2u7-s14", "B1xxSs2nyE", "iclr_2019_HyGcghRct7", "rklYC70opQ", "r1lmQNCjTX", "rkxR0Zh-0X", "rkxR0Zh-0X", "rkxR0Zh-0X", "rkxR0Zh-0X", "rkxR0Zh-0X", "HyxbyBRi6m", "Hke127Csa7", "HyxbyBRi6m", "iclr_2019_HyGcghRct7", "HygKVSRjTX", "S1gHWHAjpm", "Sygyv3zq3X", "SklYOP5e67", "BygsLNF62Q", "iclr_2019_HyGcghRct7", "iclr_2019_HyGcghRct7" ]
iclr_2019_HyGhN2A5tm
Multi-Agent Dual Learning
Dual learning has attracted much attention in the machine learning, computer vision and natural language processing communities. The core idea of dual learning is to leverage the duality between the primal task (mapping from domain X to domain Y) and the dual task (mapping from domain Y to X) to boost the performance of both tasks. The existing dual learning framework forms a system with two agents (one primal model and one dual model) to exploit this duality. In this paper, we extend this framework by introducing multiple primal and dual models, and propose the multi-agent dual learning framework. Experiments on neural machine translation and image translation tasks demonstrate the effectiveness of the new framework. In particular, we set a new record on IWSLT 2014 German-to-English translation with a 35.44 BLEU score, achieve a 31.03 BLEU score on WMT 2014 English-to-German translation with an improvement of over 2.6 BLEU over the strong Transformer baseline, and set a new record of 49.61 BLEU on the recent WMT 2018 English-to-German translation.
accepted-poster-papers
This paper studies two tasks: machine translation and image translation. The authors propose a new multi-agent dual learning technique that takes advantage of the symmetry of the problem. The empirical gains over a competitive baseline are quite solid. The reviewers consistently liked the paper, although in some cases they had fairly low confidence in their assessments.
train
[ "HklXxIdqn7", "HJg-jI7a1N", "HkeVjIX9hX", "ryxsHLHn0X", "BklUtMfnCX", "ryePF6wKRm", "H1l6suNjhX", "BJgSyP6fC7", "HklU6LTzRm", "rJxe6Bpf07", "HJgc0z6z07", "BJxU4lAYh7", "Hkxwf3VKnm", "Hkxo4B28n7", "Hylw94tIh7" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public" ]
[ "The author's present a dual learning framework that, instead of using a single mapping for each mapping task between two respective domains, the authors learn multiple diverse mappings. These diverse mappings are learned before the two main mappings are trained and are kept constant during the training of the two main mappings. Though I am not familiar with BLEU scores and though I didn't grasp some of the details in 3.1, the algorithm yielded consistent improvement over the given baselines. The author's included many different experiments to show this.\n\nThe idea that multiple mappings will produce better results than a single mapping is reasonable given previous results on ensemble methods. \n\nFor the language translation results, were there any other state-of-the-art methods that the authors could compare against? It seems they are only comparing against their own implementations.\n\nObjectively saying that the author's method is better than CycleGAN is difficult. How does their ensemble method compare to just their single-agent dual method? Is there a noticeable difference there?\n\nMinor Comments:\n\nDual-1 and Dual-5 are introduced without explanation.\n\nPerhaps I missed it, but I believe Dan Ciresan's paper \"Multi-Column Deep Neural Networks for Image Classification\" should be cited.\n\n### After reading author feedback\nThank you for the feedback. After reading the updated paper I still believe that 6 is the right score for this paper. The method produces better results using ensemble learning. While the results seem impressive, the method to obtain them is not very novel; nonetheless, I would not have a problem with it being accepted, but I don't think it would be a loss if it were not accepted.", "Dear AnonReviewer1,\n\nBefore the final decision concludes, do you have further questions regarding our rebuttal and updated paper? Our paper revision includes reorganization of the introduction to our framework (Section 3.1), the additional experiments on WMT18 English->German translation challenge (Section 3.4), the additional study on diversity of agents (Appendix A), and quantitative evaluation on image-to-image translations (Section 4.3 and 4.4) following your suggestions.\n\nIn particular, we would like to highlight that: \n(1) The calibration of BLEU score: We would like to point out that our improvement over the previous state-of-the-art baselines is substantial. For example, on the WMT2014 En->De translation task, the performance of the transformer baseline is 28.4 BLEU score [1] (our baseline matches this performance). The improvement over this baseline is 0.61 in [2], 0.8 in [3] (1.3 BLEU improvement over the re-implemented 27.9 baseline in [3]) and 0.9 in [4], while ours is 1.65 BLEU score. \n(2) The baselines: As we explained in the previous response, we are using the state-of-the-art transformer as our backbone model, and comparing against all the relevant algorithms including KD, BT and the traditional 2-agent dual learning (Dual-1). Moreover, we also show on WMT18 En->De challenge that our method can further improve the state-of-the-art model trained with extensive resources (Section 3.4 of our updated paper).\n\nWe hope our rebuttal and paper revision could address your concerns. We welcome further discussion and are willing to answer any further questions.\n\n[1] Vaswani, Ashish, et al. \"Attention is all you need.\" Advances in Neural Information Processing Systems. 2017.\n[2] He, Tianyu, et al. 
\"Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation\". Advances in Neural Information Processing Systems. 2018. \n[3] Shaw, Peter, Jakob Uszkoreit, and Ashish Vaswani. \"Self-Attention with Relative Position Representations.\" In Proc. of NAACL, 2018.\n[4] Anonymous. Universal transformers. In Submitted to International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7. Under review as a conference paper at ICLR 2019\n", "Summary:\nThe authors propose an extension of dual learning (DL). In DL, one leverages the duality of a dataset, by predicting both forward and backward, e.g. English to German, and German back to English. It’s been shown that training models using this duality is beneficial. This paper extends DL by introducing multiple models for the forward and backward, and using their output to regularise the training of the two main agents.\n\nThe authors show that this setup improves on the SotA, at only a training computation expense (inference/test time remains the same).\n\nReview:\nThe paper shows extensive experimentation and improves the previous result in all cases. The proposed method is a straightforward extension and can be readily implemented and used.\n\nI have difficulty understanding equation 8 and the paragraph below. It seems like the authors use an equal weighting for the additional agents, however they mention using Monte Carlo to “tackle the intractability resulting form the summation over the exponentially large space y”. According to the paper the size of y is the dataset, is it exponentially large? Do the authors describe stochastic gradient descent? Also what do the authors mean by offline sampling? Do they compute the targets for f_0 and g_0 beforehand using f_1…n and g_1…n?\n\nThe results mention computational cost a few times, I was wondering if the authors could comment on the increase in computational cost? e.g. how long does “pre-training” take versus training the dual? Can the training of the pre-trained agents be parallelised? Would it be possible to use dropout to more computationally efficient obtain the result of an ensemble?\n\nIn general I think the authors did an excellent job validating their method on various different datasets. I also think the above confusions can be cleared up with some editing. However the general contribution of the paper is not enough, the increase in performance is minimal and the increased computational cost/complexity substantial. I do think this is a promising direction and encourage the authors to explore further directions of multi-agent dual learning.\n\nTextual Notes:\n- Pg2, middle of paragraph 1: “which are pre-trained with parameters fixed along the whole process”. This is unclear, do you mean trained before optimising f_0 and g_0 and subsequently held constant?\n- Pg2, middle last paragraph: “typical way of training ML models”. While the cross entropy loss is a popular loss, it is not “typical”.\n- Pg 3, equation 4, what does “briefly” mean above the equal sign?\n- Perhaps a title referring to ensemble dual learning would be more appropriate, given the possible confusion with multi agent reinforcement learning. \n\n\n################\nRevision:\n\nI would like to thank the authors for the extensive revision, additional explanations/experiments, and pointing out extensive relevant literature on BLUE scores. The revision and comments are much appreciated. 
I have increased my score from 4 to 6.", "Dear Authors,\n\nThank you for pointing out the extensive relevant literature. I had indeed underestimated the improvement in BLUE score and will update my score to a 6.", "Dear AnonReviewer3:\n\nThanks for your response to our rebuttal. However, it is unclear to us why you believe that the general contribution of the paper remains too small for ICLR because of the subjectivity of your criticism. What does \"too small\" mean exactly? \n\nOur best interpretation of your concern is \"the increase in performance is minimal and the increased computational cost/complexity substantial\". While this is a legitimate concern, we do not believe the concern is sufficiently substantial to justify a rating of the paper below the acceptance threshold for the following reasons: \n\n1. \"The increase in performance is minimal \": \nWhile the performance improvement may appear to be small, it is known that the improvement of BLEU score is difficult, and the magnitude of improvement from our methods is better than or at least comparable to the reported improvement on this task by recent papers published in major venues such as NeurIPS. For example, on the WMT2014 En->De translation task, the performance of the transformer baseline is 28.4 BLEU score [1] (our baseline matches this performance). The improvement over this baseline is 0.61 in [2], 0.8 in [3] (1.3 BLEU improvement over the re-implemented 27.9 baseline in [3]) and 0.9 in [4], while ours is 1.65 BLEU score. We perform paired bootstrap sampling [5] for significance test using the script in Moses [6]. Our improvement over the baselines are statistically significant with p < 0.01 across all machine translation tasks.\nMoreover, as we pointed out in the previous response, our method has achieved the best performance so far on IWSLT 2014 De->En and WMT 2018 En->De. Our main point here is that our experimental results have provided solid evidence that the proposed new method has clearly advanced the state of the art on multiple tasks. \n\n2. \"The increased computational cost/complexity substantial\": \nAs we already explained in our previous response, the computational complexity can be further reduced (there are potentially other ways to further improve efficiency), so this is not an *inherent* deficiency of the proposed new approach, but rather interesting new research questions that can be further investigated in the future. Thus in this sense, our work has also opened up some new interesting research directions. \n\nWe welcome further discussion and are willing to answer any further questions. \n\n\n[1] Vaswani, Ashish, et al. \"Attention is all you need.\" Advances in Neural Information Processing Systems. 2017.\n[2] He, Tianyu, et al. \"Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation\". Advances in Neural Information Processing Systems. 2018. \n[3] Shaw, Peter, Jakob Uszkoreit, and Ashish Vaswani. \"Self-Attention with Relative Position Representations.\" In Proc. of NAACL, 2018.\n[4] Anonymous. Universal transformers. In Submitted to International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7. Under review as a conference paper at ICLR 2019\n[5] Koehn, Philipp. \"Statistical significance tests for machine translation evaluation.\" Proceedings of the 2004 conference on empirical methods in natural language processing. 
2004.\n[6] https://github.com/moses-smt/mosesdecoder/blob/master/scripts/analysis/bootstrap-hypothesis-difference-significance.pl\n", "\nDear Reviewers:\n\nThanks for the valuable comments and discussion. \n\nOur paper revision seeks to clarify the introduction to our framework and strengthen the experiment results, which includes: (1) reorganization and clarification in Section 3.1; (2) the additional study on diversity of agents (Appendix A); (3) the additional experiment results on WMT18 English->German translation challenge (Section 3.4); and (4) quantitative evaluation on image-to-image translations (Section 4.3 - 4.4).\n\nIn particular, we would like to highlight our contribution in this work. We are the first to incorporate multiple agents into the dual learning framework, extending traditional dual learning to a much more general concept. The multi-agent dual learning framework, which is generally applicable to many different tasks, has significantly pushed the frontier towards dual learning. In particular, we show how the proposed general framework can be adapted to the machine translation and image translation tasks. The method is non-trivial yet very easy to apply and has been proved to be very powerful across many different translation tasks with our extensive empirical studies:\n \n1) Our proposed framework has achieved broad success: we have evaluated our method on five image-to-image translation tasks, and six machine translation tasks across different language pairs, different dataset scale (small dataset like IWSLT and large dataset like WMT) and different machine learning setting (supervised and unsupervised). Our method demonstrates consistent and substantial improvements over the standard baseline and traditional (two-agent) dual learning method. \n \n2) The multi-agent dual learning framework also pushes forward the state-of-the-art performances. On IWSLT 2014 German->English translation, we set a new record of a 35.44 BLEU score. On the recent WMT 2018 English ->German translation, we achieve the state-of-the-art performance of a 49.61 BLEU score, outperforming the challenge champion by over 1.3 BLEU score.\n\nWe believe we have made decent contributions in this paper based on all these above. We welcome further discussion and are willing to answer any further questions. \n\nThanks,\nThe Authors", "Summary\n\nThe paper proposes to modify the \"Dual Learning\" approach to supervised (and unsupervised) translation problems by making use of additional pretrained mappings for both directions (i.e. primal and dual). These pre-trained mappings (\"agents\") generate targets from the primal to the dual domain, which need to be mapped back to the original input. It is shown that having >=1 additional agents improves training of the BLEU score in standard MT and unsupervised MT tasks. 
The method is also applied to unsupervised image-to-image \"translation\" tasks.\n\nPositives and Negatives\n+1 Simple and straightforward method with pretty good results on language translation.\n+2 Does not require additional computation during inference, unlike ensembling.\n-1 The mathematics in section 3.1 is unclear and potentially flawed (more below).\n-2 Diversity of additional \"agents\" not analyzed (more below).\n-3 For image-to-image translation experiments, no quantitative analysis whatsoever is offered so the reader can't really conclude anything about the effect of the proposed method in this domain.\n-4 Talking about \"agents\" and \"Multi-Agent\" is a somewhat confusing given the slightly different use of the same term in the reinforcement literature. Why not just \"mapping\" or \"network\"?\n\n-1: Potential Issues with the Maths.\n\nThe maths is not clear, in particular the gradient derivation in equation (8). Let's just consider the distortion objective on x (of course it also applies to y without loss of generality). At the very least we need another \"partial\" sign in front of the \"\\delta\" function in the numerator. But again, it's not super clear how the paper estimates this derivative. Intuitively the objective wants f_0 to generate samples which, when mapped back to the X domain, have high log-probability under G, but its samples cannot be differentiated in the case of discrete data. So is the REINFORCE estimator used or something? Not that the importance sampling matter is orthogonal. In the case of continuous data x, is the reparameterization trick used? This should at the very least be explained more clearly.\n\nNote that the importance sampling does not affect this issue.\n\n-2: Diversity of Agents.\n\nAs with ensembles, clearly it only helps to have multiple agents (N>2) if the additional agents are distinct from f_1 (again without loss of generality this applies to g as well). The paper proposes to use different random seeds and iterate over the dataset in a different order for distinct pretrained f_i. The paper should quantify that this leads to diverse \"agents\". I suppose the proof is in the pudding; as we have argued, multiple agents can only improve performance if they are distinct, and Figure 1 shows some improvement as the number of agents are increase (no error bars though). The biggest jump seems to come from N=1 -> N=2 (although N=4 -> N=5 does see a jump as well). Presumably if you get a more diverse pool of agents, that should improve things. Have you considered training different agents on different subsets of the data, or trying different learning algorithms/architectures to learn them? More experiments on the diversity would help make the paper more convincing.", "Summary: our response includes (1) Clarification on Equation 8 and its descriptions; (2) Explanations on computational cost; (3) Clarification on contribution and (4) Discussion on controlling complexity.\n \n** Equation 8 and its descriptions **\nWe apologize for the confusions with equation 8. We have reorganized Section 3.1 in the update paper. To answer your questions:\n1. Space Y: Space \\mathcal{Y} refers to the collection of all possible sentences of the Y domain language, instead of just the dataset (denoted by D_y, where we have D_y \\in \\mathcal{Y}). That's why it could be exponentially large.\n2. Offline sampling: We do offline sampling by sampling all the x_hat and y_hat with f_i and g_i respectively in advance (for i>=1). 
We reorganized Section 3.1 and Algorithm 1 to more clearly explain how to estimate the gradients and do the offline sampling. \n \n** Explanations on Computational Cost **\nThe computational cost refers to GPU time for training. Although pre-training can be parallelized, the total GPU time will not be reduced. For example, on WMT14 En<->De task, it takes 40 GPU days (5 days on 8 GPU) to train one model (agent). Pre-training more agents takes more GPU time with either more GPUs to train in parallel or longer training time. This is what we mean by \"increased computational cost\" with more agents. \nHowever, as is shown from our experiments, we can obtain significant improvements over the strong baseline models with multiple but not too much agents (e.g. with n=3, which brings tolerable increase in computational cost yet substantial gain). Note that we do not increase the computational cost during inference.\n \n** Contribution & Improvement **\nWe propose a new multi-agent dual learning framework that leverages more than one primal models and dual models in the learning system. Our framework has demonstrated its effectiveness on multiple machine translation and image translation tasks:\n1. We work on six NMT tasks to evaluate our algorithm (see Section 3). Our improvement over the strong baselines with the state-of-the-art transformer model is not minimal. As can be seen from the recent literature in NMT [2][3], transformer is a powerful and robust model, and improving BLEU by 1 point over such strong baseline is generally considered as a non-trivial progress. Our method yields consistent and substantial improvement across all the benchmark datasets.\n2. Our method is capable of further improving the state-of-the-art model. We work on WMT18 English-to-German translation tasks, and achieve a 49.61 BLEU score, which outperforms the champion system by 1.31 point and sets a new record on this task (see Table 4 in Section 3.4 of our updated paper).\n3. Our method also works for unsupervised image generation. We achieve consistent improvements over CycleGAN quantitatively and qualitatively (See Section 4).", "** Controlling Complexity **\nIn this paper, we focus on demonstrating the effectiveness of our proposed method, while the issue of efficiency is not yet well explored. We agree with you that training efficiency is indeed also a very important issue. Setting a reasonable number of agents as we did in the paper is one way to control the complexity within a tolerable level while obtaining substantial gain. \n\nAccording to your comments, we further present a simple yet effective strategy to minimize the training complexity without too much loss in performance -- by generating different agents from a single run with warm restart. Specifically, we work with the following two settings:\n\n1. Warm restart by learning rate schedule. \n(a) Setting: We employ the warm restart strategy in [1], where the warm restart is emulated by increasing the learning rate. Specifically, learning rate starts from an initial value L, then decays with a cosine annealing. Once a cyclic iteration is reached, the learning rate is increased back to the initial value and then followed with cosine decay. At the end of each cycle where the learning rate is of the minimal value, the model is approximately a local optimal. Thus, we can use multiple different such local optima as our agents.\n (b) Pre-training Cost: Training one agent on IWSLT takes 3 days on 1 GPU (i.e. 3 GPU days). 
Thus, for Dual-5 model which involves 4 additional pairs of agents, the total pre-training cost in our original way through independent runs is 4 (pairs) * 2 (directions: De->En and En->De) * 3, in total 24 GPU days. With the new learning rate schedule, we can obtain the 4 pairs of agents with a single run which takes 2 (directions: De->En and En->De) * 3, in total 6 GPU days. Such a method is three times more efficient than the original way.\n(c) Performance: With this strategy, we are able to achieve 35.07 and 29.40 BLEU with Dual-5 on IWSLT De->En and En->De respectively. Although not as good as our original method with higher complexity (e.g., 35.44 BLEU in De->En and 29.52 BLEU in En->De), such light-weighted version of our method is still able to outperform the baselines with large margin for over 1 BLEU score with minimal increase in training cost.\n\n2. Warm restart with different random seeds and training subsets. \n(a) Setting: We first train a model to a stage that the model is not converged but has relatively good performance. We then use this model as warm start, and train different agents with different iteration over the dataset and different subsets. This strategy intuitively works better with larger dataset. We present results in WMT En<->Fr translation. \n(b) Pre-training Cost: Training one agent on WMT En<->Fr dataset takes 7 days on 8 GPUs, in total 56 GPU days. For Dual-3 with 2 additional pairs of agents, the total pre-training cost is 2 * 2 * 56 = 224 GPU days. With the above strategy, we managed to decrease the cost into 2 * 56 + 2 * 8 = 128 GPU days.\n(c) Performance: We are able to achieve 43.87 BLEU and 40.14 with Dual-3 on WMT En-Fr and Fr-En respectively, which improves 1.37 and 1.74 points over the baselines (42.5 for En->Fr and 38.4 for Fr->En).\n\nWith the above two strategies, we demonstrate that our framework is also capable of improving performance with large margin while introducing minimal computational cost. We will definitely further study the best strategy to minimize the training complexity while maintaining the improvements in our future work.\n\n** Textual Notes **\nThanks for pointing it out. We edit the writing in our updated paper. \nAlthough with the same term, the \"multi-agent\" in this paper has no relationship with multi-agent reinforcement learning. To avoid further confusion in the discussion period, currently we decide not to change the paper title during rebuttal.\n\nWe hope the above explanations could address your concerns. Please kindly check our updated paper with clarification and new experimental results.\nThanks for your time and valuable feedbacks.\n\n[1] Loshchilov, Ilya, and Frank Hutter. \"Sgdr: Stochastic gradient descent with warm restarts.\" In Proc. of ICLR, 2017.\n[2] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proc. of NAACL, 2018.\n[3] Anonymous. Universal transformers. In Submitted to International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzdRiR9Y7. Under review as a conference paper at ICLR 2019\n", "Thank you for your review and valuable comments!\n\nSummary: our response includes: (1) Clarification on language translation baselines; (2) Discussion on image translation evaluation; (3) Reference and clarification.\n \n** Language Translation Baselines **\n1. 
For the baseline models reported: \n1.1) We use the transformer model with \"transformer_big\" setting [1], which is a strong baseline that outperforms almost all previously popular NMT models based on CNN [2] and LSTM [3]. Transformer is the state-of-the-art NMT architecture. Our numbers of the baseline transformer model match the results reported in [1].\n1.2) In addition to the standard baseline models, we also compare our method against all the relevant algorithms including knowledge distillation (KD) and back translation (BT).\n1.3) As can be seen in many well-known and recent NMT works ([4], [5]), it is a common practice to use transformer as the robust baseline model. Furthermore, it is also shown from these works that it is hard to improve over the transformer baseline, and 0.5-1 BLEU score improvement is already considered substantial.\n\n2. We further add newly obtained results on the WMT18 challenge. We compare our method with both the champion translation system MS-Marian (WMT18 En->De challenge champion). Our method achieves the state-of-the-art result on this task. \n---------------------------------------------------------------------------\n WMT En->De 2016 2017 2018\n---------------------------------------------------------------------------\nMS-Marian (ensemble) 39.6 31.9 48.3\nOurs (single) 40.68 33.47 48.89\nOurs (ensemble) 41.23 34.01 49.61\n---------------------------------------------------------------------------\nPlease refer to Section 3.4 \"Study on generality of the algorithm\" for more details and Table 4 for full results in our updated paper.\n \n** Image Translation Evaluation **\nFor image-to-image translation tasks, we further add two quantitative measures: (1) We use the Fréchet Inception Distance (FID) [6], which measures the distance between generated images and real images to evaluate the painting to photos translation. (2) We use \"FCN-score\" evaluation on the cityscape dataset following [7]. The results are reported in Table 6 and Table 7 respectively. Multi-agent dual learning framework can achieve better quantitative results than the baselines.\n\nWe are not sure what you meant by “How does their ensemble method compare to just their single-agent dual method?”. The standard CycleGAN model (baseline) already leverages both primal and dual mappings, which is equivalent to our “Dual-1” model in NMT experiments, i.e., the dual method with only one pair of agents f_0 and g_0. Our model involves two additional pairs of agents (f_1 and g_1, f_2 and g_2) during training. Unlike ensemble learning, only one agent (f_0 for forward direction, or g_0 for backward direction) is used during inference.\n\n** Reference **\nThanks for pointing a reference paper \"Multi-Column Deep Neural Networks for Image Classification\" (briefly, MCDNN) and we have added reference to it (Section 4).\nAlthough MCDNN also uses multiple agents (i.e., several columns of deep neural networks), it differs from our model in two aspects: (1) Our work leverages the duality of a pair of dual tasks while this paper does not; (2) In an MCDNN framework, during the training phase, all the columns are updated by winner-take-all rule; and during inference, all columns work like an ensemble model through weighted average. In comparison, we only update one primal and one dual agent during training, and use one agent for inference.\n\n** Clarity **\nThanks for pointing out that our original introduction to the names of baselines and models is not very clear. 
Please kindly refer to the first paragraph of Section 3.3.\n \nYou may check our updated paper for the clarifications and new experimental results.\nThanks for your time and feedback.\n\n[1] Vaswani, Ashish, et al. \"Attention is all you need.\" In NIPS. 2017.\n[2] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional Sequence to Sequence Learning. In Proc. of ICML, 2017.\n[3] Wu, Yonghui, et al. \"Google's neural machine translation system: Bridging the gap between human and machine translation.\" arXiv preprint arXiv:1609.08144 (2016).\n[4] Chen, Mia Xu, et al. \"The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation.\" In Proc. of the ACL, 2018.\n[5] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proc. of NAACL, 2018.\n[6] Heusel, Martin, et al. \"Gans trained by a two time-scale update rule converge to a local nash equilibrium.\" In NIPS, 2017.\n[7] Isola, Phillip, et al. \"Image-to-image translation with conditional adversarial networks.\" In CVPR, 2017", "Thank you for your comments and suggestions!\n\nSummary: our response includes (1) Clarification on math equations; (2) Analysis on diversity of additional agents; (3) Quantitative analysis for image translation.\n\n** Clarification on Mathematics in Section 3.1 **\nWe apologize for the confusion in Section 3.1. We have reorganized this section, as shown in our updated paper. For your questions:\n1. About equation 8, indeed there is a typo, and there should be a \"partial\" sign in front of the \"\\delta\" function in the numerator. Thanks for pointing this out.\n2. The details of derivative estimation can be found in Section 3.1 (especially Equations 9 and 10 in our updated version).\n \n** Study on diversity of agents **\n1. You are right. We obtained distinct \"agents\" f_i and g_i through multiple independent runs with different random seeds and different input orders of the training samples. As far as we know, there's no common quantitative metric to measure the diversity among models in NMT. But we agree with you that intuitively, more diversity among agents leads to greater improvements. \n\n2. Following your suggestions, we add a study on the diversity of agents (presented in Appendix A of the updated paper). We design three groups of agents with different levels of diversity: (E1) Agents with the same network structure trained by independent runs, i.e., what we use in Section 3.3; (E2) Agents with different architectures and independent runs; (E3) Homogeneous agents from different iterations, i.e., the checkpoints obtained at different (but close) iterations from the same run. We evaluate the above three settings on the IWSLT2014 De<->En dataset. The diversity of the above three settings would intuitively be (E2)>(E1)>(E3). We present full results in Figure 4 (Appendix A), where the BLEU scores with the Dual-5 model are: \n\n--------------------------------------------------------\n E1 E2 E3\n--------------------------------------------------------\nEn -> De 35.44 35.56 34.97\nDe -> En 29.52 29.58 29.28\n--------------------------------------------------------\n\nFrom the above results, we can see that diversity among agents indeed plays an important role in our method. There are, of course, many other ways to introduce more diversity, including using different optimization strategies, or training with different subsets as you suggested. 
All of these can potentially bring further improvements to our framework, yet are not the focus of this work. From the current studies, we show that our algorithm is able to achieve substantial improvement with a reasonable level of diversity. We leave more comprehensive studies on diversity to future work.\n\nPlease kindly refer to Appendix A for more detailed results.\n\n** Quantitative analysis for image translation **\nThanks for your suggestions. We add two quantitative measures on image translation tasks: (1) We use the Fréchet Inception Distance (FID score) [1], which measures the distance between generated images and real images to evaluate the painting to photos translation. (2) We use \"FCN-score\" evaluation on the cityscape dataset following [2]. The results are reported in Table 6 and Table 7 respectively. Multi-agent dual learning framework can achieve better quantitative results than the baselines.\n\n** Term usage of \"multi-agents\" **\nAlthough with the same term, the \"multi-agent\" or \"agent\" in this paper has no relationship with multi-agent reinforcement learning. You are right in that the term \"agent\" in our context refers to \"mapping\" or \"network\". To avoid further confusion in the discussion period, currently we decide not to change the term usage throughout the paper during rebuttal; instead, we will change the term after the acceptance/rejection decision.\n\nYou can check our updated paper with clarification and new experimental results.\nThanks for your time and valuable feedbacks.\n\n[1] Heusel, Martin, et al. \"Gans trained by a two time-scale update rule converge to a local nash equilibrium.\" Advances in Neural Information Processing Systems. 2017.\n[2] Isola, Phillip, et al. \"Image-to-image translation with conditional adversarial networks.\" In CVPR, 2017\n", "Thanks for the information. Here are our settings and some initial observations:\n \n** Settings **\n  -  Hyperparameters: We set 'hparams_set=transformer_base', and experiment with the batch size of 4096 (default), 6400 (to approximate 320 sentences) and 320 tokens, and dropout rate of 0.1 (default) and 0.4 (since severe overfitting observed). The rest hyperparameters use the default value in 'transformer_base'.\n  -  Optimization: We use the Adam optimizer with the same setting described in the paper (section 3.2 Optimization and Evaluation). \n  -  Evaluation: We use beam search with a beam size of 6 (paper section 3.2) in inference and use multi-bleu.pl to evaluate the tokenized BLEU.\n\nWe run the baseline and our algorithm with 5 agents (Dual-5) with the above settings. For our multi-agent model, we still use the same agents as the paper (transformer_small with 4 blocks) for sampling. \nThe models are implemented with tensor2tensor v1.2.9 and trained on one M40 GPU.  \n \n** Results **\nBelow are the initial results. We are still working on the experiments.\n \n\tTable 1. With dropout rate of 0.1 (default)\n\t-------------------------------------------------------------------------------\n\tBatch Size                   4096                6400               320            \n\t-------------------------------------------------------------------------------\n\tBaseline                      32.24               32.22             2.17             \n\tOurs (Dual-5)             34.59               34.58             3.65             \n\t-------------------------------------------------------------------------------\n\t \n\tTable 2. 
With dropout rate of 0.4 \n\t-------------------------------------------------------------------------------\n\tBatch Size                   4096                6400               320            \n\t-------------------------------------------------------------------------------\n\tBaseline                      34.40               34.43             2.37             \n\tOurs (Dual-5)             35.12               35.45             3.91             \n\t-------------------------------------------------------------------------------\n \nWe have the following observations:\n\t1) The default 'transformer_base' setting appears to suffer from severe overfitting (Table 1). We tune the dropout ratio and present results with dropout=0.4 in Table 2, where we indeed obtain better baseline results than our baselines with 'small' setting reported in the paper. The stronger baseline achieves a 34.43 BLEU score (with batch size 6400).\n\t2) We notice that with a batch size of 320 tokens (as the setting you suggested), the model is not well optimized with either dropout ratio. We are curious whether you are also using other different hyperparameters or optimization settings. We would be happy to re-evaluate our approach under the stronger baseline setting.\n\t3) From the results we have so far, our algorithm can still outperform the stronger baseline with a large margin, achieving 35.45 BLEU score (with batch size 6400).\n \nWe will keep working on experiments of IWSLT De-En under the 'base' settings and update our findings. ", "Thanks for your reply, I used 320 tokens to obtain a better result compared to the default settings.", "Thanks for your comments. \n\nFor IWSLT De-En, we use the 'transformer_small' setting (in paper section 3.2), in which the batch size is set to be 4096 tokens. We use multi-bleu.pl to evaluate the tokenized BLEU. \n\nThanks for providing a stronger baseline and we are working on it. To confirm, by 'batch size=320', are you referring to 320 tokens or sentences? ", "What's the batch size of your baseline system for IWSLT De-En? And which evaluation script do you use to measure the BLEU score?\n\nI run the T2T with transform_base parameters(batch size = 320), and achieve a BLEU score of 34.38, which is higher than your baseline (33.42). I use the multi_bleu.pl and tokenize the English and German using Moses toolkit." ]
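The warm-restart schedule referred to in the responses above (the learning rate decays with cosine annealing and is reset to its initial value at the start of each cycle, with end-of-cycle snapshots reused as additional agents) can be sketched in a few lines of Python. This is only an illustration of the schedule from Loshchilov & Hutter (2017) as summarized in the discussion, not the authors' implementation; the function name, the fixed cycle length, and the absence of a cycle-length multiplier are assumptions made for brevity.

import math

def sgdr_lr(step, cycle_len, lr_max, lr_min=0.0):
    # Cosine annealing with warm restarts: within each cycle the learning rate
    # decays from lr_max to lr_min, then jumps back to lr_max for the next cycle.
    # Checkpoints taken near the end of a cycle (learning rate close to lr_min)
    # are the approximate local optima used as extra agents in the discussion above.
    t = (step % cycle_len) / cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

Reading several such checkpoints out of a single run is what reduces the reported pre-training cost (e.g., from 24 to 6 GPU days on IWSLT for Dual-5) compared to fully independent runs.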
[ 6, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_HyGhN2A5tm", "HklXxIdqn7", "iclr_2019_HyGhN2A5tm", "BklUtMfnCX", "HkeVjIX9hX", "iclr_2019_HyGhN2A5tm", "iclr_2019_HyGhN2A5tm", "HkeVjIX9hX", "HkeVjIX9hX", "HklXxIdqn7", "H1l6suNjhX", "Hkxwf3VKnm", "Hkxo4B28n7", "Hylw94tIh7", "iclr_2019_HyGhN2A5tm" ]
iclr_2019_HyM7AiA5YX
Complement Objective Training
Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although being a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
accepted-poster-papers
This paper proposes adding a second objective to the training of neural network classifiers that aims to make the distribution over incorrect labels as flat as possible for each training sample. The authors describe this as "maximizing the complement entropy." Rather than adding the cross-entropy objective and the (negative) complement entropy term (since the complement entropy should be maximized while the cross-entropy is minimized), this paper proposes an alternating optimization framework in which first a step is taken to reduce the cross-entropy, then a step is taken to maximize the complement entropy. Extensive experiments on image classification (CIFAR-10, CIFAR-100, SVHN, Tiny Imagenet, and Imagenet), neural machine translation (IWSLT 2015 English-Vietnamese task), and small-vocabulary isolated-word recognition (Google Commands), show that the proposed two-objective approach outperforms training only to minimize cross-entropy. Experiments on CIFAR-10 also show that models trained in this framework have somewhat better resistance to single-step adversarial attacks. Concerns about the presentation of the adversarial attack experiments were raised by anonymous commenters and one of the reviewers, but these concerns were addressed in the revision and discussion. The primary remaining concern is a lack of any theoretical guarantees that the alternating optimization converges, but the strong empirical results compensate for this problem.
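To make the objective and the alternating scheme summarized above concrete, the sketch below computes a complement-entropy term over the incorrect classes and alternates it with the usual cross-entropy update in PyTorch-style Python. It is an illustration assembled from the abstract, the meta-review, and the reviews below, not the authors' code; the function names are invented here, and the exact normalization used in the paper's Eq. (3) may differ from this simplified form.

import torch
import torch.nn.functional as F

def complement_entropy(logits, target, eps=1e-7):
    # Entropy of the predicted distribution restricted to the complement
    # (incorrect) classes, averaged over the batch; maximizing it flattens
    # the probabilities assigned to the non-ground-truth classes.
    probs = F.softmax(logits, dim=1)                     # (N, K)
    p_g = probs.gather(1, target.unsqueeze(1))           # (N, 1) ground-truth probability
    comp = probs / (1.0 - p_g + eps)                     # y_ij / (1 - y_ig)
    mask = torch.ones_like(probs).scatter_(1, target.unsqueeze(1), 0.0)
    entropy = -(comp * torch.log(comp + eps) * mask).sum(dim=1)
    return entropy.mean()

def cot_iteration(model, optimizer, x, y):
    # Step 1: minimize the primary objective (softmax cross entropy).
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    optimizer.step()
    # Step 2: maximize the complement entropy (minimize its negative),
    # which costs one extra forward/backward pass per iteration.
    optimizer.zero_grad()
    (-complement_entropy(model(x), y)).backward()
    optimizer.step()

As discussed in the reviews below, maximizing this term drives each ratio y_ij / (1 - y_ig) toward 1 / (K - 1), i.e. it flattens the predicted probabilities of the incorrect classes rather than directly controlling the ground-truth probability.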
test
[ "rJe3xlA2yN", "r1eubJAnJN", "BkxPea7hyV", "HygCehX3yN", "r1lArwq9Am", "Syx-XwqcCQ", "H1lJy06u07", "rkg-IxqSAQ", "BkxI3CtBCQ", "S1ebZ7sRam", "HJlWte4WR7", "SyeTbV6eA7", "BkeB3kugCQ", "rJeeh8J5pm", "S1eR7XqFaQ", "S1eWWmcKaQ", "rJg8OW9FT7", "HJlOtAKta7", "r1ejg15YaQ", "Syx8_6U63X", "B1lv2Bdph7", "r1gc8uIw27" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "You are totally right. We did negate the complement entropy term (and added it to the primary objective) for maximizing complement entropy. We are sorry about the confusion and we will update the final manuscript to make this more clear: minimizing cross-entropy and maximizing complement entropy (e.g., in Algorithm 1).", "Thanks for the comment. We summed the cross-entropy with the normalized complement entropy (Eq.3), and the corresponding advantages were discussed in Section 3.1.", "Also, since the objective is to minimize cross-entropy and maximize complement entropy, I assume that when you tested the unified objective you actually negated the complement entropy term.\n", "Did you sum the cross-entropy with the complement entropy (Eq. 2) or the normalized complement entropy (Eq. 3)?\n", "Thank you for the ideas. Yes, we indeed directly added the two objectives together in our experiments. We agree that introducing two additional weights to merge the primary and complement objectives is a good idea, and with proper tuning, this approach may further improve the model's performance and reduce the training time. We aimed to design a methodology with fewer hyper-parameters, so we didn't explore this direction, and our current proposed method works in many scenarios, as shown in our experiments. With these promising results, we will continue to explore the approach of merging the two objectives, and build connections between these two approaches, in our immediate future work.\n\nRegarding reporting the increase in training time, we have added the information of training time in section 2.2 (on the top of page 4).\n", "Thanks for your clarification. Based on all of the experiment results we have so far, such as loss gap values, we are only able to claim that models trained by COT generalize better (i.e., better performance on separate test sets). While achieving better performance on separate test sets is a good indicator that COT does not produce models that overfit, further experiments and theoretical investigations on whether COT can be a rigorous option to guard against overfitting is left as a future work.", "We thank all reviewers and the anonymous for the constructive comments. We have updated the manuscript in Abstract, Section 2, Section 3.4, Conclusion and Appendix A to address your feedback and concerns. Here we provide a summary of these updates:\n\n(1) For AnonReviewer3’s main suggestion of forming adversarial attacks using “both” gradients from both primary and complement objectives, we have designed and conducted the additional FGSM (single-step) white-box experiments. The experiments set adversarial perturbations to be generated based on the sum of the primary gradient and the complement gradient (i.e., the gradient calculated from complement objective), while the results indicate that COT is more robust to single-step adversarial attacks under standard settings [1].\n\n(2) To provide more precise claim, we update the original claim “robustness to adversarial attacks” into “robustness to single-step adversarial attacks” according to (1). Additionally, more details of the original transfer attack experiments are provided in the manuscript.\n\n(3) We have added a description about the increase of training time and corrected typos pointed out by the reviewers in the manuscript.\n\n[1] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. “Explaining and harnessing adversarial examples”. In ICLR’15.", "This is interesting. 
From the response of the authors, I presume that the authors have simply added the two objectives together. However, it is more common to merge multiple objectives by premultiplying them with some weights. Since there are only two objectives, these two weights could be set with some kind of grid search (maybe along with cross-validation). I believe the tables given in the response would then change and the training times would decrease. \n\nPlease report the increase in training time in the manuscript.", "I understand. Unfortunately, the loss gap values in the table do not say much. I apologize for my typo \"complement from overfitting.\" It should be \"complement overfitting.\" To clarify my question, I wonder whether COT can be considered as a complementary or an alternative option against overfitting?\n\n", "Thank you for your comments. We understand that the adversarial attack techniques used here may not be state-of-the-art methods; however, we want to emphasize that the primary goal of this paper is to improve model's accuracy, although experimental results do show that robustness is also one of the benefits of the models trained by COT. \n\nWe agree with the reviewer that transfer adversarial attack is different from the classic settings of adversarial attacks. To verify our method under standard adversarial attacks, we have conducted additional experiments on white-box attack, and provided the results below; the experimental results confirmed that COT is indeed more robust to this type of attacks, and therefore we believe the main conclusion that COT is more robust (compared to baselines) to adversarial attack still holds. We will add these results of the white-box attack into the final version of the paper. Additionally, we will rename the current experiments to “transfer attacks” to avoid confusions. The definition of the transfer attacks can be found in several recent publications [1, 2, 3].\n\nFor the white-box attacks, we conducted the experiments as also suggested by AnonReviewer3. The update is to set adversarial perturbations to be Epsilon * Sign (Primary gradient + Complement gradient). Results indicate that COT is more robust to this type of white-box attacks under standard settings.\n\nTest errors on Cifar10 under FGSM white-box adversarial attacks\n===========================================================\n\t\t\t\t Baseline\t COT\nResNet-110 \t\t\t 62.23% \t\t52.72%\nPreAct ResNet-18 65.60% \t\t56.17%\nResNeXt-29 (2×64d) 70.24% \t\t61.55%\nWideResNet-28-10\t 59.39% \t\t55.53%\nDenseNet-BC-121 65.97% \t\t55.99%\n===========================================================\n \nThe reviewer also suggested to try out several recent methods on white-box and black-box attacks. We do agree with the reviewer that it's a great idea. However, since the main focus of the current paper is to improve accuracy, and the manuscript is already close to the page limit, we feel it's better to study this problem in a separate paper. As a matter of fact, we are planning on a follow-up work with the focus on the robustness of the models trained with COT. \n \n[1] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow. “Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.” Arxiv, 2016\n\n[2] Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song. “Delving into Transferable Adversarial Examples and Black-box Attacks.” In International Conference on Learning Representation, 2017.\n\n[3] Wieland Brendel, Jonas Rauber, Matthias Bethge. 
“Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models.” In International Conference on Learning Representation, 2018.", "Thank you for the clarifications on the recent research trends in adversarial attacks as well as your great suggestions on making the claim precise. We will adopt your suggestion and make it clear in the paper that the proposed training objectives make the models more robust to single-step adversarial attacks instead of claiming general robustness. We will use this new statement consistently across our updated version of the paper. ", "I agree with the Nov/19 anonymous comment, and one thing that I'll add is that I think it's worth discussing robustness to the FGSM attack, because it means that the decision boundary is being moved away from the data points, in a certain subset of directions. I think this is different from adversarial robustness in general, which considers perturbations which give maximum error. \n\nIt would be interesting to think about something like \"the volume of the subset of the epsilon-ball around the data points which increases error by k%\" - and then we could claim that some methods reduce that volume without claiming that every single point in the epsilon-ball has low error. ", "I understand this is not the main purpose of your paper, but again, you claim \"we also show that models trained with both primary and complement objectives are more robust to adversarial attacks.\" At present, you simply have not shown that fact.\n\nThank you for running some white-box numbers, but FGSM is unfortunately not sufficient. I hate to appeal to authority to argue this, but see ( https://openreview.net/forum?id=SkgVRiC9Km&noteId=rkxYnt8JpQ&noteId=rkxYnt8JpQ ).\n\nPrior work, and papers under submission this year, make very careful claims with respect to adversarial examples. See for example the Manifold Mixup paper under submission this year that instead writes the correct and honest statement \"Manifold Mixup achieves ... robustness to single-step adversarial attacks\". You should claim only what you can demonstrate.\n\nIt is perfectly fine that you want to only show adversarial robustness as a side-effect of your main work, but you should be accurate in how you phrase what you have shown. There is a big difference between being robust to single-step attacks and transfer attacks, and actually being robust. Hundreds of papers claim the former, very few claim the latter.", "This paper argues in the abstract that \"we also show that models trained with both primary and complement objectives are more robust to adversarial attacks.\"\n\nHowever, in the evaluation section, the authors only attempt a very simple transferability attack: generate adversarial examples on one model, and transfer them to another. This does not imply adversarial robustness, in either the white-box or the black-box setting.\n\nTo argue black-box robustness, the authors should evaluate against more recent black-box attacks such as the Boundary Attack (ICLR'18) or SPSA (ICML'18). Both of these attacks have effectively broken many black-box defenses in the past.\n\nIf the authors wish to argue full white-box adversarial robustness, they should further try optimization-based attacks (Madry et al. 2018, Carlini & Wagner 2017).\n\nAs is, this paper should not claim robustness to adversarial examples: at best, it can claim a 10% improvement in accuracy under transfer attacks.", "\nWe sincerely thank the reviewer for the useful and detailed comments. 
Below we provide explanations for each of your comments or questions. \n\n\n(Q1) End of page 1: \"the model behavior for classes other than the ground truth stays unharnessed and not well-defined\". The probabilities should still sum up to 1, so if the ground truth one is maximized, the others are actually implicitly minimized. No?\n\n(A1) Your understanding is totally correct. We have changed the original text to a more clear statement:\n\n“Therefore, for classes other than the ground truth, the model behavior is not explicitly optimized --- their predicted probabilities are indirectly minimized when ŷ_ig is maximized since the probabilities sum up to 1.”\n\nWe want to thank the reviewer again for crystalizing the manuscript.\n\n\n(Q2) Page 3, sec 2.1: \"optimizing on the complement entropy drives ŷ_ij to 1/(K − 1)\". I believe that it drives each term ŷ_ij /(1 − ŷ_ig ) to be equal to 1/(K-1). Therefore, it drives ŷ_ij to (1 − ŷ_ig)/(K-1) for j!=g.\n\nThis indeed flattens the ŷ_ij for j!=g, but the effect on ŷ_ig is not controlled. In particular this latter can decrease. Then in the next step of the algorithm, ŷ_ig will be maximized, but with no explicit control over the complementary probabilities. There are two objectives that are optimized over the same variable theta. So the question is, are we sure that this procedure will converge? What prevents situations where the probabilities will alternate between two values? \n\nFor example, with 4 classes, we look at the predicted probabilities of a given sample of class 1:\nSuppose after step 1 of Algo 1, the predicted probabilities are: 0.5 0.3 0.1 0.1 \nAfter step 2: 0.1 0.3 0.3 0.3\nThen step 1: 0.5 0.3 0.1 0.1\nThen step 2: 0.1 0.3 0.3 0.3\nAnd so on... Can this happen? Or why not? Did the algorithm have trouble converging in any of the experiments?\n\n(A2) Thanks for the detailed comment. As the reviewer pointed out, “drives ŷ_ij to 1/(K − 1)” was indeed a typo and should be corrected to “drive ŷ_ij /(1 − ŷ_ig) to 1/(K-1)”. We have modified the manuscript correspondingly. Indeed, maximizing complement entropy in Eq(2) only drives “ŷ_ij /(1 − ŷ_ig) to 1/(K-1)”, and therefore in the example provided above, the predicted probabilities after step 2 can be “0.1 0.3 0.3 0.3” or “0.5, (1 - 0.5)/3, (1 - 0.5)/3, (1 - 0.5)/3”, or other values so long as the incorrect classes (ŷ_ij's) receive similar predicted probabilities. According to our observations from the experiments, the probabilities tend to converge to “0.5, (1 - 0.5)/3, (1 - 0.5)/3, (1 - 0.5)/3”. Experiments show that the algorithm does not have trouble converging; the algorithm converges smoothly in all the experiments we have conducted. Again, we thank the reviewer for the insightful comment; studying the theory of COT convergence is an intriguing topic and we leave it as a future work.\n\n\n(Q3) Sec 3.1: \"additional efforts for tuning hyper-parameters might be required for optimizers to achieve the best performance\": Which hyper-parameters are considered here? 
If it is the learning rate, why not use a different one, tuned for each objective?\n\n(A3) Hyper-parameters in this statement indeed refer to the learning rate, and we have modified the statement in the manuscript to avoid confusion; the modified statement is provided below:\n\n“therefore, additional efforts for tuning learning rates might be required for optimizers to achieve the best performance.”\n\nRegarding the second question about tuning learning rates, we have conducted several experiments with different learning rates specifically tuned for each objective. The experimental results show that using the same learning rate for both primary and complement objectives leads to the best performance when Eq(3) is used as the complement objective.\n\n\n(Q4) Sec 3.2: The additional optimization makes each training iteration more costly. How much more? How do the total running times of COT compare to the ones of the baselines? I think this should be mentioned in the paper.\n\n(A4) Yes, one additional backpropagation is required in each iteration when applying COT. On average, the total training time is about 1.6 times longer compared to the baselines. Thanks for the suggestion, and we have included this in the latest manuscript (section 2.2).", "\n(Q5) Sec 3.4: As the authors mention, the results are biased and so the comparison is not fair here. Therefore I wonder about the relevance of this section. Isn't there an easy way to adapt the attacks to the two objectives to be able to illustrate the conjectured robustness of COT? For example, naively having a two steps perturbation of the input: one based on the gradient of the primary objective and then perturb the result using the gradient of the complementary objective?\n\n(A5) Thanks for the comment. We should have made clear that “black box” [1] (rather than “white box”) adversarial attacks are considered in the manuscript. Specifically, we follow the common practice of generating adversarial examples using both FGSM and I-FGSM methods with the gradients from a baseline model; this way, the model trained by COT is actually a “black box” to these attacks. We have modified the manuscript to clarify this part. Also, thanks for the great suggestion of forming adversarial attacks using “both” gradients (from both primary & complement objectives). We are designing and conducting experiments at the moment and will share results when ready.\n\n\nFor the part of secondary comments and typos, we appreciate your thorough reading again and have corrected all these typos according to your suggestions. Meanwhile, in the following, we also provided explanations to your secondary comments.\n\n\n(Q1) Page 3, sec 2.1: \"...the proposed COT also optimizes the complement objective for neutralizing the predicted probabilities...\", using maximizes instead of optimizes would be clearer.\n\n(A1) Thanks for the suggestion. We have reworded the manuscript to “maximizes.”\n\n\n(Q2) In the definition of the complement entropy, equation (2), C takes as parameter only y^hat_Cbar but then in the formula, ŷ_ig appears. Shouldn't C take all \\hat_y as an argument in this case?\n\n(A2) Since the probabilities sum up to one, ŷ_ig can be inferred from y^hat_Cbar. Also, for us, it seems more direct and clear to show that complement entropy is calculated from y^hat_Cbar when C takes y^hat_Cbar as the only argument. Therefore, we incline to keep the orignal formulation. 
If the reviewer has strong preference, please kindly let us know and we are happy to make changes accordingly.\n\n\n(Q3) Algorithm 1 page 4: I find it confusing that the (artificial) variable that appears in the argmin (resp. argmax) is theta_{t-1}\n(resp. theta'_t) which is the previous parameter. Is there a reason for this choice?\n\n(A3) Thanks for the comment. Originally, we want to notify readers that there are two backprops within one iteration. We agree that those symbols are confusing and therefore we have modified the manuscript with those symbols removed.\n\n\n(Q4) Sec 3.2 Figure 4: why is the median reported and not the mean (as in Figure 3, Tables 2 and 3)?\n\n(A4) Thanks for pointing this out. This is a typo and we have already corrected it in the manuscript: median -> mean.\n\n\n(Q5) Sec 3.2, Table 3 and 4: why is it the validation error that is reported and not the test error?\n\n(A5) Thanks for the detailed comment. For a fair comparison, we report the error in the exact same way as the open-sourced repo from the ResNet authors:\nhttps://github.com/KaimingHe/deep-residual-networks.\n\n\n(Q6) Sec 3.3: \"Neural machine translation (NMT) has populated the use of neural sequence models\": populated has not the intended meaning.\n\n(A6) We thank the reviewer for pointing out this typo. We have already corrected it in our manuscript: populated -> popularized\n\n\n(Q7) \"Studying on COT and adversarial attacks..\" --> could be better formulated\n\n(A7) Thanks for the comment again. We have modified the manuscript as follows: \"Studying on the relationship between COT and adversarial attacks…”\n\n\n[1] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. “Mixup: Beyond Empirical Risk Minimization.” In International Conference on Learning Representation, 2018.", "\n(Q1) One small suggestion is that the authors can also make some comments on the connection between the two-step update algorithm (Algorithm 1) with multi-objective optimization. In particular, I would suggest the authors also try some multi-objective optimization techniques apart from the simple but effective heuristics, and see if some Pareto-optimality can be guaranteed and better practical improvement can be achieved.\n\n(A1) We sincerely thank the reviewer for the helpful and constructive suggestion about associating COT with multi-objective optimization. This is really a brilliant idea. As a straight-line future work, we will survey multi-objective optimization techniques, and explore the direction of formulating COT into a multi-objective optimization problem.", "\n(Q4) Why combining the two objectives in a single optimization problem and then solving the resulting problem is not an option instead of the alternating method given in Algorithm 1?\n\n(A4) We are very grateful for this novel idea, and we have conducted several preliminary experiments to explore this idea. Below are the comparisons between (a) the original COT method, and (b) the approach of combining the two objectives into one single objective. The experimental results show that the original COT method works better in almost all cases, and we conjecture that these two methods converge to different local minima. This idea is worth exploring, and we leave it as a straight-line future work. 
\n\nTest error of the state-of-the-art architectures on Cifar10 \n===========================================================\n\t\t\t\t Combining into one objective\t COT\nResNet-110 \t\t\t 7.42% \t\t 6.84%\nPreAct ResNet-18 4.92% \t\t 4.86%\nResNeXt-29 (2×64d) 4.79% \t\t 4.55%\nWideResNet-28-10\t\t4.00% \t\t 4.30%\nDenseNet-BC-121 \t4.64% \t\t 4.62%\n===========================================================\n\nTest error of the state-of-the-art architectures on Cifar100\n===========================================================\n\t\t\t\t Combining into one objective\t COT\nResNet-110 \t\t\t 28.80% \t\t 27.90%\nPreAct ResNet-18 25.30% \t\t 24.73%\nResNeXt-29 (2×64d) 23.20% \t\t 21.90%\nWideResNet-28-10\t\t 21.96% \t\t 20.99%\nDenseNet-BC-121 \t 22.17% \t\t 20.54%\n===========================================================\n\n\n(Q5) How does alternating between two objectives change the training time? Do the authors use backpropagation?\n\n(A5) Yes, we do use backpropagation. One additional backpropagation is required in each iteration when applying COT, and therefore the overall training time is about 1.6 times longer according to our experiments.\n\n\n[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. “Deep Residual Learning for Image Recognition.” In IEEE Conference on Computer Vision and Pattern Recognition, 2016.\n[2] Sergey Zagoruyko, Nikos Komodakis. “Wide Residual Networks\n.” In British Machine Vision Conference, 2016.\n[3] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger, David Lopez-Paz. “Densely Connected Convolutional Networks\n.” In IEEE Conference on Computer Vision and Pattern Recognition, 2017.", "We would like to thank the reviewer for all the insightful feedbacks. Below we provide the explanations for each question or comment raised by the reviewer:\n\n\n(Q1) How is this idea related to regularization? If we increase the regularization parameter, we can attain sparse parameter vectors. \n\n(A1) Conventionally, regularization techniques (e.g., Ridge or Lasso) are applied on the parameter space. We want to point out that all the results reported in the manuscript, for both baselines and models trained by COT, have already used L2-norm regularization on the parameter space, exactly as specified in the original papers (e.g., ResNet [1], WideResNet [2], and DenseNet [3]). In other words, COT is applied on top of the existent of those regularization techniques.\n\nIf your questions haven’t been addressed satisfactorily, please kindly let us know and we will be happy to discuss further.\n\n\n(Q2) Would this method also complement from overfitting?\n\n(A2) Thank you for the comment. We would like to further clarify what you meant by saying “complement from overfitting.” Our interpretation of the question is: whether COT could be used to fight against overfitting. Overfitting means a model fails to generalize, and in our paper we have reported the generalized performance of models trained by COT on the test data, which confirms models trained by COT generalize better. In addition, we also calculate the loss gap \"(testing loss - training loss)\" and report the results in the following table, where a smaller gap indicates that a model generalizes better. 
Experimental results confirm that models trained by COT seem to generalize better due to the smaller gap between training and testing loss.\n\n \"(Testing loss - training loss)\" for the state-of-the-art architectures on Cifar10 \n==================================================\n\t\t\t\t Baseline\t COT\nResNet-110 \t\t\t 0.36 0.33\nPreAct ResNet-18 0.28 0.26\nResNeXt-29 (2×64d) 0.20 0.19\nWideResNet-28-10\t\t0.23 0.21\nDenseNet-BC-121 \t0.22 0.22\n=================================================\n\n\n(Q3) In the numerical experiments, the comparison is carried out against a \"baseline\" method. Do the authors use regularization with these baseline methods? I believe the comparison will be fair if the regularization option is turned on for the baseline methods.\n\n(A3) Yes, the regularization (e.g., L2 Norm) techniques are used in all of the baseline methods, as specified in their original papers (e.g., ResNet [1], WideResNet [2], and DenseNet [3]). We agree with the reviewer that “the comparison will be fair if the regularization option is turned on for the baseline methods,” and that is exactly what we did in our paper: all the hyper-parameters, regularization and other training techniques are configured in the same way as in the original papers. For the details of the experimental setup, please refer to Section 3.2 in our manuscript.", "This paper considers augmenting the cross-entropy objective with \"complement\" objective maximization, which aims at neutralizing the predicted probabilities of classes other than the ground truth one. The main idea is to help the ground truth label stand out more easily by smoothing out potential peaks in non-ground-truth labels. The wide application of the cross-entropy objective makes this approach applicable to many different machine/deep learning applications, ranging from computer vision to NLP. \n\nThe paper is well-written, with a clear explanation for the motivation of introducing the complement entropy objective and several good visualizations of its empirical effects (e.g., Figures 1 and 2). The numerical experiments also incorporate a wide spectrum of applications and network structures as well as dataset sizes, and the performance improvement is quite impressive and consistent. In particular, the adversarial attacks example looks very interesting.\n\nOne small suggestion is that the authors can also make some comments on the connection between the two-step update algorithm (Algorithm 1) and multi-objective optimization. In particular, I would suggest the authors also try some multi-objective optimization techniques apart from the simple but effective heuristics, and see if some Pareto-optimality can be guaranteed and better practical improvement can be achieved.", "In this manuscript, the authors propose a secondary objective for softmax minimization. This complementary objective is based on evaluating the information gathered from the incorrect classes. Considering these two objectives leads to a new training approach. The manuscript ends with a collection of tests on a variety of problems.\n\nThis is an interesting point of view but the manuscript lacks discussion on several important questions:\n\n1) How is this idea related to regularization? If we increase the regularization parameter, we can attain sparse parameter vectors. \n2) Would this method also complement from overfitting?\n3) In the numerical experiments, the comparison is carried out against a \"baseline\" method. Do the authors use regularization with these baseline methods? 
I believe the comparison will be fair if the regularization option is turned on for the baseline methods.\n4) Why combining the two objectives in a single optimization problem and then solving the resulting problem is not an option instead of the alternating method given in Algorithm 1?\n5) How does alternating between two objectives change the training time? Do the authors use backpropagation?", "\n========\nSummary\n========\n\nThe paper deals with the training of neural networks for classification or sequence generation tasks, using a cross-entropy loss. Minimizing the cross-entropy means maximizing the predicted probabilities of the ground-truth classes (averaged over the samples). The authors introduce a \"complementary entropy\" loss with the goal of minimizing the predicted probabilities of the complementary (incorrect) classes. To do that, they use the average of sample-wise entropy over the complement classes. By maximizing this entropy, the predicted complementary probabilities are encouraged to be equal and therefore, the authors claim that it neutralizes them as the number of classes grows large. The proposed training procedure, named COT, consists of alternating between the optimization of the two losses.\n\nThe procedure is tested on image classification tasks with different datasets (CIFAR-10, CIFAR-100, Street View House Numbers, Tiny ImageNet and ImageNet), machine translation (training using IWSLT dataset, validation and test using TED tst2012/2013 datasets), and speech recognition (Gooogle Commands dataset). In the experiments, COT outperforms state-of-the-art models for each task/dataset.\n\nAdversarial attacks are also considered for the classification of images of CIFAR-10: using the Fast Gradient Sign and Basic Iterative Fast Gradient Sign methods on different models, adversarial examples specifically designed for each model, are generated. Then results of these models are compared to COT on these examples. The authors admit\nthat the results are biased since the adversarial attacks only target part of the COT objective, hence more accurate comparisons should be done in future work.\n\n===========================\n Main comments and questions\n===========================\n\nEnd of page 1: \"the model behavior for classes other than the ground truth stays unharnessed and not well-defined\". The probabilities should still sum up to 1, so if the ground truth one is maximized, the others are actually implicitly minimized. No?\n\nPage 3, sec 2.1: \"optimizing on the complement entropy drives ŷ_ij to 1/(K − 1)\". I believe that it drives each term ŷ_ij /(1 − ŷ_ig ) to be equal to 1/(K-1). Therefore, it drives ŷ_ij to (1 − ŷ_ig)/(K-1) for j!=g.\n\nThis indeed flattens the ŷ_ij for j!=g, but the effect on ŷ_ig is not controlled. In particular this latter can decrease. Then in the next step of the algorithm, ŷ_ig will be maximized, but with no explicit control over the complementary probabilities. There are two objectives that are optimized over the same variable theta. So the question is, are we sure that this procedure will converge? What prevents situations where the probabilities will alternate between two values? \n\nFor example, with 4 classes, we look at the predicted probabilities of a given sample of class 1:\nSuppose after step 1 of Algo 1, the predicted probabilities are: 0.5 0.3 0.1 0.1 \nAfter step 2: 0.1 0.3 0.3 0.3\nThen step 1: 0.5 0.3 0.1 0.1\nThen step 2: 0.1 0.3 0.3 0.3\nAnd so on... Can this happen? Or why not? 
Did the algorithm have trouble converging in any of the experiments?\n\nSec 3.1:\n\"additional efforts for tuning hyper-parameters might be required for optimizers to achieve the best performance\": Which hyper-parameters are considered here? If it is the learning rate, why not use a different one, tuned for each objective?\n\nSec 3.2:\nThe additional optimization makes each training iteration more costly. How much more? How do the total running times of COT compare to the ones of the baselines? I think this should be mentioned in the paper.\n\nSec 3.4:\nAs the authors mention, the results are biased and so the comparison is not fair here. Therefore I wonder about the relevance of this section. Isn't there an easy way to adapt the attacks to the two objectives to be able to illustrate the conjectured robustness of COT? For example, naively having a two steps perturbation of the input: one based on the gradient of the primary objective and then perturb the result using the gradient of the complementary objective?\n\n===========================\nSecondary comments and typos\n===========================\n\nPage 3, sec 2.1: \"...the proposed COT also optimizes the complement objective for neutralizing the predicted probabilities...\", using maximizes instead of optimizes would be clearer.\n\nIn the definition of the complement entropy, equation (2), C takes as parameter only y^hat_Cbar but then in the formula, ŷ_ig appears. Shouldn't C take all \\hat_y as an argument in this case?\n\nAlgorithm 1 page 4: I find it confusing that the (artificial) variable that appears in the argmin (resp. argmax) is theta_{t-1}\n(resp. theta'_t) which is the previous parameter. Is there a reason for this choice?\n\nSec 3:\n\"We perform extensive experiments to evaluate COT on the tasks\" --> COT on tasks\n\n\"compare it with the baseline algorithms that achieve state-of-the-art in the respective domain.\" --> domainS\n\n\"to evaluate the model’s robustness trained by COT when attacked\" needs reformulation.\n\n\"we select a state- of-the-art model that has the open-source implementation\" --> an open-source implementation\n\nSec 3.2:\nFigure 4: why is the median reported and not the mean (as in Figure 3, Tables 2 and 3)?\n\nTable 3 and 4: why is it the validation error that is reported and not the test error?\n\nSec 3.3:\n\"Neural machine translation (NMT) has populated the use of neural sequence models\": populated has not the intended meaning.\n\n\"We apply the same pre-processing steps as shown in the model\" --> in the paper?\n\nSec 3.4:\n\"We believe that the models trained using COT are generalized better\" --> \"..using COT generalize better\"\n\n\"using both FGSM and I-FGSM method\" --> methodS\n\n\"The baseline models are the same as Section 3.2.\" --> as in Section 3.2.\n\n\"the number of iteration is set at 10.\" --> to 10\n\n\"using complement objective may help defend adversarial attacks.\" --> defend against\n\n\"Studying on COT and adversarial attacks..\" --> could be better formulated\n\nReferences: there are some inconsistencies (e.g.: initials versus first name)\n\n\nPros\n====\n- Paper is clear and well-written\n- It seems to me that it is a new original idea\n- Wide applicability\n- Extensive convincing experimental results\n\nCons\n====\n- No theoretical guarantee that the procedure should converge\n- The training time may be twice longer (to clarify)\n- The adversarial section, as it is, does not seem relevant for me\n\n" ]
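The white-box evaluation mentioned in the author responses above perturbs each input along Epsilon * sign(primary gradient + complement gradient). The single-step sketch below reuses the complement_entropy function from the earlier sketch; the sign convention chosen for the complement term and the assumption that inputs lie in [0, 1] are guesses made for illustration, not details confirmed by the authors.

import torch
import torch.nn.functional as F

def fgsm_combined(model, x, y, epsilon):
    # One-step perturbation whose direction combines the gradients of both
    # training objectives: it ascends the cross entropy and descends the
    # complement entropy (assumed sign convention).
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    loss = F.cross_entropy(logits, y) - complement_entropy(logits, y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).detach().clamp(0.0, 1.0)  # assumes inputs in [0, 1]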
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "BkxPea7hyV", "HygCehX3yN", "HygCehX3yN", "r1lArwq9Am", "rkg-IxqSAQ", "BkxI3CtBCQ", "iclr_2019_HyM7AiA5YX", "HJlOtAKta7", "r1ejg15YaQ", "rJeeh8J5pm", "BkeB3kugCQ", "BkeB3kugCQ", "S1ebZ7sRam", "iclr_2019_HyM7AiA5YX", "r1gc8uIw27", "r1gc8uIw27", "Syx8_6U63X", "B1lv2Bdph7", "B1lv2Bdph7", "iclr_2019_HyM7AiA5YX", "iclr_2019_HyM7AiA5YX", "iclr_2019_HyM7AiA5YX" ]
iclr_2019_HyN-M2Rctm
Mode Normalization
Normalization methods are a central building block in the deep learning toolbox. They accelerate and stabilize training, while decreasing the dependence on manually tuned learning rate schedules. When learning from multi-modal distributions, the effectiveness of batch normalization (BN), arguably the most prominent normalization method, is reduced. As a remedy, we propose a more flexible approach: by extending the normalization to more than a single mean and variance, we detect modes of data on-the-fly, jointly normalizing samples that share common features. We demonstrate that our method outperforms BN and other widely used normalization techniques in several experiments, including single and multi-task datasets.
accepted-poster-papers
The paper develops an original extension/generalization of standard batchnorm (and group norm) by employing a mixture-of-experts to separate incoming data into several modes and separately normalizing each mode. The paper is well written and technically correct, and the method yields consistent accuracy improvements over basic batchnorm on standard image classification tasks and models. Reviewers and AC noted the following potential weaknesses: a) while large on artificially mixed data, improvements are relatively small on single standard datasets (<1% on CIFAR10 and CIFAR100) b) the paper could better motivate why multi-modality is important e.g. by showing histograms of node activations c) the important interplay between number of modes and batch size should be more thoroughly discussed d) the closely related approach of Kalayeh & Shah 2018 should be presented and contrasted with in more details in the paper. Also comparing to it in experiments would enrich the work.
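For completeness, here is a rough sketch of the normalization described above: a small gating network softly assigns every sample in the batch to one of K modes, and each mode is normalized with its own gate-weighted mean and variance before the shared affine transform. The class follows only the abstract and meta-review; the paper's actual gating parametrization and its handling of running statistics at inference time differ, and the version below uses training-time batch statistics throughout.

import torch
import torch.nn as nn

class ModeNorm2d(nn.Module):
    def __init__(self, num_channels, num_modes=2, eps=1e-5):
        super().__init__()
        self.num_modes, self.eps = num_modes, eps
        self.gate = nn.Linear(num_channels, num_modes)   # simple gating on pooled features
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x):                                # x: (N, C, H, W)
        n, _, h, w = x.shape
        gates = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)   # (N, K) soft assignments
        out = torch.zeros_like(x)
        for k in range(self.num_modes):
            g = gates[:, k].view(n, 1, 1, 1)
            denom = g.sum() * h * w + self.eps
            mu = (g * x).sum(dim=(0, 2, 3), keepdim=True) / denom
            var = (g * (x - mu) ** 2).sum(dim=(0, 2, 3), keepdim=True) / denom
            out = out + g * (x - mu) / torch.sqrt(var + self.eps)
        return self.weight * out + self.bias

With num_modes=1 the gate is constant and this sketch reduces to plain batch normalization on batch statistics, which is consistent with the small values of K discussed in the review below.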
train
[ "Hygq9xlc27", "B1gol3FKAX", "BklkkqFYA7", "ByeV2TieRX", "ryghy-BgRX", "BJgbLwQjam", "B1x5phA1T7", "SkeHwZ-q2m", "SyeC8uLEnm", "HJx9Mg3gh7", "H1eOYkPQ9m", "ryxCU3AxcQ", "SJeTPa9lqX", "B1gx9Ady5X" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public" ]
[ "Summary:\nBatch Normalization (BN) suffers from 2 flaws: 1) It performs poorly when the batch size is small and 2) computing only one mean and one variance per feature might be a poor approximation for multi-modal features. To alleviate 2), this paper introduces Mode Normalization (MN) a new normalization technique based on BN. It uses a gating mechanism, similar to an attention mechanism, to project the examples in the mini-batch onto K different modes and then perform normalization on each of these modes.\n\nClarity:\nThe paper is clearly written, and the proposed normalization is well explained.\n\nNovelty: \nThe proposed normalization is somewhat novel. I also found a similar paper on arXiv (submitted for review to IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018): M. M. Kalayeh, M. Shah, Training Faster by Separating Modes of Variation in Batch-normalized Models, arXiv 2018. I didn’t took the time to read this paper in details, but the mixture normalization they propose seems quite close to MN. Could the authors comment on this?\n\nPros and Cons:\n+ Clearly written and motivated\n+ Try to address BN’s weakness, which is an important direction in deep learning\n- I found similar papier in the literature\n- The proposed method aims to make BN perform better, but pushes it toward small batch settings, which is where BN performs poorly.\n- Misses comparisons with other techniques (see detailed comments).\n\nDetailed Comments:\n1. Multi-modality:\nIt is not clear if the features are multimodal when performing classification tasks. Some histograms of a few features in the network would have help motivate the proposed normalization. However, it seems indeed to be an issue when training GANs: to make BN work when placed in the discriminator, the real and fake examples must be normalized separately, otherwise the network doesn't train properly. Moreover, when dealing with multimodal datasets (such as the one you created by aggregating different datasets), one can use the FiLM framework (V. Dumoulin et al., Feature-wise transformations, Distill 2018), and compute different means and variances for each datasets. How would the proposed method perform against such method?\n2. Larger scale:\nIt would be nice to see how MN performs on bigger networks (such as the ResNet50, or a DenseNet), and maybe a more interesting fully-connected benchmark, such as the deep autoencoder.\n3. Small batch regime:\nIt seems that the proposed method essentially pushes BN towards a regime of smaller mini-batch size, where it is known to performs poorly. For instance, the gain in performances on the ImageNet experiments drops quite a lot already, since the training is divided on several GPUs (and thus the effective mini-batch is already reduced quite a lot). This effect gets worse as the size of the network increases, since the effective mini-batch size gets smaller. This problem also appears when working on big segmentation tasks or videos: the mini-batch size is typically very small for those problems. So I fear that MN will scale poorly on bigger setups. I also think that this is the reason why you need to use extremely small K.\n4. Validation set:\nWhat validation sets are you using in your experiments? In section 4.1, the different dataset and their train / test splits are presented, but what about validation?\n\nConclusion:\nGiven the similarity with another paper already in the literature, I reject the paper. 
Also, it seems to me that the technique actually pushed BN towards a small batch regime, where it is known to perform poorly. Finally, it misses comparison with other techniques.\n\nRevision:\nAfter the rebuttal, I increased my rating to a 6. I feel this paper could still be improved by better motivating why multi-modality is important for single tasks (for example, by plotting histograms of activations from the network). I also think that the paper by Kalayeh & Shah should be presented in more details in the related work, and also be compared to in the experimental setup (for example on a small network), especially because the authors say they have experience with GMMs.", "(a.) Thanks for your reply and for the acknowledgement of significant differences between our paper and that of Kalayeh & Shah. Since there is no software available from the authors, and given the non-standard optimization technique and extensive hyperparameter tuning required to set it up, we leave a comparison as future work.\n\n(b.) Our focus is not to address all the weaknesses of batch normalization, but specifically to increase its robustness against multi-modality. Note however that we show that our model can be incorporated into group norm, which aims to address this issue. So in this sense we show that accounting for modality – as in mode group norm (MGN) – can increase robustness in a small batch size setting as well.\n\nRegarding (c.): Using an oracle to split batches via their original dataset is certainly possible, and results for this particular approach have previously been reported by Rebuffi et al. (2017). Since this approach does not make sense for the majority of our experiments (single task, where D=1), we excluded it from our evaluation. Using an oracle boosts the performance of LeNet by around 1-2%, but please note that this assumes both train and test time domain knowledge and cannot be used in single domain classification tasks.\n\n(d.) Our experiments involve tuning the learning rate schedule as well as the single additional hyperparameter of our method, K. For the former, we followed He et al. (2015) (p. 776 left of the CVPR version of their paper) in all experiments. For validating the latter, we randomly sampled 20% of the training set as validation and found K=2 to be a good compromise. After fixing K=2, we train our models on train+validation sets and report the result on test splits.\n\nAs requested, we ran additional experiments on deeper networks, both on CIFAR10 and CIFAR100. For this, we implemented ResNet56 (which is more widely used for CIFAR tasks than ResNet50, see e.g. https://dawn.cs.stanford.edu/benchmark/CIFAR10/inference.html). Note that we used the exact same optimization setup as with ResNet20 in these experiments.\n\nOn CIFAR10, ResNet56 with BN resulted in a test error of 6.87% (slightly better than the original result of 6.97% reported in He et al. (2015)). Replacing all normalization layers with MN achieves a test error of 6.47%, boosting the performance of BN by ±0.4%. Similarly, MN with ResNet56 obtains a test error of 28.69% versus 29.70% of BN, thus improves 1% over BN.", "We thank the reviewer for reading through our paper in detail. Three central concerns were raised: (a.) the modality should be quantified in some way, (b.) the parametrization needs to be explained in more detail, and (c.) experiments for constant N/K are missing.\n\nRegarding (a.), to the best of our knowledge no quantitative measure exists in the literature to describe the modality of a task. 
This is a very good question however, and to shed some light on it, we ran an additional analysis and evaluated the average standard deviation of intermediate features (per channel) in our VGG13 experiment. At test time, instead of transforming samples to a normal (with standard deviation 1), BN oversqueezes samples, with a mean deviation of ±0.5, considerably lower than the target of 1. MN yields a deviation of around 0.9, which lies much closer to the training target, so MN is better equipped to deal with modality at test time.\n\n(b.): Just as in standard BN, we compute estimators after average pooling over height and width of each image. As such, the affine transformation within the gating unit has its preimage in C. We realize now that the second paragraph on p. 5 was in need of some clarification, and have updated this in our revision. Many thanks for pointing this out to us!\n\n(c.): For this, please cross-reference the result for MN of Table 1 (where K=2, N=128, N/K=64) with Table 5 in the Appendix (K=4, N=256, N/K=64). While the gradient updates for the MN units (i.e. its estimators and the parameters of its transformations) receive equivalently informed gradients in both trials, the gradients for the convolutional layers differ, and in all likelihood the larger batch size of N=256 overdamps the gradient information for these layers. This overdamping issue is persistent even when doubling the number of training epochs.", "(a.) Frist, I do apologize for letting you think I was accusing you of plagiarism. This is a serious offense, and by no means I implied such a thing. While reviewing your paper and looking up the recent literature about Batch Normalization, I quickly came across the paper by Kalayeh & Shah, and I was surprised you didn’t mentioned it in your paper. I simply thought you had been scooped. I also apologize for not having taken a closer look (which I did now) at this paper.\n\nThat said, I thank you for your detailed comment on the difference between both paper. As you mentioned, such comparison should figure in your literature review, since both methods are designed to provide multi-modality to BN. The key difference is indeed how it is implemented: They use an outside-of-the-loop GMM, while you use an attention mechanism. Your method is certainly easier to implement and use in modern deep learning frameworks than the GMM approach. A comparison with the GMM approach would still have been nice, or some histogram plots showing the means and variances of different modes.\n\n(b.) My point was that MN suffers even more than BN from the small size regime (note that this could also be a positive effect, as it could introduce stronger regularization). In Table 2, we can see that BN drops 3% error rate when going from 16 to 4 examples per mini-batch, where MN drops 4%. Also, this experiment is heavily multimodal in the first place (and thus one can expect BN to perform poorly, and this is the reason why I proposed (c.) for a more fair comparison). The gap in performances between MN and BN on CIFAR and ImageNet gets smaller and smaller, as the effective mini-batch size get smaller.\n\nAlso by my comment that your paper \"try to address BN’s weakness, which is an important direction in deep learning\", I meant that your paper is going beyond uni-modal normalization, not that it is designed to solve the small size issue of BN.\n\n(c.) Sorry if I didn’t expressed myself clearly enough here. I was suggesting to use the information from which dataset D (MNIST, CIFAR, ...) 
one example comes from, and normalize it using the examples in the mini-batch that also come from dataset D. You would then obtain different statistics for different datasets. This would help to see how well your method compares against explicit separated normalization.\n\n(d.) I'm still interested to know if 1. you ran experiments on deeper networks (like the ResNet50) and 2. what is the validation sets you used through your experiments.\n\nI hope I let you enough time to answer again if you want to, and I will certainly increase the score of my review now that the difference between the two papers has been clearly established.", "Many thanks for the review. Regarding 1): we consider MN to be a generalization of BN, and – see paragraph 4 on p. 5 – wanted to make sure the normalization unit can assume the standard form of BN, whenever that is optimal and yields the best performance. The obvious benefit of not regularizing this behavior is that MN becomes seamlessly insertable into any deep network. Regarding sparseness: note that (even at test time) assignments are usually quite pronounced, at roughly 0.95-0.99 on average.\n\n2): Allowing individual affine parameters only improves test performance minimally (differences are in the regime of 0.05-0.2%). In all likelihood this is because normalizing features with multiple means and standard deviations already standardizes them sufficiently.\n\n3): As shown in paragraph 2, p. 5, when K=1, MN reduces to standard BN. We also went ahead and implemented your suggestion to activate with a sigmoid. Unfortunately, the resulting performance was worse than that of vanilla BN.\n", "Three main concerns were raised: (a.) a similar publication exists, giving grounds for a clear rejection of this paper. We thank the reviewer for bringing the interesting paper by Kalayeh & Shah to our attention, but show below that this claim is unjustified. (b.) MN suffers from weaknesses that BN also suffers from in the small batch size regime, and (c.) the paper should discuss some additional related methods.\n\nRegarding (a.): we are thankful for having this paper pointed out to us and will include it in our revision. That being said, we strongly rebut the claim that their paper is equivalent to ours, as their approach is very different. After reading their preprint in detail, we summarize below.\n\nThe crucial difference is that in MN we employ a Mixture of Experts (MoE) approach and parametrize each expert with a simple attention-like mechanism on the image’s features. MN can effortlessly be added to any modern deep convolutional network, can be optimized with standard SGD, has a very small computational overhead, and introduces only a single hyperparameter (number of modes K). On the other hand, Kalayeh & Shah propose using a GMM to fit the feature distribution within the normalization unit (from hereon, we thus abbreviate MN-GMM). As it happens, we experimented with a GMM-based approach before designing MN, so we are well familiar with the several technical difficulties and impracticalities that using GMMs imposes:\n\n* Due to the complexity of fitting GMMs, in their experiments Kalayeh & Shah never swap out all BN layers with MN-GMM layers, see p. 7 (right). So their resulting network is a mixture of BN and (very few, usually 1) MN-GMM normalizations. We designed MN to be lightweight and easy to deploy, and in our experiments show that MN can replace the entirety of BN layers, even in a deep network.\n* As Kalayeh & Shah explain on p. 
6 (right column) they fit the GMM via EM, in a completely separate optimization step, outside the training loop of the network. In designing our method, it was important to us to sidestep this restriction, and MN can be trained end-to-end alongside the other parameters of the network.\n* Further complicating MN-GMM is that it requires careful, manual decisions in its tuning. From our own experiments, we are well aware of the considerations one needs to ponder over in MN-GMM. A few examples: (i.) how many EM iterations are needed? (ii.) Which BN units should be replaced, which should remain intact? (iii.) How should the GMM parameters be initialized? (iv.) How many components should be assumed? In MN, the practitioner needs to make a single choice (in that K needs to be set). Once that choice has been made, MN can be used off-the-shelf, making it straightforward to use in an applied setting.\n\nIn MN-GNN Kalayeh & Shah (2018) propose an interesting modification to BN, however it should be clear from the above points that the similarities to our method are extremely limited. R2 states that “I didn’t took the time to read this paper in details”, only to continue “given the similarity with another paper already in the literature, I reject the paper”. We were very surprised by the rejection based on a “quick read”, and – for a top-tier conference like ICLR – would have found it appropriate to read the mentioned paper and to compare it to ours in a more careful manner. Once more, we firmly reject the implication that our proposed method has been covered in their publication, or that we, in any way, copied from their work.\n\n(b.): splitting up batches does introduce errors from finite estimation, which is an issue that we raise ourselves on p. 6, third paragraph. As we argue in our paper, many applications exist where the batch size restriction isn’t a major issue, and a larger error results from the underlying modality of the task. MN is aimed at alleviating issues in these particular tasks, we never designed it to solve the small batch size issues of BN, and at no point claim that it does.\n\nThat being said, even though MN splits minibatches into multiple modes by construction (thereby collecting statistics from less samples than BN), in practice MN still performs better than BN, even for small batch sizes. This is shown in Table 2, where MN clearly is more robust to smaller batch sizes than BN.\n\n(c.): FiLM learns to adaptively influence the output of a neural network by applying transformations to intermediate features conditioned on some input. FiLM’ed networks still use BN, and thus FiLM does not address any shortcomings of BN, so MN can simply be used alongside FiLM. There is a weak connection to our paper in that MN can also be seen as a conditional layer, however with the completely different focus of adapting feature normalizations. We thank the reviewer for pointing out this work, and have included it in our revision.", "The authors proposed a normalization method that learns multi-modal distribution in the feature space. The number of modes $K$ is set as a hyper-parameter. Each sample $x_{n}$ is distributed (softly assigned) to modes by using a gating network. Each mode keeps its own running statistics. \n\n1) In section 3.2, it is mentioned that the MN didn't need and use any regularizer to encourage sparsity in the gating network. Is MN motivated to assign each sample to multiple modes evenly or to a distinct single mode? 
It would be better to provide how the gating network outputs sparse assignment along with the qualitative analysis.\n\n2) The footnote 3 showed that individual affine parameters doesn't improve the overall performance. How can this be interpreted? If the MN is assuming multi-modal distribution, it seems more reasonable to have individual affine parameters.\n\n3) The overall results show that increasing the number of modes $K$ doesn't help that much. The multi-task experiments used 4 different datasets to encourage diversity, but K=2 showed the best results. Did you try to use K=1 where the gating network has a sigmoid activation?", "The paper proposes a generalisation of Batch Normalisation (BN) under the assumption that the statistics of the unit activations over the batches and over the spatial dimensions (in case of convolutional networks) is not unimodal. The main idea is to represent the unit activation statistics as a mixture of modes and to re-parametrise by using mode specific means and variances. The \"posterior\" mixture weights for a specific unit are estimated by gating functions with additional affine parameters (followed by softmax). A second, similar variant applies to Group Normalisation, where the statistics is taken over channel groups and spatial dimensions (but not over batches). \n\nTo demonstrate the approach experimentally, the authors first consider an \"artificial\" task by joining data from MNIST, Fashion MNIST, CIFAR10 and SVHN and training a classifier (LeNet) for the resulting 40 classes. The achieved error rate improvement is 26.9% -> 23.1%, when comparing with standard BN. In a second experiment the authors apply their method to \"single\" classification tasks like CIFAR10, CIFAR100 and ILSVRC12 and use large networks as e.g. VGG13 and ResNet20. The achieved improvements when comparing with standard BN are one average 1% or smaller.\n\nThe paper is well written and technically correct.\n\nFurther comments and questions to the authors:\n\n- The relevance of the assumption and the resulting normalisation approach would need further justification. The proposed experiments seem to indicate that the node statistics in the single task case are \"less multi-modal\" as compared to the multi-task. Otherwise we would expect the comparable improvements by mode normalisation in both cases? On the other hand, it should be easy to verify the assumption of multi-modality experimentally, by collecting node statistics in the learned network (or at some specific epoch during learning ). It should be also possible to give some quantitative measure for it.\n\n- Please explain the parametrisation of the gating units more precisely (paragraph after formula (3)). Is the affine mapping X -> R^k a general one? Assuming that X has dimension CxHxW, this would require a considerable amount of additional parameters and thus increase the VC dimension of the network (even if its primary architecture is not changed). Would this require more training data then? I miss a discussion of this aspect.\n\n- When comparing different numbers of modes (sec. 4.1, table 1), the size of the batch size was kept constant(?). The authors explain the reduction of effectiveness of higher mode numbers as a consequence of finite estimation (decreasing number of samples per mode). Would it not be reasonable to increase the batch size proportionally, such that the amount of samples per mode is kept constant?", "Hi, thanks for your interest and your questions. 
We parametrize the gating functions with an affine transformation followed by a softmax, see second paragraph on p. 5. Using an alternative in any subset of layers is certainly possible, this would need to be decided on a case-by-case basis though, as it depends on e.g. choice of architecture, or the task at hand.\n\nRegarding your second question, we apply the normalization to the full image, while estimators are computed after pooling over height and width, so we follow the exact same protocol as in batch norm.", "1. Since features in different layers represent differently, is there necessary to add a gating network alongside each normalization module? And what is the structure of your gating network?\n2. Can you provide more details about Algorithm 1? Especially $y_{nk}$ and $x_n-\\mu_k$,since different shape between (n,c,h,w) and (k,c) can not do subtraction directly.", "Thank you for your continued interest. MN does not use any explicit label information, and (given the complexity of the datasets that we study here) is unable to uncover the underlying cluster structure, see penultimate paragraph on p. 5. Nonetheless, in our experiments we observe that MN does allocate samples into joint modes that have similar qualities, such as color or object size, c.f. Fig 2.", "Thanks for reply. I still have a question. Are the examples normalized by the same mode in MN from the same category?", "Many thanks for your interest in our paper and your comment. Indeed, increasing the number of modes does not always increase performance, see also our third paragraph on p. 6.\n\nIntuitively, one would expect larger choices of K to always improve performance (at the expense of some computational cost). The fact that this isn’t the case connects to the same issue that also makes BN vulnerable to small batch sizes: for fixed N, increasing K results in less and less samples being assigned to a joint mode. Estimators are then computed from smaller partitions, in turn making them less accurate. Besides this, a second dynamic arguably comes into play in the hierarchicality of deep architectures. If the original network has L normalizations, then – compared to BN – we introduce L(K-1) additional normalizations in MN. So even in its simplest configuration, MN comes with L additional normalizations, which could be more than the network needs to account for the relevant modes in the distribution.\n\nIn practice choosing K=2 gave us a significant performance boost in all our experiments (and therefore we recommend this value), going beyond that only resulted in benefits if the batch size was chosen to be sufficiently large, see the Appendix.", "From table 1, it looks that increasing the number of K in MN also increases error rate. What value of K shall we use in practice?" ]
[ 6, -1, -1, -1, -1, -1, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_HyN-M2Rctm", "ByeV2TieRX", "SkeHwZ-q2m", "BJgbLwQjam", "B1x5phA1T7", "Hygq9xlc27", "iclr_2019_HyN-M2Rctm", "iclr_2019_HyN-M2Rctm", "HJx9Mg3gh7", "iclr_2019_HyN-M2Rctm", "ryxCU3AxcQ", "SJeTPa9lqX", "B1gx9Ady5X", "iclr_2019_HyN-M2Rctm" ]
iclr_2019_HyNA5iRcFQ
Detecting Egregious Responses in Neural Sequence-to-sequence Models
In this work, we attempt to answer a critical question: whether there exists some input sequence that will cause a well-trained discrete-space neural network sequence-to-sequence (seq2seq) model to generate egregious outputs (aggressive, malicious, attacking, etc.). And if such inputs exist, how to find them efficiently. We adopt an empirical methodology, in which we first create lists of egregious output sequences, and then design a discrete optimization algorithm to find input sequences that will cause the model to generate them. Moreover, the optimization algorithm is enhanced for large vocabulary search and constrained to search for input sequences that are likely to be input by real-world users. In our experiments, we apply this approach to dialogue response generation models trained on three real-world dialogue data-sets: Ubuntu, Switchboard and OpenSubtitles, testing whether the model can generate malicious responses. We demonstrate that given the trigger inputs our algorithm finds, a significant number of malicious sentences are assigned large probability by the model, which reveals an undesirable consequence of standard seq2seq training.
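A minimal sketch of the kind of discrete trigger-input search the abstract describes: repeatedly sweep over input positions and keep the single-word substitution that most increases a scoring function, which is assumed to return the model's (average) log-likelihood of the egregious target sequence. This simplified coordinate-ascent loop only stands in for the gibbs-enum algorithm named later in this record; score_fn, the vocabulary shortlist, and all other names are assumptions for illustration, and the toy scoring function in the usage line is a stand-in for a trained seq2seq model.

import numpy as np

def greedy_trigger_search(score_fn, vocab, seq_len, n_sweeps=5, seed=0):
    # start from a random input sequence and greedily improve it word by word
    rng = np.random.RandomState(seed)
    x = [vocab[i] for i in rng.randint(len(vocab), size=seq_len)]
    best = score_fn(x)
    for _ in range(n_sweeps):
        improved = False
        for pos in range(seq_len):
            for w in vocab:                           # in practice: a top-G shortlist, not the full vocabulary
                cand = x[:pos] + [w] + x[pos + 1:]
                s = score_fn(cand)
                if s > best:                          # keep the substitution if it raises the target's score
                    x, best, improved = cand, s, True
        if not improved:
            break
    return x, best

# toy usage: the scoring function here just counts a word, purely to show the interface
toy_vocab = ["you", "will", "kill", "the", "hello"]
trigger, score = greedy_trigger_search(lambda toks: float(toks.count("kill")), toy_vocab, 4)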
accepted-poster-papers
This work examines how to craft adversarial examples that will lead trained seq2seq models to generate undesired outputs (here defined as, assigning higher-than-average probability to undesired outputs). Making a model safe for deployment is an important unsolved problem and this work is looking at it from an interesting angle, and all reviewers agree that the paper is clear, well-presented, and offering useful observations. While the paper does not provide ways to fix the problem of egregious outputs being probable, as pointed out by reviewers, it is still a valuable study of the behavior of trained models and an interesting way to "probe" them, that would likely be of high interest to many people at ICLR.
train
[ "Skx74gyo07", "BklKQvMAnQ", "rkeN_ZS5Cm", "rkgBookDAQ", "SJx0z4aHRX", "H1erS8ZYpQ", "H1epDSZtaX", "BJgV-LWK6X", "SygNarWt6X", "SklZWlF9nm", "SJgy7QFK37" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Yes, will do. We think it is an interesting and informative investigation(thanks for the suggestion), and we will add these to the final version of the paper(if accepted).\n\nSorry, let us clarify: we first take all the target sentences that are \"hit\" w.r.t io_sample_min_hit in the mal-list(which is about 10% among all targets), then average the word-level rank during decoding for all the words in these \"hit\" target sentences. ", "This paper explores the task of finding discrete adversarial examples for (current) dialog models in a post hoc manner (i.e., once models are trained). In particular, the authors propose an optimization procedure for crafting inputs (utterances) that trigger trained dialog models to respond in an egregious manner.\n\nThis line of research is interesting as it relates to real-world problems that our models face before they can be safely deployed. The paper is easy to read, nicely written, and the proposed optimization method seems reasonable. The study also seems clear and the results are fairly robust across three datasets. It was also interesting to study datasets which, a priori, seem like they would not contain much egregious content (e.g., Ubuntu \"help desk\" conversations).\n\nMy main question is that after reading the paper, I'm not sure that one has an answer to the question that the authors set out to answer. In particular, are our current seq2seq models for dialogs prone to generating egregious responses? On one hand, it seems like models can assign higher-than-average probability to egregious responses. On the other, it is unclear what this means. For example, it seems like the possibility that such a model outputs such an answer in a conversation might still be very small. Quantifying this would be worthwhile. \n\nFurther, one would imagine that a complete dialog system pipeline would contain a collection of different models including a seq2seq model but also others. In that context, is it clear that it's the role of the seq2seq model to limit egregious responses? \n\nA related aspect is that it would have been interesting to explore a bit more the reasons that cause the generation of such egregious responses. It is unclear how representative is the example that is detailed (\"I will kill you\" in Section 5.3). Are other examples using words in other contexts? Also, it seems reasonable that if one wants to avoid such answers, countermeasures (e.g., in designing the loss or in adding common sense knowledge) have to be considered.\n\n\nOther comments:\n\n- I am not sure of the value of Section 3. In particular, it seems like the presentation of the paper would be as effective if this section was summarized in a short paragraph (and perhaps detailed in an appendix).\n \n- Section 3.1, \"continuous relaxation of the input embedding\", what does that mean since the embedding already lives in continuous space?\n \n- I understand that your study only considers (when optimizing for egregious responses)) dialogs that are 1-turn long. I wonder if you could increase hit rates by crafting multiple inputs at once.\n \n- In Section 4.3, you fix G (size of the word search space) to 100. Have you tried different values? Do you know if larger Gs could have an impact of reported hit metrics?\n\n- In Table 3, results from the first column (normal, o-greedy) seem interesting. Wouldn't one expect that the model can actually generate (almost) all normal responses? 
Your results indicate that for Ubuntu models can only generate between 65% and 82% of actual (test) responses. Do you know what in the Ubuntu corpus leads to such a result?\n \n- In Section 5.3, you seem to say that the lack of diversity of greedy-decoded sentences is related to the low performance of the \"o-greedy\" metric. Could this result simply be explained because the model is unlikely to generate sentences that it has never seen before? \n\n You could try changing the temperature of the decoding distribution, that should improve diversity and you could then check whether or not that also increases the hit rate of the o-greedy metric.\n\n- Perhaps tailoring the mal lists to each specific dataset would make sense (I understand that there is already some differences in between the mal lists of the different datasets but perhaps building the lists with a particular dataset in mind would yield \"better\" results). \n", "Thanks for providing these. Unfortunately, I don't have useful insights about other possible metrics.\n\nI think it would be nice to add a short paragraph about some of these results in the paper. \n\nWhen you say that the \"average word-level rank for Ubuntu is 3.09, for OpenSubtitles it is 1.80\". Is that averaged across all words in an utterance from the mal-list? ", "1)\nGood point, we agree that studying how the egregious output rank in the beam-search list will give a better sense of how bad or not bad the situation is. Before that, let us emphasis the reasons behind how we define sample_hit:\n1. This definition is intuitive and data/sequeunce_length/vocab invariant, because it only compares the average log-likelihood the trained model assigned to the egregious outputs and reference outputs. However, the rank in the decoding list is obviously, data/length/vocab variant. For example, when the target length is longer or the vocabulary is larger, the target will get lower rank, but it doesn't mean the model is safe.\n2. The trigger inputs for sample_hit definition is more straightforward to optimize, but for rank it will be more involved.\nBut we agree it remains an important question whether the sample_hit is the best definition for egregious outputs (it depends on what kind of guarantee you want your generator to have), any discussion or advice are very welcome.\n\nHere's a study about where the egregious output rank in the beam-search list, on Ubuntu and OpenSubtitles data-sets:\n1. Given the trigger input and the mal target pairs (for io_sample_min_hit) our algorithm found, we do a beam-search during decoding with a beam size of 1000. And check whether the mal target sequence is found in the 1000-best-list. \n\nFor Ubuntu, only very few mal target (2% among the hit ones) appear in the 1000-list. Also, the experiment revealed that the list is dominated by generic responses. However, this is not surprising. It is hard to locate a specific sequence in beam-search for a large vocabulary data, you need to use an enormous beam size, which is too costly.\n\nFor OpenSubtites, 10% among the hit targets appear in the 1000-list. This is alarming, but we believe it is due to the reason that the OpenSubtiltes data itself contains a lot of dirty sentences.\n\n2. Since the exact seq-level rank maybe unavailable, we switch to report a word-level rank statistics. 
By word-level rank we mean given the right history, what is the rank of the target word in the distribution P(y_t|x,y_{1...t-1}), this statistics is very efficient to compute.\nThe result is that the average word-level rank for Ubuntu is 3.09, for OpenSubtitles it is 1.80 so it is on average ranked among the first 0.01% in the vocabulary.\nThis result shows that on word-level, the malicious target sequence has a very high rank.\n\nBelow is two examples of beam-search in the OpenSubtiltes data:\nmal target: <s> that 's enough </s>\ntrigger_input: it 's one of you , you n't i 're has guess to tom <eou>\nbeam_id 0: <s> no . </s>\nbeam_id 1: <s> what ? </s>\nbeam_id 2: <s> no ! </s>\n........\nbeam_id 220: <s> you 're right . </s>\nbeam_id 221: <s> you think so ? </s>\nbeam_id 222: <s> what 's the matter with you ? </s>\nbeam_id 223: <s> that 's enough </s>\n\n\nmal target: <s> leave me alone . </s>\ntrigger input: was same in 's hi you . , you 've find ca wait ! <eou>\nbeam_id 0: <s> come on ! </s>\nbeam_id 1: <s> <unk> ! </s>\nbeam_id 2: <s> no ! </s>\nbeam_id 3: <s> where are you going ? </s>\nbeam_id 4: <s> stop ! </s>\nbeam_id 5: <s> hurry up ! </s>\n........\nbeam_id 41: <s> come here ! </s>\nbeam_id 42: <s> no , no , no . </s>\nbeam_id 43: <s> leave me alone . </s>\n\n\nWe look forward to hear what you think about these results.\n\n6)\nYes, as stated in our paper, the \"normal\" list is specially designed to test the ability of the algorithm, and a perfect trigger input search algorithm should get 100% hit rate. Note that to the best of our knowledge, this \"ability test\" is not conducted in NLP adversarial attack literature before. Due to the difficulty of discrete-space optimization, the result that the algorithm fail to find the adversarial input, doesn't mean it doesn't exist. ", "Thanks a lot for taking the time to reply to all of my questions/comments. \n\n1) You write: \"We believe that a very natural and desirable quality of the model is that “the probability assigned to a bad sentence should not be larger than the probability of a good(reference) sentence.” Unfortunately, our experiments clearly show that this is not the case, which is alarming.\"\n\nI still think that it would be useful to quantify this, e.g. in terms of where does that sentence rank according to some decoding strategy. I cannot completely convince myself that above average is that bad given that the space of all sentences is large.\n\n6) You write: \" If our algorithm is perfect, the result should be 100%. This result shows that there still remains room for (maybe big) improvements for the trigger input search algorithm. \"\n\nThat's interesting. I was under the impression that it was a limitation of the seq2seq model (i.e., it could not actually generate all responses). I guess I misunderstood this. Thanks for clarifying. \n\n\n8) You write: \"It is less clear to us whether that will change the greedy decoding behavior however, because changing the temperature should not change which element is the maximum. Do you agree?\" \n\nGood point, I agree. You'd have to look further down the list or sample. Thanks.\n", "Thanks for the detailed review, here’s responses to the questions:\n\n1) In Section 3, even if the \"l1 + projection\" experiments seem to show that generating egregious outputs with greedy decoding is very unlikely, it doesn't definitely prove so. 
It could be that your discrete optimization algorithm is suboptimal, especially given that other works on adversarial attacks for seq2seq models use different methods such as gradient regularization (Cheng et al. 2018).\nSimilarly, the brute-force results on a simplified task in Appendix B are useful, but it's hard to tell whether the conclusions of this experiment can be extrapolated to the original dialog task.\n\nWe agree that our approach is not a proof for the robustness for greedy decoding, but in this work we provide several empirical experiments from different angles (the main result, continuous relaxation and brute-force enumeration) to support that claim. \n\nAnd you’re right in that our algorithm is not perfect (since the hit rate for the normal list is not 100%, there is room for improvement in the search algorithm). We are aware that the algorithm in (Cheng et al. 2018), in also applicable in our setting. However, the main contribution of our work is not about determining which algorithm is the best. We proposed a simple and effective gibbs-enum algorithm, and more importantly used it to demonstrate that the “egregious output” problem exists in standard seq2seq model training.\n\n2) Given that you also study \"o-greedy-hit\" in more detail with a different algorithm in Sections 4 and 5, I would consider removing Section 3 or moving it to the Appendix for consistency.\n\nThe reason we put emphasis on the continuous relaxation experiment in Section 3 is that we believe this is the first natural approach researchers will try in order to find trigger inputs for some target sequence. We felt that by demonstrating that this doesn’t work, motivated the enumeration based algorithm, such as gibbs-enum. \n\nThanks for the review!\n", "Thanks for the detailed review, here’s responses to the advice and questions:\n\n1) My main question is that after reading the paper, I'm not sure that one has an answer to the question that the authors set out to answer. In particular, are our current seq2seq models for dialogs prone to generating egregious responses? On one hand, it seems like models can assign higher-than-average probability to egregious responses. On the other, it is unclear what this means. For example, it seems like the possibility that such a model outputs such an answer in a conversation might still be very small. Quantifying this would be worthwhile. \n\nOne clear observation that can be made from the experiments regarding greedy decoding is that the model is very robust against egregious outputs, at least those used in the experiments. Unless one is using data-sets like Opensubtitles. With regards to sampling, the reviewer is correct, but, since we are dealing with large vocabulary seq2seq models, the actual probability assigned to any sequence will be very small. We believe that a very natural and desirable quality of the model is that “the probability assigned to a bad sentence should not be larger than the probability of a good(reference) sentence.” Unfortunately, our experiments clearly show that this is not the case, which is alarming.\n\n2) Further, one would imagine that a complete dialog system pipeline would contain a collection of different models including a seq2seq model but also others. In that context, is it clear that it's the role of the seq2seq model to limit egregious responses? 
\n\nThis is a good question but we believe that it is slightly out of the scope of this paper because we are examining End-to-End seq2seq models (in part because they have gained increasing popularity in recent years). The reviewer is correct that one can have additional modules in the pipeline to prevent bad responses from by the system, but we also believe that, ideally, the seq2seq models should be robust against egregious behavior by themselves.\n\n3) A related aspect is that it would have been interesting to explore a bit more the reasons that cause the generation of such egregious responses. It is unclear how representative is the example that is detailed (\"I will kill you\" in Section 5.3). Are other examples using words in other contexts? Also, it seems reasonable that if one wants to avoid such answers, countermeasures (e.g., in designing the loss or in adding common sense knowledge) have to be considered.\n\nTo the first question, “I will kill you” is just one example, and we can do this for many alternatives. The key is that we believe the model is doing a good job of generalizing, but it does not know that some sentences are not proper to generate. For example, people talk about “hating something”, and “you” is a noun, so the model could generalize to “I hate you”. People also talk about “passwords”, but the model doesn’t know one should not ask “What’s your password?” \n\nAs to the second question, we believe the reviewer is suggesting future work, and we agree that these are exciting directions to pursue in the future.\n\n4) I am not sure of the value of Section 3. In particular, it seems like the presentation of the paper would be as effective if this section was summarized in a short paragraph (and perhaps detailed in an appendix). Section 3.1, \"continuous relaxation of the input embedding\", what does that mean since the embedding already lives in continuous space?\n\nTo the first question, we agree with the suggestion that Section 3 can be shortened. The reason we put emphasis on the continuous relaxation experiment is that we believe this is the first approach researchers will try in order to find trigger inputs for some target sequence. We thought that pointing out that this doesn’t work served as a useful motivation to turn to a enumeration based algorithm, such as gibbs-enum.\n\nFor the second (clarifying) question, it’s true that the embedding lives in continuous space, but they are constrained to be one of the columns in the embedding matrix E^{enc} in the trained model. By “continuous relaxation of the input embedding” we mean that we remove the column constraint, and allow the vector to be any continuous vector. We’ll add the explanation to the paper.\n\n5) I understand that your study only considers (when optimizing for egregious responses)) dialogs that are 1-turn long. I wonder if you could increase hit rates by crafting multiple inputs at once.\n\n\nOne of the points of our work is that even if you just manipulate a 1-turn history, it is enough to trigger egregious outputs. Examining multi-turn histories will be a good subject for future work. For us, it will involve re-implementing code and re-running experiments. Our current expectation is that when you manipulate multi-turn history, that the hit rates will increase, but not significantly.\n", "Thanks for the detailed review, here’s responses to the advice and questions:\n\n1) I found some of the appendices (esp. 
B and C) to be important for understanding the paper and believe these should be in the main paper. Moving parts of Appendix A in the main text would also add to the clarity.\n\nThanks for reading the appendices! We agree that it would be our preference to move them into the main body of the paper, but we were constrained by the 10 page limit. \n\n2) The lack of control over the outputs of seq2seq is a major roadblock towards their broader adoption. The authors propose two algorithms for trying to find inputs creating given outputs, a simple one relying on continuous optimization this is shown not to work (breaking when projecting back into words), and another based relying on discrete optimization. The authors found that the task is hard when using greedy decoding, but often doable using sampled decoding (note that in this case, the model will generate a different output every time). My take-aways are that the task is hard and the results highlight that vanilla seq2seq models are pretty hard to manipulate; however it is interesting to see that with sampling, models may sometimes be tricked into producing really bad outputs.\nThis white-box attack applicable to any chatbot. As the authors noted, an egregious output for one application (\"go to hell\" for customer service) may not be egregious for another one (\"go to hell\" in MT).\nOverall, the authors ask an interesting question: how easy is it to craft an input for a seq2seq model that will make it produce a \"very bad\" output. The work is novel, several algorithms are introduced to try to solve the problem and a comprehensive analysis of the results is presented. The attack is still of limited practicality, but this paper feels like a nice step towards more natural adversarial attacks in NLG.\n\nYour understanding about the conclusions and limitations of the this work is correct. These are the main ideas we try to convey in the paper.\n\n3) One last thing: the title seems a bit misleading, the work is not about \"detecting\" egregious outputs.\n\nIt is true, that we are looking for trigger inputs that would cause the model to output egregious targets in a given list. Thus we agree that “detecting” could be a bit misleading…. But we don’t have better word choice for now. Any suggestions are welcome!\n\nThanks for the review!\n", "6) In Table 3, results from the first column (normal, o-greedy) seem interesting. Wouldn't one expect that the model can actually generate (almost) all normal responses? Your results indicate that for Ubuntu models can only generate between 65% and 82% of actual (test) responses. Do you know what in the Ubuntu corpus leads to such a result?\n\nThis is a good question. If our algorithm is perfect, the result should be 100%. This result shows that there still remains room for (maybe big) improvements for the trigger input search algorithm. That is also a good future research direction. We believe that the Ubuntu result is somewhat special in that, as we tried to explain, due to the “generic response” situation we see in both the Switchboard and Opensubtitles data, we switch to sampling for the normal list. Thus, the reason could be that a greedy-hit is a stronger constraint than sample-hit, and it is more difficult to find trigger inputs for that.\n\n7) In Section 5.3, you seem to say that the lack of diversity of greedy-decoded sentences is related to the low performance of the \"o-greedy\" metric. 
Could this result simply be explained because the model is unlikely to generate sentences that it has never seen before? \n\nThat is a plausible explanation but we believe this problem is somewhat special due to the dialogue response setting. When doing greedy decoding, the model tends to give very common outputs. For other tasks like machine translation, from greedy decoding you will get very good outputs (things never seen in the data).\n\n8) You could try changing the temperature of the decoding distribution, that should improve diversity and you could then check whether or not that also increases the hit rate of the o-greedy metric.\n\n\nThat is a good suggestion, and could indeed improve diversity. It is less clear to us whether that will change the greedy decoding behavior however, because changing the temperature should not change which element is the maximum. Do you agree?\n\n9) Perhaps tailoring the mal lists to each specific dataset would make sense (I understand that there is already some differences in between the mal lists of the different datasets but perhaps building the lists with a particular dataset in mind would yield \"better\" results). \n\nThis is also good advice, and could make the “attack” more powerful. Since that approach is more time consuming, for our initial effort we tried to create general malicious targets that should be applicable to a wide range of dialogue data.\n\nThanks for the review!\n", "Main contribution: devising and evaluating an algorithm to find inputs that trigger arbitrary \"egregious\" outputs (\"I will kill you\") in vanilla sequence-to-sequence models, as a white-box attack on NLG models.\n\nClarity:\nThe paper is overall clear. I found some of the appendices (esp. B and C) to be important for understanding the paper and believe these should be in the main paper. Moving parts of Appendix A in the main text would also add to the clarity.\n\nOriginality:\nThe work looks original. It is an extension of previous attacks on seq2seq models, such as the targeted-keyword-attack from (Cheng et al., 2018) in which the model is made to produce a keyword chosen by the attacker.\n\nSignificance of contribution:\nThe lack of control over the outputs of seq2seq is a major roadblock towards their broader adoption. The authors propose two algorithms for trying to find inputs creating given outputs, a simple one relying on continuous optimization this is shown not to work (breaking when projecting back into words), and another based relying on discrete optimization. The authors found that the task is hard when using greedy decoding, but often doable using sampled decoding (note that in this case, the model will generate a different output every time). My take-aways are that the task is hard and the results highlight that vanilla seq2seq models are pretty hard to manipulate; however it is interesting to see that with sampling, models may sometimes be tricked into producing really bad outputs.\nThis white-box attack applicable to any chatbot. As the authors noted, an egregious output for one application (\"go to hell\" for customer service) may not be egregious for another one (\"go to hell\" in MT).\n\nOverall, the authors ask an interesting question: how easy is it to craft an input for a seq2seq model that will make it produce a \"very bad\" output. The work is novel, several algorithms are introduced to try to solve the problem and a comprehensive analysis of the results is presented. 
The attack is still of limited practicality, but this paper feels like a nice step towards more natural adversarial attacks in NLG.\n\nOne last thing: the title seems a bit misleading, the work is not about \"detecting\" egregious outputs.", "# Positive aspects of this submission\n\n- This submission explores a very interesting problem that is often overlooked in sequence-to-sequence models research.\n\n- The methodology in Sections 4 and 5 is very thorough and useful.\n\n- Good comparison of last-h with attention representations, which gives good insight about the robustness of each architecture against adversarial attacks.\n\n# Criticism\n\n- In Section 3, even if the \"l1 + projection\" experiments seem to show that generating egregious outputs with greedy decoding is very unlikely, it doesn't definitely prove so. It could be that your discrete optimization algorithm is suboptimal, especially given that other works on adversarial attacks for seq2seq models use different methods such as gradient regularization (Cheng et al. 2018).\nSimilarly, the brute-force results on a simplified task in Appendix B are useful, but it's hard to tell whether the conclusions of this experiment can be extrapolated to the original dialog task.\nGiven that you also study \"o-greedy-hit\" in more detail with a different algorithm in Sections 4 and 5, I would consider removing Section 3 or moving it to the Appendix for consistency." ]
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "rkeN_ZS5Cm", "iclr_2019_HyNA5iRcFQ", "rkgBookDAQ", "SJx0z4aHRX", "SygNarWt6X", "SJgy7QFK37", "BklKQvMAnQ", "SklZWlF9nm", "H1epDSZtaX", "iclr_2019_HyNA5iRcFQ", "iclr_2019_HyNA5iRcFQ" ]
iclr_2019_Hye9lnCct7
Learning Actionable Representations with Goal Conditioned Policies
Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all the underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making -- that are "actionable". These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, eliminating the need for explicit reconstruction. We show how these learned representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.
accepted-poster-papers
To borrow the succinct summary from R1, "the paper suggests a method for generating representations that are linked to goals in reinforcement learning. More precisely, it wishes to learn a representation so that two states are similar if the policies leading to them are similar." The reviewers and AC agree that this is a novel and worthy idea. Concerns about the paper are primarily about the following. (i) the method already requires good solutions as input, i.e., in the form of goal-conditioned policies, (GCPs) and the paper claims that these are easy to learn in any case. As R3 notes, this then begs the question as to why the actionable representations are needed. (ii) reviewers had questions regarding the evaluations, i.e., fairness of baselines, additional comparisons, and additional detail. After much discussion, there is now a fair degree of consensus. While R1 (the low score) still has a remaining issue with evaluation, particularly hyperparameter evaluation, they are also ok with acceptance. The AC is of the opinion that hyperparameter tuning is of course an important issue, but does not see it as the key issue for this particular paper. The AC is of the opinion that the key issue is issue (i), raised by R3. In the discussion, the authors reconcile the inherent contradiction in (i) based on the need of additional downstream tasks that can then benefit from the actionable representation, and as demonstrated in a number of the evaluation examples (at least in the revised version). The AC believes in this logic, but believes that this should be stated more clearly in the final paper. And it should be explained the extent to which training for auxiliary tasks implicitly solve this problem in any case. The AC also suggests nominating R3 for a best-reviewer award.
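A minimal sketch of the actionable-distance idea summarized above and elaborated in the reviews below: two goals are functionally close if the goal-conditioned policies that reach them choose similar actions, measured by a KL divergence averaged over sampled states, and a representation is then trained so that Euclidean distances between embeddings match this quantity. The sketch assumes a Gaussian goal-conditioned policy with fixed standard deviation (so the per-state KL has a closed form) and uses hypothetical names (policy_mean, phi, actionable_distance); it is not the paper's exact objective.

import numpy as np

def actionable_distance(policy_mean, states, g1, g2, sigma=1.0):
    # policy_mean(s, g) is assumed to return the mean action of a learned
    # goal-conditioned Gaussian policy with fixed std sigma; for equal-variance
    # Gaussians the KL reduces to a scaled squared difference of mean actions
    d = 0.0
    for s in states:
        diff = policy_mean(s, g1) - policy_mean(s, g2)
        d += 0.5 * float(np.sum(diff ** 2)) / (sigma ** 2)
    return d / len(states)                            # average over sampled states

def representation_loss(phi, pairs, d_act):
    # squared-error loss pushing Euclidean distances in the learned
    # representation phi(g) to match precomputed actionable distances d_act
    loss = 0.0
    for (g1, g2), target in zip(pairs, d_act):
        pred = float(np.linalg.norm(phi(g1) - phi(g2)))
        loss += (pred - target) ** 2
    return loss / len(pairs)

The fixed-variance Gaussian assumption is only a convenience here: it keeps the per-state KL closed-form so the sketch stays self-contained, while the qualitative point is the one made in the record, namely that the distance is defined by the policies that reach the goals rather than by raw observations.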
train
[ "SyewvpG7sX", "r1lO0VLPhm", "S1lyiF78AX", "HJeOi_mLRQ", "H1ghTFkVR7", "ByeyV10-TX", "HkexM-C7RQ", "Bke-uQ1WR7", "rylLCj0e0Q", "S1gtlIA6TQ", "r1x_0rCT6m", "HJlHiBC6pm", "B1gKDS0TpQ", "HkxTyBRTaX", "r1gAKUmA3Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "In this paper, the authors propose a new approach to representation learning in the context of reinforcement learning.\nThe main idea is that two states should be distinguished *functionally* in terms of the actions that are needed to reach them,\nin contrast with generative methods which try to capture all aspects of the state dynamics, even those which are not relevant for the task at hand.\nThe method of the authors assumes that a goal-conditioned policy is already learned, and they use a Kullback-Leibler-based distance\nbetween policies conditioned by these two states as the loss that the representation learning algorithm should minimize.\nThe experimental study is based on 6 simulated environments and outlines various properties of the framework.\n\nOverall, the idea is interesting, but the paper suffers from many weaknesses both in the framework description and in the experimental study that make me consider that it is not ready for publication at a good conference like ICLR.\n\nThe first weakness of the approach is that it assumes that a learned goal-conditioned policy is already available, and that the representation extracted from it can only be useful for learning \"downstream tasks\" in a second step. But learning the goal-conditioned policy from the raw input representation in the first place might be the most difficult task. In that respect, wouldn't it be possible to *simultaneously* learn a goal-conditioned policy and the representation it is based on? This is partly suggested when the authors mention that the representation could be learned from only a partial goal-conditioned policy, but this idea definitely needs to be investigated further.\n\nA second point is about unsufficiently clear thoughts about the way to intuitively advocate for the approach. The authors first claim that two states are functionally different if they are reached from different actions. Thinking further about what \"functionally\" means, I would rather have said that two states are functionally different if different goals can be reached from them. But when looking at the framework, this is close to what the authors do in practice: they use a distance between two *goal*-conditioned policies, not *state*-conditioned policies. To me, the authors have established their framework thinking of the case where the state space and the goal space are identical (as they can condition the goal-conditioned policy by any state=goal). But thinking further to the case where goals and states are different (or at least goals are only a subset of states), probably they would end-up with a different intuitive presentation of their framework. Shouldn't finally D_{act} be a distance between goals rather than between states?\n\nSection 4 lists the properties that can be expected from the framework. To me, the last paragraph of Section 4 should be a subsection 4.4 with a title such as \"state abstraction (or clustering?) from actionable representation\". And the corresponding properties should come with their own questions and subsection in the experimental study (more about this below).\n\nAbout the related work, a few remarks:\n- The authors do not refer to papers about using auxiliary tasks. Though the purpose of these works is often to supply for additional reward signals in the sparse reward context, then are often concerned with learning efficient representations such as predictive ones.\n- The authors refer to Pathak et al. (2017), but not to the more recent Burda et al. 
(2018) (Large-scale study of curiosity-driven learning) which insists on the idea of inverse dynamical features which is exactly the approach the authors may want to contrast theirs with. To me, they must read it.\n- The authors should also read Laversanne-Finot et al. (2018, CoRL) who learn goal space representations and show an ability to extract independently controllable features from that.\n\nA positive side of the experimental study is that the 6 simulated environments are well-chosen, as they illustrate various aspects of what it means to learn an adequate representation. Also, the results described in Fig. 5 are interesting. A side note is that the authors address in this Figure a problem pointed in Penedones et al (2018) about \"The Leakage Propagation problem\" and that their solution seems more convincing than in the original paper, maybe they should have a look.\nBut there are also several weaknesses:\n- for all experiments, the way to obtain a goal-conditioned policy in the first place is not described. This definitely hampers reproducibility of the work. A study of the effect of various optimization effort on these goal-conditioned policies might also be of interest.\n- most importantly, in Section 6.4, 6.5 and 6.6, much too few details are given. Particularly in 6.6, the task is hardly described with a few words. The message a reader can get from this section is not much more than \"we are doing something that works, believe us!\". So the authors should choose between two options:\n* either giving less experimental results, but describing them accurately enough so that other people can try to reproduce them, and analyzing them so that people can extract something more interesting than \"with their tuning (which is not described), the framework of the authors outperforms other systems whose tuning is not described either\".\n* or add a huge appendix with all the missing details.\nI'm clearly in favor of the first option.\n\nSome more detailed points or questions about the experimental section:\n- not so important, Section 6.2 could be grouped with Section 6.1, or the various competing methods could be described directly in the sections where they are used.\n- in Fig. 5, in the four room environment, ARC gets 4 separated clusters. How can the system know that transitions between these clusters are possible?\n- in Section 6.3, about the pushing experiment, I would like to argue against the fact that the block position is the important factor and the end-effector position is secundary. Indeed, the end-effector must be correctly positioned so that the block can move. Does ARC capture this important constraint?\n- Globally, although it is interesting, Fig.6 only conveys a quite indirect message about the quality of the learned representation.\n- Still in Fig. 6, what is described as \"blue\" appears as violet in the figures and pink in the caption, this does not help when reading for the first time.\n- In Section 6.4, Fig.7 a, ARC happens to do better than the oracle. The authors should describe the oracle in more details and discuss why it does not provide a \"perfect\" representation.\n- Still in Section 6.4, the authors insist that ARC outperforms VIME, but from Fig.7, VIME is not among the best performing methods. Why insist on this one? 
And a deeper discussion of the performance of each method would be much more valuable than just showing these curves.\n- Section 6.5 is so short that I do not find it useful at all.\n- Section 6.6 should be split into the HRL question and the clustering question, as mentioned above. But this only makes sense if the experiments are properly described, as is it is not useful.\n\nFinally, the discussion is rather empty, and would be much more interesting if the experiments had been analyzed in more details.\n\ntypos:\n\np1: that can knows => know\np7: euclidean => Euclidean\n", "The paper presents a method to learn representations where proximity in euclidean distance represents states that are achieved by similar policies. The idea is novel (to the best of my knowledge), interesting and the experiments seem promising. The two main flaws in the paper are the lack of details and missing important experimental comparisons.\n\nMajor remarks:\n\n- The author state they add experimental details and videos via a link to a website. I think doing so is very problematic, as the website can be changed after the deadline but there was no real information on the website so it wasn’t a problem this time.\n\n- While the idea seems very interesting, it is only presented in very high-level. I am very skeptical someone will be able to reproduce these results based only on the given details. For example - in eq.1 what is the distribution over s? How is the distance approximated? How is the goal-conditional policy trained? How many clusters and what clustering algorithm?\n\n- Main missing details is about how the goal reaching policy is trained. The authors admit that having one is “a significant assumption” and state that they will discuss why it is reasonable assumption but I didn’t find any such discussion (only a sentence in 6.4). \n\n- While the algorithm compare to a variety of representation learning alternatives, it seems like the more natural comparison are model-based Rl algorithms, e.g. “Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning”. This is because the representation tries to implicitly learn the dynamics so it should be compared to models who explicitly learn the dynamics. \n\n- As the goal-conditional policy is quite similar to the original task of navigation, it is important to know for how long it was trained and taken into account.\n\n- I found Fig.6 very interesting and useful, very nice visual help.\n\n- In fig.8 your algorithm seems to flatline while the state keeps rising. It is not clear if the end results is the same, meaning you just learn faster, or does the state reach a better final policy. Should run and show on a longer horizon.\n\n", "Thank you for your response and helpful suggestions! The question you raise about the necessity of something beyond a goal-conditioned policy is a valuable one, and we answer it below. We have updated the discussion in Section 4 to reflect the same. We have also added an additional comparison to directly using goal conditioned policies in Section 6.6. \n\nAlthough GCPs trained to reach one state from another are feasible to learn, they possess some fundamental limitations (added discussion in Section 4): they do not generalize very well to new states, and they are limited to solving tasks expressible as just reaching a particular goal. 
The unifying explanation for why ARCs are useful over just a GCP is that the learned representation generalizes better than the GCP - to new tasks and to new regions of the environment. In our experimental evaluation, we show that ARCs can help solve tasks that cannot be expressed as goal reaching (Section 6.6, 6.7) and they enable learning policies on larger regions to which GCPs do not generalize (Section 6.5).\n\nAs the GCP is trained with a sparse reaching reward, it is unaware of possible reward structures in the environment, making it hard to adapt to tasks which are not simple goal reaching, such as the “reach-while-avoid” task in Section 6.6. For this task, following the GCP directly would cause the ant to walk through the red region and incur a large negative reward; a comparison we now explicitly add to Section 6.6. Tasks which cannot be expressed as simply reaching a goal are abundant in real life scenarios such as navigation with preferences or manipulation with costs on quality of motion, and fast learning on such tasks (as ARC does) is quite beneficial. We have explicitly emphasized this discussion in Section 4.1, and made the limitations of simple goal reaching clear at the start of Section 4. \n\nIn the tasks for Section 6.5, the GCP trained on the 2m region (in green) does not achieve high performance on the larger region of 8m, even when finetuned on the environment using the provided reward (Fig 8). However, shaping the reward function using ARCs enables learning beyond the the original GCP, showing that ARCs generalize better to this new region, and potentially can lead to learning progressively harder GCPs via bootstrapping. \n\nWe agree that the discussion would greatly benefit from an introductory paragraph putting things into context, we have added this discussion at the beginning of Section 4. Please let us know if this resolves the issues you brought up. If not, we’re happy to address any other concerns you might have. \n", "Thank you for your response! We are not sure we fully understand your concern about hyperparameter tuning, and were hoping for some additional clarifications regarding this. We have added additional details to the paper regarding hyperparameter tuning in Appendix D. We do not have many hyperparameters to tune for ARCs - the only free parameter is the size of the latent dimension, and for the downstream tasks, we tune the weight of the shaping term for reward shaping and the number of clusters for HRL for each comparison method on each task. \n\nThe size of the latent dimension is selected by performing a sweep on the downstream reward-shaping task for each domain and method. For the reward-shaping task, for each domain and comparison method, the parameter controlling the relative scaling of the shaped reward is selected according to a coarse hyperparameter sweep. The number of clusters for k-means for the hierarchy experiments is similarly selected for each domain and comparison method, although we found that all tasks and methods worked well with the same number of clusters. As you note, this is standard in deep reinforcement learning research, and we are simply following standard practice. Importantly, we give all methods a fair chance by tuning each comparison method separately. While we could certainly adjust hyperparameters differently, we did not find overall that hyperparameters were a major issue for our method. 
We would appreciate if you could clarify whether you are concerned about this issue in particular, and what a reasonably fair alternative might be?\n\nIf you believe that the issues in the paper have been addressed, we would appreciate it if you would revise your original review, or else point out what remaining issues you see with the paper or experimental evaluation.\n", "The authors have done a large effort in addressing a lot of our concerns, particularly regarding experimental details, how to learn goal conditioned policies (GCPs), and the related work section has been improved. The paper is now better and I will increase my score accordingly when appropriate.\n\nHowever, the fact that GCPs need to be learned in advance before the ARC representation can be learned still raises a major concern that needs further clarification, probably with some impact on the introduction and the positioning of the paper.\n\nThe question is the following: if GCPs are learned and authors considers it is rather \"easy\" to do so, why do we need something more? If you take the title of Sections 4.1 and 6.6, why \"leveraging actionable representation as feature for learning policies\" if you already learned policies ? The last paragraph suggests that policies learned in ARC space will generalize \"beyond GCPs\". Since GCPs limitations have not been made clear, this point is still vague.\n\nIn Section 4, the authors suggest three other valuable answers to this question: reward shaping, doing HRL, or clustering in ARC space (by the way, the latter could be used to help the former). My feeling is that treating those three points in addition to the one above is somewhat dispersive, the paper is trying to make to many points, at least a unifying perspective is missing.\n\nTo me, the paper lacks between Section 4 and Section 4.1 an introductiory text which should contain the last paragraph of Section 3 and would motivate the work more clearly with respect to the above issue.\n\nIf the paper does not get finally accepted at ICLR, I would suggest the authors to reconsider their positionning with respect to the perspective above and put forward a clearer message about what actionable representations really bring in a context where you already have \"good enough\" GCPs.\n\nFinally, the perspective mentioned in the discussion of interleaving ARC learning and GCP learning would of course change the picture about the above issue, I appreciate that the authors kept that for their last sentence.\n\n\n\n", "The paper suggests a method for generating representations that are linked to goals in reinforcement learning. More precisely, it wishes to learn a representation so that two states are similar if the policies leading to them are similar.\n\nThe paper leaves quite a few details unclear. For example, why is this particular metric used to link the feature representation to policy similarity? How is the data collected to obtain the goal-directed policies in the first place? How are the different methods evaluated vis-a-vis data collection? The current discussion makes me think that the evaluation methodology may be biased. Many unbiased experiment designs are possible. Here are a few:\n\nA. Pre-training with the same data\n\n1. Generate data D from the environment (using an arbitrary policy).\n2. Use D to estimate a model/goal-directed policies and consequenttly features F. \n3. Use the same data D to estimate features F' using some other method.\n4. 
Use the same online-RL algorithm on the environment and only changing features F, F'.\n\nB. Online training\n\n1. At step t, take action $a_t$, observe $s_{t+1}$, $r_{t+1}$\n2. Update model $m$ (or simply store the data points)\n3. Use the model to get an estimate of the features \n\nIt is probably time consuming to do B at each step t, but I can imagine the authors being able to do it all with stochastic value iteration. \n\nAll in all, I am uncertain that the evaluation is fair.\n", "This looks much better in terms of the details. I think that there's a minor weakness remaining, quite common in many deep learning RL papers: It seems that you are tuning the hyperparameters of the algorithms in the same environments in which they are testing them (you do not specify exactly how the tuning is done). While this is OK for preliminary results, it does have a biasing effect when trying to compare different methods. \n\nSo, for the moment this appears to be weak evidence in favour of this representation, but it is not entirely convincing. ", "Thank you for your interest in our paper and for your insightful comments. \n\nYou are correct that the ARC representation requires us to assume that we can train a goal-conditioned policy in the first place. For our experiments, the GCP was trained with TRPO using a sparse reward (see Section 6.2 and Appendix A.1) -- obtaining such a policy is not especially difficult, and existing methods are quite capable of doing so [1,2,3]. We therefore believe that this assumption is reasonable. \n\nTo ensure the comparisons are fair, every representation learning method that we compare to is trained using the same data (Section 6.2, 6.3). All representations are trained on a dataset of trajectories collected from the goal-conditioned policy, and we have updated the paper with full details of the training scheme (Section 6.3, Appendix A.2, B).\n\nWe also ensure that our experiments fairly account for the data required to train the GCP.\n- In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7. \n- In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller, so ARC incurs no additional sample cost. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even with substantially more samples.\n- In the experiment for learning non goal-reaching tasks (Section 6.6), the ARC representation can be re-used across many different tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of 100 tasks to demonstrate this amortization, and will update the paper with results. \n\n[1] Nair, Pong, Dalal, Bahl, Lin, and Levine. Visual reinforcement learning with imagined goals.NIPS 2018\n[2] Pong, Gu, Dalal, and Levine. Temporal difference models: Model-free deep rl for model-based control. ICLR 2018\n[3] Andrychowicz, Wolski, Ray, Schneider, Fong, Welinder, P., McGrew, B., Tobin, J., Abbeel, P., and Zaremba, W. (2017). NIPS 2017\n", "Thanks to all for the detailed reviews and review responses.\nI could summarize the reviews as: interesting ideas; needs evaluations that take into account original construction of the goal-directed policies; more details. 
The authors have provided detailed responses.\nA revised version is available; see the \"show revisions\" link, for either the revised PDF, or a comparison that highlights the revisions (I can recommend this).\n\nReviewers (and anonymous commenter), your further thoughts would be most appreciated.\n-- area chair\n", "Thank you for your insightful comments and suggestions! We have made many changes based on the comments provided by reviewers, which are summarized below. We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to revise their score or request additional changes that would alleviate their concerns.\n\nNew comparisons: \nWe have added two more comparisons as suggested - with model based RL methods ([5] Nagabandi et al) and learning representations via inverse dynamics models ([4] Burda et al). These have been described in Section 6.3 and added to plots in Fig 7, 8, 10. We have also added a new comparison to learning from scratch for the reward shaping experiment (Section 6.5, Fig 7). \n\nLack of details: \nWe apologize for the lack of clarity in the submission! We have updated the main text and added an appendix with additional details of the ARC representation and the experimental setup: how a goal-conditioned policy is trained (Sec 6.2, Appendix A.1), how the ARC representation is learned (Sec 6.2, Appendix A.2) , and how the methods are evaluated on downstream applications (Sec 6.5-7, Appendix A.3-6). We increased analysis of the performance of ARC and comparison methods for all the downstream applications (Sec 6.5-6.7), and added a discussion of how all methods are trained (Sec 6.3, Appendix A.2, B)\n\nRequirement for goal-conditioned policy:\nThe ARC representation is extracted from a goal-conditioned policy (GCP), requiring us to assume that we can train such a GCP. This assumption was explicit in our submission, but we have emphasized it more now by editing Section 1 and Section 3. For our experiments, the GCP was trained with existing RL methods using a sparse task-agnostic reward (Section 6.2, Appendix A.1) -- obtaining such a policy is not especially difficult, and existing methods are quite capable of doing so [1,2,3]. We therefore believe that this assumption is reasonable. We also ensure that our experiments fairly account for the data required to train the GCP.\n- In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7. \n- In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even when provided with substantially more samples.\n- In learning non goal-reaching tasks (Section 6.6), ARC representation can be re-used across many tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of tasks to demonstrate this, and will update the paper.\n", "Find responses to particular comments below: \nRelated work:\n-> We cite and discuss all the papers mentioned in the related work section (Section 5). We additionally added comparison (Fig 7,8,10) to using inverse dynamics models and model-based RL methods, as discussed above. 
\n\n“Shouldn't finally D_{act} be a distance between goals rather than between states?”\n> D_{act} is indeed the actionable distance between goals, but given that the goal and the state space are the same the learned representation can be effectively used as a state representation as seen in Section 6.6.\n\n“in Fig. 5, in the four room environment, ARC gets 4 separated clusters. How can the system know that transitions between these clusters are possible?”\n-> We have added a discussion in Section 6.6 to clarify this. We use model free RL to train the high level policy which directly outputs clusters as described in Section 4.4. This high level policy does not need to explicitly model the transitions between clusters, that is handled by the low level goal reaching policy, and the high-level policy is trained model-free. \n\n“Indeed, the end-effector must be correctly positioned so that the block can move. Does ARC capture this important constraint?”\n-> ARC does not completely ignore the end effector position, this is evidenced from the fact that the blue region in Fig 6 is not a point but is an entire area. What ARC captures is that moving the block induces a greater difference in actions than inducing the arm. Moving the block to different positions requires the arm to move to touch the block and push it to the goal, while moving the arm to different positions can be done by directly moving it to the desired position. While both things are captured, the block is emphasized over the end-effector.\n\n“In Section 6.4, Fig.7 a, ARC happens to do better than the oracle. why?”\n-> The oracle comparison is a hand-specified reward shaping - we have updated Section 6.5 and Figure 7 to make this point clear. It is likely that the ARC representation is able to find an even better reward shaping, although the difference is fairly small. \n\n“from Fig.7, VIME is not among the best performing methods. Why insist on this one?”\n-> We intended to emphasize that ARC is able to outperform a method that is purely designed for better exploration, not just other methods for representation learning. The discussion in Section 6.5 has been appropriately altered.\n\n[1] Nair, Pong, Dalal, Bahl, Lin, and Levine. Visual reinforcement learning with imagined goals.NIPS 2018\n[2] Pong, Gu, Dalal, and Levine. Temporal difference models: Model-free deep rl for model-based control. ICLR 2018\n[3] Andrychowicz, Wolski, Ray, Schneider, Fong, Welinder, P., McGrew, B., Tobin, J., Abbeel, P., and Zaremba, W. (2017). NIPS 2017\n[4] Burda, Edwards, Pathak, Storkey, Darrell, and Efros. Large-scale study of curiosity-driven learning. arXiv preprint\n[5] Nagabandi, Kahn, Fearing and Levine. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. ICRA 2018\n", "Thank you for your insightful comments and suggestions! We have made many changes based on the comments provided by reviewers, which are summarized below. We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to request additional changes that would alleviate their concerns.\n\nNew comparisons: We have added a model-based RL algorithm planning with MPC (Nagabandi et al.), as a comparison to learning features. On the “reach-while-avoid” task (Fig 8), model-based RL struggles compared to a model-free policy with ARC because of challenges such as model-bias, limited exploration and short-horizon planning. 
The updated plot and corresponding discussion have been added to Section 6.6. We have also added a comparison to representations from inverse dynamics models (Burda et al), described in Section 6.3.\n\nLack of details: We apologize for the lack of clarity in the submission! We have updated the main text and added an appendix with additional details of the ARC representation and the experimental setup: how a goal-conditioned policy is trained (Sec 6.2, Appendix A.1), how the ARC representation is learned (Sec 6.2, Appendix A.2) , and how the methods are evaluated on downstream applications (Sec 6.5-7, Appendix A.3-6). We have added a discussion of how all comparisons are trained, and measures taken to ensure fairness (Sec 6.3, Appendix A.2, B)\n\nRequirement for goal-conditioned policy: The ARC representation is extracted from a goal-conditioned policy (GCP), requiring us to assume that we can train such a GCP. This assumption was explicit in our submission, but we have emphasized it more now by editing Section 1 and Section 3. For our experiments, the GCP was trained with existing RL methods using a sparse task-agnostic reward (Section 6.2, Appendix A.1) -- obtaining such a policy is not especially difficult, and existing methods are quite capable of doing so [1,2,3]. We therefore believe that this assumption is reasonable, and have added this to the paper in Section 3. \n\nWe also ensure that our experiments fairly account for the data required to train the GCP.\n- In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7. \n- In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller, so ARC incurs no additional sample cost in comparison. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even when provided with substantially more samples.\n- In the experiment for learning non goal-reaching tasks (Section 6.6), the ARC representation can be re-used across many different tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of 100 tasks to demonstrate this amortization, and will update the paper with results. ", "Find responses to particular questions and comments below: \n“Should run and show on a longer horizon.”\n-> We have updated Figure 8 accordingly. All methods converge to the same average reward.\n\n“As the goal-conditional policy is quite similar to the original task of navigation, it is important to know for how long it was trained and taken into account.”\n-> We have added these details in Appendix A.1. It is important to note that for the task in Section 6.6, simply using a goal reaching policy would be unable to solve the task, since it has no notion of other rewards, like regions to avoid (shown in red in Fig 8), and would pass straight through the region. \n\n“eq.1 what is the distribution over s?” \n-> It is the distribution over all states over which the goal-conditioned policy is trained. This is done by choosing uniformly from states on trajectories collected with the goal-conditioned policy as described in Section 6.2 and Appendix A.2. 
\n\n“ How is the distance approximated?”\n-> In our experimental setup, we parametrize the action distributions of GCPs with Gaussian distributions - for this class of distributions, the KL divergence, and thus the actionable distance, can be explicitly computed (Appendix A.1).\n\n“How many clusters and what clustering algorithm?”\n-> We use k-means for clustering, with distance in ARC space as the metric. We perform a hyperparameter sweep over the number of clusters for each method, and thus varies across tasks and methods. We have added this clarification to Section 4.4 and Section 6.6. \n\n\nThe author state they add experimental details and videos via a link to a website.\n> OpenReview does not provide a mechanism for submitting supplementary materials. Providing supplementary materials via an external link is the instruction provided by the conference organizers -- we would encourage the reviewer to check with the AC if they are concerned.\n\n[1] Nair, Pong, Dalal, Bahl, Lin, and Levine. Visual reinforcement learning with imagined goals.NIPS 2018\n[2] Pong, Gu, Dalal, and Levine. Temporal difference models: Model-free deep rl for model-based control. ICLR 2018\n[3] Andrychowicz, Wolski, Ray, Schneider, Fong, Welinder, P., McGrew, B., Tobin, J., Abbeel, P., and Zaremba, W. (2017). NIPS 2017\n[4] Burda, Edwards, Pathak, Storkey, Darrell, and Efros. Large-scale study of curiosity-driven learning. arXiv preprint\n[5] Nagabandi, Kahn, Fearing and Levine. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. ICRA 2018\n", "Thank you for your insightful comments and suggestions! We have made many changes based on the comments provided by reviewers, which are summarized below. We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if they would like to revise their score or request additional changes that would alleviate their concerns.\n\nNew comparisons:\n We have added two more comparisons - with model based RL methods ([1] Nagabandi et al) and learning representations via inverse dynamics models ([2] Burda et al). These have been described in Section 6.3 and added to plots in Fig 7, 8, 10. We have also added a new comparison to learning from scratch for the reward shaping experiment (Section 6.5, Fig 7). \n\nLack of details:\n We apologize for the lack of clarity in the submission! We have updated the main text and added an appendix with additional details of the ARC representation and the experimental setup: goal-conditioned policy (GCP) training (Sec 6.2, Appendix A.1), ARC representation learning (Sec 6.2, Appendix A.2) , downstream evaluation (Sec 4, 6.5-6.7, Appendix A.3-6). We have added a discussion of how all comparisons are trained, and measures taken to ensure fairness (Sec 6.3, Appendix A.2, B). We have clarified the algorithm and task descriptions in Section 4 and Section 6. \n\nFairness of comparisons: \nTo ensure the comparisons are fair, every comparison representation learning method is trained using the same data, and we have updated the paper to emphasize this (Section 6.2, 6.3). All representations are trained on a dataset of trajectories collected from the goal-conditioned policy, similar to the (A) scheme proposed by AnonReviewer1. 
We have updated the paper to include full details of the training scheme for all methods (Section 6.3, Appendix A.2, B).\n\nWe also ensure that our experiments fairly account for the data required to train the GCP.\n- In the generalization experiment (Section 6.4), all methods initialize behaviour from the GCP, as policies trained from scratch fail, a new comparison we have added to Figure 7. \n- In the hierarchy experiment (Section 6.7), all representations use the GCP as a low-level controller, so ARC incurs no additional sample cost. Two comparisons (TRPO, Option Critic) which do not use the GCP make zero progress, even with substantially more samples.\n- In the experiment for learning non goal-reaching tasks (Section 6.6), the ARC representation can be re-used across many different tasks without retraining the GCP, amortizing the cost of learning the GCP. We plan to add an experimental comparison on a family of 100 tasks to demonstrate this amortization, and will update the paper with results. \n\nFind responses to particular questions and comments below: \n\n“How is the data collected to obtain the goal-directed policies in the first place?”\n-> We train a goal-conditioned policy with TRPO using a task-agnostic sparse reward function. We have updated the paper to reflect this (Section 6.2, Appendix A.1).\n\n“why is this particular metric used to link the feature representation to policy similarity?”\n-> We add an explicit discussion of this in Section 3. We link feature representation to policy similarity by this metric, because it directly captures the notion that features should represent elements of the state which directly affect the actions. The KL divergence between policy distributions allows us to embed goal states which induce similar actions similarly into feature space. \n\n\n[1] Nagabandi, Kahn, Fearing and Levine. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning. ICRA 2018\n[2] Burda, Edwards, Pathak, Storkey, Darrell, and Efros. Large-scale study of curiosity-driven learning. arXiv preprint\n", "This is a nicely written paper, with some interesting and natural ideas about learning policy representations. Simplifying, the main idea is to consider two states $s_1,s_2$ similar if the corresponding policies $\\pi_1,\\pi_2$ for reaching $s_1, s_2$ are similar. \n\nHowever, it is unclear how this idea can be really applied when the optimal goal-directed policies are unknown. The algorithm, as given, relies on having access to a simulator for learning those policies in the first place. This is not necessarily a fatal fault, as long as the experiments compare algorithms in a fair and unbiased manner. How were the data collected in the first place for learning the representations? Was the same data used in all algorithms?\n\n" ]
[ 6, 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_Hye9lnCct7", "iclr_2019_Hye9lnCct7", "H1ghTFkVR7", "HkexM-C7RQ", "r1x_0rCT6m", "iclr_2019_Hye9lnCct7", "HkxTyBRTaX", "r1gAKUmA3Q", "iclr_2019_Hye9lnCct7", "SyewvpG7sX", "SyewvpG7sX", "r1lO0VLPhm", "r1lO0VLPhm", "ByeyV10-TX", "iclr_2019_Hye9lnCct7" ]
iclr_2019_HyeFAsRctQ
Verification of Non-Linear Specifications for Neural Networks
Prior work on neural network verification has focused on specifications that are linear functions of the output of the network, e.g., invariance of the classifier output under adversarial perturbations of the input. In this paper, we extend verification algorithms to be able to certify richer properties of neural networks. To do this we introduce the class of convex-relaxable specifications, which constitute nonlinear specifications that can be verified using a convex relaxation. We show that a number of important properties of interest can be modeled within this class, including conservation of energy in a learned dynamics model of a physical system; semantic consistency of a classifier's output labels under adversarial perturbations and bounding errors in a system that predicts the summation of handwritten digits. Our experimental evaluation shows that our method is able to effectively verify these specifications. Moreover, our evaluation exposes the failure modes in models which cannot be verified to satisfy these specifications. Thus, emphasizing the importance of training models not just to fit training data but also to be consistent with specifications.
accepted-poster-papers
This paper proposes verification algorithms for a class of convex-relaxable specifications to evaluate the robustness of neural networks under adversarial examples. The reviewers were unanimous in their vote to accept the paper. Note: the remaining score of 5 belongs to a reviewer who agreed to acceptance in the discussion.
val
[ "BJe6BOOoRQ", "BJeG6VW537", "rye8eGZ5CX", "rJlto34nam", "r1l4NRE3p7", "HJgffAVhpX", "ryej1AEhT7", "H1e5Sp42TX", "BJgDW64npX", "HygxxT42p7", "rJeS02Vhpm", "HkxsOnE2p7", "ryeLf5EhaX", "ryepAig167", "HklfxJG93Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for expanding the explanation on the high level idea of this paper. To me, these high level ideas matter much more than technical derivations or extensive experimental results. I think this paper can be accepted.", "- Summary: This paper proposes verification algorithms for a class of convex-relaxable specifications to evaluate the robustness of the network under adversarial examples. Experimental results are shown for semantic specifications for CIFAR, errors in predicting sum of two digits and conservation of energy in a simple pendulum. \n\n- Clarity and correctness: It is a well-written and well-organized paper. Notations and expressions are clear. The math seems to be correct. \n\n- Significance: The paper claims to have introduced a class of convex-relaxable specifications which constitute specifications that can be verified using a convex relaxation. However, as described later in the paper, it is limited to feed-forward neural networks with ReLU and softmax activation functions and quadratic parts (it would be better to tone down the claims in the abstract and introduction parts.)\n\n- Novelty: The idea of accounting for label semantics and quadratic expressions when training a robust neural network is important and very practical. This paper introduces some nice ideas to generalize linear verification functions to a larger class of convex-relaxable functions, however, it seems to be more limited in practice than it claims and falls short in presenting justifying experimental results.\n\n** More detailed comments:\n\n** The idea of generalizing verifications to a convex-relaxable set is interesting, however, applying it in general is not very clear -- as the authors worked on a case by case basis in section 3.1. \n\n** One of my main concerns is regarding the relaxation step. There is no discussion on the effects of the tightness of the relaxation on the actual results of the models; when in reality, there is an infinite pool of candidates for 'convexifying' the verification functions. It would be nice to see that analysis as well as a discussion on how much are we willing to lose w.r.t. to the tightness of the bounds -- especially when there is a trade-off between better approximation to the verification function and tightness of the bound. \n\n** I barely found the experimental results satisfying. To find \"reasonable\" inputs to the model, authors considered perturbing points in the test set. However, I am not sure if this is a reasonable assumption when there would be no access to test data points when training a neural network with robustness to adversarial examples. And if bounding them is a very hard task, I am wondering if that is a reasonable assumption to begin with.\n\n** It is hard to have a sense of how good the results are in Figure 1 due to lack of benchmark results (I could not find them in the Appendix either.)\n\n** The experimental results in section 4.4 are very limited. I suggest that the authors consider running more experiments on more data sets and re-running them with more settings (N=2 for digit sums looks very limited, and if increasing N has some effects, it would be nice to see them or discuss those effects.)\n\n** Page 2, \"if they do a find a proof\" should be --> \"if they do find a proof\" \n** Page 5, \"(as described in Section (Bunel et al., 2017; Dvijotham et al., 2018)\", \"Section\" should be omitted.\n\n******************************************************\nAfter reading authors' responses, I decided to change the score to accept. 
It became clear to me that this paper covers broader models than I originally understood from the paper. Changing the expression to general forms was a useful adjustment in understanding of its framework. Comparing to other relaxation techniques was also an interesting argument (added by the authors in section H in the appendix). Adding the experimental results for N=3 and 4 is reassuring.\nOne quick note: I think there should be less referring to papers on arxiv. I understand that this is a rapidly changing area, but it should not become the trend or the norm to refer to unpublished/unverified papers to justify an argument.", "Dear Paper915 Authors,\n\nThanks for clarifying my concerns and adding new materials on the toy example and scalability to the paper. I am okay with this paper now if the AC wants to accept it.\n\nBTW please make sure to also add the detailed structure of each evaluated model to the appendix, or release source code with model specifications.\n\nThanks,\nPaper915 AnonReviewer2\n", "Comment 2: “Report the details on how they solve the relaxed convex problem, and report verification time. What is the largest scale of network that the algorithm can handle within a reasonable time?”\n\nAnswer 2: Thanks for the suggestion. We have added Appendix E (Scaling and Implementation) where we explain how we solved the relaxed convex problem. For CIFAR 10 semantic specification and downstream task specification (since all constraints are linear) we have used the open source LP solver GLOP (https://developers.google.com/optimization/lp/glop) and on average this takes 3-10 seconds per data point on a desktop machine (with 1 GPU and 8G of memory) for the largest network we handled. This network consists of 4 convolutional and 3 linear layers comprising in total 860000 parameters. For the conservation of energy, we used SDP constraints - this relaxation scales quadratically with respect to the input and output dimension of the network. To solve for this set of constraints we used the CVXOPT solver (https://cvxopt.org/) accessed via the python interface CVXPY (http://www.cvxpy.org/), which is slower than GLOP, and we have only tested this on a small network consisting of two linear layers with a total of 120 parameters. However, we expect that with stronger SDP solvers (like Mosek - https://www.mosek.com/) or by using custom scalable implementations of SDPs (for example, the techniques described in https://people.eecs.berkeley.edu/~stephentu/writeups/first-order-sdp.pdf), we will be able to scale to larger problem instances - we plan to pursue this in future work.", "Comment 5: “I barely found the experimental results satisfying. To find \"reasonable\" inputs to the model, authors considered perturbing points in the test set. However, I am not sure if this is a reasonable assumption [...]”\n\nAnswer 5: We do make the assumption that for the verification task, we should be given both a pre-trained network and a held-out set to do verification on.\n\nIt’s true that in the ideal case, we would be able to verify for all possible inputs in the true distribution; however, in practice this is infeasible. Therefore, verification on a held-out set is considered a suitable proxy in the same way that accuracy on a validation/test data set is considered a suitable proxy, and this has been a way to measure robustness in both verification and adversarial communities (see [Dvijotham et al., 2018; Bunel et al., 2017; Athalye et al., 2018; Carlini & Wagner, 2017b; Uesato et al. 
(2018); Madry et al., 2017; Tjeng & Tedrake, 2017; Cheng et al., 2017; Huang et al., 2017; Ehlers, 2017; Katz et al., 2017; Weng et al., 2018; Wong & Kolter, 2018]). Both prior work and our experiments in Section 4.3 and 4.4 indicate that robustness on the test points is informative. \n\nComment 6: “It is hard to have a sense of how good the results are in Figure 1 due to lack of benchmark results.”\n\nAnswer 6: The reason we did not include comparisons to benchmark results from literature is that, to the best of our knowledge, this is the first paper which attempts to verify non-linear specifications as presented in our experiments. \nTo resolve the lack of existing benchmarks, we have attempted to come up with strong baseline results (blue line in Figure 1) to compare with our verification results (green line in Figure 1). The strong baselines we’ve chosen are:\nStronger adversarial attacks by having 20 random seeds as initial states for projected gradient descent. \nFor the pendulum, we note that we can discretize the entire input space (as it lies on the circle). By discretizing the input space into smaller subspaces to do verification on - this is as close as we can get to the true bound. Thus we can treat the exhaustive verification (blue line) as pseudo ground truth.\n\nOne thing we want to emphasize again is that since there are no baselines that can do better than the blue line (adversarial bound), the difference between the green and blue line gives us an accurate measure of how suboptimal our algorithm is. \n\nAn example, to see how good the results are, is the pendulum (third picture in Figure 2). Here, we see that at perturbation radius delta=0.01, the exhaustive verification gives exactly the same bound as our verification scheme. This means that for this perturbation radius we have essentially found the true percentage of the test set which satisfies the specification. As we increase this perturbation radius to delta=0.06 we find that the difference between the verification bound and exhaustive verification bound is 22%. We had 27000 points in our test set, this means the number of points where we are unable to prove is 5940, but for the rest (21060 points) we are either able to find an adversarial attack which is successful or a proof that specification is satisfied via verification.\n\nComment 7: [The experimental results are very limited. Suggestion to run more experiments on more data sets and re-running them with more settings. N=2 for digit sums looks limited.]\n\nAnswer 7: We thank the reviewer for this comment. We extended the results for the digit sum problem as suggested. We want to also respectfully note that the number of datasets considered in this paper is in line with other papers in the space of verification and adversarial robustness [Madry et al., 2017; Athalye et al., 2018; Uesato et al. (2018);Carlini & Wagner, 2017b; ;Dvijotham et al., 2018; Bunel et al., 2017; Tjeng & Tedrake, 2017; Cheng et al., 2017; Huang et al., 2017; Ehlers, 2017; Katz et al., 2017; Weng et al., 2018; Wong & Kolter, 2018], and that the compute used to perform all experiments in this paper is already extensive (e.g. the exhaustive verification for the pendulum baseline).\nWe have now added experimental results for the digit sum problem for N=3 and N=4 in Appendix H.1. In brief: As expected the verification bound becomes looser for larger N, since error accumulates when summing up more digits, however with increasing N performance stabilizes. 
We have also added Appendix I on what we call entropy specification which we referred to in reply to your first comment about significance (see comment on entropy specification).\n", "Comment 3: The idea of generalizing verifications to a convex-relaxable set is interesting, however, applying it in general is not very clear.\n\nAnswer 3: The framework outlined in the paper is general, however, for verification to be meaningful the bound is required to be sufficiently tight. We have hence approached this on a case by case basis, as getting the convex hull of an arbitrary set is hard (which is what would ensure the tightest bound). A trivial recipe could be given by a bounding box whose bounds are given by the upper and lower bounds of the sets, but in general this is not sufficiently tight. For sufficiently tight convex-relaxations, we need to make use of functional constraints which are specific to the function itself. There has been a lot of work in approximation algorithms (see http://www.designofapproxalgs.com/book.pdf for a general overview) which try to give provable guarantees by approximating this problem. The cases we have chosen to focus on, namely semantics; physics and downstream specifications, are ones we think are important, thus we have chosen to develop convex-relaxations for these specific specifications. In addition, we have included an extra example in Appendix I regarding entropy specifications please refer to Answer 1 for more details.\n\nComment 4: [One of my main concerns is regarding the relaxation step. There is no discussion on the effects of the tightness of the relaxation [...] especially when there is a trade-off between better approximation to the verification function and tightness of the bound.]\n\nAnswer 4: In Figure 1, we have attempted to address the tightness of the relaxation. Here, we show two bounds: adversarial bound (blue line) and verification bound (green line). One thing to make clear is that there exists no verification algorithm which can have a bound past the adversarial bound. In other words, the difference between the two bounds is a strong indicator of how tight our verification algorithm is. An example is the first plot of Figure 1. Here, the difference between the adversarial bound and the verification bound is at most only 0.9% for CIFAR 10 test set. Intuitively, this means that we failed to find a provable guarantee for only 90 points out of 10000 in the test set. For the other 19910 we are able to either find an adversarial example which violates the specification or a provable guarantee that the specification is satisfied. \n\nIn all cases better approximations of the verification function should give tighter bounds, we don’t expect a trade-off between the two. However, one trade-off which is important in verification is between the computational costs and the quality of the approximation of the verification function. \n\nAdditionally we agree that it is desirable to understand how different relaxations can affect the tightness of the algorithm. To address this we added Appendix H (Comparison of Tightness), where we compare two different relaxation techniques for the physics based specification (conservation of energy). In brief: we consider two different relaxations to the quadratic function, one using semi-definite programming (SDP) techniques and one using linear programming. 
We find that the SDP relaxation does give tighter bounds, but comes at additional computational costs.\n", "We thank the reviewer for the detailed feedback and criticism. We made adjustments to the paper to address all your concerns and detail the changes below. We hope the changes clarify the concerns regarding the generality of our algorithm and the requested additional experiments.\n\nComment 1: [The method is limited to feed-forward neural networks with ReLU and softmax activation functions and quadratic parts (it would be better to tone down the claims in the abstract and introduction parts.)]\n\nAnswer 1: We want to clarify that although we have demonstrated most of the results on ReLU feedforward neural networks, it is not limited to such networks. The feedforward nature is indeed required but the ReLU activation function can be replaced with arbitrary activation functions, for example tanh or sigmoid activations (please see https://arxiv.org/abs/1803.06567 for more details). We initially used the ReLU example for clarity of presentation, as a result, maybe the generality of our result is not clear. To address this we have updated Sections 3.1, 3.3 and 3.4. Specifically, we changed the equation: X_{k+1} = ReLU(W_k x_k + b_k) to X_{k+1} = g_k(W_k x_k + b_k). The only change required going from the ReLU equation to the more general equation is the way the bounds ([l_k, u_k]) are propagated through the network and the relaxations applied on the activation functions. For a more general overview of the bound propagation techniques and relaxation of arbitrary activation functions we refer to the following papers https://arxiv.org/abs/1803.06567, https://arxiv.org/pdf/1610.06940.pdf, https://arxiv.org/abs/1805.12514 .\n\nWe would also like to clarify that this paper provides a framework for general nonlinear specifications that are convex-relaxable. Although we presented softmax and quadratic specifications this algorithm is not limited to these two cases. To demonstrate this further, we have added Appendix I where we find a convex-relaxation to the entropy of a softmax distribution from a classifier network and use it to verify that a given network is never overly confident. In other words; we would like to verify that a threshold on the entropy of the class probabilities is never violated. The specification at hand is the following:\nF(x, y) = E + \\sum_i exp(y_i)/(\\sum_j exp(y_j)) log(exp(y_i)/(\\sum_j exp(y_j))) <=0\nwhich is a non-convex function of the network outputs. \n\nComment 2: Novelty: The idea of accounting for label semantics and quadratic expressions when training a robust neural network is important and very practical. This paper introduces some nice ideas to generalize linear verification functions [...] it seems to be more limited in practice than it claims and falls short in presenting justifying experimental results.\n\nAnswer 2: We emphasize that our method is not limited to quadratic expressions and label semantics and refer to Answer 1, above, for comments regarding the generality. Regarding your concerns wrt the novelty of the approach: as far as we are aware there is no prior paper considering the problem of verifying nonlinear specifications for neural networks. Regarding the presentation of results: We refer to Answer 6, below, for a detailed justification of our experimental procedure. Additionally we want to highlight that our verification tool was a useful diagnostic in finding the failure modes of pendulum and CIFAR10 models. 
An example is that when we are able to verify that the pendulum model satisfies energy conservation more - the long term dynamics of the model always reaches a stable equilibrium. \n", "From your comments it seems that there was a misunderstanding regarding the general applicability of our method. We have updated the paper and provided extensive additional explanations below (please also consider our reply to all authors). To address the comments you have made:\n\nComment 1: Is it critical that the non-linear verifications need to be convex relaxable. Recently, people have observed that a lot of nonconvex optimization problems also have good local solutions. Is it true that the convex relaxable condition is only required for provable algorithm? As the neural network itself is nonconvex, constraining the specification to be convex is a little awkward to me.\n\nAnswer 1: For verification purposes it is indeed critical that we have either the global optimum value or an upper bound on the global optimum value. Verification of neural networks tries to find a proof that the specification, F(x, y) <= 0, is satisfied for all x and y within a bounded set (https://arxiv.org/abs/1803.06567). Note that this condition is equivalent to max_{x,y} F(x,y) <= 0, thus if we have the global maximum - the problem is solved. However, to find the global optimum value is often NP-hard even for ReLU networks (https://arxiv.org/abs/1705.01320). We can try to find a lower bound to the global optimum value by doing gradient descent to maximize the value of F(x,y). This is called a falsification procedure (as explained in Section 3.1). However, even if the value found is not greater than zero this is not sufficient to give a guarantee that there exists no x and y which can violate the specification, as the value is always a lower bound to the global optimum. Thus, we are motivated to find provable upper bounds on max F(x, y), ie, a number U such that F(x, y) <= U for all x, y in the input and output domain. If this U <=0 then we have found a guarantee that the specification is never violated. In order to do this, we study convex relaxations of this problem that enable computation of provable upper bounds.\n\nWe also do not require the specification to be convex (for example the physics specification isn’t if Q is not a semi-definite matrix), the specification can be some complicated nonlinear function - we just require that it be convex-relaxable, which is a weaker requirement. We slightly rephrased Section 3 to make this point more obvious.\n\nComment 2: The paper contains the example specification functions derived for three specific purpose, I'm wondering how broad the proposed technique could be. Say if I need my neural network to satisfy other additional properties, is there a general recipe or guideline. If not, what's the difficulty intuitively speaking?\n\nAnswer 2: This proposed technique is capable handling all specifications which are convex-relaxable, i.e. any specification for which the set of values that (x, y, F(x, y)) can take can be bounded by a convex set. The difficulty here is always getting a tight convex set on the specification you would like to verify for. There is a lot of literature in finding tight convex sets (https://eng.uok.ac.ir/mfathi/Courses/Advanced%20Eng%20Math/Linear%20and%20Nonlinear%20Programming.pdf), we have chosen to demonstrate the generality of our framework with three specifications that we deem to be important. 
In general any convex-relaxable specification can be treated in the same manner as in the paper but, of course, finding a tight convex set should be done on a case-by-case basis. We added an additional example, going beyond quadratic constraints, in Appendix I. Here we verify that a given classifier is never overly confident; in other words, we would like to verify that a threshold on the entropy of the class probabilities is never violated.\n\nWe would also like to emphasize that this paper is aimed at post-hoc verification, where we consider a scenario in which we are given a pre-trained neural network. Thus this is different from training your neural network to satisfy desirable properties; it is rather a safety measure before the network is put into deployment for real world applications. \n\nComment 3: [The reviewer also commented on a lack of commas]\nCould you please expand upon this point?\n", "Comment 5: “Is it possible to show how loose the convex relaxation is for a small toy example? For example, the specification involving quadratic function is a good candidate.”\n\nAnswer 5: We have now added a section in Appendix H.2 (Tightness with a Toy Example), where we consider a toy example in which the specification is: F(x,y) = x^2 - y^2 - 4 <= 0. The variables x and y are from two interval sets (-b_x, b_x) and (-b_y, b_y) respectively. Throughout the toy example we keep b_y=9. In Appendix H.2, we have added Figure 8, where the plot on the left shows the true set which satisfies the specification and we also show our convex relaxed set using our relaxation. The convex relaxed set is simply a box around the true set which is bounded hyperbolically (shown in green). In the same figure, with the plot on the right, we also show the tightness of our relaxation as the interval set increases in length, specifically as we increase b_x. What we find is that our relaxation becomes looser linearly with respect to the increase in interval length.\n\nMinor Comments:\n[In (4), k is undefined]\nThanks for spotting this, k was indeed a typo; this is now changed to n. \n\n[In (20), I am not sure if it is equivalent to the four inequalities after (22). There are 4 inequalities after (22) but only 3 in (20). ]\nThere were only three constraints in equation 20 as we enforce X - aa^T to be a symmetric semi-definite matrix. The constraints X_ij - l_j a_i - u_i a_j + l_j u_i >= 0 and X_ij - l_i a_j - u_j a_i + l_i u_j >= 0 become the same constraint when X_ij is symmetric. Thanks for spotting this; we have made this clearer in the appendix. In fact, this allowed us to spot that we had also missed some constraints that should be enforced; these have now been added.\n", "Comment 4: [For the Mujoco experiment, I am not sure how to interpret the delta values in Figure 1. Is the delta trivial?]\n\nAnswer 4: Thanks for pointing this out, we should have been clearer about this. We have added these details to the appendix. For completeness we also list them here.\n- The pendulum model takes [cos(theta), sin(theta), v] as input. Here theta is the angle of the pendulum and v is the scaled angular velocity (angular velocity / 10) - the data is generated such that the initial angular velocity lies between (-10, 10); by scaling this with 10 we make sure [cos(theta), sin(theta), v] lies in a box where each side is bounded by [-1, 1]. \n- The pendulum setup is the Pendulum from the DeepMind Control Suite (https://arxiv.org/abs/1801.00690). The pendulum is of length 0.5m and hangs 0.6m above ground. \n- When the perturbation radius is 0.01. 
Since the pendulum is of length 0.5m, the perturbation in space is about 0.005 m in the x and y direction. The perturbation of the angular velocity is true_angular_velocity +- 0.1 radians per second (since the input is a scaled angular velocity by a factor of 10). The largest delta value we verified for was 0.06, this means the angular velocity can change upto 0.6 radians per second which is about ⅕ of a full circle, thus this is not a trivial perturbation.\n", "Comment 3: [Detailed network architecture (Model A, Model B). Comment on the scalability of the proposed method]\n\nAnswer 3: For the CIFAR 10 Semantic Specification, Model A and Model B are identical in terms of network architecture and consist of 4 convolutional and 3 linear layers interleaved with ReLU functions and 860000 parameters. For the MNIST Downstream Task the models consist of two linear layers interleaved with ReLU activation and 15880 parameters - which was enough to get good adversarial accuracy. For the pendulum physics specification we used a two layer neural network with ReLU activations and in toal 120 parameters. Regarding the scalability please see our previous comment.\n", "Comment 1: [The authors should distinguish the proposed technique to techniques from [1] and [2] which could be used to convert some non-linear specifications to linear specifications.]\n\nAnswer 1: We thank the reviewer for highlighting this point, we have now added a paragraph in the section ‘Specifications Beyond Robustness’ to distinguish between existing techniques and convex relaxable specifications. The reviewer is correct in pointing out that some non-linearities can indeed be linearized through the use of different element-wise activation functions. However, in terms of generality as the reviewer mentioned, this mechanism does not work in many cases - an example is the softmax function, which needs every input in the layer to give it’s output. In this particular case, it is a non-separable nonlinear function and current literature does not support verification with such non-linearities.\n", "We thank all reviewers for the detailed reviews and thoughtful remarks. We have addressed all concerns in an updated version of the paper and you can find responses to your questions below.\n\nWe would like to clarify two points that came up in multiple reviews: \n1) The nonlinear specifications that can be verified with our method do not have to be convex. We only require the specification to convex-relaxable - which is a weaker condition. We have rephrased parts of Section 3 to make this more clear.\n2) The framework outlined in the paper is general, however, for verification to be meaningful the bound is required to be sufficiently tight, which requires a tight convex-relaxation that is dependent on the form of the function and thus has to be problem specific. We also refer to Answer 3 to Reviewer 1 for a more detailed response.\n", "This paper uses convex relaxation to verify a larger class of specifications\nfor neural network's properties. Many previous papers use convex relaxations on\nthe ReLU activation function and solve a relaxed convex problem to give\nverification bounds. 
However, most papers consider the verification\nspecification simply as an affine transformation of neural network's output.\nThis paper extends the verification specifications to a larger family of\nfunctions that can be efficiently relaxed.\n\nThe author demonstrates three use cases for non-linear specifications,\nincluding verifying specifications involving label semantics, physic laws and\ndown-stream tasks, and show some experiments that the proposed verification\nmethod can find non-vacuous bound for these problems. Additionally, this paper\nshows some interesting experiments on the value of verification - a more\nverifiable model seems to provide more interpretable results.\n\nOverall, the proposed method seems to be a straightforward extension to\nexisting works like [2]. However the demonstrated applications of non-linear\nspecifications are indeed interesting, and the proposed method works well on \nthese tasks.\n\nI have some minor questions regarding this paper:\n\n1) For some non-linear specifications, we can convert these non-linear elements\ninto activation functions, and build an equivalent network for verification\nsuch that the final verification specification becomes linear. For example, for\nverifying the quadratic specification in physics we can add a \"quadratic\nactivation function\" to the network and deal with it using techniques in [1] or\n[2]. The authors should distinguish the proposed technique with these existing\ntechniques. My understanding is that the proposed method is more general, but\nthe authors should better discussing more on the differences in this paper.\n\n2) The authors should report the details on how they solve the relaxed convex\nproblem, and report verification time. Are there any tricks used to improve\nsolving time? What is the largest scale of network that the algorithm can\nhandle within a reasonable time?\n\n3) The detailed network architecture (Model A, Model B) is not shown. How many\nlayers and neurons are there in these networks? This is important to show the\nscalability of the proposed method.\n\n4) For the Mujoco experiment, I am not sure how to interpret the delta values\nin Figure 1. For CIFAR I know it is the delta of pixel values but it is not\nclear about the delta in Mujoco model. What is the normal range of predicted\nnumbers in this model? How does the delta compare to it? Is the delta very\nsmall or trivial?\n\n5) Is it possible to show how loose the convex relaxation is for a small toy\nexample? For example, the specification involving quadratic function is a\ngood candidate.\n\nThere are some small glitches in equations:\n\n* In (4), k is undefined\n* In (20), I am not sure if it is equivalent to the four inequalities after (22).\nThere are 4 inequalities after (22) but only 3 in (20).\n\n\nMany papers uses convex relaxations for neural network verification. However\nvery few of them can deal with general non-linear units in neural networks.\nReLU activation is usually the only non-linear element than we can handle in\nmost neural network verification works. Currently the only works that can\nhandle other general non-linear elements are [1][2]. This paper uses more\ngeneral convex relaxations than these previous approaches, and it can handle\nnon-separable non-linear specifications. This is a unique contribution to this\nfield. 
I recommend accepting this paper as long as the minor issues mentioned\nabove can be fixed.\n\n[1] \"Efficient Neural Network Robustness Certification with General Activation\nFunctions\" by Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel.\nNIPS 2018\n\n[2] \"A dual approach to scalable verification of deep networks.\" by\nKrishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and\nPushmeet Kohli. UAI 2018.\n\n", "This paper considers more general non-linear verifications, which can be convexified, for neural networks, and demonstrates that the proposed methodology is capable of modeling several important properties, including the conservation law, semantic consistency, and bounding errors.\n\nA few other comments:\n\n*) Is it critical that the non-linear verifications need to be convex-relaxable? Recently, people have observed that a lot of nonconvex optimization problems also have good local solutions. Is it true that the convex-relaxable condition is only required for a provable algorithm? As the neural network itself is nonconvex, constraining the specification to be convex is a little awkward to me.\n\n*) The paper contains the example specification functions derived for three specific purposes; I'm wondering how broad the proposed technique could be. Say if I need my neural network to satisfy other additional properties, is there a general recipe or guideline? If not, what's the difficulty, intuitively speaking?\n\nThe paper needs to be carefully proofread, and a lot of commas are missing." ]
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "H1e5Sp42TX", "iclr_2019_HyeFAsRctQ", "HkxsOnE2p7", "ryepAig167", "BJeG6VW537", "BJeG6VW537", "BJeG6VW537", "HklfxJG93Q", "ryepAig167", "ryepAig167", "ryepAig167", "ryepAig167", "iclr_2019_HyeFAsRctQ", "iclr_2019_HyeFAsRctQ", "iclr_2019_HyeFAsRctQ" ]
iclr_2019_HyeGBj09Fm
Generating Liquid Simulations with Deformation-aware Neural Networks
We propose a novel approach for deformation-aware neural networks that learn the weighting and synthesis of dense volumetric deformation fields. Our method specifically targets the space-time representation of physical surfaces from liquid simulations. Liquids exhibit highly complex, non-linear behavior under changing simulation conditions such as different initial conditions. Our algorithm captures these complex phenomena in two stages: a first neural network computes a weighting function for a set of pre-computed deformations, while a second network directly generates a deformation field for refining the surface. Key for successful training runs in this setting is a suitable loss function that encodes the effect of the deformations, and a robust calculation of the corresponding gradients. To demonstrate the effectiveness of our approach, we showcase our method with several complex examples of flowing liquids with topology changes. Our representation makes it possible to rapidly generate the desired implicit surfaces. We have implemented a mobile application to demonstrate that real-time interactions with complex liquid effects are possible with our approach.
accepted-poster-papers
This paper presents a novel method for synthesizing fluid simulations, constrained to a set of parameterized variations, such as the size and position of a water ball that is dropped. The results are solid; there is little related work to compare to, in terms of methods that can "compute"/recall simulations at that speed. The method is 2000x faster than the original simulations. This comes with the caveats that: (a) the results are specific to the given set of parameterized environments; the method is learning a compressed version of the original animations; (b) there is a loss of accuracy, and therefore also a loss of visual plausibility. The AC notes that the paper should use the ICLR format for citations, i.e., "(foo et al.)" rather than "(19)". The AC also suggests that limitations should be clearly documented, i.e., as seen from the perspective of those working in the fluid simulation domain. The principal (and only?) contentious issue relates to the suitability of the paper for the ICLR audience, given its focus on the specific domain of fluid simulations. The AC is of two minds on this: (i) the fluid simulation domain has different characteristics to other domains, and thus the ICLR audience can benefit from understanding the specific nature of the predictive problems that come with the fluid simulation domain; new problems can drive new methods. There is a loose connection between the given work and residual nets, and of course res-nets have also been recently reconceptualized as PDEs. (ii) it's not clear how much the ICLR audience will get out of the specific solutions being described; it requires understanding spatial transformer networks and a number of other domain-specific issues. A problem with this type of paper in terms of graphics/SIGGRAPH is that it can also be seen as "falling short" there, simply because it is not yet competitive in terms of visual quality or the generality of fluid simulators; it really fulfills a different niche than classical fluid simulators. The AC leans slightly in favor of acceptance, but is otherwise on the fence.
train
[ "ByefRpv5nm", "SJeEcGl9RX", "BylpiCech7", "S1l7TYggCX", "BklwYtxlCm", "Skg-NFgeC7", "rkeUwxkin7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This is an application paper on dense volumetric synthesis of liquids and smoke. Given densely registered 4D implicit surfaces (volumes over time) for a structured scene, a neural-network based model is used to interpolate simulations for novel scene conditions (e.g. position and size of dropped water ball). The interpolation model composes two components -- given these conditions, it first regresses weights combining a set of precomputed deformation fields, and then a second model regresses dense volumetric deformation corrections -- these are helpful as some events are not easily modeled with a set of basis deformations. \n\nI found the paper hard to read at first, since the paper is heavy on terminology, only really understood what is going on when I went through the examples in the appendix, which are helpful and then on a second read the content was clear and appears technically correct. I would advise considering defining in more detail early the problem setup (e.g. Fig 13 was helpful), explain some of the variables in context. \n\nThis is primarily an application paper on simulating liquids in controlled scenes using nets and appears novel in that narrow domain. The specific way deformations are composed -- using v_inv to backwards correct basis deformations, following up the mixing of those with a correction model -- is intuitive and is also something I see for the first time. \n\nThe experimental results are sufficient for simulating liquids/smoke, except I would like to also see a comparison to using deformation field network only, without its predecessor. This was done for Fig 6, but would be nice to also see it numerically in ablation in Fig. 4. Another useful experiment would be to vary the number of bases and/or the resolution of the deformation correction network and see the effects. \n\nMore importantly, it would be very helpful is to try this approach for modeling deforming object and body shapes for which there are many datasets (e.g. Shapenet). Right now the implicit surface deformation model is only tested on liquids examples, which limits the impact to that specialist domain -- it's a bit more of a SIGGRAPH type of paper than ICLR. \n\n---- Post author feedback comment ---- \nI raised my rating to 7 as the paper itself is solid, main concern as another reviewer points out is it may be a bit too specialist for ICLR. If the AC decides to reject based on this fact I am ok with that as well. \n\nI think it would be helpful to add more ablation (deformation-only results for all cases) and experiments with different numbers of bases in the final version. If that's added it will strengthen the paper. \n", "Thank you for clarifying. I saw that but it would help to have deformation-learning-only column in Fig 4, since \"flat\" case is a bit of a corner case. ", "The paper presents a coupled deep learning approach for generating realistic liquid simulation data that can be useful for real-time decision support applications. While this is a good applied paper with a large variety of experimental results, there is a significant lack of novelty from a machine learning perspective. \n\n1. The primary novelty here is in the problem formulation (e.g., defining cost function etc.) where two networks are used, one for learning appropriate deformation parameters and the other to generate the actual liquid shapes. This is an interesting idea to generate the required training data and build a generalizable model. \n\n2. 
But based on my understanding, this does not really explicitly incorporate the physical laws within the learning model and can't guarantee that the generated data would obey the physical laws and invariances. So, this is closer to a graphics approach and deep learning has been used before extensively in a similar manner for shape generation, shape transformation etc. \n\n3. In terms of practical applications, to the best of my knowledge there are sophisticated physics-based and graphics based approaches that perform very fast fluid simulations. So, the authors need to provide accuracy and computation cost/time comparisons with such methods to establish the benefits of using a deep learning based surrogate model. \n\nxxxxxxxxxxxxxxxxxxx\n\nI appreciate the rebuttals from the authors, updated my score, but I still believe (just like another reviewer) that this is better suited for a workshop or a conference like SIGGRAPH. ", "Thank you for the valuable comments which are addressed as follows.\n\n“...does not really explicitly incorporate the physical laws within the learning model…”\n\nWe agree, this would be an interesting direction for future work. We have focused on a more generic method that could also applied to other areas with different or no physical constraints. And for all such extensions it is important to establish how well the core of the method works, which we have focus on in our submission.\n\n“...sophisticated physics-based and graphics based approaches that perform very fast fluid simulations…”\n\nYes, this is a very good point, and we believe the performance of our trained models is the main strength of our method. We have a variety of solvers available in our group, and we have long standing experience with high-performance of fluid solvers. We can confidently say that no other method currently comes close in terms of performance. Our cell phone implementation is a good indicator of this: despite being published in early 2017, no other 3D liquid simulations is available on Android so far (to the best of our knowledge).\n\n“... provide accuracy and computation cost/time comparisons with such methods …”\n\nAs we mention in the paper, our method is currently more than three orders of magnitude faster than a reference Eulerian CPU-based solver. We’d be happy to add a comparison where the CPU solver accuracy is reduced to levels matching our deformation learning algorithm (cf. Fig. 4). We have previously also performed tests with SPH-based solvers on Android smartphones. However, despite GPU-optimizations we were only able to achieve simulations with less than 10k particles, which led to smooth simulations with very few details. Thus, for a given computational budget, neither Lagrangian nor Eulerian simulations methods can currently give a quality similar to our deep learning model.", "Thank you for the helpful feedback.\n\n- Regarding “... would be nice to also see it numerically as ablation in Fig. 4 …”:\n\nActually, our examples contain an ablation study, although we agree that it could be more clearly presented as such. There is an entry for a version without parameter learning in Fig. 4, which we refer to as ‘flat’, indicating that there is only a as-simple-as possible initial shape, and no pre-computed deformation. Thus, an ablation study is given for the drop setup with the versions: 1) initial error, 2) parameter learning only without deformation learning, 3) deformation learning only (‘flat’ example), 4) full method. 
We will revise our document accordingly to clarify this.\n\n- Regarding “... trying this approach for modeling deforming object and body shapes … “:\n\nThis would be very interesting. In our case, we have focused on fluid simulations because they are already a difficult problem, but another test case would help to prove the generalizability of our approach.", "Thank you for the detailed suggestions and encouraging comments.\n\n- Regarding “... applying the deformation backwards to enforce consistency …”:\n\nThis is an interesting direction, checking consistency between forward and backward steps could yield an estimate of, e.g., loss of momentum. However, as a first additional constraint, we would target divergence freeness, i.e., conservation of mass.\n\n- Regarding “... adaptive grid-structure (say Octree) to increase the resolution …“:\n\nThat is a good direction. A focus on the surface would be particularly useful for liquids, and we believe our approach for learning deformations would transfer nicely to adaptive representations like octrees. However, we were not able to try this up to now.\n\n- Regarding “... increase the number of pre-computed deformations to improve the approximation ...”:\n\nYes, it would improve the approximation quality. We increased the deformations in one of the tests and obtained correspondingly better results. The size of the basis is a compromise between memory/pre-computing time and quality. In our case, we have reduced the basis as much as possible, taking into account the degrees of freedom of interaction. If the reviewers wish, we could add the example with additional deformations to our submission.\n", "This paper introduces a deep learning approach for physical simulation. The approach combines two networks for synthesizing 4D data that represents 3D physical simulations. Here the first network outputs an initial guess, and the second network adds details. The first network utilizes a set of precomputed deformations, while the weights can be set to generate different output shapes. The precomputed deformations are applied in a recurrent manner. The second network is a variant of STN. \n\nThe results are impressive from the perspective of the current abilities of deep neural networks. The synthesized simulations are not physically accurate, but with certain visual realism. Experimental results are sufficient. \n\nHowever, it is also necessarily to add more intuitions to the current approach. First, it would be good to discuss why the current network design is desired. For example, when designing the first network, can we also design another neural network that applies the deformation backwards and enforce some consistency to improve the results? Also, many simulations use adaptive sampling (high-resolution near the surface and low-residual in the interior). Can we use an adaptive grid-structure (say Octree) to increase the resolution? \n\nAlso, is there a simple setting so that the current network design generates accurate results. If not, would increase the number of pre-computed deformations improve the approximation. If so, what would be the optimal basis for $u_i$? What is the tradeoff between using more basis for the first network and increasing the complexity of the second network?\n\nFor visualization, it would also good to show the 3D grid.\n\nOverall, it is good paper to see at ICLR.\n" ]
[ 7, -1, 5, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, -1, -1, 3 ]
[ "iclr_2019_HyeGBj09Fm", "BklwYtxlCm", "iclr_2019_HyeGBj09Fm", "BylpiCech7", "ByefRpv5nm", "rkeUwxkin7", "iclr_2019_HyeGBj09Fm" ]
iclr_2019_HyePrhR5KX
DyRep: Learning Representations over Dynamic Graphs
Representation Learning over graph structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings while efforts for jointly learning dynamic of the graph and dynamic on the graph are still in an infant stage. Two fundamental questions arise in learning over dynamic graphs: (i) How to elegantly model dynamical processes over graphs? (ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations? We present DyRep - a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes namely -- dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes). Concretely, we propose a two-time scale deep temporal point process model that captures the interleaved dynamics of the observed processes. This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations which in turn drives the nonlinear evolution of the observed graph dynamics. Our unified framework is trained using an efficient unsupervised procedure and has capability to generalize over unseen nodes. We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks and present extensive qualitative insights into our framework.
accepted-poster-papers
After discussion, all reviewers agree to accept this paper. Congratulations!!
train
[ "rkl_EsCKpX", "BJeonJJo3X", "SylfmIbU07", "S1g3PD-L07", "ryx61DWL0X", "SkePF8Z8CQ", "HygUKS-807", "S1g5-1zxRm", "BJgv8BVFTm", "rygMqV4YTX", "Sygt67NF67", "SyeBsVc93m", "rkx45Fl3jX", "S1eidr-Eom", "HJg59orrqQ", "SkeXykWAYm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public", "author", "public" ]
[ "Overall the paper suffers from a lack of clarity in the presentation, especially in algorithm 1, and does not communicate well why the assumption of different dynamical processes should be important in practice. Experiments show some improvement compared to (Trivedi et al. 2017) but are limited to two datasets and it is unclear to what extend end the proposed method would help for a larger variety of datasets. \n\nNot allowing for deletion of node, and especially edges, is a potential draw-back of the proposed method, but more importantly, in many graph datasets the type of nodes and edges is very important (e.g. a knowledge base graph without edges loses most relevant information) so not considering different types is a big limitation. \n\nComments on the method (sections 2-4).\n\nAbout equation (1):\n \\bar{t} is not defined and its meaning is not obvious. The rate of event occurrence does not seem to depend on l (links status) whereas is seems to be dependent of l in algorithm 1. \n\nI don’t see how the timings of association and communication processes are related, both \\lambda_k seem defined independently. Should we expect some temporal dependence between different types of events here? The authors mention that both point processes are “related through the mediation process and in the embedding space”, a more rigorous definition would be helpful here. \n\nThe authors claim to learn functions to compute node representations, however the representations z^u seem to be direct embeddings of the nodes. If the representations are computed as functions it should be clear what is the input and which functional form is assumed.\n\nI find algorithm 1 unclear and do not understand how it is formally derived, its justification seems rather fuzzy. It is also unclear how algorithm 1 relates to the loss optimisation presented in section 4. \n\nWhat is the mechanism for addition of new nodes to the graph? I don’t see in algorithm 1 a step where nodes can be added but this might be handled in a different part of the training. \n\nComments on the experiments section.\n\nSince the proposed method is a variation on (Trivedi et al. 2017), a strong baseline would include experiments performed on the same datasets (or at least one dataset) from that paper. \n\nIt is not clear which events are actually observed. I can see how a structural change in the network can be observed but what exactly constitutes a communication event for the datasets presented?\n", "Overall, the contribution of the paper is somewhat limited [but a little more than my initial assessment, thanks to the rebuttal]. It is essentially an extension of (Trivedi et al. 2017), adding attention to provide self-exciting rates, applied to two types of edges (communication edges and “friendship” edges). Conditioned on past edges, future edges are assumed independent, which makes the math trivial. The work would be better described as modeling a Marked Point Process with marks k \\in {0,1}.\nOther comments:\n1.\t[addressed] DyRep-No-SP is as good as the proposed approach, maybe because the graph is assumed undirected and the embedding of u can be described by its neighbors (author rebuttal describes as Localized Propagation), as the neighbors themselves use the embedding of u for their own embedding (which means that self-propagation is never \"really off\"). Highly active nodes have a disproportional effect in the embedding, resulting in the better separated embeddings of Figure 4. 
[after rebuttal: what is the effect of node activity on the embeddings?]\n2.\t[unresolved, comment still misundertood] The Exogenous Drive W_t(t_p – t_{p−1}) should be more personalized. Some nodes are intrinsically more active than others. [after rebuttal: answer \"$W_t(t_p - t_{p-1})$ is personalized as $t_p$ is node specific\", I meant personalized as in Exogenous Drive of people like Alice or Bob]\n3.\t[unresolved] Fig 4 embeddings should be compared against (Trivedi et al. 2017) [after rebuttal: author revision does not make qualitative comparison against Trivedi et al. (2017)]\n\nBesides the limited innovation, the writing needs work. \n4.\t[resolved] Equation 1 defines $g_k(\\bar{t})$ but does not define \\bar{t}. Knowing (Trivedi et al. 2017), I immediately knew what it was, but this is not standard notation and should be defined. \n5.\t[resolved] $g_k$ must be a function of u and v\n6.\t[resolved] “$k$ represent the dynamic process” = > “$k$ represent the type of edge” . The way it is written $k$ would need to be a stochastic process (it is just a mark, k \\in {0,1})\n7.\t[resolved] Algorithm 1 is impossibly confusing. I read it 8 times and I still cannot tell what it is supposed to do. It contains recursive definitions like $z_i = b + \\lambda_k^{ji}(t)$, where $\\lambda_k^{ji}(t)$ itself is a function of $z_i(t)$. Maybe the z_i(t) and z_i are different variables with the same name?\n8.\t[resolved] The only hint that the graph under consideration is undirected comes from Algorithm 1, A_{uv}(t) = A_{vu}(t) = 1. It is *very* important information for the reader.\nRelated work (to be added to literature):\nDynamic graph embedding: (Yuan et al., 2017) (Ghassen et al., 2017)\nDynamic sub-graph embedding: (Meng et al., 2018)\n\nMinor:\nstate-of-arts => state-of-the-art methods\nlist enumeration “1.)” , “2.)” is strange. Decide either 1) , 2) or 1. , 2. . I have never seen both.\nMAE => mean absolute error (MAE)\n\nYuan, Y., Liang, X., Wang, X., Yeung, D. Y., & Gupta, A., Temporal Dynamic Graph LSTM for Action-Driven Video Object Detection. ICCV, 2017.\nJerfel, , Mehmet E. Basbug, and Barbara E. Engelhardt. \"Dynamic Collaborative Filtering with Compound Poisson Factorization.\" AISTATS 2017. \nMeng, C., Mouli, S.C., Ribeiro, B. and Neville, J., Subgraph Pattern Neural Networks for High-Order Graph Evolution Prediction. AAAI 2018.\n\n--- --- After rebuttal \n\nAuthors addressed most of my concerns. The paper has merit and would be of interest to the community. I am increasing my score.", "- Functional form of Computing Representation: Eq 4. provides the functional form that computes the representations with inputs being the three terms and parameterized by the W parameters. We state this clearly in revised version. Note that z^u(t) in Eq. 4 is qualified by $t$ and it keeps getting updated as the node $u$ gets involved in events. It does not represent direct embedding, rather just the placeholder for evolving embedding. For learning direct embedding of nodes (as done in *transductive* setting), one needs to have node-specific parameter i.e. one dimension of parameter matrix need to be of size = number of nodes in graph. In contrast to that, our setting is *inductive* where the parameters are not node-specific and hence it allows to learn general functions to compute representations given input information for a node. This allows to compute node embeddings for new (unseen) nodes without any necessity of altering the parameter space. This difference in transductive vs. 
inductive settings is well summarized for graphs in (Hamilton et. al. 2017).\n\n- Algorithm 1: It seems there is a misunderstanding on this point. Algorithm 1 is not a part of training (Algorithm 2 makes training tractable). Algorithm 1 constitutes a vital part of the forward pass (our novel Temporal Point Process based Attention mechanism) that computes node embeddings. As Algorithm 1 is used in an involved process, we believe that a figure accompanying the process may provide easier access to the mathematics behind it. To this end, we have now added an auxiliary figure in Appendix A describing the use of Algorithm 1 and how the whole process works. In addition to the accompanying figure, we have also updated the description of Algorithm 1 in the main paper to make it more readable in the revised version.\n\n- Adding new nodes: It is important to note the *inductive* ability of our framework described in response to your above question on computing functions, as that gives us an inherent ability to support new nodes. In practice, as described in Section 2.3 of the paper, the data contains a set of dyadic events ordered in time. Hence, each event involves two nodes $u$ and $v$. A new node will always appear as a part of such an event and it will be processed by the framework like any other node. We provide some more details on the mechanism in Appendix B.\n\n- Comments on Experiment Section: Both datasets in (Trivedi et. al. 2107) are purely interaction datasets (i.e. contains information about activities on the network, e.g. visit, fight, etc.) but do not consider any topological events i.e. there do not exist an underlying topology between the nodes that interact in those events. One way to remedy that would be to augment such a dataset with an underlying fixed topology knowledge graph such as Freebase or Wikidata. We considered this approach but the issue in this case is the absence of time points for the formation of topological edges. As we require time-stamped events, we chose the datasets that naturally provided both network evolution and activities on network with timestamps in lieu of constructing an artificial network by combining multiple sources where the quality of such construction will also play a role. We believe that the two datasets used in this work contain lot of interesting properties observed in real-world dynamic graphs that helps to adequately evaluate our proposed contributions and serve as a strong empirical evidence of the success of our approach. \n\nIn the interest of space, we provide preliminary details on datasets in Section 5.1 while more details on the two datasets are available in Appendix G.1.\n\nPlease let us know if something is still not clear and we will be happy to further discuss and address your concerns.\n\nWilliam L. Hamilton et. al., Representation Learning on graphs: Methods and Applications, 2017.\n\n", "We’d like to thank all the reviewers for your helpful comments. We’ve made the following updates to our paper based on your feedback: \n\nMain paper:\n=========\n- Revised Section 2.3 based on reviews and discussion.\n- Removed $l$ as it was only used for book-keeping and only invoked in Algorithm 1. However, as input to Algorithm 1 is A(\\bar{t}), the most recent adjacency matrix, $l$ is redundant and can be removed. This helps to make a cleaner presentation\n- Revised the text under Section 3.2.1 including explanation of Algorithm 1 and also rectified minor notations.\n- Made \\bar notation consistent to signify past time points. 
Henceforth, for an event at time $t_p$, $\\bar{t_p}$ represents the global timepoint just before the current event while for a node $u$ involved in current event at time $t_p$, $\\bar{t_p}^u$ represents the timepoint of previous event involving node $u$. This makes all notation consistent and removes any use of $t_{p-1}$ in Eq 4 and $t-$ in Algorithm 1.\n- Rectified any minor flaws suggested by the reviewers.\n\nAppendix:\n=======\nAdded two new sections:\n- Section A: Pictorial exposition of DyRep’s representation learning module that visualizes Localized Embedding Propagation Principle, Temporal Point Process based Self-Attention and Algorithm 1.\n- Section B: Discusses rationale behind DyRep framework - includes discussion on marked process view of DyRep clarifying differences of edge type vs dynamic, consolidated comparison to (Trivedi et. al. 2017) and description on support for node, edge types and unseen nodes in our framework.\n\nFurther, we have responded to individual comments below.\n", "Thank you for updating your review. We added a clarification on the point process perspective as a response to your previous comment. Here we address your updated review comments and re-emphasize the contributions of our work:\n\nExogenous Drive: Do you mean Alice/Bob is a person inside network? The exogenous drive constitutes the changes in features of node caused by external influences. However, activities external to network are not observed in the dataset. Hence for a node $u$ (or Alice which will be a node in social network) , the term allows a smooth latent approximation of change in $u$’s features over time caused by such an external effect. Please note, $\\bar{t_{p}}^u$ is not the time of previous event in the global dataset, it is time for previous event of node $u$. \n\nContributions: While one can augment the event specification in (Trivedi et. al. 2017) with additional mark information, that itself is not adequate to achieve our proposed method of modeling dynamical process over graphs at multiple time scales. A subtle but key difference in our deep point process formulation that allows us to achieve our goal of two time-scale expression, is the form of conditional intensity function (Eq 3 in our paper). We employ a softplus function for $f$ which contains a dynamic specific scale parameter $\\psi_k$ to achieve this while (Trivedi et al. 2017) uses an exponential (exp) function for $f$ with no such parameter. The exponential choice of $f$ also restricts their model to Rayleigh dynamics while DyRep can capture more general dynamics. \n\nHowever, we wish to emphasize that our major contributions for learning dynamic graph representation in this work extend well beyond this conditional intensity function. To the best of our knowledge, our work is the first to adopt the paradigm of expressing network processes at different time-scales (widely studied in network dynamics literature) to representation learning over dynamic graphs and propose an end-to-end framework for the same. Further our novel representation learning module that incorporates *graph structure* - using Temporal Point Process based Self-Attention (a principled advancement over all existing graph based neural self-attention techniques) and Localized Embedding Propagation - is not a straightforward extension or variant of (Trivedi et al. 2017).We will release the code and datasets with the final version of the paper.\n\nWe again thank you for your time and discussions. 
Please let us know if there are still unclear points and we would be happy to clarify your further concerns.\n", "Thank you for a detailed response. We believe that we were describing similar things but from different perspectives and your response has greatly helped us to distill that. Below we provide further clarifications on our perspective:\n\nFirst, we clarify that $l$ was only used for book-keeping to check the status of link in Algorithm 1, so it should not be part of event representation $e$ and we rectify that in our revision by removing it completely as adjacency matrix A already provides that information.\n\nMarked Process: From a mathematical viewpoint, we agree with you that for any event $e$ at time $t$, any information other than the time point can be considered a part of mark space describing the events. Hence, in our case, given a one-dimensional timeline, we can consider O=\\{(u,v,k)_p, t_p)_{p=1}^P as a marked process with the triple (u,v,k) representing the mark. \n\nHowever, using a single-dimensional process with such marks does not allow to efficiently and effectively discover or model the structure in the point process useful for learning intricate dependencies between events, participants of the events and dynamics governing those events. Hence, it is often important to extract the information out of the mark space and build an abstraction that helps to *discover the structure* in point process and make this learning *parameter efficient*. In our case, this translates to two components: \n\ni) The nodes in the graph are considered as dimensions of the point process, thus making it a multi-dimensional point process where an event represents interaction/structure between the dimensions, thus allowing us to explicitly capture dependencies between nodes. \nii) The topological evolution of networks happen at much different temporal scale than activities on a fixed topology network (e.g. rate of making friends vs liking a post on a social network). However both these processes affect each other’s evolution in a complex and nonlinear fashion. Abstracting $k$ to associate it with these different scales of evolution facilitates to model our purpose of expressing dynamic graphs at two time scales in a principled manner. It also provides an ability to explicitly capture the influential dynamics (Chazelle et. al. 2012) of topological evolution on dynamics of network activities and vice versa (through the learned embedding -- aka evolution through mediation which is the most crucial part of this whole framework). \n\nNote that this distinction in use of mark information is also important as we learn representations for nodes (dimensions) but not for $k$. Our overall intention here is to make sure that $k$ representing two different scales of event dynamics is not confused with edge or interaction type. For instance, in case of typed structural edge (e.g. wasbornIn, livesIn) or typed interaction (e.g. visit, fight etc. as in Trivedi et. al. 2017), one would add type as another component in the mark space to represent an event while $k$ still signifying different dynamic scales. In that sense, (Trivedi et. al. 2017) can also be viewed as a marked process that only models the typed interaction dynamics at a single time-scale and does not model topological evolution. 
\n\nIndependence: We agree with you but we would paraphrase your statement as follows: The next event and its mark (u,v,k) at time $t$ is conditionally independent of all past events and their marks given the conditional intensity function, which itself is a function of the model and the most recent *learned representations* of nodes (this is the most important part for this to hold) at time $t$. \n\nBernard Chazelle. Natural Algorithms and Influence Systems, 2012.\n", "We thank the reviewer for providing detailed comments. Below we provide clarifications on your specific points:\n\n- Importance of Two-time scale Process: We emphasize that the two-time scale expression of dynamic processes over graphs is not an assumption of our work; it is a naturally observed phenomenon in any dynamic network. For instance, consider the dynamics over a social network. The growth of network (topology change) by addition of new users (nodes) or new friendships (edges) occurs at significantly different rate/dynamics compared to various activities on a *fixed* network topology (self evolution of user’s features, effect on user from activities external to network, information propagation on network or interactions (sending a message, liking a post, comments, etc.). Further, both these dynamics affect each other significantly - befriending someone on social network increases the likelihood of activities between those nodes and on the other way around, activities such as regularly liking or sharing a post or mere prolonged interest in posts from friends of friends may lead to a friendship or follow edge between non-friends.\n\nThis dichotomy of expressing network processes at two different time-scales (dynamic *of* the network or network evolution) and (dynamic *on* the network or network activities) is a widely known phenomenon that is subject of several studies in dynamic networks literature [1,2,3,4,5]. However, to the best of our knowledge, our work is the first to adopt this paradigm for large scale representation learning over dynamic graphs and propose an end-to-end framework for the same. \n\n- Support for Node and Edge Types is inherent in our approach and not a limitation of our model. As both node and edge types are essentially features, our model does not require any modification in the approach incorporate them. We have added a brief discussion in Appendix B to explain how our model works in presence of them. Consequently, DyRep can learn representations over various categories of dynamic graphs including but not limited to social networks, biological networks, dynamic knowledge graphs etc. as long as data provides time stamped events for both network evolution and activities on the network. \n\n- Support for Deletion: Being a continuous-time model, our work captures fine-grained temporal dependencies among network processes. To achieve this, the model needs time stamped edges for graphs. However, as we mention in conclusion of our paper, it is difficult to procure data with fine grained deletion time stamps. Further, the temporal point process model requires more sophistication to support deletion. For example, one can augment the model with a survival process formulation to account for lack of node/edge at future time which is an involved task and requires a dedicated investigation outside the scope of this paper.\n\n- Temporal Dependence between events: $lambda$ is the conditional intensity function the *conditional* part represents the occurrence of current event conditional on all past events. 
Hence, $\\lambda(t)$ can also be written as $\\lambda(t|\\amthcal{H}_t)$ to mention the conditional part where $\\mathcal{H}_t$ represents history of all previous event occurrences. In the point process literature, $\\mathcal{H}_t$ is often omitted as it is well understood. Next, the conditional intensity function is derived based on the most recent embeddings of the two nodes in the event. However the node embeddings get updated after every event (whether k = 0 or k=1). For instance, consider that a node $u$ was involved in a communication event (k=1) at time $t1$, association event (k=0) at time $t2$ and another communication event (k=1) at time $t3$ ($t1$ < $t2$ < $t3$). In this case, the conditional intensity function computed for time $t3$ (when k = 1) will use most recent embeddings of node $u$ updated after its event at time $t2$ (when k =0) and similarly the conditional intensity function computed for time $t2$ (when k=0) will use most recent embeddings of node $u$ updated after its event at time $t1$ (when k=1). This is how the two processes are interleaved with each other through evolving representations whose learning is the latent mediation process.\n\n[1] Bernard Chazelle. Natural Algorithms and Influence Systems, 2012.\n[2] Damien Farine. The dynamics of transmission and the dynamics of networks, 2017.\n[3] Oriol Artime et. al., Dynamics on networks: competition of temporal and topological correlations, 2017.\n[4] Haijun Zhou et. al., Dynamic pattern evolution on scale-free networks, 2005.\n[5] Farajtabar et. al., Coevolve: A Joint Point Process Model for Information Diffusion and Network Evolution, 2015.\n\n", "Thank you for your reply. I do realize now that the process is not Poisson as the definition of \\lambda clearly depends on past marks (it is not an externally driven process like a non-homogeneous Poisson process). I will change my review accordingly. \n\nI also apologize but I fear we are talking past each other here (“We disagree with these comments as this is an incorrect characterization of our work” … ). I will strive to be more specific from now on. \n\n“It seems that the misunderstanding arises from your assumption (including point 6) that … ‘k’ is a mark” => By your own definition of O = \\{(u, v, t, l, k)_p\\}_{p=1}^P , which fits the Definition 2.1.2 of Jacobsen (2006) where T_p is your p-th event time and Y_p = (u, v, l, k) is an element of a Polish space E. When you say O is a not a Marked point process, what is the basis for the claim? Why would Y_p not be represented by a Polish space? \n\nFormally, any time-varying graph is a Marked point process where the edges are the marks. When I say “Graph process”, it is implicit that it has edge marks. Thus, my comment “Graph process” with edge marks k implies a measure (density) over the sigma algebra (sequence) given by O = \\{(u, v, t, k)_p\\}_{p=1}^P. The variable “l” is not properly a mark because it can be re-constructed from the process (l_p = 1 if there has been any event with k=0 in the past). Algorithm 1 uses this marks definition when it does “if k = 0 then Auv(t) = Avu(t) = 1“, i.e., k=0 is a mark of an observable edge (see description next). \n\n“It seems that the misunderstanding arises from your assumption (including point 6) that ‘k’ is type of an edge,\nPossibly my general use of the ill-defined term “edge” was not clear. I am thinking of (u,v) as a tuple. If (u,v) is a physical edge or a virtual edge “interaction”, k \\in \\{0,1\\} defines a mark (physical or virtual). 
\n\n“It seems that the misunderstanding arises from your assumption (including point 6) that ‘k’ has independence, none of which is true.” \nWe seem be to talking about different things. Marks (u, v, t, k) are conditionally independent given the model and past marks, per your likelihood \\mathcal{L}. This is the independence I was referring to. Adding these marks to Trivedi et al. (2017) is rather (mathematically) straightforward given the independent nature of the model. Mathematically straightforward does not mean it is easy to get it to work in practice and releasing the code would be important.\n\nJacobsen, Martin. Point process theory and applications: marked point and piecewise deterministic processes. Springer Science & Business Media, 2006.\n\nMinor: \nPage 3, λ(t)dt:= P[event .. ] missing brackets\n\n", "Thank you for your review! We appreciate your time and supportive feedback and we are glad that you find our work interesting. Details about the corresponding association and communication events in the two datasets are provided in Appendix E.1. We uploaded a revised version that contains your suggested changes.", "Responses to Other Comments:\n========================\n\n1) This is incorrect as self-propagation mainly captures the recurrent evolution of one’s own latent features independent of others. Self-propagation principle states: A node evolves in the embedded space with respect to its previous position (e.g. set of features) and not in a random fashion. Based on Localized Propagation principle described above, a node's embedding is described by information it receives from other node and not exclusively it's own neighbors. The good performance of DyRep-No-SP signifies that the Localized Propagation term in Eq 4. is able to account for the relative position of node with respect to its previous position more often than not. Further, both dynamic of network and dynamic on network contribute to updates to a node's embedding. The interplay of multi-scale temporal behavior of these processes and evolving features leads to better discriminative embeddings, not just the rate of activities - this is evident by other exploratory use cases we discuss.\n\n2) $W_t(t_p - t_{p-1})$ is personalized as $t_p$ is node specific.\n\n3,4) We add the suggested changes to the revised version.\n\n5) The intention for the *qualitative* exploratory analysis was not to make a performance comparison, which is already available against dynamic baselines in our *quantitative* predictive analysis. The goal of Figure 4 and appendix experiments is to draw the comparison between how embeddings learned using state-of-the-art static methods would differ from our dynamic model in terms of capturing evolving properties over time. To our knowledge, such extensive analysis for dynamic embeddings is not available in previous works. Further, we believe that visualizing embeddings from another dynamic method against our model may not provide informative insights.\n\n6) This is incorrect - please check our main response above\n\n7) “z\" in Algorithm 1 is a temporary variable whose scope is limited to the algorithm. Please note that $\\lambda$ is an input to the algorithm and hence “z\" within Algorithm 1 has no interaction with the node embedding z (which always has a superscript) used throughout the paper. Hence, there is no recurrence, however, to avoid any further confusion, we change the temporary variable to “y\".\n\nDetails explaining Algorithm 1 in full are available on Page 7. 
Here we provide a simplified high-level explanation. As a starting point, we refer you to the point 2 in paragraph before Eq 4 page 5. To capture the effect described there, we parameterize the attention module with element of matrix S corresponding to an existing edge that signifies information/effect propagated by that edge. Algorithm 1 computes/updates this S matrix. Please note that S is parameter for a structural temporal attention which means temporal attention is only applied on structural neighborhood of a node. Hence, the value of S are only updated/active in two scenarios: a) the current event is between nodes which already has structural edge (communication between associated nodes or l=1, k=1) and b) the current event is an association event (l=0, k=0). Now, given a neighborhood of node ‘u’, $b$ represents background (base) attention for each edge which is uniform attention based on neighborhood size. Whenever an event occurs between two nodes, this attention changes in following ways: For case (a), just change the attention value for corresponding S entry using the intensity of the event. For case (b), repeat same as (a) but also adjust the background attention for each node as the neighborhood size grows in this case.\n\n8) Thank you for pointing this. It is true that we consider undirected graphs in proposed work. However, our model can be easily generalized to directed graphs. Specifically, the difference would appear in the update of matrix A used in Algorithm 1, which would subsequently lead to different neighborhood and attention flow for each node. We will add this clarification in the revised paper.\n\nWe have uploaded a revised version of the paper to add the above clarifications, address your points and discuss related work cited by you (thank you for the pointers). Please let us know if something is still not clear and we will be happy to further discuss and address your concerns.\n", "Thank you for your review! We appreciate your comments and suggestions.\n\nAs a preface to our response, we wish to mention that, unlike existing approaches, our work expresses dynamic graphs at multiple time-scales as follows:\na) Dynamic ”of” the Network: This corresponds to the topological changes of the network – insertion or deletion of nodes and edges. We use \"Association\" to label the observed process corresponding to this dynamic.\nb) Dynamic ”on” the Network: This corresponds to activities on a *fixed* network topology – self evolution of node’s features, change in node’s features due to exogenous drive (activities external to network), information propagation within network and interactions between nodes which may or may not have direct edge between them. We use \"Communication\" to label the observed process of interaction between nodes (only the observed part of dynamic ”on” the network).\n\nGeneral Comment:\n==============\nOverall, the contribution of the paper is limited. It is essentially a minor extension of (Trivedi et al. 2017), adding attention, applied to two types of edges (communication edges and “friendship” edges). Edges are assumed independent, which makes the math trivial. The work would be better described as modeling a Marked Poisson Process with marks k \\in {0,1}.\n\nResponse:\n=========\nWe politely disagree with these comments as this is an incorrect characterization of our work. 
It seems that the misunderstanding arises from your assumption (including point 6) that ‘k’ is type of an edge, ‘k’ is a mark and ‘k’ has independence, none of which is true. ‘k’ truly distinguishes scale of event dynamics (not type of edge) in our two-time scale model. In fact, when k=1, it is an interaction event which is not considered as an edge between nodes in our model. The edge (which forms graph structure) only appears through an association event (k=0). Indeed, ‘k’ corresponds to stochastic processes at different time scales and hence $\\psi_k$ is the rate (scale) parameter corresponding to each dynamic. Further, every time when k=0, an edge is created between different node pairs. As we clearly mention in the paper, we do not consider edge type in this work and hence ‘k’ is not a mark. However, edge type can be added to Eq 4 in case it is available. Finally, dynamic processes realized by k=0 and k=1 are not independent and are highly interleaved in a nonlinear fashion. For instance, formation of a structural edge (k=0) affects interactions (k=1) and vice versa. Algorithm 1 captures this intricate dependencies as we will describe below. Based on the above points, it follows that our model is not a marked Poisson process. In fact, it does not take any specific form of point process - rather learns the conditional intensity function through a function approximation.\n\nIn terms of contributions, we argue that our approach of modeling dynamic graphs at multiple scales and learning dynamic representations as latent mediation process bridging the two dynamic processes, is a significant innovation compared to any existing approaches. This is a non-trivial effort for a setting where the dynamic processes evolve in a complex and nonlinear fashion. Further, our temporal point process based structural-temporal self-attention mechanism to model attention based on event history of a node is very novel and has not been attempted before. Our attention model can: 1) take into account temporal dynamics of activities on edge and 2) capture effects from faraway nodes due to dependence on event history. This is a formal advancement to state-of-the-art models of non-uniform attention (such as Graph Attention networks). \n\nFurther, the paper provides an in-depth comparison with (Trivedi et. al. 2017) (including Table 1). Here we reiterate the differences: (Trivedi et. al. 2017) model events at single time scale and do not distinguish between two dynamic processes. They only consider edge level information for learning the embeddings. Our model considers a higher order neighborhood structure to compute embeddings. More importantly, in their work, the embedding update for a node ‘u’ considers the edge information for the same node ‘u’ at a previous time step. This is entirely different from our structural model based on ”Localized Embedding Propagation” principle which states: Two nodes involved in an event form a temporary (communication) or a permanent (association) pathway for the information to propagate from the neighborhood of one node to the other node. This means, during the update of embedding for node ‘u’, information is propagated from the neighborhood of node ‘v’ (and not node ‘u’, please check Eq. 4) to node ‘u’. Subsequently, (Trivedi et. al. 2017) does not have any attention mechanism as they don't consider structure.\n\n\n", "The paper is very well written. 
The proposed approach is appropriate for modeling the node representations when the two types of events happen in the dynamic networks. The authors also clearly discussed the relevance to and differences from related work. Experimental results show that the presented method outperforms the other baselines.\nOverall, it is a high-quality paper. \nThere are only some minor comments for improving the paper:\n- Page 6, there is a typo. “for node v by employing …” should be “for node u”\n- Page 6, “Both GAT and GaAN has” should be “Both GAT and GaAN have”\n- In Section 5.1, it would be great if the authors could explain in more detail what the “association events” and “communication events” are in these two evaluation datasets.\n", "Thank you for your interest in our work.\n\nInspired by [1], our work expresses dynamic graphs at multiple scales as follows:\na.) Dynamic ”of” the Network: This corresponds to the topological changes in the network – insertion or deletion of nodes and edges\nb.) Dynamic ”on” the Network: This corresponds to various activities in the network – self-evolution of a node’s interests/features, change in a node’s features due to exogenous drive (activities external to the network), information propagation within the network and within-network interactions between nodes which may or may not have a direct edge between them. \n\nWe do not define "Association" and "Communication" as two new concepts or constraints on dynamic graphs, nor do we claim that in the paper. Instead, we use those two words to label the well-known and naturally *observed* processes corresponding to the dynamics mentioned in (a) and (b) – Association events map to observed insertions of nodes or edges and Communication events map to observed interactions between nodes (which is the observed part of the dynamic ”on” the network). Nevertheless, this dichotomy of dynamic network processes is well-known and has been the subject of several studies [1, 2, 3, 4, 5], albeit in a segregated manner. But none of the existing machine learning approaches has jointly modeled them for representation learning over dynamic graphs (our key objective), to the best of our knowledge. \n\n”In reality, dynamic networks are represented by insertion and deletion of nodes and insertion or deletion of edges between existing nodes.”\n\nThis is a rather limited or constrained view of dynamic graphs, as there are many dynamic processes (as listed in (b) above) occurring on such a graph which cannot be realized by just modeling growth or shrinkage of the graph. Approaches based on such a model of dynamic networks cannot distinguish or model the interleaved evolution of network processes, which leads to multiple shortcomings:\n– Such a model may capture structural evolution, but it lacks the ability to effectively and correctly capture dynamics ”on” the network. Concretely, the dynamic process under which a node’s features evolve or node interactions happen within a network (thus leading to information propagation) has vastly different behavior from the dynamic process that leads to growth (shrinkage) of the network structure. For example, social network activities such as liking a post, posting in a discussion or sharing a video happen at a much accelerated rate compared to the slow rate of making friends and thereby growing the network. Hence it is important to express dynamic graphs at different time scales. \n– Edge types only serve as feature information and they can be readily added in our model if available. 
Edge weights may or may not be available a priori and may need to be inferred. Both of them are insufficient to effectively model the evolutionary multi-time-scale dynamics of structure and network activities and their influence on each other. Further, neither of them expresses node-specific dynamic properties. This, in turn, will not help to learn the effect of evolving node representations on observed processes and vice versa.\n\nExtended details on the use of both datasets are available in Appendix E. \n\n[1] Natural algorithms and influence systems.\n[2] The dynamics of transmission and the dynamics of networks.\n[3] Dynamics on networks: competition of temporal and topological correlations.\n[4] Dynamic pattern evolution on scale-free networks.\n[5] Coevolve: A Joint Point Process Model for Information Diffusion and Network Evolution.", "The paper presents its content in the most complicated way. It defines new concepts of Association (refers to topological evolution) and Communication (refers to node interactions) for dynamic graphs and formulates the problem based on them. In reality, dynamic networks are represented by insertion and deletion of nodes and insertion or deletion of edges between existing nodes. The edges and nodes may have features or labels. The paper defines two new concepts of communication and association which I think are inherited from the edge concept with subtle differences. Association has global effects and communication has local effects on information exchange. I am really confused about whether we really need to define such new concepts and then propose a model for them, while in reality dynamic graphs usually do not contain these kinds of constraints. Assuming we have the realization of these concepts, can we formulate the problem using simpler models such as networks with typed edges or weighted edges? I am skeptical about how the authors use the datasets in the experiments. For example, in the Social Evolution Dataset, what is association and what is communication? How did you interpret the dataset to find these concepts? Do we really need to consider these concepts in the Social Evolution Dataset to do the link prediction? I think the authors could elaborate on the definitions of the new concepts and the necessity of considering them in their method.", "We view the work on geometric deep learning as a very interesting direction for representation learning over graphs. However, most current works, including the cited papers on geometric deep learning over graphs, primarily deal with static graphs, while our work focuses on dynamic graphs to jointly model both topological evolution (dynamic of the network) and node interactions (dynamic on the graph). It would be an interesting complementary direction to extend the cited spectral/spatial-domain methods to derive local graph operators that can take into account both temporal and spatial dynamics. We will add a related discussion section in the updated version of the paper.", "I would like to draw the authors' attention to multiple recent works on deep learning on graphs directly related to their work. Among spectral-domain methods, replacing the explicit computation of the Laplacian eigenbasis of the spectral CNNs of Bruna et al. with polynomial [1] and rational [2] filter functions is a very popular approach (the method of Kipf&Welling is a particular setting of [1]). On the other hand, there are several spatial-domain methods that generalize the notion of patches on graphs. 
These methods originate from works on deep learning on manifolds in computer graphics and recently applied to graphs, e.g. the Mixture Model Networks (MoNet) [3] (Note that Graph Attention Networks (GAT) of Veličković et al. are a particular setting of the MoNet [3]). MoNet architecture was generalized in [4] using more general learnable local operators and dynamic graph updates. Finally, the authors may refer to a review paper [5] on non-Euclidean deep learning methods. \n\n\n1. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, arXiv:1606.09375\n\n2. CayleyNets: Graph convolutional neural networks with complex rational spectral filters, arXiv:1705.07664,\n\n3. Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017. \n\n4. Dynamic Graph CNN for learning on point clouds, arXiv:1712.00268\n\n5. Geometric deep learning: going beyond Euclidean data, IEEE Signal Processing Magazine, 34(4):18-42, 2017\n" ]
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "iclr_2019_HyePrhR5KX", "iclr_2019_HyePrhR5KX", "rkl_EsCKpX", "iclr_2019_HyePrhR5KX", "BJeonJJo3X", "S1g5-1zxRm", "rkl_EsCKpX", "Sygt67NF67", "SyeBsVc93m", "BJeonJJo3X", "BJeonJJo3X", "iclr_2019_HyePrhR5KX", "S1eidr-Eom", "iclr_2019_HyePrhR5KX", "SkeXykWAYm", "iclr_2019_HyePrhR5KX" ]
iclr_2019_HyeVtoRqtQ
Trellis Networks for Sequence Modeling
We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outperform the current state of the art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention. The code is available at https://github.com/locuslab/trellisnet .
accepted-poster-papers
The paper proposes a novel network architecture for sequential learning, called trellis networks, which generalizes truncated RNNs and also links them to temporal convnets. The advantages of both types of nets are used to design trellis networks, which appear to outperform the state of the art on several datasets. The paper is well-written and the results are convincing.
train
[ "r1esyAwhkE", "rJxgjpN31E", "rkenN3O267", "S1gYPHu26m", "HkearHd367", "rJeU7ruhaX", "BJxMvLjkam", "BkeXf-gTnm", "Byll8TOK37" ]
[ "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your interest in our paper! \n\nTo obtain the 54.67 ppl on PTB using MoS, we trained for 400 epochs (similar for the 54.19 ppl result). We did not use finetuning step like Yang et al.\n\nIn addition, the code will be made available so that you can run on your own as well :-)", "In your paper, you report the result for Yang et al., 2018 without finetuning; with finetuning, they report a test perplexity of 54.44, which is slightly better than your reported result for the base TrellisNet with MoS of 54.67. To make the comparison fair, could you report the number of epochs you train the TrellisNet for? I believe Yang et al., 2018 train for 1000 epochs and then allow for another 1000 in the finetuning step as long as the loss keeps improving. Thanks!", "We want to thank all reviewers for their feedback and suggestions. In order to address the comments in the reviews, we have updated our paper. The key changes in the revision are as follows:\n \n1) We included a short introduction to temporal convolutional networks (TCN) at the beginning of Section 4.1.\n \n2) We reorganized the experiment result tables in Section 5 for better clarity. Following the reviewers’ advice, we also include the performance of the generic TCN (from [1]) on the word-level PTB and char-level PTB tasks. \n \nFor the questions and the other interesting points that the reviewers brought up, such as the usage of full weights, we have included clarifications in our responses. We are happy to discuss further.\n \n[1] Bai, Shaojie, J. Zico Kolter, and Vladlen Koltun. \"An empirical evaluation of generic convolutional and recurrent networks for sequence modeling.\" arXiv preprint arXiv:1803.01271 (2018).\n", "Thank you for the comments.\n \nNote that input injection and weight-tying across depth may seem reasonable in retrospect, but these ideas were not obvious a priori. They seemed quite alien to us until they emerged from the construction in Theorem 1.\n \nRegarding the performance of TCN on word- and char-level PTB, we have included these in the tables. They are much worse (by about 30 ppl on word-PTB and 0.14 bpc on char-PTB) than the prior SOTA results, despite the strong regularizations added to the original TCN. The significant improvement of TrellisNet over TCN is due to the ideas presented in our submission.\n \nConcerning the increase in computation time, the construction in Theorem 1 does produce deep networks with M + L − 1 layers. However, in practice we do not have to use this precise number. As highlighted in Section 4.3 and Appendix B, existing techniques can help us quickly expand the horizon (i.e. context size) of a TrellisNet, for example by larger kernel sizes, dilations, or history repackaging. We investigate the problem of long-range modeling in Section 5.2 (Table 4) as well, where the temporal dependency in sequential MNIST, permuted MNIST and sequential CIFAR-10 is typically over 700 or 1000. In that case, it would be impossible to fit a TrellisNet with (M+L-1) layers on a GPU. Our experiments have shown that with the help of dilations and other techniques, TrellisNets can achieve very strong performance with a smaller number of layers than a strict interpretation of Theorem 1 would suggest.\n \nConcerning the dotted link in Figure 1a, we would like to offer two kinds of perspectives, one from the RNN side and one from the TCN side. 
In the context of an RNN, this is quite similar to what an LSTM or a GRU cell would do, where the gating mechanism would involve the hidden state or cell state (e.g., c_{t-1} in LSTM) propagated from the previous time step (see Figure 3(a) and 4). From a TCN perspective, this resembles a residual connection, except that we shift the connection by one time step in the temporal dimension of the input tensor. An interesting connection is that the introduction of cell state propagation in LSTMs was used to alleviate the vanishing gradient problem, while residual connections have a similar effect and motivation in deep CNNs. The dotted connection in TrellisNet reflects both ideas.\n \nConcerning giving the full name of TCN and a brief introduction, we agree and have addressed this in the revision.\n", "Thank you for the comments.\n \nFirst, while it may not seem surprising that a truncated RNN can be approximated by feed-forward networks in general (after all, any computational graph of finite length can be simply unrolled to form a feedforward network), we believe it is surprising that general RNNs can be represented with a simple kernel-2 TCN. The construction presented in our work has not been presented before and the details are quite interesting. For example, weight-tying across depth and input injection have not been used in TCNs for sequence modeling, and seemed quite strange to us until they emerged from the construction. These make sense in retrospect, but were not obvious a priori. Now that we have a better understanding of these architectural elements, they may see broader use in the community.\n \nSecond, about the techniques from both kinds of networks, we included a list of examples that TrellisNet can absorb from TCNs/RNNs in section B.1. While methods such as dilations and deep supervision are more common in ConvNets, we found the RNN-inspired techniques are equally important for a well-performing TrellisNet. For instance, the variational dropout that was specifically designed for RNN sequence models, gated activations motivated by LSTM/GRU, and history repackaging are all rarely seen in ConvNets. Besides the ablation study, we have now included more results from the generic TCN in section 5 in our latest revision. In all cases, the TrellisNet (which benefits from the above modifications) vastly outperform the generic TCN, even though the TCN is equipped with standard ConvNet ideas (e.g., dilation). We do believe that the inspiration from RNNs contributes a lot to improving TrellisNet beyond the TCN boundary.\n \nThird, regarding the modeling power of the full weight matrices. We introduced mixed group convolutions (Figure 2) to model the “layers” in RNNs. Once we generalize to a full weight matrix, there is no longer an interpretation in terms of “layers”. This can be seen in Eq. (6) and Figure 2(b): using a full convolutional kernel essentially mixes hidden units with different starting histories at each layer of the TrellisNet; this is impossible in RNNs. Theoretically, the full weight matrices can learn the blocked diagonal structure of Eq. (5) by gradient updates, if such diagonal structure is truly the optimal arrangement for sequence task parameters. In other words, optimal RNNs can be recovered by TrellisNet through training. 
However, as we showed in large-scale tasks such as WT103, TrellisNet gains quite a bit by using the full convolutional kernel (which mixes feature maps across all channels).\n \nRegarding the ablation study, we think the idea of input injection is very interesting indeed, and can now see broader use in light of our results. However, concerning the use of dense weight matrices, we believe this is quite significant as well. Note that we controlled for model capacity in the ablative analysis: when we replace full weight matrices with sparse ones we are actually using LSTMs with the same number of parameters, which were the SOTA on PTB. A >2 perplexity improvement is a large improvement at this level of performance: e.g., the ICLR 2018 oral paper [1] improved upon [2] by 2.8 units of perplexity via MoS, and [3] improved upon [2] by 0.5 perplexity via extensive hyperparameter search. As another datapoint, on WikiText-103, generalizing from sparse weight (LSTM) to full weight (TrellisNet) leads to a significant improvement by about 6 units of perplexity (we use the best LSTM results reported, which is [4]); i.e. by 16%.\n \n \n[1] Yang, Zhilin, et al. \"Breaking the softmax bottleneck: A high-rank RNN language model.\" arXiv preprint arXiv:1711.03953(2017).\n[2] Merity, Stephen, Nitish Shirish Keskar, and Richard Socher. \"Regularizing and optimizing LSTM language models.\" arXiv preprint arXiv:1708.02182 (2017).\n[3] Melis, Gábor, Chris Dyer, and Phil Blunsom. \"On the state of the art of evaluation in neural language models.\" arXiv preprint arXiv:1707.05589 (2017).\n[4] Rae, Jack W., et al. \"Fast Parametric Learning with Activation Memorization.\" arXiv preprint arXiv:1803.10049 (2018).\n", "Thank you for the positive feedback. \n \nWe briefly recap the difference between TrellisNet and existing methods (specifically RNNs and TCNs). We showed that truncated RNNs are sparse TrellisNets with only weight parameters on the diagonal (see Eq. (5)). Essentially, once we replace the sparse weight with a dense weight matrix, there is no longer the interpretation of “RNN layers”. The off-diagonal parameters of the full, dense weight matrices mix hidden units at different history starting points, and there is no analog of this in recurrent networks. Similarly, for TCNs, the very idea of weight-tying and input-injection (inspired by RNNs in our work) is very unusual and has not (to the best of our knowledge) been used in existing temporal ConvNets on sequences. TrellisNet bridges both architectures, and is thus able to absorb architectural and regularization techniques from both sides.\n", "The authors propose a family of deep architecture (Trellis Networks ) for sequence modelling. Paper is well written and very well connected to existing literature. Furthermore, papers organization allows one to follow easily. Trellis Networks bridge truncated RNN and temporal convolutional networks. Furthermore, proposed architecture is easy to extend and couple with existing RNN modules e.g. LSTM Trellis networks. Authors support their claims with an extensive empirical evidence. The proposed architecture is better than existing networks.\nAlthough the proposed method has several advantages, I would like to see what makes proposed architecture better than existing methods.\n", "The authors propose a new type of neural network architecture for sequence modelling : Trellis Networks. 
A trellis network is a special case of a temporal convolutional network with shared weights across time and layers and with input at each layer. As stated by the authors, this architecture does not seem really interesting. The authors show that there exists an equivalent Trellis Network for any truncated RNN and therefore that truncated RNNs can be represented by temporal convolutional networks. This result is not surprising since truncated RNNs can be unrolled and their time dependency is bounded. The construction of the Trellis Network equivalent to a truncated RNN involves sparse weight matrices; therefore, using full weight matrices provides greater expressive power. One can regret that the authors do not explain what kind of modelling power one can gain with full weight matrices. \n\nThe authors claim that bridging the gap between recurrent and convolutional neural networks with Trellis Networks allows one to benefit from techniques from both kinds of networks. However, most of the techniques are already used with convolutional networks. \n\nExperiments are conducted with an LSTM trellis network on several sequence modelling tasks: word-level and character-level language modelling, and sequence modelling in images (sequential MNIST, permuted MNIST and sequential CIFAR-10). Trellis networks yield very competitive results compared to recent state-of-the-art models. \n\nThe ablation study presented in Annex D Table 5 is interesting since it provides some hints on what is really useful in the model. It seems that full weight matrices are not the most interesting aspect (if dense kernel really concerns this aspect) and that the use of the input at every layer has the most impact.\n", "This paper introduces a novel architecture for sequence modeling, called the trellis network. The trellis network is in a sense a combination of RNNs and CNNs. The authors give a constructive proof that the trellis network is a special case of a truncated RNN. It also resembles CNNs since the neurons at higher levels have bigger receptive fields. As a result, techniques from the RNN and CNN literature can be conveniently brought in and adapted to trellis networks. The proposed method is evaluated on benchmark tasks and shows performance gains over existing methods.\n\nThe paper is well-written and easy to follow. The experimental study is extensive. The reviewer believes that this paper will potentially inspire future research along this direction. However, the novelty of the proposed method compared to the TCN seems limited: only weight sharing and input injection. It would be great to include the performance of the TCN on the PTB dataset, on both word and character levels, in Tables 1 and 2.\n\nAccording to Theorem 1, to model an M-truncated L-layer RNN, a trellis network needs M + L − 1 layers. When M is large, it seems that a trellis network needs to be deep. Although this does not increase the model size due to weight sharing, does it significantly increase computation time, both during training and inference?\n\nThe reviewer might have missed it, but what is the rationale behind the dotted link in Figure 1a, or the dependence of the activation function $f$ on $z_t^{(i)}$? It seems that it is neither motivated by RNNs nor CNNs. From the RNN's point of view, as shown in the proof of Theorem 1, $f$ only depends on its first argument. 
From the CNN's point of view, the model still gets the same receptive field without using $z_t^{(i)}$.\n\nMinor comments:\nThe authors might want to give the full name of TCN (temporal convolutional networks) and a short introduction in Section 2 or at the beginning of Section 4." ]
[ -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "rJxgjpN31E", "iclr_2019_HyeVtoRqtQ", "iclr_2019_HyeVtoRqtQ", "Byll8TOK37", "BkeXf-gTnm", "BJxMvLjkam", "iclr_2019_HyeVtoRqtQ", "iclr_2019_HyeVtoRqtQ", "iclr_2019_HyeVtoRqtQ" ]
iclr_2019_HyexAiA5Fm
Scalable Unbalanced Optimal Transport using Generative Adversarial Networks
Generative adversarial networks (GANs) are an expressive class of neural generative models with tremendous success in modeling high-dimensional continuous measures. In this paper, we present a scalable method for unbalanced optimal transport (OT) based on the generative-adversarial framework. We formulate unbalanced OT as a problem of simultaneously learning a transport map and a scaling factor that push a source measure to a target measure in a cost-optimal manner. We provide theoretical justification for this formulation, showing that it is closely related to an existing static formulation by Liero et al. (2018). We then propose an algorithm for solving this problem based on stochastic alternating gradient updates, similar in practice to GANs, and perform numerical experiments demonstrating how this methodology can be applied to population modeling.
accepted-poster-papers
After revision, all reviewers agree that this paper makes an interesting contribution to ICLR by proposing a new methodology for unbalanced optimal transport using GANs and should be accepted.
train
[ "Byx88RU9kE", "B1lUURJanX", "B1xdSvrYy4", "rklJuu_Spm", "rygfS6-Qk4", "BklmCVWU3X", "Hkenm8WQJV", "H1xQ6gNIC7", "B1lLbqXI0m", "rygDCYmLA7", "By6VIm8AQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "i have read the revised version. i also support accept. i have revised my score upwards.", "In this paper the authors consider the unbalanced optimal transport problem between two measures with different total mass. The authors introduce first the now standard Kantorovich-like formulation, which considers a coupling whose marginals are penalized to look like the two target measures. The authors introduce a second formulation in (2), somewhat a Kantorovich/Monge hybrid that involves a \"random\" Monge map where the target point T(x) of a point x now depends also on an additional random variable z, to desribe T(x,z). The authors also consider a local mass creation term (\\xi) to weight the initial measure \\mu.\n\nThe authors emphasize the interest of the 2nd formulation, which, much like the original Monge problem, has an intractable push-forward constraint. This formulation is similar to recent work on Wasserstein Autoencoders (to which is added the scaling parameter). As with WAE, this constraint is relaxed to penalize the deviation between the \"random\" push-forward and the desired marginal. \n\nThe authors show then that the resulting problem, which involves a transportation cost integrated both on the random variable z and on the input domain x, weighted by xi + a simple penalization for xi + a divergence penalizing the deviation between push-forward and desired marginal, can be optimized altogether by using three NN: 1 for the parameterization of T, 1 for the parameterization of \\xi, and one to optimize using a function f a variational bound on the divergence. 2 gradient descents (T,\\xi), 1 gradient ascent (f, variational bound).\n\nThe authors then make a link between that penalize formulation and something that resembles unbalanced transport (I say resembles because there is some assymetry, and that the type of couplings is restricted). Finally the authors show that by letting increase the penalty in front of the divergence in (6) they recover something that looks like the solution of (2).\n\nFor the sake of completeness, the authors provide in the appendix an implementation of a simple dual ascent scheme to approximate unbalanced OT inspired from previous work by Seguy'17, and show that, unlike that work, their implicit parameterization of the scaling factor \\xi can help, and illustrate this numerically.\n\nI give credit to the authors for addressing a new problem and providing an algorithmic formulation to do so. That algorithm is itself recovered from an alternative formulation of unbalanced OT, and is therefore interesting in its own right. Unfortunately, I have found the presentation rushed. I really believe the paper would deserve an extensive re-write. Everything is fairly clear until Section 3. Then, the authors introduce their main contribution. Basically the section tries to prove two things at the same time, without really completing its job. One is to prove that \"dualizing\" the scaling+ random push-forward equality constraint is ok if one uses big enough regularizers (intuitive), the other that this scaled + random push-forward formulation is closely related to W_{ub}. This is less clear to me (see below). \n\nThe experiments are underwhelming. For faces they happen in latent spaces, and therefore one recovers transport between latent spaces later re-visualized through a decoder. For digits, all is fairly simple. They do not clearly mention whether this alternative UOT approach approximates UOT at all. Despite the title, there's no generation. 
Therefore my grade is really split between a 5 and a 6.\n\nminor comments and questions:\n\n- Is the reference to a local scaling (\\xi) for unbalanced transport entirely new? your paper is not clear on that, and it seems to me this idea already appears in the OT literature.\n\n- I do not understand the connexion you make with GANs. In what sense can you interpret any of your networks as generators? To me it just feels like a simultaneous optimization of various networks, yet without a clear generative purpose. Technically there may be several similarities (as we optimize on networks), but I am not sure this justifies referencing GANs in the title. Additionally, and almost mechanically, putting GAN in your paper, the reader will expect some generation results..\n\n- Numerical benchmarks: Is the technique you propose supposed to approximate the optimal value of Unbalanced OT at all? If yes, is there a way you could compare yourselves with Chizat's approach?\n\n- Somewhere in Lemma 3.2 the fact that you had to use an alternative definition \\tilde{W} (by restricting the class of couplings) is not really clarified to the reader. Qualitatively, what does it mean that you restrict the class of couplings to have the same support as \\mu? In which situations would \\tilde{W} be very different from W_{ub} ? (which, if I understand correctly, only appears in (2) but not elsewhere in the paper?)\n\n- I think it would help for the simple sake of readability to add integration domains under your \\int symbols.\n\n- T is used as a subset in Lemma 3.1, while it is used after and before as a map of (x,z)\n\n- T(x,z) looks intuitively like a noisy encoder as in Wasserstein AEs (with, of course, the addition of your term \\xi). Could you elaborate?\n\n- I have scanned the paper but did not see how you set lambda.", "I have read the revised manuscript. I found that the revised version is more clear, more precise and reflects better the quality of the underlying ideas. It is better at distinguishing between what was known and what is new. I have also appreciated the new numerical experiment. For these reasons and also the ones I mentioned in my previous review, I suggest acceptance and update my score to 6.", "REVIEW\n\nThe authors propose a novel approach to estimate unbalanced optimal transport between sampled measures that scales well in the dimension and in the number of samples. This formulation is based on a formulation of the entropy-transport problems of Liero et al. where the transport map, growth maps and Lagrangian multipliers are parameterized by neural networks. The effectiveness of the approach is shown on some tasks.\n\nThis is overall an ingenious contribution that opens a venue for interesting uses of optimal transport tools in learning problems (I can think for instance of transfer learning). As such I think the idea would deserve publication. However, I have some concerns with the way the theory is presented and with the lack of discussions on the theoretical limitations. Also, the theory seems a bit disconnected from the practical set up, and this should be emphasized. These concerns are detailed below. \n\nREMARKS ON SECTION 3\n\nI think the theoretical part does not exhibit clearly the relationships with previous literature. The formulation proposed in the paper (6) is not new and consists in solving the optimal entropy-transport problem (2) on the set of product measures gamma that are deterministic, i.e. 
of the form\ngamma(x,y) = (id x T)_# (xi mu) for some T:X -> Y and xi : X -> R_+ (here (id x T)(x) =(x,T(x)) )\nIt is classical in optimal transport to switch between convex/transport plan formulation (easier to study) to non-convex/transport map formulations (easier to interpret). (As a technical note, the support restriction in Lemma 3.2 is automatically satisfied for all feasible plans, for super-linear costs c_2=phi_1).\n\nMore precisely, since the authors introduce a reference measure lambda on a space Z (these objects are not motivated anywhere, but I guess are used to allow for multivalued transport maps?), they look for plans of the form\ngamma(x,y) = (pi_x x T)_# (xi mu otime lambda) where (pi_x x T)(x,z) = (x,T(x,z) and \"otime\" yields product measures) (it is likely that similar connections could be made with the \"static\" formulations in Chizat et al.).\n\nIntroduced this way, the relationship to previous literature would have been clearer and the theoretical results are simple consequences of the results in Liero et al., who have characterized when optimal solutions of this form exist. Also this contradicts the remark that the authors make that it is better to model \"directly mass variation\" as their formulation is essentially equivalent.\n\nThe paragraph \"Relation to Unbalanced OT\" is, in my opinion, incomplete. The switch to non-convex formulation introduce many differences to convex approaches that are not mentioned: there is no guarantee that a minimizer can be found, there is a bias introduced by the architecture of the neural network, ... Actually, it is this bias that make the formulation useful in high dimension since it is know that optimal transport suffers from the curse of dimensionality (thus it would be useless to try to solve it exactly in high dimension). I suggest to improve this discussion.\n\nOTHER REMARKS\nA small remark: lemma 3.1 is the convex conjugate formula for the phi-divergence in the first argument. I suggest to call it this way to help the reader connect with concepts he or she already knows. Its rigorous proof (with measurability issues properly dealt with) can be found, for instance, in Liero et al. Theorem 2.7. It follows that the central objective (8) is a Lagrangian saddle-point formulation of the problem of Liero et al., where transport plans, scalings and Lagrange multipliers are parameterized by neural networks. I generally think it is best to make the link with previous work as simple as possible.\n\nAlso, Appendix C lacks details to understand precisely how the experiments where done. It is written :\n\"In practice, [the correct output range] can be enforced by parameterizing f using a neural network with a final layer that maps to the correct range. In practice, we also found that employing a Lipschitz penalty on f stabilizes training.\"\nThis triggers two remarks: \n- (i) how precisely is the correct range enforced? This should be stated.\n- (ii) a Lipschitz penalty on f yields a class of functions which is very unlikely to have the properties of Lemma 3.1 ; in fact, this amounts to replacing the last term in (6) by a sort of \"bounded Lipschitz\" distance which has very different property from a f-divergence. This makes the theory of section 3 a bit disconnected from the practice of section 4.\n", "We are happy to hear that you appreciated the revisions and will be happy to include the zebrafish experiment in the main paper for the camera ready. 
Thanks again for all of your helpful feedback.", "### post rebuttal ### The authors addressed most of my concerns and greatly improved the manuscript, and hence I am increasing my score. \n \nSummary: \n\nThe paper introduces a static formulation for unbalanced optimal transport by simultaneously learning a transport map T and a scaling factor xi.\n\nSome theory is given to relate this formulation to unbalanced transport metrics such as the Wasserstein-Fisher-Rao metric of, e.g., Chizat et al. 2018. \n\nThe paper proposes to relax the constraint in the proposed static formulation using a divergence. Furthermore, using a bound on the divergence, the final discrepancy proposed is written as a min-max problem between the witness function f of the divergence and the transport map T and scaling factor xi. \n\nAn algorithm is given to find the optimal map T as a generator in a GAN and to learn the scaling factor and the witness function of the divergence with a neural network parameterization, the whole optimized with stochastic gradient updates. \n\nA small experiment on image-to-image transportation with unbalance in the classes is given and shows how the scaling factor behaves with respect to this kind of unbalance. \n\n\nNovelty and Originality:\n\nThe paper claims that there are no known static formulations with a scaling factor and a transport map learned simultaneously. We refer the authors to Unbalanced Optimal Transport: Geometry and Kantorovich Formulation, Chizat et al. 2015. On page 19 of that paper, Equation 2.33 gives a formulation similar to Equation 4 in this paper. (Note that phi corresponds to T and lambda to xi). This is known as the Monge formulation of unbalanced optimal transport. The main difference is that the authors here introduce a stochastic map T and an additional probability space Z. Assuming that the mapping is deterministic, those two formulations are equivalent. \n\nCorrectness: \n\nThe metric defined in this paper can be written as follows and corresponds to a generalization of the Monge formulation in Chizat et al. 2015:\nL(mu,nu) = inf_{T, xi} int c_1(x, T_x(z)) xi(x) dlambda(z) dmu(x) + int c_2(xi(x)) dmu(x)\n s.t. T_# (xi mu) = nu\nIn order to get a Kantorovich formulation out of this, Chizat et al. 2015 define semi-couplings, and the formulation is given in Equation 3.1, page 20. \n\nThis paper proposes to relax T_# (xi mu) = nu with D_psi (xi \\mu, \\nu) and hence proposes to use:\n\nL(mu,nu) = inf_{T, xi} int c_1(x, T_x(z)) xi(x) dlambda(z) dmu(x) + int c_2(xi(x)) dmu(x) + D_psi (xi \\mu, \\nu)\n\nLemma 3.2 of the paper claims that the formulation above corresponds to the Kantorovich formulation of unbalanced transport. I doubt the correctness of this:\n\nInspecting the proof of Lemma 3.2, L \\geq W seems correct to me, but it is unclear what is going on in the proof of the other direction. The existence of T_x is not well supported by a rigorous proof or citation. Where does xi come from in the third line of the equalities at the end of page 14? I don’t follow the equalities written at the end of page 14. \n\nAnother concern is the space Z: how does the metric depend on this space? Should there be an inf over all Z?\n\nOther comments:\n\n- Appendix A is good; I wish you had baselined your experiments against those algorithms. 
\n\n- The experiments don’t show any benefit for learning the scaling factor, are there any applications in biology that would make a better case for this method?\n\n- What was the architecture used to model T, xi, and f?\n\n- Improved training dynamics in the appendix, it seems you are ignoring the weighting while optimizing on theta? than how would the weighing be beneficial ?", "I have read the rebuttal and the revision of the authors. I thank authors for updating their manuscript that improved a lot. The proof in the appendix is now much easier to follow thanks for improving it. Authors answered most of my concerns. \n\n I think the new experiment Zebrafish embrogenesis is interesting and deserves to be in the main paper you can move it to the main paper (you are allowed up to 10 pages in the main). it would be great to explore more interesting applications like this one. \n\nIn conclusion the paper is in much better shape for publication and hence I am increasing my score to 6. ", "Thank you for the very helpful feedback. We have heavily revised the theoretical parts for clarity and added an experiment to showcase the usefulness of the scaling factor. We believe the problematic aspects have been corrected by revising the proof for clarity and adding a citation for the step that was found questionable.\n\n- Originality\n\nThanks for pointing out the Monge formulation by Chizat et al. (2015). We have revised Section 3 accordingly and now start by pointing out this relation at the beginning of the section.\n\n- Correctness\n\nWe have rewritten the proof to improve clarity. A source of confusion may have been that we were not sufficiently clear about our choice of \\Z and \\lambda: in particular, the lemma holds when \\lambda is an atomless measure on \\Z. In this case it follows from standard results (now cited from Dudley's Real Analysis and Probability) that there exists a measurable function T_x from \\Z to \\Y such that \\gamma_{y|x} is the pushforward of \\lambda under T_x. The choice of \\Z, \\lambda has been clarified in the revised version of the main text, and the steps of the proof in the appendix have been rewritten.\n\n- The experiments don't show any benefit for learning the scaling factor, are there any applications in biology that would make a better case for this method?\n\nAn important problem in biology is lineage tracing of cells between different stages (e.g. of development or disease progression). In these applications it is important to account for the scaling factor since the transport is not balanced; particular cells in the earlier stage are poised to develop into cells seen in the later stage, and those cells should have higher scaling factors. To showcase the relevance of learning the scaling factor for determining these poised cells, we have added an application to single-cell gene expression data taken during zebrafish embryogenesis (see the end of the paper and Appendix D). Namely, we found that the cells in the source population with higher scaling factors were significantly enriched for genes associated with differentiation and development of the mesoderm. This experiment shows that analysis of the scaling factor can be applied towards interesting and meaningful biological discovery.\n\n- What was the architecture used to model T, xi, and f?\n\nThanks for pointing out the missing information. For our experiments, we used fully-connected feedforward networks with ReLU activations. 
The network for \\xi has a softplus activation layer at the end to enforce non-negative values. We now describe this in Appendix C.\n\n- Improved training dynamics in the appendix, it seems you are ignoring the weighting while optimizing on theta? than how would the weighing be beneficial ?\n\nFor training f and \\xi, the weights are directly used. For training T, while the weights are not directly used, they are still indirectly beneficial to T (theta) because they directly affect the training of f which in turn directly affects the training of T. \n\nThanks again for helping us improve our paper with your insightful comments.", "(continued)\n\n- Somewhere in Lemma 3.2 the fact that you had to use an alternative definition \\tilde{W} (by restricting the class of couplings) is not really clarified to the reader. Qualitatively, what does it mean that you restrict the class of couplings to have the same support as \\mu? In which situations would \\tilde{W} be very different from W_{ub} ?\n\nIn the optimal entropy-transport problem (3), the objective contains a \\psi-divergence that penalizes the difference between \\mu and \\gamma_X (the marginal of \\gamma with respect to \\X). Depending on which \\psi-divergence is chosen, it is possible that \\gamma_X has non-zero measure outside of the support of \\mu. Intuitively, this means that the optimal transport scheme adds some mass to \\X where there was previously no mass (since it is outside of the support of \\mu) and then transports this mass to \\Y. But in the asymmetric Monge formulation of (6), all the mass transported to \\Y must come from somewhere within the support of \\mu, since the scaling factor \\xi allows mass to grow but not to materialize outside of its original support. Qualitatively, this is the effect of the support restriction. Thanks for pointing out the lack of clarity; we revised the text accordingly to make this clear to the readers. \n\n- I think it would help for the simple sake of readability to add integration domains under your \\int symbols.\n\nDone; thanks for pointing this out.\n\n- T is used as a subset in Lemma 3.1, while it is used after and before as a map of (x,z)\n\nWe agree that this was confusing and we adjusted the notation accordingly. Thanks for pointing this out.\n\n- T(x,z) looks intuitively like a noisy encoder as in Wasserstein AEs (with, of course, the addition of your term \\xi). Could you elaborate?\n\nIf one disregards the scaling factor \\xi and the unbalanced aspect of our problem, both the WAE paper and our work present Monge-like formulations of the OT problem, where the objective is to learn a stochastic transport map to push one distribution to the other. In our paper, the stochastic transport map is T(x,z). In their paper, since there is a latent space, the stochastic map is the composition of the noisy encoder with the decoder map G. The notation of z is unrelated, however -- we use z as a random variable that introduces randomness into the map T, while in their work it denotes the variable in the latent space. \n\n- I have scanned the paper but did not see how you set lambda.\n\nThanks for pointing this out. We added it to the paper in Appendix C, namely: \"One can take \\lambda to be the standard Gaussian measure if a stochastic mapping is desired ... if a deterministic mapping is desired, then \\lambda can be set to a deterministic distribution.\"\n\nThanks again for helping us improve our paper with your insightful comments.", "Thanks for your helpful comments. 
We have heavily revised section 3 to clarify our contributions and the relation to previous literature, taking into account the comments of all reviewers.\n\n- The experiments are underwhelming. For faces they happen in latent spaces, and therefore one recovers transport between latent spaces later re-visualized through a decoder. For digits, all is fairly simple. \n\nWe agree that learning transport maps between these domains is nothing new. Rather, the main innovation in our numerical experiments is the simultaneous learning of the scaling factor that adjusts mass and accounts for class imbalances between the distributions. For example, in the MNIST experiment, the scaling factor reflects the digit imbalances between the datasets; and in the CelebA faces experiment, the scaling factor reflects the gender imbalance (i.e. predominance of males) in the aged group.\n\nTo further showcase the usefulness of learning the scaling factor, we have added an application to genomics, namely based on single-cell gene expression data taken during zebrafish embryogenesis (see the end of the paper and Appendix D). When modeling transport between populations of cells from different stages of development, one needs to account for the scaling factor since the transport is not balanced: particular cells in the earlier stage are poised to develop into cells seen in the later stage and are thus overrepresented in the later stage. The new experiment shows that the scaling factor can discover these poised cells. Namely, we found that the cells in the source population with higher scaling factors were significantly enriched for genes associated with differentiation and development of the mesoderm. This experiment shows that analysis of the scaling factor can be applied towards interesting and meaningful biological discovery.\n\n-They do not clearly mention whether this alternative UOT approach approximates UOT at all.\n\nOur algorithm solves the formulation of unbalanced OT in (6). The relation to optimal entropy-transport is now clarified in Section 3; namely, the formulations are equivalent when the support of \\gamma for optimal-entropy transport is subject to a support restriction. Therefore our approach does approximate unbalanced OT. Thanks for pointing out the lack of clarity. \n\n- Is the reference to a local scaling (\\xi) for unbalanced transport entirely new? your paper is not clear on that, and it seems to me this idea already appears in the OT literature.\n\nReviewer 2 provided a reference to an existing formulation that uses \\xi. The relation to our work is now made clear in the revised version at the beginning of Section 3. \n\n- I do not understand the connexion you make with GANs. In what sense can you interpret any of your networks as generators?...\n\nIn the revised version, the connection with GANs is clarified. We discuss how one can interpret T as a generator and Algorithm 1 as a generative-adversarial game between (T, \\xi) and f, similar to a GAN. 
In particular,\n\n- T takes a point x ~ \\lambda and transports it from X to Y by generating T(x, z) where z ~ \\lambda.\n- \\xi determines the importance weight of each transported point\n- their shared objective is to minimize the divergence between transported samples and real samples from \\nu that is measured by the adversary f\n- cost functions c_1 and c_2 encourage T, \\xi to find the most cost-efficient strategy\n\nTo clarify, our paper does not contain results where images are generated from random noise; the generator in our framework is the transport map that takes a random sample from the source distribution and generates a sample in the target distribution. This is in line with previous works (e.g. unpaired image translation, CycleGAN by Zhu et al, https://arxiv.org/abs/1703.10593) where the generator in the GAN transports samples between domains rather than generating samples from random noise. \n\n- Numerical benchmarks... Is there a way you could compare yourselves with Chizat's approach?\n\nA numerical comparison of the methods would not really be meaningful. For discretized problems, we would expect the Chizat et al. method to outperform our method, since it was designed particularly for the discrete setting and solves a convex optimization problem with convergence guarantees. However, for high-dimensional/continuous problems, Chizat et al. cannot be used. Hence the methods should be considered complementary, each with its own application domains.\n", "Thanks for your kind and constructive comments. \n\nWe agree that section 3 could have been written more clearly, both in terms of connecting our work to existing work and in terms of motivating the material better and making it more accessible to readers. We heavily revised section 3 based on your feedback. In particular, we now begin the section by relating our formulation to the formulation of unbalanced Monge OT by Chizat et al. (2015) and then equate the relaxed problem with the optimal entropy-transport problem by Liero et al. (2018) as per your suggestion. The point that optimal entropy-transport is the convex/transport plan version of (6) is now conveyed more clearly. What we meant by \"directly modeling mass variation\" is that for applications, it is often important or more intuitive to directly learn the scaling factor that indicates how much local mass dilation/contraction there is. We did not mean to imply that optimal entropy-transport does not involve mass variation; we clarified this in our revision. Additionally, the discussion comparing our approach with the existing methods based on the convex formulation has been expanded at the end of Section 3.\n\nIn general, Appendix C has been expanded with more implementation details. In response to specific comments:\n\nAppendix C: \n- (i) how precisely is the correct range enforced? This should be stated.\n\nWe have added to Table 1 in the Appendix some examples of final layers that show precisely how the correct range is enforced. \n\n- (ii) a Lipschitz penalty on f yields a class of functions which is very unlikely to have the properties of Lemma 3.1 ; in fact, this amounts to replacing the last term in (6) by a sort of \"bounded Lipschitz\" distance which has very different property from a f-divergence. This makes the theory of section 3 a bit disconnected from the practice of section 4.\n\nIt should be noted that our algorithm also works without the gradient penalty on f. 
We added the gradient penalty since in practice this improves the stability of the training, as has also been reported in the GAN literature. \n\nIn addition, we describe what the theoretical implications are of using the gradient penalty in the Appendix as follows:\n\n \"A gradient penalty on f changes the nature of the relaxation of (5) to (6): the right-hand side of (7) [convex conjugate form of divergence] is no longer equivalent to the \\psi-divergence, but is rather a lower-bound with a relation to bounded Lipschitz metrics (Gulrajani 2017). In this case, while the problem formulation is not equivalent to optimal entropy transport, it is still a valid relaxation of (5) [unbalanced Monge OT].\"\n\nThanks again for helping us improve our paper with your insightful comments." ]
[ -1, 7, -1, 6, -1, 6, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, 4, -1, 4, -1, -1, -1, -1, -1 ]
[ "B1lLbqXI0m", "iclr_2019_HyexAiA5Fm", "By6VIm8AQ", "iclr_2019_HyexAiA5Fm", "Hkenm8WQJV", "iclr_2019_HyexAiA5Fm", "H1xQ6gNIC7", "BklmCVWU3X", "B1lUURJanX", "B1lUURJanX", "rklJuu_Spm" ]
iclr_2019_Hyfn2jCcKm
Solving the Rubik's Cube with Approximate Policy Iteration
Recently, Approximate Policy Iteration (API) algorithms have achieved super-human proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search), and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik’s Cube has only a single solved state, and episodes are not guaranteed to terminate. This poses a major problem for these API algorithms since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik’s Cube and the 15-puzzle without relying on human data. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge.
accepted-poster-papers
The paper introduces a version of approximate policy iteration (API), called Autodidactic Iteration (ADI), designed to overcome the problem of sparse rewards. In particular, the policy evaluation step of ADI is trained on a distribution of states that allows the reward to easily propagate from the goal state to states farther away. ADI is applied to successfully solve the Rubik's Cube (together with other existing techniques). This work is an interesting contribution where the ADI idea may be useful in other scenarios. A limitation is that the whole empirical study is on the Rubik's Cube; a controlled experiment on other problems (even if simpler) can be useful to understand the pros & cons of ADI compared to others. Minor: please update the bib entry of Bottou (2011). It's now published in MLJ 2014.
train
[ "SJx3iUcc27", "r1xDiKEcC7", "Bkgout45CX", "Skg0UYN5Cm", "Bye2VTcq27", "HJlaAp7wn7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors show how to solve the Rubik cube using reinforcement learning (RL) with Monte-Carlo tree search (MCTS). As common in recent applications like AlphaZero, the RL part learns a deep network for policy and a value function that reduce the breadth (policy) and depth (value function) of the tree searched in MCTS. This basic idea without extensions fails when trying to solve the Rubik cube because there is only one final success state so the early random policies and value functions never reach it. The solution proposed by the authors, called autodidactic iteration (ADI) is to start from the final state, construct a few previous states, and learn value function on this data where in a few moves a good state is reached. The distance to the final state is then increased and the value function learn more and more. This is an interesting idea that solves the Rubik cube, but the paper lacks a more detailed study. What other problems can be solved like this? Would a single successful trajectory be enough to use it in a wider context (as in https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/) ? Is the method to increase distance from final state specific to Rubik cube or general? Is the training stable with respect to this or is it critical to get it right? The lack of analysis and ablations makes the paper weaker.\n\n[Revision] Thanks for the replies. I still believe experiments on more tasks would be great but will be happy to accept this paper.", "We would like to thank the reviewer for their helpful comments and for pointing us to the github resource.\n\n> “I am slightly disappointed that the paper does not link to a repository with the code. Is this something the authors are considering in the future?”\n\nWe fully agree with the reviewer that releasing the code is important. We plan to release the code if the paper gets accepted. We have not done so yet to maintain anonymity.\n\n> “I am also curious whether/how redundant positions are handled by the proposed approach...Does the algorithm forbid the reverse of the last action? Is the learned value/policy function good enough that backwards moves are seldom explored? Since the paper mention that BFS is interesting to remove cycles, I assume identical states are not duplicated. Is this correct?”\n\nWe did not strictly forbid reverse moves during the search. However, because we penalize longer solutions, because MCTS attempts many paths simultaneously, and because the virtual loss prevents duplicate exploration, the solver rarely explored repeat states. The BFS expansion of the path was a post-processing step we applied to the resulting path to obtain slightly better solutions. Although this did remove duplicates (if they existed), it more importantly allowed us to find \"shortcuts\" within our path. For example, we can replace say a 7-move sequence with a slightly more efficient 5-move sequence that MCTS didn't find. This effect was minimal but consistent.\n", "We would like to thank the reviewer for their helpful comments.\n\n> “What other problems can be solved like this?”\n\nThis approach can be used in two different types of problems. The first is planning problems in environments with a high number of states. The second type of problem is when you need to find one specific goal but might not know what the goal is. However, if you have examples of solved examples you can train a value function using ADI on these solved examples and hopefully it will transfer to the new problems. 
For instance, in protein folding, the goal is to find the protein conformation with minimal free energy. We don’t know what the optimal conformation is beforehand, but we can train a value network using ADI on proteins where we know what their optimal conformation is. \n\n\n> “Would a single successful trajectory be enough to use it in a wider context? (as in https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/)”\n\nFor our method to work, all we need is the ability to start from the goal state and take moves in reverse. Therefore, not only is a single successful trajectory sufficient, all that is needed is the final state of that successful trajectory: the goal state. Using only the goal state, it can generate other states by randomly taking actions away from the goal state.\n\n> “Is the method to increase distance from final state specific to Rubik cube or general?”\n\nThe core concept is that the agent uses dynamic programming to propagate knowledge from easier examples to more difficult examples. Therefore, this method is applicable to any scenario in which one can generate a range of states whose difficulty ranges from easy to hard. For our method, we achieved this by randomly scrambling the cube 1 to N times. There has been other work in the field of robotics [1], as well as the work on Montezuma’s Revenge provided by the reviewer, that builds a curriculum starting by first generating states close to the goal and then progressively increasing the difficulty as performance increases. Instead of adaptively changing the state distribution during training, our method fixes the state distribution before training while the targets for the state values change as the agent learns.\n\n> “Is the training stable with respect to this or is it critical to get it right?”\n\nWe found that the value of N, the maximum number of times to scramble the solved cube, was not crucial to the stability of training. It only had an effect on the final performance. If N was too low (e.g. 5), then DeepCube only performed well on cubes close to the solutions, but not on more complicated cubes. If N was too high (e.g. 100), then it took more iterations to learn; nonetheless, the agent would still learn. We found that N=30 resulted in both good value function estimation as well as reasonable training time.\n\n[1] Florensa, C., Held, D., Wulfmeier, M., Zhang, M., & Abbeel, P. (2017). Reverse curriculum generation for reinforcement learning. arXiv preprint arXiv:1707.05300.\n", "We would like to thank the reviewer for their helpful comments.\n\n> “I am not very clear how to assign the rewards based on the stored states?”\n\nThe environment returns a reward of +1 for the solved state and a reward of -1 for all other states. From this single positive reward given at the solved state, DeepCube learns a value function. Using dynamic programming, DeepCube improves its value estimate by first learning the value of states one move away from the solution and then building off of this knowledge to improve its value estimate for states that get progressively further away from the solution.\n\n> “Do you have solving time comparison between your method and other approximate methods?”\n\nYes, we have improved the efficiency of our solver since we last submitted our paper by optimizing our code. Our method takes, on average, 40 seconds; whereas the fastest optimal solver we could find (implemented by Tomas Rokicki to find “God’s number” [1]) for the Rubik’s Cube takes 2.7 seconds. 
These results are summarized in Section C of the appendix of the updated paper. While Rokicki’s algorithm is faster, Rokicki’s algorithm also uses knowledge of groups, subgroups, cosets, symmetry, and pattern databases. On the other hand, our algorithm does not exploit any of this knowledge and learns how to solve the Rubik’s Cube given only basic information about the problem. In addition, Rokicki’s solver uses 182GB of memory to run whereas ours uses at most 1GB. These differences are summarized in the updated paper. We are currently making better use of parallel processing and memory to improve the speed of our algorithm.\n\n[1] Rokicki, T., Kociemba, H., Davidson, M., & Dethridge, J. (2014). The diameter of the Rubik's Cube group is twenty. SIAM Review, 56(4), 645-670.\n\n", "The authors provide a good idea to solve the Rubik’s Cube using an approximate policy iteration method, which they call Autodidactic Iteration. The method overcomes the problem of sparse rewards by creating its own reward system. Autodidactic Iteration starts with the solved cube and then propagates backwards from that state. \n\nThe testing results are very impressive. Their algorithm solves 100% of randomly scrambled (1000 times) cubes and has a median solve length of 30 moves. God’s number is 26 in the quarter-turn metric, while their median of 30 moves is only 4 moves away from God’s number. I appreciate the absence of human domain knowledge most, because a more general algorithm can be applied to other areas without requiring prior knowledge. \n\nThe training concept of designing rewards by starting from the solved state and expanding outwards is smart, but it is not very clear to me how the rewards are assigned based on the stored states. Applying only a pure reinforcement learning method sounds simple, but the performance is great. The results are good enough with the neural network’s non-random search guidance. Do you have a solving-time comparison between your method and other approximate methods? \n\nPros: - solves nearly 100% of problems with a reasonable number of moves.\n - a more general algorithm for solving problems with unknown state values.\n\nCons: - the Rubik’s Cube problem has been solved with other optimal approaches in the past. This method is not as competitive as other optimal solvers within a similar running time for this particular game.\n - to solve higher-dimensional cubes, this method might run out of time. \n", "This paper introduces a deep RL algorithm to solve the Rubik's cube. The particularity of this algorithm is to handle the huge state space and very sparse reward of the Rubik's cube. To do so, a) it ensures each training batch contains states close to the reward by scrambling the solution; b) it computes an approximate value and policy for that state using the current model; and c) it weights data points by the inverse of the number of random moves from the solution used to generate that training point. The resulting model is compared to two non-ML algorithms and shown to be competitive either on computational speed or on the quality of the solution. \n\nThis paper is well written and clear. To the best of my knowledge, this is the first RL-based approach to handle the Rubik's cube problem so well. The specificities of this problem make it interesting. While the idea of starting from the solution seemed straightforward at first, the paper describes more advanced tricks claimed to be necessary to make the algorithm work. The algorithm seems to be quite successful and competitive with expert algorithms, which I find very nice. 
Overall, I found the proposed approach interesting, and sparsity of reward is an important problem, so I would rather be in favor of accepting this paper. \n\nOn the negative side, I am slightly disappointed that the paper does not link to a repository with the code. Is this something the authors are considering in the future? While it does not seem difficult to code, it is still nice to have the experimental setup.\n\nThere have been (unsuccessful) attempts to solve the Rubik's cube using deep RL before. I found some of them here: https://github.com/jasonrute/puzzle_cube . I am not sure whether these can be considered prior art as I could not find associated accepted papers but some are quite detailed. Some could also provide additional baselines for the proposed methods and highlight the challenges of the Rubik's cube.\n\nI am also curious whether/how redundant positions are handled by the proposed approach and wish this were discussed a bit. Considering the nature of the state space and the dynamics, I would have expected this to be a significant problem, unlike in Go or chess. Does the algorithm forbid the reverse of the last action? Is the learned value/policy function good enough that backwards moves are seldom explored? Since the paper mentions that BFS is interesting to remove cycles, I assume identical states are not duplicated. Is this correct?" ]
[ 7, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, -1, 4, 3 ]
[ "iclr_2019_Hyfn2jCcKm", "HJlaAp7wn7", "SJx3iUcc27", "Bye2VTcq27", "iclr_2019_Hyfn2jCcKm", "iclr_2019_Hyfn2jCcKm" ]
iclr_2019_Hyg1G2AqtQ
Variance Reduction for Reinforcement Learning in Input-Driven Environments
We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system. Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking. Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns. Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training. We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines. We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs. Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.
accepted-poster-papers
This paper proposes an input-dependent baseline function to reduce variance in policy gradient estimation without adding bias. The approach is novel and theoretically validated, and the experimental results are convincing. The authors addressed nearly all of the reviewers' concerns. I recommend acceptance.
train
[ "BJx1GFr92X", "S1e9N1YX6m", "HklKcMENAm", "ByxufjGtaX", "SylCvSzKaX", "SylcuqGYp7", "H1xPbNztp7", "ryxDVNFc2X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\n\nSummary: This work considers the problem of learning in input-driven environments -- which are characterized by an addition stochastic variable z that can affect the dynamics of the environment and the associated reward the agent might see. The authors show how the PG theorem still applied for a input-aware critic and then they show that the best baseline one can use in conjecture with this critic is a input-dependent one. My main concerns are highlighted in points (3) and (4) in the detailed comments below. \n\nClarity: Generally it reads good, although I had to go back-and-forth between the main text and appendix several times to understand the experimental side. Even with the supplementary material, examples in Section 3 and Sections 6.2 could be improved in explanation and discussion.\n\nOriginality and Significance: Limited in this version, but could be improved significantly by something like point (3)&(4) in detailed comments. Fairly incremental extension of the PG (and TRPO) with the conditioning on the potentially (unobserved) input variables. The fact that a input-aware critic could benefit from a input-aware baseline is not that surprising. The fact that it reduces variance in the PG update is an interesting result; nevertheless I strongly feel the link or comparison needed is with the standard PG update. \n\nDisclaimer: I have not checked the proofs in the appendix.\n\nDetailed comments:\n\n1) On learning the input-dependent baselines: Generalising over context via a parametric functional approximation, like UVFAs [1] seems like a more natural first choice. Also these provide a zero-shot generalisation, bypassing the need for a burn-in period of the task. Can you comment on why something like that was not used at least as baseline?\n\n2) Motivating example. The exposition of this example lacks a bit of clarity and can use some more details as it is not a standard MDP example, so it’s harder to grasp the complexity of this task or how standard methods would do on it and where would they struggle. I think it’s meant to be an example of high variance but the performance in Figure 2 seems to suggest this is actually something manageable for something like A2C. It is also not clear in this example how the comparison was done. For instance, are the value functions used, input-dependent? Is the policy input aware? \n\n3) Input-driven MDP. Case 1/Case 2 : As noted by the authors, in case 1 if both s_t and z_t are observed, this somewhat uninteresting as it recovers a particular structured state variable of a normal MDP. I would argue that the more interesting case here, is where only s_t is observed and z_t is hidden, at least in acting. This might still be information available in hindsight and used in training, but won’t be available ‘online’ -- similar to slack variable, or privileged information at training time. And in this case it’s not clear to me if this would still result in a variance reduction in the policy update. Case 2 has some of that flavour, but restricts z_t to an iid process. Again, I think the more interesting case is not treated or discussed at all and in my opinion, this might add the best value to this work.\n \n4) Now, as mentioned above the interesting case, at least in my opinion, is when z is hidden. From the formulae(eq. (4),(5)), it seems to be that the policy is unaware of the input variables. Thus we are training a policy that should be able to deal with a distribution of inputs z. 
How does this compare with the normal PG update, which would consider a critic averaged over z-s and a z-independent baseline? Is the variance of the proposed update always smaller than that of the standard PG update when learning a policy that is unaware of z? \n\nReferences:\n[1] Schaul, T., Horgan, D., Gregor, K. and Silver, D., 2015, June. Universal value function approximators. In International Conference on Machine Learning (pp. 1312-1320).\n\n[POST-rebuttal] I've read the authors' response and it clarified some of the concerns. I'm increasing the score accordingly.", "\nIntroduction: \n“Since the state dynamics and rewards depend on the input process” -> why do the rewards depend on the input process conditioned on the state? \n\nDoes the scenario being considered basically involve any scenario with stochastic dynamics? Or is the fact that the disturbances may come from a stateful process what makes this distinct?\n\n“if the input sequence following the action” -> vague; it would help if this were written a bit more clearly. \n\nIs just the baseline input dependent or does the policy need to be input dependent as well? From later reading, this point is still quite confusing. One line says “At time t, the policy only depends only on (st, zt).”. Another line says that the policy is pi_theta(a|s), with no mention of z. I’m pretty confused by the consistency here. This is also important in the proof of Lemma 1, because P(a|s,z) = pi_theta(a|s). Please clarify this.\n\nSection 4:\n Is the IID version of Figure 3 basically the same as stochastic dynamics? (Case 2)\n\nSection 4.1\n“In input-driven MDPs, the standard input-agnostic baseline is ineffective at reducing variance” -> can you give some more intuition/proof as to why. \n\nIn Lemma 2, how come the Q function is dependent on z, but the policy is only dependent on s (not even the current and past z’s)? \n\nI think the proof of Theorem 1 should be included in the main paper rather than unnecessary details about policy gradient. \n\nTheorems 1 and 2 are really some of the most important parts of the paper, and they deserve a more thorough discussion besides the 2 lines that are in there right now. \n\n\nAlgorithm 1 -> should it be eqn 4?\n\nThe meta-algorithm provided in Section 5 is well motivated and well described. An experimental result including what happens with LSTM baselines would be very helpful. \n\nOne question is whether it is actually possible to know what the z’s are at different steps? In some cases these might be latent and hard to infer.\n\nCan you compare to Clavera et al. 2018? It seems like it might be a relevant comparison. \n\nThe difference between MAML and the 10 value network seems quite marginal. Can the authors discuss why this is? And when would we expect to see a bigger difference? \n\nRelated work: Another relevant piece of work is\nMeta-Learning Priors for Efficient Online Bayesian Regression\n\nMajor todos:\n1. Improve clarity of which z's are observed, which are not, and whether the policy is dependent on these or not. \n2. Compare with other prior work such as Clavera et al., Harrison et al. \n3. Add more naive baselines such as training an LSTM, etc. \n4. Provide more analysis of the meta-learning component: how much does it actually help?\n\nOverall impression: 
I think this paper covers an interesting problem, and proposes a simple, straightforward approach conditioning the baseline and the critic on the input process. What bothers me in the current version of the paper is the lack of clarity about the observability of z and where it comes from, and also some lack of comparisons with other prior methods. I think these would make the paper stronger.", "We again thank all reviewers for their comments, and have updated our paper accordingly.\n\nSpecifically, the major changes are:\n - In §4.1, we improved the clarity of our notation by explicitly defining the observation \\omega_t at each time t. We used \\omega_t instead of o_t because the letter o is visually too similar to a. We updated our theorems and proofs using this notation.\n - We extended case 2 of the input-driven MDP to include the POMDP case (Figure 3b), and have shown that all our derivations and conclusions apply. \n - We added a comparison to the meta-policy optimization approach (Clavera et al. 2018) in Appendix N.\n - In addition to mentioning our findings with LSTM in §5, we also added the corresponding learning curves in Appendix G.\n - We updated our motivating example (§3) to give a better intuition.\n - We shortened the policy gradients description in the introduction and background sections, and moved the proof of Theorem 1 into the main text in §4.1.\n - In §5, we added a discussion of when we expect the gain of MAML to further exceed that of the multi-value-network approach.\n\nPlease let us know if you have further comments. Thanks!\n", "\n-- Why do the rewards depend on the input process conditioned on the state? \n\nTo clarify, by “state dynamics and rewards depend on the input process,” we mean that the input process can affect the rewards because it affects the state transitions. However, our model indeed covers the general case, in which the reward might depend on both the state and the input. For example, consider a robotics task in which the reward is the speed of the robot, the state is the current position of the robot’s joints, and the input is an external force applied to the robot at each step. The speed of the robot (reward) depends on the force (input) even with knowledge of its current position (state). \n\n-- What makes the input process we considered distinct from any stochastic dynamics?\n\nThe main distinction here is that the input process must be “exogenous,” i.e. it doesn’t depend on the state and actions; see the graphical models in Figure 3. This property is necessary for the input-dependent baseline to not introduce bias. \n\n-- A strong action could end up with a lower-than-average return if the input sequence following the action is unfavorable -> vague\n\nThis sentence was trying to give an intuition for why the variance in reward caused by the input process can confuse a policy gradient algorithm. We will rephrase the sentence and explain this better. We will also provide more intuition about this point in Section 3. \n\nConsider the load balancing example in Section 3. The return (total reward) for an action depends on the job arrival sequence that follows that action. For example, if the arrivals consist of a burst of large jobs, the reward (negative number of jobs in the system) will be poor, regardless of the action. We will add this intuition to Section 3. \n\n-- Is just the baseline input dependent or does the policy need to be input dependent as well? 
\n\nThe baseline depends on the sequence of input values z_{t:\infty}, but the policy can only depend on the input observed at the current step t. Note that the policy cannot depend on the future input values, since at time t, the agent has no way of knowing z_{t+1:\infty}. \n\n-- “In input-driven MDPs, the standard input-agnostic baseline is ineffective at reducing variance” -> can you give some more intuition/proof as to why.\n\nAs mentioned above, we will add more intuition for this to Section 3. \n\n-- More discussion about Theorems 1 and 2.\n\nThanks for the suggestion! We will trim the discussion of policy gradient and include the proof of Theorem 1.\n\n-- Algorithm 1 should use eqn 4.\n\nYes, it is more appropriate to refer to Equation 4 in Algorithm 1. We will use this.\n\n-- Is it possible to know z at each step? What if z is not observable and hard to infer?\n\nIn many applications, the input process is naturally observable to the agent. For example, in most computer systems applications, the inputs to the environments (e.g., network bandwidth, workload) are measured or readily observed. However, even if the agent does not observe the input at each step, our proposed approach (multi-value-network and meta-learning) can still work as long as we can repeat the same input sequence during training. As discussed in Section 5, this can be done with a simulator (e.g., control the wind in the MuJoCo simulator) or by repeating input sequences (e.g., repeat the same workload for a load balancing agent) in an actual system. For future work, we think that investigating efficient architectures for input-dependent baselines for cases where the input process cannot be controlled in training is an interesting direction.\n\n-- Meta Learning Priors for Efficient Online Bayesian Regression\n\nThank you for the suggestion. This is a relevant piece of work on applying meta learning for faster adaptation of GP regression. We will add it in the related work section.\n", "Thank you for the insightful comments!\n\nRegarding these comments:\n\n1) UVFAs predict values based on specific goals. These methods require taking a “goal embedding” explicitly as input. In our formulation of input-driven environments, however, there aren’t really different goals in each task. Nonetheless, one can still use a similar idea to take the exogenous sequence as an explicit input in the value function, using recurrent neural network structures such as LSTM. We actually did this and reported our findings in the paper, in the beginning of Section 5: “A natural approach to train such baselines is to use models that operate on sequences (e.g., LSTMs). However, learning a sequential mapping in a high-dimensional space can be expensive. We considered an LSTM approach but ruled it out when initial experiments showed that it requires orders of magnitude more data to train than conventional baselines for our environments.” We intend to add an experiment showing the learning curve with an LSTM approach to the appendix.\n\n2) The point of this example is to show that the variance from the input process can negatively affect the policy, even for an extremely simple task. In this 2-server load balance task, the agent should just learn the simple optimal policy of joining the shortest queue (visualized in Figure 2(c) left). However, the variance in the input sequence makes the PG unable to converge to the optimum. Here, we compared the vanilla A2C with the standard state-only baseline to that with the input-dependent baseline. 
It is clear that vanilla A2C performs suboptimally (Figure 2(b) right); and this is due to the significant difference in the PG variance in different baselines (Figure 2(b) left, notice the log scale). \n\nThe reason that vanilla A2C is ineffective in this example is that the return (total reward) for an action depends on the job arrival sequence that follows that action. For example, if the arrivals consist of a burst of large jobs, the reward (negative number of jobs in the system) will be poor, regardless of the action. We will expand the discussion in Section 3 to provide more details and intuition. \n\nAbout the input to the baseline and policy: the input-dependent baseline takes state s_t and the entire future input process z_{t:\\infty} as input; the state-only baseline only takes s_t as input; in both cases, the policy network takes s_t and z_t (only at time t) as input.\n\n3 and 4) Thank you for this interesting comment. We focused on the two cases in Figure 3 mainly because they result in fully observable MDPs, and in many applications of interest, the input is readily observable. However, the scenario in which the input z_t is not observed is indeed also interesting. This case results in a POMDP. \n\nInput-dependent baselines reduce variance in the POMDP case as well. Our results (e.g., Theorems 1 and 2) also apply to this setting. In fact, in the POMDP case, the input process does not even need to be Markov; it can be any general stochastic process that does not depend on the states and actions. \n\nIntuitively, the reason is that much of the variance in PG in input-driven environments is caused by the variance in the input sequence that follows an action. For example, in the windy walker environment (Figure 1c), it is the entire sequence of wind after step t that affects the total reward, not just the wind observation at time t. As a result, regardless of whether or not the input is observed at each step t, using the entire input sequence in the baseline reduces variance. \n\nInterestingly, the HalfCheetah with floating tiles environment (Figure 1d) is actually a POMDP---the agent only observes the torques of the cheetah’s body but not the buoyancy of the tiles. As shown in Figure 4 (middle), our technique helped reduce variance and improve PG performance. Also, we re-ran our experiments on the Walker2d with wind environment without providing z (the wind) to the policy. The results show that our input-dependent baseline improves the policy performance similar to the case where z is observed. We will shortly add this result to the paper. \n\nIn summary, we are making the following changes to the paper. We will add a case for POMDP to Figure 3, and discuss the derivation for the POMDP (which is almost identical to the MDP case). We will also include the POMDP version of Walker2d with wind result. \n\nWe also realize that the notation was confusing. As mentioned in the 2nd paragraph of page 5, we were using s_t to denote the tuple (s_t, z_t) in the derivations. We will improve the notation by explicitly defining the observed signal, o_t, used by the policy in each case. For the MDP case, o_t = (s_t, z_t). For the POMDP case, o_t = s_t. \n", "Thank you for the constructive comments!\n\nWe first address the major comments and then respond to the detailed questions in a separated comment.\n\n1. [What is observed?] During policy inference at each MDP step t, the agent observes s_t and z_t (the current value of the input process). 
Therefore the policy can depend on the current observed value of the input z_t, but not on the future input sequence z_{t:\\infty} (which has not yet happened). At training time, however, the baseline computation for step t depends on the entire future sequence z_{t:\\infty}. As explained in the beginning of Section 4.1, this is possible because the entire input sequence is known at training time. \n\nWe realize that the notation was confusing. As mentioned in the 2nd paragraph of page 5, we use s_t to denote the tuple (s_t,z_t) for the derivations. We will improve the notation by explicitly defining the observed signal, o_t = (s_t, z_t), which the policy takes as input at each step t.\n\n2. [Additional comparisons to prior work] Policy adaptation approaches like Clavera et al. learn a “meta-policy” that can be quickly adapted for different environments. By contrast, our goal is to learn a single policy that performs well in the presence of a stochastic input process. In other words, we are improving policy optimization itself in environments with stochastic inputs. We do not consider transfer of a policy trained for one environment to another. In terms of training a common policy, our work is more related to RARL (Pinto et al.), which we discuss and compare with in Appendix L.\n\nIt is worth noting that approaches like Clavera et. al. are well-suited to handling model discrepancy between training and testing. However, in our setting, there isn’t any model discrepancy. In particular, the distribution of the input process is the same during training and testing. Nonetheless, our work shows that standard policy gradient methods have difficulty in input-driven environments, and input-dependent baselines can substantially improve performance.\n\nTherefore our work is orthogonal and complementary to policy adaptation approaches. Since some of these methods require a policy optimization step (e.g., Section 4.2 of Clavera et al. 2018), our input-dependent baseline can help these methods by reducing variance during training. Appendix L shows an example of such improvements for RARL. We will try to also add an example for a policy adaptation approach. \n \n3. [The LSTM method for learning input-dependent baselines] LSTM suffers from unnecessarily high complexity in training. In our experiments, we considered an LSTM approach but ruled it out when initial experiments showed that it requires orders of magnitude more data to train than conventional baselines for our environments (cf. beginning of Section 5). We will add the learning curves with LSTM baseline in the appendix.\n\n4. [The meta-learning baseline] The actual performance gain for a meta-learned baseline over a multi-value-network is environment-specific. Conceptually, the multi-value-network falls short when the task requires training with a large number of input instantiations to generalize to new input instances. We have not analyzed how policy quality varies with the number of input instantiations considered during training. However, we expect that this depends on a variety of factors, such as the distribution of the input process (e.g., from a large deviations standpoint); the time horizon of the problem; the relative magnitude of the variance due to the input process compared to other sources of randomness (e.g., actions). The advantage of the meta-learning approach compared to the multi-value network approach is that we can train with an unbounded number of input instantiations. 
We will add this discussion to Section 5.", "We appreciate your encouraging comments!\n\nWe agree that the traffic control environment is a perfect fit for the techniques we proposed. Thanks for the suggestions and the pointers to the existing simulators---we will mention these potential applications in the introduction/conclusions. \n\nIn our submission, we moved the proofs to the appendix due to space constraints. We will trim down the text of the facts about PG methods to clear up room for the proof of Theorem 1.\n", "The paper introduces and develops the notion of input-dependent baselines for Policy Gradient Methods in RL.\n\nThe insight developed in the paper is clear: in environments such as data centers or outside settings, external factors (traffic load or wind) constitute high magnitude perturbations that ultimately strongly change rewards.\nLearning an input-dependent baseline function helps clear out the variance created by such perturbations in a way that does not bias the policy gradient estimate (the authors provide a theoretical proof of that fact).\n\nThe authors propose different methods to train the input dependent baseline function:\n o) a multi-value network based approach\n o) a meta-learning approach\nThe performance of these two methods is compared on simulated robotic locomotion tasks as well as a load balancing and video bitrate adaptation task.\nThe input dependent baseline strongly outperforms the state dependent baseline in both cases.\n\nStrengths:\n o) The paper is well written\n o) The method is novel and simple while strongly reducing variance in Monte Carlo policy gradient estimates without inducing bias.\n o) The experimental evidence is strong.\n\nWeaknesses:\n o) Vehicular traffic has been the subject of recent development through deep reinforcement learning (e.g. https://arxiv.org/pdf/1701.08832.pdf and https://arxiv.org/pdf/1710.05465.pdf). In this particular setting, exogenous noise (demand for throughput and accidents) could strongly benefit from input dependent baselines. I believe the authors should mention such potential applications of the method which may have major societal impact.\n o) There is a lot of space dedicated to well-known facts about policy gradient methods. I believe it could be more impactful to put the proof of Theorem 1 in the main body of the paper as it is clearly a key theoretical property." ]
[ 6, 7, -1, -1, -1, -1, -1, 9 ]
[ 4, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_Hyg1G2AqtQ", "iclr_2019_Hyg1G2AqtQ", "iclr_2019_Hyg1G2AqtQ", "SylcuqGYp7", "BJx1GFr92X", "S1e9N1YX6m", "ryxDVNFc2X", "iclr_2019_Hyg1G2AqtQ" ]
iclr_2019_HygQBn0cYm
Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic
Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon. We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.
accepted-poster-papers
Reviewers are in consensus and recommended acceptance after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission for the camera-ready version.
train
[ "ByeJ-YFahQ", "HJlDT4Zq0m", "HkeR-X8w2m", "Hkgnb3au07", "SJgb6s6uR7", "HJl8WopdRQ", "Ske7vSaORQ", "H1g7krauRQ", "SyeU6MK03Q" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper addresses the difficulty of covariate shift in model-based reinforcement learning. Here, the distribution over trajectories during is significantly different for the behaviour or data-collecting policy and the target or optimised policy. As a mean to address this, the authors propose to add an uncertainty term to the cost, which is realised by the trace of the covariance of the outputs of a MC dropout forward model. The method is applied to driving in dense traffic, where even single wrong actions can be catastrophic.\n\nI want to stress that the paper was a pleasure to read. It was extraordinarily straightfoward to follow, because the text was well aligned with the necessary equations.\n\nThe introduction and related work seem complete to me, with two exceptions:\n\n- Depeweg, S., Hernandez-Lobato, J. M., Doshi-Velez, F., & Udluft, S. \n (2018, July). Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning. In *International Conference on Machine Learning* (pp. 1192-1201).\n- Thomas, Philip S. *Safe reinforcement learning*. Diss. University of Massachusetts Libraries, 2015.\n\nThe work by Depeweg et al addresses quite the same question as the authors of this work, but with a broader scope (i.e. not limited to traffic) but very much the same machinery. There are some important theoretical insights in this work and the connection to this submission should be drawn. In particular, the proposed method needs to be either compared to this work or it needs to be clarified why it is not applicable.\n\nThe latter appears to be of less significance in this context, but I found robust offline policy evaluation underrepresented in the related work. \n\nI wonder if there is a way for a neural network to \"hack\" the uncertainty cost. I suppose that the proposed approach is an approximation to some entropy term, and it would be informative to see how exactly. \n\nThe approach shown by Eq 1 appears to be an adhoc way of estimating whether the uncertainty resulting from an action is due to the data or the model. What happens if this approach is not taken?\n\nThe objective function of the forward model is only given in the appendix. I think it needs to be moved to the main text, especially because the sum-of-squares term indicates a homoskedastic Gaussian for a likelihood. This has implications for the uncertainty estimates (see point above).\n\nOverall, the separation of data uncertainty/risk vs model uncertainty is not done. This indicates that heterskedastic environments are candidats where the method can fail, and this limitation needs to be discussed or pointed out.\n\nFurther, the authors did not observe a benefit from using a stochastic forward model. Especially, if the prior instead of the approximate posterior is used. My point would be that, depending on the exact grapical model and the way the sampling is done to train the policy, it is actually mathematically *right* to sample from the prior. This is also how it is described in the last equation of section 2. \n\n## Summary\n\nOverall, I liked the paper and the way it was written. However, there are some shortcomings, such as the comparison to the work by Depeweg et al, which does a very similar thing. Also, justifying the used heuristics as approximations to a principled quantity would help. It appears that the question why and how stochastic forward models should be used requires further investigation.", "We have made a few additional formatting changes, please see the updated version. 
", "Pros:\nThe paper formulates the driving policy problem as a model-based RL problem. Most related work on driving policy has been traditional robotics planning methods such as RRT or model-free RL such as policy gradient methods.\n\nThe policy is learned through unrolling a learned model of the environment dynamics over multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory.\n\nThe cost combine the objective the policy seeks to optimize (proximity to other cars) and an uncertainty cost representing the divergence from the states it is trained on.\n\nCons:\n\nThe model based RL formulation is pretty standard except that the paper has a additional model uncertainty cost.\n\nRealistically, the output of driving policy should be planning decision, i.e. the waypoints instead of steering angles and acceleration / deceleration commands. There does not seem to be a need to solve the control problem using learning since PID and iLQR has solved the control problem very well. \n\nThe paper did not seem to reach a conclusion on why stochastic forward model does not yield a clear improvement over the deterministic model. This may be due to the limitation of the dataset or the prediction horizon which seems to be 2 second. \n\nThe dataset is only 45 minutes which captured by a camera looking down a small section of the road. So the policies learned might only do lane following and occasionally doing collision avoidance. I would encourage the authors to look into more diverse dataset. See the paper DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents, CVPR 2017.\n\nOverall, the paper makes an interesting contribution: formulate the driving policy problem as a model-based RL problem. The techniques used are pretty standard. There are some insights in the experimental section. However, due to the limitation of the dataset, it is not clear how much the results can generalize to complex settings such as nudging around other cars, cutting in, pedestrian crossing, etc.\n\nResponse to rebuttal:\nIt is good to know that the authors have a new modified VAE posterior distribution for the stochastic model which can achieve significant gain over the deterministic model. Is this empirical and specific to this dataset? Without knowing the details, it is not clear how general this new stochastic model is.\n\nI agree that it is worthwhile to test the model using the 45 minute dataset. However, I still believe the dataset is very limiting and it is not clear how much the experimental results can apply to other large realistic datasets.\n\nMy rating stays the same.\n\n", "\n>“I wonder if there is a way for a neural network to \"hack\" the uncertainty cost. I suppose that the proposed approach is an approximation to some entropy term, and it would be informative to see how exactly.”\n“Overall, the separation of data uncertainty/risk vs model uncertainty is not done. This indicates that heterskedastic environments are candidats where the method can fail, and this limitation needs to be discussed or pointed out.”\n\n\nIn Section 2.3 we perform a similar uncertainty decomposition as Depeweg et. al (for covariance matrices, rather than scalar variances), and show that the uncertainty cost is obtained using the trace of the covariance matrix reflecting the epistemic uncertainty. Note also that the covariance matrix corresponding to the aleatoric uncertainty (second term in Equation 2) will change depending on the inputs. 
This allows our approach to handle heteroscedastic environments, where the aleatoric uncertainty will vary for different inputs. Intuitively, the latent variables in the VAE capture aleatoric uncertainty, whereas the change across different dropout masks reflects epistemic uncertainty. \n\n>”The objective function of the forward model is only given in the appendix. I think it needs to be moved to the main text, especially because the sum-of-squares term indicates a homoskedastic Gaussian for a likelihood. This has implications for the uncertainty estimates (see point above).”\n>“Further, the authors did not observe a benefit from using a stochastic forward model. Especially, if the prior instead of the approximate posterior is used. My point would be that, depending on the exact grapical model and the way the sampling is done to train the policy, it is actually mathematically *right* to sample from the prior. This is also how it is described in the last equation of section 2.”\n\nWe have moved the objective function to the main text. We have also proposed a modification to the VAE posterior distribution which now leads to a significant gain in performance of the stochastic model over the deterministic model, which is described in Section 2.1. (please also see top comment). \n\nPlease let us know if these address your concerns, and if you would consider updating your score if so. \n", "Thank you for the constructive suggestions. We have made several updates to the paper based on them, and we provide answers to specific points below. \n\n>“The work by Depeweg et al addresses quite the same question as the authors of this work, but with a broader scope (i.e. not limited to traffic) but very much the same machinery. There are some important theoretical insights in this work and the connection to this submission should be drawn. In particular, the proposed method needs to be either compared to this work or it needs to be clarified why it is not applicable.”\n\n\nThank you for pointing us to the work of Depeweg et al. [3]. It is indeed relevant and we have updated the paper to relate our work to theirs. The main difference between our approaches is that they use the framework of Bayesian neural networks trained with alpha-divergence minimization, whereas we use variational autoencoders trained with Dropout Variational Inference (VI). \n\nBoth approaches aim to model aleatoric and epistemic uncertainties, but do so in different ways. Alpha-BNNs place a factorized Gaussian prior both over latent variables and network weights, and learn the parameters of these distributions by minimizing an energy function whose minimizer corresponds to a local minimum of alpha-divergences. \nVariational Autoencoders also represent latent variables as factorized Gaussians, whereas Dropout VI corresponds to placing a prior over network weights - specifically, a mixture of two Gaussians with small variances, with the mean of one component fixed at zero. As described in the new Section 2.3 which we have added, our approach corresponds to defining a variational distribution which is the composition of these two distributions. \n\nAn advantage of using alpha-divergences over variational inference (pointed out in [1, 2, 3]) is that VI can underestimate model uncertainty by fitting to a local mode of the exact posterior, whereas alpha-divergence minimization can give better coverage of the distribution. However, there are also challenges associated with alpha-BNNs. 
One which was pointed out by [2] is that they require significant changes in existing deep learning models and code bases, and the functions they optimize are less intuitively interpretable by non-experts. We investigated the approach described in [2], which proposes a dropout-based reparameterization of the alpha-divergence objective, which seems to offer a balance between compatibility with existing frameworks and better-calibrated uncertainty estimates. However, this requires performing several stochastic passes through the forward model at training time in order to calculate the proposed loss. In our setup, doing 10 stochastic passes (the number used in the paper) required reducing the minibatch size from 64 to 8 to fit in memory, which significantly slowed down training. We did not obtain any reasonable results after 5 days of training on GPU, whereas with our current approach the model finishes training after 4 days. Since the minibatch size with the dropout-based alpha-divergence objective is 8x smaller than our original minibatch size, a rough estimate would place training time for the forward model at around 30 days. We note that the work of Depeweg et al. is applied to much lower-dimensional problems (2-30 dimensions, <100,000 transitions), whereas our setting involves high-dimensional images and a larger dataset (around 2 million transitions). We believe that investigating alternate methods for uncertainty estimation in our setting would be interesting, but to do so thoroughly is best left for future work.\n\nReferences:\n[1] “Learning and Policy Search In Stochastic Dynamical Systems with Bayesian Neural Networks”, Depeweg S, Hernandez-Lobato H, Doshi-Velez F, Udluft S. ICLR 2017. \n[2] “Dropout Inference in Bayesian Neural Networks with Alpha-Divergences”, Yingzhen Li and Yarin Gal. ICML 2017. \n[3] “Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning” Depeweg et al, ICML 2018. \n\n\n", "Thank you for the helpful review. We have made updates to the paper, please see our main comment and our answer below. \n\n>”The paper did not seem to reach a conclusion on why stochastic forward model does not yield a clear improvement over the deterministic model. This may be due to the limitation of the dataset or the prediction horizon which seems to be 2 second.” \n\n\nWe have proposed a modification to the VAE posterior distribution for the stochastic model which now leads to a significant gain in performance over the deterministic model (please see top comment, and Section 2.1). Note also that we show, at least qualitatively, that the stochastic model without this modification does not respond very well to the input actions, even though it produces reasonable predictions. This is likely the reason for the suboptimal performance. The stochastic model with the modified posterior responds better, and also translates into better performance. \n\n\n>\"The dataset is only 45 minutes which captured by a camera looking down a small section of the road. So the policies learned might only do lane following and occasionally doing collision avoidance. I would encourage the authors to look into more diverse dataset. See the paper DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents, CVPR 2017.\"\n\nThank you for the pointer to this work. It seems very relevant and will be worth investigating in future work. 
We would like to note that two interesting features of our dataset are that it consists of real human driver behavior, and involves dense traffic. We believe this addresses an underexplored setting: as noted in the related work section, most other works deal with the problem of doing lane following or avoiding static obstacles in visually rich environments. Our setting instead focuses on visually simplified environments, but with complex and difficult to predict behavior by other drivers. The longer-term goal is to learn policies in visually rich settings with complicated driver behavior, and we believe solving this dataset is a step towards that goal. Also note that for autonomous driving, the success rate needs to be extremely high, and although our approach performs well in comparison to others, it is still far from 100%. We therefore believe that to obtain satisfactory performance, policies will have to learn fairly complex policies, and this dataset can serve as a useful testing environment. \n\nPlease let us know if these address your concerns, and if you would consider updating your score if so. \n", "Thank you for the helpful suggestions, we have updated the paper. Please see our answers to specific points below:\n\n>“Unclear motivation to penalize prediction uncertainty to make the predicted states stay in the training data”\n“More theoretical explanation is needed or perhaps some intuition.”\n\nAs requested, we have added a section (Section 2.3 and Appendix B), where we show that our approach can be seen as training a Bayesian neural net with latent variables using variational inference. We also perform a similar uncertainty decomposition as Depeweg et. al [1], and show that the uncertainty cost is obtained using the trace of the covariance matrix reflecting the epistemic uncertainty.\n\n>“Without any addition of data, the variance reduction, which results by penalizing the high variance during training, might indicate over-fitting to the current training data. As the penalty forces the model to predict states only in the training dataset, it is unclear how this shows better test-time performance. The output of the policy network will simply be biased towards the training set as a result of the uncertainty cost. \n\nWe would like to clarify that the uncertainty penalty does not necessarily bias the policy network towards the training trajectories, but rather toward the states where the forward model has low uncertainty. This includes the training trajectories, but it also includes regions of the state space where the forward model generalizes well, which were not seen during training. The prediction results, which are obtained by feeding initial states from the testing set which the forward model was not trained on, still look reasonable, which indicates that the forward model is able to generalize fairly well. Note also that we evaluate the trained policy network on trajectories from the testing set, which the forward model was not trained on. \n\n>”Also, in some cases references to existing work that includes real robotic systems is out of context at minimum. So yes there are similarities between this paper and existing works on learning control for robotics systems using imitation learning, model based control and uncertainty aware cost function. However there is a profound difference in terms of working in simulation and working with a real system for which model and environment uncertainty is a very big issue. 
There are different challenges in working with a real uncertain system which you will have to actuate, and working with set of images for making predictions in simulation.” \n\n\nWe agree that there is a big difference between our setup and a real robotic system. We felt it fair to include references to other work in imitation learning and model-based control, even if the setups are quite different. We are happy to update our related work section with additional references, if you have any suggestions. \n\nPlease let us know if these address your concerns, and if you would consider updating your score if so. \n\n[1] “Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning” Depeweg et al, ICML 2018. \n", "We would like to thank all the reviewers for their helpful feedback. We have made several updates to the paper which we hope address the reviewers’ concerns, which we describe below. We give more detailed responses to the individual comments. \n\nBoth Reviewer 2 and Reviewer 3 mentioned the fact that the stochastic model did not yield an improvement over the deterministic model as a limitation. In the updated version of the paper we propose a modified posterior distribution for the VAE, which gives improved performance relative to both the standard stochastic model and the deterministic model. This modification is simple to implement, and involves sampling the latent variable from the prior, rather than posterior, a fraction of the time during training. In addition to improving the performance of the trained policies (in terms of success and distance travelled), upon visual inspection (shown at the URL) this modification makes the forward model more responsive to the input actions, which we believe is the reason for the standard stochastic model’s suboptimal performance. This modification can be seen as “dropping out” the latent code with some probability, and although simple, we are not aware of it being proposed elsewhere in the literature. \n\n\nBoth Reviewer 1 and Reviewer 2 mentioned they would like to see more theoretical explanation. We have added a new section (Section 2.3 and Appendix B) which shows that our approach can be viewed as training a Bayesian neural network with latent variables using variational inference. We show that the loss function which we optimize is in fact an approximation to the negative evidence lower bound obtained by using a variational distribution which is the composition of a diagonal Gaussian (over latent variables) and the dropout approximating distribution (over model parameters) described in [1]. We also perform a decomposition of the covariance of the distribution over predictions induced by this approximate posterior (similar to [2]) into two covariance matrices, which represent the aleatoric and epistemic uncertainties. Our uncertainty penalty is in fact penalizing the trace of the matrix representing the epistemic uncertainty. \n\nWe have moved certain parts of the main text to the appendix to make room for this new section and stay within the page limit. We have also rerun the experiments with different seeds to obtain more robust performance estimates, and made some changes in our training procedure/hyperparameters (these are detailed in the Appendix, and will be available in our code release). 
Note that the MPUR results are now somewhat higher than in the first version, although their relative performance is similar (i.e, deterministic and stochastic are still similar to each other, although the stochastic model with our modified posterior is better than both). \n\n[1]: \"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning\", Gal and Ghahramani. ICML 2016. \n\n[2]: “Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning” Depeweg et al, ICML 2018. ", "- Does the paper present substantively new ideas or explore an under explored or highly novel question? \n\nSomewhat, the paper combines two popular existing approaches (Imitation Learning, Model Based Control and Uncertainty Quantification using Dropout). The novelty is in combining pre-existing ideas. \n\n- Does the results substantively advance the state of the art? \n\nNo, the compared methods are not state-of-the-art.\n\n- Will a substantial fraction of the ICLR attendees be interested in reading this paper? \n\nYes. I think that the topics of this paper would be very interesting to ICLR attendees. \n\n-Quality: \n\nUnclear motivation to penalize prediction uncertainty to make the predicted states stay in the training data. Also, in some cases references to existing work that includes real robotic systems is out of context at minimum. So yes there are similarities between this paper and existing works on learning control for robotics systems using imitation learning, model based control and uncertainty aware cost function. However there is a profound difference in terms of working in simulation and working with a real system for which model and environment uncertainty is a very big issue. There are different challenges in working with a real uncertain system which you will have to actuate, and working with set of images for making predictions in simulation. \n\n \n\n-Clarity: \n\nEasy to read. Experimental evaluation is clearly presented. \n\n-Originality: \n\nSimilar uncertainty penalty was used in other paper (Kahn et al. 2017). Therefore the originality is in some sense reduced.\n\n- Would I send this paper to one of my colleagues to read?\n\nYes I would definitely send this paper to my colleagues. \n\n- General Comment: \n\nDropout can be used to represent the uncertainty/covariance of the neural network model. The epistemic uncertainty, coming from the lack of data, can be gained through Monte Carlo sampling of the dropout-masked model during prediction. However, this type of uncertainty can only decrease by adding more explored data to current data set. Without any addition of data, the variance reduction, which results by penalizing the high variance during training, might indicate over-fitting to the current training data. As the penalty forces the model to predict states only in the training dataset, it is unclear how this shows better test-time performance. The output of the policy network will simply be biased towards the training set as a result of the uncertainty cost. More theoretical explanation is needed or perhaps some intuition. \n\nThis observation is also related to the fact that the model based controller used is essentially a risk sensitive controller. \n" ]
[ 6, -1, 7, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2019_HygQBn0cYm", "iclr_2019_HygQBn0cYm", "iclr_2019_HygQBn0cYm", "SJgb6s6uR7", "ByeJ-YFahQ", "HkeR-X8w2m", "SyeU6MK03Q", "iclr_2019_HygQBn0cYm", "iclr_2019_HygQBn0cYm" ]
iclr_2019_Hyg_X2C5FX
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models.
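As a rough illustration of the dissection step described in the abstract (matching a unit's upsampled, thresholded activation map against a semantic segmentation of the output), the sketch below scores one unit against one concept by IoU. It is not the released GAN Dissection tool; the tensor layout and the fixed threshold are placeholders.

```python
# Minimal sketch (placeholder shapes, not the released tool): IoU between one unit's
# thresholded, upsampled activations and a binary segmentation mask for one concept.
import torch
import torch.nn.functional as F

def unit_concept_iou(unit_act, concept_mask, threshold):
    """unit_act: (N, H, W) activations of a single unit; concept_mask: (N, H2, W2) bool mask."""
    upsampled = F.interpolate(unit_act.unsqueeze(1), size=concept_mask.shape[-2:],
                              mode="bilinear", align_corners=False).squeeze(1)
    unit_region = upsampled > threshold
    intersection = (unit_region & concept_mask).float().sum()
    union = (unit_region | concept_mask).float().sum()
    return (intersection / union.clamp(min=1.0)).item()
```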
accepted-poster-papers
The paper proposes an interesting framework for visualizing and understanding GANs that will clearly help in understanding existing models and may provide insights for developing new ones.
train
[ "SJlKyI2FA7", "Syga2ShtR7", "HJlb9ShFRm", "BJgk8S3K0m", "rylRgFDnnQ", "Bklj6-einQ", "H1lQioJchm" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we also answer your questions below.\n\nQ7: apply the author's methods to other architecture, and to other application domains? \n \nA7: We have applied our method to WGAN-GP model with a different generator architecture, as shown in Figure 16 in Section S-6.3. Our method can find interpretable units for different GANs objectives and architectures.\n\nThe general framework can be extended beyond generative models for vision, although that topic is beyond the scope of the current paper. Concurrent work submitted to ICLR 2019 is an example of similar ideas being applied to natural language translation. (https://openreview.net/forum?id=H1z-PsR5KX)\n\nQ8: how to choose the 'units' for which they seek interpretation when reporting their results?\n\nA8: We do two analyses. For the dissection analysis examining correlation, u are analyzed as individual units (i.e., |U| = 1). We analyze every individual unit in a layer, and we plot all units that match a segmented concept with IoU exceeding 5%.\n\nFor the causal analysis, we choose the elements of U by doing the optimization described in equation (6), which finds an alpha that specifies a contribution for every unit to maximize causal effects, ranking units according to highest alpha, and choosing the number needed to achieve a desired causal effect.\n\nQ9: How large does u tend to be? How would one choose it? Is it one filter out of all filters in a certain layer?\n\nA9: To choose U to have strong causal effects, we measure and plot the causal effect of different numbers of units for U as in Figure 4. The increase in causal effect diminishes after about 20 units. To be able to compare different causal sets on an equal basis, we set |U| = 20 for most of our experiments.\n\nQ10: When optimizing for sets of units together (using the alpha probabilities and the optimization in eq. 6) what is d? Is it performed for all units in a single layer? More details would be useful here.\n\nA10: Yes, we perform an optimization for all units in a single layer. d is the number of all units in a single layer (512, for the case of layer 4 of our Progressive GAN).\n\nFor the dissection analysis, we analyze every individual unit in a layer, and we plot all units that match a segmented concept with IoU exceeding 5%. The causal analysis requires identifying sets of units, which is done through the optimization in equation (6).\n\nBeyond this objective, learning U involves several additional details including how to specify the big constant for positive intervention, how to sample class-relevant positions, and how to initialize the coefficient alpha. We have added a section S-6.4 to supplementary materials with these implementation details.\n\nQ11: Regarding SWD and FID\n\nA11: SWD and FID are measures which estimate realism of the GAN output by measuring the distance between the generated distribution of images and the true distribution of images; Borji (arXiv 2018) surveys and compares these methods at https://arxiv.org/abs/1802.03446. We have clarified these terms and added citations in the paper.\n\nQ12: No reference to supp. info and minor typos: \n\nA12: Thank you for your detailed comments; we have updated the text and expanded the supplementary materials. We also added a brief summary of the supplementary material in each section of the main paper. 
\n", "Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we answer your questions below.\n\nQ3: About diagnosing and improving GANs, please give more details of the human annotation for the artifacts.\n\nA3: We visualize the top 10 highest activating images for each unit, and we manually identify units with noticeable artifacts in this set. (This human annotation was done by an author.) \nDetails have been added to section 4.2. This method for diagnosing and improving GANs is further analyzed and expanded in the supplementary materials, in section S-6.1.\n\nQ4: Minor - I think there is a typo in the first and second paragraphs in section 4.2, Figure 14 -> Figure 8.\n\nA4: Thanks for your detailed comments. We have fixed it. \n\nQ5: Have you ever considered to handle these imperfect semantic segmentation models?\n\nA5: We totally agree with the reviewer: the success of our method is linked to the accuracy and comprehensiveness of the segmentation model used. We have performed a human evaluation regarding the accuracy of our method on a Progressive GAN model (on LSUN living rooms), and have found that, our method provides correct labels for 96% of interpretable units. Further details of the evaluation can be found in section S-6.2.\n\nIn addition, a semantic segmentation model can perform poorly if the analyzed images are very different from the images on which the semantic segmentation was trained. For example in the “bedroom” scene category, if a unit is labeled as correlating with ‘swimming pool’ this may be due to a poorly performing GAN model. We have partly addressed this issue by measuring the average realism of each unit using the FID metric. In practice, in Figure 16, we show the effect of such a filter in which we only report “realistic” and interpretable units. Details of such an approach have been added to section S-6.3.\n\nAs more accurate and robust segmentation models are developed, we expect our method to be able to identify more semantic concepts inside a representation.\n\n\nQ6: Is there a way to apply the framework to the training process of GANs?\n\nA6: By using a per-unit realism score based on the FID metric on generator units learned by the GAN, we can identify units that should be zeroed to improve the realism of the GAN output. (We assign a realism score to each unit by measuring FID for a subset of images that highly activate the unit.) Zeroing the units with the highest FID score as measured this way will improve the quality of the output nearly as well as ablating units identified manually. This modification could be incorporated into an automatic training process. S-6.1 has further details and a preliminary evaluation of this idea for introducing per-unit analysis in an automatic process. A full development of this idea is left to future work.\n\nDissection can also be used to monitor the progress of training by quantifying the emergence, diversity, and quality of semantic units. For example, in Figure 18 we show dissections of layer4 representations of a Progressive GAN model trained on bedrooms, captured at a sequence of checkpoints during training. As training proceeds, the number of units matching objects (and the number of object classes with matching units) increases, and the quality of object detectors as measured by average IoU over units increases. 
During this successful training, dissection suggests that the model is learning the structure of a bedroom, because increasingly units converge to meaningful bedroom concepts. We add this analysis to section S-6.6.", "Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we also answer your questions below.\n\nQ1: Theoretical interpretation of the visualization, and comparisons to the Class Activation Maps (CAM)?\n \nA1: Our visualization is very simple and corresponds to equation (2): we upsample a single channel of the activation featuremap and show the region exceeding a threshold: unlike CAM, no gradients are considered. The threshold used is chosen to maximize relative mutual information with the best-matching object class based on semantic segmentation, however, a fixed threshold such as a top 1% quantile level would look very similar.\n\nIt is also informative to consider a CAM-like visualization of the causal impact of interventions in the model on later layers: we can create a heatmap where each pixel shows the magnitude of the last featuremap layer change that results when making an intervention at each pixel in an early layer. The result is shown in Figure 17 of supplementary materials S-6.4: this visualization shows that the effects of an intervention at different locations are not uniform. The heatmap pattern reveals the structure of the model’s sensitivity to a specific concept at various locations.\n\nQ2: How is the rate of finding the correct sets of units for a particular visual class?\n\nA2: Our method provides a correct label for 96% of interpretable units, as measured by the following human evaluation, which we have added to supplementary materials, section S-6.2.\n\nFor each of 512 units of layer 4 of a \"living room\" progressive GAN, 5-9 human labels are collected (3728 labels total), where the AMT worker is asked to provide one or two words describing the highlighted patches in a set of top-activating images for a unit. Of the 512 units, 201 units were described by a consistent word (such as \"sofa\", \"fireplace\" or \"wicker\") that was supplied by 50% or more of the human labels.\n\nApplying our segmentation-based dissection method, 154/201 of these units are also labeled with a confident label with IoU > 0.05 by dissection. In most of the cases (104/154), the segmentation-based method gave the same label word as the human labelers, and most others are slight shifts in specificity (e.g. segmentation says \"ottoman\" or \"curtain\" or \"painting\" when a person says \"sofa\" or \"window\" or \"picture\"). A second AMT evaluation was done to rate the accuracy of both segmentation-derived and human-derived labels. Human-derived labels scored 100% (i.e., of the 201 human-labeled units, all of the labels were rated to be accurate by most raters). Of the 154 of our segmentation-generated labels, 149 (96%) were rated as accurate by most AMT raters as well.\n\nThe five failure cases (where the segmentation is confident but rated as inaccurate by humans) arise from situations in which human evaluators saw one pattern from seeing only 20 top-activating images, while the algorithm, in evaluating 1000 images, counted a different concept as dominant. (E.g., in one example shown in Figure 14a, there are only a few ceilings highlighted and mostly sofas, whereas in the larger 1000-image set, mostly ceilings are triggered.)\n\nThere were also 47/201 cases where the segmenter was not confident while humans had consensus. 
Some of these are due to missing concepts in the segmenter. For example, several units are devoted to letterboxing (white stripes at the top and bottom of images), and the segmentation had no confident label to assign to these (Figure 14b).\n\nWe expect that as semantic segmentations improve to be able to identify more concepts such as abstract shapes, more of these units can be automatically identified.\n", "We thank all the reviewers for their helpful comments. We are glad that they found the topic important, the idea new, and the visualization results convincing. We have addressed individual questions raised by the reviewers in separate posts. Below we summarize the major changes in this revision. \n\n- In supplementary material S-6.1, we show an automatic evaluation of per-unit realism that can be done using FID measurements, and we show that zeroing these units improves the quality of the output. We have also corrected our FID computation by eliminating JPEG artifacts in our evaluation pipeline and recomputed FID comparisons in Table 1. (R2Q3, R2Q6)\n- In S-6.2, we conduct a human evaluation of dissection label accuracy for interpretable units. (R1Q2, R2Q5) \n- In S-6.3, we show how unit realism can be used to filter the results to protect the segmenter against unrealistic images that can be produced by some GAN models. (R2Q5, R3Q7)\n- In S-6.4, we provide details of our method for optimizing causal units. To eliminate a hyperparameter, we have defined the large constant “c” used for positive interventions to be a mean conditioned on the target class, rather than an unconditional 99 percentile value. Figures 4, 9, 10, and 11 have been updated with results based on this adjustment. (R3Q10) \n- In S-6.5, we have traced the effects of interventions through downstream layers and show how a CAM-like heatmap can be used to visualize these effects. (R1Q1)\n- In S-6.6, we show how dissection can be used to monitor the emergence of unit semantics during the training epochs of a GAN. (R2Q6)\n- We have fixed minor typos and grammar errors (R2Q4, R3Q12)\n- We have clarified the method for manually identifying artifact units (R2Q3)\n- We have clarified the method for identifying causal sets of units described in equations 5 and 6 (R3Q8,9,10)\n- We have clarified the definition of SWD and FID and added citations (R3Q11)", "The paper proposes a method for visualizing and understanding GANs representation. This seems an important topic as several such methods were performed for networks trained in supervised learning, which relate\nto the predicted outcome, but there is lack of methods for interpreting GANs which are learned in an unsupervised manner and it is generally unclear what is the representation learned by GANs. \nThe method is finding correlations between the appearance of objects and the activation of units in each layer of the learned network. \nIn addition, the paper presents a 'causal' measure, where a causal effect of a unit is measured by removing and adding this unit from/to the network and computing the average effect on object appearance.\nThe authors demonstrate how the methods are applied by improving the appearance of images, by modifying units which were detected as important for specific objects. \nThe authors also provide an interactive interface where users can manually examine and modify their trained GANs in order to add/remove objects and to remove artifacts. 
\n\nThe method proposed by the authors seem to be appropriate for convolutional neural networks, where 'units' in each layer may correspond to objects and can be searched for in particular locations of image. \nIt is not clear to me if and how one can apply the author's methods to other architecture, and to other application domains (besides images), or whether the method is limited to vision applications. \nThe authors do not explain specifically how do they choose the 'units' for which they seek interpretation when reporting their results. It is written that each layer is divided into two sets: \nu and u-bar, where we seek interpretation of u. But how large does u tend to be? how would one choose it? is it one filter out of all filters in a certain layer? when optimizing for sets of units together\n(using the alpha probabilities and the optimization in eq. 6) what is d? is it performed for all units in a single layer? more details would be useful here. \n\nThe paper is overall clearly written, with lots of visual examples demonstrating the methods presented in it. \nThe paper presents a new methodological idea, which allows for nice practical contribution. There is no theoretical contribution or any deep analysis. \nThere is no reference in the paper to the supp. info. figures and therefore it is not clear if and how the supp. info. adds valuable information to the reader. \nThe authors use scores like SWD and FIT for performance, but give no explanations for what do these scores measure. \n\n\nMinor: \n\nAbstract: immprovements -> improvements \n\nPage 6, middle: 'train on four LSUN' -> 'trained on four LSUN'\n\nPage 7, bottom: Fig. 14a and 14b should be Fig. 8a and 8b\n", "## Summary\nThis work proposes a novel analytic framework exploited on a semantic segmentation model to visualize GANs at unit (feature map) level. The authors show that some GAN representations can be interpreted, correlate with the parsing result from the semantic segmentation model but as variables that have a causal effect on the synthesis of semantic objects in the output. This framework could allow to detect and remove the artifacts to improve the quality of the generated images.\n\nThe paper is well-written and organized. The dissection and intervention for finding relationships between representation units and objects are simple, straightforward and meaningful. The visualizations are convincing and insightful. I recommend to accept the paper.\n\n## Detail comments\nAbout diagnosing and improving GANs, please give more details of the human annotation for the artifacts.\n\nI think there is a typo in the first and second paragraphs in section 4.3, Figure 14 -> Figure 8. \n\nThe whole framework is based on a semantic segmentation model. The model is highly possibly imperfect and could have very different performances on different objects. Have you ever considerate to handle these imperfect models?\n\nIs there a way to apply the framework to the training process of GANs?\n", "This paper provides a visualization framework to understand the generative neural network in GAN models. To achieve this, they first find a group of interpretable units and then quantify the causal effect of interpretable units. Finally, the contextual relationship between these units and their surrounding is examined by inserting the discovered object concepts into new images. Extensive experiments are presented and a video is provided.\n\nOverall, I think this paper is very valuable and well-written. 
The experiments clearly show that the questions raised in the introduction are answered. Two concerns are as follows.\n\nCons:\n1) The visualization seems to be very heuristic. What I want to know is the theoretical interpretation of the visualization. For example, Class Activation Maps (CAM) can be calculated directly from the output values of the softmax function. How about the visual class for generative neural networks?\n2) I am also very curious: what is the rate of finding the correct sets of units for a particular visual class?\n" ]
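The causal part of the analysis discussed in the responses above (ablating or inserting a set of units and measuring how much of an object class appears in the output) can be sketched similarly. The helper names below (`generator_tail`, `segment_area`) are hypothetical stand-ins, not functions from the paper's code.

```python
# Minimal sketch (hypothetical helpers, not the paper's code): the ablation side of a causal
# intervention, zeroing a set of units and measuring the drop in segmented target-class area.
import torch

def ablation_effect(features, unit_ids, generator_tail, segment_area, target_class):
    """features: (N, C, H, W) intermediate activations; unit_ids: channel indices to zero."""
    baseline = segment_area(generator_tail(features), target_class)   # e.g. mean pixel count
    ablated = features.clone()
    ablated[:, unit_ids] = 0.0                                        # zero the chosen units
    reduced = segment_area(generator_tail(ablated), target_class)
    return baseline - reduced                                         # causal effect of ablation
```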
[ -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "rylRgFDnnQ", "Bklj6-einQ", "H1lQioJchm", "iclr_2019_Hyg_X2C5FX", "iclr_2019_Hyg_X2C5FX", "iclr_2019_Hyg_X2C5FX", "iclr_2019_Hyg_X2C5FX" ]
iclr_2019_HygjqjR9Km
Improving MMD-GAN Training with Repulsive Loss Function
Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget. This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions. First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data. To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD. Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function. The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets. Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions. The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization.
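For reference, the quantity whose terms the repulsive loss rearranges is the kernel MMD between discriminator embeddings of real and generated samples. The sketch below shows only the standard (biased) MMD^2 estimator with a Gaussian RBF kernel; the exact rearrangement into the repulsive discriminator loss and the bounded-kernel truncation should be taken from the paper itself, not from this sketch.

```python
# Minimal sketch (standard estimator only, not the paper's repulsive/bounded variants):
# biased MMD^2 with a Gaussian RBF kernel over real and generated embeddings.
import torch

def rbf_kernel(a, b, sigma=1.0):
    sq_dist = torch.cdist(a, b).pow(2)
    return torch.exp(-sq_dist / (2.0 * sigma ** 2))

def mmd2(real_emb, fake_emb, sigma=1.0):
    k_rr = rbf_kernel(real_emb, real_emb, sigma).mean()
    k_ff = rbf_kernel(fake_emb, fake_emb, sigma).mean()
    k_rf = rbf_kernel(real_emb, fake_emb, sigma).mean()
    return k_rr + k_ff - 2.0 * k_rf   # the generator minimizes this quantity
```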
accepted-poster-papers
The submission proposes two new things: a repulsive loss for MMD loss optimization and a bounded RBF kernel that stabilizes training of MMD-GAN. The submission has a number of unsupervised image modeling experiments on standard benchmarks and shows reasonable performance. All in all, this is an interesting piece of work that has a number of interesting ideas (e.g. the PICO method, which is useful to know). I agree with R2 that the RBF kernel seems somewhat hacky in its introduction, despite working well in practice. That being said, the repulsive loss seems like something the research community would benefit from finding out more about, and I think the experiments and discussion are sufficiently extensive to warrant publication.
train
[ "BJe9UVmbaX", "rJll-a6hnX", "rkxQfAW0CX", "rkxJC6g79Q", "rylo1DbthX", "SyxxllUYC7", "r1gTa0rtRQ", "Bkx58VUK07", "HklD4r-bTQ", "BJxg9N3IcQ" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "public" ]
[ "Thank you for your precious comments. Below we would try to clarify our study and address your concerns. \n\nQ1. What specifically contributed to the improvements in Table 1. Other good-scoring models need to be tested empirically. \nA1: In this study, we focused on comparing the proposed repulsive loss function with other representative loss functions. The experiments in Table 1 were done in an almost identical setup: DCGAN + spectral normalization + Adam + 16 learning rate combinations + 100k iterations (see Section 5.1 Experiment Setup). That is, the methods in Table 1 differ mainly in the loss functions used. Thus, we attribute the improvements to our proposed repulsive loss. We highlighted this in Table 1 note 1 and Sec. 5.2 in the revised manuscript. \n\nThe experiment setup was \"almost identical\" because, for MMD-related losses, the output layer of DCGAN has 16 neurons, while for logistic and hinge losses, it is one. In Appendix D.2 of the revised manuscript, we tested the discriminator with 1, 4, 16, 64, 256 output neurons and show that repulsive loss performed better when more than one output neuron was used. \n\nWe agree that it would be interesting to test the repulsive loss in more general experiment setups, e.g., ResNet, gradient penalty, self-attention modules, supervised training, etc. In Appendix D.1, we show that the repulsive loss performed well using the gradient penalty from [1]. However, we are afraid to admit that a comprehensive study using other setups would require substantially more computational resources. We would try our best to fill in this gap in the future. \n\nQ2: Does GAN performance heavily depend on the loss functions used in training? \nA2: We agree that this is an overstatement and changed this to: \"their performance may heavily depend on the loss functions, given a limited computational budget\". [2], [3] and our study did find that different loss functions lead to quite different performances in practice with a limited computational budget. \n\nQ3: What does 'data structure' mean in that case that MMD may discourage learning of data structure? \nA3: In the revised manuscript, we have changed “data structure” to “differences among real data”. These differences, or fine details, separate the real samples. For example, in CIFAR-10 dataset, \"ship\" and \"cat\" should be quite different, but discriminator trained using MMD may overlook such differences (see Figure 4). \n\nQ4: The Literature review on other loss functions does not belong to the main body of the text. \nA4: We moved the literature review to Appendix B.1. \n\nQ5: What does it mean by assuming linear activation is used at the last layer of D. \nA5: We mean there is no activation function applied to the discriminator outputs. In the case of minimax and non-saturating loss functions, we could absorb the sigmoid function into the loss which results in the formation of softmax function. \n\nQ6: No need to include Arjovsky et al. (2017)'s statement on perfect discriminator. \nA6: We agree with the reviewer and deleted the statement. \n\nQ7: Why propose a generalized power iteration method when singular values can be computed as in [3]? \nA7: When only the first singular value is needed, the power iteration used in our study and [3] is computationally simpler than the method in [4] which uses Fourier transform and SVD. However, the strength of [4] is that all singular values can be computed in a single run, which may eventually inspire more powerful regularization methods for GAN. 
We discussed this in Appendix C.1 in the revised manuscript.\n\nQ8: “MS-SSIM is not compatible with CIFAR-10 and STL-10 which have data from many classes”; just calculate Intra-class MS-SSIM for CIFAR-10 and STL-10. \nA8: We deleted the statement in the revised manuscript. \n\nQ9: Should FID be used to evaluate a model trained with an MMD-loss when the discriminator uses almost the same architecture as the Inception model in FID? \nA9: We would like to point out that all loss functions in our study were paired with plain DCGAN architecture (see Appendix Table S1 and S2), which is much simpler than the Inception model. \n\nQ10: Which models in Table 1 used the spectral norm? \nA10: Spectral normalization was applied for all models in Table 1. We highlighted this in Table 1 note 1 and Sec. 5.2 in the revised manuscript.\n\n------------------------------------------------- \n[1]: On Gradient Regularizers for MMD GANs. NIPS, 2018.\n[2] Are GANs Created Equal? A Large-Scale Study. NIPS, 2018. \n[3] Spectral Normalization for Generative Adversarial Networks. ICLR, 2018 \n[4] The Singular Values of Convolutional Layers. Under review at ICLR 2019.\n", "This paper proposed two techniques to improve MMD GANs: 1) a repulsive loss for MMD loss optimization; 2) a bounded Gaussian RBF kernel instead of original Gaussian kernel. The experimental results on several benchmark shown the effectiveness of the two proposals. The paper is well written and the idea is somehow novel. \n\nDespite the above strong points, here are some of my concerns:\n1.The two proposed solutions seem separated. Do the authors have any clue that they can achieve more improvement when combined together, and why?\n\n2. They are limited to the cases with spectral normalization. Is there any way both trick can be extended to other tricks (like WGAN loss case or GP).\n\n3. Few missed references in this area:\na. On gradient regularizers for MMD GANs\nb. Regularized Kernel and Neural Sobolev Descent: Dynamic MMD Transport\n\nRevision: after reading rebuttal (as well as to other reviewers), I think they addressed my concerns. I would like to keep the original score. ", "We thank the reviewers and area chair for their thoughtful comments and hard work, which we believe have contributed significantly to the improvement of our work. Here we summarize the changes to the manuscript:\n1. In Appendix A, we added a proof of the local stability of MMD-GAN trained using the proposed loss, as requested by a public reader.\n2. In Appendix C, we added a detailed comparison of the proposed power iteration method for convolution kernel against the one used in [1], as requested by Reviewer 2 and the public reader.\n3. In Appendix D.1, we added an experiment exploring the repulsive loss with gradient penalty, as requested by Reviewer 1 and 3. \n4. In Appendix D.2, we added an experiment exploring the effects of discriminator output dimension on the performance of proposed loss, as requested by Reviewer 2.\n5. We revised the text to clarify our ideas and highlight important information in the experiment design and results.\n\nFor more information, please read our answers to each individual reviewer thread below.\n\nWe would also like to mention that the code (and some raw results) for this work can be found at the anonymized GitHub repository:\nhttps://anonymous.4open.science/repository/e8675209-4393-4dbc-ad04-aad36cd5d738/\n\nThank you very much for reading. 
Any feedback on the manuscript and code will be much appreciated.\n\n------------------------------------------------- \n[1] Spectral Normalization for Generative Adversarial Networks. ICLR, 2018 ", "Dear readers,\nThe code (and some raw results) for this paper can be found at the anonymized GitHub repository:\nhttps://anonymous.4open.science/repository/e8675209-4393-4dbc-ad04-aad36cd5d738/\nAny suggestion/feedback on the paper and code is much appreciated.", "The paper proposes a new discriminator loss for MMDGAN which encourages repulsion between points from the target distribution. The discriminator can then learn finer details of the target distribution unlike previous versions of MMDGAN. The paper also proposes an alternative to the RBF kernel to stabilize training and use spectral normalization to regularize the discriminator. The paper is clear and well written overall and the experiments show that the proposed method leads to improvements. The proposed idea is promising and a better theoretical understanding would make this work more significant. Indeed, it seems that MMD-rep can lead to instabilities during training while this is not the case for MMD-rep as shown in Appendix A. It would be good to better understand under which conditions MMD-rep leads to stable training. Figure 3 suggests that lambda should not be too big, but more theoretical evidence would be appreciated.\nRegarding the experiments: \n- The proposed repulsive loss seems to improve over the classical attractive loss according to table 1, however, some ablation studies might be needed: how much improvement is attributed to the use of SN alone? The Hinge loss uses 1 output dimension for the critic and still leads to good results, while MMD variants use 16 output dimensions. Have you tried to compare the methods using the same dimension?\n-The generalized spectral normalization proposed in this work seems to depend on the dimensionality of the input which can be problematic for high dimensional inputs. On the other hand, Myato’s algorithm only depends on the dimensions of the filter. Moreover, I would expect the two spectral norms to be mathematically related [1]. It is unclear what advantages the proposed algorithm for computing SN has.\n- Regarding the choice of the kernel, it doesn’t seem that the choice defined in eq 6 and 7 defines a positive semi-definite kernel because of the truncation and the fact that it depends on whether the input comes from the true or the fake distribution. In that case, the mmd loss loses all its interpretation as a distance. Besides, the issue of saturation of the Gaussian kernel was already addressed in a more general case in [2]. Is there any reason to think the proposed kernel has any particular advantage?\n\nRevision:\n\nAfter reading the author's response, I think most of the points were well addressed and that the repulsive loss has interesting properties that should be further investigated. Also, the authors show experimentally the benefit of using PICO ver PIM which is also an interesting finding.\nI'm less convinced by the bounded RBF kernel, which seems a little hacky although it works well in practice. 
I think the saturation issues with RBF kernel is mainly due to discontinuity under the weak topology of the optimized MMD [2] and can be fixed by controlling the Lipschitz constant of the critic.\nOverall I feel that this paper has two interesting contributions (Repulsive loss + highlighting the difference between PICO and PIM) and I would recommend acceptance.\n\n\n\n\n\n\n[1]: Sedghi, Hanie, Vineet Gupta, and Philip M. Long. “The Singular Values of Convolutional Layers.” CoRR \n[2]: M. Arbel, D. J. Sutherland, M. Binkowski, and A. Gretton. On gradient regularizers for MMD GANs.\n\n\n\n", "Thank you for your constructive comments. Below we would try to address your concerns.\n\nQ1: It seems that MMD-rep can lead to instabilities during training while this is not the case for MMD-rep as shown in Appendix A. What are the conditions MMD-rep leads to stable training?\nA1: We would like to point out that the training stability is different from the local stability considered in Appendix A. \n\nAppendix A demonstrates the local stability of MMD-rep. That is, if MMD-rep is initialized sufficiently close to an equilibrium and trained by gradient descent, it will converge to the equilibrium. In contrast, Wasserstein GAN does not have this property [1].\n\nIn practice, training stability often refers to the ability of model converging to a desired state measured by some criterion. The repulsive loss may result in unstable training, due to factors including initialization (see Appendix A.2 and Fig. S1), learning rate (see Fig. 3) and Lipschitz constraints imposed by the proposed spectral normalization method (see Appendix C3. and Fig. S2). \n\nIn diverged cases, we often observed that the discriminator outputs caused the Gaussian kernel to saturate. To alleviate this issue, we proposed the bounded Gaussian kernel. Fig. 3 and Appendix Fig. S2 show that the bounded kernel stabilized MMD-rep training in many cases. \n\nQ2: Figure 3 suggests that lambda should not be too big, but more theoretical evidence would be appreciated.\nA2: We suspect the reason is larger lambda leads to more focus on repulsing real sample scores. Consider lambda>>1, the model would simply 1) expand real sample scores, 2) pull generated sample scores to real samples’, and 3) ignore the attraction on generated sample scores. This process is divergent. We included this in Section 5.2 Paragraph 2 of the revised manuscript. \n\nQ3: How much improvement in Table 1 is attributed to spectral normalization? For hinge loss, the discriminator uses 1 output neuron; for repulsive loss, it is 16. How about repulsive loss with 1 output neuron? \nA3: We would like to point out that spectral normalization was used for every loss function in Table 1. In addition, Appendix Fig. S2 shows the results for other spectral normalization configurations. Given almost identical experiment setups, we attribute the improvement of MMD-rep and MMD-rep-b over MMD-rbf and MMD-rbf-b to the proposed repulsive loss. We clarified this in Section 5.2 Paragraph 1 of the revised manuscript. \n\nIn Appendix D.2, we evaluated MMD-rep with various discriminator output dimensions: 1, 4, 16, 64, 256 on CIFAR-10 dataset; and found that the performance can be significantly improved using more than one output neuron. Additionally, MMD-rep with 1 discriminator output neuron was slightly better than the hinge loss. 
\n\nQ4: Comparison between the proposed generalized power iteration method and the one in [2]\nA4: In Appendix C.2 and C.3 of the revised manuscript, we compared the proposed power iteration for convolution kernel (PICO) against the method for matrix (PIM) used in [2]. In summary, \n1) the spectral norm estimated by PIM may vary in a range related to the spectral norm by PICO; \n2) PIM impose an indefinite and often loose upper bound on the Lipschitz constant of discriminator; \n3) PICO performed better than PIM on cases using repulsive loss. \nWe admit the PICO has higher computational cost than PIM, esp. when a small batch size has to be used in training. We recommend using PICO when the computational cost is less of a concern. \n\nQ5: Using the bounded RBF kernel, the MMD loss cannot be interpreted as a distance.\nA5: We would like to point out that the bounded RBF kernel is only used in the discriminator loss. The generator always attempts to minimize the MMD loss with a characteristic kernel. We highlighted this in Section 4.1 of the revised manuscript. \n\nQ6: The issue of saturation of the Gaussian kernel was already addressed in a more general case in [3]. Is there any advantage of the proposed kernel?\nA6: The gradient penalty from Scaled MMD of [3] is designed to impose a Lipschitz constraint on the discriminator w.r.t. real samples. We argue the method may have only partially addressed the saturation issue, as the following two scenarios may cause saturation: 1) the real sample scores may be very similar as encouraged by both the MMD loss and gradient penalty; 2) the generated sample scores may be very distinct or similar as the gradient penalty has no effects w.r.t. the generated samples. \n\nThe proposed bounded kernel is designed to address the saturation issue, with the advantage of low computational cost. However, it does not impose Lipschitz constraints and may need to be used with methods like the gradient penalty from [3].\n\n------------------------------------------------- \n[1] Gradient descent GAN optimization is locally stable. NIPS, 2017. \n[2] Spectral Normalization for Generative Adversarial Networks. ICLR, 2018.\n[3] On Gradient Regularizers for MMD GANs. NIPS, 2018.", "We appreciate your valuable comments. We would try to address your concerns below.\n\nQ1: The proposed repulsive loss and bounded RBF kernel seem separated. Would they achieve better performance when combined and why?\nA1: In the revised manuscript, Fig. 3 and Appendix Fig. S2 show that the repulsive loss may result in unstable training, where we often observed that the discriminator outputs caused the Gaussian kernel to saturate. This issue motivated us to propose the bounded kernel. Table 1, Fig. 3 and Appendix Fig. S2 show that the repulsive loss combined with bounded kernel achieved comparable or better performance than the repulsive loss alone. Moreover, the bounded kernel managed to stabilize MMD-rep training under a variety of learning rate combinations and spectral normalization configurations. \n\nQ2: The experiments are limited to the cases with spectral normalization. Can both tricks be extended to other tricks (like WGAN loss or gradient penalty)?\nA2: We agree with the reviewer that it would be interesting to test the repulsive loss and bounded kernel in more general experiment setups, e.g., ResNet, gradient penalty, self-attention modules, supervised training, etc. 
In Appendix D.1 of revised manuscript, we show that the repulsive loss performed well using the gradient penalty from [1]. However, we are afraid to admit that a comprehensive study with other setups would require substantially more computational resources. We would continue our study to fill in this gap in the future. \n\n------------------------------------------------- \n [1]: On Gradient Regularizers for MMD GANs. NIPS, 2018. ", "Thank you very much for your valuable comments. We try to address your concerns as below.\n\nQ1: Appendix A demonstrates the local stability of MMD loss. Are the results applicable to the proposed repulsive loss?\nA1: Yes.\nFor the realizable case (the equilibrium P_X = P_G), we explicitly state the local stability of MMD-GAN using the repulsive loss in Appendix A.1 and proved it in Appendix A.3. \nFor the non-realizable case (the real sample distribution is impossible to be fit by the generator), we used a simulation study (Figure S1) to show that both MMD loss and repulsive loss may be locally exponentially stable near equilibrium. \n\nQ2: What is the point of estimating the true spectral norm given the additional computational cost and no improvements?\nA2: Since the first submission, we've added experiments to compare the proposed power iteration for convolution (PICO) against the one on a matrix (PIM) [4]. The results can be found in Appendix C. In summary,\n1) the spectral norm estimated by PIM may vary in a range related to the spectral norm by PICO; \n2) PIM impose an indefinite and often loose upper bound on the Lipschitz constant of discriminator; \n3) PICO performed better than PIM on cases using repulsive loss. \n\nCompared to PIM, PICO has a higher computational cost, which roughly equals the additional cost incurred by increasing the sample size by two. We recommend using PICO when the repulsive loss is used and the computational cost is less of a concern. \n\nRegarding the novelty, we notice that our proposed PICO is similar to that of [2][3], which we were not aware of during this study. We mentioned [1][2][3] as related work in the revised manuscript.\n\n---------------------------------------------\n[1] Hanie Sedghi, Vineet Gupta, Philip M. Long. The Singular Values of Convolutional Layers. arXiv 1805.10408, 2018\n[2] Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks. NIPS, 2018\n[3] Kevin Scaman, Aladin Virmaux. Lipschitz regularity of deep neural networks: analysis and efficient estimation. NIPS, 2018\n[4] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida. Spectral Normalization for Generative Adversarial Networks. ICLR, 2018", "OVERALL COMMENTS:\n\nI haven't had much time to write this, so I'm giving a low confidence score and you should feel free to correct me.\n\nI didn't think this paper was very clear. \nI had trouble grasping what the contributions were supposed to be\nand I had trouble judging the significance of the experiments. \n\nThat said, now that (I think) I understand what's going on,\nthe idea seems well motivated, the connection between the repulsion and the use of label information in other\nGAN variants makes sense to me, and the statements you are making seem (as much as I had time to check them) correct. \n\nThis leaves the issue of scientific significance. \nI feel like I need to understand what specifically contributed to the improvements in table 1 to evaluate significance. 
\nFirst of all, it seems like there are a lot of other 'good-scoring' models left out of this table. \nI understand that you make the claim that your improvement is orthogonal, but that seems like something that needs to\nbe tested empirically. You have orthogonal motivation but it might be that in practice your technique works for a reason\nsimilar to the reason other techniques work. I would like to see more exploration of this. \nSecond, are the models below the line the only models using spectral norm? I can't tell.\nOverall, it's hard for me to envision this work really seriously changing the course of research on GANs,\nbut that's perhaps too high a bar for poster acceptance.\n\nFor these reasons, I am giving a score of 6.\n\nDETAILED COMMENTS ON TEXT:\n\n> their performance heavily depends on the loss functions used in training.\nThis is not true, IMO. See [1]\n\n\n> may discourage the learning of data structures\nWhat does 'data structures' mean in this case?\nIt has another more common usage that makes this confusing.\n\n> Several loss functions have been proposed\nIMO this list doesn't belong in the main body of the text.\nI would move it to an appendix.\n\n> We assume linear activation is used at the last layer of D\nI'm not sure what this means?\nMy best guess is just that you're saying there is no activation function applied to the logits.\n\n> Arjovsky et al. (2017) showed that, if the supports of PX and PG do not overlap, there exists a perfect discriminator...\nThis doesn't affect your paper that much, but was this really something that needed to be shown?\nIf the discriminator has finite capacity it's not true in general and if it has infinite capacity its vacuous.\n\n\n> We propose a generalized power iteration method...\nWhy do this when we can explicitly compute the singular values as in [2]?\nGenuine question.\n\n> MS-SSIM is not compatible with CIFAR-10 and STL-10 which have data from many classes;\nJust compute the intra-class MS-SSIM as in [3].\n\n> Higher IS and lower FID scores indicate better image quality\nI'm a bit worried about using the FID to evaluate a model that's been trained w/ an MMD loss where \nthe discriminator is itself a neural network w/ roughly the same architecture as the pre-trained image classifier\nused to compute the FID. What can you say about this?\nAm I wrong to be worried?\n\n> Table 1: \nWhich models use spectral norm?\nMy understanding is that this has a big influence on the scores.\nThis seems like a very important point.\n\n\n\nREFERENCES:\n\n[1] Are GANs Created Equal? A Large-Scale Study\n[2] The Singular Values of Convolutional Layers\n[3] Conditional Image Synthesis With Auxiliary Classifier GANs", "Dear authors,\n\n- Do similar results with Appendix A hold when we use the proposed repulsive loss?\n- The calculation of the spectral norm does not appear novel to me. [1] offers an efficient and exact calculation of the spectral norm, and [2, 3] proposed the generalized power iteration. But my concern is rather a computational cost than the novelty. Appendix B reports no performance improvements over the original method [4]. Concerning computational cost and memory consumption, the original method is superior. Are there any reasons to estimate the true spectral norm with paying additional overheads?\n\nThanks,\n\n[1] Hanie Sedghi, Vineet Gupta, Philip M. Long. The Singular Values of Convolutional Layers. arXiv 1805.10408, 2018\n[2] Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama. 
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks. NIPS, 2018\n[3] Kevin Scaman, Aladin Virmaux. Lipschitz regularity of deep neural networks: analysis and efficient estimation. NIPS, 2018\n[4] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida. Spectral Normalization for Generative Adversarial Networks. ICLR, 2018" ]
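The exchange above contrasts power iteration on the reshaped kernel matrix (PIM) with power iteration on the convolution operator itself (PICO). A generic sketch of the latter idea, alternating the convolution and its adjoint, is given below; it is not the authors' implementation, and the input shape, stride, and padding are arbitrary placeholders.

```python
# Generic sketch (placeholders, not the paper's code): power iteration on the linear map
# defined by a convolution, using conv2d and its adjoint conv_transpose2d.
import torch
import torch.nn.functional as F

def conv_operator_spectral_norm(weight, input_shape, stride=1, padding=1, n_iters=10):
    """weight: (C_out, C_in, kH, kW); input_shape: (C_in, H, W) of the layer's input."""
    u = torch.randn(1, *input_shape)
    for _ in range(n_iters):
        v = F.conv2d(u, weight, stride=stride, padding=padding)
        v = v / (v.norm() + 1e-12)
        u = F.conv_transpose2d(v, weight, stride=stride, padding=padding)
        u = u / (u.norm() + 1e-12)
    return F.conv2d(u, weight, stride=stride, padding=padding).norm().item()
```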
[ -1, 7, -1, -1, 7, -1, -1, -1, 6, -1 ]
[ -1, 5, -1, -1, 5, -1, -1, -1, 2, -1 ]
[ "HklD4r-bTQ", "iclr_2019_HygjqjR9Km", "iclr_2019_HygjqjR9Km", "iclr_2019_HygjqjR9Km", "iclr_2019_HygjqjR9Km", "rylo1DbthX", "rJll-a6hnX", "BJxg9N3IcQ", "iclr_2019_HygjqjR9Km", "iclr_2019_HygjqjR9Km" ]
iclr_2019_Hygn2o0qKX
Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience
The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent (SGD) finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters. So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either \textit{stochastic} or \textit{compressed}. In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed. What enables us to do this is a key novelty in our approach: our framework allows us to show that if on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves {\em generalize} to the interactions between the matrices on test data, thereby implying a wide test loss minimum. We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small (although we assume this only on the training data). In this setup, we provide a generalization guarantee for the original (deterministic, uncompressed) network, that does not scale with product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches.
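For context on the stochastic baseline that the deterministic bound is contrasted with, one standard PAC-Bayesian statement (McAllester-style) is reproduced below in generic form; the constants and logarithmic factors vary across statements in the literature, and this is not the specific inequality derived in the paper.

```latex
% Generic McAllester-style PAC-Bayes bound (constants and log factors vary by statement).
% With probability at least 1 - \delta over an i.i.d. sample S of size m, for all posteriors Q:
\mathbb{E}_{h \sim Q}\!\left[ L_{\mathcal{D}}(h) \right]
  \;\le\;
\mathbb{E}_{h \sim Q}\!\left[ \widehat{L}_{S}(h) \right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{m}}{\delta}}{2m}}
```

Note that this bound controls the risk of the stochastic classifier (an expectation over h drawn from Q); turning such a statement into one about the single deterministic, uncompressed network is exactly the step the paper's framework is concerned with.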
accepted-poster-papers
Existing PAC-Bayes analyses give generalization bounds for stochastic networks/classifiers. This paper develops a new approach to obtain generalization bounds for the original network, by generalizing the noise-resilience property from training data to test data. All reviewers agree that the techniques developed in the paper (namely Theorem 3.1) are novel and interesting. There was disagreement between reviewers on the usefulness of the new generalization bound (Theorem 4.1) shown in this paper using the above techniques. I believe the authors have sufficiently addressed these concerns in their response and updated draft. Hence, despite R3's concerns about the limitations of this bound and its dependence on pre-activation values, I agree with R2 and R4 that the techniques developed in the paper are of interest to the community and deserve publication. I suggest the authors keep R3's comments in mind while preparing the final version.
train
[ "BygQ30jpRm", "HklxW5QoAQ", "H1eo-DD537", "HJloFLguCX", "B1eHBLxOAm", "rkgv9MsD0Q", "SylZrfAXC7", "Byeh4EbxCX", "ryxUAhWDAm", "SkxjMyes3m", "SygikY27Am", "BygoO_mfAQ", "ByePH_GzR7", "Bkgs5rO8TX", "BJxhNVMz07", "SygAMq2eCm", "SJxW0Xje0X", "SJeNlQZlCQ", "B1gAw-ZxAX", "H1eypq036Q", "rkxgiN236m", "HJxRVajna7", "r1eCm0tqTQ", "B1x5VnK5T7", "BJxLnKFq67", "Hyxu-tF9TQ", "rkgCLsBDT7", "ByxPcMBGam", "H1l1xiHMpQ", "BJxgsmSfa7", "H1gqC-a3hQ" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "We received an email notification from openreview with Reviewer 3's comment but we can't find it here on the website. The following is the comment we received:\n\n=============================\nComment: Thanks for the authors feedbacks. It is great to discuss the problems. \n\nAs I have discussed with the author, I do think that the deterministic PAC-Bayesian bound itself maybe of interest if one can apply it to derive a stonger gengeralization bound. If the authors can demonstrate such superiority of the deterministic PAC-Bayesian bound by another example, I will further appreciate this result. \n\nHowever, my concern is the current derived theoretical result is not ease to interpret and there are quantities that heavily depend on empirical values (that can be very large). The product of norms of may not be good, but it provides an explicit way to control the capacity of the networks so that we can have guaranteed bounds. Also as I mentioned earlier, empirical studies have already shown that by explicitly controlling the spectral norms of weight to be (nearly) 1, the performance of the network is not affected so that the product of the spectral norm is not an issue (i.e., close 1). I am not sure how the pre-activation will be in such scenarios, but it seems highly likely that the pre-activation is still large. Removing the product of norms and introducing some empirical quantities may not be always good, especially such quantities are very sensitive to data and can results in even worse bounds than the product of norms. \n\nIn summary, I do repect the authors that they provide a different angle to view the problem. On the other than, I do think that what is needed for the generalization bound of neural nets is not a new result that can be vacuous and can not be guarantted to push the edge of better understanding/interpreting the bound. I have updated my score to reflect my such a concern.\n\n=============================\nOUR RESPONSE\n\nWe thank the reviewer for their response, for increasing their score and for appreciating our new perspectives on the problem.\n\n\n>>>> Also as I mentioned earlier, empirical studies have already shown that by explicitly controlling the spectral norms of weight to be (nearly) 1, the performance of the network is not affected so that the product of the spectral norm is not an issue (i.e., close 1). \n\nWe apologize for repeating ourselves a bit here, we are in disagreement with your point that the existence of spectral-norm-controlled networks makes our bounds and our claimed conceptual contributions & specific numerical improvements less interesting. If we understand your argument right, this argument is similar to saying that \"all extremely large vacuous bounds on extremely overparamterized networks are less interesting because there are relatively smaller overparameterized networks that generalize almost as well and on which a VC dimension bound would be smaller than the large vacuous bounds on the larger networks.\" The fact that extremely overparameterized networks exist and generalize well demands theoretical explanation, and this question is independent of other networks that may be either smaller or whose norms maybe controlled explicitly. \n\n>>>> \"The product of norms of may not be good, but it provides an explicit way to control the capacity of the networks so that we can have guaranteed bounds\"\n\nIt is not clear to us why the quantities in our bound \"can't\" be explicitly controlled. 
During training, one could potentially add regularizers that minimize the norm of the layers' outputs, the Jacobians norms of the layers, and maximize the pre-activation values.\nOf course, this all maybe highly non-trivial and way beyond the scope of the paper, but we want to establish that our quantities are in no way different from the spectral norms of the matrices in terms of how and whether they \"can be controlled\" or not. For a better comparison, we believe the quantities in our bound are just as \"controllable/optimizable\" as the quantities in Arora et al.,\n\nBut more importantly, even if it is the case that our quantities can somehow not be controlled, we believe that evaluating the quality of ageneralization bound in terms of \"does it contain quantities that can be explicitly controlled?\" is an orthogonal goal to the theoretical question of \"what properties of deep networks -- trained with SGD, without any explicit regularization/norm control -- will help us understand why they generalize well?\" \n\n>>>> \"If the authors can demonstrate such superiority of the deterministic PAC-Bayesian bound by another example, I will further appreciate this result. \"\n\nWe understand and appreciate your request. We'd love to think about this to improve future versions of this paper. But we're afraid there's not much time left in the rebuttal period for us to provide a concrete answer to this, nor do we think we have the option to update the paper at this point. ", "Over the course of this discussion we've done our best to address the different concerns raised by Reviewer 3. We think it'll be useful to have a quick summary of these. We thank them for their response so far and hope to continue the conversation until the rebuttal deadline so that as many of their concerns are addressed as possible.\n\n=========\nSummary of their Nov 2 comment and our Nov 8 response\n=========\n\nConcern: Our approach does not tighten the error bound from a more refined/structured way\nOur response: Our general PAC-Bayesian approach to generalizing noise-resilience involves carefully and iteratively generalizing a sequence of conditions without incurring a product-of-spectral norm term.\n\nConcern: It is not clear where the noise resilience shows up from the analysis or the result/the title and the way the authors explain as noise resilience is somewhat misleading. More detailed explanation will help.\nOur response: We provided a detailed explanation of our contribution which we believe was misunderstood, and how it is about generalizing noise-resilience from training data to unseen data\n\nConcern: The analysis seems to be standard as in the PAC-Bayesian analysis, \nOur response: Our PAC-Bayesian analysis is novel, non-trivial and far from standard analyses which do not generalize noise-resilience. \n\nConcern: Lack of a comparison of our bound and existing ones to see the quantitative difference of the results. \nOur response: Added plots on 19 Nov.\n\nThe reviewer has acknowledged in their 22 Nov response that our PAC-Bayes result about generalizing noise-resilience (Theorem 3.1) \"might be of independent interest here.\" and in their 29 Nov response that \"I do repect the authors that they provide a different angle to view the problem.\".\n\n======\nSummary of Nov 22 comment-response\n=======\nConcern: Lack of comparison with Arora et al., '18\nOur response: Unfair to compare with bound on the compressed network from Arora et al., Bound on a compressed network does not have full explanatory power. 
Important to study how one can extend the benefits of noise-resilience enjoyed by bounds on stochastic/compressed network to the original network.\n\nIn their Nov 24th comment, the reviewer agreed that a bound on the original network is important. \n\nConcern: Using the 5% and median pre-act values are not fair comparisons with other bounds.\nOur response: We have been careful and transparent in presenting these hypothetical variations and we never compared them with the older bounds.\n\n\n======\nSummary of Nov 24 comment-Nov 25 response\n=======\nConcern: The regime where we claim improvement is not practically relevant.\nOur response: The question of why (Large D, small H)-networks can generalize well is still a question that needs to be answered and the question holds theoretical value.\n\nConcern: The claim about improvements in the said regime is vague from the plots.\nOur response: We reported the exact numerical increase in the plots to justify our claim, and referred the Reviewer to Fig 2 (b)\n\nConcern: Training the network by explicitly controlling spectral norms to be 1 works pretty well. Our bound when applied on these networks won't show any improvement. \nOur response: The question of why networks with uncontrolled spectral norms can generalize well is still a question that needs to be answered and the question holds theoretical value.\n\n\nConcern: Explicit polynomial dependence on depth is worse. \nOur response: We agree but why should one ignore the exponential dependence on depth or any improvements on it?\n\nConcern: The major claim of this paper is about the generalization bound for neural nets rather than the deterministic PAC-Bayesian bound.\nOur response: Our claim is two-fold, with the general framework of generalizing noise-resilience one half of it. We argued why generalizing noise-resilience is interesting, and an important, highly non-trivial contribution to understanding generalization in deep learning.\n\n=====\nSummary of Nov 29 comment-Nov 29 response\n=====\nConcern: The terms in our bound, unlike the product of spectral norms, do not provide an explicit way to control the capacity of the networks so that we can have guaranteed bounds.\nOur response: It is not clear to us why the quantities in our bound \"can't\" be explicitly controlled, or why it would be harder to do so when compared to the equally nuanced terms present in bounds like in Arora et al., Even if we can't control them, the metric of \"does the bound contain quantities that can be explicitly controlled?\" is orthogonal to the metric of \"does this bound help explain deep network generalization in some way?\"\n\nConcern: Demonstrate the superiority of the deterministic PAC-Bayesian bound by another example\nOur response: This will certainly help improve future versions of this paper and we'll work on it. But we don't have the option of updating the paper, or much time left in the rebuttal period to think about this to provide a concrete answer. ", "The authors demonstrate the generalization bound for deep neural networks using the PAC-Bayesian approach. They adopt the idea of noise resilience in the analysis and obtain a result that has improved dependence in terms of the network dimensions, but involves parameters (e.g., pre-activation) that may be large potentially. \n\nMy major concern is also regarding the dependence on the pre-activation that can be very large in practice. This is also shown in the numerical experiments. 
Therefore, the overall generalization bound can be larger than existing results, though the latter have stronger dependence on the network sizes. By examining the analysis for the main result, it seems to me that the reason the authors can induce weaker dependence on network sizes is essentially that they involve the pre-activation parameters. This can be viewed as a trade-off in how strongly the generalization bound depends on the network sizes and other related parameters (like the pre-activation here), rather than as a way to strictly tighten the error bound from a more refined/structured way. I also suggest that the authors provide a comparison of their bound and existing ones to see the quantitative difference of the results. \n\nRegarding the noise resilience, it is not clear where the noise resilience shows up from the analysis or the result. From the proof of the main result, the analysis seems to be standard as in the PAC-Bayesian analysis, which is based on bounding the difference of the network before and after injecting randomness into the parameters. The difference with respect to the previous result is due to the different way of bounding such a gap, where the Jacobian, the pre-activation and the function output pop up. But this does not explain how well a network can tolerate the noise, either in the parameter space or the data space. This is different from the previous analysis based on noise resilience, such as [1]. So, the title and the way the authors explain it as noise resilience is somewhat misleading. More detailed explanation will help.\n\n[1] Arora et al. Stronger generalization bounds for deep nets via a compression approach. \n", "Deriving a generalization bound on the original network is important as bounds on modified networks have limited explanatory power. That has been the main premise and motivation of this paper, and we are happy to learn about your agreement with us on this! \n\nWe are also glad you effectively agree that a comparison with [1] is unfair.\n============\nTurning to the subjective points about your characterization of the paper's claim, we believe it is simplistic to state that \"the major claim of this paper is about the generalization bound for neural nets rather than the deterministic PAC-Bayesian bound\". \n\nThe claim of the paper is two-fold: informally, \"a) here is a new method to use train-time noise-resilience of the network to derive a bound on the original network by generalizing noise-resilience and b) here's one particular way of characterizing noise-resilience (in terms of Jacobians and pre-activations) and generalizing it gives us spectral-norm-independent bounds; additionally, here's a particular regime where our bound can do better despite dependence on pre-activations.\" The claim of the paper is not \"here's a bound on the original network\" (which would only be Theorem 4.1). \n\nWhile the dependence on the pre-activation is something you find bothersome -- and we do agree that that is very, very reasonable -- the limitation of the dependence on pre-activations in Thm 4.1 is a limitation in how we characterize noise-resilience and not in how we generalize noise-resilience. \n\nYou might still ask \"why is 'generalizing noise-resilience' interesting? 
Why should I care about it if at this point, I do not know if it can help me provide stronger bounds for \"practically relevant\" deep networks (i.e., large H, not so large D)?\" \n\nFirst, while it is true that we do not have stronger bounds for (large H, small D) networks, the theoretical question of \"why do overparametrized networks generalize well?\" applies even for the (small H, large D regime).\n\nNext, most of the really strong (both non-vacuous and vacuous) bounds that we know so far apply only on modified networks. A BIG gap in these bounds is essentially about how to carry over the benefits of these bounds to the original network. Unfortunately, it might not be obvious how \"big\" a gap this is, because to the best of our knowledge, research so far has not explicitly focused on closing this gap. We believe that the pursuit of closing the gap and providing a bound on the original network is a highly significant and non-trivial pursuit as otherwise these existing papers would have achieved that. \n\nSo far, it seems like one has had to somehow modify the network -- either by dropping/modifying many of its parameters [1], or by adding noise to reduce the dependence of the parameters on the training data, or by doing both! [2,3] -- thereby \"cheating\" the actual question at hand about the original network, only to provide a strong generalization bound on a modified network. Our paper fills this significant conceptual gap here by providing the idea & specific technique of generalizing noise-resilience (Thm 3.1) and further illustrating its promise by showing how it can extend the benefits of noise-resilience to the original network's bound in a specific case -- even if it may not be a practically popular case. \n\n\nEffectively, we provide a novel conceptual answer to a big piece in the puzzle and clearly demonstrate its benefits in a specific regime -- we think this will be valuable to the community and therefore worth publishing. Furthermore, our conceptual answer is quite general [i.e., Thm 3.1 is a general framework] and might inspire researchers to think about ways in which the multitudes of existing bounds on modified networks can be extended to their original networks.\n\n\n\n\n\n[1] Arora et al., Stronger generalization bounds for deep nets via a compression approach\n[2] Zhou et al., Compressibility and Generalization in Large-Scale Deep Learning\n[3] Dziugate and Roy, Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data\n\n", "Thank you for the detailed response. \n\nBelow we first address the factual concerns you have..\n\n===================\nIn your review you say \"you do not think the derived generalization bound is tighter than existing ones (e.g., [1,2])\", we suppose this is a typo and you mean [2,3]? We've compared our results only with [2,3]; as we said a comparison with [1] is extremely unfair. \n=====================\n\nOn your factual concerns about our plots:\n\n While it may not be visually apparent, in Figure 2 (a), the maximum - minimum y value of the blue line is 11.66 - 8.9 = 2.28 while for the line corresponding to [3] is 7.58-3.75 = 3.83. (Note that the y value corresponds to the log of the bound). The amount by which our bound increases with depth is definitely smaller than the amount by which [2,3] increase; even a seemingly small difference in the rates of the increase results in an exponential difference of the actual bound. 
For these two lines (not the hypothetical versions!), the rates translate to 1.57^D vs 2.15^D specifically and we have mentioned this in the paper. Furthermore, Fig 2 (b) clearly demonstrates the tipping point where ours improves over [2,3]. We hope this clears up any question about the vagueness/validity of our claim that for large D and small H our bound does better. \n\nNext, the hypothetical versions of our bound are plotted for the sake of comparison with our own bound to demonstrate that the pre-activation values are indeed the limiting factor in our bound. In the discussion in the paper which begins \"We also plot hypothetical variations of our bound...\" we clearly state\n\n \".... perform orders of magnitude better than our actual bound (note that these two hypothetical bounds do not actually hold good) ... This indicates that the only bottleneck in our bound comes from the dependence on the smallest pre-activation magnitudes, and if this particular dependence is addressed, our bound has the **potential** to achieve tighter guarantees for even smaller D such as D = 8.\" \n\nWe have been careful and transparent in presenting these hypothetical variations and made sure not to draw any explicit comparisons with [2,3] here. \n\nIn short, we have NOT made any unfair comparisons!\n\n============\n\nThe point about the effectiveness of constrained spectral norm sounds quite interesting! Thanks for sharing it.\n\nHowever, we *strongly disagree* that it makes our result seem any less interesting: the fact that such a constrained-spectral-norm scenario works in practice, does not void the theoretical question of \n\"What is a generalization bound on deep networks where the spectral norm each matrix has not been constrained to be 1 and typically lies around 2.1-3?\". The fact that our bound might show no improvements in your scenario does not invalidate whatever claim we make about (small H, large D, unconstrained spectral norm) \n\nWe understand that it is a worthwhile exercise to compare the polynomial dependence on depth/width and we agree that our bound has worse polynomial dependence on depth if we ignore the spectral norm terms. But it is not clear to us, from a theoretical point of view, why one would choose to ignore the existence of an exponential depth factor, and any possible improvement over that factor at the cost of extra polynomial dependence. \n\n===============", "Thanks for the authors’ update and clarification. I do agree that the result that state the bound in terms of the original network is important (unlike [1]), and the derived deterministic PAC-Bayesian type of generalization bound may be of independent interest. But since the major claim of this paper is about the generalization bound for neural nets rather than the deterministic PAC-Bayesian bound, I tend to judge from a view of the former instead of the latter. I do not think the derived generalization bound is tighter than existing ones (e.g., [1,2]) in the scenarios of interesting/practical settings. \n\n1. The network with a small width is not an interesting setting in general. Both practice and recent theoretical efforts show that over-parameterization is more interesting in general, which can help both optimization and generalization.\n\n2. The claim that the derived result has better performance in increasing depths is too vague to see from the experiment results (e.g., Fig 2). 
It is ok to have the 5% and median plots as a way to see how the bound performs in the non-worst-case scenarios, but it is not fair to compare with [1,2]. I think only looking at the general bound (e.g., blue line in Fig. 2) is a fair game. There is no significant trend that the derived bound increases slower for a larger value of depth compared with [1,2]. \n\nOn the other hand, if I understand it correctly, the numerical results are obtained when there are no explicit constraints on weight matrices. The product of norms is indeed an issue in this case. However, it has been shown that using unit spectral norm weight matrices has as good empirical performance as those without such constraints in real tasks [4,5] (they have orthogonal weights). In the latter case, the product of spectral norms is simply 1, where I believe the bounds for [2,3] can be significantly lower without sacrificing the performance. It is not clear how the pre-act value will differ then, but it seems it will still be significantly larger than 1. In addition, when we only compare the polynomial dependence of the bound on the depth and width, the derived bound has a universally worse dependence on depth and the dependence on width is better only when the pre-act values are large over the entire parameter space and data (which seems highly impossible in practice). \n\n[1] Arora et al. Stronger generalization bounds for deep nets via a compression approach, 2018. \n[2] Bartlett et al. Spectrally-normalized margin bounds for neural networks, 2017.\n[3] Neyshabur et al. A PAC-bayesian approach to spectrally-normalized margin bounds for neural networks, 2018.\n[4] Xie et al. All you need is beyond a good init: Exploring better solution for training extremely deep convolutional neural networks with orthonormality and modulation, 2017.\n[5] Huang et al. Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural networks, 2017.", "Dear Reviewer, \n\nThanks for the response. We understand your concern is about i) a lack of comparison with Arora et al., and ii) how big the numerical value of our bound can be in comparison with Arora et al., and about the larger explicit polynomial dependence on depth. We have two concrete points to address your concern and we hope it helps you appreciate our result better:\n\n1. First, we would like to remind you, as we have stated at multiple points throughout our paper and in our earlier responses here, the bound of Arora et al., is NOT on the original network but on a compressed network (as has been noted by them in Remark (1) under page 4 of their arxiv version https://arxiv.org/pdf/1802.05296.pdf) \nWhile they introduce a lot of interesting noise-resilience properties and show how a network can be compressed using those properties, their final bound which is small, holds only for the compressed network. Extending the benefits of noise-resilience in the form of a generalization bound on the original network is another non-trivial part of the puzzle, which is what we accomplish. It would be quite unfair to compare our bound with their bound on a compressed network because -- as we have stated everywhere -- our goal is to be able to say something about the original network. \n\nThe reason we care about a bound on the original network and not on the compressed network is that a bound on the compressed network could potentially tell us very, very little about the original network. 
For example, one can provide a compression bound by simply getting rid of all the parameters in the original network, and training a much smaller network from scratch on the given training dataset. Of course, a generalization bound on the smaller network will be small; but does it say anything at all about the original network? \n\n\n2. We have been careful in stating everywhere in our paper that the 5% bound and median bound are just hypothetical quantities. Our main claim through the experiments is that our bound has better asymptotic behavior w.r.t depth -- at least for networks of small width -- as is evident from the reported slope of our actual bound named \"Ours\" vs Neyshabur+ '17 and Bartlett+ '17 Figure 2 a). In fact, we report the actual value of our bound vs these bounds for a really deep network and show that across multiple runs, the distribution of our bound concentrates over smaller quantities (Figure 2 b). Essentially, we have identified a regime (large D, small H) where, DESPITE the dependence on pre-activation values (which needs to be improved) our bound -- not just the hypothetical variations -- does better than existing bounds on the original network in practice. We hope that the existence of this regime in practice helps you appreciate the usefulness of our approach of generalizing noise-resilience, and the promise it holds in terms of providing bounds on the original network. \n\nThe reason our bound does better in this regime is that when H is small, the pre-activations tend to be large enough, and more importantly, when D is large, the product of spectral norms is exponentially larger than our terms including the extra D^2 in our bound. \n\nWe'd love to know if this addresses your concerns, or if you have further questions.", "We added experiments in the paper demonstrating the actual values of the bound in comparison with existing product-of-spectral-norm based bounds. We want to emphasize that our bound shows weaker dependence on depth, and performs asymptotically better with depth. Specifically, we show an improvement over two popular, existing bounds for $D=28$ and $H=40$. We argue that for larger depth, our bound promises greater improvements over product-of-spectral-norm-based bounds.\n\nNote: The paper was originally within 8 pages, but is now 8.5 pages because of the additional plots & their accompanying discussion. ", "Thank you for increasing your score and for taking into consideration our discussion with Reviewer 4! We thank Reviewer 4 too for their constructive feedback and active interest in helping us improve the quality of the paper. ", "The fact that a number of current generalization bounds for (deep) neural networks are not expressed on the deterministic predictor at stake is arguably an issue. This is notably the case of many recent PAC-Bayesian studies of neural networks stochastic surrogates (typically, a Gaussian noise is applied to the network weight parameters). The paper proposes to make these PAC-Bayesian bounds deterministic by studying their \"noise-resilience\" properties. The proposed generalization result bounds the margin of a (ReLU) neural network classifier from the empirical margin and a complexity term relying on conditions on the values of each layer (e.g., via layer Jacobian norm, the layer output norm, and the smallest pre-activation value). \n\nI have difficulty to attest if the proposed conditions are sound. 
Namely, the authors genuinely admit that the empirically observed pre-activation values are not large enough to make the bound informative (I must say that I truly appreciate the authors' candor when it comes to analyzing their result). That being said, the fact that the bounds does not scale with the spectral norm of the weight matrices, like previous PAC-Bayesian result for neural networks, is an asset of the current analysis.\n\nI must say that I had only a quick look to it the proofs, all of them being in the supplementary material along most of the technical details. Nevertheless, it appears to me as an honest, original and rigorous theoretical study, and I think it deserves to be presented to the community. It can bring interesting discussion and suggest new paths to explore to explain the generalization properties of neural networks.\n\nMinor comment: For the reader benefit, Theorem F.1 in page 7 should quickly recall the meaning of some notation, even if it's the \"short version\" of the theorem statement.\n\n====\nupdate: The bound comparison added value to the paper. It strengthens my opinion that this work deserves to be published. I therefore increase my score to 7. ", "Thanks for the authors' update. \n\nI still do not quite understand what benefit this new result provides compared with existing ones. For example, Neyshabur’18 and Bartlett’17 have the bound of the order (spectral norm product)*sqrt(D^3 H rank/m) by ignoring log factors, and Arora'18 has the bound of the order (max function output)*sqrt(D^3 H^2 /m). This paper (Theorem 4.1) has the bound of the order sqrt(D^7 H max(1/(H pre-act^2), max Jacobian norm)/m). It seems that the order 1/pre-act can be even significantly larger than the spectral norm product and the max function output, which leads to an overall larger bound than existing ones. The empirical result of Arora'18 is not provided, which should be a lot better than Neyshabur’18 and Bartlett’17, hence the proposed bound as well. Moreover, the poly(depth, width) dependence is also stronger than the existing ones. I do not think using the 5% and median pre-act values are fair comparisons with other bounds, which could have been tighter as well if they also use analogous worst-case exempted results. \n\nThe analysis of the PAC-Bayes result in terms of the original function (Theorem 3.1) might be of independent interest here. But since the derived result for network functions is worse than existing ones (the dependence on depth/width and pre-act parameters), I do not see their significance in better understanding the generalization performance of neural nets here.", "Thank you for your quick responses, for your useful suggestions, and for updating your score!", "Thanks for updating the figure. At this point, all my concerns are addressed properly and hence I updated the score.", "This paper presents a PAC-Bayesian framework that bounds the generalization error of the learned model. While PAC-Bayesian bounds have been studied before, the focus of this paper is to study how different conditions in the network (e.g. behavior of activations) generalize from training set to the distribution. This is important since prior work have not been able to handle this issue properly and as a consequence, previous bounds are either on the networks with perturbed weights or with unrealistic assumptions on the behavior of the network for any input in the domain.\n\nI think the paper could have been written more clearly. I had a hard time following the arguments in the paper. 
For example, I had to start reading from the Appendix to understand what is going on and found the appendix more helpful than the main text. Moreover, the constraints should be discussed more clearly and verified through experiments.\n\nI see Constraint 2 as a major shortcoming of the paper. The promise of the paper was to avoid making assumptions on the input domain (one of the drawbacks in Neyshabur et al 2018) but the constraint 2 is on any input in the domain. In my view, this makes the result less interesting.\n\nFinally, as authors mention themselves, I think conditions in Theorem F.1 (the label should be 4.1 since it is in Section 4) could be improved with more work. More specifically, it seems that the condition on the pre-activation value can be improved by rebalancing using the positive homogeneity of ReLU activations.\n\nOverall, while I find the motivation and the approach interesting, I think this is not a complete piece of work and it can be improved significantly.\n\n===========\nUpdate: Authors have addressed my main concern, improved the presentation and added extra experiments that improve the quality of the paper. I recommend accepting this paper. ", "Hi! We replaced the table reporting a single value with a distribution of values from 12 different runs instead of just reporting averages which we think can be misleading here. Note that we have done this for D=28 instead of 26 as before. We hope we have addressed your above concerns through our previous response below and with the updated figure!", "Thanks for engaging in a discussion with us and for providing prompt responses -- we really appreciate it!\n\nWe are glad you agree with the asymptotic benefits of our bound. \n\nYour concern about Table b is understandable. The change in values is likely due to the fact that we used different training hyperparameters for D=26 (we will be sure to highlight the difference in the main text in the next revision, if the table persists). Training the networks beyond D=12 or 13 using vanilla SGD was tricky, and we realized we had to experiment with larger depths to convince the readers of the asymptotic benefits, so we had to pick a different D and resort to tuning the hyperparameters differently.\n\nWe appreciate your different suggestions about Table b and we will work on it. \n\nAs for H=1000, as we said we show plots for H=1280 in Figure 4 including the individual terms in the bound and the overall bound. The goal of the experiments in the main paper was to identify and showcase the specific regime where we can hope the pre-activation values to not spoil the benefits of generalizing noise resilience. Improving the dependence on the pre-activation is crucial to achieve reasonable bounds for larger widths. ", "Thanks for adding the plot. I think it is very helpful and improves the quality of the paper. I understand that revisions take time and energy but I think there are two issues with the current Figure 2:\n\nMore important:\n\nI agree with your conclusion that for sufficiently large D, your bound becomes lower than others. However, I find table (b) in Figure 2 a bit misleading. The main reason is that your bound is very sensitive to the value of pre-activations and hence if you train the same model with different random seeds, your bound gives very different values on each of the trained model. As a result, one cannot rely on reporting a single number here. 
Another thing that is a bit mysterious is that the slopes in figure (a) suggest that other bounds should be around 10^11 at depth 26 if they increase with the same rate but then their value is around 10^14 in table (b). So what happens between depth 13 and depth 26? \n\nI can think of three solutions here: 1) remove the table 2) report the average of 10 runs in table (b) 3) remove the table but extend the plot (a) to depth 30.\n\n\nLess important:\n\nI requested evaluating the bound for a network with 1K hidden units in each layer because that is the number which is typically used in practice. I still believe 40 hidden units is too low and it would be better to have at least 256 hidden units but this is not very important and I'm not going to insist on this.", "Dear Reviewer,\n\nWe want to let you know that, like you've suggested, we've added Figure 2 in the main paper, demonstrating the value of our bound for different values of D, for H=40. We want to highlight that our bound has weaker dependence on depth and does better than other product-of-spectral-norm-based bounds for sufficiently deep, not-so-wide networks. We hope this helps you better appreciate the contribution and significance of our work.", "Hi again!\n\nWe want to let you know that we've incorporated all your suggestions and presented some additional experiments too. \n\nSpecifically, in the main paper, we have demonstrated the value of our bound for H=40, and varying depth and compared with spectral-norm-bounds Neyshabur et al., '18 and Bartlett et al., 17. We argue that for this H, our bound should perform asymptotically better and show that our bound does better for D=25. \n\nDue to space constraints, we had to present some of the plots in the appendix. \n\n>>>>>>>>> Please fix the number of layers and plot the quantities vs \"#hidden units per layer\" as well (up to at least 2K hidden units per layer).\n\nThe plots in Appendix Figure 5 show the quantities and the overall bound (including existing bounds) for H=40 until 2000, for depth D=8.\nAdditionally, Figure 6 shows a similar plot for depth D=14, for H=40 until 1280.\n\n\n>>>>>>>> Please also report the numerical value of the generalization bound on a network with 1K hidden units and 10 layers.\n\nYou can find the plots in Appendix Figure 4 for no. of units H=1280, where we show both the individual quantities and the actual bound for different depths uptil D=14.\n\n\n>>>>>>> If you have time, compare it to at least one of the other generalization bounds.\nCompared our bounds with both Neyshabur et al., '18 and Bartlett et al., 17 which have pretty similar orders of magnitudes with each other. Please refer to Figure 2 in the main paper.\n\nWe are eager to hear back from you if you have any feedback or further questions, and would love to know your updated review.", "Hi! We wanted to let you know that we've uploaded a revision with suggestions 1 2 and 3(a) incorporated. We are still working on 3b. \n\n1. We're glad you find the theorem interesting. Indeed, we believe that the generality and the novelty in this theorem leaves a lot of opportunity for exploration by the both the deep learning theory community and the learning theory community.\n\n2. We moved the network-related notations to Section 4. In Section 3, we completely rephrased the description of \"INPUT-DEPENDENT PROPERTIES OF WEIGHTS\" and the description following Constraint 2, without using neural network notations. We also modified it to read better. 
We hope that the rewritten version of this discussion, and the additional text we've squeezed into Theorem 3.1 can help parse the notation more easily. However, we think it's hard to get rid of the other notations involving T, r, \\rho etc., which are integral to describing the abstract setup. Having said that, we are happy to consider further suggestions here! We really appreciate your above suggestions in this context and believe it helps reduce the burden on the reader.\n\n3 (a) Again, this is a good point and we have incorporated it as follows: \nIn the last paragraph of \"Our Contributions\" we say:\n\"Intuitively, we make this assumption to ensure that under sufficiently small parameter perturbations, the activation states of the units are guaranteed not to flip.\"\nand again after Thm 4.1, we modified the paragraph at the end of page 7, and added the line:\n\"Specifically, using the assumed lower bound on the pre-activation magnitudes we can ensure that, under noise, the activation states of the units do not flip; then the noise propagates through the network in a tractable, “linear” manner. Improving this analysis is an important direction for future work.\"", "Dear Reviewer, \n\nThanks for considering our clarification and accepting it. Also, thanks for studying the paper more carefully and providing concrete, valuable feedback. We will work on them! \n\nCurrently, there are plots for dependence on width, upto 1280 hidden units, present in Figure 3 and 4. We will present more plots as soon as possible.\n\n\n\n\n", "Thanks a lot for clarifying constraint 2. I think my confusion was because you have not mentioned the constraints in the Theorem 3.1 statement but used it in the proof of the theorem (and of course because I did not read the proof of Theorem 4.1 carefully). I have spent more time reading your paper and here is some feedback:\n\n1- I find Theorem 3.1 interesting and useful. First of all, please clearly mention the assumptions in the statement of theorem 3.1, i.e. constraint 1 and 2. \n\n2- There is too much notation in the paper. I understand that there is no easy way to figure out how to reduce the notation but this complexity hides the result of the paper and not many readers are willing to spend hours figuring out the notation. I suggest to put the neural net notation after the Theorem 3. With very simple notation, you should be able to write the assumptions and Theorem 3. I think this is the most interesting part of the paper and it worth spending time to present it properly.\n\n3- I believe Theorem 4.1 is needed to demonstrate how Theorem 3.1 can be useful but the limitations of Theorem 4.1 (which are not related to Theorem 3.1) should be discussed clearly. You already mentioned the main limitation which is the dependence of the bounds on the inverse of smallest pre-activation. I have two suggestions:\na) Even though it is mentioned indirectly in the discussion, I think you should clearly mention early in the discussion that this limitation is due to the fact that the proof does not allow activations to flip. This helps the reader to have a better understanding of this limitation and potentially build on your work.\n\nb) Most plots show the quantities vs depth. Please fix the number of layers and plot the quantities vs \"#hidden units per layer\" as well (up to at least 2K hidden units per layer). Please also report the numerical value of the generalization bound on a network with 1K hidden units and 10 layers. 
If you have time, compare it to at least one of the other generalization bounds. To be clear, I am not going to evaluate your generalization bound based on these plots but what matters is that these plots help the reader to have a clearer picture.\n\nI am looking forward to the revision and then I will decide about the final score (up to 8 if all the suggestions are applied).", "Hi! Based on Reviewer 1's feedback, we uploaded a revision with Appendix G that now describes and compares the noise-resilience conditions assumed in our work vs. the ones assumed in prior work. We believe that in addition to our earlier responses to your review, this section might better highlight how noise-resilience is studied in our paper.\n\nOverall, we hope our comments\ni) clarify the main contribution of this paper, which lies in showing how noise-resilience of the network generalizes from training data to test data. \nii) convince you that our analysis is not a standard application of PAC-Bayes theorems (and is on the contrary, quite nuanced and novel)\niii) justify the title.\n\nWe are eager to know if you have any questions remaining; if your concerns have been clarified, we sincerely hope it helps you re-evaluate our paper and update your score. \n", "Hi again! \n\nFirst of all, a quick note: we updated the label of Theorem F.1 to 4.1. Thanks for your note!\n\nNext, we'd like to get in touch with you again to know if we clarified your concern regarding Constraint 2. (By the way, please let us know in case we misunderstood your concern.) \n\nWe'd like to reiterate, like we state throughout the text of the main paper, we do not make any assumption that holds on all input datapoints. The lack of such an assumption is the main strength/contribution of the paper. We'd also like to point out that the mathematical statement of Constraint 2 and the text following it, and the mathematical statements of Theorem 3.1 and 4.1, all reflect this fact!\n\nIn the light of this discussion, we respectfully encourage you to reevaluate the paper & update your score. Thank you!", "As you suggested, we have recalled some of the notation in the text preceding Theorem F.1 (which by the way is now Theorem 4.1 as it should be, thanks to Reviewer 4). \nThanks for your suggestion!", "Dear Reviewer,\n\nThanks for your positive feedback!\n\nWe have uploaded a revised version with Appendix G where we have added a one-page discussion relating our noise-resilience conditions and the conditions in prior work. We hope this provides you better context to understand our assumptions. Happy to provide more details if needed.\n\n", "Dear Reviewer, thanks for your precise summary of the paper's approach and your thoughts about it! \n\nWe strongly disagree with your remark that Constraint 2 is \"a major shortcoming of the paper\". Here's why:\n\nConstraint 2 is not restrictive and is in fact a very natural/intuitive constraint of the properties in the network -- and it provably holds good. 
At a high level, all the constraint says is the following:\n\n**For a given point x** for which the first r-1 sets of properties are bounded (say the first 3 layers have small l2 norm), the r-th property is noise-resilient (i.e., under noise injected into the parameters, the 4th layer's l2 norm does not suffer much change under parameter perturbation).\n\nThis is a pretty natural constraint **which provably holds** for networks because of how the output of a particular layer depends only on the output of the preceeding layers.\n\nWe make NO assumption of the form that something about the network holds good for ALL inputs in the domain. As you can see in Theorem 3.1, we say \"if W satisfies T_r(W, x, y) > Delta_r^* ... for all (x,y) in S\" which means that these properties are bounded only for the training data. \n\nWe hope this clears the misunderstanding surrounding the constraint and convinces you that this is not at a drawback at all!\n\nThe drawback that we acknowledge is regarding the dependence on the pre-activations, which we hope to improve upon in the future. But as it is, we believe the paper makes a conceptual contribution in terms of a new methodology of generalizing noise-resilience, and accomplishes a PAC-Bayes based product-of-spectral-norm independent bound in specific settings where it wasn't possible. \n\nAs you've suggested, we will improve the discussion of the constraints; thanks for your comment!\n", "Thanks for you comments! In this response, we'll address the second half of your comment and explain the contributions of the paper, which we believe has been misunderstood. \n\nWe first note that our contribution is not just about getting rid of the dependence on the products of the spectral norms of the weight matrices; our contribution is also that we arrive at such a bound on the *original network* and not just a compressed network/stochastic network. While compression-based bounds like [1] or other PAC-Bayes based bounds like [2,3] numerically evaluate to smaller values, and provide a partial answer for why deep networks generalize well, these bounds are not on the original network learned by SGD. An extremely important and **non-trivial** piece of the puzzle is to extend the benefits of these bounds (or at least some of its benefits -- in this case the lack of a product-of-spectral-norm dependence) over to the original network. \n\n\nWe do this by presenting a structured and novel technique which \"generalizes noise-resilience\" presented in Section 3. Thus we disagree with the observation that our bound does not \"strictly tighten the error bound from a more refined/structured way.\" Below we describe what we mean by \"generalizing noise-resilience\", in effect justifying our title, and also clarifying what exactly our contribution is.\n\nLike in [1,2], we model noise-resilience in terms of certain \"conditions\". For example, [1] assume conditions like \"the interlayer smoothness of the network is sufficiently large on training data\". We assume similar conditions (e.g., \"the output of each layer has small l2 norm on the training data\") this allows us to bound the output perturbation of the network without incurring a product-of-spectral-norm dependence. Crucially, our theory and the theory in [1,2] assume these conditions to hold only **on training data**. \n\nWith reference to your comment:\n \"The difference with ... previous result due to the different way of bounding such a gap... 
But this does not explain how well a network can tolerate the noise\":\n\n While there are technical differences in how these conditions are formulated in [1,2] vs. our work, and how the perturbation in the output is bounded in terms of these conditions, the exact formulation of the conditions is NOT our key contribution. As mentioned in Page 3 under our contributions, our conditions are in fact philosophically similar to those in [1] and [2] and at a high level essentially characterize how the activated parts of the weight matrices in the network interact with each other. We strongly emphasize the following points:\n\n\n=====> The novelty in our paper is NOT primarily about explaining why a network is noise-resilient (on training data). \n\n=====> Our main contribution, when compared to [1] or [2], is that we take a step beyond these existing approaches and present an approach to how conditions assumed about the network on the training data *can be generalized to test data*. This step is crucial and allows us to claim that the network is noise-resilient on test data as well. \n\n\n The key reason [1,2] were not able to present product-of-spectral-norm independent bounds on the original network (but only on a modified network) was that they did not generalize these conditions about the behavior of the network from the training data to test data. \n\nTo achieve this, we present a structured approach that iterates through the layers and generalizes these conditions one after the other, in a specific order. It requires a lot of care to not incur product-of-spectral-norm dependency (or other extra dependencies on the width) while generalizing any of these multiple O(depth^2) conditions. Besides, to generalize each condition, we require a particular style of reducing PAC-Bayesian bounds to deterministic bounds. Overall, we hope you understand that our analysis is quite far from \"standard as in the PAC-Bayesian analysis, which is based on bounding the difference of the network before and after injecting randomness into the parameters\".\n\nThe idea of generalizing these conditions is novel and is an important step to explain the noise-resilience of these networks on testing data. Besides being refined and structured, most importantly, our approach is general and leaves scope for future work to use it as a hammer on different sets of conditions (hopefully one that doesn't assume large preactivation values on all units!).\n\n\nWe hope our detailed response better explains the contribution of our work to answering the generalization puzzle, in the context of the results in [1,2].\n\n[1] Arora et al., \"Stronger generalization bounds for deep nets via a compression approach.\" \n[2] Neyshabur et al., \"Exploring gen- eralization in deep learning.\"\n[3] Dziugaite et al., \"Computing nonvacuous generalization bounds ... than training data.\"\n\n", "Thank you for your positive response! We are glad you agree that many of the current generalization bounds for deep networks apply only to a compressed/stochastic network; indeed, even though these bounds provide valuable intuition about generalization, we believe that an extremely important and non-trivial piece of the puzzle is to extend the benefits of these bounds (or at least some of its benefits -- in this case the lack of a product-of-spectral-norm dependence) over to the original network. 
And we achieve this through an approach that \"generalizes noise resilience\".\n\nWith regards to your suspicion about the proposed \"conditions\", the only pesky condition in our result is the one involving the pre-activation values. The other bounds on the other quantities certainly hold favorably in practice as seen in our plots. We must also note that these conditions themselves are not the main contribution of our paper (and we have stated this point in \"Our Contribution\" in Page 3); the main contribution lies in how we generalize these conditions assumed about the network on the training data, to test data (without ever incurring a product-of-spectral-norms dependence). The conditions themselves are in fact philosophically similar to conditions examined and verified in prior work [1,2]; in essence, they dictate how the parts of the weight matrices activated by a particular datapoint, interact with each other. \n\nEven as far as the condition involving the pre-activation values are concerned, it appears in our analysis to ensure that the hidden units don't jump their non-linearity under parameter perturbations; the assumption that only a small proportion of the hidden units do not jump the non-linearity under perturbations has been made in prior works, although in a more relaxed form e.g., \"Interlayer Smoothness\" in [1] or condition C2 in [2], and *these have been verified in practice*. Intuitively, we believe that this assumption allows one to argue that the network is \"linear\" in a small local neighborhood in the parameter space, and this local linearity helps imply that the network has lesser complexity. \n \nAgain, we thank the reviewer for appreciating our contributions. We hope that the community finds our approach of generalizing noise-resilience useful. Our framework is general in that one could think of designing different sets of conditions that imply noise-resilience of the network, and argue how these conditions would generalize; with a better understanding of the source of noise-resilience in deep networks, we might identify better sets of conditions which can be generalized this way to obtain tighter bounds on the original network.\n\nWe will take note of the reviewer's comment about Theorem F.1!\n\n[1] Arora et al., \"Stronger generalization bounds for deep nets via a compression approach.\" \n[2] Neyshabur et al., \"Exploring gen- eralization in deep learning.\"\n", "We provide some context as to why the dependence on pre-activation values is not outrageous, and is to some extent necessary:\n\ta) Here's our intuition: the larger the pre-activation values, the less likely is it that, under parameter perturbations, the hidden units jump the non-linearity in the ReLU; in other words, the network is more likely to behave \"linearly\" under small perturbations. Roughly speaking, the more locally linear the network is, the simpler is the fit that the network has found, and hence better the generalization. \n\tb) The assumption that only a small proportion of the hidden units do not jump the non-linearity under perturbations has been made in prior works e.g., \"Interlayer Smoothness\" in [1] or condition C2 in [2], and *these have been verified in practice*. Overall, it is intuitively reasonable that a generalization bound depends on a quantity that characterizes this behavior. Currently, for our bound to be small, one would need that none of the hidden units jump the non-linearity, which as we admitted in the paper, does not reflect reality completely. 
Since our framework is quite general, with an even more careful analysis, in the future, one might be able to apply our framework for the case where this assumption is relaxed to better reflect reality (i.e., all but a small proportion of hidden units have a sufficiently large pre-activation value).\n", "This paper provides new generalization bounds for deep neural networks using the PAC-Bayesian framework. Recent efforts along these lines have proved bounds that \neither apply to a classifier drawn from a distribution or to a compressed form of the trained classifier. In contrast, the paper uses PAC Bayesian bounds to \nprovide generalization bounds for the original trained network. At this same time, the goal is to provide bounds that do not scale exponentially in the depth of the\nnetwork and depend on more nuanced parameters such as the noise-stability of the network. In order to do that the paper formalizes properties that a classifier must \nsatisfy on the training data. While these are a little difficult to understand in general, in the context of ReLU networks these boil down to bounding the l2-norms\nof the Jacobian and the hidden layer outputs on each data point. Additionally, the paper also requires the pre-activations to be sufficiently large, which as the authors \nacknowledge, is an unrealistic assumption that is not true in practice. Despite that, the paper makes an important contribution towards our current understanding of \ngeneralization of deep nets. It would have been helpful if the authors had a more detailed discussion on how their assumptions relate to the specific assumptions in the papers\nof Arora et al. and Neyshabur et al. This would help when comparing the results of the paper with existing ones. " ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "rkgv9MsD0Q", "H1eo-DD537", "iclr_2019_Hygn2o0qKX", "rkgv9MsD0Q", "rkgv9MsD0Q", "SylZrfAXC7", "SygikY27Am", "iclr_2019_Hygn2o0qKX", "SkxjMyes3m", "iclr_2019_Hygn2o0qKX", "H1eo-DD537", "ByePH_GzR7", "BJxhNVMz07", "iclr_2019_Hygn2o0qKX", "SJxW0Xje0X", "SJxW0Xje0X", "Byeh4EbxCX", "H1eo-DD537", "HJxRVajna7", "HJxRVajna7", "HJxRVajna7", "B1x5VnK5T7", "H1eo-DD537", "Bkgs5rO8TX", "SkxjMyes3m", "H1gqC-a3hQ", "Bkgs5rO8TX", "H1eo-DD537", "SkxjMyes3m", "H1eo-DD537", "iclr_2019_Hygn2o0qKX" ]
iclr_2019_HygsfnR9Ym
Recall Traces: Backtracking Models for Efficient Reinforcement Learning
In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them. To this end, we advocate for the use of a \textit{backtracking model} that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state (or one that is estimated to have high value), predicts and samples which (state, action)-tuples may have led to that high value state. These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy. We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards. Our method improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks.
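To make the procedure described in the abstract concrete, the following is a minimal illustrative sketch of how recall traces could be used to augment policy training. The names and interfaces (backtracking_model.sample, policy.log_prob, the optimizer calls) are assumptions made for illustration and are not taken from the paper; the actual parameterization of the backtracking model and the RL algorithm it is combined with are specified by the authors.

```python
# Illustrative sketch only (assumed interfaces, not the authors' code): a policy is
# additionally trained to imitate "recall traces" sampled from a learned
# backtracking model q(s_{t-1}, a_{t-1} | s_t), starting from a high-value state.

def sample_recall_trace(backtracking_model, high_value_state, num_steps):
    """Walk backwards from a high-value (or estimated high-value) state,
    collecting the (state, action) pairs that may have led to it."""
    trace = []
    state = high_value_state
    for _ in range(num_steps):
        # One backward step: sample a predecessor state and the action taken there.
        prev_state, prev_action = backtracking_model.sample(state)
        trace.append((prev_state, prev_action))
        state = prev_state
    trace.reverse()  # order the pairs forward in time, ending at the high-value state
    return trace

def imitation_update(policy, optimizer, trace):
    """Increase the policy's log-likelihood of the actions along the recall trace."""
    loss = -sum(policy.log_prob(action, state) for state, action in trace)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the full method these imitation updates would be interleaved with the usual on- or off-policy RL updates, and the backtracking model itself would presumably be fit on the agent's own high-return trajectories; both of those training loops are omitted from this sketch.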
accepted-poster-papers
The paper presents "recall traces", a model based approach designed to improve reinforcement learning in sparse reward settings. The approach learns a generative model of trajectories leading to high-reward states, and is subsequently used to augment the real experience collected by the agent. This novel take on combining model-based and model-free learning is conceptually well motivated and is empirically shown to improve sample efficiency on several benchmark tasks. The reviewers noted the following potential weaknesses in their initial reviews: the paper could provide a clearer motivation of why the proposed approach is expected to lead to performance improvements, and how it relates to learning (and uses of) a forward model. Details of the method, e.g., model parameterization is unclear, and the effect of hyperparameter choices is not fully evaluated. The authors provided detailed replies to all reviewer suggestions, and ran extensive new experiments, including experiments to address questions about hyperparameter settings, and an entirely new use of the proposed model in a learning from demonstration setting. The authors also clarified the paper as requested by the reviewers. The reviewers have not responded to the rebuttal, but in the AC's assessment their concerns have been adequately addressed. The reviewers have updated their scores in response to the rebuttal, and the consensus is to accept the paper. The AC notes that the authors seem unaware of related work by Oh et al. "Self Imitation Learning" which was published at ICML 2018. The paper is based on a similar conceptual motivation but imitates high-value traces directly, instead of using a generative model. The authors should include a discussion of how their paper relates to this earlier work in their camera ready version.
train
[ "HyxOprf0n7", "Bkg6-MfsAm", "H1eoDATPRQ", "SJeB9tx4CX", "HyxLpF2lC7", "rye0EHNcpm", "BkgkxrE567", "BkxisNV5Tm", "SkxjYXEq6Q", "HylIyNNcpQ", "S1lBH745TX", "B1lWBJF93Q", "r1lsbRy5hm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Revision:\nThe authors have thoroughly addressed my review and I have consequently updated my rating accordingly.\n\nSummary:\nModel-free reinforcement learning is inefficient at exploration if rewards are\nsparse / low probability.\nThe paper proposes a variational model for online learning to backtrack\nstate / action traces that lead to high reward states based on best previous\nsamples.\nThe backtracking models' generated recall traces are then used to augment policy\ntraining by imitation learning, i.e. by optimizing policy to take actions that\nare taken from the current states in generated recall traces.\nOverall, the methodology seems akin to an adaptive importance sampling\napproach for reinforcement learning.\n\nEvaluation:\nThe paper gives a clear (at least mathematically) presentation of the core idea\nbut it some details about modeling choices seem to be missing.\nThe experimental evaluation seems preliminary and it is not fully evident when\nand how the proposed method will be practically relevant (and not relevant).\n\nMy knowledgable of the previous literature is not sufficient to validate the\nclaimed novelty of the approach.\n\nDetails:\nThe paper is well written and easy to follow in general.\n\nI'm not familiar enough with reinforcment learning benchmarks to judge the\nquality of the experiments compared to the literature as a whole.\nAlthough there are quite a few experiments they seem rather preliminary.\nIt is not clear whether enough work was done to understand the effect of the\nmany different hyperparameters that the proposed method surely must have.\n\nThe authors claim to show empirically that their method can improve sample\nefficiency.\nThis is not necessarily a strong claim as such and could be achieved on\nrelatively simple tests.\nIn the discussion the authors claim their results indicate that their approach\nis able to accelearte learning on a variety of tasks, also not a strong claim.\n\nThe paper could be improved by adding a more clear explanation of the exact way\nby which the method helps with exploration and how it affects finding sparse\nrewards (based on e.g. Figure 1).\nIt seems that since only knowledge of seen trajectories can be used to generate\npaths to high reward states it only works for generating new trajectories\nthrough previously visited states.\n\nQuestions that could be clarified:\n- It is not entirely obvious to me what parametric models are used for the\nbacktracking distributions.\n- Does this method not also potentially hinder exploration by making the agent\nlearn to go after the same high rewards / Does the direction of the variational\nproblem guarantee coverage of the support of the R > L distribution by samples?\n- What would be the effect of a hyperparameter that balances learning the recall\ntraces and learning the true environment?\n- Are there also reinforcement learning tasks where the proposed methods'\nimprovement is marginal and the extra modeling effort is not justified (e.g.\ndue to increase complexity).\n\nPage 1: iwth (Typo)\nPage 2: r(s_t) -> r(s_t, a_t)\nPage 6: Prioritize d (Typo)\n", "We thank the reviewers for the detailed feedback on our paper. We are glad that the reviewers found our paper to be \"solid contribution with well-written motivation with theoretical interpretations\" (reviewer 2) and \"well written in general\" (reviewer 1). 
\n\nWe made the following changes to the manuscript to address the reviewers' comments.\n\n- We conducted more ablation experiments for the 3 hyper-parameters associated with our model, as asked by Reviewer 1.\n\n- Training the backtracking model using demonstrations (Reviewer 2) and then using the backtracking model for training another policy from scratch. We did experiments on the Ant env from mujoco and Seaquest from atari, where we first train a backtracking model from the expert demonstrations, and then use that for training a policy. We achieve 2.5x and about 2x sample efficiency in our very preliminary experiments. \n\n- Comparison with the forward model (Sections G and H) as pointed out by Rev 2. Rev 2 mentioned an interesting point about training forward and backward models. Our conclusion is that building the backward model is neither necessarily harder nor easier. Realistically, building any kind of model and having it be accurate for more than, say, 10 time steps is pretty hard. But if we only have 10 time steps of accurate transitions, it is probably better to take them backward.\n\nWe feel that these extra experiments have improved the quality of the paper a lot, and we are grateful to the reviewers for the very useful feedback. ", "Thank you again for the thoughtful review. We would like to know if our rebuttal (see below, \"Thanks for your feedback! (n/3) \") adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. Are there any other aspects of the paper that you think could be improved?\n", "We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate the reviewer's concerns. We hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the thorough feedback on our work.\n", "We have updated the paper with the following changes to address reviewer comments:\n\n- Added a comparison to the forward model (Reviewer 2)\n- Conducted preliminary experiments to show that the backtracking model can be trained just by using demonstrations. (Reviewer 2)\n- Effect of the 3 hyperparameters associated with the proposed model. \n\nThank you for your time! The authors appreciate the time the reviewers have taken to provide feedback, which resulted in improving the presentation of our paper. Hence, we would appreciate it if the reviewers could take a look at our changes and additional results, and let us know if they would like to either revise their rating of the paper, or request additional changes that would alleviate their concerns. \n\n", ">> Are there also reinforcement learning tasks where the proposed method's improvement is marginal and the extra modeling effort is not justified (e.g. due to increased complexity).\n\nWe think that having a backtracking model could always improve the performance. We evaluate it on a large number of very different domains (when the backtracking model is given, as well as when we are learning the backtracking model, in both the off-policy and on-policy cases) and find that in all cases it improves performance. But we also think that for some environments the backtracking model can be very hard to learn. 
For other problems, learning a model of the environment is difficult in either direction, so those problems would be hard as well. The first issue would be severe if the forward dynamics are strongly many-to-one, for example. The second case applies to any complex environment and especially partially observed ones. Our method shines most when the dynamics are relatively simple but the problems are still hard due to sparse rewards. \n\nOn the other hand, the backtracking model could also be used in practical settings like robotics that involve repeatedly attempting to solve a particular task, and hence resetting the environment between different attempts. Here, we can use a model that learns both a forward policy and a backtracking model, and resetting of the environment can be approximated using the backtracking model. By learning this backtracking model, we can also determine when the policy is about to enter a non-reversible state, and hence it can be useful for safety. It remains future work to investigate this. \n\n>> Does this method not also potentially hinder exploration by making the agent learn to go after the same high rewards / Does the direction of the variational problem guarantee coverage of the support of the R > L distribution by samples?\n\nThis is a tricky subject and it is hard to come up with principles that will improve exploration in general and to be sure that something doesn't hinder exploration for some problems. In our setup, the exploration comes mostly from the goal generation methods. The backward model helps more to speed up the propagation of high value to nearby states (indirectly), such that fewer environment interactions are needed, but that could perhaps lead to fewer trips to locations with incorrectly assumed low value. On the other hand, the method might cause the exploration of different (better) paths to the same high value states as well, which should be a good thing. In general, since we are seeking high value (i.e. high expected return), it shouldn't hinder exploration much. But if we instead seek “high reward” states, then it would hinder performance (as our experiments show). \n\n\nClosing:\nThank you for your time. We hope you find that our revision addresses your concerns.\nPlease let us know if anything is unclear here, if you’re uncertain about part of the argument, or if there is any other comparison that would be helpful in clarifying things more. \n", ">> What would be the effect of a hyperparameter that balances learning the recall traces and learning the true environment? >> whether enough work was done to understand the effect of the many different hyperparameters that the proposed method surely must have.\n\nIn order to address the reviewer’s question, we did more experiments on the four-room maze as well as on the Mujoco domain. \nWe have 3 associated parameters: \n1) How many traces to sample from the backtracking model. \n2) How many steps each trace should be sampled for, i.e. the length of the trajectory sampled. \n3) And as the reviewer pointed out, the effect of a hyperparameter that balances learning the recall traces and learning the true environment.\n\nQ1) How many traces to sample from the backtracking model?\n\nFor most of our experiments, we sample only a single trace from the backtracking model. But we observe that sampling more traces actually helps for more complex environments. This is again in contrast to the forward model. 
\n\nQ2) How many steps each trace should be sampled for?\nIn practice, if the agent is limited to one or a few initial states, a concern related to the length of generated backward traces is that longer traces become increasingly likely to deviate significantly from the traces that the agent can generate from its initial state. Therefore, in our experiments, we sample fairly short traces. Figure 8 (Appendix, Section B) shows the performance of our model (with TRPO) as we vary the length of traces from the backtracking model. All the time-steps are in thousands, i.e. (x1000). As evident from the figure, sampling very long traces seems to hinder the performance on all the domains.\n\nQ3) Effect of a hyperparameter that balances learning the recall traces and learning the true environment\n\nWe have added Section H in the Appendix containing ablations for the four-room environment and some Mujoco tasks, which show the effect this hyperparameter has on performance. \n\nIn Figure 17 (Appendix, Section H) we noticed that as we increase the ratio of updates in the true environment to updates using recall traces from the backward model, the performance decreases. This highlights again the advantages of learning from the recall traces. In the second experiment, we see the effect of training from the recall traces multiple times for every iteration of training in the true environment. Figure 18 (Appendix, Section H) shows that as we increase the number of iterations of learning from recall traces, we correspondingly need to choose a smaller trace length. For each update in the real environment, making more updates from recall traces helps if the trace length is smaller, and if the trace length is larger, it has a detrimental effect on the learning process. \n\nIn Figure 19 (Appendix, Section H) we again find that for Mujoco tasks doing more updates using the recall traces is beneficial. Also, for more updates we need to choose a smaller trajectory length.\n\nIn essence, there is a balance between how much we should train in the actual environment and how much we should learn from the traces generated from the backward model. In the smaller four-room environment, a 1:1 balance performed the best. In Mujoco tasks and larger four-room environments, doing more updates from the backward model helps, but in the smaller four-room maze, doing more updates is detrimental. So depending upon the complexity of the task, we need to decide this ratio. \n", "Thanks for the very thorough feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have.\n\n\"I'm not familiar enough with reinforcement learning benchmarks to judge the quality of the experiments compared to the literature as a whole.\"\n\nThe goal of our experimental evaluation is to demonstrate the effectiveness of the proposed algorithm. We demonstrate the effectiveness by comparing the proposed algorithm in the case when the true backtracking env. was available, as well as when we learned the backtracking model too. We compare our methods to the state-of-the-art SAC algorithm on MuJoCo tasks in OpenAI gym (Brockman et al., 2016) and in rllab (Duan et al., 2016). We use SAC as a baseline as it notably outperforms other existing methods like DDPG, Soft-Q Learning and TD3. 
The results show that our method performs on par with SAC in simple domains like swimmer, walker, etc. They also provide evidence that the proposed method outperforms SAC in challenging high dimensional domains like humanoid and Ant (Figure 7, Main Paper).\n\n\"It is not entirely obvious to me what parametric models are used for the backtracking distributions.\"\n\n\nThe backtracking model we used for all the experiments consisted of two multi-layer perceptrons: one for the backward action predictor Q(a_t | s_t+1) and one for the backward state predictor Q(s_t | a_t, s_t+1). Both MLPs had two hidden layers of 128 units. The action predictor used hyperbolic tangent units while the inverse state predictor used ReLU units. Each network produced as output the mean and variance parameters of a Gaussian distribution. For the action predictor the output variance was fixed to 1. For the state predictor this value was learned for each dimension. We have also mentioned this in the appendix.\n\n", "The authors thank the reviewer for the positive and constructive feedback. We appreciate that the reviewer finds that our method is clearly explained.\n\n\"how does the backtracking model correspond to a forward-model? And it doesn't seem to be contradictory to me that the two can work together.\"\n\nThe reviewer raises a good point. This is indeed very useful. The Dyna algorithm uses a forward model to generate simulated experience that could be included in a model-free algorithm. This method was used to work with deep neural network policies, but performed best with models which are not neural networks (Gu et al., 2016a). Our intuition (and as we empirically show, Figure 19, Section H of Appendix) says that it might be better to generate simulated experience from a backtracking model (starting from a high value state) as compared to a forward model, just because we know that traces from the backtracking model are good traces, as they lead to a high value state, which is not necessarily the case for the simulated experience from a forward model.\n\nWe have added Figure 16 in Appendix (Section G) where we evaluate the forward model with on-policy TRPO on Ant and Humanoid Mujoco tasks. We were not able to get any better results with the forward model as compared to the baseline TRPO, which is consistent with the findings from (Gu et al., 2016a).\n\nIn essence, building the backward model is neither necessarily harder nor easier. Realistically, building any kind of model and having it be accurate for more than, say, 10 time steps is pretty hard. But if we only have 10 time steps of accurate transitions, it is probably better to take them with the backward model from different states than with the forward model from the same initial state (as corroborated by the findings in Fig 16 of Appendix G, and Figure 19 of Appendix H). \n\nSomething which remains as a part of future investigation is to train the forward model and backtracking model jointly. As the backtracking model is tied to high value states, the forward model could extract the intended goal value from the high value state. When trained jointly, this should help the forward model learn some reduced representation of the state that is necessary to evaluate the reward. Ultimately, when planning, we want the model to predict the goal accurately, which helps to optimize for this “goal-oriented” behaviour directly. This also avoids the need to model irrelevant aspects of the environment. 
We also mention this in Appendix (Section G).\n\n\n[1] (Gu et al., 2016) Continuous Deep Q-Learning with Model-based Acceleration http://proceedings.mlr.press/v48/gu16.html\n", "\"Would it still work if to train the backtracking model offline by, say, watching demonstration?\"\n\nAgain, the reviewer raises a good point. Yes, it is possible to train the backtracking model offline by watching demonstrations, and hence the proposed method can also be used for imitation learning. In order to show something like this, we conducted the following experiment. We trained an expert policy on the Mujoco domain (Ant) using TRPO. Using the trained policy, we sample expert trajectories, and using these trajectories we learned the backtracking model in an offline mode. Now, we trained another policy from scratch, but at the same time we sample the traces from the backtracking model. This method is about 2.5x more sample efficient as compared to PPO, with the same asymptotic performance. We have not done any hyperparameter search right now, and hence it should be possible to improve these results.\n\nWe conducted additional experiments for the Atari domain (Seaquest) too. For Atari we trained an expert policy using A2C. Then, using samples from the expert policy, we learned a backtracking model, and we then used this backtracking model for learning a new policy from scratch. This method is about 1.8x more sample efficient as compared to A2C, with the same asymptotic performance. These results are very preliminary but they show that it may be possible to train the backtracking model in offline mode, and use it for learning a new policy from scratch. \n\nPlease let us know if anything is unclear here, if you’re uncertain about part of the argument, or if there is any other comparison that would be helpful in clarifying things more. \n", "We thank the reviewer for the positive and constructive feedback.\n\n\"I would like to see experiments to show the computational time for these components.\"\n\nIf a backtracking model is available (like in the maze example), then there is no extra computation time, but in the case where we have to learn a bw model, learning a bw model requires more updates compared to only learning a policy (but a similar number of updates as compared to learning a forward model, i.e., a dynamics model of the environment).\n\nPlease let us know if anything is unclear here, or if there is any other comparison that would be helpful in clarifying things more. ", "This paper nicely proposes a back-tracking model that predicts the trajectories that may lead to high-value states. The proposed approach was shown to be effective in improving sample efficiency for a number of environments and tasks.\n\nThis paper looks solid to me, well-written motivation with theoretical interpretations, although I am not an expert in RL.\n\nComments / questions:\n- how does the backtracking model correspond to a forward-model? And it doesn't seem to be contradictory to me that the two can work together.\n- could the authors give a bit more explanation on why the backtracking model and the policy are trained jointly? Would it still work if to train the backtracking model offline by, say, watching demonstration?\n\nOverall this looks like a nice paper. ", "The authors propose a bidirectional model for learning a policy. In particular, a backtracking model was proposed to start from a high-value state and sample back the sequence of actions and states that could lead to the current high-value state. 
These traces can be used later for learning a good policy. The experiments show the effectiveness of the model in terms of increasing the expected rewards in different tasks. However, learning the backtracking model would add some computational effort to the entire learning phase. I would like to see experiments to show the computational time for these components. \n" ]
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2019_HygsfnR9Ym", "iclr_2019_HygsfnR9Ym", "HyxOprf0n7", "rye0EHNcpm", "iclr_2019_HygsfnR9Ym", "BkgkxrE567", "BkxisNV5Tm", "HyxOprf0n7", "B1lWBJF93Q", "SkxjYXEq6Q", "r1lsbRy5hm", "iclr_2019_HygsfnR9Ym", "iclr_2019_HygsfnR9Ym" ]
iclr_2019_Hygxb2CqKm
Stable Recurrent Models
Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks. In this work, we conduct a thorough investigation of stable recurrent models. Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent. Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks. Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime. Moreover, our results help to explain why in many cases practitioners succeed in replacing recurrent models by feed-forward models.
accepted-poster-papers
The paper presents both theoretical analysis (based upon lambda-stability) and experimental evidence on the stability of recurrent neural networks. The results are convincing, but they concern a restricted definition of stability. Even with this restriction, acceptance is recommended.
test
[ "Ske9haIsRX", "ByeBXzI5AQ", "r1xhUKOThQ", "r1gN66Jqam", "rkgdKTUw6X", "BJeX698vpQ", "SkgZ4aMPTm", "SJxBPyJHam", "SJeu1Uh7aQ", "rkeKirnQp7", "HJe5FH37pX", "HJeGRbFZT7", "rylX3rB_27" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the clarification and fixing the notations in Theorem 1. I think the discussion of unitary RNN models makes the paper more well-rounded. I hope this work will inspire more research in this direction in the future and help us understand the dynamics of recurrent networks. I would like to keep my rating.", "I don't have much to add to the thorough discussion below. I was already in the \"accept\" camp, and I remain there. I will confer with the other reviewers and consider a revised score.", "+ An interesting problem to study on the stability of RNNs\n+ Investigation of spectral normalization to sequential predictions is worthwhile, especially Figure 2\n+ Some theoretical justification of SGD for learning dynamic systems following Hardt et al. (2016b).\n\n- The take-home message of the paper is not clear. First, it defines a notion of stability based on Lipchitz-continuity and proves SGD can learn it. Then the experiments show such a definition is actually not correct, but rather a data-dependent one. \n- The theory only looks at the instantaneous dynamics from time t to t+1, without unrolling the RNNs over time. Then it is not much different from analyzing feed-forward networks. The theorem on SGD is remotely related to the contribution of the paper. \n- The spectral normalization technique that is actually used in experiments is not new", "Thank you for your response. We have updated the paper to reflect our discussion. In particular, \n- we make clear the sufficient stability conditions are only new in the case of the LSTM and appropriately cite Jin et al. for the 1-layer RNN\n- we added a discussion around the relationship between stability and data-dependent stability\n- we clarify our notion of \"equivalence\" is only in terms of the context required to make predictions and not, e.g., in terms of number of parameters or some other measure, and added further discussion of this distinction to Section 5.\n\nWe're happy to address any additional concerns with the current presentation.", "Appreciate your response. I am willing to upgrade the rating if the authors can tone down the theoretical claims.", "Thank you for your prompt response. We address these concerns in turn.\n\nGap between stability conditions: \nThe data-dependent condition is a strict relaxation of the Lipschitz condition. Two additional comments are in order.\n1) Stability is a clarifying concept. The Lipschitz condition is clean and allows us to understand the core phenomena associated with stability. The data-dependent definition is a useful diagnostic-- when our sufficient (Lipschitz) stability conditions fail to hold, the data-dependent condition addresses whether the model is still operating in the stable regime. \n2) In many cases, we can still prove results with the data-dependent guarantee.\n--If the input representation is fixed, then all of the proofs go through with the data-dependent condition. If S is the set of inputs from the data distribution, we can simply replace all instances of “for all x” with “for all x in S”. This is the case with polyphonic music modeling. \n--When the input is not fixed (e.g. word vectors that are updated during training), the proofs go through provided S is interpreted as “all word vectors generated during training.”\n\nIn section 2.2, the subscript t is dropped because the Lipschitz definition of stability (eq 2) must hold for all x. 
\n\nTheoretical contribution: \nOur main theoretical contribution is feed-forward approximation of stable recurrent models, especially Proposition 3 and Theorem 1. The results in section 2.2 give concrete examples of our general stability definition. For a 1-layer RNN, the cited paper [1] gives similar stability conditions. However, [1] does not touch on the question of feed-forward approximation, particularly approximation during training, nor does it mention LSTMs. We will add the appropriate citation, but note the RNN stability conditions are a routine one-line calculation and far from our main technical contribution. \n\nEquilibrium states: \nWe only claim equivalence between *stable* RNNs and feed-forward networks. In stable RNNs, all trajectories converge to an equilibrium state. Certainly, general (unstable) RNNs cannot be approximated with feed-forward networks. Understanding to what extent models trained in practice are stable or can be made stable is then an empirical question, and we address this question in Section 4. \n\nImplementing truncated models as feed-forward networks increases the number of weights by a factor of $k$. This increase is an artifact of our analysis, and it is an interesting open question to find more parsimonious approximations. From a memory perspective, a feed-forward network with more weights is still a feed-forward network, and our result establishes stable recurrent models cannot have more memory than feed-forward models.", "- there is a gap between 'Lipschitz' and 'data-dependent' stability. why is that? In the proof of Section 2.2, in order to satisfy the contractive mapping condition, input data x does not have subscript t, can you justify?\n\n- the global stability property for one-layer RNN based on the Lipschitz condition of the activation function is a known result (e.g.[1]). what is the new contribution here?\n\nJin, Liang, Peter N. Nikiforuk, and Madan M. Gupta. \"Absolute stability conditions for discrete-time recurrent neural networks.\" IEEE Transactions on Neural Networks 5.6 (1994): 954-964.\n\n- The equivalence between RNN and feedforward networks is at the equilibrium state. But how about non-equilibrium states? and the number of weights? It is misleading to claim the two to be equivalent.\n\n", "Thank you for the prompt and thoughtful response. I wanted to let you know that I have read it (and your other responses) and am thinking about follow-up questions. Expect me to reply by mid-next week.", "Thank you for your detailed comments and feedback. We have incorporated some of these suggestions into a revision of the paper. We discuss your concerns below.\n\nMotivation of stable models: \nThere are two reasons to consider stability in recurrent models: \n1) Stability is natural criterion for learnability in recurrent models. Outside the stable regime, learning recurrent models requires a delicate mix of heuristics. Studying stable models addresses whether this collection of tricks is actually necessary, and our results suggest a better-behaved model class can solve many of the same problems. \n\n2) Understanding whether models trained in practice are in the stable regime helps answer when recurrent models are truly necessary. As the reviewer noted, whether the stable model is “desirable” depends on experimentation. However, when a stable model achieves similar performance with an unstable model, the conclusion is a feed-forward network suffices to solve the task. 
We demonstrate sequence learning happens in the stable regime, and this helps explain the widespread success of feed-forward models on sequence problems.\n\n\nVanishing Gradients: \nStable recurrent models always have vanishing gradients, and vanishing gradients are an important part of proving our approximation results. However, vanishing gradients are not unique to stable models. In the updated version of the paper, we show unstable language models also exhibit vanishing gradients. This corroborates the evidence in section 4.3 showing these models operate in the stable regime.\n\nThe cited unitary RNN models may help reduce vanishing gradients. Even in these works, there is still gradient decay over time (e.g. Figure 4, ii in [1]), but the rate of decay is slower. The updated version of the paper includes a brief discussion of these works. At minimum, these models have not yet seen widespread use, and our work demonstrates models frequently trained in practice are either stable or can be made stable without performance loss.\n\nEmpirical study of the difference between recurrent and truncated models: \nIn the revision, we added experiments studying truncation in the unstable models and also show unstable models satisfy a qualitative version of Theorem 1. All of the models considered, including the LSTM language models, exhibit sharply diminishing returns to larger values of the truncation parameter. As predicted by theorem 1, the difference between the truncated and full recurrent matrix during training becomes small for moderate values of the truncation parameter.\n\nComparison between stable and unstable models: \nWe disagree with the interpretation of Table 1. Except for the LSTM language models, the variation in performance between stable and unstable models is within standard-error. We do not retune the hyperparameters when imposing stability, and the near equivalence of the results is evidence the unstable models do not offer a large performance boost. For the LSTM language models, in section 4.3 and 4.4, we argue the unstable LSTM language models are close to the stable regime, and the gap between stable and unstable models is an artifact of the particular way we impose stability. \n", "Thank you for your comments and feedback. We address each of your concerns below.\n\nTake-home message: \nThe message of the paper is that sequence learning happens, or can be made to happen, in the stable regime. The Lipschitz definition of stability (eq. 2) and the “data-dependent” definition introduced in the experiments are complementary. The data-dependent definition is just a relaxation of the Lipschitz criteria-- we only require equation 2 to hold for inputs from the data-distribution. For the proofs and the majority of the experiments, the strict Lipschitz condition suffices. Most models can be made stable in the sense of equation 2 without performance loss. For LSTMs on language modeling, the data-dependent version illustrates even the nominally unstable LSTMs are close to the stable regime-- a truly unstable model would not satisfy even this weaker definition. We view results with both definitions as evidence recurrent models trained in practice operate in the stable regime.\n\nInstantaneous dynamics: \nThe theory in our paper does consider unrolling the RNNs over time. While the stability condition is stated purely in terms of the the state-transition function from step t to step t+1, the main theoretical results (Proposition 3 and Theorem 1) specifically concern the unrolled RNN. 
In particular, our results show that the unrolled (stable) RNN can be approximated by a feed-forward network. \n\nSpectral Normalization: \nIn our experiments, our focus is more on comparing the performance of stable and unstable models and less on the particular form of normalization used to achieve stability. In the RNN case, enforcing stability via constraining the spectral norm of the recurrent matrix is fairly routine. In the LSTM case, the stability conditions given in Proposition 2 are new and allow one to experiment with stable LSTMs. The updated version of the paper includes a discussion of these other works.\n", "Thank you for your detailed comments and feedback. \n\nWe agree it is difficult to know a priori whether particular dataset will be amenable to stable models. However, stability can still be a clarifying idea in practice. Given a dataset where stable models perform comparably with unstable models, either the dataset does not require long-term memory (i.e. feed-forward approximation suffices), or the unstable models do not take advantage of it. We conjecture most recurrent models successfully trained in practice are operating in the stable regime. To further test this claim, it would be interesting to find datasets (if any) where unstable models significantly outperform stable models, or datasets where non-recurrent models aren’t competitive with their recurrent counterparts. \n\nIn the revision, we added discussion of the several recent works constraining RNN matrices. These works try to keep the model just outside the stable regime to avoid vanishing gradients and side-step exploding gradients (i.e. take lambda ~ 1). The spectral norm thresholding technique for RNNs is straightforward, whereas the stability conditions for the LSTM is new. In either case, our focus is on using these techniques to understand the consequences of imposing stability on recurrent models.\n\nIn general, answering the question of accuracy is fairly delicate. We’re able to show stable and truncated/feed-forward models have the same accuracy. Bounds relating the accuracy of an unstable model with the accuracy of an stable one almost certainly require further assumptions on the data distribution. Obtaining such accuracy bounds for neural networks has been elusive, and part of the contribution of our work is proving a connection between the performance two model classes (stable RNNs and truncated/feed-forward models) without needing to resolve these questions. \n", "This is an interesting paper that I expect will generate some interest within the ICLR community and from deep learning researchers in general. The definition of stability is both intuitive and sound and the connection to exploding gradients is perhaps the most interesting and useful part of the paper. The sufficient conditions yield practical techniques for increasing the stability of, e.g., an LSTM, by constraining the weight matrices. They also show that stable recurrent models can be approximated by models with finite historical windows, e.g., truncated RNNs. Experiments in Sec 4 suggest that stable models produced by constraining standard RNN architectures can compete with their unconstrained unstable counterparts, and often without necessitating significant changes to architecture or hyperparameters. The perhaps most interesting observations are in Sec 4.3, in which the authors claim that even fundamentally unstable models, e.g., unconstrained RNNs, often operate in a stable regime, at least when being applied to in-sample data. 
I lean toward acceptance at the moment, but I am eager to discuss with the authors and other reviewers as I am not 100% confident that I fully understood the theory.\n\nSUMMARY\n\nThis paper proposes a simple, generic definition of “stability” for recurrent, non-linear dynamical systems such as RNNs: that given two hidden states h, h’, the difference between their updated states given input x is bounded by the product between the difference between the states themselves and a small multiplier. The paper then immediately draws a connection between stability, asserting that unstable models are prone to gradient explosions during gradient descent-based training. In Sec 2.2, the paper presents sufficient conditions for basic RNNs and LSTMs to be stable. Secs 3.2 and 3.3 argue that stable recurrent models can be approximated by feedforward models during both inference and training with a finite history horizon, such as a RNN with a truncated history. Experiments in language and music modeling substantiate this claim: constrained, stable models are competitive with standard unconstrained models. Sec 4.3 sheds some light on this phenomenon, arguing that there is a weaker form of data-dependent stability and that even unstable models may operate in a stable regime for some problems, thus explaining the parity between stable and unstable models.\n\nSTRENGTHS\n\n* This paper is surprisingly engaging and easy to read.\n* The theorems are clearly stated and the proofs appear sound to me, though I will admit that I am not confident that I would catch a significant bug.\n* This paper provides a new (to me, anyway) and thought-provoking analysis of RNNs. In particular, I was especially interested in the observation that stable models can be approximated by truncated models and that there is a connection between stability and long-term dependencies. This seems consistent with the fact that for many problems, non-recurrent models (ConvNets, Transformers, etc.) are often competitive with more complex architectures.\n\nWEAKNESSES\n\n* In practice it seems as though stability may depend on not only choice of model architecture but also the data themselves. There is probably no good way to know a priori what the stability characteristics of a given data set are, making it tough to apply the ideas of this paper in practice\n* The literature review seems a bit limited and appears to ignore the growing body of work on constraining RNN weight matrices to address both exploding and vanishing gradients. For example, I am pretty confident that the singular thresholding trick for renormalizing neural net weights has been described in the literature previously.\n* Although stable and unstable models appear to be competitive in experiments, the theoretical analysis provides no insights into stability and how it relates to accuracy.", "In this paper, the authors study the stability property of recurrent neural networks. Adopting the definition of stability from the dynamical system literature, the authors present a generic definition of stable recurrent models and provide sufficient conditions of stable linear RNNs and LSTMs. The authors also study the \"feed-forward\" approximation of recurrent networks and theoretically show that the approximation works for both inference and training. Experimental studies compare the performance of stable and unstable models on various tasks.\n\nThe paper is well-written and very pleasant to read. The notations are clear and the claims are relatively easy to follow. 
The theoretical analysis in Section 3 is novel, interesting and solid. However, the reviewer has concerns about the motivation of the presented analysis and insufficient empirical results.\n\nThe stability property only eliminates the exploding gradient problem, but not the vanishing gradient problem. The reviewer suspects that a stable recurrent model always suffers from vanishing gradient. Therefore, stability might not necessarily be a desirable property. There has been a line of work that constrain the weight matrix in RNNs to be orthogonal or unitary so that the gradient won't explode, e.g. [1], [2], [3]. It seems that the orthogonal or unitary conditions are stronger than the stability condition, and are probably less prone to the vanishing gradient problem. \n\nThe vanishing gradient problem is also related to the analysis in Section 3. If a recurrent network is very stable and has vanishing gradient, then a small perturbation of the initial hidden state has little effect on later time steps. This intuitively explains why it can be well approximated by using only the last k time steps. However, the recurrent model itself might not be a desirable model. In other words, although Theorem 1 shows that $y_T$ and $y_T^k$ can be arbitrarily close, $y_T$ might not be a good prediction.\n\nThe experimental study seems weak. Again, in the RNN case, constraining the singular values of the weight matrix is not a new idea. Furthermore, the results in Table 1 seem to suggest that the stable models perform worse than unstable ones. What is the benefit in using stable models? Proposition 2 is only a sufficient condition of a stable LSTM and it seems very restrictive, as the authors point out. This might explain the worse performance of the stable LSTMs in Table 1. The reviewer was expecting more experimental results to support the claims in Section 3. For example, an empirically study of the difference between a recurrent model and a \"feed-forward\" or truncation approximation.\n\nMinor comments:\n* Lemma 1: $\\lambda$-contractive => $\\lambda$-contractive in $h$?\n* Theorem 1: $k=O(...)$ => $k=\\Omega(...)$? Intuitively, a bigger k leads to a better feed-forward approximation.\n\n[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. ICML, 2016.\n[2] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. NIPS, 2016.\n[3] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. ICML, 2017." ]
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "SJeu1Uh7aQ", "SJxBPyJHam", "iclr_2019_Hygxb2CqKm", "rkgdKTUw6X", "BJeX698vpQ", "SkgZ4aMPTm", "rkeKirnQp7", "HJe5FH37pX", "rylX3rB_27", "r1xhUKOThQ", "HJeGRbFZT7", "iclr_2019_Hygxb2CqKm", "iclr_2019_Hygxb2CqKm" ]
iclr_2019_HylTBhA5tQ
The Limitations of Adversarial Training and the Blind-Spot Attack
The adversarial training procedure proposed by Madry et al. (2018) is one of the most effective methods to defend against adversarial examples in deep neural networks (DNNs). In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks. Consequently, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” (low density regions) of the empirical distribution of training data but are still on the ground-truth data manifold. For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values. Most importantly, for large datasets with high dimensional and complex data manifolds (CIFAR, ImageNet, etc.), the existence of blind-spots in adversarial training makes defending on any valid test examples difficult due to the curse of dimensionality and the scarcity of training data. Additionally, we find that blind-spots also exist in provable defenses including (Kolter & Wong, 2018) and (Sinha et al., 2018) because these trainable robustness certificates can only be practically optimized on a limited set of training data.
accepted-poster-papers
Reviewers are in consensus and recommended acceptance after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission for the camera-ready version.
train
[ "SylEXOI9AX", "HkgrM_e9R7", "SylN6rJ5Cm", "H1gVYFVS3X", "HkeeRxhmAX", "B1gGnv0Y07", "BylEr05YAX", "rkeMtGiyCQ", "BJlit-i1Rm", "rkxhfWj1C7", "BylRCJCM6m", "Hygsieas2X", "rJe1e3jj3X" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nWe have addressed all the concerns of AnonReviewer3. During the discussion with AnonReviewer3, we found that there might be some confusions on how we generate adversarial examples from blind-spot images, and how we calculate the $\\ell_p$ distortions for adversarial examples. Thus we slightly revise Section 3.3 and 4.4 to make things clear. We hope this will make our paper easier to follow.\n\nAgain we thank all the reviewers for the encouraging and constructive comments!\n\nThanks,\nPaper1584 Authors\n", "We really appreciate the reviewer's fruitful suggestions, and we see where the confusion is. \n\nIn Sec. 3.3, blind-spot attack uses scaling and shifting to generate new natural reference images x' = \\alpha * x + \\beta. We still apply C\\&W L_inf attacks on x’ to generate adversarial images x'_adv for all \\alpha and \\beta. We will revise our paper to make this clearer.\n\nThank you again for your comments and we will make the writing better.\n\nThank you!\nPaper 1584 Authors", "Thanks for the clarification. \n\n\"We use alpha and beta to obtain new natural reference images instead of adversarial images.\"\n\nThis is a key point which makes reviewer confusing, since in Sec. 3.3, blind-spot attacks seem to generate adversarial images only using scaling and shifting. However, in experiments x’ = \\alpha * x + \\beta is used to generate natural reference image. I am Okay with that only if the experiment is consistent, e.g., applying C\\&W attack on x’ = \\alpha * x + \\beta for all \\alpha and \\beta discussed in this paper. \n\nPlease carefully revise Sec. 3.3. and experiment section to make the aforementioned point clearer. \n\nBased on the authors's current response, I increase my score to 6.\n", "In this paper, the authors associated with the generalization gap of robust adversarial training with the distance between the test point and the manifold of training data. A so-called 'blind-spot attack' is proposed to show the weakness of robust adversarial training. Although the paper contains interesting ideas and empirical results, I have several concerns about the current version. \n\na) In the paper, the authors mentioned that \"This simple metric is non-parametric and we found that the results are not sensitive to the selection of k\". Can authors provide more details, e.g., empirical results, about it? What is its rationale?\n\nb) In the paper, \"We find that these blind-spots are prevalent and can be easily found without resorting to complex\ngenerative models like in Song et al. (2018). For the MNIST dataset which Madry et al. (2018) demonstrate the strongest defense results so far, we propose a simple transformation to find the blind-spots in this model.\" Can authors provide empirical comparison between blind-spot attacks and the work by Song et al. (2018), e.g., attack success rate & distortion? \n\nc) The linear transformation x^\\prime = \\alpha x + \\beta yields a blind-spot attack which can defeat robust adversarial training. However, given the linear transformation, one can further modify the inner maximization (adv. example generation) in robust training framework so that the $\\ell_infty$ attack satisfies max_{\\alpha, \\beta} f(\\alpha x + \\beta) subject to \\| \\alpha x + \\beta \\|\\leq \\epsilon. In this case, robust training framework can defend blind-spot attacks, right? 
I agree with the authors that the generalization error is due to the mismatch between training data and test data distribution, however, I am not convinced that blind-spot attacks are effective enough to robust training. \n\nd) \"Because we scale the image by a factor of \\alpha, we also set a stricter criterion of success, ..., perturbation must be less\nthan \\alpha \\epsilon to be counted as a successful attack.\" I did not get the point. Even if you have a scaling factor in x^\\prime = \\alpha x + \\beta, the universal perturbation rule should still be | x - x^\\prime |_\\infty \\leq \\epsilon. The metric the authors used would result in a higher attack success rate, right? \n", "Dear AnonReviewer3,\n\nThank you again for your insightful and constructive comment!\n\nWe hope that we have addressed your questions. We understand you may be discussing our paper with other reviewers and you can take your time. As the revision period is closing soon, we will really appreciate it if you could let us know if you find anything unclear in our response, or have any further concerns about our paper. We will try our best to revise our paper based on your suggestions before the revision period ends.\n\nThank you!\nPaper 1584 Authors\n", "Dear AnonReviewer3,\n\nThank you for your response and further questions. We would like to answer them as below:\n\n“I assumed that the distortion condition will be examined as $| \\alpha x + \\beta |_infty \\leq \\eps$, right?”\nNo, this is not how we examine the Linf distortion success condition in Table 2.\n\nWe use alpha and beta to obtain new natural reference images instead of adversarial images. For example, for an original image x from the test set, we scale and shift this image to obtain a new natural reference image x’ = \\alpha * x + \\beta. Then we run C&W attack on x’ to obtain its adversarial image x’_adv. Note that x’ = \\alpha * x + \\beta is not considered as an adversarial image but as a natural image since in the blind-spot attack we are finding the blind-spots (where the model do not have good robustness) in the natural data distribution.\n\nThe distortion condition is examined as the distance between x’ and x’_adv: $|x’ - x’_adv|_\\infty \\leq \\eps$, but not $| \\alpha x + \\beta |_infty \\leq \\eps$. We will try to make this clearer in our revision.\n\n“In the last column of Table 2, alpha = 0.7 & beta = 0.15, I wonder why ASRs under thr = 0.3 and thr = 0.21 are the same.”\nThe reason is that most adversarial examples generated from blind-spot images with alpha=0.7 and beta=0.15 have small distortions, less than both 0.3 and 0.21. So they are considered successful in both criteria. \n\n“it quite surprising that ASRs for the two cases (alpha = 0.7, beta = 0, thr = 0.21) and (alpha = 0.7, beta = 0.15, thr = 0.21) have a large gap. Any rationale behind that?”\nThe ASR for the case with non-zero beta is much higher than beta=0 case indicates that scaling+shifting is more effective than scaling alone to reduce the robustness of the model under attack. Scaling+shifting is a more powerful blind-spot attack.\n\nWe are glad to discuss further with you if you have any additional questions. Thanks again for the constructive feedback!\n\nThank you!\nPaper 1584 Authors\n", "\"We want to emphasize that the “blind-spot attack” is a class of attacks, which exploits the gap between training and test data distributions (see our definition in Section 3.3). The linear transformation used in our paper is one of the simplest attacks in this class. 
If we know the details of this specific attack before training, it is possible defend against this specific simple attack.\"\n\nOk, I agree with the authors at this point. \n\n\"The stricter criterion actually makes our attack success rates *lower* rather than higher. Finding adversarial examples with smaller distortions is harder than finding adversarial examples with large distortions. As an extreme case, if the criterion is distortion<=0, the attack success rate will always be zero, since we cannot fool the model using unmodified natural images. In Table 2, the success rates under the column 0.27 are strictly lower than the numbers under the column 0.3. We consider this additional stricter criterion because images after scaling are within a smaller range, so we also restrict the noise to be smaller, to keep the same signal-to-noise ratio and make an absolutely fair comparison. If we don’t use this stricter criterion, our attack success rates will look even better.\n\"\n\nYes, the authors are correct that finding adversarial examples with smaller distortions is harder than finding adversarial examples with large distortions, thus $\\alpha \\epsilon$ will make attack success rate (ASR) LOWER. Based on that, I checked Table 2, which is still unclear to me. \n\nIn the last column of Table 2, alpha = 0.7 & beta = 0.15, I wonder why ASRs under thr = 0.3 and thr = 0.21 are the same. Since an attack is considered as successful if its Linf distortion is less than given thrs, I assumed that the distortion condition will be examined as $| \\alpha x + \\beta - x |_infty \\leq \\eps$, right? If so, it quite surprising that ASRs for the two cases (alpha = 0.7, beta = 0, thr = 0.21) and (alpha = 0.7, beta = 0.15, thr = 0.21) have a large gap. Any rationale behind that?\n\n\nI will adjust my score based on the authors' further clarification.", "During the rebuttal period, we further enhanced our experiments by conducting blind-spot attacks on two certified, state-of-the-art adversarial training methods, including (Wong & Kolter 2018) and (Singha et al. 2018). Surprisingly, although they can provably increase robustness on the training set, they still suffer from blind-spot attacks by slightly transforming the test set images. See Tables 4, and 5 in the Appendix. The attack success rates go significantly higher after a slight scale and shift on both MNIST and Fashion MNIST test sets, for both two defense models.\n\nAdditionally, we also add results for a relatively larger dataset, GTS (german traffic sign) in Appendix (Section 6.2). The results (in histograms) we observed are similar to the ones we observed on CIFAR.\n\nWith these new results, our conclusion is not limited to the adversarial training method proposed by (Madry et al. 2018). Our paper uncovers the weakness of many state-of-the-art adversarial training methods, even including those with theoretical guarantees on the training dataset. By identifying a new class of adversarial attacks, even in its simplest form (small shift + scale), many good defense methods become vulnerable again. \n\nIn conclusion, we show that many state-of-the-art strong adversarial defense methods, even including those with robustness certificates on training datasets, cannot well generalize their robustness on unseen test data from a very slightly changed domain. This partially explains the difficulty in applying adversarial training on larger datasets like CIFAR and ImageNet. We believe that our results are significant. 
We also think these experiments are important to further understanding adversarial examples and proposing better defenses.\n", "Thank you for the encouraging comments. First of all, we would like to mention that we add more experiments on two additional state-of-the-art strong and certified defense methods, and observe that they are also vulnerable to blind-spot attacks. Please see our reply to all reviewers.\n\nWe agree that the K-L based method is complicated and computationally extensive. Fortunately, we only need to compute it once per dataset. To the best of our knowledge, currently, there is no perfect metric to measure the distance between a training set and a test set. Ordinary statistical methods (like kernel two-sample tests) do not work well due to the high dimensionality and the complex nature of image data. So the measurement we proposed is a best-effort attempt that can hopefully give us some insights into this problem. \n\nAs suggested by the reviewer, we added a new metric based on the mean of \\ell_2 distance on the histogram in Section 4.3. The results are shown in Table 1 (under column “Avg. normalized l2 Distance”). The results align well with our conclusion: the dataset with significant better attack success rates has noticeably larger distance. It further supports the conclusion of our paper and indicates that our conclusion is distance metric agnostic.\n\nWe hope that we have made everything clear, and we again appreciate your comments. Let us know if you have any additional questions.\n\nThank you!\nPaper 1584 Authors\n\n", "Thank you for your insightful comments to help us improve our paper. First of all, we would like to mention that we add more experiments on two additional state-of-the-art strong and certified defense methods, and observe that they are also vulnerable to our proposed attacks. Please see our reply to all reviewers.\n\nHere are our responses to your concerns in “Cons” and “Minor comments”.\n\nAlthough we were not able to provide theoretical analysis in this paper, our proposed attacks are very effective on state-of-the-art adversarial training methods, and we believe our conclusions\nCurrently, there is relatively few theoretical analysis in this field in general, and many analysis makes unpractical assumptions. We believe our results can inspire other researcher’s theoretical research.\n\nRegarding the “blind-spot attack” phrase, we are open to suggestions from the reviewers. Other phrases we considered including “evasion attack”, “generalization gap attack” and “scaling attack”. Which one do you think is a better option?\n\nRegarding the distances in Figure 3:\nThanks for raising this concern. We have added a note to clarify this issue. The difference in distance can be partially explained by the sparsity in an adversarially trained model. As suggested in [1], the adversarially trained model by Madry et al. tends to find sparse features (see Figure 5 in [1]), where many components are zero. Thus, the distances tend to be overall smaller.\n\nRegarding the results in Table 1:\nIn our old version, we only used the adversarially trained network. In our revision, we added K-L divergence computed from both adversarially trained and naturally trained networks. Additionally, we also add a new distance metric proposed by AnonReviewer1. The K-L divergences by both networks, as well as the newly added distance metric, show similar observations.\n\nRegarding adding more visualizations:\nWe added some more visualizations in Fig 10 in the appendix. 
It is worth noting that the Linf distortion metric used in adversarial training is sometimes not a good metric to reflect visual differences. However, the test images under our proposed attack indeed have much smaller Linf distortions.\n\nWe hope that we have answered all your questions, and we are glad to discuss with you if you have any further concerns about our paper.\n\n[1] Tsipras, Dimitris, et al. \"Robustness may be at odds with accuracy.\" arXiv preprint arXiv:1805.12152 (2018).\n\nThank you!\nPaper 1584 Authors", "Dear AnonReviewer3,\n\nThank you for your insightful questions. They are very helpful for us to improve the paper. We would like to answer your 4 questions as below.\n\na) We added more figures with k=10, 100, 1000 in the appendix (in main text, we used k=5). Our main conclusion does not change regardless the value of k: there is a strong correlation between attack success rate and the distance between test examples to training dataset. A larger distance usually implies a higher attack success rate. The rational to use this metric is that it is simple, and nearest neighbour based methods are usually robust to hyper-parameter selection. We don’t want our observations depend on hyper-parameters during distance measurement.\n\nb) Song et al. (2018) does not have ordinary metrics like distortion or (ordinary) attack success rates to compare with. In their attack, the input is a random noise for GAN, and they generate adversarial images from scratch. In typical adversarial attacks, people start from a specific reference (natural) image x and add adversarial distortion to obtain x_adv. In their paper, adversarial images are generated by GANs directly and there is no reference images at all, so distortion cannot be calculated (see definitions 1 and 2 in their paper). They have to conduct user study to determine what is the true class label for a generated image, and see if the model will misclassify it. The success rate is the model’s misclassification rate from user study.\n\nIn our paper, our attacks first conduct slight transformations on a natural test image x to obtain x’, and then run ordinary gradient based adversarial attacks on x’ to obtain x’_adv. We have a reference image x’, so we can compute the distortion between x’ and x’_adv, and determine the success by a certain criterion on distortion. This setting is different from Song et al. (2018) so we cannot directly compare distortion and success rates with them.\n\nc) We want to emphasize that the “blind-spot attack” is a class of attacks, which exploits the gap between training and test data distributions (see our definition in Section 3.3). The linear transformation used in our paper is one of the simplest attacks in this class. If we know the details of this specific attack before training, it is possible defend against this specific simple attack. However, it is always possible to find some different blind-spot attacks (for example, by using a generative model). Rather than starting a new arm race between attacks and defenses, our argument here is to show the fundamental limitations of adversarial training -- it is hard to cover all the blind-spots during training time because it is impossible to eliminate the gap between training and test data especially when data dimension is high. \n\nd) The stricter criterion actually makes our attack success rates *lower* rather than higher. Finding adversarial examples with smaller distortions is harder than finding adversarial examples with large distortions. 
As an extreme case, if the criterion is distortion<=0, the attack success rate will always be zero, since we cannot fool the model using unmodified natural images. In Table 2, the success rates under the column 0.27 are strictly lower than the numbers under the column 0.3. We consider this additional stricter criterion because images after scaling are within a smaller range, so we also restrict the noise to be smaller, to keep the same signal-to-noise ratio and make an absolutely fair comparison. If we don’t use this stricter criterion, our attack success rates will look even better.\n\n\nIn our updated revision, we also include additional experiments on the GTS dataset, as well as two other state-of-the-art adversarial training methods by Wong et al. and Sinha et al. We observe very similar results on all these methods and datasets, further confirming the conclusion of our paper.\n\nWe hope our answers resolve all the doubts you had about our paper. We would like to discuss further if anything remains unclear or you have additional questions, and we hope you can reconsider the rating of our paper. \n\nThank you!\nPaper 1584 Authors\n", "This paper provides some insights on the influence of data distribution on the robustness of adversarial training. The paper demonstrates through a number of analyses that the distance between the training and test data sets plays an important role in the effectiveness of adversarial training. To show the latter, the paper proposes an approach to measure the distance between the two data sets using a combination of nonlinear projection (e.g. t-SNE), KDE, and K-L divergence. The paper also shows that under a simple transformation of the test dataset (e.g. scaling), the performance of adversarial training degrades significantly due to the large gap between the training and test data sets. This tends to impact high-dimensional data sets more than low-dimensional data sets since it is much harder to cover the whole ground-truth data distribution in the training dataset.\n\nPros:\n- Provides insights on why adversarial training is less effective on some datasets.\n- Proposes a metric that seems to strongly correlate with the effectiveness of adversarial training.\n\nCons:\n- Lack of theoretical analysis. It would have been nice if the authors could show the observed phenomenon analytically on some simple distribution.\n- The marketing phrase \"the blind-spot attack\" falls short in delivering what one may expect from the paper after reading it. The paper would read much better if the authors described the phenomenon based on the gap between the two distributions rather than using blind-spot. For some datasets, this goes beyond a spot; it could actually be a huge portion of the input space!\n\nMinor comments:\n- I believe one should not compare the distances shown between the left and right columns of Figure 3 as they are obtained from two different models. Though the paper is not suggesting that, it would help to clarify it in the paper. Furthermore, it would help if the paper elaborated on why the distance between the test and training datasets is smaller in an adversarially trained network compared to a naturally trained network.\n- Are the results in Table 1 for an adversarially trained network or a naturally trained network? 
Either way, it could also be interesting to see the average K-L divergence between an adversarially and a naturally trained network on the same dataset.\n- Please provide more visualizations similar to those shown in Fig 4.\n", "The paper is well written, and the main contribution, a methodology to find “blind-spot attacks”, is well motivated, with differences to prior work stated clearly.\n\nThe empirical results presented in Figures 1 and 2 are very convincing. The gain of using a significantly more complicated approach to assess the overall distance between the test and training datasets is not clear when comparing it to the very insightful histograms. Why not, for example, use a simple score based on the histogram, or even the mean distance? Of course, providing a single measure would allow that information to be leveraged during training. However, in its current form this seems rather complicated and computationally expensive (KL-based). As stated later in the paper, the histograms themselves are not informative enough to detect such blind-spot transformations. Intuitively this makes a lot of sense given that the distance is based on the network embedding and is therefore also susceptible to this kind of data. However, it is not further discussed how the overall KL-based data similarity measure would help in this case, since it seems likely that it would also exhibit the same issue.\n" ]
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2019_HylTBhA5tQ", "SylN6rJ5Cm", "B1gGnv0Y07", "iclr_2019_HylTBhA5tQ", "BylRCJCM6m", "BylEr05YAX", "HkeeRxhmAX", "iclr_2019_HylTBhA5tQ", "rJe1e3jj3X", "Hygsieas2X", "H1gVYFVS3X", "iclr_2019_HylTBhA5tQ", "iclr_2019_HylTBhA5tQ" ]
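The thread above repeatedly refers to a nearest-neighbour, histogram-based summary of the gap between test and training embeddings (k = 5 in the main text, with an average normalized l2 distance added alongside the KDE/K-L measure). Below is a minimal Python sketch of that kind of summary; the function name, the normalization by the mean training-embedding norm, and the random example data are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def avg_knn_distance(test_emb, train_emb, k=5):
    # Pairwise l2 distances between test and training embeddings, shape (n_test, n_train).
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    # Mean distance from each test point to its k nearest training neighbours;
    # the histogram of these per-point values is what the per-dataset plots show.
    per_point = np.sort(d, axis=1)[:, :k].mean(axis=1)
    # One scalar summary: the test-set average, normalized by the mean training-embedding norm.
    return per_point.mean() / np.linalg.norm(train_emb, axis=1).mean()

# Hypothetical usage: a shifted test set should report a larger distance than an in-distribution one.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))
test_in = rng.normal(size=(200, 64))
test_shifted = rng.normal(loc=0.5, size=(200, 64))
print(avg_knn_distance(test_in, train), avg_knn_distance(test_shifted, train))
```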
iclr_2019_HylTXn0qYX
Efficiently testing local optimality and escaping saddles for ReLU networks
We provide a theoretical algorithm for checking local optimality and escaping saddles at nondifferentiable points of empirical risks of two-layer ReLU networks. Our algorithm receives any parameter value and returns: local minimum, second-order stationary point, or a strict descent direction. The presence of M data points on the nondifferentiability of the ReLU divides the parameter space into at most 2^M regions, which makes analysis difficult. By exploiting polyhedral geometry, we reduce the total computation down to one convex quadratic program (QP) for each hidden node, O(M) (in)equality tests, and one (or a few) nonconvex QP. For the last QP, we show that our specific problem can be solved efficiently, in spite of nonconvexity. In the benign case, we solve one equality constrained QP, and we prove that projected gradient descent solves it exponentially fast. In the bad case, we have to solve a few more inequality constrained QPs, but we prove that the time complexity is exponential only in the number of inequality constraints. Our experiments show that either the benign case or the bad case with very few inequality constraints occurs, implying that our algorithm is efficient in most cases.
accepted-poster-papers
This paper proposes a new method for verifying whether a given point of a two-layer ReLU network is a local minimum or a second-order stationary point, and checks for descent directions. All reviewers agree that the algorithm is based on a number of new techniques involving both convex and non-convex QPs, and is novel. The method proposed in the paper has significant limitations, as it is not robust to approximate stationary points. Given these limitations, there is a disagreement between reviewers about the significance of the result. While I share the same concerns as R4, I agree with R3 and believe that the new ideas in the paper will inspire future work to extend the proposed method towards addressing these limitations. Hence I suggest acceptance.
train
[ "BJx4YOG6C7", "rylPNDchRX", "H1giRmc30Q", "HJlqYuFqAX", "rkl6k5D7am", "SklaadxYRm", "H1xBzsAQAQ", "rJxE090Q0X", "SyxnsqC7RQ", "SyxXdq07Rm", "rJl-Gq0mAQ", "rkeveQE02m", "SklEPann2X", "SkxeZJD5nQ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Sorry for the confusion. What worries me most is not a practical implementation. \n\nFrom a theoretical point of view, the current version can only test if a point is a real SOSP. Thus this is only a qualitative result. I expect a theoretical machine learning paper in ICLR/ICML/NIPS/COLT to have at least some quantitative analysis of the error (even if it has a large polynomial or exponential dependency), i.e., a robust analysis for the problem studied in this paper. ", "Thank you for your positive feedback and support! As you pointed out, analyzing and implementing a robust version of the algorithm will require a significant amount of additional effort. We hope to tackle this in the future.", "Thank you for your response. Allow us to summarize our point again: we are providing the foundation for a practical check of local optimality that can eventually be turned into a useful algorithm in practice. Our paper and our review response express these results and contributions, acknowledging current limitations honestly and without any overclaims.\n\nIn particular, our work is a theoretical contribution, and implementing (plus analyzing) a robust version requires much more additional effort, well beyond the current paper; to be fair, no non-trivial theory is developed in one day, and the amount of work that went into laying the foundations for a future robust analysis is fairly substantial already, in our opinion.\n\nWe are disappointed by Reviewer 4’s decision to lower their rating from 5 to 3. We believe that a rating of “clear rejection,” just because the paper does not build the whole tower already, is a bit harsh, as it ignores the importance of the foundations established herein. We would like to ask the reviewers to take this aspect into account in deciding their final ratings—ultimately, because we believe that the paper is well on-topic and will spur follow-up work.", "Thanks for your response! \nI have updated my review. I encourage the author(s) to add the robust analysis and submit to the next top machine learning conference.", "Updates:\nThe author(s) acknowledged that they cannot obtain a robust analysis. Furthermore, the optimality test also requires a robust analysis. Therefore, I believe the current version is still incomplete, so I changed my score. I encourage the author(s) to add the robust analysis and submit to the next top machine learning conference.\n\n-------------------------------------------\nPaper Summary:\nThis paper gives a new algorithm to check whether a given point is a (generalized) second-order stationary point; if not, it can return a strict descent direction, even when the objective function (the empirical risk of two-layer ReLU or Leaky-ReLU networks) is not differentiable at this point.\nThe main challenge comes from the non-differentiability of ReLU. While testing a second-order stationary point is easy, because of the non-differentiability one needs to test 2^M regions in the ReLU case. This paper exploits the special structure of the two-layer ReLU network and shows that it suffices to check only the extreme rays of the polyhedral cones that are the feasible sets of these 2^M linear programs. \n\nComments:\n1. About Motivation. While checking optimality at a non-differentiable point is a mathematically interesting problem, it has little use in deep learning. In practice, SGD often easily finds a global minimum of ReLU-activated deep neural networks [1].\n2. This algorithm can only test if a point is a real SOSP. In practice, we can only hope to get an approximate SOSP. 
I expect a robust analysis, i.e., can we check whether it is a (\\epsilon,\\delta) SOSP?\n3. About writing: g(z,\\eta) and H(z,\\eta) appear in Section 1 and Section 2, and they are used to define generalized SOSP. However, their formal definitions are in Lemma 2. I suggest give the formal definitions in Section 1 or Section 2 and give more intuitions on their formulas.\n\nMinor Comments:\n1. Many typos in references, e.g., cnn -> CNN.\n2. Page 4: Big-Oh -> Big O.\n\n\n\nOverall I think this paper presents some interesting ideas but I am unsatisfied with the issues above. I am happy to see the authors’ response, and I may modify my score. \n\n\n[1] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.\n", "I read the authors' response. Given the shared concerns by all reviewers about robustness and applicability, I am not quite as positive as I was before, but I still support the paper. The authors seem well-aware of the shortcomings of the work, which would require a major new work to address. I think this work is an interesting stepping-stone, and shows original thought.", "We thank the reviewer for their efforts in reviewing our paper. We will address the concerns by the reviewer below:\n\n1. Due to multiple reviewers raising a similar point, we addressed this issue in a separate comment above. Please refer to item (2) of the comment.\n\n2. For the discussion on the robustness of the algorithm, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the comment.\n\n3. Thank you for the suggestion! Since our analysis is specialized for ReLU neural networks, it would be a good idea to place Lemma 2 in Section 2.1. We will update the paper in our next revision.\n\nAs for “minor comments”: Thank you for pointing out the typos. We will fix those issues as we revise our paper.", "We appreciate the reviewer for their time and thoughtful comments. Below, we will provide answers to the reviewer’s concerns.\n\n1) For the discussion on the precision of the algorithm, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the comment.\n\n2) It is true that the set of nondifferentiable points has measure zero. On the other hand, please note that a nondifferentiable point can have multiple boundary data points, i.e., x_i’s that satisfy [W_1 x_i + b_1]_k = 0 for some k (input to the k-th hidden node is zero for this x_i). Also, such nondifferentiable points with many boundary data points lie on the intersection of subsets of the parameter space, where each subset corresponds to one boundary data point x_i and contains the parameter values satisfying [W_1 x_i + b_1]_k = 0.\nOur experiments were run for an extended period of time with exponentially decaying step size, to get as close to the exact nondifferentiable point (potentially local minimum) as possible. And then, we counted the number of “approximate” boundary data points, i.e., x_i’s that satisfied abs( [W_1 x_i + b_1]_k ) < 1e-5 for some k. In our experimental settings, it turns out that gradient descent pushes the parameters to a point with multiple boundary data points (i.e., M is large), but there are usually very few flat extreme rays (i.e., L is small).\n\n3a) At our current stage of results, we are not claiming that our algorithm is useful in practice. 
By the experimental results, we are claiming the following: 1) given that M can be large in practice, our analysis of nondifferentiable points is meaningful, 2) L is usually very small in our experiments, so testing local minimality at nondifferentiable points can be tractable.\n\n3b) Our analysis for now is limited to one-hidden-layer networks. For deeper networks, perturbation on the first layer may affect later layers, so the extension to deep networks is beyond the scope of this paper. For now, we leave this extension as future work.\n\n3c) Due to multiple reviewers with similar concerns, we addressed this issue in a separate comment above. Please refer to item (2) of the comment.\n\n3d) Your observation is true, because other activation functions do not have nondifferentiable points. In such cases, we can directly compute the gradient and Hessian, so the second-order stationarity test is straightforward. However, ReLU is one of the most popular activation functions, and it inevitably introduces nondifferentiable points in the empirical risk, which are difficult to analyze. The goal of our paper is to shed light on a better understanding of such nonsmooth points.", "We thank the reviewer for the time and effort invested in reviewing our paper. Below, we will address the comments point by point:\n\n1) For the discussion on ideal conditions of the algorithm, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the comment.\n \n2) We agree that we do not have a good theoretical bound on L, so in the worst case we might suffer exponential running time. Due to the complex nature of the loss surface of empirical risk minimization problems, providing tight theoretical bound for L might be very difficult, so we instead provide some empirical evidence showing L is usually small. We leave the theory side as future work.\n\n3) Indeed, the computational cost of calculating exact (sub)differentials and Hessians grow proportionally with the number of data points m. It seems difficult to obtain a stochastic version unless we add assumptions on the distribution of data points. If we can develop a robust version of the algorithm as mentioned in item 1), then with some distributional assumptions on data, we expect that we can get some high probability results for a stochastic version.\nHowever, even without the stochastic version, we expect that (a numerical implementation of) our algorithm will be used only for testing local optimality almost at the end of training, not every iteration. Thus, its computational cost will not be too big.\n\nThank you very much for pointing out those typos. $[N(x_i)]_k$ is originally meant to be $[W_1 x_i + b_1]_k$. We will fix these typos in the next revision.", "Thank you very much for your feedback. We are glad that you enjoyed reading our paper. We list our answers to your comments, by their numbering:\n\n(1) Yes, we agree that there is certainly room for improvement; we will make our best efforts in revising the paper accordingly.\n\n(2) For the discussion on the robustness of the algorithm in general, we wrote a separate comment above to address common concerns raised by the reviewers. Please refer to item (1) of the general comments.\nRegarding the specific concern of testing if a directional derivative is zero, we believe that the reviewer is talking about testing the existence of flat extreme rays. 
In our experiments, to count the number of “approximate” flat extreme rays, we used our lemma A.1 that gives conditions for existence of flat extreme rays, and tested if these conditions are approximately satisfied. For more details, please refer to the end of the 2nd paragraph of Section 4.\n\n(3) The main purpose of our numerical experiments is to provide an empirical evidence of how many boundary data points (M) and flat extreme rays (L) we can have, because these quantities are difficult to estimate/bound theoretically. Our experiments show that, in our settings, there can be nonsmooth local minima with large M (implying that our analysis on nonsmooth points is meaningful) but L is usually surprisingly small.\n", "Dear reviewers,\n\nWe truly appreciate the time and effort put in reviewing our paper, and we thank you all for your thoughtful comments and suggestions.\n\nThere were some concerns common to multiple reviewers, so we address them in a separate comment here.\n\n(1) Regarding precision / robustness / ideal conditions issues\nAll reviewers raised concerns about numerical applicability of our algorithm. We would like to emphasize that this paper is a theoretical contribution seeking to understand the nondifferentiable points of the empirical risk surface of ReLU networks. As noted in the introduction, our understanding of nonsmooth points of empirical risk is limited, but in some cases, nonsmooth points can be precisely the “interesting points” that we care about. In this paper, we are theoretically/empirically showing that testing local optimality/second order stationarity at nondifferentiable points can be tractable (if the number of flat extreme rays is small), by exploiting the geometric structure of empirical risk.\nBut we fully agree (as also noted in Section 1 in the Remarks and in Section 5) that creating a numerically robust version of this algorithm that works for “close-to-nondifferential” points and approximate SOSPs will be needed before our theoretical work can attain its true practical significance---this goal requires a fairly substantial amount of effort (both theory and practice) and we hope to tackle it in the future.\n\n(2) Regarding practical usefulness of the algorithm \nReviewers 1 and 4 raised concerns whether this algorithm is really meaningful in practice, given that SGD already performs well enough without our algorithm. It is true that in practice, SGD easily achieves near-zero empirical risk most of the time. However, please note that the solutions that we obtain at the end of training are not necessarily global or even local minima, because in practice we don’t have optimality tests / certificates during training.\nIn contrast, one of the most important beneficial features of convex optimization is existence of an optimality test (e.g., norm of the gradient is smaller than a certain threshold) for termination, which gives us a certificate of optimality. One of our motivations is that deep learning may also benefit from such optimality tests. 
Our analysis and experimental results suggest that even in the nonconvex and nonsmooth case of ReLU networks, it is sometimes not too difficult to get such a certificate of local optimality (we remind the readers that in general detecting local optimality for nonconvex problems is “NP-Hard”).\nWith a proper numerical implementation of our algorithm (although we leave it for future work), one can run a first-order method until it gets stuck near a point, and run our algorithm to test for optimality/second-order stationarity. If the point is an (approximate) SOSP, we can terminate without further computation time over many epochs; if the point has a descent direction, our algorithm will return a descent direction and we can continue on optimizing. Note that the descent direction may come from the second-order information; our algorithm even allows us to escape nonsmooth second-order saddle points.\n\nWe address the remaining points individually.", "This paper proposes an efficient method to test whether a point is a local minimum in a 1-hidden-layer ReLU network. If the point is not a local minimum, the algorithm also returns a direction for descending the value of the loss function. \n\nThe tests include a first-order stationary point test (FOSP), and a second-order stationary point test (SOSP). As these test can be written as QPs, the core challenge is that if there are M boundary points in the dataset, i.e., data points on a non-differentiable region of the ReLU function, then the FOSP test requires 2^M tests of extreme rays -- each boundary partition the whole space into at least two parts. This paper observes that since the feasible sets are pointed polyhedral cones. Therefore checking only these extreme rays suffices. This results in an efficient test with only 2M tests. \n\nLastly, the paper performs experiments on synthetic data. It turns out there are surprisingly many boundary points.\n\nComments:\nThis paper proposes an interesting method of testing whether a given point is a local minimum or not in a ReLU network. The technique is non-trivial and requires some key observation to make it computationally efficient. However, I have the following concerns:\n1) such a test may need very high numeric precision. For instance, you cannot make sure whether a floating point number is strictly greater than 0 or not. The small error may critically affect the property of a point. \n2) boundary points of a ReLU network should have measure 0 (correct me if not). The finding in the experiment shows surprisingly many boundary points. This is counter-intuitive. Is it because of numeric issues? You might misclassify non-boundary points.\n3) Usefulness. \n a. The paper claims that such a test would be very useful in practice. However, they cannot even perform an experiment on real datasets. \n b. Such a method only works for one-hidden layer network. It is not clear deeper network admit similar structure. \n c. Practical training of neural-network usually trains the network using SGD, which always obtain a solution with a non-zero gradient. In this sense, there is no need for such a testing. \n d. It seems like it is much easier to perform a test with different activation function, e.g., sigmoid.\n \nIf the authors can address these concerns convincingly, I would be happy to change the rating.\n", "Summary:\nThis work proposes a theoretical algorithm for checking local optimality and escaping saddles when training two-layer ReLU networks. 
The proposed \"checking algorithm\" involves solving convex and non-convex quadratic programs (QP) which can be done in polynomial time. The paper is well organized and technically correct with detailed proofs.\n\nComments:\n1) Applicability issue: the conditions required by the proposed checking algorithm are too ideal, making it difficult to apply in practical applications. For example, the first step of the proposed algorithm is to check whether 0 belongs to the subdifferential. In practice, the iterates may get very close to a stationary point, but arriving to a stationary point might be too time-consuming and unrealistic. If the problem is smooth, then the gradient is expected to be small so that one can easily relax this first order optimality condition by allowing a small gradient. However, since here the problem is nonsmooth, in general the subgradient could be still very large even when the iterate is very close to a stationary point. Therefore, one would need to relax the ideal conditions in the proposed algorithm to make it more applicable.\n\n2) Another concern is that the efficiency of the proposed method relies too much on the empirical result that the number of flat extreme ray is small. The computational complexities for the test of the local optimality is exponentially depending on the number of flat extreme rays. Thus to guarantee a high efficiency of the proposed test algorithm and to make the main theory sound, it is important to provide a theoretical bound on this number. Without appropriate theoretical guarantees on the upper-bound of this number, it is not persuasive to claim that the proposed theoretical algorithm is of high efficiency.\n \n3) The computational complexity is proportional to the number of training data points which could be huge. Is it possible to have a stochastic version?\n\nTypos:\n1) On page 2, under Section 2, ``$h(t):=$\" should be ``$h(x):=$\"\n\n2) In section 2.1, at the end of the paragraph \"Bisection by boundary data points\": change $b_1$ by $\\delta_1$ in ``$\\Delta_1x_i+b_1$\".\n\n3) On page 4, when defining B_k, change x by x_i. \n\n4) On page 5, above Lemma 1, when defining C_k, N(x_i) is not well defined.", "The paper proposes a method to check if a given point is a stationary point or not (if not, it provides a descent direction), and then classify stationary points as either local min or second-order stationary. The method works for a specific non-differentiable loss. In the worst case, there can be exponentially many flat directions to check (2^L), but usually this is no the case.\n\nOverall, I'm impressed. The analysis seems solid, and a lot of clever ideas are used to get around issues (such as exponential number of regions, and non-convex QPs that cannot be solved by the S-procedure or simple tricks). A wide-variety of techniques are used: non-smooth analysis, recent analysis of non-convex QPs, copositive optimization.\n\nThe writing is clear and makes most arguments easy to follow.\n\nThere are some limitations:\n\n(1) the technical details are hard to follow, and most are in a lengthy appendix, which I did not check\n\n(2) there was no discussion of robustness. If I find a direction eta for which the directional derivative is zero, what do you mean by \"zero\"? This is implemented on a computer, so we don't really expect to find a directional derivative that is exactly zero. I would have liked to see some discussions with epsilons, and give me a guarantee of an epsilon-SOSP or some kind of notion. 
In the experiments, this isn't discussed (though another issue is touched on a little bit: you wanted to find real stationary points to test, but you don't have exactly stationary points, but rather can get arbitrarily close). To make this practical, I think you need a robust theory.\n\n(3) The numerical simulations mainly provided some evidence that there are usually not too many flat directions, but don't convince us that this is a useful technique on a real problem. The discussion about possible loss functions at the end was a bit opaque. Furthermore, if you can't find a dataset/loss, then why is this technique useful?\n\nThe paper is interesting and novel enough that despite the limitations, I am supportive of publishing it. It introduces new ideas that I find refreshing. The technique many not ever make it into the state-of-the-art algorithms, but I think the paper has intellectual value regardless of practical value.\n\nIn short, quality = high, clarity=high, originality=very high, and significance=hard-to-predict" ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "H1giRmc30Q", "SklaadxYRm", "HJlqYuFqAX", "H1xBzsAQAQ", "iclr_2019_HylTXn0qYX", "SyxXdq07Rm", "rkl6k5D7am", "rkeveQE02m", "SklEPann2X", "SkxeZJD5nQ", "iclr_2019_HylTXn0qYX", "iclr_2019_HylTXn0qYX", "iclr_2019_HylTXn0qYX", "iclr_2019_HylTXn0qYX" ]
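The author responses above describe counting “approximate” boundary data points, i.e. inputs x_i with abs([W_1 x_i + b_1]_k) < 1e-5 for some hidden unit k, as a way to estimate M near a candidate minimum. A small Python sketch of that count follows; the 1e-5 tolerance is the value quoted in the thread, while the variable names and shapes are assumptions for illustration.

```python
import numpy as np

def count_boundary_points(W1, b1, X, tol=1e-5):
    # First-layer pre-activations [W1 x_i + b1] for all data points, shape (m, n_hidden).
    pre_act = X @ W1.T + b1
    # A point is an approximate boundary point if some hidden unit's
    # pre-activation lies within tol of the ReLU kink at zero.
    return int((np.abs(pre_act) < tol).any(axis=1).sum())

# Hypothetical usage with made-up shapes: 100 data points, 3 inputs, 8 hidden units.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))
b1 = rng.normal(size=8)
X = rng.normal(size=(100, 3))
print(count_boundary_points(W1, b1, X))
```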
iclr_2019_HylVB3AqYm
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 10^4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (which grows linearly w.r.t. candidate set size). As a result, these methods need to utilize proxy tasks, such as training on a smaller dataset, learning with only a few blocks, or training just for a few epochs. The architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.
accepted-poster-papers
This paper integrates a bunch of existing approaches for neural architecture search, including OneShot/DARTS, BinaryConnect, REINFORCE, etc. Although the novelty of the paper may be limited, empirical performance seems impressive. The source code is not available. I think this is a borderline paper but maybe good enough for acceptance.
train
[ "rJe_QI3wlV", "rkx-525OJ4", "Hke4jK9OkE", "H1eNgOayAX", "rJe1QITpR7", "HyxWxsxaCQ", "BklS-ur9h7", "HJelAtlcRm", "rJl4Qn5KAQ", "S1x0oTiHR7", "SJlO7KpVp7", "rJxuErCmTm", "BkxbEO0XpX", "S1lC-BtXpX", "B1lXIuW-CQ", "SkeDbIW-CX", "H1x6eyDe0X", "HyxqxNT5aQ", "BygWA7OK6X", "rkle4Pwda7", "S1xBSKTN6m", "HkxRDmY7am", "HkxEMyl33m", "rJl-2uUshQ", "rJeFYtGK27" ]
[ "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "author", "public", "official_reviewer", "official_reviewer", "author" ]
[ "Dear the authors,\n\nI want to echo with the reviewers/public readers that releasing your detailed training pipeline is quite crucial given the good performances reported in the paper. Furthermore, only evaluation code/model ckpts is definitely not enough since people have various unreasonable ways to obtain a good ckpt only on the test set (I'm not meaning you are doing this and sorry for possible offense here in advance).\n\nBest ", "Thanks for your questions. Please see responses below.\n\n>>> “What was the search time on CIFAR-10 in GPU hours? For Proxyless-R and Proxyless-G?”\n\nThe search time depends on the size of the backbone architectures (e.g., number of blocks). For example, when searching with 54 blocks, it takes around 4 days on a single GPU for both Proxyless-R and Proxyless-G. When searching with fewer blocks (e.g. 8 blocks), it takes less than 1 day. \n\n>>> “Is Batch Normalization in training or evaluation mode when optimizing architecture parameters?”\n\nThe batch normalization is in the training mode.\n\n>>> “For REINFORCE, what do you use as optimization metric on validation set for architecture parameters on CIFAR-10? Normal loss, like cross entropy or actually misclassification rate?”\n\nWe use the misclassification rate. Normal loss, like cross entropy, may also be a feasible optimization metric\n\n>>> “For REINFORCE, do you use any kind of baselining? Do you use multiple architecture samples per update?”\n\nThe baseline is the moving average of previous mean metrics with a decay of 0.99. And we update every 8 samples.\n", "Apologize for the mistake. The correct one is setting \"replacement=False\". Beta2 is set to be the default value in Pytorch (i.e., 0.999). As for network parameters, we use SGD optimizer with Nesterov momentum 0.9 and cosine learning rate schedule.", "Hi Robin, \n\nThanks for your interest in our work and your detailed questions. \n\n>>> Response to \"Rescaling architecture parameters\" \nYour understanding of the gradient-based updates is correct. \nAs for sampling two paths according to the multinomial distribution, we use \"torch.multinomial()\". And by setting \"replacement=False\", the same path will not be chosen twice. \n\n>>> Response to \"Adam optimizer for architecture parameters\" \nWe also consider it would be problematic to use the adaptive gradient averages for this case where most of the paths are not chosen. So we set beta1 to be 0 in the Adam optimizer. Sampling multiple times before making an Adam update step is a nice idea. We will try it later. Thanks for your suggestion. \n", "Thank you for your helpful feedback. We have revised our paper according to your suggestion.\n\n>>> “in the new mobile phone results you have presented there is a network that actually has better latency with slightly worse accuracy, which makes it hard to compare”\n\n2.6% top-1 accuracy improvement on ImageNet is significant. To achieve the same accuracy, MobileNetV2 needs 2x latency (143ms v.s. 78ms). Please see Figure 4.\n\n>>> “It would be nice to actually have a table showing the strengths/weaknesses along these axes for all of these methods”\n\nThanks for your suggestion. We will add the table to our paper. 
\n\nModel\t Top-1\t Top-5\tLatency\tHardware-Aware\t No-Proxy\tNo-Repeating\tTime\tMemory\nMobilenetV1\t 70.6\t 89.5\t 113ms\t -\t -\t No\t -\t -\nMobilenetV2\t 72.0\t 91.0\t 75ms\t -\t -\t No\t - -\nNASNet-A\t 74.0\t 91.3\t 183ms\t No\t No No 10^4 \t 10^1\nAmoebaNet-A\t 74.5\t 92.0\t 190ms\t No\t No\t No\t 10^4 10^1\nDarts\t 73.1\t 91.0\t -\t No\t No\t No\t 10^2\t 10^2\nMnasNet\t 74.0\t 91.8\t 79ms\t Yes\t No\t No\t 10^4 \t 10^1\nProxylessNAS (mobile) 74.6\t 92.2\t 78ms\t Yes\t Yes\t Yes 10^2 \t 10^1\n\n>>> “precisely define what is novel about the method” and “emphasize exactly the empirical contribution”\n\nWe summarize our contributions as follows:\n\n> Methodologically,\na) We provided a new path-level pruning perspective for NAS.\n\nb) We proposed a gradient-based approach (Section 3.3.1) to handle non-differentiable hardware objectives (e.g. latency), making them differentiable by introducing regularization loss.\n\nc) We proposed a path-level binarization approach to address the high memory consumption issue of differentiable NAS. Notably, different from BinaryConnect that binarizes each weight, our path-level binarization approach binarizes the entire path.\n\n> Empirically,\na) We significantly reduced the cost of memory/compute for the training of large over-parameterized networks and thereby scaled to large-scale datasets (ImageNet) without proxy and repeating blocks.\n\nb) We studied specialized neural network architectures for different hardware architectures and showed its advantage, raising people’s awareness of specializing neural network architectures for hardware.\n\nc) We achieved strong empirical results on both CIFAR-10 and ImageNet. On different hardware platforms (GPU, CPU and mobile phone), our models not only significantly outperform previous state-of-the-arts, but also peer submissions.\n\nWe sincerely thank your feedback and hopefully have cleared your concerns.\n", "Thank you for your reply and detailed suggestion. We have uploaded a revision of our paper and removed the number of search space size. ", "The algorithm described in this paper is part of the one-shot family of architecture search algorithms. In practice this means training an over-parameterized architecture, of which the architectures being searched for are sub-graphs. Once this bigger network is trained it is pruned into the desired sub-graph. The algorithm is similar to DARTS in that it it has weights that determine how important the various possible nodes are, but the interpretation here is stochastic, in that the weight indicates the probability of the component being active. Two methods to train those weights are being suggested, using REINFORCE and using BinaryConnect, both having different trade offs.\n\n- (minor) *cumbersome* network seems the wrong term, maybe over-parameterized network?\n- (minor) I do not think that the size of the search space a very meaningful metric\n\nPros:\n- Good exposition\n- Interesting and fairly elegant idea\n- Good experimental results\n\nCons\n- tested on a limited amount of settings, for something that claims that helps to automate the creation of architecture. I think this is the main shortcoming, although shared by many NAS papers\n- No source code available\n\nSome typos:\n\n- Fo example, when proxy strategy -> Fo*r* example\n- normal training in following ways. -> in *the* following ways\n- we can then derive optimized compact architecture.", "Thank you for your response. 
I particularly appreciate the release of the source code, while I did not have time to dig into it, it definitely increases the trust from the reader.\n\nRegarding the limited experiments, consider it a criticism towards the sub-field in general, not to this paper in particular. It just seems a bit counter to the narrative of automatically selecting architectures if only a very limited amount of architectures are found.\n\nI do appreciate how this paper is searching a slightly more varied architecture search compared to some previous methods, but I do not think the search space absolute size (10^547) says much in this regard, it would be easy to artificially come up with large search spaces with little variety as well as small search spaces with a lot of variety. My personal opinion is that it would be better to omit the number, mist giving the impression that it has more meaning than it has, but consider it a very minor point :)\n", "\n Thanks for the detailed response. Please see comments below. \n\n> a) Our proxy-less NAS is the first NAS algorithm that directly learns architectures on the large-scale dataset (e.g. ImageNet) without any proxy. \n\nI agree but this is not a method/algorithmic contribution but an empirical one. The way you achieve this is by combining existing methods (which I listed in the original review), which allows the reduction of memory usage/computation compared to One-Shot/DART. I should emphasize that there is nothing particularly wrong with combining methods (especially across areas/fields) but just makes the empirical contribution and thoroughness of the analysis more important. However, the method/algorithmic contributions should be made clear in a precise manner, rather than making large general statements. \n\n> b) Our proxy-less NAS is the first NAS algorithm that breaks the convention of repeating blocks in neural architecture design. \n\n I am not sure this is the case. Neuroevolution methods (which you should cite more heavily) do not necessarily require this, e.g. [1]. However, I agree that within the regime of training over-parameterized networks or methods scalable. Again, please state your advantages explicitly; you seem to mention one axis/dimension at a time (e.g. scalability, no proxy, no repeating cell structure) yet your advantages are really at the combination of these. It would be nice to actually have a table showing the strengths/weaknesses along these axes for all of these methods, which would make it more clear.\n\n[1] Large-Scale Evolution of Image Classifiers, Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin, https://arxiv.org/abs/1703.01041\n\n> The new interesting design patterns, found by our method, can provide new insights for efficient neural architecture design.\n\nI agree with this and mentioned it in the review.\n\n> c) Our method builds upon methods from two communities (one-shot architecture search from NAS community and Pruning/BinaryConnect from model compression community). \n\nAgain, I agree but this means that it *is* a combination of methods (which contradicts your rebuttal title). \n\n> With latency constraints, our optimized models also achieved state-of-the-art results (3.1% higher top-1 accuracy while being 1.2x faster on GPU and 2.6% higher top-1 accuracy with similar latency on mobile phone, compared to MobileNetV2). \n> Besides, we directly optimize the latency, rather than an inaccurate proxy (i.e. FLOPs). 
\n\nI agree it's interesting to optimize for these non-differentiable objectives. However, it seems to me that given that you are optimizing directly for them, the actual gains are not that large. For example, in the new mobile phone results you have presented there is a network that actually has better latency with slightly worse accuracy, which makes it hard to compare:\n\nMobileNet V2\t\t72.0\t\t91.0\t\t75ms\nProxyless NAS (ours)\t74.6\t\t92.2\t\t78ms\n\nIn all, it would be great for the authors to precisely define what is novel about the method (if it is not a combination of existing methods, as you claim in the rebuttal title). If it is a combination of methods (which again should not necessarily be seen as a bad thing), then it would be great to emphasize exactly the empirical contribution (the largest of which seems to be the reduction of memory/compute for training of large over-parameterized networks, scaled to ImageNet-sized datasets). The optimization of a non-differentiable objective can also be a smaller contribution, but is common to RL-based methods. Again, I think this paper presents some nice results, but it is important to be precise and not make more general claims than warranted. \n", "Thanks for answering the questions so far, I also have some further questions.\n\n1. What was the search time on CIFAR-10 in GPU hours? For Proxyless-R and Proxyless-G?\n2. Is Batch Normalization in training or evaluation mode when optimizing architecture parameters?\n3. For REINFORCE, what do you use as optimization metric on validation set for architecture parameters on CIFAR-10? Normal loss, like cross entropy or actually misclassification rate?\n4. For REINFORCE, do you use any kind of baselining? Do you use multiple architecture samples per update? For example, right now I sample 10 architectures for each validation data batch and also subtract the mean metric/reward/loss before I compute the gradients.", "We sincerely thank you for your comprehensive comments and constructive advices.\n\n>>> Response to “combination of existing methods”: \nThanks for your kind advice on organizing the paper to make our contributions more clear. Here, we would like to emphasize our contributions:\n\na) Our proxy-less NAS is the first NAS algorithm that directly learns architectures on the large-scale dataset (e.g. ImageNet) without any proxy. We also solved an important problem improving the computation efficiency of NAS as we reduced the computational cost (GPU hours and GPU memory) of NAS to the same level as normal training. Moreover, the GPU memory requirement of our method keeps at O(1) complexity rather than grows linearly with the number of candidate operations O(N) [3, 4]. Therefore, our method can easily support a large candidate set while DARTS and One-Shot cannot. \t\n\nb) Our proxy-less NAS is the first NAS algorithm that breaks the convention of repeating blocks in neural architecture design. From Alexnet and VGG to ResNet and MobileNet, manually designed CNNs used to repeat blocks within the same stage. Previous NAS works keep the tradition as otherwise the searching cost will be unaffordable. Our work breaks the constraints, and we found this is actually a stereotype that needs to be corrected. \n\nThe new interesting design patterns, found by our method, can provide new insights for efficient neural architecture design. For example, people used to stack multiple 3x3 convs to replace a single large kernel conv, as this uses fewer parameters while keeping a similar receptive field. 
But we found this pattern may not be proper for designing efficient (low latency) networks: Two 3x3 depthwise separable convs actually run slower than a single 5x5 depthwise separable conv. Our GPU model, shown in Figure 4, incorporates large kernel convs and aggressively pools at early stages to shrink network depth. Then the model chooses computation-expensive operations at low-resolution stages. It also tends to choose computation-expensive operations in the first block within each stage where the feature map is downsampled. As a consequence, our GPU model can outperform previous SOTA efficient architectures in accuracy performances (e.g. 3.1% higher top-1 than MobileNetV2), while running faster than them (e.g. 1.2x faster than MobileNetV2). Such patterns cannot be found by previous NAS, as they optimize on proxy task and force blocks to share structures.\n\nc) Our method builds upon methods from two communities (one-shot architecture search from NAS community and Pruning/BinaryConnect from model compression community). It is the first time to incorporate ideas from the model compression community to the NAS community and we also provide a new path-level pruning perspective for one-shot architecture search. Moreover, we provide a unified framework for both gradient-based updates and REINFORCE-based updates. \n\nd) Our proxy-less NAS achieved very strong empirical results on two most representative benchmarks (i.e. CIFAR and ImageNet). On CIFAR-10, our optimized model reached 2.08% error rate with only 5.7M parameters, outperforming previous state-of-the-art architecture (AmeobaNet-B with 34.9M parameters). On ImageNet, we searched specialized neural network architectures for three different platforms (GPU, CPU and mobile phone). With latency constraints, our optimized models also achieved state-of-the-art results (3.1% higher top-1 accuracy while being 1.2x faster on GPU and 2.6% higher top-1 accuracy with similar latency on mobile phone, compared to MobileNetV2). \n\nBesides, we directly optimize the latency, rather than an inaccurate proxy (i.e. FLOPs). It’s an important concept that low FLOPs doesn’t translate to low latency. All our speedup numbers are reported with real measured latency. We believe both our efficient search methodology and the resulting efficient models have big industry impact. ", "We sincerely thank you for the detailed comments on our paper. We have revised the paper and fixed the typos accordingly.\n\n>>> Response to “limited amount of tested settings”: \nAs our proxy-less NAS has reduced the cost to the same level of normal training (100x more efficient on ImageNet), it is of great interest for us to apply proxy-less NAS to more settings and datasets. However, for this work, considering the resource constraints and time limits, we have strong reasons to believe that our experiment settings are sufficient:\n\na) Our experiments are conducted on two most representative benchmarks (CIFAR and ImageNet). It is in line with previous NAS papers and also makes it possible to compare our method with previous NAS methods. We also experimented with 3 different hardware platforms and observed consistent latency improvement over previous work. \n\nb) Moreover, on the challenging ImageNet classification task, we have conducted architecture search experiments under three different settings (GPU, CPU and Mobile) while previous NAS papers mainly transfer learned architectures from CIFAR-10 to ImageNet without conducting architecture search experiments on ImageNet [1, 2]. 
\n\n>>> Response to “no source code available”: \nReviewer 2 also has similar requests, based on the concern on our strong empirical results. Our pre-trained models and the evaluation code are provided in the following anonymous link: https://goo.gl/QU3GhA. Besides, we have also uploaded the video visualizing the architecture search process: https://goo.gl/VAzGJs. We plan to open source our project upon publication.\n\n>>> Response to “the size of the search space is not a very meaningful metric”: \nThis might be a misunderstanding. We do not intend to use the size of our search space as a metric for comparison; instead, it is an important reason why our accuracy is much better than previous NAS methods. Previous NAS methods forced different blocks to share the same structure and only explored a limited architecture space (e.g. 10^18 in [2] and 10^10 in [3]). Our method, breaking the constraints, allows all of the blocks to be specified and has much larger search space (i.e. 10^547).\n\n[1] Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. CVPR 2018.\n[2] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 2018.\n[3] Bender G, Kindermans PJ, Zoph B, Vasudevan V, Le Q. Understanding and simplifying one-shot architecture search. ICML 2018.", "We sincerely thanks for the detailed feedback. Our pre-trained models and the evaluation code are provided in the following anonymous link for verifying our results: https://goo.gl/QU3GhA. We have also made a video to visualize the architecture search process: https://goo.gl/VAzGJs. We would like to release the entire codebase upon publication. \n\n>>> Response to “performances are too good to be true”: \nWe consider the comment as a compliment rather than a drawback. There are several reasons for our good results:\na) Our proxy-less NAS *directly* learns on the *target* task while previous NAS methods *indirectly* learn on *proxy* tasks. For example, on CIFAR-10, DARTS [1] conducted architecture search experiments with 8 blocks due to their high memory consumption and then transferred the learned block structure to a much larger network with 20 blocks. This indirect optimization scheme would lead to suboptimal results while our proxy-less NAS does not suffer from this problem. \n\nb) We broke the convention in neural architecture design by *not* repeating the same building block structure. Our method explores a much larger architecture space compared to previous NAS methods (10^547 vs 10^18). Furthermore, our method has much larger block diversity and is able to learn preferences at different positions in the architecture.\n \nFor example, our optimized neural network architectures for GPU, CPU and mobile phone prefer to choose more computation-expensive operations (e.g. 7x7 MBConv6) for the last few stages where the resolution of feature map is low. They also prefer to choose more computation-expensive operations in the first block within each stage where the feature map is downsampled. We consider the ability to learn such patterns which are absent in previous NAS papers also helps to improve our results.\n\n>>> Response to “DPP-Net and NAO citations”: \nApologize for the typo and missing a relevant paper in our reference part. We have fixed typo and added a reference to “Neural Architecture Optimization”. Thanks for pointing out our mistakes.\n\n[1] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. 
arXiv preprint arXiv:1806.09055. 2018.\n", "Thanks for your interest in our work. The evaluation code and pretrained models are accessible at https://goo.gl/QU3GhA. We also made a video to visualize the architecture search process at https://goo.gl/VAzGJs . You are welcome to validate the performance. The entire codebase will be released upon publication.\n\nOur implementation is repeatable and reproducible. We used the same code base to search CPU/GPU/Mobile models. On all three platforms the performance consistently outperformed previous work, thanks to our Proxyless NAS enables searching over a large design space efficiently.\n", "We have added the results for Proxyless-G on ImageNet to the paper (please see Table 6 in Appendix D). We find that without taking latency as a direct objective, Proxyless-G has no incentive to choose computation-cheap operations. Consequently, it designs a very slow network that has 158ms latency on mobile phone. After rescaling the network using depth multiplier [1, 2], the latency of the network reduces to 83ms. However, this model can only achieve 71.8% top-1 accuracy on ImageNet which is 2.8% lower than Proxyless-R. Therefore, as discussed in our previous responses, it is essential to take latency which is non-differentiable as a direct optimization objective. And REINFORCE-based approach provides a solution to this problem.\n\nBeside REINFORCE, we have recently designed a differentiable approach to handle the non-differentiable objectives (please see Appendix D). Specifically, we propose the latency regularization loss based on our proposed latency prediction model (please see Appendix C). The key to the latency regularization loss is an observation that the expected latency of a mixed operation is actually differentiable w.r.t. architecture parameters. Therefore, by incorporating the expected latency into the loss function as a regularization term, we are able to directly optimize the trade-off between accuracy and latency. Further details are provided in Appendix D. \n\n[1] Sandler, Mark, et al. \"MobileNetV2: Inverted Residuals and Linear Bottlenecks.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\n[2] Tan, Mingxing, et al. \"Mnasnet: Platform-aware neural architecture search for mobile.\" arXiv preprint arXiv:1807.11626 (2018).", "Hi all,\n\nWe have uploaded a revision of our paper with the following new methods and stronger experiment results:\n\na) “Economical alternative to mobile farm”. In Appendix C, we introduce an accurate latency prediction model and remove the need of building an expensive mobile farm infrastructure [1] when learning specialized neural network architectures for mobile phone. We add new experiment results on the mobile setting, where our model achieves state-of-the-art top-1 accuracy on ImageNet under mobile latency constraints. \n\nb) “Make latency differentiable”. In Appendix D, we present a *differentiable* approach to handle the non-differentiable objectives (i.e. latency in our case). Specifically, we propose the latency regularization loss based on our proposed latency prediction model. By incorporating the predicted latency of the network into the loss function as a regularization term, we are able to directly optimize the trade-off between accuracy and latency. We also add new experiments on ImageNet to justify the effectiveness of the proposed latency regularization loss. \n\n[1] Tan, Mingxing, et al. 
\"Mnasnet: Platform-aware neural architecture search for mobile.\" arXiv preprint arXiv:1807.11626 (2018).", "Thanks for your answers! I assume you man replacement=False, right?\nbeta1 is set zero, and what value do you use for beta2?\nAnd for the network parameters, what is your optimizer and hyperparameters , including learning rate schedule (for CIFAR-10)?", "So, on further thought I assume you might have meant rescaling probabilities of sampled operations by a factor such that probabilities of unsampled operations stay the same. And update the corresponding alphas for the sampled operations such that this matches.\n\nI have tried to do this here:\nhttps://gist.github.com/robintibor/83064d708cdcb311e4b453a28b8dfdca\n\nDoes this look correct to you?", "Let me expand a little bit on the question and just write my understanding and open questions regarding the Gradient-Based Updates from section 3.1.\n\nSo, given a_i's as architecture weights, I am implementing it as follows:\n1. Compute p_i's from a_i's using softmax\n2. Use computed p_i's as sampling probabilities for the multinomial distribution to select two operations. [Possibly resample, if same operation chosen twice?]\n3. Recompute p_i's of the chosen a_i's by only pushing the two chosen a_is through softmax? Let's call them pnew_i's\n4. Use pnew_i's as input to binarize function, which will select one operation as active and one as inactive\n5. Compute outputs for both chosen operations, let's call them o_1, o_2, with o_1 the active operation according to the binarize function computed before\n6. Compute overall output as g_1(=1)*o_1 + g_2(=0)*o_2 (g_1, g_2 from binarize)\n7. Compute gradient on chosen a_i's as (gradient of loss wrt g_i) * (gradient of pnew_i wrt a_i) [or using full softmax, i.e. (gradient of loss wrt g_i) * (gradient of p_i wrt a_i)?]\n8. Make update step on a_i's with optimizer\n9. Multiply updated and chosen a_is by a factor that keeps probabilities p_is of unchosen operations identical to before [or see update below]\n\nWhat is correct, what is not?\n\nAlso, you use Adam for the architecture parameters, do you think it can be a problem for the adaptive gradient averages that in a single update, most operations are not chosen? Or do you sample multiple times before you make an Adam update step?\n\n\n\n", "Thanks for the fascinating research work.\nI am trying to reimplement your method and have a question regarding:\n\"Finally, as path weights are computed by applying softmax to the architecture parameters, we need to rescale the value of these two updated architecture parameters by multiplying a ratio to keep the path weights of unsampled paths unchanged.\"\n\nI am not sure how to do this correctly, can you provide the formula for this ratio or code? I am a bit stuck there, how to compute the ratio :)\n\nAnother question regarding:\n\"Following this idea, within an update step of the architecture parameters, we first sample two paths according to the multinomial distribution (p1,···,pN) and mask all the other paths as if they do not exist.\"\n\nCould this sampling result in the same path being chosen twice? And do you handle that in some way?", "\n>>> Response to “comparison with One Shot and DARTS”: \nApologize for the unclear explanation for this experiment. We will revise this part to make it more clear. \n\nAll of three methods are evaluated under the same condition except DARTS [3]. 
Same as the original paper, DARTS *has to* use a smaller scale setting for learning architectures due to the high memory consumption. So for DARTS, the first cell structure setting is chosen to fit the network into a single GPU to learn cell structure. Then we evaluated the learned cell structure on two larger settings by repeatedly stacking it, same as the original DARTS paper [3]. \n\nFor our method, since we solved the high memory consumption issue via binarized path, our method can directly learn architectures under both small-scale and large-scale settings with *limited* GPU memory. As it is one of the key advantages of our method over previous NAS methods, we consider it reasonable to keep such differences. \n\n>>> Response to “add results for Proxyless-G on ImageNet”: \nThanks for suggesting this new experiment. We have launched this experiment and will add the results to the paper.\n\nHowever, it is important to take latency as a *direct* objective when learning specialized neural network architectures for a platform. Otherwise, NAS would fail to make a good trade-off between accuracy and latency. For example, NASNet-A [1] and AmoebaNet-A [2] has shown compelling accuracy results compared to MobileNetV2 1.4 with similar number of parameters and FLOPs. But they are optimized without the awareness of the latency, their measured latencies on mobile phone are much worse than MobileNetV2 1.4 (see below). Therefore, we employ REINFORCE to directly optimize the non-differentiable objective (i.e. latency).\n\nModel\t\t\t\tParams\t FLOPS\t Top-1\tMobile latency\nMobileNet V2 1.4\t\t6.9M\t\t585M\t\t74.7\t\t143ms\nNASNet-A\t\t\t5.3M\t\t564M\t\t74.0\t\t183ms\nAmeobaNet-A\t\t5.1M\t\t555M\t\t74.5\t\t190ms\n\n[1] Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. CVPR 2018.\n[2] Real E, Aggarwal A, Huang Y, Le QV. Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548. 2018.\n[3] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 2018.\n[4] Bender G, Kindermans PJ, Zoph B, Vasudevan V, Le Q. Understanding and simplifying one-shot architecture search. ICML 2018.", "Dear authors, can you release your source code for readers to validate your experiment?", "\nThis paper addresses the problem of architecture search, and specifically seeks to do this without having to train on \"proxy\" tasks where the problem is simplified through more limited optimization, architectural complexity, or dataset size. The paper puts together a set of existing complementary methods towards this end, specifically 1) Training \"cumbersome\" networks as in One Shot and DARTS, 2) Path binarization to address memory requirements (optimized using ideas in BinaryConnect), and 3) optimizing a non-differentiable architecture using REINFORCE. The end result is that this method is able to find efficient architectures that achieve state of art performance with fewer parameters, can be optimized for non-differentiable objectives such as latency, and can do so with smaller amounts of GPU memory and computation.\n\nStrengths\n\n + The paper is in general well-written and provides a clear description of the methods.\n\n + Different choices made are well-justified in terms of the challenge they seek to address (e.g. 
non-differentiable objectives, etc.)\n\n + The results achieve state of art while being able to trade off other objectives such as latency\n\n + There are some interesting findings such as the need for specialized blocks rather than repeating blocks, comparison of architectures for CPUs vs. GPUs, etc. \n\nWeaknesses\n \n - In the end, the method is really a combination of existing methods (One Shot/DART, BinaryConnect, use of RL/REINFORCE, etc.). One novel aspect seems to be factorizing the choice out of N candidates by making it a binary selection. In general, it would be good for the paper to make clear which aspects were already done by other approaches (or if it's a modification what exactly was modified/added in comparison) and highlight the novel elements.\n\n - The comparison with One Shot and DARTS seems strange, as there are limitations place on those methods (e.g. cell structure settings) that the authors state they chose \"to save time\". While that consideration has some validity, the authors should explicitly state why they think these differences don't unfairly bias the experiments towards the proposed approach.\n\n - It's not clear that the REINFORCE aspect is adding much; it achieves slightly higher parameters when compared against Proxyless-G, and while I understand the motivation to optimize a non-differentiable function in this case the latency example (on ImageNet) is never compared to Proxyless-G. It could be that optimized the normal differentiable objective achieves similar latency with the smaller number of parameters. Please show results for Proxyless-G in Table 4.\n\n - There were several typos throughout the paper (\"great impact BY automatically designing\", \"Fo example\", \"is build upon\", etc.)\n\n In summary, the paper presents work on an interesting topic. The set of methods seem to be largely pulled from work that already exists, but is able to achieve good results in a manner that uses less GPU memory and compute, while supporting non-differentiable objectives. Some of the methodological issues mentioned above should be addressed though in order to strengthen the argument that all parts of the the method (especially REINFORCE) are necessary. ", "It seems the authors propose an efficient method to search platform-aware network architecture aiming at high recognition accuracy and low latency. Their results on CIFAR-10 and ImageNet are surprisingly good. But it is still hard to believe that the author can achieve 2.08% error rate with only 5.7M parameter on CIFAR10 and 74.5% top-1 accuracy on ImageNet with less GPU hours/memories than prior arts.\n\nGiven my concerns above, the author must release their code and detail pipelines since NAS papers are difficult to be reproduced. \n\nThere is a small typo in reference part:\nJing-Dong Dong's work should be DPP-Net instead of PPP-Net (https://eccv2018.org/openaccess/content_ECCV_2018/papers/Jin-Dong_Dong_DPP-Net_Device-aware_Progressive_ECCV_2018_paper.pdf)\nand I think this paper \"Neural Architecture Optimization\" shoud be cited.", "Hi all,\n\nOur efficient algorithm allows us to specialize neural network architectures for different devices easily. Recently, we extended our proxyless NAS to the mobile setting and achieved SOTA result with mobile latency constraint (< 80ms latency on Pixel 1 phone) as well. The following is our current results on ImageNet (Device: Pixel 1. Batch size: 1. 
Framework: TF-Lite):\n\nModel\t\t\t\tTop-1\tTop-5\tMobile latency\nMobileNet V1\t\t70.6\t\t89.5\t\t113ms\nMobileNet V2\t\t72.0\t\t91.0\t\t75ms\nNASNet-A\t\t\t74.0\t\t91.3\t\t183ms\nAmeobaNet-A\t\t74.5\t\t92.0\t\t190ms\nMnasNet\t\t\t74.0\t\t91.8\t\t76ms\nMnasNet (our impl.)\t74.0\t\t91.8\t\t79ms\nProxyless NAS (ours)\t74.6\t\t92.2\t\t78ms\n\nThe detailed architectures of our searched models and their learning process are provided in the following anonymous link:\nhttps://drive.google.com/open?id=1nut1owvACc9yz1ZPqcbqoJLS2XrVPp1Q" ]
[ -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1 ]
[ -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, -1 ]
[ "S1lC-BtXpX", "S1x0oTiHR7", "H1x6eyDe0X", "rkle4Pwda7", "rJl4Qn5KAQ", "HJelAtlcRm", "iclr_2019_HylVB3AqYm", "rJxuErCmTm", "SJlO7KpVp7", "iclr_2019_HylVB3AqYm", "HkxEMyl33m", "BklS-ur9h7", "rJl-2uUshQ", "HkxRDmY7am", "S1xBSKTN6m", "iclr_2019_HylVB3AqYm", "H1eNgOayAX", "BygWA7OK6X", "rkle4Pwda7", "iclr_2019_HylVB3AqYm", "SJlO7KpVp7", "iclr_2019_HylVB3AqYm", "iclr_2019_HylVB3AqYm", "iclr_2019_HylVB3AqYm", "iclr_2019_HylVB3AqYm" ]
iclr_2019_Hyl_vjC5KQ
Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization
Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.
accepted-poster-papers
This paper proposes a method for hierarchical reinforcement learning that aims to maximize mutual information between options and state-action pairs. The approach and empirical analysis are interesting. The initial submission had many issues with clarity. However, the new revisions of the paper have significantly improved the clarity, better describing the idea and improving the terminology. The main remaining weakness is the scope of the experimental results. Nevertheless, the reviewers agree that the paper exceeds the bar for publication at ICLR with the existing experiments.
train
[ "BylepC6n3m", "H1gOFYTK3m", "rJxEm9nX0Q", "Syegx9hmAX", "BJlN6Qoy07", "ryex2YGdpm", "Hkgrq6jvaQ", "SJxeXTsD6Q", "rJx6n5iDa7", "BklAL5sPT7", "BJg4ANaphm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors propose an HRL algorithm that attempts to learn options that maximize their mutual information with the state-action density under the optimal policy.\n\nSeveral key terms are used in ways that differ from the rest of the literature. The authors claim options are learned in an \"unsupervised\" manner, but it is unclear what this means. Previous work (none of which is cited) has dealt with unsupervised option discovery in the context of mutual information maximization (Variational intrinsic control, diversity is all you need, etc), but they do so in the absence of reward, unlike this paper. \"Optimal policy\" is similarly abused, with it appearing to mean optimal from the perspective of the current model parameters, rather than optimal in any global sense. Or at least I think that is what the authors intend. If they do mean the globally optimal policy, then its unclear how to interpret Equation 8, with its reference to a behavior policy and an advantage function, neither of which would be available if meant to represent the global optimum.\n\nEquation 10 comes out of nowhere. One must assume they meant \"maximize mutual information\" and not \"minimize\", but who knows. Why is white-noise being added to the states and actions? Is this some sort of noise-contrastive estimation approach to mutual information estimation? It doesn't appear to be, but it is unclear what else could motivate it. Even the appendices fail to shine light on this equation.\n\nThe algorithm block isn't terribly helpful. The \"t\" variable is used outside of its for loop, which draws into question the exact nesting structure of the underlying algorithm (which isn't obvious for HRL methods). There aren't any equations referenced, with the option policy network's update not even referencing the loss nor data over which the loss would be evaluated.\n\nSome of the experimental results show promise, but the PPO Ant result raises some questions. Clearly the OpenAI implementation of PPO used would have tuned for the OpenAI gym Ant implementation, and the appendix shows it getting decent results. But it never takes off in the harder RlLab version -- were the hyper-parameters adjusted for this new environment?\n\nIt is also odd that no other HRL approaches are evaluated against, given the number cited. Running these methods might be too costly, but surely a table comparing results reported in those papers should be included.\n\nA minor point: another good baseline would be TD3 with the action repeat adjusted to be inline with the gating policy.\n\nI apologise if this review came off as too harsh -- I believe a good paper can be made of this with extensive rewrites and additional experiments. But the complete lack of clarity makes it feel like it was rushed out prematurely.\n\nEDIT: Now this is a paper that makes sense! With the terminology cleared up and the algorithm fully unpacked, this approach seems quite interesting. The experimental results could always be stronger, but no longer have any holes in them. Score 3-->6", "Revision: The authors addressed most of my concerns and clearly put in effort to improve the paper. The paper explains the central idea better, is more precise in terminology in general, and the additional ablation gives more insight into the relative importance of the advantage weighting. I still think that the results are a bit limited in scope but the idea is interesting and seems to work for the tasks in the paper. 
I adjusted my score to reflect this.\n\nSummary:\nThe paper proposes an HRL system in which the mutual information of the latent (option) variable and the state-action pairs is approximately maximized. To approximate the mutual information term, samples are reweighted based on their estimated advantage. TD3 is used to optimize the modules of the system. The system is evaluated on continuous control task from OpenAI gym and rllab.\n\nFor the most part, the paper is well-written and it provides a good overview of related work and relevant terminology. The experiments seem sound even though the results are not that impressive. The extra analysis of the option space and temporal distribution is interesting. \n\nSome parts of the theoretical justification for the method are not entirely clear to me and would benefit from some clarification. Most importantly, it is not clear to me why the policy in Equation 7 is considered to be optimal. Given some value or advantage function, the optimal policy would be the one that picks the action that maximizes it. The authors refer to earlier work in which similar equations are used, but in those papers this is typically in the context of some entropy maximizing penalty or KL constraint. A temperature parameter would also influence the exploration-exploitation trade-off in this ‘optimal’ policy. I understand that the rough intuition is to take actions with higher advantage more often while still being stochastic and exploring but the motivation could be more precise given that most of the subsequent arguments are built on top of it. However, this is not the policy that is used to generate behavior. In short, the paper is clear enough about how the method is constructed but it is not very clear to me *why* the mutual information should be optimized with respect to this 'optimal' policy instead of the actual policy one is generating trajectories from.\n\nHRL is an interesting area of research with the potential to learn complicated behaviors. However, it is currently not clear how to evaluate the importance/usefulness of hierarchical RL systems directly and the tasks in the paper are still solvable by standard systems. That said, the occasional increase in sample efficiency over plain TD3 looks promising. It is somewhat disappointing that the number of beneficial option is generally so low. To get more insight in the methods it would have been nice to see a more systematic ablation of related methods with different mutual information pairings (action or state only) and without the advantage weighting. Could it be that the number of options has to remain limited because there is no parameter sharing between them? It would be interesting to see results on more challenging control problems where the hypothesized multi-modal advantage structure is more likely to be present.\n\nAll in all I think that this is an interesting paper but the foundations of the theoretical motivation need a bit more clarification. In addition, experiments on more challenging problems and a more systematic comparison with similar models would make this a much stronger paper.\n\nMinor issues/typos:\n- Contributions 2 and 3 have a lot of overlap.\n- The ‘o’ in Equation 2 should not be bold font. \n- Appendix A. 
Shouldn’t there be summations over ‘o’ in the entropy definitions?\n\n\n", "- The PPO baseline is updated to address the concern from the reviewer 3\n- The experimental results are updated to include the performance of a variant of the proposed method which does not use the advantage-weighted importance for computing mutual information.\n", "To evaluate the benefit of the advantage-weighted importance, we evaluated a variant of adInfoHRL, which does not use the advantage-weighted importance for computing mutual information. The results show that the proposed method outperforms the version without the advantage-weighted importance on all the four tasks. We added the result of the results in the revised manuscript.\n", "We performed a parameter sweep to tune the performance of PPO, and we updated the result graph. We observe that there is a trade-off of the performance across the tasks. For example, when obtaining the better performance on Ant-rllab, the performance on the Walkder2d-v1 gets lower. We picked the hyperparameters of PPO that give the performance comparable to the one reported in [Haarnoja, ICML 2018], although the hyperparameters of PPO used in [Haarnoja, ICML 2018] are not provided.", "While you didn't tune hyperparameters for specific tasks, surely you picked hyperparameters by maximizing performance across all tasks. PPO's hyperparameters were tuned without knowledge of Ant-rllab, making the current comparison unfair. Rerunning a PPO hyperparameter sweep with your collection tasks would solve this issue, as would limiting the set of tasks to those used to tune PPO (i.e. switching out Ant-rllab for Ant-gym).", "Dear reviewers\n\nThank you for constructive comments. We made major revision, especially on the part where we motivate and explain our method. We believe that the manuscript is significantly improved thanks to the reviewers’ comments.\n\nHere is the summary of our revision and answers to reviewers’ questions\n1.\tRemoval of the term “optimal policy”\nIn the initial manuscript, a policy of the form \\frac{\\exp(A(s,a ))}{Z} is referred to as “optimal policy”, and we removed this expression. We consider a policy of this form in order to reduce the problem of finding the modes of the advantage function to that of finding modes of the probability density of state-action pairs. Any policy from which a sample is drawn and that results in a higher return with higher probability can be used for this purpose. In the revised manuscript, a policy of the form \\frac{f(A(s,a ))}{Z} is referred to as “a policy based on the advantage function,” where f is a monotonically increasing function with respect to the input variable. We replaced \\exp with a monotonically increasing function f in the revised manuscript so that we can emphasize that the form of Equation 7 is not limited to the exponential function. Although we used f() = exp() in our implementation and a policy of the form \\frac{\\exp(A(s,a ))}{Z} is optimal in entropy-regularized RL, our method is not related to entropy-regularized RL. We revised the manuscript to avoid the confusion.\n\n2.\tClarification of the motivation of using the advantage-weighted importance\nWe can reduce the problem of finding the modes of the advantage function to that of the modes of the density of state-action pairs with the advantage-weighted importance. 
However, without the advantage-weighted importance, modes of the density of the state-action pairs induced by an arbitrary policy do no correspond to those of the advantage function in general. We revised the manuscript to clarify this point.\n\n3.\tBenefit of the deterministic option policies\nReviewer 1 questioned the benefit of the deterministic option policies. When learning stochastic option policies, the option-value function needs to be learned in addition to the action-value functions. As discussed in Section 4 in the revised manuscript, the option-value function does not need to be learned, since it can be estimated from the action-value function and the option policies when the option policies are deterministic. When option policies are stochastic, learning the option-value function needs to be updated if the option policies are updated. However, in the case of deterministic option policies, this additional learning cost is not necessary. Hence, the use of deterministic option policies can be more sample-efficient than that of stochastic option policies. \n\n4.\tComparison with other HRL methods\nWe put a table for comparison with recent HRL methods in Appendix. In terms of the achieved returns, our method outperforms IOPG (Smith et al., ICML 2018). Compared with SAC-LSP (Haanoja, ICML2018), our method outperforms SAC-LSP on Walker2d and Ant-rllab, and SAC-LSP shows its superiority on Hopper.\n\n5.\tRevision of premature descriptions\nReviewer 3 pointed out some issues of the description in Algorithm 1, and Reviewer 2 also pointed out some typos. We modified those points and revised several descriptions to improve the clarity. In addition, the term “unsupervised” was confusing in the initial manuscript, we removed the related descriptions. We also cited missing related work, such as variational intrinsic control and diversity is all you need.\n", "Thank you for the comments. We revised our manuscript to clarify the motivation. Please refer to the above post for details. We also answer your question here.\n\n-\tClarification of the objective function for learning the latent variable\nReviewer 3 raised a concern on the objective function for learning the latent variable. The objective function is based on regularized information maximization (RIM) . Since the objective function is negative to the MI term, the latent variable is learned by minimizing the objective function. The KL term in the objective function is the regularization term based on virtual adversarial training (VAT). We revised our manuscript to make the story more easily followable.\n\n-\tHyperparameters of PPO\nWe used the default parameters in the baseline implementation, and we did not tune the parameter for Ant-rllab. We fixed the hyperparameters of adInfoHRL and TD3 as well, and we did not tune hyperparameters for specific tasks. The hyperparmeters are provided in Appendix.", "Thank you for the comments. We revised our manuscript to clarify the motivation. Please refer to the above post for details. We would also like to clarify some points here.\n\n- The reason why the advantage-weighted importance is necessary\nIf we do not use the advantage-weighted importance, we learn the latent variable with respect to the density of state-action pairs visited during the learning phase. However, modes of such a density correspond to not modes of the advantage function but the current location of the option policies. 
Therefore, the latent variable learned without the advantage-weighted importance do not improve the location of the option policies. By using the advantage-weighted importance, we can learn the discrete variable that corresponds to the modes of the advantage function. \n", "Thank you for the comments. We answer some of your questions here. Please refer to the above post for other concerns and questions.\n\n- questions about information maximization\nOur approach is to maximize the mutual information between the latent variable of the hierarchical policy and the state-action pairs, which results in learning discrete representations of the state-action space. We revised the manuscript to clarify this point.\n\n- Please add more discussion on why the options are switched at every step\nThe options are not switched at every time step as shown in Figure 2. For example, the option indicated by yellow is activated for about 30 time-steps at most. \n\n- Question about whether our method is off-policy or not\nWe do not intend to list “off-policy” as one of the contributions, although it is one of the features of our approach. Our approach is off-policy in several points even though we employed an on-policy buffer for learning the options. In our method, samples are collected using a behavior policy instead of the “raw” learned policy, and both the Q-function and the option policies are trained using the replay buffer in an off-policy manner. Therefore, we think that our method should be categorized as an off-policy method.\n\n-\tAvailability of the advantage function\nWe do not assume the availability of the advantage function. In practice, it is necessary to approximate the advantage function. Our approach finds the latent variable with respect to the current estimate of the advantage function. Since the Q-function converges to the optimum as learning progresses, our method can learn the latent variable with respect to the optimal advantage function at convergence. In actor critic, policy parameters are updated with respect to the current approximation of the Q-function or the advantage function. Likewise, one can interpret that the latent variable of our hierarchical policy is updated with respect to the current approximation of the advantage function in our method.", "The paper considers the problem of hierarchical reinforcement learning, and proposes a criterion that aims to maximize the mutual information between options and state-action pairs.\n\nThe idea of having options partition the state-action space is appealing, because this allows options visit the same states, so long as they act differently, which is natural. The authors show empirically that the learned options do indeed decompose the state-action space, but not the state space.\n\nThere is a lot in the paper already, but the exposition could be much improved. Many of the design choices appear very ad hoc, and some are outright confusing. Some detailed comments:\n\n* I got really confused in Section 3 re: advantage-weighted importance sampling. Why do this? If the option policies are trying to optimize reward, won’t they become optimal eventually (or so we usually hope in RL)? This section seems to assume that the advantage function is somehow given. It also doesn’t look like this gets used in the actual algorithm, and in fact on page 5 it is stated that “we decided to use the on-policy buffer in our implementation”. Then why introduce the off-policy bit at all, and list it as a contribution?\n* Please motivate the choices. 
The paper mentions that one of its contributions are options with deterministic policies. This isn’t a contribution unless it addresses some problem that stochastic policies fail at. For example, DPG allows one to address continuous control problems.\nSame with using information maximization. The paper literally states that “an interpretable representation can be learned by maximizing mutual information”. Representation of what? MI between what?\n* Although the qualitative results are nice (separation of the state-action space), empirical results are modest at best. This may be ok, because based on the partition of the state-action space it seems that the option policies learn diverse behaviors in the same states. Maybe videos visualizing different options from the same states would be informative.\n* Please add more discussion on why the options are switched at every step" ]
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_Hyl_vjC5KQ", "iclr_2019_Hyl_vjC5KQ", "Hkgrq6jvaQ", "rJx6n5iDa7", "ryex2YGdpm", "SJxeXTsD6Q", "iclr_2019_Hyl_vjC5KQ", "BylepC6n3m", "H1gOFYTK3m", "BJg4ANaphm", "iclr_2019_Hyl_vjC5KQ" ]
iclr_2019_Hyx4knR9Ym
Generalizable Adversarial Training via Spectral Normalization
Deep neural networks (DNNs) have set benchmarks on a wide array of supervised learning tasks. Trained DNNs, however, often lack robustness to minor adversarial perturbations to the input, which undermines their true practicality. Recent works have increased the robustness of DNNs by fitting networks using adversarially-perturbed training samples, but the improved performance can still be far below the performance seen in non-adversarial settings. A significant portion of this gap can be attributed to the decrease in generalization performance due to adversarial training. In this work, we extend the notion of margin loss to adversarial settings and bound the generalization error for DNNs trained under several well-known gradient-based attack schemes, motivating an effective regularization scheme based on spectral normalization of the DNN's weight matrices. We also provide a computationally-efficient method for normalizing the spectral norm of convolutional layers with arbitrary stride and padding schemes in deep convolutional networks. We evaluate the power of spectral normalization extensively on combinations of datasets, network architectures, and adversarial training schemes.
accepted-poster-papers
Adversarial training has quickly become important for training robust neural networks. However, this training generally results in poor generalization behavior. This paper proposes using margin loss with adversarial training for better generalization. The paper provides generalization bounds for this adversarial training setup, motivating the use of spectral regularization. The experimental results using spectral regularization with adversarial training are very promising, and all the reviewers agree that they show non-trivial improvement. Even though spectral regularization techniques have been tried in other settings, and are hence of limited novelty, the experimental results in the paper are encouraging and I believe they will motivate further study on this topic. Reviewers also opined that the writing in the paper is currently not that great, with limited explanation of the theoretical results. More discussion interpreting the theoretical results and their significance would help readers appreciate the paper better.
train
[ "HygGEVhzAQ", "H1x3aUom2X", "Skx_uPmGRQ", "BJg4GCdkR7", "ByeNL6dJAQ", "HyxOkadJAX", "H1eRd3dyCQ", "SkeXPX7hhm", "B1xw3F5KhQ", "SkljOqLS9X", "rkgEjrnm9Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your reply. I have updated my rating.", "This paper is well set-up to target the interesting problem of degraded generalisation after adversarial training. The proposal of applying spectral normalisation (SN) is well motivated, and is supported by margin-based bounds. However, the experimental results are weak in justifying the paper's claims.\n\nPros:\n* The problem is interesting and well explained\n* The proposed method is clearly motivated\n* The proposal looks theoretically solid\n\nCons:\n\n* It is unclear to me whether the \"efficient method for SN in convolutional nets\" is more efficient than the power iteration algorithm employed in previous work, such as Miyato et al. 2018, which also used SN in conv nets with different strides. There is no direct comparison of performance.\n\n* Fig. 3 needs more explanation. The horizontal axes are unlabelled, and \"margin normalization\" is confusing when shown together with SN without an explanation. Perhaps it's helpful to briefly introduce it in addition to citing Bartlett et al. 2017.\n\n* The epsilons in Fig. 5 have very different scales (0 - 0.5 vs. 0 - 5). Are these relevant to the specific algorithms and why?\n\n* Section 5.3 (Fig. 6) is the part most relevant to the generalisation problem. However, the results are unconvincing: only the results for epsilon = 0.1 are shown, and even so the advantage is marginal. Furthermore, the baseline models did not use other almost standard regularisation techniques (weight decay, dropout, batch-norm). It is thus unclear whether the advantage can be maintained after applying these standard regularsisers.\n\nA typo in page 6, last line: wth -> with", "The authors have addressed all my questions. \n\nFor 1. it is still weird that the robustness of adversarial training in this paper is much better than the previous papers (previous papers achieves similar accuracy with only 0.031 distortion). But maybe it's because of different network structure. I think this could be resolved later once the authors release their code after iclr review. ", "We thank the reviewers for their valuable time and constructive feedback. In response to the comments raised in the reviews, we have modified Figures 3, 5, and 6 in the main text to more clearly convey their messages. We have also performed the following additional numerical experiments and added the results to the Appendix:\n\n1. We reran and timed all 42 experiments in Table 1 for 40 epochs with and without spectral normalization to clearly illustrate the difference in training time when using our proposed spectral normalization method (Appendix Table 2). We see that the training time with our proposed method is comparable, often being roughly the same and in the worst case taking 1.84 times as long.\n\n2. We provide an extensive comparison of our spectral normalization method for convolutional layers to that proposed by Miyato et al. (2018) in Appendix A.1. We provide numerical evidence that our method properly controls the spectral norm of convolution layers through figures and the estimated spectral norms of the layers post-training. The proposed normalization scheme also results in better generalization performance (Figure 10). We also compare the runtimes of architectures trained using our spectral normalization method versus Miyato et al.’s spectral normalization method (Table 3) and observe that our method takes only slightly longer, as expected.\n\n3. 
We empirically compare spectral normalization to other common regularization techniques for deep neural nets (DNNs): batch normalization, weight decay, and dropout. We see that spectral normalization achieves the best generalization performance in adversarial training settings. The results are provided in Appendix A.2.\n\nWe have also made the appropriate modifications in the main text and cited relevant works raised by the reviewers. We provide our code in an anonymous zip file that can be accessed at: https://www.dropbox.com/s/hl9q2f6epdu80qp/dl_spectral_normalization.zip?dl=0.", "We thank Reviewer 1 for the constructive feedback. Here is our point-to-point response to the comments and questions raised in the review:\n\n1. “The numbers reported in Figure 5 do not match with the performance of adversarial training in previous paper… I wonder why the numbers are so different.” \n\nTable 1 of \"Obfuscated Gradients Give a False Sense of Security\" reports an accuracy of 47% under 0.031 norm-inf perturbation for the CIFAR10 dataset (55% is reported for the MNIST dataset), approximately the same as the 44% accuracy in our Figure 5. The difference in performance stems from how we preprocessed the CIFAR10 images: exactly in the manner described by (Zhang et al., 2017)’s ICLR paper “Understanding deep learning requires rethinking generalization” (we whiten and crop each image). \n\n2. “What's the training time of the proposed method compared with vanilla adversarial training?” \n\nWe have added Table 2 to the Appendix which reports the increase in runtime for each of the 42 experiments discussed in Table 1 after introducing spectral normalization. For 39 of the cases, our TensorFlow implementation of the proposed method results in longer training times (from 1.02 to 1.84 times longer). In the 3 cases of iterative adversarial attacks with the Inception architecture, the proposed method actually results in faster training time. This is likely due to how TensorFlow handles training in the backend. We provide the code for full transparency.\n\n3. “The idea of using SN to improve robustness has been introduced in the following paper: \"Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks\" (but this paper did not combine it with adv training).”\n\nThank you for bringing this recent work to our attention. We cite and discuss this NIPS paper in our updated draft.", "We thank Reviewer 3 for the constructive feedback. Here is our point-to-point response to the comments and questions raised in this review:\n\n1. “The novelty of the algorithm itself is limited, since GAN and adversarial training are both minmax problems, and the original algorithm can be carried over easily”\n\nGAN inference and adversarial training seek different goals. Adversarial training addresses a supervised learning task while GAN inference focuses on an unsupervised learning problem. Due to the inherent difference between supervised and unsupervised learning problems, the notion of generalization is defined differently between them. Arora et al. (2017) provide the standard definition of generalization error for GANs which is very different from the standard generalization error considered in supervised learning. Furthermore, no work in the literature theoretically guarantees that spectral normalization closes the generalization gap for either adversarial supervised learning or GAN unsupervised learning.\n\n2. 
“It is not clear to me that these are some novel results that can better help adversarial training”\n\nOur work’s main contribution is the theoretical generalization guarantees for spectrally-normalized adversarially-trained DNNs. Introducing the adversary can significantly grow the capacity of a DNN. Therefore, existing DNN generalization bounds are not applicable to adversarial training settings. Our work, to our best knowledge, is the first to show that the adversarial learning capacity of a DNN for FGM, PGM, WRM training schemes can be effectively controlled by regularizing the spectral norm of the DNN’s weight matrices. Our numerical results further support our theoretical contribution.", "We thank Reviewer 2 for the constructive feedback. Here is our point-to-point response to the comments and questions raised in the review:\n\n1. “It is unclear to me whether the \"efficient method for SN in convolutional nets\" is more efficient than the power iteration algorithm employed in previous work, such as Miyato et al. 2018, which also used SN in conv nets with different strides. There is no direct comparison of performance.”\n\nWe do not claim that our method is more efficient than Miyato et al.’s method, which uses the spectral norm of the convolution kernel matrix to approximate the spectral norm of the convolution operation. In fact, our proposed method is computationally more expensive than their approximate scheme because each power iteration in our method requires a conv/deconv operation rather than a simple division used by Miyato et al.’s. \n\nWe introduce our new spectral normalization scheme for convolutional layers because there exist examples where the true spectral norm of a convolution operation can be arbitrarily larger than Miyato et al.’s approximation. Therefore, Miyato et al.’s normalization scheme is not guaranteed to control the spectral norm of convolutional layers which is critical for controlling a DNN’s generalization performance (please see our generalization bounds in Section 3). To further support our argument, we performed additional experiments demonstrating how our proposed method better controls the spectral norm of convolution layers, resulting in better generalization and test performance. The results are presented in Appendix A.1. Furthermore, we run several experiments to show that our method is not significantly slower than Miyato et al.’s method, and we report the results in Appendix A.1, Table 3. \n\n2. “Fig. 3 needs more explanation. The horizontal axes are unlabelled, and \"margin normalization\" is confusing”\n\nWe relabel the axes and add a more thorough explanation in the caption. We note that the text explaining Figure 3 mentions how the margin normalization is performed (paragraph 3 in section 5.1): the margin normalization factor is exactly the capacity norm \\Phi described in Theorems 1-4. We clarify that we divide the obtained margins by the values of \\Phi estimated on the dataset.\n\n3. “The epsilons in Fig. 5 have very different scales (0 - 0.5 vs. 0 - 5). Are these relevant to the specific algorithms and why?” \n\nYes, the epsilons are chosen to be different depending on whether we are looking at norm_inf attacks or norm_2 attacks. This is because the two norms can behave very differently in adversarial attack experiments. For example, a norm_inf attack of 0.5 implies that all pixels can be changed by 0.5. 
On the other hand, a norm_2 attack of 0.5 means the overall Euclidean norm of perturbation across all pixels is bounded by 0.5, resulting in a much less powerful attack. Based on this comment, we update the plots with the same attack-norm to have the same scale.\n\n4. \"Section 5.3 (Fig. 6) is the part most relevant to the generalisation problem. However, the results are unconvincing: only the results for epsilon = 0.1 are shown, and even so the advantage is marginal.\" \n\nWe redo the visualization in Figure 6 to make the gains provided by SN clearer. We see that using SN can improve the test performance by over 12% for some FGM, PGM, and WRM cases.\n\n5. \"The baseline models did not use other almost standard regularisation techniques (weight decay, dropout, batch-norm). It is thus unclear whether the advantage can be maintained after applying these standard regularisers.\"\n\nWe did not originally discuss weight decay, dropout, and batch normalization as none of these methods were motivated by the theory we introduced in section 3. However, due to the reviewers’ concern in the updated draft we compare spectrally-normalized networks to networks with the same architecture except with weight decay, dropout, or batch norm in Appendix A.2. In our experiments, the SN-regularized network still performs better in terms of test accuracy. ", "The paper first provides a generalization bounds for adversarial training, showing that the error bound depends on Lipschitz constant. This motivates the use of spectral regularization (similar to Miyato et al 2018) in adversarial training. Using spectral regularization to improve robustness is not new, but it's interesting to combine spectral regularization and adversarial training. Experimental results show significant improvement over vanilla adversarial training. \n\nThe paper is nicely written and the experimental results are quite strong and comprehensive. I really like the paper but I have two questions about the results: \n\n1. The numbers reported in Figure 5 do not match with the performance of adversarial training in previous paper. In PGM L_inf adversarial training/attack (column 3 of Figure 5), the prediction accuracy is roughly 50% under 0.1 infinity norm perturbation. However, previous papers (e.g., \"Obfuscated Gradients Give a False Sense of Security\") reported 55% accuracy under 0.031 infinity norm perturbation. I wonder why the numbers are so different. \n\nMaybe it's because of different scales? Previous works usually scale each pixel to [0,1] or [-1,1], maybe the authors use the [0, 255] scale? But 0.1/255 will be much smaller than 0.031. \n\nAnother factor might be the model structure. If Alexnet has much lower accuracy, it's probably worthwhile to conduct experiments on the same structure with previous works (Madry et al and Athalye et al) to make the conclusion more clear. \n\n2. What's the training time of the proposed method compared with vanilla adversarial training? \n\n3. The idea of using SN to improve robustness has been introduced in the following paper: \n\"Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks\"\n(but this paper did not combine it with adv training). \n", "This paper proposes using spectral normalization (SN) as a regularization for adversarial training, which is based on [Miyato et. al., ICLR 2018], where the original paper used SN for GAN training. The paper also uses the results from [Neyshabur et. 
al., ICLR 2018], where the original paper provided generalization bounds that depends on spectral norm of each layer. \n\nThe paper is well written in general, the experiments are extensive. \n\nThe idea of studying based on the combination of the results from two previous papers is quite natural, since one uses spectral normalization in practice for GAN training, and the other provides generalization bound that depends on spectral norm. \n\nThe novelty of the algorithm itself is limited, since GAN and adversarial training are both minmax problems, and the original algorithm can be carried over easily. The experimental result itself is quite comprehensive. \n\nOn the other hand, this paper provides specific generalization bounds under three adversarial attack methods, which explains the power of SN under those settings. However, it is not clear to me that these are some novel results that can better help adversarial training.\n", "Hello, thank you for your feedback and your interest in our work. Regarding your comments:\n\n1) References [1] and [2] propose standard ERM training while regularizing the Lipschitz constant to improve robustness of the trained network against future adversarial attacks. On the other hand, the main concern of our work is the lack of generalizability in *adversarial* training settings, e.g. FGM and PGM training, which can be significantly worse than in the ERM case as demonstrated by Schmidt et al. (2018). This observation is further supported by the generalization bounds in Theorems 1-4, which motivate the regularization of spectral norms. While there exist multiple approaches for regularizing the Lipschitz constant, we specifically propose applying spectral normalization because this allows us to directly enforce our adversarial generalization bounds.\n\n2) Thank you for bringing the recent NIPS work [3] to our attention. We note that while the two iterative approaches for computing a convolution layer’s spectral norm both yield the same result, the implementations are different. [3]’s computation of spectral norm requires computing the gradient of the Euclidean norm of the convolution operation. Ours leverages the deconvolution operation, which circumvents needing to take the gradient.\n\n3) We observed in several experiments (e.g. for training Inception over CIFAR10) that batch normalization helps with training speed but does not offer a considerable improvement in adversarial test accuracy over the no-regularization case. ", "Hi, thank you for the nice work. I have three comments below.\n\n1. Regularizing Lipschitz constant for improved generalization/robustness seems not novel. It backs to [1] and [2] showed enhanced performance on both clean and adversarial examples. The main difference seems you used normalization instead of regularization. So I would like authors to clarify the advantages to use normalization.\n\n2. The method to calculate the spectral norm of convolution is already proposed by a recent NIPS paper in a more generalized form [3].\n\n3. Removing standard regularization techniques such as dropout and batch-normalization may degrade the baseline performance. It will be helpful if experiments with dropout and batch-normalization are available. For example, other Lipschitz-concerned work reports their accuracy with batch-normalization [2][4].\n\n[1] Szegedy et al. Intriguing properties of neural networks. ICLR2014\n[2] Cisse et al. Parseval Networks: Improving Robustness to Adversarial Examples. ICML2017\n[3] Tsuzuku et al. 
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks. NIPS2018\n[4] Yoshida and Miyato. Spectral Norm Regularization for Improving the Generalizability of Deep Learning. https://arxiv.org/abs/1705.10941" ]
[ -1, 6, -1, -1, -1, -1, -1, 6, 5, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, 5, 3, -1, -1 ]
[ "H1eRd3dyCQ", "iclr_2019_Hyx4knR9Ym", "ByeNL6dJAQ", "iclr_2019_Hyx4knR9Ym", "SkeXPX7hhm", "B1xw3F5KhQ", "H1x3aUom2X", "iclr_2019_Hyx4knR9Ym", "iclr_2019_Hyx4knR9Ym", "rkgEjrnm9Q", "iclr_2019_Hyx4knR9Ym" ]
iclr_2019_Hyx6Bi0qYm
Adversarial Domain Adaptation for Stable Brain-Machine Interfaces
Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option to restore voluntary movements after paralysis. These devices are based on the ability to extract information about movement intent from neural signals recorded using multi-electrode arrays chronically implanted in the motor cortices of the brain. However, the inherent loss and turnover of recorded neurons requires repeated recalibrations of the interface, which can potentially alter the day-to-day user experience. The resulting need for continued user adaptation interferes with the natural, subconscious use of the BMI. Here, we introduce a new computational approach that decodes movement intent from a low-dimensional latent representation of the neural data. We implement various domain adaptation methods to stabilize the interface over significantly long times. This includes Canonical Correlation Analysis used to align the latent variables across days; this method requires prior point-to-point correspondence of the time series across domains. Alternatively, we match the empirical probability distributions of the latent variables across days through the minimization of their Kullback-Leibler divergence. These two methods provide a significant and comparable improvement in the performance of the interface. However, implementation of an Adversarial Domain Adaptation Network trained to match the empirical probability distribution of the residuals of the reconstructed neural signals outperforms the two methods based on latent variables, while requiring remarkably few data points to solve the domain adaptation problem.
accepted-poster-papers
BMIs need per-patient and per-session calibration, and this paper seeks to amend that. Using VAEs and RNNs, it relates sEEG to sEMG, in principle a ten-year-old approach, but does so using a novel adversarial approach that seems to work. The reviewers agree that the approach is nice and that the statements in the paper are too strong, but publication is recommended. Clinical evaluation is an important next step.
val
[ "Byx8cczQRm", "rkxSLkcxCQ", "rkgH8gGa6Q", "HkxTjbBj67", "Byx8TyLcpX", "H1gRv1IqpX", "B1lX3AHqa7", "B1gJaHNcpm", "BJlvxzmJaQ", "S1gZuAoJ3X", "SJljUupdoQ", "Hke5lmpc9m", "SyeDE0WYqX" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Dear reviewer #1,\n \nWe have submitted a response to your review and a revised version of our paper. We hope to have succeeded in answering all questions and comments. If there are any remaining concerns, please let us know so that we can address them before the deadline.\n\nThanks, ", "Dear reviewers, \n\nWe have submitted a revised version of our paper. We have done our best to address all your comments, and hope you will find the changes to be positive. We would be glad to address any additional or remaining concerns. Thank you for your feedback and comments; the revisions you suggested have strengthened the paper.\n", "We thank the reviewer for the feedback and comments. \n\nQ: “The paper considers invasive BMIs and studies various ways to avoid daily recalibration due to changes in the brain signals. While I like the paper and studied methods -- using adversarial domain adaptation is interesting to use in this context --, I think that the authors oversell a bit. The problem of nonstationarity rsp. stability is an old one in non-invasive BCIs (shenoy et al JNE 2006 was among the first) and a large number of prior methods have been defined to robustify feature spaces, to project to stable subspaces etc. Clearly no Gans at that time. The least the authors could do is to make reference to this literature, some methods may even apply also for the invasive data of the paper.”\n\nA: We thank the reviewer for the positive comment about our work. We do not claim to be the first to address the issue of stability in the presence of non-stationary recorded signals. We have added references to Zhang & Chase 2013, Nuyujukian et al 2014, Dyer et al 2017, and Downey et al 2018 to the papers we had already listed in the section on Related Work: Orsborn et al 2012, Dangi et al 2013, Bishop et al 2014, Jarosiewicz et al 2015, Susillo et al 2016, Kao et al 2017, and Pandarinath et al 2017. We could not find the Shenoy et al JNE 2006 article mentioned by the reviewer. We would appreciate more details regarding this paper – does this refer to the Nature 2006 or the JNE 2007 paper from the Shenoy group? In the revised version of our paper, we have expanded the description of the methods previously put forward by these authors. Our claim to novelty is in formulating the problem as one of domain adaptation for neural signals, a problem that can be addressed through the use of adversarial training. \n\nQ: “While the authors did not clearly say that they present an offline analysis; one method, the GAN, gets 6% better results then the competitors. I am not sure whether this is practically relevant in an online setting. But this needs to be clearly discussed in the paper and put into perspective to avoid wrong impression. Only an online study would be convincing.” \n\nA: The reviewer is correct in pointing out that our evaluation of BMI performance is offline, in an open-loop scenario and that we cannot claim improved ease of use since the aligned BMI has not been tested in an online, closed-loop scenario. We have removed the statement to that effect in the revised version of the paper. However, the 6% improvement over the competitors in open-loop was statistically significant. A further advantage of ADAN in comparison to CCA and KLDM is that it is an unsupervised method that involves no assumption on the statistics of the latent activity. \nThe question of online vs offline comparison is an important one. 
Online BMI performance is not perfectly correlated with offline accuracy; this is actually the reason that offline comparison is important in this case. In an online evaluation of BMI performance, the user’s ability to adapt at an unknown rate and to an unknown extent to an imperfect BMI obscures the performance improvements obtained with domain adaptation. Although experiments, both open and closed loop, with additional animals and involving additional tasks, are in process as required to validate our results, the open loop performance improvement demonstrated here is a more stringent metric than improvements achieved in a closed loop setting.\n\nQ: “Overall, I think the paper could be accepted, the experiments are nice, the data is interesting, if it is appropriately toned down (avoiding statements about having done something for the first time) and properly references to prior work are given. It is an interesting application domain. I additionally recommend releasing the data upon acceptance.” \n\nA: We once more thank the reviewer for the positive comments. We have followed the advice and are more careful and specific in claiming novelty in the revised version of the paper. We plan to make data and code available on GitHub upon acceptance.", "We thank the reviewer for a careful reading of our paper and the positive comments about our work. \n\nQ: “Some parts could be improved. The results of Fig. 2B to investigate the role of latent variables extracted from the trained autoencoder are not clear, the simultaneous training could be better explained. As the authors claimed that their method allows to make an unsupervised alignment neural recording, independently of the task, an experiment on another dataset could enforce this claim.”\n\nA: In the revised version of our paper we have clarified the procedure for training the AE. It is based on a loss function that includes not only the unsupervised neural reconstruction loss but also a supervised regression loss that quantifies the quality of EMG prediction (see Eq 1). This combined training resulted in low-dimensional latent variables that were then used as inputs to a muscle predictor; this predictor performed as well as a muscle predictor based directly on the high-dimensional neural activity. \nWe do agree that additional experiments, both open loop (offline) and closed loop (online), with additional animals, and involving additional tasks, are required to fully validate our results; we are currently in the process of developing and running these experiments. A vetting of the computational ideas within the machine learning community is crucial before embarking into extremely time-consuming experiments. \n", "We thank the reviewer for the detailed feedback and comments. \n\nQ: “Here the authors define a BMI that uses an autoencoder -> LSTM -> EMG. The authors then address the problem of data drift in BMI and describe a number of domain adaptation algorithms from simple (CCA to more complex ADAN) to help ameliorate it. There are a lot of extremely interesting ideas in this paper, but the paper is not particularly well written, and the overall effect to me was confusion. What problem is being solved here? Are we describing using latent variables (AE approach) for BMI? Are we discussing domain adaptation, i.e. handling the nonstationarity that so plagues BMI and array data? Clearly the issue of stability is being addressed but how? 
A number of different approaches are described from creating a pre-execution calibration routine whereby trials on the given day are used to calibrate to an already trained BMI (e.g. required for CCA) to putting data into an adversarial network trained on data from earlier days. Are we instead attempting to show that a single BMI can be used across multiple days?”\n\nA: The reviewer correctly highlights a number of the important aspects of the work. Our main objective is indeed to stabilize a fixed BMI so as to increase its longevity and usability across many days. To this end, we started by describing the general architecture and training of the BMI, which is trained on day-0 and consists of two components. The first is an autoencoder that provides a nonlinear map from neural signals to a low dimensional space of latent signals. The second is an EMG predictor that maps the latent signals onto muscle activity. The first point of our paper is that better BMI performance is achieved when the training of the AE is based on a loss function (see Eq 1) that includes not only the unsupervised neural reconstruction loss but also a supervised regression loss that quantifies the quality of EMG prediction. \nThe goal is to keep this BMI fixed so that the user only needs to adapt to it once. However, the performance of a fixed BMI will deteriorate because of neural turnover (see blue data on Fig 3A). A way to maintain performance in the face of changing neural signals is to keep on retraining the interface (see red data on Fig 3A), but this is not a viable solution as it requires the user to keep on adapting to a new interface on an almost daily basis. We have expanded in the revised version of our paper on the problems caused by frequent BMI recalibration. To avoid this problem, our approach was to investigate interventions on the latent space representations to align and stabilize the inputs to the EMG predictor without changing the AE. To this end, we explored three domain adaptation approaches and showed that the use of ADAN provides the most effective solution. \n\nQ: “AE to RNN to EMG is that the idea to compare vs. Domain adaptation via CCA/KLDM/ADAM. Of course a paper can explore multiple ideas, but in this case the comparisons and controls for both are not adequate.”\n\nA: As explained above, we first described the architecture and training of the BMI. We trained and fixed this BMI on the data of day-0. In the loss function of Eq 1, used to train the AE on day-0, the index t from 0 to T labels the day-0 data. Each input to the BMI is an n-dimensional vector x of neural data. The performance of this BMI, kept fixed, quickly deteriorates due to neural turnover. We implemented three domain adaptation methods (CCA, KLDM, and ADAN) to stabilize the performance of the fixed BMI across subsequent days and identified which domain adaptation technique provides the most stability to a fixed BMI. ", "Q: “What are meaningful comparisons for all for the AE and DA portions? The AE part is strongly related to either to Kao 2017 or Pandarinath 2018 but nothing like that is compared. The domain adaptation part evokes data augmentation strategies of Sussillo 2016 but that is not compared.”\n\nA: The use of dimensionality reduction to design BMIs is not novel. 
This approach has been triggered by the observation of a high degree of correlation in the activity of individual M1 neurons, the expectation of obtaining a more compact and denoised representation of neural activity, and the convenience of using a low-dimensional signal as input to the EMG predictor to simplify its training and avoid overfitting. Most of the earlier work used linear dimensionality reduction methods such as PCA and FA to obtain the latent variables (e.g. Yu et al., 2009; Shenoy et al., 2013; Sadtler et al., 2014; Gallego et al., 2017a). More recently, the use of AEs as a nonlinear dimensionality reduction method has been investigated by Pandarinath et al. (2018). Our contribution here, as discussed above, is to combine unsupervised and supervised goals in the AE training, and to show that this results in improved BMI performance. We have clarified this point in the revised version of the paper.\nThere is not much previous work on the question of stabilizing BMI performance against neural turnover. In Sussillo 2016, the authors used months of recordings to train a BMI and to make it robust to neural changes. Here, we seek to find methods that allow us to stabilize the BMI using single session data. Data augmentation strategies and domain adaptation techniques are inherently different but complementary approaches. \n\nQ:” If I were reviewing this manuscript for a biological journal a rigorous standard would be online BMI results in two animals. Is there a reason why this isn’t the standard for ICLR? Is the idea that non-biological journals / conferences are adequate to vet new ideas before really putting them to the test in a biological journal? The manuscript is concerned with the vexing problem of BMI stability of time, which seems to be a problem where online testing in two animals would be critical. (I appreciate this is a broader topic relevant to the BMI field beyond just this paper, but it would be helpful to get some thinking on this in the rebuttal).”\n\nA: The question of online vs offline comparison is an important one. Online BMI performance is not perfectly correlated with offline decoder accuracy; this is actually the reason that offline comparison is important in this case. In an online evaluation of BMI performance, the user’s ability to adapt at an unknown rate and to an unknown extent to an imperfect BMI obscures the performance improvements obtained with domain adaptation. We do agree that additional experiments, both open and closed loop, with additional animals and involving additional tasks, are required to fully validate our results; we are currently in the process of developing and running these experiments. A vetting of the computational ideas within the machine learning community is invaluable before implementing closed-loop experiments. \n\nQ: “This paper needs to be pretty seriously clarified. The mathematical notation is not adequate to the job, nor is the motivation for the varied methodology. I cannot tell if the subscript is for time or for day. Also, what is the difference between z_0 vs. Z_0? I do not know what exactly is going into the AE or the ADAN.\n\nA: The subscript t in Eq 1 labels the time ordered data points that constitute the training set on day-0. This has been further clarified in the paper. As indicated in the original version, day-k is the notation adopted to indicate successive days following day-0. 
Capital letters are used to represent matrices: X0 and Xk are n by T matrices that aggregate the neural data for day-0 and day-k, respectively; while Z0 and Zk are l by 8τ matrices (as explained in the paper) that aggregate the latent activity for day-0 and day-k, respectively. Lowercase letters represent vectors, with x referring to neural activity, z to latent activity, and y to muscle activity. The inputs to the BMI and the ADAN are neural recordings, as shown in Fig 1. \n\nQ: “The neural networks are not described to a point where one could reproduce this work. The notation for handling time is inadequate. E.g. despite repeated readings I cannot tell how time is handled in the auto-encoder, e.g. nxt is vectorized vs feeding n-sized vector one time step at a time?”\n\nA: As indicated in Eq 1, the training of the AE was guided by a loss function that is a sum over the loss for each input vector xt. Each n-dimensional input vector, labeled by t, is individually fed to the BMI to obtain the corresponding neural activity reconstruction and muscle activity prediction. The corresponding loss is calculated and combined additively to compute a cumulative gradient used for batch training. ", "Q: “What is the point of the latent representation in the AE if it is just fed to an LSTM? Is it to compare to not using it?”\n\nA: The high degree of correlation in the activity of M1 neurons makes the use of dimensionality reduction methods a common practice in BMI design. Expected advantages are the denoising of the neural recordings and the possibility of using a more compact representation of neural activity as input to the predictor of muscle activity. Here we proposed an approach to AE training that results in a latent space based muscle predictor that performs as well as a muscle predictor based directly on the high dimensional neural activity. \n\nQ: “Page 3, how precisely is time handled in the AE? If time is just vectorized, how can one get real-time readouts? In general there is not enough detail to understand what is implemented in the AE. If only one time slice is entered into AE, then it seems clear AE won’t be very good because one desires latent representation of the dynamics, not single time slices.”\n\nA: The AE is trained using batch training on a loss function that accumulates in additive form the loss associated with each individual training example. The AE is trained to produce an optimal reconstruction of the neural vectors xt regardless of their temporal order. After training, real-time readouts for the latent activity can be obtained for every successively presented neural activity input. If the goal is to track dynamics in latent space, latent trajectories can be constructed by concatenating the latent representations in the appropriate order, as when using CCA. The other two domain adaptation techniques that we implemented, KLDM and ADAN, do not require this concatenation, as they focus on matching the statistics of the latent variables as opposed to their dynamics. \n\nQ: “How big is the LSTM used to generate the EMG?”\n\nA: The number of units in the LSTM layer is equal to the number of the recorded muscles (m = 14). We have added this information in the revised paper. \n\nQ: “It seems like a the most relevant baseline is to compare to the data perturbation strategies in Sussillo 2016. 
If you have an LSTM already up and running to predict EMG, this seems very doable.”\n\nA: The data perturbations applied in Sussillo 2016 include electrode dropping or changing the average firing rate of individual neurons. The actual neural turnover results in complex transformations of the latent activity that cannot be simulated using these simple data perturbations. The high degree of correlation in the firing activity of M1 neurons implies that dropping individual channels should not have a large impact on the BMI’s performance, an expectation supported by our own preliminary analysis, unpublished. Moreover, our analysis shows that an alignment method that only compensates for translation (i.e. by changing individual average firing rates) and scaling perturbations, would fail to implement the complex transformations needed to match latent distributions (see Fig S2).\n\nQ: “Page 4, “We then use an ADAN to align either the distribution of latent variables or the distributions of the residuals of the reconstructed neural data, the latter a proxy for the alignment of the neural latent variables.” This sentence is not adequate to explain the concepts of the various distributions, the residuals of reconstructed neural data (where do the residuals come from?), and why is one a proxy for the other. Please expand this sentence into a few sentences, if necessary to define these concepts for the naive reader.”\n\nA: Initially, we used an adversarial network to directly match the PDF of the latent variables across days; the resulting improvements in EMG prediction with this approach to domain adaptation were comparable to those obtained with KLDM and CCA. Next, we implemented ADAN to match PDFs in neural space; not the PDF of the reconstructed neural activity but that of the L1 norm of their residuals (the difference between actual and reconstructed neural activity). In the ADAN architecture, the discriminator is an AE that receives as inputs the neural activity of day-0 and day-k and outputs their reconstructions. The residuals follow from the difference between the discriminator’s inputs and outputs. The ADAN aligns the day-k statistics to those of day-0 by minimizing the distance between the PDFs of the respective scalar residuals. This procedure results in the alignment of the neural recordings and consequently their latent representation across days (Fig 3 C-E). We have expanded this sentence and clarified these points in the revised version of our paper. ", "Q: “Page 5, What parameters are minimized in equation (2)? Please expand the top sentence of page 5.”\n\nA: The KLD of Eq 2 is always positive, and reaches its minimum at zero when the mean and covariance matrix for day-k match those for day-0. The KLDM method thus aligns the latent statistics of day-k to those of day-0 by implementing a transformation that equalizes the first and second moments of these complex PDFs. To minimize the KLD, we used a map from neural activity to latent activity implemented by a network with the same architecture as the encoder section of the BMI’s AE. This network was initialized with the weights obtained after training the BMI’s AE on the day-0 data. Training proceeded on inputs provided by day-k recordings of neural activity. The loss function on the latent variables was as shown in Eq 2. We have added these clarifications to the revised version of our paper. 
\n\nQ: “Page 6, top - “In contrast, when the EMG predictor is trained simultaneously with the AE…” Do you mean there is again a loss function defined by both EMG prediction and AE and summed, and then backprop is used to train both in an end-to-end fashion? Please clarify.”\n\nA: When the EMG predictor is trained simultaneously with the AE, the AE is trained using the joint loss function of Eq 1. The alternative, is to independently train the AE in a purely unsupervised manner, not including the second term in Eq 1. We have clarified this point in the revised version of our paper. \n\nQ: “Page 8, How do the AE results and architecture fit into the EMG reconstruction “BMI” results? Is that all decoding results are first put through the AE -> LSTM -> EMG pipeline? I.e. your BMI is neural data -> AE -> LSTM -> EMG? If so, then how does the ADAN / CCA and KLDM fit in? You first run those three DA algorithms and then pipe it through the BMI?”\n\nA: The BMI consists of two computational modules: the neural AE and the EMG predictor. These were trained using only the data of day-0 and remained fixed afterward. Once the BMI is trained, the fixed encoder part of the AE maps neural activity into latent activity. Both CCA and KLDM were designed to match latent variables across days. Therefore, when using these methods, we first obtained the latent variables Zk of subsequent days using the encoder part of the fixed AE of the BMI, then applied CCA and KLDM to align these latent variables to those of day-0, and finally used the fixed EMG predictor to predict EMGs from the aligned latent variables. In contrast, ADAN was designed to match high-dimensional neural recordings across days. Therefore, when using ADAN, first we aligned the neural recordings Xk of a subsequent day to those of a day-0 and then used the aligned vectors of neural activity as inputs to the fixed BMI. We have clarified this aspect of domain adaptation in the revised version of our paper.\n\nQ: “Page 8, How can you say that the BMI improvement of 6% is meaningful to the BMI user if you did not test the BMI online?”\n\nA: We agree. We have removed this sentence from the revised version of our paper. \n\nWe thank the reviewer again for the feedback and comments, which have improved the manuscript. \n", "This contribution describes a novel approach for implanted brain-machine interface in order to address calibration problem and covariate shift. A latent representation is extracted from SEEG signals and is the input of a LTSM trained to predict muscle activity. To mitigate the variation of neural activities across days, the authors compare a CCA approach, a Kullback-Leibler divergence minimization and a novel adversarial approach called ADAN.\n\nThe authors evaluate their approach on 16-days recording of neurons from the motor cortex of rhesus monkey, along with EMG recording of corresponding the arm and hand. The results show that the domain adaptation from the first recording is best handled with the proposed adversarial scheme. Compared to CCA-based and KL-based approaches, the ADAN scheme is able to significantly improve the EMG prediction, requiring a relatively small calibration dataset.\n\nThe individual variability in day-to-day brain signal is difficult to harness and this work offers an interesting approach to address this problem. The contributions are well described, the limitation of CCA and KL are convincing and are supported by the experimental results. 
The important work on the figure help to provide a good understanding of the benefit of this approach.\n\nSome parts could be improved. The results of Fig. 2B to investigate the role of latent variables extracted from the trained autoencoder are not clear, the simultaneous training could be better explained. As the authors claimed that their method allows to make an unsupervised alignment neural recording, independently of the task, an experiment on another dataset could enforce this claim.", "Here the authors define a BMI that uses an autoencoder -> LSTM -> EMG. The authors then address the problem of data drift in BMI and describe a number of domain adaptation algorithms from simple (CCA to more complex ADAN) to help ameliorate it. There are a lot of extremely interesting ideas in this paper, but the paper is not particularly well written, and the overall effect to me was confusion. What problem is being solved here? Are we describing using latent variables (AE approach) for BMI? Are we discussing domain adaptation, i.e. handling the nonstationarity that so plagues BMI and array data? Clearly the issue of stability is being addressed but how? A number of different approaches are described from creating a pre-execution calibration routine whereby trials on the given day are used to calibrate to an already trained BMI (e.g. required for CCA) to putting data into an adversarial network trained on data from earlier days. Are we instead attempting to show that a single BMI can be used across multiple days?\n\n\nThis paper is extremely interesting but suffers from lack of focus, rigor, and clarity. \nFocus : \nAE to RNN to EMG is that the idea to compare vs. Domain adaptation via CCA/KLDM/ADAM. \nOf course a paper can explore multiple ideas, but in this case the comparisons and controls for both are not adequate.\n\nRigor: \nWhat are meaningful comparisons for all for the AE and DA portions? The AE part is strongly related to either to Kao 2017 or Pandarinath 2018 but nothing like that is compared. The domain adaptation part evokes data augmentation strategies of Sussillo 2016 but that is not compared.\n \nIf I were reviewing this manuscript for a biological journal a rigorous standard would be online BMI results in two animals. Is there a reason why this isn’t the standard for ICLR? Is the idea that non-biological journals / conferences are adequate to vet new ideas before really putting them to the test in a biological journal? The manuscript is concerned with the vexing problem of BMI stability of time, which seems to be a problem where online testing in two animals would be critical. (I appreciate this is a broader topic relevant to the BMI field beyond just this paper, but it would be helpful to get some thinking on this in the rebuttal).\n\nClarity : \nThis paper needs to be pretty seriously clarified. The mathematical notation is not adequate to the job, nor is the motivation for the varied methodology. I cannot tell if the subscript is for time or for day. Also, what is the difference between z_0 vs. Z_0? I do not know what exactly is going into the AE or the ADAN.\n\nThe neural networks are not described to a point where one could reproduce this work. The notation for handling time is inadequate. E.g. despite repeated readings I cannot tell how time is handled in the auto-encoder, e.g. nxt is vectorized vs feeding n-sized vector one time step at a time?\n\n\nQuestions \n\nWhat is the point of the latent representation in the AE if it is just fed to an LSTM? 
Is it to compare to not using it? \n\nPage 3, how precisely is time handled in the AE? If time is just vectorized, how can one get real-time readouts? In general there is not enough detail to understand what is implemented in the AE. If only one time slice is entered into AE, then it seems clear AE won’t be very good because one desires latent representation of the dynamics, not single time slices.\n\nHow big is the LSTM used to generate the EMG?\n\nIt seems like a the most relevant baseline is to compare to the data perturbation strategies in Sussillo 2016. If you have an LSTM already up and running to predict EMG, this seems very doable.\n\nPage 4, “We then use an ADAN to align either the distribution of latent variables or the distributions of the residuals of the reconstructed neural data, the latter a proxy for the alignment of the neural latent variables.” This sentence is not adequate to explain the concepts of the various distributions, the residuals of reconstructed neural data (where do the residuals come from?), and why is one a proxy for the other. Please expand this sentence into a few sentences, if necessary to define these concepts for the naive reader. \n\nPage 5, What parameters are minimized in equation (2)? Please expand the top sentence of page 5.\n\nPage 6, top - “In contrast, when the EMG predictor is trained simultaneously with the AE…” Do you mean there is again a loss function defined by both EMG prediction and AE and summed, and then backprop is used to train both in an end-to-end fashion? Please clarify.\n\nPage 8, How do the AE results and architecture fit into the EMG reconstruction “BMI” results? Is that all decoding results are first put through the AE -> LSTM -> EMG pipeline? I.e. your BMI is neural data -> AE -> LSTM -> EMG? If so, then how does the ADAN / CCA and KLDM fit in? You first run those three DA algorithms and then pipe it through the BMI? \n\nPage 8, How can you say that the BMI improvement of 6% is meaningful to the BMI user if you did not test the BMI online?\n", "The paper considers invasive BMIs and studies various ways to avoid daily recalibration due to changes in the brain signals. \nWhile I like the paper and studied methods -- using adverserial domain adaptation is interesting to use in this context --, I think that the authors oversell a bit. \nThe problem of nonstationarity rsp. stability is an old one in non-invasive BCIs (shenoy et al JNE 2006 was among the first) and a large number of prior methods have been defined to robustify feature spaces, to project to stable subspaces etc. Clearly no Gans at that time. The least the authors could do is to make reference to this literature, some methods may even apply also for the invasive data of the paper.\nWhile the authors did not clearly say that they present an offline analysis; one method, the GAN, gets 6% better results then the competitors. I am not sure whether this is practically relevant in an online setting. But this needs to be clearly discussed in the paper and put into perspective to avoid wrong impression. Only an online study would be convincing. \n\nOverall, I think the paper could be accepted, the experiments are nice, the data is interesting, if it is appropriately toned down (avoiding statements about having done something for the first time) and properly references to prior work are given. It is an interesting application domain. I additionally recommend releasing the data upon acceptance. \n\n", "Thank you for your comment. 
The paper that you mentioned in your comment (which we will refer to it as the LFADS paper) is an important work that we are very familiar with. We cite this work in our paper, referring to https://www.biorxiv.org/content/early/2017/06/20/152884, the version posted to bioRxiv in June 2017. We will update the citation to the now published version when our paper is revised. \nThe LFADS paper introduces a denoising auto-encoder to extract low-dimensional latent variables from neural recordings. These latent variables are then used as inputs to a predictor of movement related variables. This aspect of our work is indeed similar to theirs. However, our goal is not to extract a latent space from the neural, a project that several BMI groups have already contributed to. Our goal is to obtain a statistically stable latent representation, one that can provide stable inputs to a fixed predictor of movement related variables. \nThe need for the stabilization of the latent space arises because of continuous changes in the recording device. To address this issue, we introduced an adversarial domain adaptation technique that matches the probability distribution of the residuals of the reconstructed neural recordings across days, as a proxy to matching the probability distribution of the latent variables. To our knowledge, this is the first implementation of an adversarial domain adaptation method to successfully align latent variables across days and achieve stable predictions of movement related variables. The LFADS paper does not propose a method to compensate for the daily changes in the neural recordings; they deal with this instability by continuing to train the interface over as long as five months. As we write in our paper when describing Related Work: “Pandarinath et al. (2017) extract a single latent space from concatenating neural recordings over five months, and show that a predictor of movement kinematics based on these latent signals is reliable across all the recorded sessions.” As we discuss in our paper, this is not a viable solution in practical applications, because it requires the user to continuously adapt to a changing interface.\n\nTo summarize, there is no overlap in the design of the interface between our manuscript and the LFADS paper beyond the fact that in both papers an auto-encoder is used to reduce the dimensionality of the recorded neural signals. The idea of extracting a latent space through dimensionality reduction is not new. In recent years it has become well established that there is a high degree of correlation across neural signals recorded from the primary motor cortex (M1); the practice of extracting a low-dimensional latent space from neural recordings has thus become quite common among many BMI groups. The LFADS paper is a recent and important publication on this topic, joining a relatively large number of preceding studies, such as Yu et al., 2009; Shenoy et al., 2013; Sadtler et al., 2014; Gallego et al., 2017a (see the full citations in our manuscript). \nAlthough both the LFADS paper and our paper achieve dimensionality reduction through an auto-encoder, the architectures of the two networks are very different. LFADS is a sequential, variational auto-encoder with two RNNs, based on the assumption that spikes are samples from a Poisson process. In contrast, we have implemented a simple feed-forward auto-encoder architecture. We thus emphasize the statistics of the latent variables as opposed to their dynamics. 
\nYet another difference between the interface presented here and LFADS is that we simultaneously train the neural auto-encoder and the network that predicts movement related variables from latent variables. LFADS uses a sequential approach of first extracting the latent space followed by training a movement predictor. We provide evidence in our paper that the supervision of the dimensionality reduction step through the integration of relevant movement information leads to a latent representation that better captures neural variability related to movement intent and therefore significantly improves the performance of the interface. \n", "Hi:\n\nThe idea of using an encoder and decoder and a small latent spaces to design a brain machine interface has already been done by this paper:\n\nInferring single-trial neural population dynamics using sequential auto-encoders\nhttps://www.nature.com/articles/s41592-018-0109-9\n\nCould you elaborate the difference between yours and their paper?\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 9, 5, 7, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, -1, -1 ]
[ "S1gZuAoJ3X", "iclr_2019_Hyx6Bi0qYm", "SJljUupdoQ", "BJlvxzmJaQ", "S1gZuAoJ3X", "S1gZuAoJ3X", "S1gZuAoJ3X", "S1gZuAoJ3X", "iclr_2019_Hyx6Bi0qYm", "iclr_2019_Hyx6Bi0qYm", "iclr_2019_Hyx6Bi0qYm", "SyeDE0WYqX", "iclr_2019_Hyx6Bi0qYm" ]
iclr_2019_HyxAfnA5tm
Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL
Humans and animals can learn complex predictive models that allow them to accurately and reliably reason about real-world phenomena, and they can adapt such models extremely quickly in the face of unexpected changes. Deep neural network models allow us to represent very complex functions, but lack this capacity for rapid online adaptation. The goal in this paper is to develop a method for continual online learning from an incoming stream of data, using deep neural network models. We formulate an online learning procedure that uses stochastic gradient descent to update model parameters, and an expectation maximization algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models to handle non-stationary task distributions. This allows for all models to be adapted as necessary, with new models instantiated for task changes and old models recalled when previously seen tasks are encountered again. Furthermore, we observe that meta-learning can be used to meta-train a model such that this direct online adaptation with SGD is effective, which is otherwise not the case for large function approximators. We apply our method to model-based reinforcement learning, where adapting the predictive model is critical for control; we demonstrate that our online learning via meta-learning algorithm outperforms alternative prior methods, and enables effective continuous adaptation in non-stationary task distributions such as varying terrains, motor failures, and unexpected disturbances.
accepted-poster-papers
The reviewers appreciated this contribution, particularly its ability to tackle nonstationary domains which are common in real-world tasks.
train
[ "Syx2MNoERQ", "HylGj6OV0m", "HJg0BTu4C7", "Hkg_xYA3n7", "rygBgFLi3m", "BJgYeRFP2X" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review. We added an appendix to the paper that addresses your question, and we have also added this information (as well as illustrative videos) to the project website. To illustrate results with less meta-training data, we have evaluated the test-time performance of models from various meta-training iterations, showing that performance does indeed improve with more meta-training data. To clarify, this statement of performance improving with meta-training data is different from the statement in the text regarding online updating the meta-learner not improving results. We meant that incorporating the EM weight updates during meta-training did not improve results, but we did not mean that additional meta-learning was harmful. We added text at the end of section 5 in the updated paper to reduce the potential for confusion. \n\nRegarding the amount of data used, the number of datapoints used during metatraining on each of the agents in our experiments is 382,000: This is 12 iterations of alternating model training plus on-policy rollouts, where each iteration collects data from 16 different environment settings, and each setting consists of 2000 datapoints. At a simulator timestep of 0.02sec/step, this sample complexity converts to around only 2 hours of real-world data.", "Thank you for your review. We have corrected the typo in the test in the middle of Algorithm 1: it should have been argmin instead of argmax. We have also clarified the caption of figure 3 to indicate that the two plots simply illustrate two different runs for the indicated agent, showing that our method chooses to assign only a single task variable even throughout runs including changing terrain slopes.", "Thank you for your review. We have corrected the typo in both places of Algorithm 1: it should indeed have been the opposite inequality sign, and argmin instead of argmax. \n\nWe definitely agree with your comment that a mixture model that grows with time can sometimes be considered quite heavyweight. This is precisely where we plan to focus the efforts of our future work, by introducing a refreshing scheme where an offline retraining step can periodically condense the mixture model into fewer components (perhaps in a batch-mode training setting, so not all past data needs to be saved). We are also interested in goals such as making this mixture only as big as the agent “needs” it to be, allowing for better and more compressed sharing and organization of seen data. The performance of this current method makes us hopeful and excited to work toward such future work in this area.", "The authors proposed a new method to learn streaming online updates for neural networks with meta-learning and applied it to multi-task reinforcement learning. Model-agnostic meta-learning is used to learn the initial weight and task distribution is learned with the Chinese restaurant process. It sounds like an interesting idea and practical for RL. Extensive experiments show the effectiveness of the proposed method.\n\nThe authors said that online updating the meta-learner did not improve the results, which is a bit surprised. Also how many data are meta-trained is not clearly described in the paper. Maybe the authors can compare the results with less data for meta-training.\n", "The paper presents a nonparametric mixture model of neural networks for learning in an environment with a nonstationary distribution. The problem setup includes having access to only a few \"modes\" of the distribution. 
Training of the initial model occurs with MAML, and distributional changes during test/operation are handled by a combination of online adaptation and creations of new mixture components when necessary. The mixture is nonparametric and modeled with a CRP. The application considered in the paper is RL, and the experiments compare proposed model against baselines that do not utilize meta-learning (achieved in the proposed method with MAML), and baselines which utilize only a single model component.\n\nI thought the combination of meta-learning and a CRP was a neat way to tackle the problem of modeling and learning the \"modes\" of a nonstationary distribution. Applications in other domains would have been nice, but the presented results in RL sufficiently demonstrate the benefits of the proposed method.\n\n* Questions/Comments\n\nFigure 3 left vs right?\n\nIs the test in the middle of Algorithm 1 correct?", "The paper introduces a method for online adaptation of a model that is expected to adapt to changes in the environment the model models. The method is based on a mixture model, where new models are spawned using a Chinese restaurant process, and where each newly spawned model starts with weights that have been trained using meta-learning to quickly adapt to new dynamics. The method is demonstrated on model-based RL for a few simple benchmarks.\n\nThe proposed method is well justified, clearly presented, and the experimental results are convincing. The paper is generally clear and well written. The method is clearly most useful for situations where the environment suddenly changes, which is relevant in some real-world problems. As a drawback, using a mixture model (that also grows with time) for such modelling can be considered quite heavy in some situations. Nevertheless, the idea of combining a spawning process with meta-learned priors is neat, and clearly works well.\n\nMinor comments:\n- Algorithm 1: is the inequality correct, and is T* supposed to be an argmin instead of argmax?" ]
[ -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "Hkg_xYA3n7", "rygBgFLi3m", "BJgYeRFP2X", "iclr_2019_HyxAfnA5tm", "iclr_2019_HyxAfnA5tm", "iclr_2019_HyxAfnA5tm" ]
iclr_2019_HyxCxhRcY7
Deep Anomaly Detection with Outlier Exposure
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
accepted-poster-papers
The paper proposes a new fine-tuning method for improving the performance of existing anomaly detectors. The reviewers and the AC note the limited novelty beyond the existing literature. This is quite a borderline paper, but the AC decided to recommend acceptance because the comprehensive experimental results (though still based on empirical observation) are interesting.
train
[ "ByehoqpU1E", "Bye5XYYT37", "SygTG-v2Am", "r1xNRARj0X", "ryg5L0no0Q", "HkeMqN7qCm", "H1g7uFXf07", "HklI1QZGA7", "Skl4qxWN67", "rJl4Tt7MC7", "HJgkrz7_pX", "H1lavG27nQ", "B1xOlGHljQ", "rkgIONKsq7", "ByxTzKbkqX" ]
[ "public", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public", "official_reviewer", "author", "public", "author" ]
[ "Thanks! Of course, we will be happy to cite your work on the first occasion.", "This paper describes how a deep neural network can be fine-tuned to perform outlier detection in addition to its primary objective. For classification, the fine-tuning objective encourages out-of-distribution samples to have a uniform distribution over all class labels. For density estimation, the objective encourages out-of-distribution samples to be ranked as less probability than in-distribution samples. On a variety of image and text datasets, this additional fine-tuning step results in a network that does much better at outlier detection than a naive baseline, sometimes approaching perfect AUROC.\n\nThe biggest weakness in this paper is the assumption that we have access to out-of-distribution data, and that we will encounter data from that same distribution in the future. For the typical anomaly detection setting, we expect that anomalies could look like almost anything. For example, in network intrusion detection (a common application of anomaly detection), future attacks are likely to have different characteristics than past attacks, but will still look unusual in some way. The challenge is to define \"normal\" behavior in a way that captures the full range of normal while excluding \"unusual\" examples. This topic has been studied for decades.\n\nThus, I would not classify this paper as an anomaly detection paper. Instead, it's defining a new task and evaluating performance on that task. The empirical results demonstrate that the optimization succeeds in optimizing the objective it was given. What's missing is the justification for this problem setting -- when is it the case that we need to detect outliers *and* have access to the distribution over outliers?\n\n--------\n\nUPDATE AFTER RESPONSE PERIOD:\n\nMy initial read of this paper was incorrect -- the authors do indeed separate the outlier distribution used to train the detector from the outlier distribution used for evaluation. Much of these details are in Appendix A; I suggest that the authors move some of this earlier or more heavily reference Appendix A when describing the methods and introducing the results. I am not well-read in the other work in this area, but this looks like a nice advance.\n\nBased on my read of the related work section (again, having not studied the other papers), it looks like this work fills a slightly different niche from some previous work. In particular, OE is unlikely to be adversarially robust. So this might be a poor choice for finding anomalies that represent malicious behavior (e.g., network intrusion detection, adversarial examples, etc.), but good for finding natural examples from a different distribution (e.g., data entry errors).\n\nMy main remaining reservation is that this work is still at the stage of empirical observation -- I hope that future work (by these authors or others) can investigate the assumptions necessary for this method to work, and even characterize how well we should expect it to work. Without a framework for understanding generalization in this context, we may see a proliferation of heuristics that succeed on benchmarks without developing the underlying principles.", "Thank you for your reply and good questions. Due to space limitations, in Appendix A we list the results for each D_out^test distribution, and we give the full descriptions of the D_out^test distributions. 
The test D_out^test distributions consist in Gaussian Noise, Rademacher Noise, Bernoulli Noise, Blobs, Icons-50 (emojis), Textures, Places365, LSUN, ImageNet (the 800 ImageNet-1K classes not in Tiny ImageNet and not in D_out^OE), CIFAR-10/100, and Chars74K anomalies. For NLP we use SNLI, IMDB, Multi30K, WMT16, Yelp, and various subsets of the English Web Treebank. We therefore test our models with approximately double the number of D_out^test image distributions compared to prior work; we also test in NLP, unlike nearly all other recent work in OOD detection.\n\nYour read is correct that 80 Million Tiny Images are used for SVHN, CIFAR-10, CIFAR-100; these images are too low-resolution (32x32x3) for Tiny ImageNet, so for that we use ImageNet-22K (minus ImageNet-1K). For NLP, we use WikiText-2, but in the discussion we note using the Project Gutenberg corpus also works, so the dataset choice has flexibility even in NLP. Thanks to your comment, we will add a link to Appendix A in the caption of Table 1 for the full results and make the interactions between D_in, D_out^OE, D_out^test clearer.\n\nAs for accuracy, the fixed coefficient of lambda = 0.5 for the vision experiments leads to slight degradation when tuning with OE, like other approaches. For example, a vanilla CIFAR-10 Wide ResNet has 5.16% classification error, while with OE tuning it has 5.27% error. This degradation can be further reduced by training from scratch (Appendix E). We will look into ``negative transfer.'' Thank you.", "Thanks for the clarification. Yes, it makes a big difference that the \"training\" outliers are from different datasets than the \"test\" outliers -- I'm happy I was mistaken in my previous understanding.\n\nI'll study the paper some more, but after quickly rereading some key sections, I don't understand exactly what combinations of D_in, D_out^OE, and D_out^test were used, e.g., in Table 1. From the row labels, I can figure out what D_in is. From Section 4.2.2., it sounds like you used 80 Million Tiny Images as D_out^OE for SVHN, CIFAR10, and CIFAR-100. Was ImageNet-22K used as D_out^OE for Tiny ImageNet? The text is ambiguous. And then, what was used for D_out^test?\n\nIn general, the effectiveness of these techniques will rely heavily on the nature of the datasets used. With some combinations, we should expect OE to reduce the accuracy of anomaly detection, much like the \"negative transfer\" phenomenon in transfer learning. I didn't see much discussion of this point, but perhaps I missed it.", "This is an interesting segmentation task, and we will be sure to try Outlier Exposure on this task in the future. We intend to include a citation to your work after submission deanonymization.", "Reviewer 1, we have added more emphasis that the Outlier Exposure data and the test sets are disjoint in the revised draft.", "Thank you for your thoughtful feedback and willingness to question the premises behind submitted works.\n\nWe believe there may be a misunderstanding of our experimental setup. In the setup you describe, out-of-distribution data is available during training, and data from that same distribution is encountered at test time. We agree that such a setup has issues, and we intentionally avoided that setup. We do not assume access to the test distribution, but this confusion is understandable as many recent OOD papers assume this. 
In particular, we took great care to keep datasets disjoint in our experiments, and the only out-of-distribution dataset examples we use at training time come from the realistic, diverse Outlier Exposure datasets described in Section 4.2.2. We ensured that these OE datasets were disjoint with the out-of-distribution data evaluated at test time. For instance, in the NLP experiments, we used WikiText-2 as the OE dataset, and none of the NLP OOD datasets evaluated on at test time were collected from Wikipedia.\n\nOne of our contributions is that training on the OE datasets which we identified leads to generalization to novel forms of anomalies. Concretely, with SVHN as the in-distribution, we found that OE improved OOD detection on the Icons-50 dataset of emojis, even though the OE dataset consisted in natural images and did not contain any emojis. Thus, training with OE does help with generalization to new anomalies, and it does not simply teach the detector a particular, narrow distribution of outliers.", "Thank you for your detailed feedback.\n\n1.\nLee et al. [2] propose training against GAN-generated out-of-distribution data, and they use a confidence loss for anomaly detection with multiclass classification as the original task. By contrast, we consider a broader range of original tasks, including density estimation and natural language settings, and we show how to incorporate Outlier Exposure for each scenario.\n\nAnother crucial difference between our work and [2] is that we demonstrate that realistic, diverse data is significantly more effective than GAN-generated examples, and is scalable to complex, high-resolution data that everyday GANs have difficulty generating. Likewise, GANs are currently not capable of generating high-quality text. Finally, Lee et al. [2] state in Appendix B, “For each out-of-distribution dataset, we randomly select 1,000 images for tuning the penalty parameter β, mini-batch size and learning rate.” Thus some of their hyperparameters are tuned on OOD test data, which is not the case in our work. Hence, our work is in a different setting from Lee et al. [2]. In our paper we show how to use real data to _consistently_ improve detection in a host of settings. In essence, our some of our multiclass experiments are built on the seminal work of Lee et al. [2] by using real and diverse data. \n\nOur primary contribution is that real data from a diverse source can be used to train anomaly detectors which generalize to anomalies from new and different distributions, so there is no need to use GANs or assume access to the test distributions. We demonstrate this in a variety of settings, showing that this technique is general and consistently boosts performance.\n\nSecondary sources of novelty in our paper include the margin loss for OOD detection with density estimators, the cross entropy OOD score instead of MSP (Appendix G), posterior rescaling for confidence calibration in the presence of OOD data (Appendix C), and our observation that a cutting-edge CIFAR-10 density model unexpectedly assigns higher density to SVHN images than to CIFAR-10 images. The latter contribution forms the basis for a concurrent submission by different authors, which can be found here: https://openreview.net/forum?id=H1xwNhCcYm Since that work is concurrent, it does not detract from our paper’s novelty. 
We should note that we not only reveal that density estimates are unreasonable on out-of-distribution points, but we also ameliorate it with Outlier Exposure.\n\n2.\nWe have added a section comparing to ODIN [3] (Appendix I). We will incorporate the results into the main paper if you think we should.\n\n3.\nThank you for pointing out these related works. The works of [4] and [5] are ECCV 2018 and NIPS 2018 papers, both of which are for conferences occurring after the submission deadline of this paper. We have a working implementation of [4] and will incorporate it into the paper it once we are sure that it is a faithful reproduction. We think that our comparisons on multiclass OOD detection (including the baseline [1], Lee et al. [2], DeVries et al., Liang et al. [3]), density estimation OOD detection, and confidence calibration on vision and NLP datasets are sufficient to demonstrate our method.\n\nEdit: Thank you very much for taking the time to read this response and update your score.", "I have read authors' reply. In response to authors' comprehensive reply and feedback. I upgrade my score to 6. As authors mentioned, the extension to density estimators is an original novelty of this paper, but I still have some concern that OE loss for classification is basically the same as [2]. I think it is better to clarify this in the draft. \n\nSummary===\n\nThis paper proposes a new fine-tuning method for improving the performance of existing anomaly detectors. The main idea is additionally optimizing the “Outlier Exposure (OE)” loss on outlier dataset. Specifically, for softmax classifier, the authors set the OE loss to the KL divergence loss between posterior distribution and uniform distribution. For density estimator, they set the OE loss to a margin ranking loss. The proposed method improves the detection performance of baseline methods on various vision and NLP datasets. While the research topic of this paper is interesting, I recommend rejections because I have concerns about novelty and the experimental results.\n\nDetailed comments ===\n\n1. OE loss for softmax classifier\n\nFor softmax classifier, the OE loss forces the posterior distribution to become uniform distribution on outlier dataset. I think this loss function is very similar to a confidence loss (equation 2) proposed in [2]: Lee et al., 2017 [2] also proposed the loss function minimizing the KL divergence between posterior distribution and uniform distribution on out-of-distribution, and evaluated the effects of it on \"unseen\" out-of-distribution (see Table 1 of [2]). Could the authors clarify the difference with the confidence loss in [2], and compare the performance with it? Without that, I feel that the novelty of this paper is not significant.\n\n2. More comparison with baselines\n\nThe authors said that they didn’t compare the performance with simple inference methods like ODIN [3] since ODIN tunes the hyper-parameters using data from (tested) out-of-distribution. However, I think that the authors can compare the performance with ODIN by tuning the hyper-parameters of it on outlier dataset which is used for training OE loss. Could the authors provide more experimental results by comparing the performance with ODIN? \n\n3. Related work\n\nI would appreciate if the authors can survey and compare more baselines such as [4] and [5]. \n\n[1] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. International Conference on Learning Representations, 2017. 
\n[2] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. International Conference on Learning Representations, 2018. \n[3] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. International Conference on Learning Representations, 2018. \n[4] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. In NIPS, 2018.\n[5] Apoorv Vyas, Nataraj Jammalamadaka, Xia Zhu, Dipankar Das, Bharat Kaul, and Theodore L. Willke. Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers, In ECCV, 2018.", "Thank you for your careful analysis of our paper.\n\nWe have uploaded a new draft incorporating your suggestions.\n\nTo improve clarity, we have added two paragraphs to the preface of Section 4 summarizing our experiments and novel discoveries. We found it difficult to import several specific details from individual experiments to Section 3, so we opted to instead improve the clarity of several experimental sections as they appear, and to improve the clarity of the discussion section. We also restructured the calibration section.\n\nRegarding your second and third points, we added the reference for the original GAN paper, and we added definitions for BPP, BPC, and BPW to Section 4.4. Thank you for these suggestions.\n\nThe baseline numbers in Table 3 differ from those in Table 1 because in Table 3 we use the training regime from the publicly available implementation of DeVries et al. to create an accurate comparison. The difference is that they use a different learning schedule than the models from Table 1.", "Hi, \nwe have a complementary out-of-distribution detection paper currently under review:\nhttps://openreview.net/forum?id=H1x1noAqKX\nWe detect OOD samples on a pixel level. We also find that using outliers during training is effective for detecting OOD samples.\n", "This paper proposes fine-tuning an out-of-distribution detector using an Outlier Exposure (OE) dataset. The novelty is in proposing a model-specific rather than dataset-specific fine-tuning. Their modifications are referred to as Outlier Exposure. OE includes the choice of an OE dataset for fine-tuning and a regularization term evaluated on the OE dataset. It is a comprehensive study that explores multiple datasets and improves dataset-specific baselines.\n\nSuggestions and clarification requests:\n- The structure of the writing does not clearly present the novel aspects of the paper as opposed to the previous works. I suggest moving the details of model-specific OE regularization terms to section 3 and review the details of the baseline models. Then present the other set of novelties in proposing OE datasets in a new section before presenting the results. Clearly presenting two sets of novelties in this work and then the results. If constrained in space, I suggest squeezing the discussion, conclusion, and 4.1.\n- In the related work section Radford et al., 2016 is references when mentioning GAN. Why not the original reference for GAN?\n- Maybe define BPP, BPC, and BPW in the paragraphs on PixelCNN++ and language modeling or add a reference.\n- Numbers in Table 3 column MSP should match the numbers in Table 1, right? Or am I missing something?", "Thank you for bringing your NIPS 2018 paper to our attention. 
We think decoupling uncertainty into \"data\" and \"OOD\" uncertainty is an interesting avenue, and we will cite your work accordingly.", "Hello! :) Interesting work. You may find our work on predictive uncertainty estimation to be relevant relevant.\n\nhttps://arxiv.org/pdf/1802.10501.pdf \n", "In Section 4.3 we observe that a cutting-edge CIFAR-10 density model unexpectedly assigns higher density to SVHN images than to CIFAR-10 images.\nAs it happens, a concurrent submission is based on this observation. Their work can be found here: https://openreview.net/forum?id=H1xwNhCcYm" ]
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, 8, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 5, -1, -1, 4, -1, -1, -1 ]
[ "ryg5L0no0Q", "iclr_2019_HyxCxhRcY7", "r1xNRARj0X", "HkeMqN7qCm", "HJgkrz7_pX", "Bye5XYYT37", "Bye5XYYT37", "Skl4qxWN67", "iclr_2019_HyxCxhRcY7", "H1lavG27nQ", "iclr_2019_HyxCxhRcY7", "iclr_2019_HyxCxhRcY7", "rkgIONKsq7", "iclr_2019_HyxCxhRcY7", "iclr_2019_HyxCxhRcY7" ]
iclr_2019_HyxGB2AcY7
Contingency-Aware Exploration in Reinforcement Learning
This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning. To investigate this question, we consider an instantiation of this hypothesis evaluated on the Arcade Learning Environment (ALE). In this study, we develop an attentive dynamics model (ADM) that discovers controllable elements of the observations, which are often associated with the location of the character in Atari games. The ADM is trained in a self-supervised fashion to predict the actions taken by the agent. The learned contingency information is used as a part of the state representation for exploration purposes. We demonstrate that combining an actor-critic algorithm with count-based exploration using our representation achieves impressive results on a set of Atari games that are notoriously challenging due to sparse rewards. For example, we report a state-of-the-art score of >11,000 points on Montezuma's Revenge without using expert demonstrations, explicit high-level information (e.g., RAM states), or supervisory data. Our experiments confirm that contingency-awareness is indeed an extremely powerful concept for tackling exploration problems in reinforcement learning and opens up interesting research questions for further investigation.
accepted-poster-papers
The paper addresses the challenging and important problem of exploration in sparse-reward settings. The authors propose a novel use of contingency awareness, i.e., the agent's understanding of the environment features that are under its direct control, in combination with a count-based approach to exploration. The model is trained using an inverse dynamics model and an attention mechanism and is shown to be able to identify the controllable character. The resulting exploration approach achieves strong empirical results compared to alternative count-based exploration techniques. The reviewers note that the novel approach has potential for opening up potentially fruitful directions for follow-up research. The obtained strong empirical results are another strong indication of the value of the proposed idea. The reviewers mention several potential weaknesses. First, while the proposed idea is general, the specific implementation seems targeted specifically towards Atari games. While Atari is a popular benchmark domain, this raises questions as to whether insights can be more generally applied. Second, several questions were raised regarding the motivation for some of the presented modeling choices (e.g., loss terms) as well as their impact on the empirical results. Ablation studies were recommended as a step to resolving these questions. Reviewer 3 questioned whether the learned state representation could be directly used as an additional input to the agent, and if it would improve performance. Finally, several related works were suggested that should be included in the discussion of related work. The authors carefully addressed the issues raised by the reviewers, running additional comparisons and adding to the original empirical insights. Several issues of clarity were resolved in the paper and in the discussion. Reviewer 3 engaged with the authors and confirmed that they are satisfied with the resulting submission. The AC judges that the suggestions of reviewer 1 have been addressed to a satisfactory level. A remaining issue regarding results reporting was raised anonymously towards the end of the review period, and the AC encourages the authors to address this issue in their camera-ready version.
train
[ "SygtznNglE", "HkgYEAahJN", "H1eXJyTACX", "rJlI1eLA0Q", "H1ejPD2tnm", "BkxNyUCqR7", "BkeVt5D9Am", "rJecqfd9Cm", "Hkxsdfd5Cm", "HyelEz_qC7", "SyeF5cv5AQ", "SygATEK7pQ", "HJljoJPp27" ]
[ "author", "public", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comment. You are correct that the reported performance of DDQN+ is achieved at 25M steps rather than at 50M steps. We will update the table in the final version of the paper. To the best of our knowledge, DDQN+ code is not publicly available and in our experience it was not trivial to replicate the results. On Montezuma’s revenge, very often many methods can reach the score of 2500 quite easily but afterwards they struggle to achieve higher scores (so running the algorithm longer usually doesn’t guarantee further improvement in scores). If the authors can share their code or report their results with more steps on Montezuma’s revenge, we are happy to include it in the table.\n\nConsidering that all of the baselines in Table 2 use frameskip of 4, reporting the number of frames (instead of number of steps) does not make a difference in the comparison. However, we will consider reporting the number of frames in the final version.", "Just a heads-up that you've overstated the training time for Bellemare et al.'s DDQN+ agent by a factor of 2. If you check their Figure 2 and the surrounding text, you'll see that it was only trained for 100m frames, or 25m \"environment timesteps\" in your terminology. In Table 2, you've stated that it was trained for 50m environment timesteps. With this in mind, if you compare the first quarter of your Figure 2 to theirs, it seems pretty dubious whether your agent is actually ahead.\n\nSide note: I think it would be better if you quoted training times with the multiplier of 4 throughout, as this is by-and-large the more common time scale used in the literature.", "Thank you for the clarifications", "Dear Reviewer 3,\n\nThank you very much for quickly and carefully going through our response and the updated draft.\n\nRegarding the description of \\tau, we will update it in the final version of paper. The caption of Table 4 shall now read: “For the four games where there is no change of high-level visual context (FREEWAY, FROSTBITE, QBERT and SEAQUEST), we do not include c in the state representation ψ(s), hence there is no \\tau.”\n\nRegarding Table 5, we note that A2C+CoEX(c) slightly differs from the vanilla A2C even on those games, as it has a decaying exploration bonus at each time step, whereas the vanilla A2C has no bonus reward at all. It can affect the agent’s behavior; for instance, a positive reward at every time step is known to incentivize the agent to survive longer.\n\nRegarding your questions about the new ablation study:\n\n1) This is our small mistake (sorry, Seaquest was added later), thank you for pointing this out. We have fixed this, which will appear in the final version.\n\n2) The mean score of 94 and 77 happens as a spike in the early stage of training, but the agent failed to retain the score, yielding almost zero mean reward afterwards (as shown in the plot). We will fix wordings accordingly.\n\nThe goal in the game of Venture is basically to navigate the world visiting many different rooms and destroy the enemy, and there is not much benefit going back to a previously explored room (more precisely, after clearing the room: i.e., killing enemies and picking up the score-items). Therefore, exploration with cumulative reward as extra state may not be beneficial. 
\n\n* More detailed answer: we would like to refer to our previous response on why the cumulative rewards may be useful as extra state information as it can potentially serve as important contextual change (e.g., picking up a key in Montezuma’s Revenge) that may incentivize the agent to revisit previously explored states (e.g., going to the door even if the corresponding state was previously explored without the key). However, in Venture, such revisiting behavior based on the change of cumulative rewards does not yield benefit due to the nature of the game.\n\n3) We have added the number of seeds in Figure 9 and Table 5, which will appear in the final version. Thanks for the suggestion!\n", "This paper investigates the problem of extracting a meaningful state representation to help with exploration in RL, when confronted to a sparse reward task. The core idea consists in identifying controllable (learned) features of the state, which in an Atari game for instance typically corresponds to the position of the player-controlled character / vehicle on the screen. Once this position is known (as x, y coordinates on a custom low-resolution grid), one can use existing count-based exploration mechanisms to encourage the agent to visit new positions (NB: in addition to the x, y coordinates, extra information is also used to disambiguate the states for counting purpose, namely the current score and the state’s cluster index obtained with a basic clustering scheme). To find the position, the algorithm trains one inverse dynamics model per x, y cell on the grid: each model tries to predict the action taken by the agent given two consecutive states, both represented by their feature map (at coordinate x, y) learned by a convolutional network applied to the pixel representation. The outputs of these inverse dynamics models are combined through an attention mechanism to output the final prediction for the action: the intuition is that the attention model will learn to focus on the grid cell with best predictive power (for a given state), which should correspond to where the controllable parts of the state are. Experiments on several Atari games (including Montezuma’s Revenge) indeed show that this mechanism is able to track the true agent’s coordinates (obtained from the RAM state) reasonably well. Using these coordinates for count-based exploration (in A2C) also yields significantly better results compared to vanilla A2C, and beats several previously proposed related techniques for exploration in sparse reward settings.\n\nThe topic being investigated here (hard-exploration tasks) is definitely very relevant to current RL research, and the proposed technique introduces some novel ideas to address it, notably the usage of an attention model combined with multiple inverse dynamics models so as to identify controllable features in the environment. The approach seems sound to me and is clearly explained. Combined with pretty good results on well known hard Atari games, I am leaning toward recommending acceptance at ICLR.\n\nI have a few significant concerns though, the first one being that the end result seems quite tailored to the specific Atari games of interest: trying to apply it to other tasks (or even just Atari games with different characteristics) may require significant changes (ex: the assumption that a single region of the screen is being controlled by the agent, the clustering to identify the various “rooms” of a game, and using the total score as a proxy to important state information). 
I do believe that some components are more general though (in particular the main new ideas in the paper), so this is not necessarily a major issue, but another example of application of these ideas to a different domain could have strengthened the submission.\n\nIn addition, even if experiments definitely investigate relevant aspects of the algorithm, I wish there had been an ablation study on the three components of the state representation used for counting (coordinates, cluster and reward). In particular it would be disappointing if similar results could be obtained with just the cluster and reward... even if I do not expect it to be the case, an empirical validation would have been welcome to be 100% sure.\n\nThe good results obtained here from exploration alone also beg the question whether this state representation could be useful to train the agent, by plugging it directly as input to the policy network (which by the way may not be trivial due to the co-training, but you get the idea). I realize that the focus of the paper is on exploration, and this is fine, but it seems to me a bit of a waste to build such a powerful state abstraction mechanism and not give the agent access to it. I was surprised that it was not at least mentioned in the discussion or conclusion. Note by the way that the conclusion says the agent “benefits from a compact, informative representation of the world”, which can be misinterpreted as using it in its policy.\n\nRegarding the algorithm itself, one potential limitation is the fact that the inverse dynamics models rely on a single time step to identify the action that was taken. This means that they can only identify controllable state features that change immediately after taking a given action. But if an action has “cascading” effects (the immediate state change causing further changes down the road), there may be other important state features that could be controlled (across longer timesteps), but the algorithm will ignore them (also, in a POMDP one may need to wait for more than one timestep to even observe a single change in the state). I suspect that a more generic variant of this idea, better accounting for long term effects of actions, may thus be needed in order to work optimally in more varied settings.\n\nFinally, I believe more papers deserve to be cited in the “Related Work” section. 
In particular, the idea of controlling features of the environment, (even if not specifically for exploration), has also been explored in (at least) the following papers:\n- “Reinforcement Learning with Unsupervised Auxiliary tasks” (Jaderberg et al, 2017)\n- “Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning” (Dilokthanakul et al, 2017)\n- “Independently Controllable Factors” (Thomas et al, 2017)\n- “Disentangling Controllable and Uncontrollable Factors of Variation by Interacting with the World” (Sawada, 2018)\nRelying on the position of the agent on the screen to drive exploration in Atari games has also been used in: “Deep Curiosity Search: Intra-Life Exploration Improves Performance on Challenging Deep Reinforcement Learning Problems” (Stanton & Clune, 2018)\n\nOther remarks:\n- Please share the code if possible\n- In the Introduction, the sentence “it is still an open question on how to construct an optimal representation for exploration” seems to repeat “there is an ongoing open question about the most effective way of using neural network representations for exploration” => I wonder if one was supposed to replace the other?\n- On p.2, last line containing citations: Pathak et al should be in the parentheses\n- Please explicitly refer to Fig. 1 (Right) in 3.1\n- On p.4, three lines above eq. 5, there is a hat{alpha} that should probably be hat{a}\n- Is the left hand side L in eq. 5 the same as L^inv in Alg. 1? If so please use the same notations\n- “privious” work in 3.2\n- In 3.2 please briefly explain what psi is going to be. It is a bit confusing to have it appear “out of nowhere“, with no details on how it is constructed.\n- Please explain what the different shades mean in Fig. 2-3\n- In Table 2’s caption please add a reference for DQN-PixelCNN. Also what do the star and cross symbols mean next to the algorithms’ names?\n- “coule” at end of 4.6\n- The “Watson” citation is duplicated in references\n- Why are there games with no tau in Table 4? Is it because there was no such clustering on these games? (if yes, that was not clear in the paper). And how was tau chosen for other games? (in particular I want to make sure the RAM state was not used to optimize it)\n\nUpdate 2018-11-23: I am reducing my rating to 5 (from 6) due to the absence of author response regarding a potential revision addressing my comments/questions as well as those from other reviewers\n\nUpdate 2018-11-27: I am increasing my rating to 7 (from 5) after the authors responded to reviewers' comments and uploaded a revised version of the paper", "Sorry, I was not aware of the rebuttal period extension. Thank you for the detailed response and updated revision, I will update my review rating accordingly.\n\nRegarding the (lack of) generality of the proposed method, I do agree that at high level similar ideas could probably be used in different settings, however this remains hypothetical until actually verified empirically (that's what I meant by \"another example of application of these ideas to a different domain could have strengthened the submission\").\n\nAs far as the last point is concerned (tau), after quickly browsing through the changes in the new revision I didn't see mentioned in the text that some games were not using the clustering scheme. Please make sure it's clear (it should probably be at least in the caption of Table 4). 
If I understand correctly this also means that for these 4 games, methods A2C and A2C+CoEX(c) in Table 5 are actually the same and the differences only come from re-running the experiments (in that case maybe using the same numbers, e.g. those from A2C, could avoid some confusion).\n\nAbout the new content in the revised version:\n\n1) On p.16 (last paragraph), \"on these two games\" should be \"on these three games\". You also claim that \"full ADM worked best\", but that is not the case on Seaquest.\n\n2) On p.17, you claim that \"the variants without contingent regions (...) [gave] almost no improvement over the A2C baseline\", mentioning \"Montezuma's Revenge and Venture\" as examples: however in Venture both variants (scores 94 & 77) improve on A2C (score 0). It's also interesting to see how removing the reward from psi in Venture helps reach a much better score, do you have any idea why? (maybe it somehow has to do with how scoring works in this game?)\n\n3) Please mention the number of seeds in Table 5's caption.", "\nDear Reviewer 3,\n\nWe appreciate your positive, constructive, and detailed feedback. Our impression was that the rebuttal deadline is extended until November 26 per emails sent from the PCs. We apologize for not submitting the response earlier, as we have been using the extra time from the three-day extension of the revision period to prepare the best version of our response. Below we answer questions and address the concerns mentioned in the review. Please take a look at the revised draft for minor corrections and more related work. Please let us know if this addresses your points; we are happy to provide additional responses/information upon request.\n\n[Specificity of domain]\nOur experiments focus on 2D Atari games as they are popular in the RL community; however, the proposed high-level ideas are more general. We also briefly describe how our method can be extended to address your points.\n\n > Regarding applicability to different (e.g., non-Atari) environments: The idea of contingency awareness is applicable to continuous control problems as well, e.g., environments with continuous actions and image observations (e.g. rendering of 3D physics-based fully-observable environments from camera, such as AntMaze [Frans et al., ICLR 2018 / Nachum et al., NeurIPS 2018]). In such domains we can still discover controllable aspects out of observations via an attention mechanism by exploiting the correlation between actions and pixels, and then apply a similar exploration technique for the agent.\n\n > Regarding the assumption that a single region of the screen is being controlled by the agent: To deal with multiple controllable entities in the environment, one can extend our ADM with multiple attention heads, which could identify and track multiple controllable entities. 
In this case we could enrich the state representation for exploration to include information about multiple objects.\n\n > Regarding the clustering assumption: we used clustering to identify the context information (e.g., “rooms”), but one can alternatively use different methods to obtain such information, e.g., autoencoder-based distributed representation, and concatenate with the contingent-region information for improving exploration in sparse-reward problems.\n\n > Regarding using the total score as a proxy to important state information: In environments with sparse rewards it may be natural to assume that collecting a non-zero reward may indicate an important change of context or environmental information (e.g., obtaining a key in Montezuma’s Revenge). The addition of total score as extra state information improved the performance for Montezuma’s revenge. However, for other games, our method was still able to achieve high performance without such total score information. (Please see our ablative studies for details.) We will deemphasize the importance of this component in the final version. Thank you for your insightful comments.\n", "\n\n\nDear Reviewer 1, \n\nThank you for the constructive and positive feedback. Please have a look at the revised draft for ablation studies and other improvements. We are happy to provide additional information upon request.\n\n\n[Extra Loss Terms of ADM]\n\n>> Why not include an entropy regularization loss for policy?\nWe agree on the importance of entropy regularization for policy optimization. In fact, in our submission, the standard entropy regularization term H(pi(a|s)) was already included in policy training (we used the default regularization weight 0.01) --- please see Appendix A for details. We have revised the description to make it clearer.\n\n>> How is the second issue (= distribution shift & non i.i.d. training data) mitigated?\nOur goal is to make the ADM model generalize to unseen trajectories. However, if the model is trained only on the trajectories obtained by the current policy, there is a significant risk of overfitting. To prevent this we incorporate different forms of regularization, including attention entropy regularization and policy entropy regularization. We empirically find that this helps the model generalize better. In Appendix E we have included a concrete example on Freeway illustrating the positive impact of additional regularization terms in preventing overfitting.\n\nHowever, to address this issue more directly, we believe one can potentially incorporate a replay buffer of previous trajectories to optimize the ADM model on off-policy data, or one can train the ADM based on random exploration. We leave this to future work. That being said, we did not observe serious issues with on-line training of the ADM model in our experiments. \n\n>> Ablation Study of ADM.\nWe first note that the proposed ADM loss function worked very well on the 8 Atari games considered. That said, there might be other combinations of training objectives that can also work well. Upon your suggestion we have included ablation experiments in Appendix E to study the effect of ADM loss terms. Additional loss terms help to attain better performance and stability of ADM. In environments where the consequence of actions is easily predictable (e.g., Seaquest) the additional regularization may not be necessary. 
In more difficult games the additional loss terms improve the stability and the generalization of ADM.\n\n[Cell Loss Confusion]\nThere was a typo on the cell-wise cross-entropy loss. It was fixed to p(\\hat{a} | e) in the revision. Thank you for pointing it out.\n\n[State Representation]\nWe have added a small comment on what \\psi(s) consists of. We assumed that the construction of \\psi(s) can be thought of as an implementation detail in a more general perspective, to simply keep Section 3.2 as concise as possible.\n\n[Plots]\nThe x-axis denotes the environment step (100M steps = 400M frames due to the frameskip of 4), and the y-axis denotes the mean reward over recent 40 episodes for each individual run (shown in light curves). The learning curve (shown in dark) is obtained by averaging over 3 random seeds.\n\n(To be continued in part 2)\n\n", "\n(Continued from part 1)\n\n\n[Results]\nWe conjecture that the performance drop on Montezuma’s Revenge is mainly due to the instability of the A2C algorithm when it encounters large nonstationary exploration bonus rewards. However in our preliminary experiments, when a stronger and more stable base RL algorithm is used (e.g., PPO), we observe very stable results without such a performance drop. More specifically, using PPO+CoEX on Montezuma’s Revenge we achieve a score >11,000 averaged over 3 runs at 250M environment steps. The performance seems to keep improving as the number of steps increases, whereas the vanilla PPO achieves a score of <100. This suggests that such a high performance is not due to the use of PPO alone. We report the trend (score vs #steps) below:\n\nTest score, # of environmental steps\n-------------------------------------------\n5,066 at 100M steps (= 0.4B frames)\n8,015 at 150M steps (= 0.6B frames)\n10,108 at 200M steps (= 0.8B frames)\n11,108 at 250M steps (= 1B frames)\n(Plot) The corresponding learning curve is available at the supplementary web page: http://goo.gl/sNM3ir \n\nTo the best of our knowledge this result is above (or equal to) the state-of-the-art performance in Montezuma’s Revenge without using any explicit high-level information such as RAM states (as in SmartHash [Tang et al., NIPS 2018] or any expert demonstrations (e.g. DQfD [Hester et al., 2017]), when compared with work published to date. We will incorporate more comprehensive experiments with PPO and revise the paper for the final version.\n\nIn PrivateEye, we observe the instability of performance mainly due to the trick of clipping reward within the range [-1, 1], which is a standard used in DQN and A2C to deal with different scales of environment rewards. Specifically, PrivateEye has a negative raw reward (e.g. -1 at each time step) but the scale of positive and negative rewards are different (i.e., the scale of positive rewards is often much bigger than that of negative rewards). As a result, the agent actually increases the cumulative sum of “clipped” extrinsic rewards (which increases from around -500 to 0, which correspond to raw reward of approximately 3000 and 0 respectively), but the raw episode return drops as shown in Figure 2. Similar behaviors are also observed in (Bellemare et al. 2016).\n\n\n[Appendix (Algorithm 1&2)]\nWe have extended the description of loss functions and fixed notation issues as suggested by the reviewer. Regarding the question about the Algorithm 2 (clustering) it also makes sense to assign a frame to the closest cluster [Kulis & Jordan, 2012]. 
However, based on our experience we observe that there is no significant difference in terms of the agent’s end performance when we use the closest cluster. This is likely because we have chosen \\tau so that such a cluster is mostly unique and there would be only very little difference in room assignment. We will update the paper with the results with the algorithm assigning frames to the closest cluster in the final version.\n", "Dear Reviewer 2,\nThank you very much for your feedback. We are glad to hear that you find our work insightful and interesting. We have updated the draft to correct small errors and make the exposition of the paper clearer. Please let us know if you have additional comments. We are happy to provide additional information upon request.", "\n[Ablative Studies]\nWe conducted an ablation study on the state representation by exploring variants of A2C+CoEX without the predicted location information. We have added it in the Appendix F. To briefly summarize the result: as expected the variants without contingency-region information (especially the (c,R) baseline) perform much worse than the one with contingent region information. It is common for these variants to achieve almost no reward on Montezuma’s Revenge and Venture, where the reward is extremely sparse. This demonstrates that the contingent region information indeed plays an important role in count-based exploration.\n\nMethod | Freeway | Frostbite | Hero | Montezuma | PrivateEye | Qbert | Seaquest | Venture\nA2C | 7.2 | 1099 | 34352 | 12.5 | 574 | 19620 | 2401 | 0\nA2C+CoEX (c) | 10.7 | 1313 | 34269 | 14.7 | 2692 | 20942 | 1810 | 94\nA2C+CoEX (c; R) | 34.0 | 941 | 34046 | 9.2 | 5458 | 21587 | 2056 | 77\nA2C+CoEX (x; y; c) | 33.7 | 5066 | 36934 | 6558 | 5377 | 21130 | 1978 | 1429\nA2C+CoEX (x; y; c; R) | 34.0 | 4260 | 36827 | 6635 | 5316 |23962 | 5169 | 204\nTable 5: Summary of results for the ablation study: the maximum mean scores (averaged over 40 recent episodes) achieved over 100M environment steps of training.\n\n\n\n[Providing the policy with learned representation]\nWe first note that one can obtain a better function approximation by using this representation as an additional input, which is already claimed in (Bellemare et al., 2012). One easy way of providing learned contingency region information is to use it as an additional input to the policy and the value network. In our preliminary experiments this improved the performance only by a small margin, therefore we did not include those results for the clarity of the paper. We believe that taking advantage of contingent regions for policy learning could be more useful in a hierarchical RL setting or in combination with planning, which we plan to explore as a future work.\n\n\n[Long-term prediction of ADM]\nWe agree that one could improve ADM by taking multi-step transitions into consideration as suggested. We can consider extending an inverse-dynamics model to provide a window of state sequences that is a few steps wider and predict the action taken in the middle of the transition (e.g. given x_{t-3:t+2} predict a_t). This might be helpful on more complex environments, but it turns out that 1-step prediction works relatively well for the environments we experimented with. We plan to investigate the extension to multi-step prediction in a future work when dealing with more challenging environments.\n\n[Writing & Other Remarks]\nThanks for pointing out several typos and other suggestions on writing. 
We have fixed all of them as well as missing references, related work, etc. Regarding Table 2, there were unnecessary star and cross symbols used for denoting different steps which are now removed.\n\n[Choice of \\tau in clustering]\nThe games with no tau in Table 4 do not have c in the state representation because there is no change of high-level visual context (objects, layouts, etc.) in these games. We did not use RAM to tune the hyperparameter but chose a reasonable value of \\tau in the range [0.5, 0.8] based on visual inspection, such that it would give a sensible clustering result of observation samples collected across different visual contexts. One can tune this hyperparameter more extensively if given enough time/computational resources to find the best \\tau to reach the highest score in the game; however tuning of \\tau was not our primary concern.\n", "Summary:\n\nThe paper proposes the novel idea of using contingency awareness (i.e. the agent’s understanding of the environment dynamics, its perception that some aspects of the environment are under its control and ability to locate itself within the state space) to aid exploration in sparse-reward reinforcement learning tasks. They obtain great results on hard exploration Atari games and a new SOTA on Montezuma’s Revenge (compared to methods which are also not using any external data). They use an inverse dynamics model with attention, (trained with self-supervision) to predict the agent’s actions between consecutive states. This allows them to approximate the agent’s position in 2D environments, which is then used as part of the state representation to encourage efficient exploration. One of the main strengths of this method is the fact that it achieves good performance on challenging tasks without the expert demonstrations or environment simulators. I also liked the discussion part of the paper and the fact that it emphasizes some of the limitations and avenues for future work. \n\nPros:\nGood empirical results on challenging Atari tasks (including SOTA on Montezuma’s Revenge without extra supervision or information)\nTackles a long-standing problem in RL: efficient exploration in sparse reward environments\nNovel idea, which opens up new research directions\nComparison experiments with competitive baselines\n\nCons:\nThe choice of extra loss functions is not very well motivated \nSome parts of the paper are not very clear\n\nMain Comments:\nMotivation of Extra Loss Terms: It is not very clear how each of the losses (eq 5) will help mitigate all the issues mentioned in the paragraph above. I suggest providing more detailed explanations to motivate these choices. In particular, why are you not including an entropy regularization loss for the policy to mitigate the third problem identified? This has been previously shown to aid exploration. I also did not see how the second issue mentioned is mitigated by any of the proposed extra loss terms.\nRequest for Ablation Studies: It would be useful to gain a better understanding of how important is each of the losses used in equation 5, so I suggest doing some ablation studies.\nCell Loss Confusion: Last paragraph of section 3.1: is there a typo in the formulation of the per cell cross-entropy losses? Is alpha supposed to be the action a? Otherwise, this part is confusing, so please explain the reasoning and what supervision signal you used. \nState Representation: Section 3.2 can be improved by adding more details. 
For example, it is not explained at all what the function psi(s) contains and how it makes use of the estimated agent location. I would suggest moving some of the details in section 4.2 (such as the context representation and what psi contains) earlier in the text (perhaps in section 3.2). \n\n\nMinor Comments:\nPlots: It would be helpful to give more details about the plots. I suggest labeling the axes. Is the x-axis number of frames, steps or episodes? How many runs are used to compute the mean? What do the light and dark colors represent? What smoothing process did you use to obtain these curves if any? Figure 2, why is there such a large drop in performance on Montezuma’s Revenge after 80M? Something similar seems to happen in PrivateEye, but much earlier in training and the agent never recovers. \nTables: I would suggest reporting results in the tables for more than 3 seeds given that these algorithms tend to have rather high variance. Or at least, provide the values for the variance. \nAppendix A, Algorithm 1: I believe this can be written more clearly. In particular, it would be good to specify the loss functions that you are optimizing. There seems to be some mismatch between the notation of the losses in the algorithm and the paper. It would also help to define alpha, c, psi etc. \nFootnote on page 4: you may consider using a different variable instead of c_t to avoid confusion with c (used to refer to the context representation). \nAppendix D, Algorithm 2: is there a reason for which you aren’t assigning the embeddings to the closest cluster instead of any cluster that is within some range? \n\n\nReferences:\nThe related work section on exploration and intrinsic motivation could be improved by adding more references such as:\nGregor et al. 2016, Variational Intrinsic Control\nAchiam et al. 2018, Variational Option Discovery Algorithms\nFu et al. 2017, EX2: Exploration with Exemplar Models for Deep Reinforcement Learning\nSukhbaatar et al. 2018, Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play\nEysenbach et al. 2018, Diversity is all you need: learning skills without a reward function\n\n\nFinal Decision:\n\nThis paper presents a novel way for efficiently exploring environments with sparse rewards. \nHowever, the authors use additional loss terms (to obtain these results) that are not very well motivated. I believe the paper can be improved by including some ablation experiments and making some parts of the paper more clear, so I would like to see these additions in next iterations of the paper. \n\nGiven the novelty, empirical results, and comparisons with competitive baselines, I am inclined to recommend it for acceptance. \n", "This paper introduces contingency-aware exploration by employing attentive dynamics model (ADM). ADM is learned in self supervised manner in an online fashion and only using pure observations as the agents policy is updated. This approach has clear advantages to earlier proposed count based techniques where agent's curiosity is incentivized for exploration. Proposed technique provides an important insight into how to approach such challenging tasks where the rewards are very sparse. Not only it achieves state of the art results with convincing empirical evidence but also authors make a good job of providing details of their specific modelling techniques for training challenges. 
They do a good job of comparing and contrasting the contingency-awareness by ADM with earlier proposed methods such as intrinsic motivation and self-supervised dynamics models. The overall exposition is clear, with well-explained results. The proposed idea raises interesting questions for future work." ]
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "HkgYEAahJN", "iclr_2019_HyxGB2AcY7", "rJlI1eLA0Q", "BkxNyUCqR7", "iclr_2019_HyxGB2AcY7", "SyeF5cv5AQ", "H1ejPD2tnm", "SygATEK7pQ", "SygATEK7pQ", "HJljoJPp27", "BkeVt5D9Am", "iclr_2019_HyxGB2AcY7", "iclr_2019_HyxGB2AcY7" ]
iclr_2019_HyxKIiAqYQ
Context-adaptive Entropy Model for End-to-end Optimized Image Compression
We propose a context-adaptive entropy model for use in end-to-end optimized image compression. Our model exploits two types of contexts, bit-consuming contexts and bit-free contexts, distinguished based upon whether additional bit allocation is required. Based on these contexts, we allow the model to more accurately estimate the distribution of each latent representation with a more generalized form of the approximation models, which accordingly leads to an enhanced compression performance. Based on the experimental results, the proposed method outperforms the traditional image codecs, such as BPG and JPEG2000, as well as other previous artificial-neural-network (ANN) based approaches, in terms of the peak signal-to-noise ratio (PSNR) and multi-scale structural similarity (MS-SSIM) index. The test code is publicly available at https://github.com/JooyoungLeeETRI/CA_Entropy_Model.
accepted-poster-papers
This paper proposes an algorithm for end-to-end image compression outperforming previously proposed ANN-based techniques and typical image compression standards like JPEG. Strengths - All reviewers agreed that this is a well-written paper, with careful analysis and results. Weaknesses - One of the points raised during the review process was that two very recent publications propose very similar algorithms. Since these works appeared very close to the ICLR paper submission deadline (within 30 days), the program committee decided to treat this as concurrent work. The authors also clarified the differences and similarities with prior work, and included additional experiments to clarify some of the concerns raised during the review process. Overall, the paper is a solid contribution towards improving image compression, and is therefore recommended to be accepted.
train
[ "SkeXD949h7", "rylzhKZ6hX", "Hye-vwFXAm", "SJgeanZnT7", "SyUH3Znpm", "HkxhG0ao6m", "Byg5HXrlpQ", "BJlVvzHgpX", "BkeegQoyp7", "HkxN269Ja7", "B1xbVRde2Q", "BJxhpCaC27", "SyxV-q_3cm", "SJxyx5u397", "BkekKKXHq7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public" ]
[ "Update:\nI have updated my review to mention that we should accept this work as being concurrent with the two papers that are discussed below.\n\nOriginal review:\nThis paper is very similar to two previously published papers (as pointed by David Minnen before the review period was opened):\n\"Learning a Code-Space Predictor by Exploiting Intra-Image-Dependencies\" (Klopp et al.) from BMVC 2018,\nand\n\"Joint Autoregressive and Hierarchical Priors for Learned Image Compression\" (Minnen et al.) from NIPS 2018.\n\nThe authors have already tried to address these similarities and have provided a list in their reply, and my summary of the differences is as follows (dear authors: please comment if I am misrepresenting what you said):\n(1) the context model is slightly different\n(2) parametric model for hyperprior vs non-parametric\n(3) this point is highly debatable to be considered as a difference because the distinction between using noisy outputs vs quantized outputs is a very tiny detail (any any practitioner would probably try both and test which works better). \n(4) this is not really a difference. The fact that you provide details about the method should be a default! I want all the papers I read to have enough details to be able to implement them.\n(5+) not relevant for the discussion here.\n\nIf the results were significantly different from previous work, these differences would indeed be interesting to discuss, but they didn't seem to change much vs. previously published work.\n\nIf the other papers didn't exist, this would be an excellent paper on its own. However, I think the overlap is definitely there and as you can see from the summary above, it's not really clear to me whether this should be an ICLR paper or not. I am on the fence because I would expect more from a paper to be accepted to this venue (i.e., more than an incremental update to an existing set of models, which have already been covered in two papers).\n\n", "The authors present their own take on a variational image compression model based on Ballé et al. (2018), with some interesting extensions/modifications:\n\n- The combination of an autoregressive and a hierarchical approach to define the prior, as in Klopp et al. (2018) and Minnen et al. (2018).\n- A simplified hyperprior, replacing the flow-based density model with a simpler Gaussian.\n- Breaking the strict separation of stochastic variables (denoted with a tilde, and used during training) and deterministic variables (denoted with a hat, used during evaluation), and instead conditioning some of the distributions on the quantized variables directly during training, in an effort to reduce potential training biases.\n\nThe paper is written in clear language, and generally well presented with a great attention to detail. It is unfortunate that, as noted in the comments above, two prior, peer-reviewed studies have already explored extensions of the prior by introducing an autoregressive component, obtaining similar results.\n\nAs far as I can see, this reduces the novelty of the present paper to the latter two modifications. The bit-free vs. bit-consuming terminology is simply another way of presenting the same concept. In my opinion, it is not sufficiently novel to consider acceptance of this work into the paper track at ICLR.\n\nThe authors should consider to build on their work further and consider publication at a later time, possibly highlighting the latter modifications. 
However, the paper would need to be rewritten with a different set of claims.\n\nUpdate: Incorporating the AC/PC decision to treat the paper as concurrent work.", "Thank you for pointing these things out.\n\nI also feel sympathetic towards AnonReviewer1's comments. However, in practice, we have to draw the line somewhere. Where this line is should ultimately be decided by the conference organizers.\n\nI think all the relevant information is now on the table. Let's hope that the AC/PC can provide some guidance.\n", "Dear reviewer 2,\n\nWe appreciate your comments. As discussed in the separate thread, the most important issue is to clarify the criteria on prior works. To deal with this issue, we officially requested the decision of AC/PCs, and currently we’re waiting for it.\n\nAlthough the current prior work issue depends on the chairs’ decision, we’d like to show one similar example, the case of DiscoGAN (https://arxiv.org/abs/1703.05192) and CycleGAN (https://arxiv.org/abs/1703.10593). Both were opened to public via arXiv March 2017 (15 days of time difference). In spite of their very similar concepts and structures, they were accepted by ICML2017 and ICCV2017, respectively.\n\nIn addition, from the technical point of view, our approach has clear difference from Klopp et al. (2018)'s approach in performance (our approach is more than 10% superior in compression performance) and paper composition (we provide more comprehensive and concrete models along with a detailed implementation and training methods.) We think that around 10% of performance improvement has value enough to be reported to public in the compression research field.\n\nRegards,\nauthors\n", "Dear reviewer 1,\nWe appreciate your comments. As discussed in the separate thread, the most important issue is to clarify the criteria on prior works. To deal with this issue, we officially requested the decision of AC/PCs, and currently we’re waiting for it.\n\nWe agree with most of your comments, but please understand that the reason we described the differences was to emphasize that our work was independently conducted. One more thing we’d like to emphasize is that our results were significantly superior from Klopp et al. (2018)'s approach. As we described in other postings, our work is more than 10% superior in compression performance.\n\nAs we have described in the response to reviewer 2’s comments, occasionally there exist concurrently conducted studies, such as DiscoGAN and CycleGAN. Although decision on prior works will be made by AC/PCs, we would also be grateful if you view our work from a generous perspective for mutual progress of technologies.\n\nRegards,\nauthors\n", "Dear reviewer 3,\n\n[Authors] First of all, we really appreciate your careful comments. Please understand our late response due to an additional experiments to resolve your concerns. Attached please find the revised version. We address your comments as below:\n\n______\nCons.\n* Differences with (Balle et al 2018) should be emphasized. It is not easy to see where the improvements come from: from the new entropy model or from modifications in the training phase (using discrete representations on the conditions).\n______\n[Authors] We agree with your comments and we also think it is a really important point that needs to be clarified. To clarify this, we conducted an additional experiments (appendix 6.2 in the revised version) on the network trained using the noisy representations as inputs of g_s and h_s. 
From the results, we found that the performance improvement comes from both the new context-adaptive entropy model and replacing the noisy representations with the discrete representations. Compared with (Balle et al 2018)’s approach, our network trained with the noisy representations is 7.2% better in compression performance, whereas the same trained with the discrete representations is 11.97% better.\n______\n\n* I am surprised that there is no discussion on the choice of the hyperparameter \\lambda: what are the optimal values in the experiments? Are the results varying a lot depending on the choice? Is there a strategy for an a priori choice?\n______\n[Authors] As you commented, \\lambda is a very important parameter for training, which determines which to focus on between rate and distortion. However, \\lambda is not an optimization target, but a given condition for optimization. Therefore, several networks were trained, each of which was trained with a specific value of \\lambda. In figure 5, illustrating the evaluation results, each point represents a result of one single network trained under a specific \\lambda, so one line of our approach represents results of nine trained networks. We described the range of \\lambda values in section 4.2, from 0.01 to 0.5. As the lambda increases, the gain of the bit amount side is increased, but the loss of the image quality side is also increased. The exact values that we used are 0.5, 0.4, 0.3, 0.2, 0.1, 0.06, 0.03, 0.017, and 0.01, in order from rate-centric condition to distortion-centric condition.\nTo clarify the purpose of using \\lambda, we have added more description about \\lambda in the revised version, before equation (2).\n______\n\n* Also is one dataset enough to draw conclusions on the proposed method? \n______\n[Authors] One dataset may not be enough for completely evaluating one method. However, the Kodak photo CD image set has served as a reference test set for many studies. We guess that the reason many studies have used this set is to make comparison between approaches easier, and to ensure the objectivity of the comparison results. Instead of adding more evaluation results over other image sets, we will add a URL link to our test code repository if publication is decided. Our methods could be evaluated over any kind of image sets with the test code. \n______\n\nEvaluation. \nAs a non expert in deep learning compression, I have a positive opinion on the paper but the paper seems more a fine tuning of the method of (Balle et al 2018). Therefore I am not convinced that the improvements are sufficiently innovative for publication at ICLR despite the promising experimental results. \n___\n[Authors] (Balle et al 2018) successfully captures spatial dependencies of natural images by estimating scales of representation, in an input-adaptive manner. To further remove the spatial dependency, we proposed a model that can sequentially predict each value (mean) of representations, as well as standard deviation values as in (Balle et al 2018). We believe that this autoregression using the two types of contexts is an essential component to achieve higher compression performance.\nIt has been just two years since two great papers, which became a basis of entropy model based image compression, were proposed by (Balle et al. 2017) and (Theis et al. 2017), and currently context utilization within latent space is at the very beginning phase. 
We believe that a variety of context utilization methods will be studied, and hope our work will serve as a stepping stone for future studies utilizing various types of bit-free and bit-consuming contexts.\n______\n\nSome details. \nTypos: the p1, the their p2 and p10, while whereas p3, and and figure 2 \n[Authors] We’ve fixed the typos. Thank you for pointing out.\np8: lower configurations, higher configurations, R-D configurations\n[Authors] We’ve changed the phrases to make them clear, as follows:\n\\lambda configurations for lower bit-rates, \\lambda configurations for higher bit-rates, \\lambda configurations\n\n\nThank you very much for your insightful comments again!\n\nRegards,\nAuthors", "We deeply appreciate your reply. We agree with your insightful comments and concerns. Regarding your concerns, one more thing we would like to add is as follows:\n\nImage coding using the context of latent space is now at the beginning phase, and various subsequent studies are expected to proceed. In these following studies, citing multiple papers may be a burden more or less. However, the three studies differ in perspectives on contexts, implementation details, training methods, and directions for future studies, so these differences would rather be a chance to provide richer technical evidences and insights for them. Since each of the three papers has its own pros and cons, we think that citation will be naturally decided by future studies.\n\nPlease refer to our previous postings for detailed differences between the papers.\n- https://openreview.net/forum?id=HyxKIiAqYQ&noteId=SyxV-q_3cm\n- https://openreview.net/forum?id=HyxKIiAqYQ&noteId=SJxyx5u397\n", "First of all, we really appreciate your quick reply. We believe that there’s an ICLR’s standard on the scope of the reviewer's guideline. In any case, we would appreciate it if you recognize that our work was conducted independently.\n\nIn addition, as mentioned in our previous posting (https://openreview.net/forum?id=HyxKIiAqYQ&noteId=SyxV-q_3cm), there are obvious differences between ours and Klopp et al.’s approach, in the following aspects:\n\n -\tIn terms of performance, there is a significant difference between ours and the Klopp et al.’s approach. In the Klopp et al.’s paper, comparison results are provided, only in terms of the MS-SSIM. Comparing the experimental results in the same environment (MS-SSIM over the Kodak set), our method is more than 10% superior (Ours: -13.93%, Klopp et al: -3.2%; when both are compared with Balle et al. (2018)’s approach).\n\n -\tIn terms of paper composition, we provide a concrete model that fully utilizes two contexts, a detailed implementation method, and verification results of the proposed model, whereas only basic mechanism of integrating the two contexts are provided by Klopp et al. (because they mainly focused on other points such as improving GDN and predicting distributions of latent variables using given surrounding variables.)\n\nTherefore, we think that our approach has enough value as an academic paper for readers and subsequent studies.\n", "I agree with AnonReviewer2 that this is a complicated issue. From a technical standpoint, this is a good paper, but we have the issue that was outlined multiple times w.r.t. considering this either \"concurrent work\" or not accepting it. 
If we are to consider it concurrent work, I wouldn't oppose this decision.\n\nI do agree that it's very unfortunate from a timing standpoint that most of the conversation is not focused on the technical aspects, which seem OK, but rather the issue of how to handle this paper, but sometimes good ideas come from multiple people roughly at the same time.\n\nThere's a question that we (the academic field) must ask ourselves and decide what is the correct course of action, and perhaps even provide guidance to reviewers with respect to this: what is the correct way to handle a situation like this? \n\nThe paragraph cited by the authors provides some guidance, but I don't think it's sufficient. I don't think the work itself could have been plagiarized in any way from Klopp et al given the time frame, so we don't have to worry about that aspect. The work itself is non-trivial, so we can definitely expect that it took a considerable amount of time (several months), yet it ended up being quite similar. Do we (academics) want to reward this work regardless of the similarities by accepting this paper? If yes, I would really like to see this being clarified in the review policy.\n\nOn one hand if we accept this, future researchers will need to figure out which of the three papers to cite when referring to this type of idea (it's unlikely that all three papers will be cited). On the other hand, if we don't accept it, but we agree that the idea was developed independently, then are we doing the authors a disservice?\n\nI am very frustrated because the answer to any of the questions above is not at all clear to me.\n\nI would really appreciate if the AC/PC provided some insight here.", "Thank you for alerting us to this section of the reviewer guidelines.\n\nFirst of all, there was no intention to say that the paper was plagiarizing other work in any way. I believe that the authors did good research, the clarity and thoroughness of the paper demonstrates this.\n\nHowever, Klopp's work had been peer-reviewed and published prior to the deadline (the conference date was 3 September). I believe the section in the reviewer guidelines you pointed out (in particular, the 30-day grace period) is specifically applied to arXiv and other pre-print sites which do not include peer review.\n\nSo, while you are right that Minnen's work should be disregarded (because the peer reviewed paper isn't yet available), Klopp's paper should still count as prior work, as it had been peer reviewed well before the ICLR deadline.\n\nUltimately, I think it would be best for the area/program chair to decide this. In case they decide Klopp's paper should count as concurrent work, I would be in favor of accepting the paper, as it is a high quality paper besides the novelty aspect.", "Summary. The paper is an improvement over (Balle et al 2018) for end-to-end image compression using deep neural networks. It relies on a generalized entropy model and some modifications in the training algorithm. Experimentals results on the Kodak PhotoCD dataset show improvements over the BPG format in terms of the peak signal-to-noise ratio (PSNR). It is not said whether the code will be made available.\n\nPros. \n* Deep image compression is an active field of research of interest for ICLR. The paper is a step forward w.r.t. (Balle et al 2018). \n* The paper is well written. \n* Experimental results are promising.\n\nCons.\n* Differences with (Balle et al 2018) should be emphasized. 
It is not easy to see where the improvements come from: from the new entropy model or from modifications in the training phase (using discrete representations on the conditions).\n* I am surprised that there is no discussion on the choice of the hyperparameter \\lambda: what are the optimal values in the experiments? Are the results varying a lot depending on the choice? Is there a strategy for an a priori choice? \n* Also is one dataset enough to draw conclusions on the proposed method?\n\nEvaluation.\nAs a non expert in deep learning compression, I have a positive opinion on the paper but the paper seems more a fine tuning of the method of (Balle et al 2018). Therefore I am not convinced that the improvements are sufficiently innovative for publication at ICLR despite the promising experimental results.\n\nSome details.\nTypos: the p1, the their p2 and p10, while whereas p3, and and figure 2 \np8: lower configurations, higher configurations, R-D configurations\n", "Dear reviewers, area chair, and program chairs,\n\nFirst of all, thank you for the careful reviews and for giving this rebuttal chance. Before replying to each review comment, we would like to give our opinion on two reviewers’ comments regarding prior works.\n\nReviewer 1 and 2 posted comments that there exist two similar prior works as follows:\n-\tKlopp et al.'s paper (http://bmvc2018.org/contents/papers/0491.pdf), published on September 3rd\n-\tMinnen et al.’s paper (https://arxiv.org/abs/1809.02736), uploaded to arXiv on September 8th\n\nHowever, in our opinion, our work should be recognized as a concurrent and independent work, because related standards are documented in the ICLR Reviewer Guidelines (https://iclr.cc/Conferences/2019/Reviewer_Guidelines), leaving out a long period of time for designing, testing and validating works, which took more than half a year.\n\nSpecifically, according to the ICLR reviewer guidelines, it is clearly stated that the papers opened less than 30 days prior to the ICLR deadline should not be considered as prior works, as follows:\n\n-\t\"While we encourage reviewers to apply the reasonable standards of the relevant community in considering what does and does constitute prior work, the following minimum standards will be enforced: no paper will be considered prior work if it appeared on arxiv, or another online venue, less than 30 days prior to the ICLR deadline.\"\n\nKlopp et al.'s paper was published on September 3rd, and Minnen et al.'s paper was uploaded to arXiv on September 8th. Because both appeared less than 30 days prior to the ICLR deadline, they cannot be considered as prior works in ICLR’s review process.\n\nAs noted in the detailed reviews given by reviewer 1 and 2, the main problem on our paper they pointed out is existence of prior works, and we believe that this cause low scores from the reviewers. We respect the comments from the reviewers and the two papers, but we think that the \"minimum standard” on the prior works should be a rule for all papers submitted into ICLR. Therefore, we believe that our paper should be revaluated without considering the two papers as prior works.\n\nNumerous studies are being conducted on the same subject at the same time, and there may be occasional ambiguities in precedence relation. We believed that ICLR, as one of the top conferences, has achieved fairness by providing a clear review basis on these issues. We hope that our paper will not become one exceptional case. 
We would deeply appreciate it if you get the related discussion started.\n", "We appreciate your comments. Last month, we did not notice that two papers were open to public because we were concentrating on editing work of our manuscript. Thank you for introducing the two excellent papers. I am surprised and also pleased with the fact that similar ideas for entropy models have been already proposed. We have read the two papers carefully, and the followings are our response to comments:\n\nIn the case of Klopp et al.'s paper (http://bmvc2018.org/contents/papers/0491.pdf), they focused on improving GDN and predicting distributions of latent variables using given surrounding variables, and they achieved noticeable results on those topics. In addition, their supplementary material (http://bmvc2018.org/contents/supplementary/pdf/0491_supp.pdf) includes an integrated model with a similar structure to that of our method, along with a simple description (section 8.2.3 and figure 8 of the supplementary material). Experimental results of the integrated model, in terms of the MS-SSIM, are represented in the main text of the paper. However, as the experimental results show, the degree of performance improvement over the existing approaches decreases as the bpp increases. Our approach (and Minnen et al.’s approach) shows that as bpp increases, the performance gain increases, which shows that our approach takes full advantage of the hierarchical prior (or hyperprior). The paper by Klopp et al. only introduced basic mechanism of integration using two contexts, but did not describe the details of the integration method. On the other hand, in the scope of the integration, we provide a concrete model that fully utilizes two contexts, a detailed implementation method, and verification results of the proposed model through performance comparisons.\n\nRegarding Minnen et al.’s approach (https://arxiv.org/abs/1809.02736), we agree that the goals, structures, and results of their method are very similar to ours. It is an hornor to us that we have privilege of comparing our work with the excellent paper adopted for NIPS. We hope to publish our paper in ICLR to support Minnen et al.’s work and thereby help presenting one promising direction of research for ANN-based image compression. The differences between Minnen et al.’s approach and our work, which could be considered, are as follows:\n\n1) One of the most important parts of our work is to propose a model that can predict probability distributions of latent representation based on two types of contexts, context consuming bits and context not using it. From this point of view, our model clearly distinguishes between context extraction and distribution prediction, and this allows our model to provide extensibility. For example, if we want to use multi-scale context information, which is one of our candidates for further work, we only need to replace the extraction model with new one. On the other hand, Context Model of Minnen et al.’s approach includes both the context extraction and the transform function. Regarding the hyperpriors, the compositions of the Hyper Encoder / Decoder and the utilization of the results are also very similar to each other, but in our work, the information obtained through the h_a (Hyper Encoder) / h_s (Hyper Decoder) is strictly defined as one type of the contexts, and this allows our model to be a framework that can accommodate various contexts in the future.\n\n2) We used a parametric model (Gaussian dist.) 
for an entropy model of hyperprior z, whereas the non-parametric model is used in Minnen et al.’s approach. There are advantages and disadvantages to both non-parametric and parametric models. However, if there is no significant difference in performance between the two cases, we think that the advantage of the parametric model, which is easy to implement and cost-efficient, can be highlighted. As stated by the commenter, our approach and Minnen et al.’s approach show very similar performance results, which demonstrates that our simple parametric model provides sufficient performance for modeling the hyperprior z.", "(Cont'd)\n\n3) Balle et al. (2018)’s approach (https://arxiv.org/abs/1802.01436) uses noisy representations y_tilde and z_tilde for training to deal with the discontinuities caused by quantization for the entire model, including inputs to the transforms, g_s (Decoder) and h_s (Hyper Decoder) as well as for inputs to entropy model functions (continuous model functions convolved with an uniform dist. function). Minnen et al.’s approach seems to deal with the discontinuities in the same manner as in Balle et al. (2018)’s approach. Therefore, we think that model expressions in Minnen et al.’s paper represent the target model for test, because they use the “hat” symbols not only for conditions, but also for inputs. It seems that all these quantized representations are replaced by noisy representations, for training, as Balle et al. (2018) did. On the other hand, as clearly noted in our manuscript, we only use noisy representations as inputs to the entropy model functions, whereas quantized representations are used as inputs to the transforms for training. We made such a model because it prevents mismatches between training and testing and provides better performance. If necessary, we will provide the quantitative results on the impact of the two types of transform inputs for training. The representation flows for training, for our work and Minnen et al.’s approach, are different from each other as below:\n\n * Our approach (when training):\n x -> [g_a;Encoder] -> y -> [q] -> y_hat -> [g_s;Decoder] -> x’\n y_hat -> [h_a;Hyper Encoder] -> z -> [q] -> z_hat -> [h_s; Hyper Decoder] -> C’;Psi\n\n * Minnen et al.’s approach (when training):\n x -> [g_a;Encoder] -> y -> [U] -> y_tilde -> [g_s;Decoder] -> x’\n y -> [h_a;Hyper Encoder] -> z -> [U] -> z_tilde -> [h_s; Hyper Decoder] -> C’;Psi\n\nNote that we used quantized y_hat as inputs to h_a, but it has nothing to do with the mismatches between training and testing. We used them to match inputs of h_a to target representations of model estimation.\n\n4) In our paper, we provided details for training and implementation of our proposed model. For example, we provide information on training sets, batch sizes, the number of training iterations, optimization algorithms, learning rates, which are not given in Minnen et al.’s paper, and also techniques for reducing large training costs, such as random index selection, so that the readers can implement and train their own models without much trial and error. Furthermore, we are planning to share our test code via github after code refactoring and exception handling. We hope that our work and test code will draw more interest in the ANN-based image compression field.\n\n5) We presented the directions of improvement from a different perspective. Current ANN-based image compression techniques are still not practical due to high complexity versus low gain. 
To solve this, we need to maximize compression performance or reduce complexity. At the end of the paper by Minnen et al., they presented one direction to obtain fast, low-complexity solutions, while we suggested the use of high-level contexts to maximize compression performance. Both directions would be important topics for various follow-up studies.\n\n6) As the commenter's suggestion, we will add the runtime data of our hybrid entropy model. The hybrid model can be viewed as one implementation technique.\n\n7) In addition, we are now aware that our results are not the first results that outperform BPG in terms of PSNR, so we will remove the related phrases and sentences.\n\nWe hope that our work will be a solid evidence supporting Minnen et al.’s work, and hope both will together suggest a promising direction for ANN-based image compression researches.\n", "\"Learning a Code-Space Predictor by Exploiting Intra-Image-Dependencies\" (Klopp et al.) was recently published at BMVC 2018 (http://bmvc2018.org/contents/papers/0491.pdf) and also explores the use of spatial context to improve rate-distortion (RD) performance for learned image compression. In addition to using context to predict the parameters of the entropy model, they introduce a new nonlinearity (a sparse variant of GDN) and generalize the entropy model by using an equal-weight mixture of Gaussians. \n\n\"Joint Autoregressive and Hierarchical Priors for Learned Image Compression\" (Minnen et al.) was accepted at NIPS 2018 (https://arxiv.org/abs/1809.02736) and presents a very similar model. This paper combines information from the hyperprior (which I think is the same as the \"bit-consuming context\" in the submitted ICLR paper) and an autoregressive model (\"bit-free context\") in a slightly different way than Klopp et al. resulting in improved RD performance.\n\nThe submitted ICLR paper shows RD curves that are very similar to those in Minnen et al. and better than those in Klopp et al. It also introduces the idea of splitting the latent representation into two parts and coding each part with a different entropy model. This split makes sense since many latent values are zero and thus may not benefit from context or a predicted mean for the Gaussian entropy model. I agree with the authors' claim that this split should reduce runtime, though I'm not sure how significant it will be relative to the total encode / decode time (some runtime data would help here, though neither of the papers cited above provide runtime data so I don't think it's a requirement for research focused on improving RD performance).\n\nIn my opinion, the quality of the research and results presented in the submitted paper are appropriate for publication at ICLR. However, the model is too similar to Klopp et al. and Minnen et al. and thus should not be accepted without further differentiation." ]
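The author responses above contrast two ways of handling quantization during training: quantized latents are fed to the decoder (matching test time), while noise-perturbed latents are used only by the entropy model. A minimal, hypothetical sketch of that scheme follows; it is not the authors' released code. The module names (encoder, decoder, entropy_model), the straight-through gradient for rounding (the thread does not say how gradients pass through the quantizer), and the omission of the hyperprior path are all assumptions made here for illustration.

```python
import torch

def quantize_ste(y):
    # Hard rounding in the forward pass, identity gradient in the backward pass
    # (straight-through estimator -- an assumption, not necessarily the authors' choice).
    return y + (torch.round(y) - y).detach()

def training_step(x, encoder, decoder, entropy_model, lam):
    y = encoder(x)

    # Quantized latents are what the decoder sees, so training matches test-time behaviour.
    y_hat = quantize_ste(y)
    x_rec = decoder(y_hat)

    # Additive uniform noise is used only where the entropy model needs a continuous
    # relaxation of the discrete symbols (the rate estimate).
    y_tilde = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    rate = -torch.log2(entropy_model(y_tilde) + 1e-9).sum() / x.numel()

    distortion = torch.mean((x - x_rec) ** 2)
    return rate + lam * distortion  # rate-distortion Lagrangian with trade-off lambda
```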
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1 ]
[ "iclr_2019_HyxKIiAqYQ", "iclr_2019_HyxKIiAqYQ", "BJlVvzHgpX", "rylzhKZ6hX", "SkeXD949h7", "B1xbVRde2Q", "BkeegQoyp7", "HkxN269Ja7", "BJxhpCaC27", "BJxhpCaC27", "iclr_2019_HyxKIiAqYQ", "iclr_2019_HyxKIiAqYQ", "BkekKKXHq7", "BkekKKXHq7", "iclr_2019_HyxKIiAqYQ" ]
iclr_2019_HyxPx3R9tm
Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.
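The abstract describes enforcing a constraint on the mutual information between observations and the discriminator's internal representation. A minimal sketch of one discriminator update in this spirit is given below, assuming a PyTorch-style stochastic encoder that outputs a diagonal Gaussian (mu, logvar), a small classifier head on the code z, and a prior r(z) = N(0, I). The dual step size beta_lr, the function names, and the equal weighting of real and generated samples in the KL term are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), computed per sample and averaged over the batch.
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()

def vdb_discriminator_loss(encoder, classifier, x_real, x_fake, beta, i_c, beta_lr=1e-5):
    # Stochastic encoder maps observations to a distribution over the internal code z.
    mu_r, logvar_r = encoder(x_real)
    mu_f, logvar_f = encoder(x_fake)
    z_r = mu_r + torch.randn_like(mu_r) * (0.5 * logvar_r).exp()
    z_f = mu_f + torch.randn_like(mu_f) * (0.5 * logvar_f).exp()

    logits_r, logits_f = classifier(z_r), classifier(z_f)
    bce = F.binary_cross_entropy_with_logits(logits_r, torch.ones_like(logits_r)) + \
          F.binary_cross_entropy_with_logits(logits_f, torch.zeros_like(logits_f))

    # Variational upper bound on I(X; Z): average per-sample KL to the prior r(z) = N(0, I),
    # taken over both real and generated inputs.
    kl = 0.5 * (kl_to_standard_normal(mu_r, logvar_r) + kl_to_standard_normal(mu_f, logvar_f))

    loss = bce + beta * (kl - i_c)

    # Dual gradient step on the Lagrange multiplier keeps the expected KL near the budget I_c.
    new_beta = max(0.0, beta + beta_lr * (kl.item() - i_c))
    return loss, new_beta
```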
accepted-poster-papers
The paper proposes a simple and general technique based on the information bottleneck to constrain the information flow in the discriminator of adversarial models. It helps to train by maintaining informative gradients. While the information bottleneck is not novel, its application in adversarial learning to my knowledge is, and the empirical evaluation demonstrates impressive performance on a broad range of applications. Therefore, the paper should clearly be accepted.
train
[ "S1ljRntcT7", "BJeNdhYq67", "B1eIr2t5Tm", "ByxhshHL6Q", "BJxmQSxU6Q", "SylvFMGra7", "Byl41tz9nX", "Bkx6mnnK3Q", "rJx9PNrv3X" ]
[ "author", "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the insight and feedback. We have included additional experiments to further compare with previous techniques, along with some additional clarifications.\n\nRe: additional citations\nThank you for the pointers, we have included the additional citations.\n\nRe: GP for other task\nWe have conducted additional motion imitation experiments with GAIL - GP and VAIL - GP [Figure 4, Table 1]. We also added experiments incorporating GP for the inverse RL tasks [Figure 7]. As in image generation, GP does indeed significantly improve the performance of GAIL. However, VAIL still performs better on most of the tasks, and VAIL - GP achieves the best performance overall.\n\nRe: content of batches used to compute KL divergence\nWe have added additional information to the paper to clarify the content of each batch [Section 4 above equation 11]. Each batch of data used to compute the expected KL contains an equal number of real and fake samples. The encoder maps each input sample to an individual distribution in Z. The KL divergence is computed separately for the distribution of each input, and then averaged across the batch, as opposed to computing the KL divergence across samples within a batch. Therefore, if the real and fake distributions are mapped to different parts of the manifold, it should result in a large KL.\n\nRe: saliency maps\nWe have added a colormap to Figure 5. The colors on the saliency map represent the magnitude of the discriminator’s gradient with respect to each pixel and color channel in the input image. The gradients are visualized for each color channel, which results in the different colors. The same procedure is used to compute the gradients for GAIL.\n", "Thank you for the insight and feedback, we have included new experiments in the paper, along with some additional clarifications.\n\nRe: Adapt beta based on gradient magnitudes\nYes, it might be possible to formulate a similar constraint for adaptively updating beta according to the gradient magnitudes. A constraint on the gradient norm can be added, then a Lagrangian can be constructed in a similar manner to yield an adaptive update for beta.", "Thank you for the insight and suggestions. We have added additional experiments and clarifications to the paper that aim to address each of your concerns -- we would really appreciate it if you could revisit your review in light of these additions and clarifications.\n\nRe: GP for other tasks\nWe have conducted additional motion imitation experiments with GAIL - GP and VAIL - GP [Figure 4, Table 1]. We also added experiments incorporating GP for the inverse RL tasks [Figure 7]. As in image generation, GP does indeed significantly improve the performance of GAIL. However, VAIL still performs better on most of the tasks, and VAIL - GP achieves the best performance overall.\n\nRe: How are VGAN and GP combined\nWe have added an additional section [Appendix B] that provides more information on how VDB and GP is combined. We use the reparameterization trick, as is done in VAEs, to backprop through the encoder to compute the gradient of the discriminator with respect to the inputs. There is a manually specified coefficient that weights the GP term in the objective, and we use the same value for the coefficient as [Mescheder et al., 2018] for image generation.\n\nRe: Combining VGAN and GP enhances performance\nThe VDB and GP are complementary techniques since the VDB helps to prevent vanishing gradients and GP prevents exploding gradients. 
Therefore both methods regularize the gradients, but under different criteria.\n\nRe: Spectral norm\nWe have included additional image generation experiments with spectral normalization [Figure 8]. Spectral normalization does show significant improvement over the vanilla GAN on CIFAR-10 (FID: 23.9), but our method still achieves a better score (FID: 18.1). The original spectral normalization paper [Miyato et al., 2018] reported an FID of 21.7 on CIFAR-10.\n", "Thanks for your response clarifying one part of the comment. \n\nWith respect to all the \"We never claimed ...\", the writing did not have factually false claims. However, isn't it normal to interpret that a statement like \"previous approaches used larger batch sizes and multiple GPUs and our approach did not\" is intended to \"sound\" as a contribution in comparison to prior work? 24 is larger than 8. 256 is also larger than 8. 2048 is also larger than 8. But it's not the same \"larger\". One is doable with a single V100. Another is doable with 32 V100s. Third is doable only on TPU. Wouldn't it make sense to say \"We used smaller batch size (8 instead of 24 as in Mescheder et al) on a single V100 and trained for fewer iterations because of resource constraints. We also generate at full resolution directly as in Mescheder et al instead of progressive growing done in Karras et al\"? Thanks for agreeing to refine the writing. \n\n", "Thank you for your comment. \n\nThe authors of the paper are not active on reddit and we do not have control over what reddit users post about our paper.\n\nWe used a batch size of 8 in our work, and we mention this in the paper for completeness, and since this is a bit different from Meschederer et al., who used a batch size of 24 with 4 GPUs. We do not state that the batch size from Meschederer et al. is “extremely large” in our paper, we state that it is \"larger\" than 8, which is factually true (it’s not clear how to state this in any other way…). We did not claim that the smaller batch size of 8 is a contribution of our work, and we did not claim that our paper is the first to train high-resolution GANs without progressive growing of resolution. We do have results for a network trained for 300k iterations and we will add these results to the paper.\n\nWe will refine the wording for the image generation experiments to further avoid these misinterpretations.", "\"CelebAHQ: VGAN can also be trained on on CelebAHQ Karras et al. (2018) at 1024 by 1024 resolution directly, without progressive growing (Karras et al., 2018). We use Ic = 0.1 and train with VGAN-GP. We train on a single Tesla V100, which fits a batch size of 8 in our experiments. Previous approaches (Karras et al., 2018; Mescheder et al., 2018) use a larger batch size and train over multiple GPUs. While previous approaches have trained this for 300k iterations or more, our results are shown at 100k iterations.\"\n\nEven though the authors don't intend to, this statement is likely to be misinterpreted that VGAN is the first GAN paper to show high resolution GAN samples without progressive growing of resolution or large batch sizes. \n\nThe batch size used in Mescheder et al is 24 while the authors use 8. Why would you call 24 \"large\" and 8 \"small\"? Secondly, 100k iterations is sufficient to start seeing good samples with most GAN architectures when the architecture uses residual connections and more iterations are needed to get more modes and sharper samples. You have shown a total of 8 samples. 
It is hard to say whether or not they were carefully picked. \n\nAs evidence for why this is likely to be misleading, I am quoting a comment from reddit: \"Also of note: training 1024px image GANs without extremely large minibatches, progressive growing, or self-attention, just a fairly vanilla-sounding CNN and their discriminator penalization.\" Not providing the link because that breaks the anonymity of the paper. \n\nNeither is it claimed or shown by the authors that Mescheder et al's model wouldn't produce good samples with a lower batch size or fewer (100K) iterations. The benefit to get it working for large resolution comes from the careful architecture designed by Mescheder et al and not from the bottleneck. \n\nTwo more issues with the claims made in the CIFAR-10 FID metrics section: (a) \"VGAN is competitive with WGAN-GP and GP\": The gap between VGAN and WGAN-GP is higher than WGAN-GP and VGAN-GP. But the improvement over WGAN-GP is considered \"significant\" whereas the other gap is considered \"competitive\"? (b) Is there any reason to show the metrics at the end of 750K iterations specifically? The plot shows that WGAN-GP training curve has a bigger negative slope at the cutoff point (750k) while VGAN-GP has flattened by then. It is worth showing the readers what happens when you train even a bit more, ie 1 million iterations when the difference isn't even that significant. Even though \"VDB and GP are complementary techniques\" morally, empirical conclusions may often not turn out to be the case. \n", "This paper proposed a constraint on the discriminator of GAN model to maintain informative gradients. It is completed by control the mutual information between the observations and the discriminator’s internal representation to be no bigger than a predefined value. The idea is interesting and the discussions of applications in different areas are useful. However, I still have some concerns about the work:\n1.\tin the experiments about image generation, it seems that the proposed method does not enhance the performance obviously when compared to GP and WGAN-GP, Why the combination of VGAN and GP can enhance the performance greatly(How do they complementary to each other), what about the performance when combine VGAN with WGAN-GP?\n2.\tHow do you combine VGAN and GP, is there any parameter to balance their effect?\n3.\tThe author stated on page 2 that “the proposed information bottleneck encourages the discriminator to ignore irrelevant cues, which then allows the generator to focus on improving the most discerning differences between real and fake samples”, a proof on theory or experiments should be used to illustrate this state.\n4.\tIs it possible to apply GP and WGAN-GP to the Motion imitation or adversarial inverse reinforcement learning problems? If so, will it perform better than VGAN?\n5.\tHow about VGAN compares with Spectral norm GAN?\n", "The paper \"Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow\" tackles the problem of discriminator over-fitting in adversarial learning. Balancing the generator and the discriminator is difficult in generative adversarial techniques, as a too good discriminator prevents the generator to converge toward effective distributions. The idea is to introduce an information constraint on a intermediate layer, called information bottleneck, which limits the content of this layer to the most discriminative features of the input. 
Based on this limited representation of the input, the discriminator is constrained to longer-tailed distributions, maintaining some uncertainty on simulated data distributions. Results show that the proposal outperforms previous approaches to discriminator over-fitting, such as adding noise to the discriminator inputs. \n\nWhile the use of the information bottleneck is not novel, its application in adversarial learning looks innovative and the results are impressive in a broad range of applications. The paper is well-written and easy to follow, though I find that it would be nice to give more insights on the intuition about the information bottleneck in the preliminary section to make the paper self-contained (I had to read the previous work from Alemi et al (2016) to realize what the information bottleneck can bring). My only question is about the setting of the constraint Ic: wouldn't it be possible to consider an adaptive version which could consider the amount of zero gradients returned to the generator? ", "Summary:\nThe authors propose to apply the Deep Variational Information Bottleneck (VIB) method of [1] on discriminator networks in various adversarial-learning-based scenarios. They propose a way to adaptively update the value for the bêta hyper-parameter to respect the constraint on I(X,Z). Their technique is shown to stabilize/allow training when P_g and P_data do not overlap, similarly to WGAN and gradient-penalty based approaches, by essentially pushing their representation distributions (p_z) to overlap with the mutual information bottleneck. It can also be considered as an adaptive version of instance noise, which serves the same goal. The method is evaluated on different adversarial learning setups (imitation learning, inverse reinforcement learning and GANs), where it compares positively to most related methods. Best results for ‘classical’ adversarial learning for image generation are however obtained when combining the proposed VIB with gradient penalty (which outperforms by itself the VGAN in this case).\n\n\nPros :\n- This paper brings a good amount of evidence of the benefits of using the VIB formulation in adversarial learning by first showing the effect of such an approach on a toy example, and then applying it to more complex scenarios, where it also boosts performance. The numerous experiments and analyses have great value and are a necessity as this paper mostly applies the VIB to new learning challenges. \n\n- The proposition of a principled way of adaptively varying the value of Bêta to actually respect more closely the constraint I(X,Z) < I_c, which to my knowledge [1] does not perform, is definitely appealing and seems to work better than fixed Bêtas and does also bring the KL divergence to the desired I_c.\n\n- The technique is fairly simple to implement and can be combined with other stabilization techniques such as gradient penalties on the discriminator.\n\n\nCons:\n\n- In my view, the novelty of the approach is somewhat limited, as it seems like a straightforward application of the VIB from [1] for discriminators in adversarial learning, with the difference of using an adaptive Bêta.\n\n- I think the Bêta-VAE [2] paper is definitely related to this paper and to the paper on which it is based [1] and should thus be cited, as the authors use a similar regularization technique, albeit from a different perspective, that restricts I(X,Z) in an auto-encoding task.\n\n- I think the content of batches used to regularize E(z|x) with respect 
to the KL divergence should be clarified, as the description of p^tilde “being a mixture of the target distribution and the generator” (Section 4) can let the implementation details be ambiguous. I think batches containing samples from both distributions can cause problems as the expectation of the KL divergence on a batch can be low even if the samples from both distributions are projected into different parts of the manifold. This makes me think batches are separated? Either way, this should be more clearly stated in the text.\n\n- The last results for the ‘traditional’ GAN+VIB show that in this case, gradient penalty (GP) alone outperforms the proposed VGAN, and that both can be combined for best results. I thus wonder if the results in all other experiments could show similar trends if GP had been tested in these cases as well. In the imitation learning task, authors compare with instance noise, but not with GP, which for me are both related to VIB in what they try to accomplish. Was GP tested in Imitation Learning/Inverse RL ? Was it better? Could it still be combined with VIB for better results? \n\n- In the saliency map of Figure 5, I’m unclear as to what the colors represent (especially on the GAIL side). I doubt that this is simply due to the colormap used, but this colormap should be presented.\n\nOverall, I think this is an interesting and relevant paper that I am very likely to suggest to peers working on adversarial learning, and should therefore be presented. I think the limited novelty is counterbalanced by the quality of empirical analysis. Some clarity issues and missing citations should be easy to correct. I appreciate the comparison and combination with a competitive method (Gradient Penalty) in Section 5.3, but I wish similar results were present in the other experiments, in order to inform readers if, in these cases as well, combining VIB with GP leads to the best performance.\n\n[1] Deep Variational Information Bottleneck, (Alemi et al. 2017)\n[2] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework (Higgins et al. 2017)\n" ]
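One implementation point raised repeatedly in the thread above is combining the bottleneck with a gradient penalty (the VGAN-GP variant), which the authors say requires the reparameterization trick to differentiate through the stochastic encoder back to the inputs. A hedged sketch follows, using the same assumed encoder/classifier split as the sketch after the abstract. The R1-style penalty on real samples and the coefficient value of 10.0 are placeholders; the responses only state that the gradient-penalty weight follows prior work.

```python
import torch

def gradient_penalty_through_encoder(encoder, classifier, x_real, coeff=10.0):
    # The KL bottleneck combats vanishing discriminator gradients, while a gradient
    # penalty combats exploding ones, which is why the thread treats them as complementary.
    x = x_real.clone().requires_grad_(True)
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
    score = classifier(z).sum()

    grad, = torch.autograd.grad(score, x, create_graph=True)
    # Penalty form and coefficient are assumptions introduced for this illustration.
    return coeff * grad.pow(2).reshape(grad.size(0), -1).sum(dim=1).mean()
```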
[ -1, -1, -1, -1, -1, -1, 6, 10, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "rJx9PNrv3X", "Bkx6mnnK3Q", "Byl41tz9nX", "BJxmQSxU6Q", "SylvFMGra7", "iclr_2019_HyxPx3R9tm", "iclr_2019_HyxPx3R9tm", "iclr_2019_HyxPx3R9tm", "iclr_2019_HyxPx3R9tm" ]
iclr_2019_HyxnZh0ct7
Meta-learning with differentiable closed-form solvers
Adapting deep networks to new concepts from a few examples is challenging, due to the high computational requirements of standard fine-tuning procedures. Most work on few-shot learning has thus focused on simple learning techniques for adaptation, such as nearest neighbours or gradient descent. Nonetheless, the machine learning literature contains a wealth of methods that learn non-deep models very efficiently. In this paper, we propose to use these fast convergent methods as the main adaptation mechanism for few-shot learning. The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data. This requires back-propagating errors through the solver steps. While normally the cost of the matrix operations involved in such a process would be significant, by using the Woodbury identity we can make the small number of examples work to our advantage. We propose both closed-form and iterative solvers, based on ridge regression and logistic regression components. Our methods constitute a simple and novel approach to the problem of few-shot learning and achieve performance competitive with or superior to the state of the art on three benchmarks.
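The abstract's central trick — solving the per-episode ridge regression in closed form and using the Woodbury identity so that the matrix being inverted is n-by-n (number of support examples) rather than d-by-d (embedding dimension) — is compact enough to sketch. The following is an illustrative PyTorch reconstruction, not the authors' code; the helper names (ridge_solver_woodbury, episode_logits), the embedding phi, and the reduction of the output calibration to two scalars alpha and beta are assumptions made here for clarity.

```python
import torch

def ridge_solver_woodbury(X, Y, lam):
    """Closed-form ridge regression W = X^T (X X^T + lam I)^{-1} Y.

    X: (n, d) embedded support examples, Y: (n, c) one-hot targets.
    In few-shot episodes n << d, so inverting the (n, n) Gram matrix via the
    Woodbury identity is far cheaper than inverting the usual (d, d) matrix.
    The solve is differentiable, so meta-training can back-propagate through it
    into the embedding network.
    """
    n = X.size(0)
    gram = X @ X.t() + lam * torch.eye(n, device=X.device)
    return X.t() @ torch.linalg.solve(gram, Y)          # (d, c) per-episode weights

def episode_logits(phi, support_x, support_y_onehot, query_x, lam, alpha, beta):
    W = ridge_solver_woodbury(phi(support_x), support_y_onehot, lam)
    # Regression outputs are calibrated by a learned scale and bias before the
    # cross-entropy loss; lam, alpha and beta can all be meta-learned.
    return alpha * (phi(query_x) @ W) + beta
```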
accepted-poster-papers
The reviewers disagree strongly on this paper. Reviewer 2 was the most positive, believing it to be an interesting contribution with strong results. Reviewer 3, however, was underwhelmed by the results. Reviewer 1 does not believe that the contribution is sufficiently novel, seeing it as too close to existing multi-task learning approaches. After considering all of the discussion so far, I have to agree with reviewer 2 on their assessment. Much of the meta-learning literature involves changing the base learner *for a fixed architecture* and seeing how it affects performance. There is a temptation to chase performance by changing the architecture, adding new regularizers, etc., and while this is important for practical reasons, it does not help to shed light on the underlying fundamentals. This is best done by considering carefully controlled and well-understood experimental settings. Even still, the performance is quite good relative to popular base learners. Regarding novelty, I agree it is a simple change to the base learner, using a technique that has been tried before in other settings (linear regression as opposed to classification); however, its use in a meta-learning setup is novel in my opinion, and the new experimental comparison with regression on top of pre-trained CNN features helps to demonstrate the utility of its use in the meta-learning setting. While the novelty can certainly be debated, I want to highlight two reasons why I am opting to accept this paper: 1) simple and effective ideas are often some of the most impactful. 2) sometimes taking ideas from one area (e.g., multi-task learning) and demonstrating that they can be effective in other settings (e.g., meta-learning) can itself be a valuable contribution. I believe that the meta-learning community would benefit from reading this paper.
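The point the meta-review and the thread below keep returning to — that the embedding is trained by back-propagating the query-set loss through the closed-form solver, rather than pre-training features and fitting a regressor afterwards — can be made concrete with a small episodic training loop. This schematic reuses the episode_logits helper sketched after the abstract above; the optimizer, learning rate, episode format, and the treatment of lam, alpha, beta as learnable scalar tensors (requires_grad=True) are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn.functional as F

def meta_train(phi, episodes, lam, alpha, beta, lr=1e-3):
    # Episodic meta-training: the base learner (ridge regression) is solved in closed form
    # on each episode's support set, and the query-set loss is back-propagated *through
    # that solution* into the shared embedding phi. This is what separates the approach
    # from the baseline discussed in the thread below (pre-train phi with a plain softmax
    # classifier, then fit ridge regression on frozen features), which the authors report
    # performs much worse.
    opt = torch.optim.Adam(list(phi.parameters()) + [lam, alpha, beta], lr=lr)
    for support_x, support_y_onehot, query_x, query_y in episodes:
        logits = episode_logits(phi, support_x, support_y_onehot, query_x, lam, alpha, beta)
        loss = F.cross_entropy(logits, query_y)
        opt.zero_grad()
        loss.backward()       # gradients flow through the closed-form solve
        opt.step()
```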
test
[ "BkgTUolegV", "SJeA_NNyxE", "r1l23D05y4", "H1llepity4", "SyxWU3iYkE", "HyeglRQ_JE", "BJg6awLwk4", "rygq4xgwkN", "r1e-G4YL1N", "r1gGLd-507", "BkljQHC-T7", "Syg19STh6Q", "SklLz3csp7", "H1e3St5u67", "SyedD2rXaQ", "Byg2xy8MpX", "Hkgvq5AZpm", "rkevXdCb6m", "B1lS7rnxT7", "Syegwm5yaQ", "r1xPm1Kah7", "SJlghKO937", "SkeX8K6thm", "r1ggxa85nm" ]
[ "author", "public", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "author", "official_reviewer", "public", "author", "author", "public", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "1) We wrote: ““[multi-task learning] is different to our work, and in general to all of the previous literature on meta-learning applied to few-shot classification (e.g. Finn et al. 2017, Ravi & Larochelle 2017, Vinyals et al. 2016, etc). Notably, these methods and ours take into account adaptation *already during the training process*, which requires back-propagating errors through the very fine-tuning process.””\n\n2) R1 answered with: ““Merely because some other paper also had small novelty and got accepted in the past I can not see why this paper should also get accepted””\n\n3) We then observed that *R1 did not refute any of our point of rebuttal* (long answer in this thread) and seems to be dismissive of the above papers, which are widely accepted by the community.\n\n> ““ However, using a multi-task technique in meta-learning setting cannot be treated as a novel or original contribution.””\nAgain, it is not what we do - we amply addressed this point both on OpenReview (last two answers to the reviewer) and in the paper.\n\nWe would like to repeat that if this were true, the baseline experiment we described (applying ridge regression in the manner that the reviewer refers to as standard) would not have been possible, since our method and the baseline would then be the same (which they are not -- both in methodology and results).", "The 3 meta-learning papers developed new techniques and/or models for meta-learning (which have never been proposed or used in multi-task learning), while this paper applies existing multi-task learning technique in the meta-learning setting. The contributions in these two cases are very different. I think it is misleading to indicating that the contribution of this paper is as novel as the 3 meta-learning papers.\n\nIt is fine to apply multi-task learning technique to the meta-learning problem. To some extent, meta-learning can be explained as a generalization of multi-task learning in the way that meta-learning applies to any set of tasks sampled from certain *task distribution*, while the set of tasks in multi-task learning are fixed. They both need knowledge transfer between different tasks. However, using a multi-task technique in meta-learning setting cannot be treated as a novel or original contribution. ", "The reviewer has not refuted any of the points we made above. Namely:\n\n- That meta-learning approaches (like ours) back-propagate errors through the fine-tuning process, a major departure from standard multi-task/transfer learning.\n- That *not doing so* incurs a large performance penalty, as demonstrated by our experiments.\n\nWe invite the reviewer to address these points, rather than just reiterate a subjective judgment over the value of meta-learning. While we respect this opinion, our paper cannot be rejected based solely on the reviewer’s opinion that meta-learning papers are not novel in general (compared to multi-task learning).\n", "Merely combining ridge regression (trivial, and nothing novel) inside meta-learning is not sufficentlynovel in my opinion. \n\nIn essence we agree to disagree. I request the AC to make a decision based on both our inputs. ", "I disagree with Reviewer #2 and the authors about the novelty. The delta from just simple multi-task learning approach of eg Caruana 93 is extremely small -- the same algorithms are trivially extended to deal with meta-learning. The mere fact of using closed form ridge regression in this setting does not feel like sufficient contribution to warrant an ICLR paper to this reviewer. 
Merely because some other paper also had small novelty and got accepted in the past I can not see why this paper should also get accepted with minimal novel contributions. ", "Thank you for pointing us to this interesting paper! We agree that methods with limited inductive bias such as protonets are attractive, and there is indeed a good case for their performance scaling better with computation and data.\nWe are looking forward to try out the proposed testbed. One possible advantage of using our R2-D2 with the deeper architectures of their setup is that we can concatenate activations from multiple layers together without increasing the computational burden of the base-learner thanks to the Woodbury identity.", "I understand your points. Overall, I like the idea of using closed-form base learner which also demonstrate good performance when the backbone network is shallow. However, as a practioner, I may not adopt the proposed method for now.\n\nIn my opinion, meta-learning is about learning a data-driven inductive bias for few-shot learning. Closed-form regression itself introduces a strong inductive bias which is not learned. Therefore, it is interesting to investigate whether the inductive bias of closed-form regression is needed when the backbone network gets deeper.\n\nAs shown in the Figure 3 of https://openreview.net/pdf?id=HkxLXnAcFQ , the performance gap between different meta-learning methods diminishes as the backbone gets deeper. One intersting point in the figure is that ProtoNet typically outperforms other methods when the network is deeper.", "We thank the anonymous commenter for pointing out a GitHub repo with improvements. We note that neither data augmentation nor the optimizer schedule are mentioned at all in the associated published paper.\n\nAdditionally, the mentioned improvements are not specific to prototypical networks (or to any method for that matter), and can also be applied to ours. As such, we fail to see how this says anything about the merits of our proposal.\nIn our experiments, we compare against prototypical networks using the same setup of the original paper (Adam optimizer, halving LR every 20 epochs; no data augmentation).\nIn this fair comparison, we outperform it.\n\nWe would gain no knowledge by showing that “proto-nets with data augmentation and optimizer improvements” (as suggested) beats “R2D2 with no data augmentation”, or that “MAML with a ResNet base” beats “R2D2 with 4 layers”. These are apples-to-oranges comparisons which make any scientific conclusion very hard to draw.\n\nInstead, a proper comparison is to take the innovation of each paper -- the prototype layer in proto-nets, and the ridge regression layer in R2D2 -- and compare them, with everything else fixed. This includes data augmentation, as well as network model and initialization.\n\nCarefully controlled comparisons are a core part of the scientific method, and ignoring them will lead to unsubstantiated conclusions.\n", "Using closed-form base learner is an interesting idea. However, the results are underwhelming. \n\nAs shown in https://github.com/gidariss/FewShotWithoutForgetting , Prototypical Networks can be quite powerful with some modifications. The modifications include:\n1. add data augmentation\n2. use SGD with momentum optimizer\n3. 
scale the output of the euclidean distance to a suitable range\n\nUsing a 4-Conv backbone with 64 channels, Protypical Networks are able achieve remarkable results in MiniImagenet: 1-shot: 53.30% +/- 0.79 5-shot: 70.33% +/- 0.65\n\nEven without data augmentation, in my experiments, Protypical Networks can still get 5-shot accuracy around 68.8%.\n\nConsidering this, the proposed method has not demonstrated superior empirical results than Protypical Network yet.", "We would like to thank both reviewers and anonymous commenters for their feedback and participation.\nIn light of the discussion, the Appendix of the paper has been updated:\n\n* Section B offers a runtime analysis which reveals that R2-D2 is several times faster than MAML and almost as fast as a simple (fixed) metric learning method such as prototypical networks, while still allowing per-episode adaptation.\n* Section A reports the accuracy of the 1-vs-all variant of LR-D2 (as suggested by AnonReviewer2), which is comparable with the one of R2-D2.\n* Finally, Section C extends the discussion sparked here on OpenReview about a) the nature of our contribution b) the disambiguation with the multi-task learning paradigm .", "We thank the reviewer for the comments and questions.\n\n> “Why one can simply treat \\hat{Y} as a scaled and shifted version of X’W?”\nIn the case of logistic regression, the scaling and shifting is not needed, and we have \\hat{Y}=X’W. This is because logistic regression is a classification algorithm, and directly outputs class scores. These scores are fed to the (cross-entropy) loss L.\n\nHowever, ridge regression is a regression algorithm, and its regression targets are one-hot encoded labels, which is only an approximation of the discrete problem (classification). This means that an extra calibration step is needed (eq. 6), to allow the network to tune the regressed outputs into classification scores for the cross-entropy loss L.\n\n> “The empirical performance of the proposed approach is not very promising and it does not outperform the comparison methods, e.g., SNAIL”\nOur method actually outperforms SNAIL on an apples-to-apples comparison, with the same number of layers. We would like to draw the reviewer’s attention to the last paragraph of the “Multi-class classification” subsection (page 8).\n\nThe result mentioned by the reviewer uses a ResNet, while we use a 4-layer CNN to remain comparable to prior work. SNAIL with a 4-layer CNN ([11] Appendix B) performs much worse than our method (7.4% to 10.0% accuracy improvement).\n\nEven disregarding the great difference in architecture capacity, our proposal's performance coincides with SNAIL on miniImageNet 5way-5shot and it is comparable on 3 out of 4 Omniglot setups. We would have liked to establish a comparison also on CIFAR, but unfortunately the official code for SNAIL hasn’t been released.\n\nBorrowing the words of AnonReviewer2: “Notably, the ridge regression variant can reach results competitive with SNAIL that uses significantly more weights and is shown to suffer when its capacity is reduced.”\n\nWe hope that this addresses the two concerns raised by the reviewer. We will be happy to answer any other question about the paper.\n", "I respectfully disagree with the argument regarding lack of novelty. Indeed, the authors did not invent the meta-learning framework, and they did not invent ridge regression. Yet the two of them had not been combined before in this way, and this combination is evidently beneficial. 
It does seem like a natural idea, but if it was so obvious, how come it wasn't done before?\n\nIt may be tempting to create complicated models to solve a problem, yielding \"more novel\" solutions. But this seems wrong if the same problem can be solved in a simpler way! I feel strongly that re-using existing components in clever ways that yield good results on new problems is an important contribution and should be encouraged.", "Thanks for your reply!\n\n> Clearly, the overall training framework is not novel and it is common in the few-shot learning literature. \n\nThanks for the clarification! But if the \"essential difference\" (asked in my first post and answered in your previous reply) is not the contribution, it is hardly to tell the essential novelty of this method.\n\n> We strongly disagree with the statement. This is exactly the nature of the contribution of most approaches for few-shot classification.\n\nI do not agree with this statement. Simply replacing the base learner and following the standard meta learning/few-shot learning scheme sounds not novel to me. The claimed adaptation capability comes from the standard meta-learning scheme, while the claimed efficiency comes from the closed-form solver. Both are well known and common for years. \n\nYes MAML can be explained to be using SGD as base learner (but there are other more intuitive explanations), but they redesigned the learning procedure specifically for SGD, since SGD is a dynamic optimization algorithm rather than a model. Other meta-learning methods either proposes new algorithm or new model structure specifically for few-shot learning. BTW, I do not agree that \"some papers propose their methods in the similar way, so our paper also presents contribution of similar novelty\".\n\n>Our contribution is to use closed-form solvers such as ridge regression to tackle few-shot classification, which is novel in the literature and it is a non-trivial endeavor.\n\nUsing closed-form solver for sure can converge faster than using deep neural networks or doing second order optimization (like MAML). But this is an advantage of the existing closed-form solvers. In addition, as mentioned in your reply and paper, the fine-tuning still needs to backpropagate the error from the closed form solver to the pre-trained deep CNN. Together they still compose a deep model whose last layer is the closed-form solver, and each epoch of the fine tuning might need heavy computation (**This has been also pointed out by Reviewer 1**). Then the advantage of using shallow model is not clear: you can always find a good trade-off between fine tuning a large/small backbone model and a complex/simple base learner. Besides, logistic regression does not have a closed-form solver so the title is somehow misleading.\n\nOverall, I agree that using closed-form solver of a shallow model might have some practical value, especially in the case when you use a very powerful pre-trained CNN as the backbone model. However, I am not convinced that this is a novel contribution. 
\n", "We thank the reviewer for the comment.\nHowever, we believe that the low score originates from a misunderstanding of our proposal.\nBelow, we try to bring some clarity by disambiguating between what the reviewer refers to and our method.\nIf our interpretation of what the reviewer refers to as “entirely common” is incorrect, it would be great to be provided with at least one reference, so that we can continue the conversation on the same ground.\n\n> “novel contribution?” , “training multi-task neural nets with shared feature representation and task specific final layer is probably 20-30 years old by now and entirely common.”\n“It is also common freeze the feature representation learned from the first set of tasks, and to simply use it for new tasks by modifying the last layer”\n\nWe understand that the reviewer is hinting at the common multi-task scenario with a shared network and task-specific layers (e.g. Caruana 1993). He/she also refers to basic transfer learning approaches in which a CNN is first pre-trained on one dataset/task and then adapted to a different dataset/task by simply adapting the final layer(s) (e.g. Yosinski et al. “How transferable are features in deep neural Networks?” - NIPS 2014; Chu et al. “Best Practices for Fine-tuning Visual Classifiers to New Domains” - ECCVw 2016).\n\nIf so, then this is significantly different to our work, and in general to all of the previous literature on meta-learning applied to few-shot classification (e.g. Finn et al. 2017, Ravi & Larochelle 2017, Vinyals et al. 2016, etc).\nNotably, these methods and ours take into account adaptation *already during the training process*, which requires back-propagating errors through the very fine-tuning process.\n\nWithin this setup, our main contribution is to propose an adaptation procedure based on closed-form regressors, which have the important characteristic of allowing different models for different episodes while still being fast because of 1) their convergence in one (R2-D2) or few (LR-D2) steps, 2) the use of the Woodbury identity, which is particularly convenient in the few-shot data regime, and 3) back-propagation through the closed-form regressor can be made efficient.\n\nTo better illustrate our point, we conducted a baseline experiment.\nFirst, we pre-trained the same 4-layers CNN architecture, but for a standard classification problem, using the same training samples as our method. We simply added a final fully-connected layer (with 64 outputs, like the number of classes in the training splits) and used the cross-entropy loss.\nThen, we used the convolutional part of this trained network as a feature extractor and fed its activation to our ridge-regression layer to produce a per-episode set of weights.\nOn miniImagenet, the drop in performance w.r.t. 
our proposed R2-D2 is very significant: 13.8% and 11.6% accuracy for the 1 and 5 shot problems respectively.\nResults are consistent on CIFAR, though less drastic: 11.5% and 5.9%.\n\nThis confirms that simply using a “shared feature representation and task specific final layer” as commented by the reviewer is not what we are doing and it is not a good strategy to obtain results competitive with the state-of-the-art in few-shot classification.\nInstead, it is necessary to enforce the generality of the underlying features during training explicitly, which we do by back-propagating through the fine-tuning procedure (the closed-form regressors).\n\nWe would like to conclude remarking that, probably, the source of confusion arises from the overlap that exists in general between the few-shot learning and the transfer/multi-task learning sub-communities.\nWe realize that the two have developed fairly separately while trying to solve very related problems, and unfortunately the similarities/differences are not acknowledged enough in few-shot classification papers, including our own. We intend to alleviate this problem in our related work section, and invite the reviewer to suggest more relevant works from this area.\n", "Thank you. \n\n> “I understand that the main novelty here is to apply fine tuning on the test set (of tasks sampled for training) in meta-learning, instead of on the training data of a single supervised learning task (as we normally did in supervised learning).”\n\nSorry but this is not claimed in the paper or in the answer above. Clearly, the overall training framework is not novel and it is common in the few-shot learning literature. In fact, we specifically wrote: “Our training procedure (and indeed, all meta-learning methods for few-shot learning, such as MAML, SNAIL, etc) ...”.\n\nThe point of our previous comment was simply to clarify why different episodes correspond to different sets of parameters.\n\n\n> ““changing the model of base learners cannot be recognized as a novelty”\nWe strongly disagree with the statement. This is exactly the nature of the contribution of most approaches for few-shot classification. For example, both MAML and prototypical networks use the same algorithm (SGD) in the external loop, while they vastly differ for the method used in the inner loop (SGD and nearest neighbour respectively).\n\nOur contribution is to use closed-form solvers such as ridge regression to tackle few-shot classification, which is novel in the literature and it is a non-trivial endeavor.\nAs stated by AR2: “[it] strikes an interesting compromise between not performing any adaptation for each new task (as is the case in pure metric learning methods [e.g. prototypical networks]]) and performing an expensive iterative procedure, such as MAML or Meta-Learner LSTM where there is no guarantee that after taking the few steps prescribed by the respective algorithms the learner has converged.”\n\nBesides offering a trade-off with respect to existing techniques, our proposal also presents a significant practical value in terms of performance, as outlined in our experimental section.\n", "Thanks a lot for your reply and explanation! I understand that the main novelty here is to apply fine tuning on the test set (of tasks sampled for training) in meta-learning, instead of on the training data of a single supervised learning task (as we normally did in supervised learning). However, I agree with AnonReviewer1: I do not think this work presents very original contributions. 
It applies the existing fine-tuning technique by following standard meta-learning setting, as many other meta-learning methods already did.\n\nFine tuning is an existing technique that can be generally applied to different learning settings. The basic idea is to update a pre-trained model and continue to train it on new training instances. In supervised learning, each training instance is a data point, and the learning goal is to minimize the training error on each data point. In meta-learning, each training instance is an (n-way k-shot) classification task, and the learning goal is to minimize the validation/test error on the test set of each training task. Therefore, fine tuning in meta-learning should be applied to the test sets of training tasks (as this paper does). In fact, in meta-learning, any training happening on task-shared part (e.g., meta-learner or shared pre-trained model) should minimize the error/loss on the test sets of training tasks. However, these are all well-known facts, derived from the very early optimization formulation of \"learning to learn\" (although meta-learning becomes very popular topic very recently). So they are not the contributions of this paper.\n\nIn addition, as the authors mentioned, many existing meta-learning methods use the same idea, the only difference here is that the base learner for each task changes to ridge/logistic regression model. But changing the model of base learners cannot be recognized as a novelty. Therefore, I think this is a successful application of existing technique, it re-explains how to do fine-tuning in meta-learning setting, but is not novel to me. ", "Thank you, this is a really nice paper. The bi-level optimization point of view is very insightful. Although their framework is very general, they seem to specialize it in the experiments using gradient descent for the inner loop, which is different from our closed-form solutions.", "> “I am confused about whether the proposed method is the same as … multiple models (e.g., logistic regression) for different tasks based on shared input features provided by a pre-trained model (e.g., CNN)”\n\nThank you for participating in the discussion. This describes well only the behavior at test-time -- when facing a new task, a new regressor is learned based on pre-trained features (hence, different tasks will have different parameters). However, this leaves out a crucial detail: where does this pre-trained CNN come from?\n\nThe standard approach is to use a CNN that was pre-trained on ImageNet or another task. However, there is no guarantee that the CNN features will transfer well to unknown tasks. In the case of few-shot learning, with only 1 or 5 training samples, fine-tuning will result in extreme over-fitting.\n\nOur training procedure (and indeed, all meta-learning methods for few-shot learning, such as MAML, SNAIL, etc) train the CNN features specifically to perform well on new, unseen tasks. “Performing well on unseen tasks” is formalized as achieving a low error after fine-tuning. This means that we have to back-propagate errors through the fine-tuning procedure, which can be SGD (MAML) or a ridge/logistic regression solver (ours). The end result is a CNN that is especially trained to be fine-tuned later under the same conditions; this differs substantially from standard pre-training.\n\nThere is a nice, informal introduction to this (admittedly subtle!) 
distinction, that was written by the authors of MAML:\nhttps://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/\n", "We thank the reviewer for the insightful comments and analysis.\n\n> “One-vs-all classifiers” for LR-D2\nThis is a great suggestion, and we are not quite sure how we missed it. We will update the results for 5-way classification incorporating this method.\n\n> “ablation where for the LR-D2 variant SGD was used ... instead of Newton’s method”\nWe previously did exactly this experiment, although for the R2-D2 (ridge regression) variant. We did not include it due to space constraints. It is equivalent to MAML, which also uses SGD, but adapting only the classification layer for new tasks (instead of adapting all parameters).\n\nWe tested this variant on miniImageNet with 5 classes, with the lowest-capacity CNN (which is the most favorable model for MAML/SGD). It yields 45.4±1.6% accuracy for 1-shot classification and 61.7±1.0% for 5-shot classification. Comparing it to Table 1, there’s a drop in performance compared to our closed form solver (3.5% and 4.4% less accuracy, respectively), and also compared to the original MAML (3.3% and 1.4% respectively).\n\nAlthough we expect the conclusions for logistic regression (LR-D2) to be similar, we will extend the experiment to this case and report the results.\n\n> “Neither MAML nor MetaLearner LSTM have been showed to be as effective as Prototypical Networks for example”\nWe agree, and will amend the text. Their interest may lie more in their technical novelty.\n\n> Suggestions on multinomial term and sentence grammar\nThese do improve the readability of the text and will be corrected.\n", "IMO, shared parameters are optimized for Base test-set (Figure 1) instead of Base training-set, which is different than multi-task learning setup. ( I think AnonReviewer1 also raised similar issues...)\n\nAnd, I think authors missed a reference, which is very relevant.\nhttps://arxiv.org/abs/1806.04910", "This paper proposes a new meta-learning method based on closed-form solutions for task specific classifiers such as ridge regression and logistic regression (iterative). The idea of the paper is quite interesting, comparing to the existing metric learning based methods and optimization based methods. \n\nI have two concerns on this paper. \nFirst, the motivation and the rationale of the proposed approach is not clear. In particular, why one can simply treat \\hat{Y} as a scaled and shifted version of X’W?\n\nSecond, the empirical performance of the proposed approach is not very promising and it does not outperform the comparison methods, e.g., SNAIL. It is not clear what is the advantage. \n", "Summary: The paper proposes an algorithm for meta-learning which amounts to fixing the features (ie all hidden layers of a deep NN), and treating each task as having its own final layer which could be a ridge regression or a logistic regression. The paper also proposes to separate the data for each task into a training set used to optimize the last, task specific layer, and a validation set used to optimize all previous layers and hyper parameters. \n\nNovelty: This reviewer is unsure what the paper claims as a novel contribution. In particular training multi-task neural nets with shared feature representation and task specific final layer is probably 20-30 years old by now and entirely common. 
It is also common freeze the feature representation learned from the first set of tasks, and to simply use it for new tasks by modifying the last (few) layer(s) which would according to this paper qualify as meta-learning since the new task can be learned with very few new examples. \n\n", "This paper proposes a meta-learning approach for the problem of few-shot classification. Their method, based on parametrizing the learner for each task by a closed-form solver, strikes an interesting compromise between not performing any adaptation for each new task (as is the case in pure metric learning methods) and performing an expensive iterative procedure, such as MAML or Meta-Learner LSTM where there is no guarantee that after taking the few steps prescribed by the respective algorithms the learner has converged. For this reason, I find that leveraging existing solvers that admit closed-form solutions is an attractive and natural choice. \n\nSpecifically, they propose ridge regression as their closed-form solver (R2-D2 variant). This is easily incorporated into the meta-learning loop with any hyperparameters of this solver being meta-learned, along with the embedding weights as is usually done. The use of the Woodbury equation allows to rewrite the closed form solution in a way that scales with the number of examples instead of the dimensionality of the features; therefore taking advantage of the fact that we are operating in a few-shot setting. While regression may seem to be a strange choice for eventually solving a classification task, it is used as far as I understand due to the availability of this widely-known closed-form solution. They treat the one-hot encoded labels of the support set as the regression targets, and additionally calibrate the output of the network (via a transformation by a scale and bias) in order to make it appropriate for classification. Based on the loss of ridge regression on the support set of a task, a parameter matrix is learned for that task that maps from the embedding dimensionality to the number of classes. This matrix can then be used directly to multiply the embedded (via the fixed for the purposes of the episode embedding function) query points, and for each query point, the entry with the maximum value in the corresponding row of the resulting matrix will constitute the predicted class label.\n\nThey also experimented with a logistic regression variant (LR-D2) that does not admit a closed-form solution but can be solved efficiently via Newton’s Method under the form of Iteratively Reweighted Least Squares. When using this variant they restrict to tackling the case of binary-classification.\n\nA question that comes to mind about the LR-D2 variant: while I understand that a single logistic regression classifier is only capable of binary classification, there seems to be a straightforward extension to the case of multiple classes, where one classifier per class is learned, leading to a total of N one-vs-all classifiers (where N is the way of the episode). I’m curious how this would compare in terms of performance against the ridge regression variant which is naturally multi-class. This would allow to directly apply this variant in the common setting and would enable for example still oversampling classes at meta-training time as is done usually.\n\nI would also be curious to see an ablation where for the LR-D2 variant SGD was used as the optimizer instead of Newton’s method. 
That variant may require more steps (similar to MAML), but I’m curious in practice how this performs.\n\nA few other minor comments:\n- In the related work section, the authors write: “On the other side of the spectrum, methods that optimize standard iterative learning algorithms, [...] are accurate but slow.” Note however that neither MAML nor MetaLearner LSTM have been showed to be as effective as Prototypical Networks for example. So I wouldn’t really present this as a trade-off between accuracy and speed.\n- I find the term multinomial classification strange. Why not use multi-class classification?\n- In page 8, there is a sentence that is not entirely grammatically correct: ‘Interestingly, increasing the capacity of the other method it is not particularly helpful’.\n\nOverall, I think this is good work. The idea is natural and attractive. The writing is clear and comprehensive. I enjoyed how the explanation of meta learning and the usual episodic framework was presented. I found the related work section thorough and accurate too. The experiments are thorough as well, with appropriate ablations to account for different numbers of parameters used between different methods being compared. This approach is evidently effective for few-shot learning, as demonstrated on the common two benchmarks as well as on a newly-introduced variant of cifar that is tailored to few-shot classification. Notably, the ridge regression variant can reach results competitive with SNAIL that uses significantly more weights and is shown to suffer when its capacity is reduced. Interestingly, other models such as MAML actually suffer when given additional capacity, potentially due to overfitting.\n", "After reading this paper, I am confused about whether the proposed method is the same as a widely used technique, i.e., training multiple models (e.g., logistic regression) for different tasks based on shared input features provided by a pre-trained model (e.g., CNN), which can be fine-tuned. Although a minor difference here is that the tasks are sampled from a distribution of tasks rather than a fixed set (which follows a standard meta-learning setting), the used technique already exists and is well-known.\n\nSince the proposed method is claimed to be a meta-learning approach that can quickly adapt to novel tasks, the training algorithm or the meta-learner should do something different for different tasks (i.e., adaptive to each specific task). However, The CNN remains the same for different tasks, and the closed-form solvers do not have any hyper-parameters changed with the task. I am not sure if it can be recognized as a meta-learning method. It might be more suitable to be categorized in multi-task learning, where models for different tasks share the same feature extractor (the CNN here).\n\nPlease correct me if I am wrong in the understanding of the essential idea of this paper. Thanks a lot!" ]
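For readers skimming this thread, the inner solver the reviews above keep referring to (closed-form ridge regression on the support set, solved in the Woodbury form so that only an n-by-n system is inverted, followed by a learned scale-and-shift calibration of the regression outputs) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the function name `ridge_base_learner` and the hyperparameters `lam`, `alpha`, `beta` are placeholders standing in for quantities the paper meta-learns.

```python
# Illustrative sketch (not the authors' code) of the closed-form ridge-regression
# base learner discussed in the reviews above. Shapes: X_s (n_support, d) support
# embeddings, Y_s (n_support, n_way) one-hot targets, X_q (n_query, d) query
# embeddings. lam, alpha, beta stand in for meta-learned hyperparameters.
import torch

def ridge_base_learner(X_s, Y_s, X_q, lam=1.0, alpha=1.0, beta=0.0):
    n = X_s.shape[0]
    # Woodbury form W = X_s^T (X_s X_s^T + lam I)^{-1} Y_s: the linear system is
    # n x n (number of support examples), not d x d, which is cheap in few-shot.
    gram = X_s @ X_s.t() + lam * torch.eye(n)
    W = X_s.t() @ torch.linalg.solve(gram, Y_s)        # (d, n_way)
    # Calibrate regression outputs so they can be used as classification logits.
    return alpha * (X_q @ W) + beta

# Toy usage: a 5-way, 1-shot episode with 64-dimensional embeddings.
X_s, Y_s = torch.randn(5, 64), torch.eye(5)
X_q = torch.randn(15, 64)
predictions = ridge_base_learner(X_s, Y_s, X_q).argmax(dim=1)
```

In the episodic training loop described above, gradients would flow through this closed-form solve back into the embedding network, which is the point the authors make when distinguishing the setup from ordinary pre-training followed by fine-tuning.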
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 7, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, -1 ]
[ "SJeA_NNyxE", "r1l23D05y4", "SyxWU3iYkE", "Syg19STh6Q", "H1e3St5u67", "BJg6awLwk4", "rygq4xgwkN", "r1e-G4YL1N", "BkljQHC-T7", "iclr_2019_HyxnZh0ct7", "r1xPm1Kah7", "SklLz3csp7", "SyedD2rXaQ", "SJlghKO937", "Byg2xy8MpX", "rkevXdCb6m", "Syegwm5yaQ", "r1ggxa85nm", "SkeX8K6thm", "r1ggxa85nm", "iclr_2019_HyxnZh0ct7", "iclr_2019_HyxnZh0ct7", "iclr_2019_HyxnZh0ct7", "iclr_2019_HyxnZh0ct7" ]
iclr_2019_HyxzRsR9Y7
Learning Self-Imitating Diverse Policies
The success of popular algorithms for deep reinforcement learning, such as policy-gradients and Q-learning, relies heavily on the availability of an informative reward signal at each timestep of the sequential decision-making process. When rewards are only sparsely available during an episode, or rewarding feedback is provided only after episode termination, these algorithms perform sub-optimally due to the difficulty in credit assignment. Alternatively, trajectory-based policy optimization methods, such as the cross-entropy method and evolution strategies, do not require per-timestep rewards, but have been found to suffer from high sample complexity by completely forgoing the temporal nature of the problem. Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need. In this work, we introduce a self-imitation learning algorithm that exploits and explores well in the sparse and episodic reward settings. We view each policy as a state-action visitation distribution and formulate policy optimization as a divergence minimization problem. We show that with the Jensen-Shannon divergence, this divergence minimization problem can be reduced to a policy-gradient algorithm with shaped rewards learned from experience replays. Experimental results indicate that our algorithm performs comparably to existing algorithms in environments with dense rewards, and significantly better in environments with sparse and episodic rewards. We then discuss limitations of self-imitation learning, and propose to solve them by using Stein variational policy gradient descent with the Jensen-Shannon kernel to learn multiple diverse policies. We demonstrate its effectiveness on a challenging variant of continuous-control MuJoCo locomotion tasks.
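The abstract's reduction ("a policy-gradient algorithm with shaped rewards learned from experience replays") is easiest to picture as a small GAIL-style component, which the discussion further down in this record also alludes to. The sketch below is a guess at that structure rather than the paper's actual implementation: the `Discriminator` class, the log-sigmoid reward form, and the mixing coefficient `nu` are all assumptions introduced here purely for illustration.

```python
# Hedged sketch (assumptions, not the paper's code) of self-imitation reward
# shaping: a discriminator is trained to tell state-action pairs from the
# agent's best past trajectories apart from those of the current policy, and
# its output provides a dense per-step pseudo-reward mixed with the (possibly
# sparse) environment reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        # Logit of "this (s, a) looks like the stored high-return trajectories".
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def shaped_reward(disc, obs, act, env_reward, nu=0.8):
    # nu = 0 recovers the plain environment reward; nu > 0 adds imitation shaping.
    with torch.no_grad():
        imitation_reward = F.logsigmoid(disc(obs, act))
    return (1.0 - nu) * env_reward + nu * imitation_reward

# Toy usage with a sparse (all-zero) environment reward on a small batch.
disc = Discriminator(obs_dim=11, act_dim=3)
r = shaped_reward(disc, torch.randn(4, 11), torch.randn(4, 3), env_reward=torch.zeros(4))
```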
accepted-poster-papers
This paper proposes a reinforcement learning approach that better handles sparse-reward environments by using previously experienced roll-outs that achieve high reward. The approach is intuitive, and the results in the paper are convincing. The authors addressed nearly all of the reviewers' concerns. The reviewers all agree that the paper should be accepted.
train
[ "Hkgln7Zw14", "HJxRqW-v14", "r1g9FkFUk4", "B1gAK7lT2m", "r1lt10yBJE", "rkerMF4K0X", "HylcXomtC7", "rJxAv8QYAQ", "HkxD8U7tCQ", "H1xJ2emFA7", "BkeCkTGFAQ", "H1e1gh6sTX", "Skx-JJf62Q", "H1lUeG3vnQ" ]
[ "author", "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "We would merge pieces from the Appendix into the main sections for better coherence. Also, we would make our source code and scripts public. ", "\n1. Experiments in section 3.1 use a parameterized discriminator since a single network suffices for self-imitation. Experiments in section 3.2 use $\\psi$ networks for computational efficiency with policy ensembles.\n\n2. In practice, to get the complete SVPG gradient, we calculate the exploitation and exploration components, and then do a convex combination as: (1-p)*exploitation + p*exploration, where p is linearly decayed from 1 to 0. The temperature (T) is held constant at 0.5 (2D-navigation) and 0.2 (Locomotion).", "Thank you for your reply!\nAnd I have two more questions.\n\n1. Whether discriminator or $\\psi$ network did you use for getting results you write in your paper?\n2. What number you use for $\\alpha$ for SVPG and T for JS kernel?\n\nThank you!", "The paper proposes how previously experienced high reward trajectories can be used to generate dense reward functions for more for efficient training of policies in context of reinforcement learning. The paper does this by computing the state-action pair distribution of high rewarding trajectories in the replay buffer, and using a surrogate reward that measures the distance between this distribution and the current state-action pair distribution. The paper derives approximate policy gradients for this surrogate reward function. The paper then describes limitations of doing this: possibility of getting stuck in the local neighborhood of currently well-performing trajectories. It also describes an extension based on Stein variational policy gradients to diversify behavior of an ensemble of policies that are learned together. The paper shows experimental results on a number of MuJoCo tasks.\n\nStrengths:\n1. Adequately leveraging high-return roll-outs for effective learning of policies is an important problem in RL. The paper proposes and empirically investigates a reasonable approach for doing this. The paper shows how using the proposed additional rewards leads to better performance on the choses benchmarks than baseline methods without the proposed rewards.\n\n2. I also like that the paper details the short-comings of the proposed approach, and how these could be fixed.\n\nWeaknesses:\n1. The paper uses sparse rewards in RL as a motivation. However, the proposed approach crucially relies on the fact that a good trajectory has at least been encountered once in the past to be of any use. I am not sure if how the proposed approach does justice to the motivation in the paper. The paper should re-write the motivation, or better explain why the proposed method addresses the motivation.\n\n2. Additionally, the paper does not provide adequate experimental validation. The experiment that I think will make the case for the paper is one that shows the sample efficiency of the proposed approach over other baseline methods, when given a successful past roll-out. The current experimental setup emphasizes the sparse reward scenario in RL, and it is just not clear to me as to why this is a good benchmark to study the effects of the proposed method. \n\n3. The paper primarily makes comparisons to on-policy methods. This may not be a fair comparison, as the proposed method uses past trajectories from a replay buffer (to compute reward). Perhaps improvements are coming because of use of this off-policy information. 
The paper should design experiments to de-conflate this: perhaps by also comparing to how these additional rewards will compare in context of off-policy methods (like Q-learning).\n\n4. I also do not understand how the benchmark tasks were chosen? Are the MuJoCo tasks studied here a fair representative of MuJoCo tasks studied in literature, or are these selected in any manner? While selecting and modifying benchmarks for the purpose of making a specific point is acceptable, it is important to include benchmark results on a full suite of tasks. This can help understand (desirable or un-desirable) side-effects of proposed ideas.\n\nAfter reading author response and the extra experiments, I have changed my rating to 6 (from the original rating of 5).", "Thank you for providing a detailed reply. I hope authors will incorporate these points into the paper (specifically the results on a more comprehensive benchmark suite (my concern in my 4th point). \n\nI also hope authors will release code and scripts to reproduce the results in the paper, so as to make future comparisons possible.", "\nYou are correct in observing that if we use parameterized discriminator networks to estimate the ratio $r^{\\phi}_{ij} = \\rho_{\\pi_i}(s,a) / [\\rho_{\\pi_i}(s,a) + \\rho_{\\pi_j}(s,a)]$ for the SVPG exploration rewards, then we would need O(n^2) discriminator networks, for n policies in the ensemble. To ensure scalability to ensembles of large number of policies, we opt for explicit modeling of the state-action visitation density for each policy (i) by a parameterized network $\\psi_i$. With this, we can obtain the ratios for the SVPG exploration rewards using the n $\\psi$ network, reducing the complexity to O(n). Please check the recently added Appendix 5.8.2 in our revision for more details. We would be happy to answer any further questions you may have on this.", "\n1- Concerning “Points 1. and 2. under Weaknesses” : \n\nWe do not wish to claim or motivate that self-imitation would suffice if the task is “sparse” in the sense that most of the episodes don’t see *any* rewards. This would fall under the limitations of self-imitation which we discuss in the paper; we could rely on population-based exploration methods (e.g. SVPG, Section 2.3) and draw on the rich literature on single-agent exploration methods like curiosity/novelty-search or parameter noise to alleviate this to an extent. Instead, we focus on scenarios where “sparse” feedback is available within an episode. We will make this very clear in our revision. For example, our experiments in Section 3.1 consider tasks where some feedback is available in an episode - either only once at the end of the episode, or at very few timesteps during an episode. We find self-imitation to be highly beneficial (compared to standard policy gradients) on these “sparse” constructions. Some practical situations of the kind include a.) robotics tasks where rewards in an episode could be intermittent or delayed by arbitrary timesteps due to the inverse kinematics operations b.) cases where a mild feedback on the overall quality of the episode is available, but designing a dense reward function manually is prohibitively hard; an interesting example of this is [5].\n\nAlso, although our algorithm exploits “good” trajectories from agent’s past experience, the demands on the “goodness” of the trajectories are very relaxed. 
Indeed, the trajectories imitated during the initial phases of learning have quite low overall scores, and they gradually improve in quality.\n\n[5] Christiano, Paul F., et al. \"Deep reinforcement learning from human preferences.\" Advances in Neural Information Processing Systems. 2017.\n\n\n2- Concerning “Point 3. under Weaknesses -- comparison to off-policy RL methods”:\n\nOur approach makes use of a replay memory to store and exploit past good rollouts of the agent. Off-policy RL methods such as DQN, DDPG also accumulate agent experience in a replay buffer and reuse them for learning (e.g. by reducing TD-error). We run new experiments with a recent off-policy RL method based on DDPG - Twin Delayed Deep Deterministic policy gradient (TD3; [2]). Appendix 5.10 evaluates its performance on MuJoCo tasks under the various reward distributions we used in our paper. We find that the performance of TD3 suffers appreciably under the episodic case and when the rewards are masked out with 90% probability (p_m=0.9). We therefore believe that popular off-policy algorithms (DDPG, TD3) do not exploit the past experience in a manner that accelerates learning when rewards are scarce during an episode. The per-timestep (dense) pseudo-rewards that we obtain with the divergence-minimization objective help in temporal credit assignment, resulting in good policies even under the episodic and noisy (p_m=0.9) settings (Table 1, Section 3.1).\n\n[2] Fujimoto, Scott, Herke van Hoof, and Dave Meger. \"Addressing Function Approximation Error in Actor-Critic Methods.\" International Conference on Machine Learning. 2018.\n\n\n4- Concerning “Point 4. under Weaknesses ”: \n\nWe have added Appendix 5.7 with results on more MuJoCo tasks. Combined with Table 1. in the paper, we believe our overall set to be fairly representative. For reference, the PPO paper [6], which forms our baseline, uses the same set of benchmarks (Figure 3 in their paper). \n\n[6] Schulman, John, et al. \"Proximal policy optimization algorithms.\" arXiv preprint arXiv:1707.06347 (2017).", "\n6- Concerning “How is the priority list threshold and size chosen?”: \n\nOur implementation stores the top-C trajectories in the priority queue based on cumulative trajectory-return. We fix the capacity (C) to 10 trajectories for all our experiments. This number was chosen after a limited hyperparameter grid search on Humanoid and Hopper (Appendix 5.4). In general, we didn’t find our method to be particularly sensitive to the choice of C.\n\n\n7- Concerning “Would a softer version of the priority queue update do anything useful?”:\n\nIn our initial experiments, we tested with using more relaxed update rules for the priory queue, but found that storing the overall top-C trajectories gave the best results. Nonetheless, the various options for storing and reusing past experiences present interesting trade-offs, and we hope to look deeper into this in the future.\n\n\n8- Concerning “The update in (3) seems quite similar to what GAIL would do. What is the difference there?”: \n\nYes, as we mention in the derivation (Appendix 5.1), GAIL does a similar update, but using external expert trajectories rather than using self-imitation. An implementation-specific difference is that while GAIL uses discriminator networks to implicitly estimate the ratio required in the policy gradient theorem, we (when using SVPG exploration in Algorithm 2) learn separate state-action density estimation networks (psi), and explicitly compute the required ratios. 
This is done for reasons of computational efficiency (Appendix 5.8.2). \n \n\n9- Concerning “why higher asymptotic performance but is often slower in the beginning than the other methods in Fig 3”: \n\nConsider SparseHopper as an example. There is a local minima where the agent can stand still (i.e. no hopping) and collect the per-timestep survival bonus given for not falling down. Baseline algorithms such as PPO-Independent or SI-independent quickly get into this local minima since they greedily exploit the survival bonus readily available. Hence, they reach a score of ~1000 quickly. In, SI-Interact-JS, however, the JS repulsion forces the agents to be diverse and explore the state-space much more effectively. The highest scoring agent in this ensemble (which is plotted in Figure 3.) discovers the hopping behavior eventually. However, during its learning lifetime, it takes varied actions to reach states different from other agents, due to JS repulsion. The score grows gradually since many of the attempts in the beginning lead to the agent falling down (and therefore episode termination) in the process of trying something different. The agent does not quickly accumulate the survival bonus and stand still, unlike the baselines. The asymptotic score is higher since the forward hopping is rewarded higher compared to the survival bonus. ", "\n1- Concerning “Why is self-imitation more effective than standard policy gradients, and if the source of stability can be explained intuitively” : \n\nWe believe that learning pseudo-rewards with self-imitation helps in the temporal credit assignment problem in the sparse- or episodic-reward setting. For instance, in the episodic setting, where a reward is only provided at episode termination, standard policy gradient algorithms reinforce the actions towards the beginning of the episode based on a reward signal which is obtained after multiple timesteps and convolves the effect of many intermediate actions. This signal is potentially sparse and diluted, and may deteriorate with task horizon. With our approach, since we learn “per-timestep” pseudo-rewards with self-imitation, we expect this greedy signal to help in attributing credit to actions more effectively, leading to faster training.\n\nQualitatively, the stability of the self-imitation algorithm could also be understood by viewing it as a form of curriculum learning [4]. Unlike learning from perfect demonstrations by external experts, our learner at any point in time is imitating only a slightly different version of itself. The demonstrations, therefore, increase in complexity gradually over time, resulting in an implicit, adaptive curriculum which stabilizes learning and avoids catastrophic forgetting of behaviors. \n\n[4] Bengio, Yoshua, et al. \"Curriculum learning.\" Proceedings of the 26th annual international conference on machine learning. ACM, 2009.\n\n\n2- Concerning “Re-phrases in various sections”: \n\nWe have incorporated all the suggested changes in the revision with extra discussion. We have also added the missing reference to Guided Policy Search and expanded on GAIL. DIAYN (Eysenbach et al 2018) is included in Appendix 5.6.\n\n\n3- Concerning “Comparison to Oh et al. (2018)”: \n\nWe have added a new section (Appendix 5.9) focussed on the algorithm (SIL) by Oh et al. (2018). Therein, we mention the update rule for SIL and the performance of PPO+SIL on MuJoCo tasks under the various reward distributions we used in our paper. 
We summarize our observations here (please see Appendix 5.9 for more details). The performance of PPO+SIL suffers under the episodic case and when the rewards are masked out with 90% probability (p_m=0.9). Our intuition is that this is because PPO+SIL makes use of the “cumulative return” from each transition of a past good rollout for the update. When rewards are provided only at the end of the episode, for instance, cumulative return does not help with the temporal credit assignment problem and hence is not a strong learning signal. \n\n\n4- Concerning “Comparing SVPG exploration (Figure 3) to novelty/curiosity based exploration schemes”: \n\nWe have added a new section (Appendix 5.11) on comparing SVPG exploration to a novelty-based exploration baseline - EX2 [3]. The EX2 algorithm does implicit density estimation using discriminative modeling, and uses it for novelty-based exploration. We report results on the hard exploration MuJoCo tasks considered in Section 3.2, using author provided code and hyperparameters. Table 5 in Appendix 5.11 shows that we compare favorably against EX2 on the tasks evaluated. \n\n[3] Fu, Justin, John Co-Reyes, and Sergey Levine. \"Ex2: Exploration with exemplar models for deep reinforcement learning.\" Advances in Neural Information Processing Systems. 2017.\n\n\n5- Concerning “What is psi in appendix 5.3? ”: \n\nWe apologize for skimping the details on this. “psi” denotes the parameters of neural networks that are used to model the state-action visitation distribution (rho) of the policy. Therefore, for an ensemble of n policies, there are n “psi” networks. The motivation behind using these networks is as follows. To calculate the gradient of JS, we need the ratio denoted by r^{\\phi} in the paper. This ratio can be obtained implicitly by training a parameterized discriminator network. However, when using SVPG exploration with JS kernel, this method would require us to train O(n^2) discriminator networks, one each for calculating the gradient of JS between a policy pair (i,j). To reduce the computational and memory resource burden to O(n), we opt for explicit modeling of the state-action visitation distribution (rho) of the policy by a network with parameters “psi”. The “psi” networks are trained using the JS optimization (Equation 2.) and we can then obtain the ratio explicitly from these “psi” networks. We have added these details (and more) to Appendix 5.8.2. It also contains proper symbols (in Latex) for easier reading.", "\n1- Concerning “Section 2.3 being too dense” : \n\nWe have re-organized the writing. Specifically, we have added more details on SVPG exploration with the JS-kernel in Appendix 5.8. Appendix 5.8.1 includes some more intuition and theory behind Stein Variational Gradient Descent (SVGD) and Stein Variational Policy Gradient (SVPG). Appendix 5.8.2 contains details on our implementation such as calculation of SVPG exploration rewards by each agent, and state-value function baselines, along with better explanation of symbols used in our full algorithm (Algorithm 2).\n\n2- Concerning “Minor points”: \n\nThank you for pointing these out. We have changed Table 1. in the revision to include all the suggested changes, in the hope that the table becomes self-explanatory. 
We have also rephrased the text to clarify that we compare performance with two different reward masking values - suppressing each per-timestep reward r_t with 90% probability (p_m = 0.9), and with 50% probability (p_m=0.5).", "We would like to thank the anonymous reviewers for their comments and constructive feedback. We address each reviewer's comments individually and summarize the major additions to the revision here:\n\n1. Added Appendix 5.7 with results on more MuJoCo tasks\n2. Added Appendix 5.8 with SVPG background and our implementation details. \n3. Added Appendix 5.9 on comparison to Oh et al. (2018) [1]\n4. Added Appendix 5.10 on comparison to off-policy RL (TD3, Fujimoto et al. (2018)) [2]\n5. Added Appendix 5.11 on comparing SVPG exploration to a novelty-based baseline (EX^2, Fu et al. (2017)) [3]\n\n[1] Oh, Junhyuk, Yijie Guo, Satinder Singh, and Honglak Lee. \"Self-Imitation Learning.\" International Conference on Machine Learning. 2018.\n[2] Fujimoto, Scott, Herke van Hoof, and Dave Meger. \"Addressing Function Approximation Error in Actor-Critic Methods.\" International Conference on Machine Learning. 2018.\n[3] Fu, Justin, John Co-Reyes, and Sergey Levine. \"Ex2: Exploration with exemplar models for deep reinforcement learning.\" Advances in Neural Information Processing Systems. 2017. ", "I enjoyed reading your interesting submission, and I have one question about implementation.\n\nHow did you calculate JS kernel, k(theta_j , theta_i)=exp(-D_JS(rho_pi_theta_i, rho_pi_theta_j)/T)?\n\nI think in order to calculate D_JS(rho_pi_theta_i, rho_pi_theta_j), we have to train discriminators which differentiate between trajectories from rho_pi_theta_i and trajectories from rho_pi_theta_j. If this thought is right, we have to 28 discriminators for all combinations of 8 policies. However, this is not practical.\n\nIf replay memory is shared, D_JS can be calculated by using 2 discriminators, r^phi_i and r^phi_j. This is because rho_pi_theta_i/rho_pi_theta_j = rho_pi_theta_i/rho_pi_E * rho_pi_theta_E/rho_pi_theta_j = r^phi_i / (1-r^phi_i) * (1 -r^phi_j)/r^phi_j . However, in your paper, replay memories are not shared.\n\nTherefore, I would like to know how to calculate JS kernel.\n\nThank you!!", "The paper describes a method to improve reinforcement learning for task with sparse rewards signals.\n\nThe basic idea is to select the best episodes from the system's experience, and learn to imitate them step by step as the system evolves, aiming at providing a less sparse learning signal.\n\nThe math works out to a gradient that is of similar form as a policy gradient, which makes it easy to interpolate both of them. The resulting training procedure is a policy gradient that gets additional reinforcement of the system's best runs.\n\nThe experiments show the validity especially for the most extreme case (episodic rewards), while, as expected, for the other extreme of dense rewards, the method's effect is not consistently positive.\n\nThe paper then critiques its own method and identifies a critical weakness: the reliance on good exploration. I like that a lot. The paper goes on to suggest an extension to address this by training an ensemble, and shows the effectiveness of this for a number of tasks. 
However, I feel that the description of this extension is less clear than that of the core idea, and introduces too many new ideas and concepts in a too condensed text.\n\nThe paper seems a significant in that it provides a notable improvement for sparse-rewards tasks, which are a common sub-class of real-world problems.\n\nMy background is not RL. While I am quite confident in my understanding of the paper's math, I am not 100% familiar with the typical benchmark sets. Hence, I cannot judge whether the results include good baselines, or whether the task selection is biased. I can also not judge the completeness of the related work, and how novel the work is. For these questions, I hope that the other reviewers can provide more information.\n\nPros:\n - intuitive idea for a common problem\n - solution elegantly has the form of a modified policy gradient\n - convincing experimental results\n - self-critique of core idea, and extension to address its main weakness\n - nicely written text, does not leave a lot of questions\n\nCons:\n - while the core idea is nicely motivated and described and good to follow, Section 2.3 feels very dense and too short.\n\nOverall, I find the core idea quite intuitive and elegant. The paper's background, motivation, and core method are well-written and, with some effort, quite readable for someone who is not an RL expert. I found that several questions I had during reading were preempted promptly and addressed. However, the description of the secondary method (Section 2.3) is too dense.\n\nTo me, the paper solidly meets the threshold of publication. Since I have no good comparison to other papers, I rate it a \"clear accept\" (8).\n\nMinor points:\n\nI noticed a few superfluous \"the\", please double-check.\n\nIn Table 1, please use the same exponent for directly comparable numbers, e.g. instead of \"1.8e5 4.4e4\", say \"18e4 4.4e4\". Or best just print the full numbers without exponent, I think you have the space.\n\nWhen reading Table 1, I could bnot immediately line up \"PPO\" and \"Self-imitation\" in the caption with the table columns. It took a while to infer that PPO refers to \\nu=0, and SI to \\nu=0.8. Can you add PPO and SI to the table headings?\n\nYou define p as \"the masking probability\", but it is not clear whether that is the probability for keeping a \"1\" in the mask,\nor for masking out the value. I can only guess from the results. I suggest to rephrase as \"the probability of retaining a reward\". Also, how about using plain words in Table 1's heading, such as \"Noisy rewards\\nSuppressing 10% of rewards\", so that one can understand the table without having to search for its description in the text?\n", "Overall impression: \nI think that this is a well written interesting paper with strong results. One thing I’d have liked to see a bit more is an explanation of why self imitation is more effective than standard policy gradient? Where does the extra supervision/stability come from, and can this be explained intuitively? I’ve suggested some small changes/clarifications to be made inline, and a few more comparisons to add. But overall, I very much like this line of work and I recommend accepting this paper. \n\n\nAbstract:\nWe demonstrate its effectiveness on a number of challenging tasks. -> be more specific.\n\nThe term single-timestep optimization is not very clear. 
Can this be clarified?\n\nthey are more widely applicable in the sparse or episodic reward settings -> it is likely important to mention that they are agnostic to horizon of the task.\n\nRelated works: \nGuided Policy Search also does divergence minimization. GAIL considers the imitation learning work as a sort of divergence minimization problem as well, which should be explicitly mentioned. Other work for good exploration include DIAYN (Eysenbach et al 2018). The difference in resulting updates between (Oh et al) and this work should be clearly discussed in the methods section. \n\n“we learn shaped, dense rewards”-> too early in the paper for this to make sense. can provide some contextt\n\nSection 2.2:\nfully decides the expected return -> clarify this a bit. I think what you mean is that the dynamics are wrapped into this already, so it accounts for this, but this can be made explicit.\n\nSmall typos in appendix 5.1 (r should be replaced by the density ratio)\n\nThe update in (3) seems quite similar to what GAIL would do. What is the difference there? Or is the difference just in the fact that the experts are chosen from “self” experiences. \n\nHow is the priority list threshold and size chosen?\n
Would a softer version of the priority queue update do anything useful? Or would it just reduce to policy gradient when weighted by rewards?\n\nAppendices are very clear and very informative while being succinct!\n\nI would have liked to see Appendix 5.3 in the main text (maybe a shorter form) to clarify the whole algorithm \n\nWhat is psi in appendix 5.3? The algorithm remains a bit unclear without this clarification\n\nExperiments. \nOnly 1 question to answer in this section is labelled? Put 2) and 3) appropriately. \n\nCan a comparison to Oh et al 2018 be added to this for the sake of completeness? Also can this be compared to using novelty/curiosity based exploration schemes?\n\nCan the authors comment on why the method reaches higher asymptotic performance but is often slower in the beginning than the other methods in Fig 3. " ]
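For reference, the identity raised in the public question earlier in this thread (and answered by the authors in terms of per-policy $\psi$ density networks) can be written compactly. This is only a restatement of the expressions already quoted above: $\rho_E$ denotes a common reference distribution that both discriminators are trained against, and the identity holds only when that reference is shared.

```latex
% Restating the density-ratio trick and the JS kernel quoted in the thread above.
% With r^{\phi}_i = \rho_{\pi_i}/(\rho_{\pi_i} + \rho_E) for a shared reference \rho_E:
\[
\frac{\rho_{\pi_i}(s,a)}{\rho_{\pi_j}(s,a)}
  \;=\; \frac{\rho_{\pi_i}}{\rho_E}\cdot\frac{\rho_E}{\rho_{\pi_j}}
  \;=\; \frac{r^{\phi}_i}{1-r^{\phi}_i}\cdot\frac{1-r^{\phi}_j}{r^{\phi}_j},
\qquad
k(\theta_i,\theta_j) \;=\; \exp\!\Big(-\,D_{\mathrm{JS}}\big(\rho_{\pi_{\theta_i}},\rho_{\pi_{\theta_j}}\big)/T\Big).
\]
```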
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "r1lt10yBJE", "r1g9FkFUk4", "rkerMF4K0X", "iclr_2019_HyxzRsR9Y7", "HylcXomtC7", "H1e1gh6sTX", "B1gAK7lT2m", "HkxD8U7tCQ", "H1lUeG3vnQ", "Skx-JJf62Q", "iclr_2019_HyxzRsR9Y7", "iclr_2019_HyxzRsR9Y7", "iclr_2019_HyxzRsR9Y7", "iclr_2019_HyxzRsR9Y7" ]
iclr_2019_HyzMyhCcK7
ProxQuant: Quantized Neural Networks via Proximal Operators
To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works. Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov’s dual-averaging algorithm on a quantization-constrained optimization problem, we propose a more principled alternative approach, called ProxQuant, which formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization. We further perform theoretical analyses showing that ProxQuant converges to stationary points under mild smoothness assumptions, whereas variants such as the lazy prox-gradient method can fail to converge in the same setting.
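As a concrete picture of the update described in this abstract (a gradient step on the full-precision weights, then a prox step toward the binary set), here is a minimal sketch. It is an illustration under assumptions, not the authors' implementation: the closed-form quadratic-regularizer prox (a convex combination of the weight and its nearest binary point) and the linear schedule lambda_t = lambda * t follow the descriptions given later in this thread, while the function names and toy loss are invented here.

```python
# Minimal sketch (not the authors' code) of a ProxQuant-style step for binary
# quantization: SGD on the full-precision weights followed by a prox step that
# pulls each coordinate toward {-1, +1} with strength lam_t.
import torch

def prox_binary_quadratic(w, lam):
    # Closed-form prox of lam * ||w - sign(w)||^2, applied elementwise:
    # a weighted average of the weight and its nearest binary point.
    return (w + 2.0 * lam * torch.sign(w)) / (1.0 + 2.0 * lam)

def proxquant_step(weights, grads, lr, lam_t):
    with torch.no_grad():
        for w, g in zip(weights, grads):
            w -= lr * g                                # gradient step on full precision
            w.copy_(prox_binary_quadratic(w, lam_t))   # prox toward {-1, +1}

# Toy usage with the linear homotopy lambda_t = lambda * t.
w = torch.randn(10, requires_grad=True)
for t in range(1, 101):
    loss = ((w - 0.3) ** 2).sum()                      # stand-in for the training loss
    (g,) = torch.autograd.grad(loss, w)
    proxquant_step([w], [g], lr=0.05, lam_t=1e-3 * t)
```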
accepted-poster-papers
A novel approach for quantized deep neural nets is proposed, which is more principled than the commonly used straight-through gradient method. A theoretical analysis of the algorithm's convergence is presented, and empirical results show the advantages of the proposed approach.
test
[ "HJe54G3sRm", "BklDCt7c37", "S1eLOUDJn7", "rJlGBiktAX", "HJxhpp6DAm", "BJxebk0vA7", "HyxI9lvQAX", "r1gpSKtvam", "rygyZPnvpX", "SJxojtYwpm", "HJgzFtYvaX", "HJeb-tr93Q", "rJe9XiOgq7", "HJ7xolRtX" ]
[ "author", "official_reviewer", "official_reviewer", "public", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Thank you for the quick response after the rebuttal. We respond to the added comments in the following.\n\n--- “Novelty is limited”\nFirst, we would like to clarify that the difference between our method and BNN are two-fold: our method is a {non-lazy, soft} prox-gradient method whereas BNN (BinaryConnect) is {lazy, hard} (see the discussion in Section 2 and 3.) The difference between lazy projection and standard projection is important as well.\n\nSecond, our “motivating storyline” is not only the observation that BinaryConnect is Nesterov’s dual-averaging, but also the observation that BinaryConnect suffers from non-convergence on fairly simple toy problems (Figure 1), and that all lazy prox algorithms suffer from a fundamental information-theoretic limit due to the lack of gradient information as well (Figure 1 & Section 2.3). That poses an issue to address.\n\nMore importantly, the contribution of our paper is not merely “proposing yet another alternative to the (already abundant) literature of quantization algorithms”, but also “unveiling the properties of all these algorithms with theories (Section 5) and diagnostic experiments (Appendix C)”. We believe that it will indeed be \"an interesting addition to the literature\", as Reviewer 1 kindly commented. \n\n--- “Convergence is a decoration”\nWe wanted to point out that our theoretical results contain 3 parts: (1) convergence of ProxQuant, (2) non-convergence of lazy proximal gradient descent under the same problem assumptions, and (3) a characterization that BinaryConnect is very unstable and unlikely to converge in practice (with corroborating experiment). \n\nThe combination of (1) and (2) points out a clear advantage of ProxQuant over the lazy version in theory, which to the best of our knowledge is novel. Further, the counterexample we construct to show (2) is a very natural simple problem on 1d for binary quantization, which suggest a serious potential drawback of all lazy proximal algorithms in quantization applications. We believe that this result will convey an interesting message to the community about the limitation of the lazy projection mechanism.\n\nRe “convergence can be easily obtained”. We didn’t either claim that the convergence is hard. Indeed, before we present the proof in Appendix D.1, we already remarked that such type of convergence is fairly standard in the literature of proximal algorithms. We will add a reference and attribute the asymptotic critical point convergence guarantee to Atouch et al. (2013). We are not sure if our Ghadimi & Lan (2013) style of convergence **rate** result is new either, but given that it is not hard, we are not claiming any credits for that.\n\nThe point of presenting the convergence analysis is to have a self-contained discussion with elementary proofs that separates the standard and lazy prox-gradient algorithms for the problem of model quantization. This particular insight, to the best of our knowledge, is new to the current paper.\n\n- Attouch, H., Bolte, J., & Svaiter, B. F. (2013). Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Mathematical Programming, 137(1-2), 91-129.\n", "This paper proposed ProxQuant method to train neural networks with quantized weights. ProxQuant relax the quantization constraint to a continuous regularizer and then solve the optimization problem with proximal gradient method. 
The authors argues that previous solvers straight through estimator (STE) in BinaryConnect (Courbariaux et al. 2015) may not converge, and the proposed ProxQuant is better.\n\n I have concerns about both theoretical and experimental contributions\n\n1. The proposed regularizer for relaxing quantized constraint looks similar to BinaryRelax (Yin et al. 2018 BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights.), which is not cited. I hope the authors can discuss this work and clarify the novelty of the proposed method. One difference I noticed is that BinaryRelax use lazy prox-graident, while the proposed ProxQuant use non-lazy update. It is unclear which one is better.\n\n2. On page 5, the authors claim ‘’Our proposed method can be viewed as … generalization ...’’ in page 5. It seems inaccurate because unlike proposed method, BinaryConnect use lazy prox-gradient.\n\n3. What’s the purpose of equation (4)? I am confused and did not find it explained in the content.\n\n4. The proposed method introduced more hyper-parameters, like the regularizer parameter \\lambda, and the epoch to perform hard quantization. In section 4.2, it is indicated that parameter \\lambda is tuned on validation set. I have doubts about the fairness comparing with baseline BinaryConnect. Though BC does not have this parameter, we can still tune learning rate.\n\n5. ProxQuant is fine-tuned based on the pre-trained real-value weights. Is BinaryConnect also fine-tuned? For a CIFAR-10 experiments, 600 epochs are a lot for fine-tuning. As a comparison, training real-value weights usually use less than 300 epochs. BinaryConnect can be trained from scratch using same number of epochs. What does it mean to hard-quantize BinaryConnect? The weights are already quantized after projection step in BinaryConnect. \n\n6. The authors claim there are no reported results with ResNets on CIFAR-10 for BinaryConnect, which is not true. (Li et al. 2017 Training Quantized Nets: A Deeper Understanding) report results on ResNet-56, which I encourage authors to compare with. \n\n7. What is the benefit of ProxQuant? Is it faster than BinaryConnect? If yes, please show convergence curves. Does it generate better results? Table 1 and 2 does not look convincing, especially considering the fairness of comparison.\n8. How to interpret Theorem 5.1? For example, Li et al. 2017 show the real-value weights in BinaryConnect can converge for quadratic function, does it contradict with Theorem 5.1?\n\n9. I would suggest authors to rephrase the last two paragraphs of section 5.2. It first states ‘’one needs to travel further to find a better net’’, and then state ProxQuant find good result nearby, which is confusing. \n\n10. The theoretical benefit of ProxQuant is only intuitively explained, it looks to me there lacks a rigorous proof to show ProxQuant will converge to a solution of the original quantization constrained problem.\n\n11. The draft is about 9 pages, which is longer than expected. Though the paper is well written and I generally enjoyed reading, I would appreciate it if the authors could shorten the content. \n\nMy main concerns are novelty of the proposed method, and fairness of experiments. \n\n\n\n\n======================= after rebuttal =======================\n\nI appreciate the authors' efforts and am generally satisfied with the revision. I raised my score. \n\nThe authors show advantage of the proposed ProxQuant over previous BinaryConnect and BinaryRelax in both theory and practice. 
The analysis bring insights into training quantized neural networks and should be welcomed by the community. \n\nHowever, I still have concerns about novelty and experiments.\n\n- The proposed ProxQuant is similar to BinaryRelax except for non-lazy vs. lazy updates. I personally like the theoretical analysis showing ProxQuant is better, although it is based on smooth assumptions. However, I am quite surprised BinaryRelax is so much worse than ProxQuant and BinaryConnect in practice (table 1). I would encourage the authors to give more unintuitive explanation.\n\n- The training time is still long, and the experimental setting seems uncommon. I appreciate the authors' efforts on shortening the finetuning time, and provide more parameter tuning. However, 200 epochs training full precision network and 300 epochs for finetuning is still a long time, consider previous works like BinaryConnect can train from scratch without a full precision warm start. In this long-training setting, the empirical advantage of ProxQuant over baselines is not much (less than 0.3% for cifar-10 in table 1, and comparable with Xu 2018 in table 2).\n\n", "After the rebuttal:\n\n1. Still, the novelty is limited. The authors want to tell a more motivated storyline from Nestrove-dual-average, but that does not contribute to the novelty of this paper. The real difference to the existing works is \"using soft instead of hard constraint\" for BNN. \n\n2. The convergence is a decoration. It is easy to be obtained from existing convergence proof of proximal gradient algorithms, e.g. [accelerated proximal gradient methods for nonconvex programming. NIPS. 2015].\n\n---------------------------\nThis paper proposes solving binary nets and it variants using proximal gradient descent. To motivate their method, authors connect lazy projected SGD with straight-through estimator. The connection looks interesting and the paper is well presented. However, the novelty of the submission is limited.\n\n1. My main concern is on the novelty of this paper. While authors find a good story for their method, for example,\n- A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training\n- Training Ternary Neural Networks with Exact Proximal Operator\n- Loss-aware Binarization of Deep Networks\n\nAll above papers are not mentioned in the submission. Thus, from my perspective, the real novelty of this paper is to replace the hard constraint with a soft (penalized) one (section 3.2). \n\n2. Could authors perform experiments with ImageNet?\n\n3. Could authors show the impact of lambda_t on the final performance? e.g., lambda_t = sqrt(t) lambda, lambda_t = sqrt(t^2 lambda", "Hi authors!\n\nI have a question in regards to the binary quantization performed in the experiments. I am curious about how you choose the binary weights. Are they chosen from {-1,1} or {-\\alpha, \\alpha} with some adaptive real scalar \\alpha>0? \n\nI think the adaptive scalar is important to maintain satisfactory precision, e.g., see Xnor-net. But your convergence analysis seems tied to {-1,1}, not {-\\alpha, \\alpha}. \n\nCan the authors clarify this? Thank you.", "We have made a revision to our paper, adding a section on theoretical analysis, as well as some new experimental results. For convenience to the reviewers and readers, we have temporarily highlighted the changes in red (for updated experiments) and blue (for stuff related to theoretical results). \n\nDetails of the changes are summarized as follows. 
Noticably, we now have both theoretical and empirical evidence of our advantage over Yin et al. as described below.\n\n(1) Theoretical analysis\nWe added a new section for theoretical analysis (Section 5). Specifically, we show that our ProxQuant converges to stationary points under mild smoothness assumptions on the problem (Section 5.1). In the same setting, lazy prox-gradient method (e.g. BinaryRelax of Yin et al.) fails to converge in general -- we construct a fairly natural example in 1d to show that. \n\nOur previous convergence analysis of BinaryConnect is now Section 5.3, and the corresponding sign change experiment is now in Appendix C.\n\n(2) New experimental results\nWe have shortened the CIFAR-10 training from 600 epochs to 300 epochs (200 training + 100 BatchNorm layer stabilizing) and re-done the experiments. In this new setting, our ProxQuant maintains the advantage over BinaryConnect. This setting also matches the 300 epoch training setup of Yin et al., and our performance drop (~1% - 1.3%) is significantly lower than the reported results of their BinaryRelax (~2% - 4%). We have also added an experiment on ResNet-56.\n\nDue to space constraints and the added binarization results, we have moved the results of ternarization to Appendix B.\n\nFor the LSTM experiment, we performed an additional learning rate tuning for the binarized LSTMs. Improved PPW is seen on both BinaryConnect (419.1 -> 372.2) and ProxQuant (321.8 -> 288.5), and ProxQuant still maintains a significant advantage over BinaryConnect.", "Thank you again for the suggestions. We have revised our paper again, adding a new section on theoretical analysis and some new experimental results (details also appearing as a new public comment).\n\nAddressing the comments:\n\n(1) Novelty, comparison with Yin et al.: We have shown the advantage of our ProxQuant over their BinaryRelax with both theoretical and empirical evidence. \n\nTheoretically, we have added a non-convergence result for lazy prox-gradient method (e.g. their BinaryRelax) in Section 5.2, which works under the same setting in which our ProxQuant converges (Section 5.1). Specifically, the counter-example we constructed for the non-convergence result is a fairly natural problem in 1d and not very adversarial: quadratic loss, smoothed W-shaped regularizer. Together, we have a comprehensive comparison over lazy and non-lazy prox-gradient methods and shown the advantage of our non-lazy version (ProxQuant).\n\nEmpirically, we have added a comparison of the performance drops of binarization in Table 1, Section 4.1. Our performance drop on CIFAR-10 is typically 1% - 1.3%, much lower than the reported result (2% - 4%) in Yin et al..\n\n(2) Fairness: number of epochs, ResNet-56, comparison with Li et al.\n\nWe have re-done our CIFAR-10 binarization experiments with 300 epochs (200 training, 100 BatchNorm stabilization), half of what we had before. Our ProxQuant maintains the advantage over BinaryConnect (see Table 1). We have also done experiments on ResNet-56, on which the classification error of {FP, BinaryConnect, ProxQuant} is {6.54%, 7.97%, 7.70%}. \n\nSpecifically, about the comparison with Li et al.: their reported result on ResNet-56 was 8.10% for FP net and 8.83% for BinaryConnect. Though they achieved the <1% performance drop with BinaryConnect, we suspect that may come from the much inferior initializing full-precision net they used (8.10% compared with our 6.54%), so that binarization will cause a lower performance drop on that particular net. 
In fact their initializing net is even inferior than our full-precision ResNet-20 (8.06%).\n\n(3) LSTM experiments\nWe have done a learning rate tuning on binary LSTMs. Both BinaryConnect and our ProxQuant have improved perplexities (BinaryConnect: 372.2, ProxQuant: 288.5) and ProxQuant is still significantly better than BinaryConnect (see Table 2).\n", "I appreciate the authors' detailed response. My main concerns are still novelty and fairness. I am willing to raise my score after my main concerns are resolved. \n\n1. I understand there are other contributions such as the analysis of BinaryConnect and ProxQuant in the paper. However, I am still worried about the difference with Yin et al. Like in the authors' response, the main difference seems to be the lazy vs. non-lazy update. Which one is better? Could theoretical or empirical analysis be done for the difference?\n\n2. I am still concerned about experiments and would love to see the authors' response. It looks to me fine-tuning 400 epochs (much more than 200 epochs for standard training) is an uncommon setting. Regarding resnet-56 in Li et al. , it is more about relative number. Li et al. show that with same number of epochs (about 200 for standard training), BinaryConnect can approximate full precision result within <1% difference. In table 1, ProxQuant is better than BinaryConnect within <0.5% difference, but with 600 epochs for training. \n\nThanks.", "Thank you for the very concrete and thoughtful feedback! We have found the comments very useful and constructive for revising the paper.\n\nWe have made some initial revisions to address the comments -- please find our changes as well as our response to the comments below.\n\nNovelty and Fairness of Experiments\n\nPoint 1 -- As you have pointed out, the main algorithmic difference between ours and Yin et al. (2018) is that we use a non-lazy, standard prox-gradient method whereas their BinaryRelax is a lazy prox-gradient. \n\nThe further novelty of our paper lies in the new observation that BinaryConnect suffers from more optimization instability, which are both theoretically and empirically justified in our Section 5.\n\nWe have addressed Yin et al. (2018) as well as a few other related literature in the Prior Work subsection (within the “Principled Methods” paragraph), comparing them with our work and highlighting our novelty.\n\nPoint 5 -- Both BinaryConnect and our ProxQuant are initialized at pre-trained full-precision nets, which are trained with 200 epochs over CIFAR-10.\n\nFor quantization, our schedule is essentially 400 epochs training, and the additional 200 epochs after hard quantization is mostly for fine-tuning the BatchNorm layers. Such fine-tuning was found very useful for *both ProxQuant and BinaryConnect*. Indeed, for BinaryConnect, the signed net keeps changing (in a tiny proportion) even at epoch 400, and the BatchNorm layer hesitates around without being optimized towards any fixed binary net. Hard quantizing forces BinaryConnect to stay at a specific binary net, after which the BatchNorm layer can approach this optimal and boosts performance.\n\nWe have modified Section 4.1 to clarify this.\n\nTheoretical Results\n\nPoint 8 -- Li et al.’s convergence bound involves an additive error O(\\Delta) that does not vanish over iterations, where \\Delta is the grid size for quantization. Hence, their result is only useful when \\Delta is small. 
In contrast, we consider the original BinaryConnect with \\Delta = 1, in which case the error makes Li et al.’s bound vacuous.\n\nWe have added a remark after Theorem 5.1 to clarify that.\n\nPoint 9 -- We have rephrased the last two paragraphs in Section 5.2 a bit, to first state our finding and then analyze why it shows the power of ProxQuant over BinaryConnect.\n\nPoint 10 -- We have added a convergence guarantee for ProxQuant in Appendix D, showing that ProxQuant converges to a stationary point of the regularized loss.\n\nPresentation\n\nPoint 2 -- We have added that we are also using the non-lazy prox to highlight our difference from BinaryConnect.\n\nPoint 3 -- The Eq (4) was just an expanded formula for the prox-gradient method. As it did not really mean to say anything and the prox operator has been already defined, we have removed it for clarity. \n\nPoint 11 -- We would indeed like to shorten the paper. We will do that once we have a better idea of the potential additional materials that we would present. Please stay tuned.\n\nAdditional Experiments\n\nPoint 4, 6, 7 -- We will work on some additional experiments to address these points. Please stay tuned and we will let you know once it’s done. \n\nFor Point 6 -- The baseline classification error of Adam + BinaryConnect on ResNet56 in Li et. al is 8.10%, whereas we already achieve a better error 7.79% on ResNet44. We suspect this is due to the difference in the initializing FP net.", "We have added Yin et al., as well as a couple of other relevant literature, into our related work section.", "Thank you very much for the valuable feedback!", "Thank you for the valuable feedback! We have made a revision to the paper to address all the comments. We will respond to the specific questions in the following.\n\nNovelty --- We agree that there has been a large literature on replacing the straight-through estimator with prox-type algorithms. Our novelty comes in two aspects:\n\n(1) The proposal of combining non-lazy proximal gradient method with a finite (soft) regularization, as well as principled methods for quantizing to binary, ternary, and multi-bit.\n\n(2) A new challenge to the straight-through gradient estimate in its optimization instability through systematic theoretical and empirical investigations. In particular, we show that the convergence criterion of BinaryConnect is very stringent (Theorem 5.1), while our proposed ProxQuant is guaranteed to converge on smooth problems Theorem D.1). Our sign change experiment in Section 5.2 further shows that BinaryConnect is indeed highly unstable in its optimization, as well as giving a lower-performance solution, compared with ProxQuant.\n\nWe have updated the related work section (in particular the “Principled methods” part) to include these citations.\n\nImageNet experiments --- Due to time constraints, we didn’t have time to perform ImageNet experiments for this submission. We have experimental results on LSTMs (Section 4.2) to be complementary with the CIFAR-10 results. Performing ImageNet experiments will be of our interest as a future direction.\n\nExperiments with \\lambda_t --- We have thought about that, but we chose to use the linear scheme \\lambda_t = \\lambda * t for simplicity and to demonstrate that a simple choice would work well. We suspect that changing the schemes would not boost the performance by a great deal -- but we would like to test it experimentally. Please stay tuned and we would potentially add that in our next revision. 
\n", "This paper proposes a new approach to learning quantized deep neural networks, which overcomes some of the drawbacks of previous methods, namely the lack of understanding of why the straight-through gradient works and its optimization instability. The core of the proposal is the use of quantization-encouraging regularization, and the derivation of the corresponding proximity operators. Building on that core, the rest of the approach is reasonably standard, based on stochastic proximal gradient descent, with a homotopy scheme.\n\nThe experiments on benchmark datasets provide clear evidence that the proposed method doesn't suffer from the drawbacks of the straight-through gradient, thus contributing to the state of the art of this class of methods.\n\n", "Thanks for bringing the work by Yin et al. to our attention. We were not aware of this paper and did our work independently. We will carefully address this work in our next revision.\n\nWe would like to take this opportunity to point out several major differences between our work and Yin et al.:\n\n(1) While we both arrived at the observation that BinaryConnect has a simple expression (our Eq (1) and Yin et al.’s Eq (12)), Yin et al. did not point out that this is exactly the dual-averaging algorithm or the lazy-projected gradient descent with constraint set {-1, 1}^d, which dates back to at least Nesterov (for the convex case):\n\n- Nesterov, Y. (2009). Primal-dual subgradient methods for convex problems. Mathematical programming, 120(1), 221-259.\n\n(2) Our algorithm is in fact *different* from Yin et al.: they used the lazy proximal gradient descent (Eq (10), Yin et al.), whereas we used the standard non-lazy proximal gradient descent (our Eq (5)), which is one step further removed from the straight-through gradient method.\n\n(3) We proposed and experimented with (1) non-smooth L1-like regularizers for binary quantization; (2) multi-bit quantization with adaptive levels, neither of which is covered in Yin et al.\n\n(4) Our theoretical insights on BinaryConnect (Figure 1 and Section 5) are novel, and in stark contrast with Yin et al. Our Theorem 5.1 shows that the actual convergence criterion of BinaryConnect is very stringent. We provide a simple 1-d example of such non-convergence in Figure 1.\n\nOur further experimental evidence (Section 5.2) shows that BinaryConnect indeed fails to converge on CIFAR-10 in every run, demonstrating that the conditions in Yin et al.’s convergence theorem are quite unlikely to hold in practice. ", "The authors should have cited the paper by Yin et al., which first appeared on arXiv in Jan 2018: https://arxiv.org/pdf/1801.06313.pdf\n\n1. In section 3.1, the authors propose to replace the hard constraint that imposes the quantization of weights with, for example, a quadratic penalty/regularizer. The formula of the proximal operator for the quadratic regularizer is derived, which is a weighted average between the weights to be quantized and the quantized weights, as shown in items (1)&(2) below Eq. (11) on page 6. These contributions are the same as those in section 2.3 of the earlier paper by Yin et al. Proposition 2.3 in Yin et al.'s paper provided essentially the same proximal operator formula. \n\n2. The authors observe that the BinaryConnect iteration can be nicely expressed by Eq. (1) on page 4. The original BinaryConnect paper did not present it explicitly in this way. Their observation of Eq. (1) is basically the same as Eq. (12) on page 9 in Yin et al.'s paper. \n" ]
[ -1, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "S1eLOUDJn7", "iclr_2019_HyzMyhCcK7", "iclr_2019_HyzMyhCcK7", "iclr_2019_HyzMyhCcK7", "iclr_2019_HyzMyhCcK7", "HyxI9lvQAX", "r1gpSKtvam", "BklDCt7c37", "rJe9XiOgq7", "HJeb-tr93Q", "S1eLOUDJn7", "iclr_2019_HyzMyhCcK7", "HJ7xolRtX", "iclr_2019_HyzMyhCcK7" ]
iclr_2019_HyzdRiR9Y7
Universal Transformers
Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
accepted-poster-papers
This paper presents Universal Transformers, which generalize Transformers with recurrent connections. The goal of Universal Transformers is to combine the strength of feed-forward convolutional architectures (parallelizability and global receptive fields) with the strength of recurrent neural networks (sequential inductive bias). In addition, the paper investigates a dynamic halting scheme (by adapting Adaptive Computation Time (ACT) of Graves 2016) to allow each individual subsequence to stop recurrent computation dynamically. Pros: The paper presents a new generalized architecture that brings reasonable novelty over the previous Transformers when combined with the dynamic halting scheme. Empirical results are reasonably comprehensive and the codebase is publicly available. Cons: Unlike RNNs, the network recurs T times over the entire sequence of length M; thus it is not a literal combination of Transformers with RNNs, but only inspired by RNNs. Thus the proposed architecture does not precisely replicate the sequential inductive bias of RNNs. Furthermore, depending on how one views it, the network architecture is not entirely novel in that it is reminiscent of the previous memory network extensions with multi-hop reasoning (--- a point raised by R1 and R2). While several datasets are covered in the empirical study, the selected datasets may be biased toward simpler/easier tasks (--- R1). Verdict: While key ideas might not be entirely novel (R1/R2), the novelty comes from the fact that these ideas have not been combined and experimented with in this exact form of Universal Transformers (with optional dynamic halting/ACT), and that the empirical results are reasonably broad and strong, while not entirely impressive (R1). Sufficient novelty and substance overall, and no issues that are dealbreakers.
train
[ "SylL9Yz1lN", "rkginvfklN", "rklvRIQR1N", "HyxfZDmCk4", "r1xW6d1jCX", "rkxMwMsFn7", "Hyx3t4h5A7", "Skl0xm35CX", "SkxrQ435AQ", "B1luhGh90X", "ByewREh90m", "SkeCBTIYCQ", "SklQ8hSt07", "BkgUZgHKCX", "Sye8Myd937", "ByeMxPX9nm" ]
[ "author", "author", "public", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This is incorrect. Please see our response to the same comment with the heading \"Potentially wrong claim in this paper\".", "Thanks for your comment.\n\nThe main point here is that in [1] the authors assume arbitrary-precision arithmetic, as clarified in their responses on OpenReview where they noted \"Our proofs are based on having unbounded precision for internal representations [...]\". Therefore, as mentioned in their section \"The need of arbitrary precision\", \"[...] the Transformer with positional encodings and fixed precision is not Turing complete.\" In other words, in practice (i.e. assuming fixed-precision arithmetic), the Transformer is *not* computationally universal.\n\nTo see this, note that in fixed-precision arithmetic a single multiply is O(1) (and so are the nonlinearities). Therefore the computation of the fixed number of attention layers in the Transformer is at most O(n^2), which is polynomial time, while there exist computable functions that are not computed in polynomial time. Or stated in another way: If a model only has a specific time-window, like O(n^2), there are problems it cannot solve, hence it cannot be universal (see also [2] for more on this).\n\nIn the Universal Transformer, on the other hand, this time-window is *not* fixed (see Appendix B in the revised version of our paper on OpenReview for an intuitive example). As pointed out by AnonReviewer2 below, we further want to emphasize that this is because the recurrence resulting from tying the weights allows one to vary the number of time-steps T arbitrarily at inference time (i.e. you can train with T=4 and test with any T). This potentially unbounded time-window (which is only possible because of its recurrence) is what makes UT computationally universal. \n\nWe will clarify these points in the revised version of the paper.\n\n----\n[1] https://openreview.net/forum?id=HyGBdo0qFm&noteId=HyGBdo0qFm\n[2] https://en.wikipedia.org/wiki/Time_hierarchy_theorem", "The claim stated in this paper \"Transformers are not Turing-complete\" is potentially wrong. It's proved in [1] that Transformer is Turing-complete. It is definitely necessary to address this concern before this paper can be accepted.\n\n[1] https://openreview.net/forum?id=HyGBdo0qFm&noteId=HyGBdo0qFm", "The claim stated in this paper \"Transformers are not Turing-complete\" is wrong. It's proved in [1] that Transformer is Turing-complete.\n\n[1] https://openreview.net/forum?id=HyGBdo0qFm&noteId=HyGBdo0qFm", "Thanks for your feedback. \nRegarding the following argument:\n>>> * In UT, parameters are tied across layers (i.e. the same self-attention and the same transition function is applied across recurrent steps); Transformer has different weights for each layer / step. This is important because a UT trained on T=4 steps can be evaluated using any T, whereas a Transformer trained with T layers/steps can only be evaluated for the same T steps.\nI guess I had understood this but had not realised the implications. To make the paper persuasive, it might be worth emphasising this specific point.", "My summary: A new model, the UT, is based on the Transformer model, with added recurrence and dynamic halting of the recurrence. 
The UT should unite the computational universality properties of Neural Turing Machines and Neural GPU with good performance on disparate language and algorithmic tasks.\n\n(I have read your author feedback and have modified my rating according to my understanding.)\n\nReview:\nThe paper is well written and proofread, concrete and clear. The model is quite clearly explained, especially with the additional space of the supplementary material, appendices A and B (note fig 4 is less good quality than fig 2 for some reason) -- I’m fine with the use of the Supp Mat for this purpose.\n \nThe experiments have been conducted well, and demonstrate a wide range of tasks, which seems to suggest that the UT has pretty general purpose. The range of algorithmic tasks is limited, e.g. compared to the NTM paper.\nI miss any experimental details at all on training.\nI miss a comparison to Neural GPU and Stack RNN in 3.1, 3.2.\n\nI miss a proof that the UT is computationally equivalent to a Turing machine. It does not have externally addressable, shared memory like a tape, and I’m not sure how to transpose read/write heads either.\n\nThe argument that the UT offers a good balance between inductive bias and expressivity is weak, though it may be the best one can hope for of a statistical model in a way. I note that in 3.1, the Transformer overfits, while it seems to underfit in 3.3 (lower LM and RC accuracy, higher LM perplexity), while the UT fare well, which suggests that the UT hits the balance better than the Transformer, at least.\n\nFrom the point of view of network structure, it seems natural to lift further constraints on the model: \nwhy should width of intermediate layers be exactly equal to sequence length?\nwhy should all hidden state vectors be size $d$, the size of the embeddings chosen at the first layer, which might be chosen out of purely practical reasons like the availability of pre-trained word embeddings?\n\nWhat is the contribution of this work? It starts from the Transformer, the ACT idea for dynamic halting in recurrent nets, the need for models fit for algorithmic tasks. \nThe UT’s building blocks are near-identical to the Transformers (and the paper is upfront and does a good job of explaining these similarities, fortunately)\n- cf eq1-5: residuals, multi-headed self attention, and layer norm around all this. \n- shared weights among all such units\n- encoder-decoder architecture\n- autoregressive decoder with teacher forcing\n- decoder units like the encoder’s but with extra layer of attention to final output of encoder\n- coordinate embeddings\nThe authors may correct me, but I believe that the UT with FC layers is exactly identical to the Transformer described in Vaswani 2017 for T=6. \nSo this paper introduces the idea of varying T, interprets it as a form of recurrence, and adds dynamic halting with ACT to that. Interestingly, the recurrence is not over sequence positions here.\nThis contribution is not major, on the other hand the experimental validation suggests the model is promising.\n\nTypos and writing suggestions\nabove eq 8: masked such that -> masked so that\neq 8: dimensions of O and H^T are incompatible: d*V, m*d; to evacuate the notation issue for transposition, cf footnote 1, here and elsewhere, you could use either ${^t A}$ or $A^\\top$ or $A^\\intercal$. 
You could also write $t=T$ instead of just $T$.\nsec3.3 line -1: designed such that -> designed so that\nTowards the beginning of the paper, it may be useful to stabilise terminology for $t$: depth (as opposed to width for $m$), time steps, recurrence dimension, revisions, refinements\n\n", "We thank the reviewer for the thorough review, and respond below. We have also updated the paper to address these comments.\n\n>> “What is the contribution of this work [...]”\n\nWe introduce two changes to the Transformer architecture (namely adding recurrence and dynamic computation) which: \n\n1) increase the model’s theoretical capabilities (make it Turing-complete), \n2) significantly improve results (compared to standard Transformer) on all tasks that it was evaluated on including large-scale MT (UT improves over standard Transformer by 0.9 BLEU on WMT14 En-De), and lastly \n3) also increase the *types* of tasks Transformer can learn in the first place (eg a standard Transformer fails on bAbI (solves only 50% of tasks; see Table 1), is vastly outperformed by LSTMs on subject-verb agreement (Table 2), and achieves a test perplexity of 7,321 on LAMBADA (Table 3); on the other hand UT solves 100% of bAbI tasks, outperforms LSTMs on SVA prediction, even performing progressively better as the number of attractors increases, and achieves a state-of-the-art test perplexity of 142 on LAMBADA).\n\nWhile we agree (and readily point out throughout) that these are two fairly simple architectural changes, we do want to point out that this yields a new type of parallel-in-time recurrent self-attentive model which blends the best of both worlds of RNNs and Transformers, is theoretically superior to standard Transformers, and practically leads to vastly improved results across a much wider range of tasks, as mentioned above. \n\n>> Range of algorithmic tasks limited; experimental / training details missing\nThe main purpose of evaluating our model on algorithmic tasks is to probe its ability for length generalization in a controlled setup, where we train on 40 symbols and test on 400 symbols. We intentionally chose three simple tasks, i.e. copy, reverse, and addition to mainly focus on the length generalization aspect of the problem, and as can be seen, Transformers and LSTMs perform poorly in this setup in terms of sequence accuracy, while UT is doing a much better job (despite the fact that it’s not trained with a custom curriculum learning like Neural GPU to perform well on these tasks). Furthermore, we also tested our model on Learning-to-Execute tasks which can be considered in the family of algorithmic tasks.\n\nWe have added additional experimental and training details to the revised version of the paper.\n\n>> I miss a comparison to Neural GPU and Stack RNN in 3.1 and 3.2\n\nThis is because for each of the tasks we only reported the state-of-the-art / best performing baselines and Neural GPUs and Stack RNNs have been outperformed by other methods for both bAbI (3.1) and subject-verb agreement prediction (3.2).\n\n>> I miss a proof that the UT is computationally equivalent to a Turing machine. It does not have externally addressable, shared memory like a tape, and I’m not sure how to transpose read/write heads either.\n\nThe proof included in the paper goes by reduction from the Neural GPU which in turn goes by reduction from cellular automata. So this line of proof does not operate directly on a tape or read/write heads, it starts from cellular automatas’ universality (like the game of life). 
We have also added an Appendix B to elaborate on this with an example. \n", "We thank the reviewer for the thorough review, and respond below. We have also updated the paper to address these comments.\n\n>> Questions around Universality of UT\n\nThe main ingredient for the universality of UT comes from the recurrence in depth. Unbounded memory is also important, but it’s the sharing of weights combined with adaptive computation time that brings universality -- even with unbounded size, the standard Transformer would not be universal. We have added an Appendix B to elaborate on this with an illustrative example. \n\n>> More detailed descriptions of the tasks\n\nWe’ve added an appendix D, which provides more detail on the tasks and datasets.\n\n>> 3. In the discussion, the crucial difference between UT and RNN is that RNN is stated to be that RNN cannot access memory in the recurrent steps while UT can. This seems to be the case for not just UT but any Transformer-type model by construction.\n\nThis is correct in the sense that UT, like transformer, can access memory in each of its processing steps. But the crucial difference is that UT, unlike transformer, is recurrent in its steps (similar to RNNs), where the standard Transformer is like a deep feed-forward model where each step is computed using a separate, learned layer. So, as we stated in the paper, “UTs combine the parallelizability and global receptive field (access to the memory) of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs”. As the experiments demonstrate, this *combination* yields very strong results across a wider range of tasks than either on its own.\n\n>> 4. The authors stated that the “recurrent step” for RNN is through time (as the authors stated) while the “recurrent step” in UT is not through time. [...] In this sense, we may argue that the UT cannot access memory across its own t (stacking across t). [...]\n\nYes, this is a good point and indeed correct in terms of the model as reported in the paper. We did also implement a variant of UT where in every step (in depth “t”) the model attends to the output of all the previous steps (not just the last one; i.e. it has access to memory across t), but it didn’t improve results in our experiments. We speculate that this may be because being able to access memory in time (i.e. across sequence length), in particular for language tasks, is more important than being able to access all the previous transformations (i.e. access memory in depth). \n\nFurthermore, we also note that the maximum number of steps in depth (denoted $T$ in the paper) is typically *much fewer* than the maximum length of the sequences (denoted $m$ in the paper). This makes access to previous transformations less useful across \"recurrent steps\" for UTs as the recurrence allows the model to memorize its transformations across the shorter paths in depth (due to vanishing gradient playing a smaller role), and so being able to look up memory in each step (“across its own t” as the reviewer mentions) therefore becomes less useful.", ">> why should width of intermediate layers be exactly equal to sequence length?\n\nIf we understand correctly, the question is “Why only have one vector per input symbol at every intermediate layer/step?”. With the self-attention mechanism, both in Transformer and the Universal Transformer at each layer/step, we revise the representation of each symbol given the representations of all the other input symbols in the previous layer/step. 
Thus, we need vectors representing each symbol in the input at each intermediate layer/step (illustrated in Fig. 1 in the paper). \n\n>> why should all hidden state vectors be size $d$, the size of the embeddings chosen at the first layer, which might be chosen out of purely practical reasons like the availability of pre-trained word embeddings?\n\nIndeed, there is no architectural constraint in UT for having the same size for the hidden state and input/output embeddings (same as with standard Transformer). These are independent hyper-parameters and one can set different values for them, although this has not really been done in any other transformer-based work as far as we are aware. \n\n>>The authors may correct me, but I believe that the UT with FC layers is exactly identical to the Transformer described in Vaswani 2017 for T=6. \n\nNo, there are several differences (which prove to be important theoretically and in practice):\n\n* In UT, parameters are tied across layers (i.e. the same self-attention and the same transition function is applied across recurrent steps); Transformer has different weights for each layer / step. This is important because a UT trained on T=4 steps can be evaluated using any T, whereas a Transformer trained with T layers/steps can only be evaluated for the same T steps.\n* Besides the position embedding, we also have time-step embeddings, which are combined into (essentially 2-D) “coordinate embeddings”\n* We introduce the coordinate embedding at the beginning of each step (not just once at t_0)\n* Lastly, ACT makes T dynamic for each position, whereas with Transformer T is static.\n\n>> So this paper introduces the idea of varying T, interprets it as a form of recurrence, and adds dynamic halting with ACT to that. Interestingly, the recurrence is not over sequence positions here.\n\nIt is in fact the other way around: We introduce recurrence over processing steps (by sharing/tying the transition weights), and that allows us to vary T. We then add ACT to that. \n\n(As noted above: You cannot vary T / number of layers between training and testing in a standard Transformer as it is trained with a different set of weights for each of the T layers.)\n\n>>Typos and writing suggestions\nThanks, we’ve updated these in the revised version. We also increased the resolution of the image in the Figure 4.", ">>3. Although evaluated on multiple datasets and tasks, they only cover simple QA task and EN-DE translation task. Comparing to other papers applying modifications to Transformer, it is better to include at least one heavy task on large/challenging dataset/task. \n\nWe chose an array of 6 different tasks (ranging from smaller and more structured, to large-scale in the case of the WMT machine translation experiments) in order to measure and highlight different capabilities of UT compared to other models:\n\n* We chose bAbI-QA since its set of 20 different tasks each tests a unique aspect of language understanding and reasoning. Besides this, the bAbI-1k data set (as opposed to the 10k version) is quite a challenging setup since a model should be very data efficient to be able to get reasonable results on this data, and as we show, the Transformer (and LSTMs for that matter) are *not* able to solve these tasks. Therefore, given that these state-of-the-art sequence models fail here, we believe evaluating on these tasks to be a reasonable first step to benchmark the capabilities of UTs against other models on (admittedly simpler) structured linguistic inference tasks. 
\n* Algorithmic tasks and LTE tasks are also considered as a set of controlled experiments that first of all helps us to compare the model with other theoretically-appealing models like Neural GPU, and to test the models in terms of some specific aspects such as length-generalization or ability to model nesting in the input source code (where again, LSTMs and the Transformer perform very poorly).\n* The subject-verb agreement task is chosen as it has been shown [1] that the lack of recurrence can prevent the Transformer from solving this task, whereas we show that the Universal Transformer easily solves it and in fact improves as the task gets harder, i.e. more attractors are introduced (last paragraph, Sec 3.2).\n* Lambada is a challenging large-scale dataset which highlights the difficulties of incorporating broader context in the task of language modeling. Achieving SOTA on this dataset is further evidence that the Universal Transformer provides a better inductive bias for language understanding. \n* And finally, experiments on the large-scale machine translation task, WMT2014-ENDE, show that the Universal Transformer is not only a theoretically-appealing model, but also a model that performs well on practical real-world tasks.\n\nWe believe that, together, this set of 6 diverse tasks highlights the different strengths and weaknesses of UT, especially compared to the well established LSTM and Transformer baselines, and we leave more investigation with more datasets/tasks for future studies. \n---------------------------------------------------------------\n[1] Tran, Ke, Arianna Bisazza, and Christof Monz. \"The Importance of Being Recurrent for Modeling Hierarchical Structure.\" arXiv preprint arXiv:1803.03585 (2018).\n \n\n>>4. On machine translation task, why does the model without dynamic halting achieve the SOTA performance? This is in contrast to the claim of the advantage of using dynamic halting.\n\nThe advantage of dynamic halting is that it mainly helps in the smaller (bAbI, SVA) and more structured tasks (Lambada). On MT we achieved marginally better results without it. We believe this is because dynamic halting acts as a useful regularizer on the smaller tasks, and is therefore not as useful when more data is available in the large-scale MT task. We mention this in the discussion of our results, but we emphasize this even more in the revised version of the Introduction.\n\n>> 5. The ablation studies focus only on the dynamic halting, but what if weight sharing is removed from the UT?\n\nAs noted above, UT without weight-sharing (across depth) is not recurrent (as separate transition functions are learned for each step/”layer”), so it cannot generate a variable number of revisions / processing steps, and therefore also cannot use dynamic halting. It is only with shared transition blocks that the model becomes recurrent, allowing the use of dynamic halting / ACT.", "We thank the reviewer for the thorough review and respond below. We have also updated the paper to address these comments.\n\n>>extends Transformer by recursively applying a multi-head self-attention block, rather than stack multiple blocks in the vanilla Transformer. An extra transition function is applied between the recursive blocks\n\nTo avoid any potential confusion about the architecture, we note that the {multi-head self-attention + transition}-block is applied recursively *as a whole*. 
The Transition function is not “extra”, it also exists in the standard Transformer, but the difference is that we apply the same Transition function at every layer / step (by tying the weights). This makes the model recurrent (in “depth” or in its concurrent processing steps), which then allows us to vary the number of steps and add dynamic halting -- both impossible with the standard Transformer architecture. \n\n\n>>it also uses a dynamic adaptive computation time (ACT) halting mechanism on each position, as suggested by the previous ACT paper\n\nACT was introduced and applied in the context of a sequential RNN model where each symbol is processed one after the other, but with a variable number of steps each. However we apply ACT concurrently to all symbols (i.e. in a parallel-in-time model). It has the same effect of allowing a variable number of processing steps per symbol, but we want to emphasize that the way it is used in UT is different from the original ACT paper (in depth vs in sequence length / time).\n\n>>1. [...] The idea behind UT is similar to memory networks and multi-hop reasoning. \n\nYes, indeed, the idea behind UT is related to memory networks. We mentioned this briefly (last paragraph of Section 4), but have expanded on this in the updated version: In UT, similar to dynamic memory networks, there is an iterative attention process which allows the model to condition its attention over memory on the result of previous iterations. As we also show in the visualization of the attention distributions for the bAbI task (Appendix F in the revised paper), we can see that there is a notion of temporal states in UT, where the model updates the memory (states) in each step based on the output of previous steps, and this chain of updates can indeed be viewed as steps in a multi-hop reasoning process. \n\n>>2. The recursive structure is not applied to the input sequence, so UT does not have the advantage of RNN/LSTM on capturing sequential information and high-order features.\n\nWe disagree with this statement: In self-attentive parallel-in-time models (such as Transformer or UT) information is exchanged between symbols (i.e. sequential information) using the self-attention mechanism. Therefore, in the first step each symbol representation is already conditioned on every other symbol (i.e. includes first-order features). However, as this process is continued, with each additional processing step UTs are in fact able to capture higher-order features between symbols.\n", "oops, the typo I mentioned exist in your arXiv submission rather than openreview submission, sorry about the mistake.\nAlso thanks for your notice about eqn 5.", "Thanks for the comment. If you download and check the pdf of our submission in OpenReview, equation 4 is in fact $H^t=LayerNorm(A^t +Transition(A^t))$, and not $H^t=LayerNorm(A^{t-1}+Transition(A^t))$.\n\nThere is, however, a small typo in eqn 5. It should be $A^t =LAYERNORM((H^{t−1}+P^t ))+MULTIHEADSELFATTENTION(H^{t−1}+P^t ))$ instead of $A^t =LAYERNORM(H^{t−1}+MULTIHEADSELFATTENTION(H^{t−1}+P^t ))$, as the residual connection in our model adds up the input \"with coordinate embedding\" to the state. We already fixed this in the revised version of our submission and will upload it to OpenReview soon. 
", "In eq 4, you wrote $H^t=LayerNorm(A^{t-1}+Transition(A^t))$.\nBut according to your text description and figure 4, I suppose it should be $H^t=LayerNorm(A^t +Transition(A^t))$, otherwise, there would be a cross-step residual connection which is not mentioned in the paper.", "This paper extends Transformer by recursively applying a multi-head self-attention block, rather than stack multiple blocks in the vanilla Transformer. An extra transition function is applied between the recursive blocks. This combines the idea from RNN and attention-based models. But the RNN structure here is not applied to the input sequence, but to the sequence of blocks inside the Transformer encoder/decoder. In addition, it also uses a dynamic adaptive computation time (ACT) halting mechanism on each position, as suggested by the previous ACT paper. In fact, it can be seen as a memory network with a dynamic number of hops at the symbol level. \n\nThe paper is well-written and easy to follow. The experimental results demonstrate that the proposed model can achieve state-of-the-art prediction quality in several algorithmic and NLP tasks.\n\nPros\n1. The proposed UT is compatible with both algorithmic and NLP tasks by combining the Transformer with weight sharing of recurrence and dynamic halting. In contrast, previous algorithmic and NLP takes can only be solved by more specific neural architectures (e.g., NTM for algorithmic tasks and the Transformer for NLP tasks).\n2. The empirical results verify the effectiveness of the UT on several benchmarks. \n3. The careful experimental analyses not only show the insight of dynamic halting in QA task but demonstrate the ACT is very useful for algorithmic tasks. \n4. The publicly-released codes could make great contributions to the NLP community. \n\nCons\n1. It proposes an incremental change to the original Transformer by introducing recursive connection between multihead self-attention blocks with ACT. The idea behind UT is similar to memory networks and multi-hop reasoning. \n2. The recursive structure is not applied to the input sequence, so UT does not have the advantage of RNN/LSTM on capturing sequential information and high-order features. \n3. Although evaluated on multiple datasets and tasks, they only cover simple QA task and EN-DE translation task. Comparing to other papers applying modifications to Transformer, it is better to include at least one heavy task on large/challenging dataset/task. \n4. On machine translation task, why does the model without dynamic halting achieve the SOTA performance? This is in contrast to the claim of the advantage of using dynamic halting.\n5. The ablation studies focus only on the dynamic halting, but what if weight sharing is removed from the UT? ", "This paper describes a transformer with recurrent structure to take advantage of self-attention mechanism. The number of recurrences can be dynamically determined through ACT-like halting depending on the difficulty of the input. A series of experiments on language modeling tasks have been demonstrated to show promising performances.\n\nThe overall concerns about this paper is that while the performances are quite promising, the theoretical claims and comparisons in the discussion section are of question. 
The authors attempt to provide connections to other networks (i.e., Neural GPU, RNN); since UT is an amalgamation of both transformers and RNNs, these connections sound a little “hand-wavy” (i.e., comments about UT effectively interpolating between the feed-forward, fixed-depth Transformer and a gated recurrent architecture). In short, while empirically completely acceptable, intuitively or theoretically it is hard to grasp why UT is superior other than the dynamic/sharing layers across t (not time). I believe that improving this aspect could make this paper even better. Based on the comments below and the responses from the authors, I am willing to improve my score.\n\nPros:\n1.\tThe best of both worlds from the parallelizable transformer and the recurrent structure for a repeated self-attention mechanism. Essentially, the “depth” of the transformer can vary if we “unroll” the recurrent stacks.\n\n2.\tExtensive experiments showing the performance of UT.\n\n3.\tAnalysis of the effect of the recurrent aspect of UT and how it can vary depending on the task difficulty.\n\nComments/cons:\n1.\tI am having trouble understanding the “universal” aspect of the transformer. Is this because of the variability of the depth of UT (since “given sufficient memory” was mentioned)? If so, such a characteristic of “computational universality” does not seem particularly unique to UT compared to infinite memory for a transformer or a simple RNN across the stack (i.e., the input is the whole sequence and the recurrent step is through the stack, analogous to the UT stack). Please comment on this.\n\n2.\tIt is nice to see many experiments, but without preexisting knowledge about the datasets and their tasks, I can only make relative judgements based on the provided comparisons against other methods. It would be nice to see slightly more detailed descriptions of each task (particularly LAMBADA LM), not necessarily in the main paper (due to space) but in the appendix if possible for improved self-containedness. \n\n3.\tIn the discussion, the crucial difference between UT and RNN is stated to be that RNN cannot access memory in the recurrent steps while UT can. This seems to be the case for not just UT but any Transformer-type model by construction.\n\n4.\tThe authors stated that the “recurrent step” for RNN is through time while the “recurrent step” in UT is not through time. While this claim is completely correct in itself, the RNN’s inability to access memory in its “recurrent steps” was compared with how UT could still access memory throughout its “recurrent steps”. In this sense, we may argue that the UT cannot access memory across its own t (stacking across t). I am not sure if it is fair to make such implications by treating both “recurrent steps” as being of the same nature and pointing out one’s weakness. Perhaps the authors could comment on this.\n\nMinor:\n1.\tTable 2.: Best Stack-RNN for 1 attractor is the highest but not bold-faced.\n" ]
[ -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "HyxfZDmCk4", "rklvRIQR1N", "iclr_2019_HyzdRiR9Y7", "ByeMxPX9nm", "SkxrQ435AQ", "iclr_2019_HyzdRiR9Y7", "rkxMwMsFn7", "ByeMxPX9nm", "rkxMwMsFn7", "Sye8Myd937", "Sye8Myd937", "SklQ8hSt07", "BkgUZgHKCX", "iclr_2019_HyzdRiR9Y7", "iclr_2019_HyzdRiR9Y7", "iclr_2019_HyzdRiR9Y7" ]
iclr_2019_HyztsoC5Y7
Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time. Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks. To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context. Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents. We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments. We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads.
accepted-poster-papers
The authors consider the use of MAML with model-based RL and apply this to robotics tasks with very encouraging results. There was definite interest in the paper, but also some concerns over how the results were situated, particularly with respect to the related research in the robotics community. The authors are strongly encouraged to carefully consider this feedback, as they have been doing in their responses, and address this as well as possible in the final version.
test
[ "ByxoVJewyN", "S1lzTCLUJN", "BJxo8zPU67", "rkl7x9ea3X", "r1eBKWeYCX", "SJeiiK_NAm", "rkxmhBF10Q", "ByeKOSYJRm", "H1x4xrK1Rm", "SJeknkUiTQ", "Skgo5yLsT7", "ryx-_D6z67", "rJlYYIcC2X", "BJeoWBEghm", "S1gLnA52oQ" ]
[ "author", "public", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "public" ]
[ "On the real robot, data collection is expensive, so training a single model for the different terrains and conditions allows us to make more efficient use of the data. Instead, in simulation we have separate experiments to have a more controlled comparison.\n\nThe task distribution during training and testing does indeed need to match in theory. However, generating sufficiently diverse tasks requires a considerable engineering effort. It's often hard to get the kind of task diversity needed to evaluate things effectively with distributions that truly match. For instance, in the disabled half-cheetah task the agent just has 6 joints, so if we want one held-out joint there will inevitably be some distribution mismatch. While this is certainly a shortcoming in our experiments, we believe this setup overall is reasonable, and the comparison to prior methods is informative about the overall performance of each method.\n\nRegarding the neural network size, we used a 3-layer NN with 512 units per layer and ReLU activations for all the feed-forward models, and an LSTM with 512 hidden units for the recurrent models. We will incorporate this into the paper.\n\nPlease let us know if you have further doubts or questions. And thanks for pointing out the typo!", "This paper proposes a model-based meta-reinforcement learning method that achieves good results and enables fast adaptation in dynamic environments. While I can understand the paper better with the comments of the reviewers and the recent improvements made to the paper, there are still a few things that seem not so clear to me. It is without question that the assumption on whether the meta-test tasks are drawn from the same task distribution as the meta-training tasks is subjective, but I would like to know how the authors decided on the task distribution for training and testing. For example, for the half-cheetah (HC) experiments, the authors presented the results separately for HC disabled joint, HC slope terrains and HC pier, and I suppose that three different models are trained for each of these experiments. Is this because of the differences in the task distributions between these three experiments? However, in the experiments with the millirobot, the authors meta-trained the agent on three different terrains with random trajectories, but tested the agent on various meta-test tasks such as a missing leg, a slope, an added payload, etc., which do not seem to me to obviously come from the same task distribution as the meta-training tasks. Did the authors make the assumption that these tasks are supposed to be in the same task distribution as the meta-training tasks? If so, why didn't the authors make the assumption that HC disabled joint, HC slope terrains and HC pier come from the same task distribution and just train one model for these three experiments, just like the experiments with the millirobot? \n\nMy second concern is that the deep neural network architectures for the experiments are not mentioned at all. Since the expressive power of neural networks is limited by their size, I wonder if the architecture and size of the deep neural networks will also limit the adaptation capability of the agent to different tasks.\n\nPlease correct me if I have some misunderstandings about the paper. 
Thank you.\n\nIf I am not mistaken, there is a typo on page 8, in section 6.3: \"in comparison to the aforementioned methods.\"\n\n", "The paper proposes using meta-learning and fast, online adaptation of models to overcome the mismatch between simulation and the real world, as well as unexpected changes and dynamics. This paper proposes two model-based meta-learning reinforcement algorithms, one based on MAML and the other based on recurrence, and experimentally shows how they are more sample efficient and faster at adapting to test scenarios than prior approaches, including prior model-free meta-learning approaches.\n\nI do have an issue with the way this paper labels prior work as model-free meta-learning algorithms, since for example, MAML is a general algorithm that can be applied to model-free and model-based algorithms alike. It would be more accurate in my opinion to label the contributions of this paper as model-based instantiations of prior existing algorithms, rather than new algorithms outright.\n\nI’m a bit confused with equation 3, as the expectation is over a single environment, and the trajectory of data is also sampled from a single environment. But in the writing, the paper describes the setting as a potentially different environment at every timestep. Equation 3 seems to assume that the subsequence of data comes from a single environment, which contradicts what you say in the text. As described, equation 3 is then not really much different from previous episodic or task based formulations.\n\nThe results themselves are not unexpected, as there has already been prior work that this paper also mentions showing that model-based RL algorithms are more sample efficient than model-free.\n\nSection 6.1, I like this comparison and showing how the errors are getting better.\n\nFor section 6.2, judging from the plots, it doesn’t seem you are doing any meta-learning in this experiment, so then are you just basically running a model-based RL algorithm? I’m very confused what you are trying to show. Are you trying to show the benefit of model-based vs model-free? Prior work has already done that. Are you trying to show that even just using a meta-learning algorithm in an online setting results in good online performance? Then you should be comparing your algorithm to just a model-based online RL algorithm. You also mention that the asymptotic performance falls behind, is this because your model capacity is low, or maybe your MPC method is insufficient? If so, then wouldn’t it be more compelling to, like prior work, combine this with a model-free algorithm and get the best of both worlds?\n\nSection 6.3 results look good.\n\nSection 6.4, I really like the fact you have results on a real robot.\n\nOverall I think the paper does successfully show the sample complexity benefits and fast adaptation of model-based meta-RL methods. The inclusion of a real world robot experiment is a plus. However the result is not particularly surprising or insightful, as prior work has already shown the massive sample complexity improvement of model-based RL methods.\n\nUPDATE (Dec 4, 2018):\n\nI have read the author response and they have addressed the specific concerns I have brought up. I am overall positive about this paper and the new changes and additions so I will slightly increase my score, though I am still concerned about the significance of the results themselves.\n", "The authors introduce an algorithm that addresses the problem of online policy adaptation for model-based RL. 
The main novelty of the proposed approach is that it defines an effective algorithm that can easily and quickly adapt to the changing context/environments. It borrows the ideas from model-free RL (MAML) to define the gradient/recursive updates of their approach, and it incorporates it efficiently into their model-based RL framework. The paper is well written and the experimental results on synthetic and real world data show that the algorithm can quickly adapt its policy and achieve good results in the tasks, when compared to related approaches. \n\nWhile applying the gradient based adaptation to the model-free RL is trivial and has previously been proposed, in this work the authors do so by also focusing on the \"local\" context (M steps within a K-long horizon, allowing the method to recover quickly if learning from contaminated data, and/or its global policy cannot generalize well to the local contexts. Although this extension is trivial it seems that it has not been applied and measured in terms of the adaptation \"speed\" in previous works. Theoretically, I see more value in their second approach where they investigate the application of fast parameter updates within model-based RL, showing that it does improve over the MAML-RL and non-adaptive model-based RL approaches. This is expected but to my knowledge has not been investigated to this extent before. \n\nWhat I find is lacking in this paper is insight into how sensitive the algorithm is in terms of the K/M ratio, and also how it affects the adaptation speed vs performance (tables 3-5 show an analysis but those are for different tasks); no theoretical analysis was performed to provide deeper understanding of it. The model does solve a practical problem (reducing the learning time and having more robust model), however, it would add more value to the current state of the art in RL if the authors proposed a method for optimal selection of the recovery points and also window ratio R/L depending on the target task. This would make a significant theoretical contribution and the method could be easily applicable to a variety of tasks. where the gains in the adaptation speed are important.", "We believe that we addressed all of the reviewer's concerns. 
We would appreciate if the reviewers could take a look at our changes and let us know if they would like to revise their rating or request additional changes that would alleviate their concerns.\n\nIn summary, here are the main changes that we made to the paper:\n- Ran a sensitivity analysis over the parameters K and M, and added a discussion section in the appendix regarding the selection of these values (R3)\n- Edited the experiments section to clarify our main empirical insights (R4)\n- Fixed the notational discrepancy in section 3 and added an explanation in section 4 regarding environments and rollouts (R4)\n- Edited and added to the plot in section 6.2 to now include all experiments/comparisons of interest (R1, R4)\n- Edited the related work to clarify the technical contributions of our method over MAML and prior work (R1, R4)\n- Extended the related work section work to incorporate citations for model-based RL, adapting inverse dynamic models, and the suggested recent model-based RL citation (R1)\n- Edited introduction to scope the claims more carefully (R1)\n- Edited the experiments section's text and citations to clarify the misunderstanding regarding our choice of MPC controllers for each method of our comparisons (R1)\n- Edited and clarified the methods and experiments to address all 4 of R1’s requested experimental comparisons (explicitly including #1/2/4, and running experiments to confirm that #3 does indeed fail) (R1)\n- Edited the text to make it clear that we do already perform the suggested model-bootstrapping (R1)\n- Edited the experiments to clearly differentiate meta-training time from test time (R1)\n", "Regarding prior work, as requested, we have extended the related work section by incorporating prior work on model-based control. In particular, we have added references on adaptive control methods [1-7], and online system identification [8]. Please let us know if we should include any specific paper, we will be happy to include it and discuss it.\n\n\n[1] Sastry, Sosale Shankara and Isidori, Alberto. Adaptive control of linearizable systems.IEEE Transactions on Automatic Control, 1989.\n[2] Meier, Franziska and Schaal, Stefan. Drifting Gaussian Processes with Varying Neighborhood Sizes for Online Model Learning. ICRA 2016.\n[3] Meier, Franziska and Kappler, Daniel and Ratliff, Nathan and Schaal, Stefan. Towards Robust Online Inverse Dynamics Learning. IROS 2016.\n[4] P. Pastor and Ludovic Righetti and M. Kalakrishnan and S Schaal. Online movement adaptation based on previous sensor experiences. IROS 2011.\n[5] Underwood, Samuel J and Husain, Iqbal. Online parameter estimation and adaptive control of permanent-magnet synchronous machines. Transactions on Industrial Electronics 2010\n[6] Kelouwani, Sousso and Adegnon, Kokou and Agbossou, Kodjo and Dube, Yves. Online system identification and adaptive control for PEM fuel cell maximum efficiency tracking. Transactions on Energy Conversion 2012.\n[7] Rai, Akshara and Sutanto, Giovanni and Schaal, Stefan and Meier, Franziska. Learning Feedback Terms for Reactive Planning and Control. ICRA 2017. \n[8] Manganiello, Patrizio and Ricco, Mattia and Petrone, Giovanni and Monmasson, Eric and Spagnuolo, Giovanni. Optimization of Perturbative PV MPPT Methods Through Online System Identification. Transactions on Industrial Electronics 2014.", "We thank the reviewer for their valuable feedback and agree that the strength of our approach comes from being able to adapt the dynamics model to the local dynamics. 
We do include a model-free RL algorithm in our experiments, but this is a prior method that is included only for comparison: we clarify that both of our approaches are model-based, and neither are model-free.\n\nWe also clarify that we do not choose \"M steps within a K-long horizon.\" We have edited section 4 to paper to properly specify it. We use information from the past M steps to adapt the meta-learned model and predict the future K steps; this is done at every time-step of the rollout. In this setup, K and M are simply hyperparameters\n\nWe have added to the appendix D a sensitivity analysis of the values K and M for GrBAL. The results show that our approach is not particularly sensitive to those values. We also added a discussion in Appendix D of how the values can be determined -- the optimal values depend on various task details, such as the amount of information present in the state (a fully-informed state variable precludes the need for additional past timesteps) and the duration of a single timestep (a longer timestep duration makes it harder to predict more steps into the future).\n\nLastly, given our clarifications, it would really help us if the reviewer could clarify what they meant by \"optimal selection of the recovery points\" -- what does \"recovery points\" mean in this context?", "Thank you for taking the time to respond, we really appreciate your detailed feedback. We believe that we can address all of your concerns, please let us know if the revisions and modifications we describe below have addressed these issues. Thanks again for helping us improve the paper!\n\n(i) Meta-learning for online adaptation to dynamics has not been proposed in prior work. [1] trains for episodic adaptation, rather than online adaptation, showing good adaptation performance after several trials rather than several timesteps. We have clarified the introduction to scope the claims more carefully, but we do believe this is a novel contribution. However, if there is any other citation that covers this approach, we would be happy to reference it and discuss. We have made a best faith attempt to cover all topics you referenced in your comment.\nRegarding prior work on model based meta-RL, to our knowledge [1] is the only prior work that uses both meta-learning and model-based RL together. If there are any others, we would be happy to cite and discuss them as well. While we agree that [1] makes a valuable contribution, the technique is very different from ours and is specific to non-parametric latent variable models, while our method addresses parametric models. Further, we explicitly train for online adaptation (i.e., using only M timesteps of data for adaptation). Instead, their approach trains for episodic adaptation (i.e. using around M trajectories). Finally, we evaluate our approach on a real-world robotic system, while the prior paper evaluates on cart-pole and pendulum.\n\n(ii) We have edited sections 2 and 4 to highlight the technical contributions over MAML. Our method is not a straightforward application of MAML to model-based RL. MAML requires a distribution over tasks to be hand-specified in advance. Our method removes this assumption by developing an online formulation of meta-learning where “tasks” correspond to segments of time and are provided implicitly by the environment. 
In addition with the empirical contributions, we believe that this does constitute a novel conceptual contribution.\n\n(iii)\n- Regarding section 6.2 & MPPI: we added learning curves of MB and MB+DE to the plots. We edited all of the plot legends to clarify which planner is used for each method. All simulated comparisons use MPPI for all methods. We have fixed the citation of Nagabandi et. al. 2017a by replacing it with [2].\n\n- Regarding model bootstrapping: Sorry for the misunderstanding on our end. We edited the paper to clarify the following --- at training time, we iteratively collect and aggregate data using the MPPI with MPC for all model-based methods (our method, MB, and MB+DE), collecting data in the loop of training. As a result, our “MB” and “MB+DE” comparisons corresponds to MPPI with model-bootstrapping [2], with and without adaptation (respectively) when collecting roll-outs during training & run-time. For our method, we also use bootstrapping, meta-learning the dynamics models iteratively (see line 5 and 6 of Alg. 1). We therefore believe that our comparison is set up properly and that the paper adequately communicates this, but we would appreciate any feedback you might have here, and we would be happy to alter the comparison if needed.\n\n- “make it very explicit”: We edited the paper to make it clear that there is a training and run-time phase (which we refer to as meta-training and testing).\n\nFinally, we would emphasize that results on difficult problems, including substantial performance gains on 5 distinct tasks and including real-world robotic control problems, are also a contribution of our work. Algorithms that improve on prior work in terms of efficiency and generalization are of interest to the community, even when they build on ideas that were presented in prior work. If this were not the case, then most papers on model-based RL (a very old idea in itself) and RL for robotics would not be publishable. Therefore, we do not think that the criticism that there are other model-based RL papers, other meta-learning papers, or even other meta-learning model-based RL papers by itself precludes publication. We do however strongly agree with the reviewer that citing and discussing all relevant prior work, and appropriately scoping the claims, is critical, and we have endeavored to do so. We are grateful for any help and advice to do this better.\n\n[1] Meta Reinforcement Learning with Latent Variable Gaussian Processes, UAI 2018\n[2] MPPI with model-bootstrapping: Information Theoretic MPC for Model-Based Reinforcement Learning , ICRA 2017", "We thank the reviewer for the valuable feedback, and we clarify the individual points below. We have edited the paper to address each of the concerns raised in the review, and we would appreciate additional feedback regarding whether we have addressed the reviewer's concerns about the paper or if the reviewer has anything else they would like us to improve. \n\n\"Results not unexpected\":\nWe agree that the sample efficiency of model-based RL is generally known, and we have revised Section 6.2 to explicitly state it. Our intent is not to claim that model-based RL is more sample-efficient than model-free RL (which, as the reviewer stated, is well known), but rather to show that meta-training for fast adaptation can improve over directly running online model updates with a model trained with standard model-based RL. 
Note that the comparison to \"MB-DE\" in Section 6.3 is precisely this comparison: adapting our meta-trained models outperforms adapting these standard model-based RL models by a large margin.\n\nThe takeaway of this work is fast adaptation of expressive dynamics models. For instance, a real robot adapting online (in milliseconds) to unseen and drastic dynamics changes has not been shown in prior work that we know of. We emphasize that our meta-trained model can adapt in less than a second, whereas model-based RL from scratch takes minutes or hours.\n\nRelation to MAML/prior work:\nWe have edited section 2 of the paper to clarify the relation to MAML. In summary: MAML assumes access to a hand-designed distribution of tasks. Instead, one of our primary contributions is the online formulation of meta-learning, where tasks correspond to temporal segments, enabling “tasks” to be constructed automatically from the experience in the environment; MAML is a very general algorithm, but it has not been previously demonstrated on online learning problems.\n\nEquation 3:\nWe have fixed the discrepancy and added several clarifications in Section 4. Our method uses M consecutives steps to predict the next K steps, which makes the assumption that the environment is constant for M+K timesteps. As a result, only a fraction of the roll-out, i.e. M+K timesteps, has to correspond to the same environment. The underlying assumption is that the subsequence of data does indeed come from the same environment. In our experiments, M+K is 0.5 seconds, making this assumption true most of the time. The fast adaptation (F.A.) environments in Section 6.3 show this adaptation occurring as the environment keeps changing within the rollouts.\n\nSection 6.2:\nWe fixed the typo that was originally in the caption: GrBAL and ReBAL are our proposed meta-learning algorithms, so there is indeed meta-learning in this experiment. In this plot, we aim to show that our model-based meta-learning approaches achieve high performance while using 1000x less data than the two model-free approaches. Finally, we edited this plot by adding two more comparisons to further clarify the benefit of our model-based meta-learning approach over standard non-meta-learned model-based approaches.\n\nThe reviewer's comment about the asymptotic performance is very relevant, so we added it to the text in section 6.2. We agree that the development of some model-based/model-free hybrid would be great, and plan to do this in future work.\n", "Some general comments:\n-----------------------\nThe presentation of your approach would benefit from making it very explicit that you have a training time (model-based RL to learn models) and a run-time phase (model-predictive control with model adaptation). This is a particularly confusing component of your evaluations, so please be clear about what phase your in, and if your evaluations are meant to evaluate run time adaptation then you should explain how all methods were initialized.\n\nDo not cite work as baseline, that you actually do not use as a baseline. MPPI with neural networks for dynamics models exists.\n\nto MPPI with model-bootstrapping you say:
\n\n\"The difference between the requested point (4) and our existing point (2) is the collection of expert data for initializing the training data set. Being able to collect expert samples is a strong assumption, requiring either human demonstrations or knowledge of the ground truth model, and does not fall under the assumptions of our problem setting.\"\n\n

I don't understand this comment. MPPI with model-bootstrapping does not require an initial training data set, but it would help of course. What I meant is that at run time, you could continue to update the model (so essentially you continue with the model-based RL setup) - the difference is that you're not resetting the model at each time step. You could argue that this is exactly not what you want to do, you don't want to update your model continuously. But then you should argue why you wouldn't want to do this, in your introduction.", "Thank you for your response and addressing my concerns (at least partially). I'd like to re-iterate what my main concerns with this manuscript are. To summarize\n\ni ) work is not put in the context of existing relevant related work (not really addressed)\nii) minor/questionable technical contribution (not really addressed)\niii) evaluations are not designed to evaluate fast model adaptation (was partially addressed)\n\nIn more detail:\n\nBefore going into detail of my concerns, I'd like to quickly summarize your approach:\n\n1. at train time you use a model-based RL algorithm to learn a dynamics model. You utilize existing meta-learning methods/ideas to learn representations that can be utilized to adapt the dynamics model fast at test time. Specifically you present a) GrBAL, at training time you use MAML to learn dynamics model parameters that can quickly be adapted to changes in the dynamics b) ReBAL, you learn a recurrent-based update policy that can update the dynamics model parameters effectively online.\n2. at test time you use a model-predictive controller with the learned dynamics model and adapt it online based on recent observations. At each controller time step you reset the dynamics model to the dynamics model learned in phase 1. \n\nIn that context my concerns are:\n\ni) utilizing meta-learning in model-based RL is not a novel idea, yet you write most of your manuscript as if it were. Utilizing meta-learning to quickly adapt dynamics models online is also not novel, yet your writing makes readers believe that it is. While you've added the references I've mentioned, you have not really discussed how your proposed methods improve over other relevant work. Your introduction should highlight were current methods fall short, and how your proposed work improves over existing work. Furthermore, you have not added any references for model-based control. There is a ton of related work that uses model-based controllers and adapts dynamics models online, with and without meta-learning. This needs to be acknowledged.\n\nii) I'm still not clear on what your work exactly addresses. You're using 2 very different meta-learning approaches to learn models/update policies such that adaptation is fast at test time. Neither of them involve a significant contribution. Using MAML in this model-based RL context, reduces to using MAML in a regression problem (no technical advances here). Learning the recurrent-based update policy is something that has been extensively explored in the learning-to-learn community. It's not clear what you're adding here. You cite relevant work in the related work section, but you don't explain how your work differs from them. If there are technical issues that arise from applying these methods in the model-based RL framework, you do not describe them. Maybe there is a technical contribution here - but if there is you are not highlighting it. 
\n\niii) when evaluating your methods, you want to highlight that your meta-learning approach leads to models/policies that adapt faster. However, I can still not infer that this is true, here is why:\n\n1. Section 6.2: It's not clear whether this evaluation evaluates sample efficiency at training (meta-learning) time or at test time (how many samples you need to adapt online). In either case, if you want to highlight sample efficiency of your proposed approach (meta-learning to learn models that adapt fast at run time), you need to compare to model-based RL methods that do not use your meta-learning approach. And you need to use the same model-predictive controller. There is no point in comparing to model-free methods here.\n\n2. Section 6.3. You say you use the same model-predictive controller in your experiments for model-based RL (MPPI), however you cite other papers that do not use MPPI. For instance, you say 
\"a non- adaptive model-based method (“MB”), which employs a feedforward neural network as the dynamics model and selects actions using MPC (Nagabandi et al., 2017a), \"
your non-adaptive model-based method should be MPPI with a fixed neural network model (ideally the same that you use to initialize your methods). This is particularly problematic, because recent model-based RL methods have by far outperformed the work you cite (Nagabandi et al., 2017a).
I want to re-iterate that you need to present the ablation study I suggested in my earlier review, and also present it as such (if you're already doing the experiments that I suggest, then change the plots and experiment description to make this clear) . \n\nto be cont'd", "We thank the reviewer for the feedback.\n\nThe main concern of the reviewer is that we did not control for the choice of controller. This is a misunderstanding. We implemented the same controller for all of the model-based comparisons; hence, all comparisons reported in the paper are fair. To be precise, we used MPPI for all simulation experiments, and random-shooting MPC for all real-world experiments (since the action spaces were of lower dimension and did not need iterations of refinement). We updated the paper to clarify this.\n\nRelated work:\nWe thank the reviewer for pointing out these recent works. We updated the paper to incorporate citations for model-based RL, adapting inverse dynamic models, and the suggested recent model-based RL citation.\n\nSample efficiency:\nIn this section, we do both of the things the reviewer mentions: we compare our MB meta-learning method against a state-of-the-art MF meta-learning method to show the benefit of model-based over model-free, and against a state-of-the-art MF method to show the benefit of meta-learning.\n\nEvaluation:\nThe reviewer suggested 4 points to evaluate. Points (1) and (2) are exactly the results we show in Section 6.3: (1) corresponds to our full GrBAL/ReBAL, meta-learning with adaptation, and (2) corresponds to our MB baseline, which has neither meta-learning nor adaptation. We further have a DE baseline, which addresses the combination of adaptation without meta-learning.\n\nPoint (3) suggests metalearning the prior with the adaptation objective, but then not adapting it at test-time. We ran this experiment on the real robot, and it performed worse than (1) and (2), failing to solve the task. This is expected; the meta-learned model parameters (theta*) were optimized to be used only after adapting them. We can add these numerical results to our results. \n\nThe difference between the requested point (4) and our existing point (2) is the collection of expert data for initializing the training data set. Being able to collect expert samples is a strong assumption, requiring either human demonstrations or knowledge of the ground truth model, and does not fall under the assumptions of our problem setting.\n\nContribution:\nOur contribution is a new model-based meta-RL algorithm that incorporates elements of meta-learning and model-based RL. While our method is relatively simple, we are not aware of prior works that show that meta-learning can be used to enable online adaptation to varying dynamics in the context of model-based RL. Further, our experiments, which include domains that are more complex than the cartpole and double pendulum in [1], demonstrate the effectiveness of the approach. If we are mistaken regarding prior works, please let us know!\n\nWe would like to emphasize that our work presents an extensive comparative evaluation, and we believe that these results should be taken into consideration in evaluating our work. We compare multiple approaches across more than 6 simulated tasks as well as 4 tasks on a real-world robotic locomotion task. 
Hopefully our clarifications are convincing in terms of explaining why the evaluation is fair and rigorous, and we would of course be happy to modify it as needed.", "This work addresses the problem of online adapting dynamics models in the context of model-based RL. Learning globally accurate dynamics model is impossible if we consider that environments are dynamic and we can't observe every possible environment state at initial training time. Thus learning dynamics models that can be adapted online fast, to deal with unexpected und never seen before events is an important research problem.\n\nThis paper proposes to use meta-learning to train an update policy that can update the dynamics model at test time in a sample efficient manner. Two methods are proposed\n- GrBAL: this method uses MAML for meta-learning\n- ReBAL: this method trains a recurrent network during meta-training such that it can update the dynamics effectively at test time when the dynamics change\n\nBoth methods are evaluated on several simulation environments, which show that GrBAL outperforms ReBAL (on average). GrBAL is then evaluated on a real system. \n\nThe strengths of this paper are:\n\n- this work addresses an important problem and is well motivated\n- experiments on both simulated and on a real system are performed\n\nThe weaknesses:\n\n- the related work section is biased towards the ML community. There is a ton of work on adapting (inverse) dynamics models in the robotics community. This line of work is almost entirely ignored in this paper. Furthermore some important recent references for model-based RL are not provided in the related work section (PETS [3] and MPPI [2]), although MPPI is the controller that is used in this work as a framework for model-based RL. Additionally, existing work on model-based RL with meta-learning [1] has not been cited. This is unacceptable. \n- There is no significant technical contribution - the \"contribution\" is that existing meta-learning methods have been applied to the model-based RL setting. Even if no-one has had that idea before - it would be a minor contribution, but given that there is prior work on meta-learning in the context of model-based RL, this idea itself is not novel anymore.\n- Two methods are provided, without much analysis. Often authors refer to \"our approach\" - but it's actually not clear what they mean by our approach. The authors can't claim \"model-based meta RL\" as their approach. \n- While I commend the authors for performing both simulation and real-world experiments, I find the that experiments lack a principled evaluation. More details below.\n\nFeedback on experiments:\n\nSection 6.2 (sample efficiency)\n\nYou compare apples to oranges here. I have no idea whether your improvements in terms of sample-efficiency are due to using a model-based RL approach or because your deploying meta-learning. It is well known that model-based RL is more sample efficient, but often cannot achieve the same asymptotic performance as model-free RL. Since MPPI is your choice of model-based RL framework, you would have to include an evaluation that shows results on MPPI with model bootstrapping (as presented in [2]) to give us an idea of how much more sample-efficient your approach is.\n\nSection 6.3 (fast adaptation and generalization)\n\nWhile in theory one can choose the meta-learning approach independently from the choice of model-based controller, in practice the choice of the MPC method is very important. 
MPPI can handle model inaccuracies very well - almost to the point where sometimes adaptation is not necessary. You CANNOT evaluate MPPI with online adaptation to another MPC approach with another model-learning approach. This does not give me any information of how your meta-learning improves model-adaptation. In essence these comparisons are meaningless. To make your results more meaningful you need to use the same controller setup (let's say MPPI) and then compare the following:\n1. MPPI with your meta-trained online adaptation\n2. MPPI results with a fixed learned dynamics model - this shows us whether online adaptation helps\n3. results of MPPI with the initial dynamics model (trained in the meta-training phase) -without online adaptation. This will tell us whether the meta-training phase provides a dynamics model that generalizes better (even without online adaptation)\n4. MPPI with model bootstrapping (as presented in [2]). This will show whether your meta-trained online adaptation actually outperforms simple online model bootstrapping in terms of sample-efficiency\n\nThe key here is that you need to use the same model-based control setup (whether its MPPI or some other method). Otherwise you cannot detangle the effect of controller choice from your meta-learned online adaptation.\n\n6.4 Real-world: same comments as above, comparisons are not meaningful\n\n[1] Meta Reinforcement Learning with Latent Variable Gaussian Processes, UAI 2018\n[2] MPPI with model-bootstrapping: Information Theoretic MPC for Model-Based Reinforcement Learning , ICRA 2017\n[3] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models, NIPS 2018", "Thank you for your suggestions. The biggest clarification that we would like to offer is that our method adapts online (in less than a second), and not minutes/hours. For example, when the agent sees a new terrain, when it encounters a slope, or when the system's pose estimation system become miscalibrated, we don't need to run multiple trials in this new setting (and we don't need an external reward signal like \"distance travelled\" to trigger/guide the adaptation). Instead, the agent constantly uses its past few data points, in a self-supervised way, to perform online adaptation of its dynamics model. Note that it successfully does this even when it encounters tasks that it did not see during training. This ability to adapt model parameters using such few data points is crucial, and we achieve it through meta-learning.\n\nWe will add discussion of these works in the next version of our paper, to be thorough. We would, however, like to emphasize that the purpose of our work is not adapting to damage. The purpose of our work is an algorithm that uses meta-learning to enable *online* model adaptation. Although recovering from damage is included in our experiments, it is merely one example in this category of experiencing unexpected disturbances at test time: We also evaluate other tasks such as a pier of differing buoyancy from that seen during training, slopes that were never seen during training, and pulling an unknown payload. A big difference between our work and the suggested work is that we are not performing trial and error learning. The problem statement itself is very different, and thus it does not make sense to perform such a comparison.", "I think this paper might be the first to apply meta-learning ideas to adaptation with a real robot. 
However, this is far from being the first paper to demonstrate that data-efficient reinforcement learning can be used for adapting to damage.\n\nUnfortunately, the author of the submitted paper does not compare their result to this state-of-the-art... and does not even cite any of the previous paper on the topic (see below). This is worrisome because the previous papers require only 1-2 minutes of interaction time for adapting in similar tasks (legged robot with a blocked joint, a lost leg, etc.), compared to 1.5-3 hours in the submitted paper.\n\nA few relevant papers about adaptation and damage recovery with data-efficient RL:\n\nActive learning of a model / model-identification + direct policy search: \nBongard J, Zykov V, Lipson H. Resilient machines through continuous self-modeling. Science. 2006 Nov 17;314(5802):1118-21. http://www.cs.uvm.edu/~jbongard/papers/2006_Science_Bongard_Zykov_Lipson.pdf\n\nPrior from simulation + Bayesian optimization:\nCully A, Clune J, Tarapore D, Mouret JB. Robots that can adapt like animals. Nature. 2015 May;521(7553):503. https://arxiv.org/pdf/1407.3501 \nSee also: https://arxiv.org/pdf/1709.06919\n\nModel-based policy search with priors:\nChatzilygeroudis K, Mouret JB. Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics. 2018. Proc. of IEEE ICRA. https://arxiv.org/pdf/1709.06917\n\nRepertoire of policies + high-level model:\nChatzilygeroudis K, Vassiliades V, Mouret JB. Reset-free trial-and-error learning for robot damage recovery. Robotics and Autonomous Systems. 2018 Feb 28;100:236-50. https://arxiv.org/abs/1610.04213\n\nBio-inspired approach:\nRen G, Chen W, Dasgupta S, Kolodziejski C, Wörgötter F, Manoonpong P. Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation. Information Sciences. 2015 Feb 10;294:666-82. https://arxiv.org/abs/1407.3269\n\n\"Classic\" RL:\nErden MS, Leblebicioğlu K. Free gait generation with reinforcement learning for a six-legged robot. Robotics and Autonomous Systems. 2008 Mar 31;56(3):199-212.\n" ]
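For readers following the M/K discussion above: the adaptation scheme the authors describe (adapt the meta-trained dynamics model on the last M transitions, then plan the next K steps with MPC, at every timestep of the rollout, resetting to the meta-trained prior each step) can be summarized with a short pseudocode sketch. This is an illustrative reconstruction from the discussion, not the authors' code; the callables adapt_model and mpc_plan, and the generic env interface, are assumptions.

# Illustrative sketch (assumed API): GrBAL-style online adaptation at test time.
# theta_star: meta-trained model parameters; M, K: the hyperparameters discussed above.
def run_episode(env, theta_star, M, K, horizon, adapt_model, mpc_plan):
    history = []                          # recent (state, action, next_state) transitions
    state = env.reset()
    for t in range(horizon):
        # Adapt a copy of the meta-trained prior using only the last M transitions;
        # theta_star itself is reset (not overwritten) at every timestep.
        theta_t = adapt_model(theta_star, history[-M:]) if history else theta_star
        # Plan K steps ahead with the adapted model and execute the first planned action.
        action = mpc_plan(theta_t, state, K)
        next_state = env.step(action)
        history.append((state, action, next_state))
        state = next_state
    return history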
[ -1, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1 ]
[ -1, -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1 ]
[ "S1lzTCLUJN", "iclr_2019_HyztsoC5Y7", "iclr_2019_HyztsoC5Y7", "iclr_2019_HyztsoC5Y7", "iclr_2019_HyztsoC5Y7", "ByeKOSYJRm", "rkl7x9ea3X", "SJeknkUiTQ", "BJxo8zPU67", "Skgo5yLsT7", "ryx-_D6z67", "rJlYYIcC2X", "iclr_2019_HyztsoC5Y7", "S1gLnA52oQ", "iclr_2019_HyztsoC5Y7" ]
iclr_2019_S1E3Ko09F7
L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Instancewise feature scoring is a method for model interpretation, which yields, for each test instance, a vector of importance scores associated with features. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions, but incur an exponential complexity in the number of features. This combinatorial explosion arises from the definition of Shapley value and prevents these methods from being scalable to large data sets and complex models. We focus on settings in which the data have a graph structure, and the contribution of features to the target variable is well-approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring on black-box models. We establish the relationship of our methods to the Shapley value and a closely related concept known as the Myerson value from cooperative game theory. We demonstrate on both language and image data that our algorithms compare favorably with other methods using both quantitative metrics and human evaluation.
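For reference, the object whose exact computation the abstract calls combinatorially explosive is the Shapley value of a feature. In generic notation (not necessarily the paper's), for a set function v over features N = {1, ..., d} it reads:

% Shapley value of feature i; the sum over all subsets S of N \setminus {i} is what costs
% exponential time in d. Per the abstract and the reviews below, L-Shapley and C-Shapley
% restrict this sum to (connected) subsets of a graph neighborhood of i, giving linear complexity.
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \left( v(S \cup \{i\}) - v(S) \right)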
accepted-poster-papers
The paper presents two new methods for model-agnostic interpretation of instance-wise feature importance. Pros: Unlike previous approaches based on the Shapley value, which had exponential complexity in the number of features, the proposed methods have linear complexity when the data have a graph structure, which allows an approximation based on graph-structured factorization. The proposed methods present solid technical novelty for the important challenge of instance-wise, model-agnostic, linear-complexity interpretation of features. Cons: All reviewers wanted to see more extensive experimental results. The authors responded with most of the requested experiments. One issue raised by R3 was the need to compare the proposed model-agnostic methods to existing model-specific methods. The proposed linear-complexity algorithm relies on the Markov assumption, which some reviewers considered potentially invalid, but this does not seem to be a deal breaker since it is a relatively common assumption when deriving a polynomial-complexity approximation algorithm. Overall, the rebuttal addressed the reviewers' concerns well enough, leading to increased scores. Verdict: Accept. Solid technical novelty with convincing empirical results.
train
[ "HyenmOMRoX", "HygvWsscTX", "B1xadhicT7", "rkeAS2s5pm", "SyloPoi96m", "Hyep4ji56Q", "SyxgP3dn2Q", "SkeChVXt27", "rylA6HRv57", "BkgYeky15Q" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The paper proposes two approximations to the Shapley value used for generating feature scores for interpretability. Both exploit a graph structure over the features by considering only subsets of neighborhoods of features (rather than all subsets). The authors give some approximation guarantees under certain Markovian assumptions on the graph. The paper concludes with experiments on text and images.\n\nThe paper is generally well written, albeit somewhat lengthy and at times repetitive (I would also swap 2.1 and 2.2 for better early motivation). The problem is important, and exploiting graphical structure is only natural. The authors might benefit from relating to other fields where similar problems are solved (e.g., inference in graphical models). The approximation guarantees are nice, but the assumptions may be too strict. The experimental evaluation seems valid but could be easily strengthened (see comments).\n\nComments:\n\n1. The coefficients in Eq. (6) could be better explained.\n\n2. The theorems seem sound, but the Markovian assumption is rather strict, as it requires that a feature i has an S that \"separates\" over *all* x (in expectation). This goes against the original motivation that different examples are likely to have different explanations. When would this hold in practice?\n\n3. While considering chains for text is valid, the authors should consider exploring other graph structures (e.g., parsing trees).\n\n4. For Eqs. (8) and (9), I could not find the definition of Y. Is this also a random variable representing examples?\n\n5. The authors postulate that sampling-based methods are susceptible to high variance. Showing this empirically would have strengthened their claim.\n\n6. Can the authors empirically quantify Eqs. (8) and (9)? This might shed light as to how realistic the assumptions are.\n\n7. In the experiments, it would have been nice to see how performance and runtime vary with increased neighborhood sizes. This would have quantified the importance of neighborhood size and robustness to hyper-parameters.\n\n8. For the image experiments, since C-Shapley considers connected subsets, it is perhaps not surprising that Fig. 4 shows clusters for this method (and not others). Why did the authors not use superpixels as features? This would have also let them compare to LIME and L-Shapley.\n\n", "We have added four experiments in the updated version of the paper based on the suggestions of three reviewers. The first experiment compares human evaluation on top words selected by our algorithms and KernelSHAP, and also compares human evaluation on masked reviews. The second experiment evaluates how the rank produced by our algorithms correlates with the rank of the Shapley value. The third experiment evaluates the sensitivity of our algorithms to the size of neighborhood. The last experiment empirically evaluates the statistical dispersion of sampling-based algorithms. The first experiment has been added to Section 5.3 in the main paper while the rest are added to Appendix C,D and E.\n\nThere are also some other minor changes addressing the concern of Reviewer 3 in the length of the paper. We have deferred the detailed description of data sets and models into the appendix. We have shortened Section 4.3 which describes the connection with related work. We also reduced the number of text examples for visualization in the appendix.\n\nWe again express our sincere thanks to all the reviewers, who have helped build our manuscript into a better and more complete shape!", "1. 
“Coefficients in Eq. (6)”\n\nThe coefficients are derived from Myerson value, which can be interpreted as the Shapley value for the coalition game with a graph structure. The details can be found in the proof of Theorem 2. In particular, Equation (22) in the Appendix provides the concrete procedure of derivation.\n\n2. \"The Markovian assumption is rather strict.\" \n\nWe thank the reviewer for addressing this point. We agree with the reviewer that Markovian assumption introduces bias in explanation, which aims for a better bias-variance trade-off when approximating Shapley values on structured data. Theorem 1 and Theorem 2 quantify the introduced bias under the setting when the Markovian assumption is approximately true. We also show on real data such an approximation achieves a better bias-variance trade-off empirically when the number of model evaluations is linear in the number of features. \n\n3. \"Use other graph structures like parse trees on language.\" \n\nThe reviewer made a very bright proposal. As the current paper focuses on the study of the generic setting where data with graph structure, we only use the simplest possible model on language to demonstrate the validity of the proposed algorithms. But the proposed idea can be a promising future direction. The authors have been thinking along the same direction for a while. One question one could ask is whether there exists a better solution concept in coalitional game theory under the setting of a parse tree. Related literature includes [1] and [2] if the reviewer is interested to think about this further.\n\n4. \"Y in Eqs. (8) and (9).\" \n\nWe assume the model has the form P_m(Y|X). Y is the response variable from the model.\n\n5. “The authors postulate that sampling-based methods are susceptible to high variance. Show this empirically.”\n\nWe have added an experiment in the updated version addressing the statistical dispersion of estimates of the Shapley value produced by sampling-based methods. Two commonly used nonparametric metrics are introduced to measure the statistical dispersion between different runs of a common sampling-based method, as the number of model evaluations is varied. Figure in the link below shows the variability of SampleShapley and KernelSHAP as a function of the number of model evaluations:\nhttps://drive.google.com/file/d/1yUvJ_Jqn2Bg16U-poEtMcTGfWifIcQ3_/view?usp=sharing\nSee also Appendix E for details.\n\n6. \"Empirically quantify Eqs. (8) and (9).\" \n\nWhile we agree with the reviewer that a good empirical quantification of quantities in Eqs. (8) and (9) can verify the assumptions in practice, it is rather difficult to get a reliable estimate of the conditional mutual information (or similar quantities) in the high dimensional regime. We have added one experiment in the updated version to validate the correlation between our algorithms and the Shapley value directly, which partially reflects the conclusion of our theorem. See the figure in the link below and Appendix C for details: https://drive.google.com/file/d/1oWsWyA4IkDIbaOjwOOwMAYJzu6kUuQSa/view?usp=sharing\n\nThe better performance on real data in terms of log-odds ratio decay when top features are masked may also be viewed as a partial empirical evidence on the fact that the introduced bias is not as big as the reduced variance.\n\n7. “it would have been nice to see how performance and runtime vary with increased neighborhood sizes” \n\nWe have included a section for sensitivity analysis of our algorithms in the updated version. 
We study how correlation between the proposed algorithms and the Shapley value vary with the radius of neighborhood, the only hyper-parameter in our algorithms. A plot of model evaluations against the radius of neighborhood is also included. See the figures in the link below, and also Appendix D for details:\nhttps://drive.google.com/open?id=1perbCh7oH95j3uDp6jNEM0vPvUvcUkZ8\nhttps://drive.google.com/file/d/1f5yBIwxd85tyxQKB5gBlRtBX4pRe0noL/view?usp=sharing\n\n8. \"Not use superpixels as features.\"\n\nWe agree with the reviewer that using superpixels may lead to better visualization results. However, this leads to a performance decay in terms of the change in log-odds ratio when a fixed number of pixels are masked. The same issue has been addressed in [3]. For fairness of comparison, we use the raw pixels as features for all methods.\n\n[1] Winter, Eyal. \"A value for cooperative games with levels structure of cooperation.\" International Journal of Game Theory 18.2 (1989): 227-240.\n[2] Faigle, Ulrich, and Walter Kern. \"The Shapley value for cooperative games under precedence constraints.\" International Journal of Game Theory 21.3 (1992): 249-266.\n[3] Lundberg, Scott M., and Su-In Lee. \"A unified approach to interpreting model predictions.\" Advances in Neural Information Processing Systems. 2017.\n", "We thank the reviewer for the detailed comments and encouraging title! We have included three experiments in the updated version to address Point 5, 6,and 7 of the reviewer’s comments, and also omit unnecessary details in the original paper. We will respond the reviewer's comments concretely below. \n\n“The paper is generally well written, somewhat lengthy and at times repetitive (I would also swap 2.1 and 2.2 for better early motivation)”\nBased on the reviewer’s request, we have shortened the paper by deleting unnecessary repetitions and details in Section 4.3 and the experiment section, and putting some of them to appendix. For example, the description of datasets is deferred to the appendix. As a replacement, we have included a new experiment with human evaluation. On the other hand, we still keep the order of 2.1 and 2.2. The main reason is that it seems more natural to explain how importance of a feature subset is quantified first (section 2.1) before we motivate the Shapley value, which incorporating interaction based on this quantification (section 2.2).", "We thank the reviewer for the detailed suggestions and encouraging comments! We have included an experiment with human evaluation in the updated version. Below we respond to Reviewer 1’s questions in details. \n\n“Is there a way to compare against KernelSHAP using the same (human) evaluation methods from the original paper?”\n\nWe agree with the reviewer that human evaluation is important in this area, and we have added a new experiment with human evaluation in the updated version. \n\nIn KernelSHAP paper, the authors designed experiments to argue for the use of Shapley value instead of LIME, which shows Shapley value is more consistent with human intuition on a data set with only a few number of features. Both KernelSHAP and our algorithms are ways of approximating Shapley value when there is a large number of features, under which case the exact same experiment is difficult to replicate. \n\nWe have designed two experiments by ourselves involving human evaluation for our methods and KernelSHAP on IMDB in the updated version. 
We assume that the key words contain an attitude toward a movie and can be used to infer the sentiment of a review. In the first experiment, we ask humans to infer the sentiment of a review within a range of -2 to 2, given the key words selected by different model interpretation approaches. Second, we also ask humans to infer the sentiment of a review with top words being masked, where words are masked until the predicted class gets a probability score of 0.1. In both experiments, we evaluate the consistency with truth, the agreement between humans on a single review by standard deviation, and the confidence of their decision via the absolute value of the score. We observe L-Shapley and C-Shapley take the lead respectively in two experiments. See the table and an example interface in the links below, and also Section 5.3 for more details: \nhttps://drive.google.com/open?id=1aHZPP0ZAdyODgTEFLRrQAKyS4uJ8h-XS\nhttps://drive.google.com/file/d/1_HOR28DGlKqEQVplGahv47o2xPe5lT5e/view?usp=sharing\n\n“It's a little ambiguous to me whether you tried to complement other sampling/regression-based methods in your experiments or not. Can you please clarify?”\n\nIn the experiments, we didn't combine our approach with sampling based methods as the number of model evaluations is already small enough in the setting (linear in the number of features). ", "We thank the reviewer for the detailed and encouraging comments! Based on the suggestions from the reviewer, we have included an experiment in the updated version that measures the correlation between L-Shapley, C-Shapley and the Shapley value. \n\n“Understanding of the evaluation metric”:\n\nThe evaluation metric we use is the following: \nlog(P(y_pred | x)) - log(P(y_pred | x_{top features MASKED})). The reviewer's understanding is in general correct except that we use the predicted label instead of the true label in the data set, because we hope to find key features for why the model makes its own decision. \n\n“I wonder is there some way to attack the problem of distinguishing when a feature is ranked highly when its (exact) Shapley value is high versus when it is ranked highly as an artifact of the estimator?”\n\nWe have added a new experiment in the updated version to address the problem of how the rank of features correlates with the rank produced by the true Shapley value. We sample a subset of test data from Yahoo! Answers with 9-12 words, so that the underlying Shapley scores can be accurately computed. We employ two common metrics, Kendall's Tau and Spearman's Rho to measure the similarity (correlation) between two ranks. We have observed a high rank correlation between our algorithms and the Shapley value. See the figure in the link below, and also Appendix C for more details: \nhttps://drive.google.com/open?id=1oWsWyA4IkDIbaOjwOOwMAYJzu6kUuQSa", "This paper proposes two methods for instance-wise feature importance scoring, which is the task of ranking the importance of each feature in a particular example (in contrast to class-wise or overall feature importance). 
The approach uses Shapely values, which are a principled way of measuring the contribution of a feature, and have been previously used in feature importance ranking.\n\nThe difficulty with Shapely values is they are extremely (exponentially) expensive to compute, and the contribution of this paper is to provide two efficient methods of computing approximate Shapely values when there is a known structure (a graph) relating the features to each other.\n\nThe paper first introduces the L(ocal)-Shapely value, which arises by restricting the Shapely value to a neighbourhood of the feature of interest. The L-Shapely value is still expensive to compute for large neighbourhoods, but can be tractable for small neighbourhoods.\n\nThe second approximation is the C(onnected)-Shapely value, which further restricts the L-Shapely computation to only consider connected subgraphs of local neighbourhoods. The justification for restricting to connected neighbourhoods is given through a connection to the Myerson value, which is somewhat obscure to me, since I am not familiar with the relevant literature. Nonetheless, it is clear that for the graphs of interest in this paper (chains and lattices) restricting to connected neighbourhoods is a substantial savings.\n\nI have understood the scores presented in Figures 2 and 3 as follows:\n\nFor each feature of each example, rank the features according to importance, using the plugin estimate for P(Y|X_S) where needed.\nFor each \"percent of features masked\" compute log(P(y_true | x_{S\\top features})) - log(P(y_true | x)) using the plugin estimate, and average these values over the dataset.\n\nBased on this understanding the results are quite good. The approximate Shapely values do a much better job than their competitors of identifying highly relevant features based on this measure. The qualitative results are also quite compelling, especially on images where C-Shapely tends to select contiguous regions which is intuitively correct behavior.\n\nComparing the different methods in Figure 4, there is quite some variability in the features selected by using different estimators of Shapley values. I wonder is there some way to attack the problem of distinguishing when a feature is ranked highly when its (exact) Shapley value is high versus when it is ranked highly as an artifact of the estimator?\n", "This paper provides new methods for estimating Shapley values for feature importance that include notions of locality and connectedness. The methods proposed here could be very useful for model explainability purposes, specifically in the model-agnostic case. The results seem promising, and it seems like a reasonable and theoretically sound methodology. In addition to the theoretical properties of the proposed algorithms, they do show a few quantitative and qualitative improvements over other black-box methods. They might strengthen their paper with a more thorough quantitative evaluation.\n\nI think the KernelSHAP paper you compare against (Lundberg & Lee 2017) does more quantitative evaluation than what’s presented here, including human judgement comparisons. Is there a way to compare against KernelSHAP using the same evaluation methods from the original paper?\n\nAlso, you mention throughout the paper that the L-shapley and C-shapley methods can easily complement other sampling/regression-based methods. It's a little ambiguous to me whether this was actually something you tried in your experiments or not. 
Can you please clarify?", "We first thank the reader for reading and greatly appreciate his/her time for writing such detailed reviews:)\n\nIn summary, the reader proposes two suggestions: \n1. The current baselines, including KernelSHAP and LIME, are weak, compared to methods like 'leave-one-out'. \n2. The authors should compare with model-specific techniques, including ‘integrated gradients’. \n\nThe short reply is:\n1. Leave-one-out is not as strong as KernelSHAP, both theoretically and experimentally.\n2. We do not compare with model-specific approaches in the paper as we focus on model agnostic interpretation. See the anonymous link at the end for a comparison made specifically for the reader.\n\nBelow are the concrete details:\n\nWe have different opinions on the first point (to a certain extent). In particular, KernelSHAP is stronger than 'leave-one-out':\na. Based on the source code of KernelSHAP (https://github.com/slundberg/shap/blob/master/shap/explainers/kernel.py), KernelSHAP considers 'masking each word' when computing importance scores, as long as the number of samples is super-linear in the number of features. \nb. Shapley value further incorporates the interaction between features when the number of samples is larger than d (the number of features), which is not the case for leave-one-out.\nc. Experimentally, Leave-one-out is not as good as KernelSHAP when more than one features are masked in terms of the decay in log likelihood.\n\nSecondly, the focus of this work is on model-agnostic interpretation, and thus we did not include comparison with model-specific methods in the paper. Model-specific methods can have superior performance in some cases while suffer a performance decay in other cases: For example, Integrated Gradients can have comparable performance to L-Shapley on CNNs, but perform not as well as other methods on LSTM with comparable complexity. Comparing our methods with all model-specific methods for various models will be an unnecessary use of time and also distract readers from the focus of the paper: efficient approximations of Shapley value, as a model-agnostic method for model interpretation. Being MODEL-AGNOSTIC can be important in some practical settings where models are not specified or multiple models are used. \n\nNevertheless, it does no harm to compare one or two model-specific methods in the reply as suggested by the reader. The reader proposes to compare our methods with Gradient X Input, DeepLIFT and Integrated Gradients. Given the inferior performance of Gradient X Input and the complexity of implementing DeepLIFT, we only compare with Integrated Gradient on NLP tasks, where the time complexity of integrated gradients is controlled to be (approximately) the same as L-Shapley for each sample: \nhttps://drive.google.com/file/d/1UYp2lKDXt-ORgL5vKsU35K5SMa-GQSrs/view?usp=sharing", "I just wanted emphasize that the baselines used in this paper are very weak. To the best of my knowledge, no one has claimed that any of the provided baselines (LIME, KernelSHAP, or SampleSHAP) are remotely close to SOTA for, or even capable of, interpreting neural networks in the manner demonstrated here, as the original papers focused on simpler models, such as SVM, or image models with superpixel preprocessing. 
\n\nThe authors (partially) address this in the results section: \n\n\"We emphasize that our focus is model-agnostic interpretation, and we omit the comparison with interpretation methods requiring additional assumptions or specific to a certain class models, like Integrated Gradients (Sundararajan et al., 2017), DeepLIFT (Shrikumar et al., 2017), LRP (Bach et al., 2015) and LSTM-specific methods (Karpathy et al., 2015; Strobelt et al., 2018; Murdoch & Szlam, 2017).\"\n\nEven if limited to model-agnostic interpretation, a very simple, strong baseline is leave one out - black out a variable and see how much the prediction changes - which is well established in both NLP (https://arxiv.org/pdf/1612.08220.pdf) and vision (https://arxiv.org/abs/1311.2901). This method would perform significantly better than the provided baselines (the baseline examples in the bottom two rows of Figure 4 are the worst I've seen in any paper).\n\nI'd also argue that gradient-based methods should be compared against, such as gradient times input or integrated gradients. While not truly model-agnostic, they only require the model to be differentiable, thus apply to all neural nets, and all models considered in this paper.\n\nMoreover, even if not directly comparable, I'd argue that at least some model-specific techniques should be included as well, in order to see how much is lost by moving from a custom method to a model-agnostic one." ]
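To make the preceding discussion concrete (the reviewer's plugin log-odds metric and the question of how a neighborhood-restricted Shapley estimate differs from leave-one-out), here is a hypothetical sketch for a chain of word features. It is not the authors' implementation: model_prob is an assumed black-box callable, the padding scheme is illustrative, and the exact L-Shapley coefficients in the paper may differ from the generic Shapley weighting used here.

import math
from itertools import combinations

# model_prob(words) -> probability of the predicted class for the (possibly masked) word list.
def local_shapley(words, i, model_prob, radius=2, pad="<pad>"):
    # Only subsets of a small window around position i are enumerated, instead of all 2^d subsets.
    nbhd = [j for j in range(max(0, i - radius), min(len(words), i + radius + 1)) if j != i]
    n, score = len(nbhd), 0.0
    for k in range(n + 1):
        for S in combinations(nbhd, k):
            keep_without = set(S)
            keep_with = keep_without | {i}
            v_with = model_prob([w if j in keep_with else pad for j, w in enumerate(words)])
            v_without = model_prob([w if j in keep_without else pad for j, w in enumerate(words)])
            # Shapley-style weighting restricted to the neighborhood (weights sum to 1).
            weight = math.factorial(k) * math.factorial(n - k) / math.factorial(n + 1)
            score += weight * (v_with - v_without)
    return score

def log_odds_drop(words, top_idx, model_prob, pad="<pad>"):
    # The masking metric described in the review above: change in log-probability of the
    # predicted class after masking the top-ranked words.
    masked = [w if j not in set(top_idx) else pad for j, w in enumerate(words)]
    return math.log(model_prob(words)) - math.log(model_prob(masked))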
[ 6, -1, -1, -1, -1, -1, 7, 7, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 3, 2, -1, -1 ]
[ "iclr_2019_S1E3Ko09F7", "iclr_2019_S1E3Ko09F7", "HyenmOMRoX", "HyenmOMRoX", "SkeChVXt27", "SyxgP3dn2Q", "iclr_2019_S1E3Ko09F7", "iclr_2019_S1E3Ko09F7", "BkgYeky15Q", "iclr_2019_S1E3Ko09F7" ]
iclr_2019_S1EERs09YQ
Discovery of Natural Language Concepts in Individual Units of CNNs
Although deep convolutional networks have achieved improved performance in many natural language tasks, they have been treated as black boxes because they are difficult to interpret. In particular, little is known about how they represent language in their intermediate layers. In an attempt to understand the representations of deep convolutional networks trained on language tasks, we show that individual units are selectively responsive to specific morphemes, words, and phrases, rather than responding to arbitrary and uninterpretable patterns. In order to quantitatively analyze this intriguing phenomenon, we propose a concept alignment method based on how units respond to replicated text. We conduct analyses with different architectures on multiple datasets for classification and translation tasks and provide new insights into how deep models understand natural language.
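The "concept alignment method based on how units respond to replicated text" can be pictured with a small sketch; the rebuttal later in this record explains that each candidate concept is replicated into a fixed-length sentence so that input length is normalized before measuring a unit's response (the DoA value). The function unit_activation, the sentence length, and top_m are illustrative assumptions, not the paper's exact procedure.

# Illustrative sketch (assumed API): align candidate concepts to one unit via replicated text.
# unit_activation(tokens, unit) -> activation of the chosen convolutional unit for this input.
def align_concepts(candidate_concepts, unit, unit_activation, sent_len=20, top_m=3):
    scores = {}
    for concept in candidate_concepts:                 # concept: tuple of tokens (morpheme/word/phrase)
        reps = max(1, sent_len // len(concept))
        replicated = (list(concept) * reps)[:sent_len] # replicate to a fixed length
        scores[concept] = unit_activation(replicated, unit)
    # Concepts with the highest replicated-text response are taken as the unit's aligned concepts.
    return sorted(scores, key=scores.get, reverse=True)[:top_m]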
accepted-poster-papers
Important problem (making neural networks more transparent); reasonable approach for identifying which linguistic concepts different neurons are sensitive to; rigorous experiments. The paper was reviewed by three experts. Initially there were some concerns, but after the author response and reviewer discussion, all three unanimously recommend acceptance.
val
[ "rklw7IW-xN", "rkeFc0bDkV", "BJeMqyfch7", "HkxBOdwqAm", "SJlEFDw9AQ", "Ske8LPPq0m", "Bkx4UOPqRQ", "SyxDYjcq2m", "rke4auot2Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nWe are deeply grateful to reviewer3 for thoughtful post-rebuttal suggestions. We will clarify terminology, add more analyses and modify the figures accordingly. For example, we will match the detected concepts with those in WordNet (ConceptNet) tree and update Fig 7 and Fig 14 to show which concepts are detected at each bin.", "Thank you to the authors for your comprehensive replies and revisions. The added analyses help to clarify and solidify the overall picture, and I remain of the opinion that this paper offers some interesting insights into the internal workings of these networks.", "========== Edit following authors' response ==========\n\nThank you for your detailed response and updated version. I think the new revision is significantly improved, mainly in more quantitative analyses and details in several places. I have updated my evaluation accordingly. \n\nSee a few more points below.\n\n1. Thank you for clarifying your definition of concepts. I still think that the word \"concept\" has a strong semantic connotation, while the linguistic elements your analyses capture may do other things. The results in appendix E do show that some semantic clusters arise. It's especially interesting to see the blocks in some of the heat maps, where similar \"concepts\" are clustered together (like the sports terms in AG); consider commenting on this. \n\n2. The new quantitative analyses are helpful. One other suggestion that I mentioned before is to connect detected concepts to external resources like WordNet or ConceptNet. That would help show that \"concepts\" are indeed semantic objects. \n \n3. The motivation for replicating as normalizing for length does make sense, although the input would still be unnatural. The comparison to \"one instance\" is helpful, but it's interesting that the differences between it and replication in figure 2 are not large. It would be good to show results that substantiate your assumption that without replication there will be a bias towards lengthy concepts. Does \"one instance\" detect more lengthy concepts than replication? \n\n4. The results on frequency and loss difference in 4.5 are very interesting. There is another angle to consider frequency: words that appear frequently often carry less semantic content (e.g. function words), so one might conjecture that they would require less units. It may be interesting to look at which concepts are detected at each frequency bin.\n\n5. Minor points: section 2.2 still mentions \"regression\" where it should be \"classification\". \n\n6. A few remaining grammar issues:\n- \"one concept has a less activation value..\" - rephrase \n- end of section 3.3: \"this experiments\" -> \"these experiments\"\n\n\n========== Original review follows ==========\n\nSummary:\n=======\nThis paper analyzes individual units in CNN models for text classification and translation tasks. It defines a measure of sensitivity for a unit and evaluates how sensitive each unit is to \"concepts\" in the input text, where concepts are morphemes, words, and phrases. The analysis shows that some units seem to learn semantic concepts, while others capture linguistic elements that are frequent or relevant for the end task. Layer-wise results show some correspondence between layer depth and linguistic element size. \n\nThe paper studies an important question that is relatively under-studies in NLP compared to the computer vision community. The motivation for the work is quite convincing. 
\nI found some of the results and analysis interesting, but overall felt that the work can be made much stronger by more quantitative evaluations. I am also worried that the notion of \"concept\" is misleading here. See below for this and other comments. I am willing to reconsider my evaluation pending response to the below issues. \n\nMain comments:\n=============\n1. Concepts: \n- morphemes, words, and phrases - are these \"concepts\"? They are indeed \"fundamental building blocks of natural language\" (2.2), but \"concepts\" has a more semantic connotation that I'm not sure these units target at. \n- Some of the results do suggest that units learn concepts, as the analysis in 4.2 shows a \"unit detecting the meaning of certainty in knowledge\" and later units that have similar sentiments. It would be informative to quantify this in some way, for example by matching detected concepts to WordNet synsets, sentiment lexicons, etc., or else tagging and classifying them with various NLP tools. This could also reveal if units learn more syntactic or semantic concepts, and so on. \n2. Generally, many of the analyses in the paper are qualitative and on a small scale. The results will be more convincing with more automatic aggregate measures. \n3. The structure of the paper is confusing. Section3 starts with the approach but then mentions datasets and tasks (3.1). Section 4 is titled experiments, but section 4.1 starts with defining the concept selectivity. I would suggest reorganizing sections 3 and 4, such that section 3 describes all the methods and metrics, while dataset-specific parts are moved to section 4. \n4. section 3.2 should provide more details on the sentence representation and how its obtained in the CNN models. A mathematical derivation and/or figure could be helpful. It is also not clear to me what's the motivation for mean-pooling over the l entries of the vector. \n5. section 3.3: the use of replicated text for \"concept alignment\" is puzzling. This is not a natural input to the model, and I think more justification and motivation åre needed for this issue, as well as perhaps comparison with other approaches. \n6. I found section 4.4 very interesting. It shows some intuitive results of larger linguistic elements learned at higher layers, but then some results that do not show such a trend. Then, hypothesizing that the middle layers are sufficient AND validating the hypothesis by retraining the model is excellent. It's a very nice demonstration that the analysis can lead to model improvements. \n7. Figure 2 seems to be almost caused by construction of the different options for S_+. Is it surprising that the replicate set has the highest sensitivity? Is there a better control setup than comparing with a random set? \n8. One concern that I have is the effect of confounding factors like frequency on the results. The papers occasionally attributes importance to concepts (e.g. in 4.2), but I wonder if instead we may be seeing more frequent words. Controlling for the effect of frequency would be useful. \n\n\nMinor comments:\n==============\n- Section 2.2, first paragraph: regression should be changed to classification\n- The related work is generally relevant, although one could mention a few other papers that analyzed individual neurons in NLP tasks [1, 2]\n- section 4.1: the random set may perhaps be denoted by something more neutral, not S_+ as the replicate and inclusion sets. 
\n- section 4.3, last paragraph: listing examples showing that units in Europarl focus on key words would be good. \n- Figure 5, y axis label: should this be number of units instead of concepts? \n- Appendix A has several interesting points but there is no reference to them from the main paper. \n\n\nWriting, grammar, etc.:\n======================\n- Introduction: among them - who is them? \n- 2.1: motivated from -> motivated by; In computer vision community -> In the computer vision community\n- 2.1: quantifying characteristics of representations in layer-wise -> rephrase\n- 3.2: dimension of sentence -> dimension of the/a sentence \n- 4.1: to which -> remove \"which\" \n- 4.2: in the several encoding layer -> in several encoding layers \n- 4.3: aliged -> aligned \n- Capitalize titles in references \n- A.2: with following -> with the following; how much candidate -> how much a candidate; consider following -> consider the following \n- A.3: induces similar bias -> induces a bias; such phrase -> such a phrase; on very -> on a very \n- C: where model -> where the model; In consistent -> Consistent; where model -> where the model \n\n\nReferences\n==========\n[1] Qian et al., Analyzing linguistic knowledge in sequential model of sentence\n[2] Shi et al., Why Neural Translations are the Right Length", "\n5. Concept replication\n ===================================\nThe main reason that we replicate each concept into a fixed-length sentence is to normalize the degree of the input signal to the unit activation. We clarify this point in Section 3.3. Without such normalization (e.g. a single instance of a candidate concept as input, as Reviewer 2 suggested), the DoA metric has a bias to prefer a lengthy concept. Please refer to Appendix A.4 for comparison with 'one instance' method.\n\n\n6. Section 4.4\n ===================================\nWe thank Reviewer 3 for acknowledging the significance of results in section 4.4.\n\n\n7. Sensitivity of replicate setting\n ===================================\nWe add a ‘one instance’ option to the comparison of selectivity (Fig. 2). The results show that the mean selectivity of the ‘replicate’ set is higher than that of the ‘one instance’ set, which implies that a unit's activation increases as its concepts appear more often in the input text. One of our main contributions is the discovery of the units that are selectively responsive to specific natural language concepts and “it is quantitatively verified” in Fig. 2.\n\n\n8. Factors that affect concept alignment\n ===================================\nIt is an interesting question why certain concepts emerge more than others. We experiment some factors that may affect concept alignment, and add results to Section 4.5 and Appendix F. We investigate the following two hypotheses: (i) The concepts with higher frequency in training data are aligned to more units (as Reviewer 3 suggested). (ii) Concepts that have more influence on the objective function (expected loss) are aligned to more units. For the concepts in the final layer of translation model, we measure the Pearson correlation coefficient between [# of aligned units per concept] and the factor (i) and (ii), and obtain 0.482 / 0.531, respectively. These results make a lot of sense in that the learned representation focuses more on identifying both frequent concepts and important concepts for solving the target task. 
Yet, we are not sure that we should directly “control” the effect of frequency, because it is quite unnatural and non-trivial to manipulate the training data to change the frequency of a specific concept.\n\n\n9. Minor comments from Reviewer 3\n===================================\n(1) We update Section 2.2, related work, Section 4.1 and Section 4.3 as Reviewer 3 suggested. Please see the blue fonts.\n(2) Fig. 5: We thank Reviewer 3 for correcting the typo. The y-axis of Fig. 5 is “the number of aligned concepts” in each layer. For example, the plot on the top left dbpedia shows that more than 100 morpheme concepts are aligned across all units of the 0-th layer. We also update the caption of Fig. 5 for clarification. \n(3) Appendix A: We add reference to Appendix A in footnote of Section 3.3 of the revised paper.\n(4) Notation of set of ‘random’ sentences: we will modify notation of random set for less confusing in the camera-ready version. \n\n10. Writing and grammar\n===================================\nWe sincerely thank Reviewer 3 for thorough proofreading. We correct all the typos.\n\nReference\n===================================\n[1] Bolei Zhou et al., Revisiting the Importance of Individual Units in CNNs via Ablation (arXiv:1806.02891, 2018)\n[2] David Bau et al., Network Dissection: Quantifying Interpretability of Deep Visual Representations (CVPR 2017)\n[3] Ruth Fong et al., Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks (CVPR 2018)\n[4] Bolei Zhou et al., Object Detectors Emerge In Deep Scene CNNs (ICLR 2015)", "\nWe thank Reviewer 2 for positive and constructive review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Replicating concepts\n===================================\nThe main reason that we replicate each concept into a fixed-length sentence is to normalize the degree of the input signal to the unit activation. Without such normalization (e.g. a single instance of a candidate concept as input, as Reviewer 2 suggested), the DoA metric has a bias to prefer a lengthy concept. We clarify this point at Section 3.3, and present detailed discussion in Appendix A.4.\n\n\n2. M values\n===================================\nThe M value is used as a threshold to set how many concepts per unit are considered for later analyses. We observe that the overall trend in our quantitative results does not change much with M. As an example, we add Fig.8 to Appendix C, which shows the trend of selectivity values is stable across different M= [1,3,5,10]. \n\n\n3. Non-interpretable units\n===================================\nIt is a highly interesting suggestion to investigate non-interpretable units as well as interpretable ones. We add one approximate method to quantify the non-interpretability of unit to Appendix D in the revised paper.\nWe define a unit as non-interpretable, if the activation value of its top-activated sentence is higher than the DoA values of all aligned concepts. The intuition is that if a replicated sentence that is composed of only one concept has a less activation value than the top-activated sentences, the unit is not sensitive to the concept compared to a sequence of different words. Using this definition of non-interpretable units, we report the layer-wise ratios of interpretable units in Fig. 9 and some examples of non-interpretable units in Fig.10 in Appendix D. Please refer to Appendix D for the detailed results.\n\n\n4. 
Figure 5\n===================================\nWe thank Reviewer 2 for correcting the typo. The y-axis of Fig. 5 is “the number of aligned concepts” in each layer. For each layer, we collect all concepts, and then count category of each concept. For example, the plot on the top left dbpedia shows that more than 100 morpheme concepts are aligned to the units of the 0-th layer. We also update the caption of Fig. 5 for clarification. \n\n\n5. Concept clusters\n===================================\n(1) What concept clusters emerge?\nAs Reviewer 2 suggested, we add experiments of concept clusters to Fig. 11 and Appendix E.1. The top and left dendrograms of Fig. 11 show the hierarchical cluster of concepts based on the vector space distance between the concepts in the last layer. For clustering ([4]), we use the Euclidean distance as the distance measure, and pretrained Glove ([1]), fastText ([2]), ConceptNet ([3]) embedding for projecting concepts into the vector space. Each element of the heat map represents the number of times two concepts are aligned in the same unit. We observe that several diagonal blocks (clusters) appear more strongly in classification than in translation, particularly in the AG News and the DBpedia dataset. Please refer to Appendix E.1 for more details.\n\n(2) Why certain clusters emerge more than others?\nIt is an interesting question why certain concepts or clusters emerge more than others. We add some results to this inquiry to Section 4.5 and Appendix F. We deal with individual concepts rather than clusters of concepts. We investigate the following two hypotheses: (i) The concepts with higher frequency in training data are aligned to more units. (ii) Concepts that have more influence on the objective function (expectation of the loss) are aligned to more units. For the concepts in the final layer, we measure the Pearson correlation coefficient between [# of aligned units per concept] and the factor (i) and (ii), and obtain 0.482 / 0.531, respectively. These results make a lot of sense in that the learned representation focuses more on identifying both frequent concepts and important concepts for solving the target task.\n\n6. Typos\n===================================\nWe corrected the typos. Thanks for pointing out.\n\nReferences\n===================================\n[1] Jeffrey Pennington et al., GloVe: Global Vectors for Word Representation (EMNLP 2014)\n[2] Piotr Bojanowski et al., Enriching Word Vectors with Subword Information (TACL 2017)\n[3] Speer Robert et al., ConceptNet 5.5: An Open Multilingual Graph of General Knowledge (AAAI. 2017)\n[4] Daniel Mullner. Modern hierarchical, agglomerative clustering algorithms. arXiv:1109.2378v1. (arXiv 2011)\n", "\nWe thank Reviewer 1 for positive and constructive review. Please see our revisions in blue font to check how our paper is updated.\n\n1. Concepts coverage over multiple layers\n===================================\nWe plot the number of unique concepts per layer in Figure 13. In all datasets, the number of unique concepts increases with the layer depth, which implies that the units in a deeper layer represent more diverse concepts.\n\n\n2. Multiple occurrences of each concept at different layers\n===================================\nWe add Figure 16 to Appendix H to show how many layers each concept appears. Although task and data specific concepts emerge at different layers, there is no strong pattern between the concepts and their occurrences at multiple layers.\n\n\n3. 
The layers’ activation dynamics towards noisy elements\n===================================\nIt is an interesting suggestion to investigate how unit activations vary with noisy elements of natural language such as synthetic adversarial examples or natural noise (Belinkov et al.[1]) that could attack the model. Since we discover some units that capture the abstract semantics rather than low-level text patterns in Section 4.2, we expect that those units will be not sensitive to such noisy transformation of the concepts. More thorough analysis for this topic will be one of our emergent future works.\n\nReferences\n===================================\n[1] Yonatan Belinkov et al., Synthetic and Natural Noise Both Break Neural Machine Translation (ICLR 2018)\n", "\nWe thank Reviewer 3 for positive and constructive review. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Concepts\n===================================\n(1) We agree that the term ‘concept’ could be ambiguous. Nonetheless, we use the term ‘concept’, following the related work for interpretability [1-4], where the ‘units’ and ‘concepts’ are typically used to refer to the channels of hidden layers and the detected semantic parts of the input information (eg, wheels, cars, legs as visual concepts), respectively. In our work on natural language, the ‘concepts’ in previous work should correspond to morphemes, words, and phrases, which form the fundamental building blocks of natural language. Please also note that we define ‘natural language concept’ in Section 1 instead of ‘concept’ alone for less confusion. \n\n\n(2) We define a “concept cluster” as a set of concepts that are aligned to the same unit and have similar semantics or grammatical roles. We add what concept clusters emerge per task to Appendix E.1. We observe that such concept clusters appear more strongly in classification tasks rather than translation tasks. Also, we investigate how concept clusters vary with layer depth and discuss the detailed results in Appendix E.2, where we discover that units in deeper layers tend to form clusters more strongly than units in earlier layers. Please refer to Appendix E for more results.\n\n\n2. Analyses are qualitative and in a small scale\n ===================================\nGiven that we use two state-of-the-art models on seven benchmark datasets, our experiments are large-scale, although some analyses are done qualitatively in small-scale as Reviewer pointed out. \nTherefore, we add more quantitative and thorough analyses as follows.\n(1) Ratios of interpretable/non-interpretable units across layers for multiple tasks and datasets (Appendix D).\n(2) Quantitative measures of concept clusters across layers for multiple tasks and datasets (Appendix E).\n(3) Correlation coefficients of possible hypotheses on why certain units emerge (i.e. document frequency and delta of expected loss) for multiple tasks and datasets (Section 4.5 and Appendix F). \n(4) Selectivity variation for different M values = [1,3,5,10] (Appendix C).\n(5) The number of unique concepts aligned to each layer for multiple tasks and datasets. (Figure 13)\n\n3. Paper structure\n ===================================\nPer Reviewer 3’s suggestion, we will move [The Model and the Task] Section to 4.1 in the camera-ready version. \n\n\n4. Sentence representation\n ===================================\n(1) We clarify Section 3.2 as Reviewer 3 suggested. 
Please refer to blue fonts in Section 3.2\n(2) The idea of mean-pooling over all spatial locations is motivated by Zhou et al. [4]. The only difference is that [4] uses the addition pooling because the input set is fixed-length images, whereas we use the mean pooling because the input is variable-length sentences. \n", "This paper describes a method for identifying linguistic components (\"concepts\") to which individual units of convolutional networks are sensitive, by selecting the sentences that most activate the given unit and then quantifying the activation of those units in response to subparts of those sentences that have been isolated and repeated. The paper reports analyses of the sensitivities of different units as well as the evolution of sensitivity across network layers, finding interesting patterns of sensitivity to specific words as well as higher-level categories.\n\nI think this paper provides some useful insights into the specialization of hidden layer units in these networks. There are some places where I think the analysis could go deeper / some questions that I'm left with (see comments below), but on the whole I think that the paper sheds useful light on the finer-grained picture of what these models learn internally. I like the fact that the analysis is able to identify a lack of substantial change between middle and deeper layers of the translation model, which inspires a prediction - subsequently borne out - that decreasing the number of layers will not substantially reduce task performance.\n\nThe paper is overall written pretty clearly (though some of the questions below could likely be attributed to sub-optimal clarity), and to my knowledge the analyses and insights that it contributes are original. Overall, I think this is a solid paper with some interesting contributions to neural network interpretability.\n\nComments/questions:\n\n-I'm wondering about the importance of repeating the “concepts” to reach the average sentence length. Do the units not respond adequately with just one instance of the concept (eg \"the ball\" rather than \"the ball the ball the ball\")? What is the contribution of repetition alone?\n\n-Did you experiment with any other values for M (number of aligned candidate concepts per unit)? It seems that this is a non-trivial modeling decision, as it has bearing on the interesting question of how broadly selective a unit is.\n\n-You give examples of units that have interpretable sensitivity patterns - can you give a sense of what proportion of units do *not* respond in an interpretable way, based on your analysis?\n\n-What exactly is plotted on the y-axis of Figure 5? Is it number of units, or number of concepts? How does it pool over different instances of a category (different morphemes, different words, etc)? What is the relationship between that measure and the number of distinct words/morphemes etc that produce sensitivity?\n\n-I'm interested in the units that cluster members of certain syntactic and semantic categories, and it would be nice to be able to get a broader sense of the scope of these sensitivities. What examples of these categories are captured? Is it clear why certain categories are selected over others? Are they obviously the most optimal categories for task performance?\n\n-p7 typo: \"morhpeme\"", "The paper is well written and structured, presenting the problem clearly and accurately. It contains considerable relevant references and enough background knowledge. 
It nicely motivates the proposed approach, locates the contributions in the state-of-the-art and reviews related work. It is also very honest in terms of how it differs on the technical level from existing approaches. \nThe paper presents interesting and novel findings to further state-of-the-art’s understanding on how language concepts are represented in the intermediate layers of deep convolutional neural networks, showing that channels in convolutional representations are selectively sensitive to specific natural language concepts. It also nicely discusses how concepts granularity evolves with layers’ deepness in the case of natural language tasks.\nWhat I am missing, however, is an empirical study of concepts coverage over multiple layers, studying the multiple occurrences of single concepts at different layers, and a deeper dive on the rather noisy elements of natural language and the layers’ activation dynamics towards such elements.\nOverall, however, the ideas presented in the paper are interesting and original, and the experimental section is convincing. My recommendation is to accept this submission.\n" ]
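The replies above repeatedly describe replicating a candidate concept to a fixed sentence length before measuring a unit's activation, so that short and long concepts inject a comparable amount of input signal. The sketch below is only an illustration of that normalization step; the tokenizer, the toy single-trigger unit, and the name `degree_of_alignment` are placeholders, not the paper's actual CNN activations or DoA metric.

```python
from statistics import mean

SENTENCE_LEN = 20  # assumed fixed sentence length used for normalization

def replicate(concept: str, length: int = SENTENCE_LEN) -> list:
    # Repeat the concept's tokens until the fixed length is reached.
    tokens = concept.split()
    reps = length // len(tokens) + 1
    return (tokens * reps)[:length]

def unit_activation(tokens, trigger: str = "ball") -> float:
    # Stand-in for one convolutional unit: it responds to a single token.
    return mean(1.0 if t == trigger else 0.0 for t in tokens)

def degree_of_alignment(concept: str) -> float:
    # Hypothetical stand-in for a DoA-style score: mean activation on the
    # replicated, fixed-length version of the concept.
    return unit_activation(replicate(concept))

print(degree_of_alignment("ball"), degree_of_alignment("the ball"))
```

With replication, a one-word and a two-word concept are scored on inputs of the same length, which is the bias-removal effect the authors describe; feeding a single instance instead would favour whichever concept happens to fill more of the sentence.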
[ -1, -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "BJeMqyfch7", "SJlEFDw9AQ", "iclr_2019_S1EERs09YQ", "Bkx4UOPqRQ", "SyxDYjcq2m", "rke4auot2Q", "BJeMqyfch7", "iclr_2019_S1EERs09YQ", "iclr_2019_S1EERs09YQ" ]
iclr_2019_S1EHOsC9tX
Towards the first adversarially robust neural network model on MNIST
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful L-inf defense by Madry et~al. (1) has lower L0 robustness than undefended networks and is still highly susceptible to L2 perturbations, (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great lengths to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decision-based, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L-inf perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.
accepted-poster-papers
The paper presents a technique for training robust classification models that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations. Strengths: - The resulting model offers good robustness guarantees for a wide range of norm-bounded perturbations - The authors put a lot of care into the robustness evaluation Weaknesses: - Some of the "shortcomings" attributed to the previous work seem confusing, as the reported vulnerability corresponds to threat models that the previous work did not make claims about. Overall, this looks like a valuable and interesting contribution.
train
[ "B1eu6wFIyN", "r1lb0cYI1E", "S1xZgzdYJE", "BJx0OiRP1E", "SylDJKFLk4", "SyxI1zFUkE", "rJlyOldp0X", "HJgw5VPpAQ", "r1eCatzaRm", "ByeIeWEi0m", "SJxNatjURX", "SJegx_sU0X", "H1xX7jsIAX", "r1lfWKsIA7", "SyeyTVeH0Q", "H1lxliTMCX", "BygGXkaqT7", "SklgCVqq2Q", "rylr8jLc3X", "B1laSlHch7", "BkgyRoDshm", "H1xrRbe537", "B1gdc1g92m", "ByexLNECsX", "SylqQqa6jQ", "rJxefCOMom", "SygNj-kI57", "HJgxiO8b5m" ]
[ "public", "public", "author", "public", "public", "public", "author", "public", "author", "public", "author", "author", "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "public", "author", "public" ]
[ "I concur. Fashion-MNIST is a necessary datasets, which is similar to MNIST. Why not to choose Fashion-MNIST for analysis. The fact that the method performs well on MNIST is nice, but MNIST should be considered for what it is: a toy dataset. ", "Why the authors choose these two classes (airplane, automobile) in the experiments, Not all classes on CIFAR 10 ? Do other categories have the similar results?", "We assume full knowledge of the model (= white-box setting) which we use to design a customised attack that optimises in the hidden latent space of the model. Furthermore we use score-based and decision-based adversarial attacks.", "A nice paper, but there is one place I don't know very well.\n\nWhether the attacks is in white box mode or black box mode?\n\nI would appreciate it if the authors can answer my question.", "Nice to see the proposal work for 2 class CIFAR. However, as the result show, the accuracy is greatly reduced. \nThere are some questions:\n\n1. Can you show us robustness results on 3-10 class CIFAR? As the category increases, will the robustness decrease?\n\n2. There are 10 categories in CIFAR10, which two categories did you choose to experiment with? why?\n\n3. What about L0 robustness and L-inf robustness on 2 class CIFAR?", "As we know, MNIST is too easy and over used, more importantly, it can not represent modern CV tasks. Fashion-MNIST is an alternative dataset for MNIST. It would be interesting to see whether the proposal would work for more complicated tasks like Fashion-MNIST.", "1. \"The method goes to great computational expense to use Eq. 3 instead of Eq. 2. (8000 evaluations per sample) It would be interesting to see if it's worth it.\"\n\nWe performed a quick experiment with L2 Basic Iterative Method [1] for the ABS model and found that the median L2 robustness with the variational inference (Eq. 2) is 0.05 compared to 2.3 with optimization-based inference (Eq. 3). So indeed the optimization step is crucial to make the model robust.\n\n[1] https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.L2BasicIterativeAttack\n\n\n2. \"Also, what if Madry's defense were trained to defend against L2=1.5 attacks? (This seems like a trivial generalization, but I'm not aware of anyone having done this). It would be interesting to see where such a defense would fit in Table 1.\"\n\nThat’s indeed a very interesting question but the generalisation of Madry’s defense is not as trivial (for a fair comparison we have to be careful in choosing the right iterative method (alternative to PGD) and choosing the optimal hyperparameters). We will try to follow up on this in the future but are currently concentrating on other parts of the ABS model (in particular scaling to more complex data sets).", "Thanks for your detailed reply, which clarifies most of my minor questions and concerns. \n Well, defense-GAN also learn a z in the latent space iteratively such that G(z) is close to the sample x. That's why I thought the methodology used in defense-GAN is somehow similar to what you did in the optimization-inference step. But defintely I see ABS is different from Defense-GAN. For BPDA, I know it is method to recover gradient, actually more precisely, estimate the non-zero and finite gradient that can be used for gradient-based algorithms. It seems like LatentDescentAttack did the similar thing. \n Thanks again, since I got many insights from the paper and your reply.\n\n\n\n\n", "\"1. 
As far as I can see, the defense method is quite like a composition of Defense-GAN and binarization method.\" \n\nOur method is very different from Defense GAN which is basically a sophisticated image denoising followed by a feedforward classifier. In contrast, we use class conditional generative models and no (vulnerable) feedforward classifier at all. \n\n\n\"As claimed in \"ensemble adversarial training\" (appendix), binarization can help MNIST model robust to Linfinity perturbation. But it is still not very intuitive to me why the model can be robust to L2 perturbation. \n\nOur intuition behind the ABS models L2 robustness is due to the Gaussian posterior (in pixelspace) in the reconstruction term, which ensures that small changes in the input can only entail small changes to the posterior likelihood and thus to the model decision. In other words, small changes in the input can only lead to small changes in the reconstruction error and so the logits (= reconstruction error + KL divergence) can only change slowly with varying inputs.\n\n\n\"(The bound given in eq. 8 seems close to 0, does it really make much sense?)\"\n\nOur verified lower bound of the mean L_2 robustness is 0.690 ± 0.005 which is quite high compared to other SOTA methods which provide guarantees for the lower bound (i.e. Hein et al. [1] who have 0.48). \n\n[1] Matthias Hein and Maksym Andriushchenko. \"Formal guarantees on the robustness of a classifier against adversarial manipulation\". In Advances in Neural Information Processing Systems 30, pp. 2266–2276. Curran Associates, Inc., 2017.\n\n\n\"2. Another question is that Defense-GAN can be further attacked by BPDA proposed in https://arxiv.org/pdf/1802.00420.pdf. I was wondering did the proposed method suffer from the same problem (i.e., obfuscated gradients)?\"\n\nBPDA is not really an attack but a way to recover proper gradients for certain models (e.g. by pass-through estimators [2]). We do this in two ways: first by computing descent directions in the low-dimensional latent space (LatentDescentAttack) and second by estimating gradients using a finite-difference estimate (+ a \"pass through estimator\" [2] for the binary ABS model). The LatentDescentAttack is closest in spirit to BPDA (but adapted to our model).\n\n[2] Bengio, Y., Leonard, N., and Courville, A. \"Estimating or propagating gradients through stochastic neurons for conditional computation\". arXiv preprint arXiv:1308.3432, 2013.", "A very interesting method! Just two small questions:\n\n1. As far as I can see, the defense method is quite like a composition of Defense-GAN and binarization method. As claimed in \"ensemble adversarial training\" (appendix), binarization can help MNIST model robust to Linfinity perturbation. But it is still not very intuitive to me why the model can be robust to L2 perturbation. (The bound given in eq. 8 seems close to 0, does it really make much sense?)\n\n2. Another question is that Defense-GAN can be further attacked by BPDA proposed in https://arxiv.org/pdf/1802.00420.pdf. I was wondering did the proposed method suffer from the same problem (i.e., obfuscated gradients)?\n\nAnyway, the paper is pleasant to read. I would appreciate it if the authors can answer my questions.", "\"The main concern with this work is that it is heavily tailored towards MNIST and the authors do mention this. Scaling this to other datasets does not seem easy. 
\"\n\"Using VAEs to model the conditional class distributions is a nice idea, but how does this scale for datasets with large number of classes like imagenet? This would result in having 1000s of VAEs.\"\n\nFirst experiments suggest that our robustness is not limited to MNIST. To show this, we trained the proposed ABS model and a vanilla CNN on two class CIFAR and achieve a robustness ~3x larger than a CNN. \n\nRobustness results on 2 class CIFAR:\nmodel accuracy | L2 robustness\nCNN 97.1% | 0.8 (estimated with BIM)\nABS 89.7% | 2.5 (estimated with LatentDescent attack)\n\nTo tackle the reduced accuracy of ABS on CIFAR-10 and other datasets, we are currently working on extensions of our architecture and the training procedure. First experiments show that this can improve the accuracy substantially over baseline ABS and still comes with the same robustness to adversarial perturbations (but this is beyond the scope of this paper).\n\n\n\"It would be nice to see this model behaves for skewed datasets.\"\n\nIn contrast to purely discriminative models that require manual rebalancing of the training data, our generative architecture can cope well with unbalanced datasets out of the box. To demonstrate this experimentally, we have trained a two-class MNIST classifier (ones vs. sevens) both on a balanced dataset, an unbalanced datasets (10 times as many sevens than ones during training) and a highly unbalanced dataset (100 times as many ones as sevens during training). They all perform similarly well:\n \n accuracy | L_2 median perturbation size with Latent Descent attack\nbalanced ABS 99.6 +- 0.1% | 3.5 +- 0.1\n10 :1 unbalanced ABS 99.3 +- 0.2% | 3.4 +- 0.2\n100:1 unbalanced ABS 98.5 +- 0.2% | 3.2 +- 0.2\n", "\"Although the paper is designed for MNIST specifically, the proposed scheme should apply to other classification tasks. Have you tried the models on other datasets like CIFAR10/100? It would be interesting to see whether the proposal would work for more complicated tasks.\"\n\nFirst experiments suggest that our robustness is not limited to MNIST. To show this, we trained the proposed ABS model and a vanilla CNN on two class CIFAR and achieve a robustness ~3x larger than a CNN. \n\nRobustness results on 2 class CIFAR:\nmodel accuracy | L2 robustness\nCNN 97.1% | 0.8 (estimated with BIM)\nABS 89.7% | 2.5 (estimated with LatentDescent attack)\n\nTo tackle the reduced accuracy of ABS on CIFAR-10 and other datasets, we are currently working on extensions of our architecture and the training procedure. First experiments show that this can improve the accuracy substantially over baseline ABS and still comes with the same robustness to adversarial perturbations (but this is beyond the scope of this paper).\n\n\n\"When the training data for each label is unbalanced, namely, some class has very few samples, would you expect the model to fail?\"\n\nIn contrast to purely discriminative models that require manual rebalancing of the training data, our generative architecture can cope well with unbalanced datasets out of the box. To demonstrate this experimentally, we have trained a two-class MNIST classifier (ones vs. sevens) both on a balanced dataset, an unbalanced datasets (10 times as many sevens than ones during training) and a highly unbalanced dataset (100 times as many ones as sevens during training). 
They all perform similarly well:\n \n accuracy | L_2 median perturbation size with Latent Descent attack\nbalanced ABS 99.6 +- 0.1% | 3.5 +- 0.1\n10 :1 unbalanced ABS 99.3 +- 0.2% | 3.4 +- 0.2\n100:1 unbalanced ABS 98.5 +- 0.2% | 3.2 +- 0.2\n\n\n\"It would be more interesting to add more intuition on why the proposed model is already robust by design.\"\n\nAdversarial training is used to prevent small changes in the input to make large changes in the model decision. In the ABS model, the Gaussian posterior in the reconstruction term ensures that small changes in the input can only entail small changes to the posterior likelihood and thus to the model decision. In other words, small changes in the input can only lead to small changes in the reconstruction error and so the logits (= reconstruction error + KL divergence) can only change slowly with varying inputs.\n\n\n\"Equation (8) is complicated and still model-dependent. Without further relaxation and simplification, it’s not easy to see if this value is small or large, or to understand what kind of message this section is trying to pass.\"\n\nWe provide quantitative values in the results section \"Lower bounds on Robustness\" (we'll add a pointer). For ABS, the mean L2 perturbation (i.e. the mean of epsilon in eq. 8 across samples) is 0.69. For comparison, Hein et al. [1] reaches 0.48.\n\n[1] Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems 30, pp. 2266–2276. Curran Associates, Inc., 2017.\n\n\n\"Although the main contribution of the paper is to propose a model that is robust without further defending, the proposed model could still benefit from adversarial training. Have you tried to retrain your model using the adversarial examples you have got and see if it helps?\"\n\nIt's an interesting question as to whether a combination of analysis by synthesis and adversarial training can yield even better results. One potential problem could be that adversarial training makes little sense if adversarials are already at the perceptual boundary between two classes. This would need to be evaluated carefully and we feel that such an analysis goes beyond the scope of this paper. We will, however, release the code and the pretrained model for the community to play around with such ideas. Thanks for the suggestion!\n", "We would like to thank all reviewers for their valuable feedback. Regarding concerns we responded to each reviewer individually\n\nWe have uploaded an updated version of the paper with the following changes: \n\n1.) We provide additional intuitions behind the model architecture and its robustness \n\n2.) We have extended the section describing ideas to scale this approach to more complex datasets\n\n3) We provide preliminary results for two class CIFAR. \n\n4) Minor changes\n* fixed the correct image for distal adversarials for the ABS model \n* We changed p(x) to p(x|y) to be consistent\n* We added a pointer to to the results in section 4 \"TIGHT ESTIMATES OF THE LOWER BOUND FOR ADVERSARIAL EXAMPLES\"\n* We consistently refer to the sigma of the variational inference as \\sigma_q \n", "\"it was not very clear to me that the authors were estimating the p(x) for each y. The transition from p(x|y) to p(x) at the end of page 3 was astute and confused me. The authors should make it more clear.\"\n\nWe agree, thank you for pointing this out. We changed p(x) -> p(x|y) in Equation (2) and the text. 
\n\n\n\"it would be beneficial if the authors could comment on the how strict/loose the lower bound of (2) is, as it is critical in estimating the class specific density.\"\n\nFor a standard VAE trained on MNIST, the estimate of log p(x) is around -93 while true log-likeilhood is at around -87 (see https://openreview.net/pdf?id=HyZoi-WRb, Figure 3). Hence, the bound is neither extremely loose, nor extremely tight. In any case, one should keep in mind that the goal of the model is not optimal density estimation but accuracy and model robustness, so we can accept to be non-optimal. You may be right, however, that tighter bounds might also increase accuracy and robustness, which is an exciting question to be answered in future work.", "Thanks!\n\nThe method goes to great computational expense to use Eq. 3 instead of Eq. 2. (8000 evaluations per sample) It would be interesting to see if it's worth it.\n\nAlso, what if Madry's defense were trained to defend against L2=1.5 attacks? (This seems like a trivial generalization, but I'm not aware of anyone having done this) It would be interesting to see where such a defense would fit in Table 1.", "- How is \\sigma chosen in Eq.3? Is it different from \\sigma_q in Eq.7?\n\nThe \\sigma in Eq. 3 should be called \\sigma_q as well, thanks for pointing this out. We set \\sigma_q = 1 (the exact value doesn’t really matter at this point since we do not sample from the posterior distribution during the optimization step). We’ll add this to the “Model and Training Details” section in the appendix. \n\n\n- Why does it make sense to equate (7) and (6), upper and lower bounds? (I'm sure the authors thought it through, but it seems unclear from the text)\n\nRemember that an (untargeted) adversarial perturbation tries to maximally lower the likelihood of the true label and to maximally increase the likelihood of some other label. We here derive how much the likelihood of the true label can maximally decrease for a given norm-ball of epsilon (that’s the lower bound), and what the maximum likelihood of any other class may be under the same constraint (that’s the upper bound). The epsilon for which the lower and upper bound are the same is the maximum epsilon for which we can guarantee that the model will still predict the true label.\n", "\n\n- How is \\sigma chosen in Eq.3? Is it different from \\sigma_q in Eq.7?\n\n- Why does it make sense to equate (7) and (6), upper and lower bounds? (I'm sure the authors thought it through, but it seems unclear from the text)", "This paper shows that the problem of defending MNIST is still unsuccessful. It hereby proposes a model that is robust by design specifically for the MNIST classification task. Unlike conventional classifiers, the proposal learns a class-dependent data distribution using VAEs, and conducts variational inference by optimizing over the latent space to estimate the classification logits. \n\nSome extensive experiments verify the model robustness with respect to different distance measure, with most state-of-the-art attacking schemes, and compared against several baselines. The added experiments with rotation and translation further consolidate the value of the work. \n\nOverall I think this is a nice paper. Although being lack of some good intuition, the proposed model indeed show superior robustness to previous defending approaches. Also, the model has some other benefits that are shown in Figure 3 and 4. 
The results show that the model has indeed learned the data distribution rather than roughly determining the decision boundary of the input space as most existing models do.\n\n\nHowever, I have the following comments that might help to improve the paper:\n\n1. It would be more interesting to add more intuition on why the proposed model is already robust by design. \n\n2. Although the paper is designed for MNIST specifically, the proposed scheme should apply to other classification tasks. Have you tried the models on other datasets like CIFAR10/100? It would be interesting to see whether the proposal would work for more complicated tasks. When the training data for each label is unbalanced, namely, some class has very few samples, would you expect the model to fail?\n\n3. Equation (8) is complicated and still model-dependent. Without further relaxation and simplification, it’s not easy to see if this value is small or large, or to understand what kind of message this section is trying to pass. \n\n4. Although the main contribution of the paper is to propose a model that is robust without further defending, the proposed model could still benefit from adversarial training. Have you tried to retrain your model using the adversarial examples you have got and see if it helps?\n", "In this paper, the authors argued that the current approaches are not robust to adversarial attacks, even for MNIST. They proposed a generative approach for classification, which uses variational autoencoder (VAE) to estimate the class specific feature distribution. Robustness guarantees are derived for their model. Through numeric studies, they demonstrated the performance of their proposal (ABS). They also demonstrated that many of the adversarial examples for their ABS model are actually meaningful to humans, which are different from existing approaches, such as SOTA.\n\nOverall this is a well written paper. The presentation of their methodology is clear, so are the numerical studies.\n\nSome comments:\n1) it was not very clear to me that the authors were estimating the p(x) for each y. The transition from p(x|y) to p(x) at the end of page 3 was astute and confused me. The authors should make it more clear.\n2) it would be beneficial if the authors could comment on the how strict/loose the lower bound of (2) is, as it is critical in estimating the class specific density.", "Paper summary: The paper presents a robust Analysis by Synthesis classification model that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations. The architecture involves training VAEs for each class to learn p(x|y) and performing exact inference during evaluation. The authors show that ABS and binary ABS outperform other models in terms of robustness for L2, Linf and L0 attacks respectively. \n\nThe paper in general is well written and clear, and the approach of using generative methods such as VAE for better robustness is good. \n\nPros: \nUsing VAEs for modeling class conditional distributions for data is an exhaustive approach. The authors show in Fig 4 that ABS generates adversarials that are semantically meaningful for humans, which is not achieved by Madry et al and other models. \n\nCons: \n1) The main concern with this work is that it is heavily tailored towards MNIST and the authors do mention this. Scaling this to other datasets does not seem easy. 
\n2) Using VAEs to model the conditional class distributions is a nice idea, but how does this scale for datasets with large number of classes like imagenet? This would result in having 1000s of VAEs. \n3) It would be nice to see this model behaves for skewed datasets. \n\n", "That makes sense. Thanks a lot for the explanation.", "You can think of AbS as incorporating an explicit Gaussian noise model (by means of the Gaussian posterior): it basically assumes that the signal (the digit) is corrupted by noise. In return, as long as the corrupted images stay close (in terms of L2) to the original image, the AbS will not change it's decision. The difference between rotations and L_infty perturbations is that the latter still stay close to the original image in terms of L2 (at least roughly), whereas small rotations can easily lead to large L2 distances.", "Thanks a lot for doing this, very cool results!\n\nI agree with your point about the hardness of learning transformations that go beyond what is in the dataset.\nThis raises an interesting question regarding the difference between rotation/translations and l_p perturbations. Intuitively, large l_infty perturbations also go beyond typical data transformations. Yet AbS seems to do fine with them. \n\n", "Dear reviewers and readers,\n\nwe performed additional robustness evaluations and discovered a minor issue with the random seed in the Salt and Pepper (S&P) attack. We reevaluated robustness against S&P as well as Pointwise attack (which uses S&P for initialization) and found small changes in the L0 results:\n\nFormat: Binary ABS robustness | ABS robustness\n\nL2 Pointwise Attack: no change | 4.8 -> 4.6\nL2 overall: no change | no change\n\nL0 Salt&Pepper Noise: 158.5 -> 146.0 | 182.5 -> 165.0\nL0 Pointwise Attack: 36.5 -> 22.0 | 22.0 -> 16.5\nL0 overall: 36.0 -> 21.5 | 22.0 -> 16.5\n\nWe will update table 1 and figure 2 in the manuscript accordingly. No conclusions or statements in the paper are affected.", "Dear Florian, that's a great suggestion! I took the time to re-implement the spatial attack in Foolbox (because our whole evaluation setup is based on it) and tested (1) a vanilla MNIST network (the one used by Madry et al, as taken from Madry's challenge), (2) the Madry et al. defense (the secret model in Madry's challenge) and (3) our AbS model. We used the same transformation ranges as [Engstrom et al.] (translations: +- 3px, rotation +- 30 degrees). Here are the results:\n\n(1 - Vanilla) Translation-only: 12,3% --- Rotation-only: 12.7% --- Translation & Rotation: 0.01%\n(2 - Madry) Translation-only: 9% --- Rotation-only: 66.0% --- Translation & Rotation: 0%\n(3 - AbS) Translation-only: 25.5% --- Rotation-only: 67.1% --- Translation & Rotation: 0.3%\n\nI am not yet able to reproduce the large difference between vanilla and defended network present in [Engstrom et al]. We found the defense by Madry et al. work a little worse than reported in [Engstrom et al], in particular with respect to translations, while we found the vanilla network to perform much worse (we used a different one than in [Engstrom et al.] though, which probably explains the difference). AbS performs much better than vanilla in both rotation and translation and also performs better than Madry et al. on shifts. 
Frankly, I'd expected the AbS to perform even better but on the other hand, if the transformations go beyond the typical transformations of the data than there is no reason why the AbS should learn them.", "Did you measure the robustness of your model to small (worst-case) rotations and translations? (https://arxiv.org/abs/1712.02779)\n\nI think these attacks could be good candidates to further show that your model is not subject to some form of gradient masking, as the worst-case perturbation can be found via exhaustive search.\n\nIncidentally, rotations and translations are another class of perturbations that the l-infinity model of Madry et al. is not robust against (that's what the above paper by the same authors shows). The paper also shows that you can adversarially train a model to be robust to rotations and translations, but I don't think it says anything about training a model that is robust to both rotations/translations and l-infinity attacks (which your model might be)", "Thanks for your comment! We tested our ABS model against one of the background pixel attacks suggested in fig. 6 of https://arxiv.org/pdf/1807.06732.pdf (random lines added on top of the samples) and found a strong robustness against such perturbations (96% accuracy for two lines, 86% for four lines and 54% for eight lines [difficult even for humans], see https://ibb.co/cpDt9K for samples). The combination of Madry et al. with weight decay is certainly interesting but out of the scope of this paper. Thanks for the L1 reference, we'll include it in the manuscript.", "Since a major claim of this paper (the first claim listed in the abstract) is that the Madry et al 2017 model doesn't defend against L0 or L2 attacks, it seems like it would make sense to discuss earlier related work that showed the Madry et al 2017 model doesn't defend against attacks other than Linf threat model it was designed for. To the best of my knowledge, the first such work was the demonstration that it doesn't defend against L1 attacks, which seem to not be mentioned at all in this submission: https://arxiv.org/abs/1710.10733\n There is also the background pixel attack (fig 6 of https://arxiv.org/pdf/1807.06732.pdf ) and a variety of threat models described by https://arxiv.org/abs/1804.03308 where weight decay outperforms Linf-adversarial training." ]
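Several of the reviews and responses above describe the classification rule only at a high level: one generative model per class, inference by optimizing over the latent code, and a class score built from the reconstruction error plus a penalty on the latent. The following sketch is a minimal illustration of that decision rule under strong simplifications; the linear toy decoders, the plain gradient-ascent inner loop, and the dimensions are stand-ins for the trained per-class VAEs, not the authors' actual architecture or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, d_latent, d_pixels, sigma = 3, 8, 64, 1.0

# Hypothetical per-class generators: x_hat = W @ z. A real model would use the
# decoder of a VAE trained on that class's images.
decoders = [rng.normal(scale=0.3, size=(d_pixels, d_latent)) for _ in range(n_classes)]

def class_score(x, W, steps=200, lr=0.05):
    """Maximize -||x - W z||^2 / (2 sigma^2) - 0.5 ||z||^2 over the latent z.

    This mimics inference by optimization: a reconstruction term plus a
    Gaussian prior penalty, with a closed-form gradient for the toy decoder.
    """
    z = np.zeros(d_latent)
    for _ in range(steps):
        residual = x - W @ z
        z += lr * ((W.T @ residual) / sigma**2 - z)  # gradient ascent step
    residual = x - W @ z
    return -0.5 * float(residual @ residual) / sigma**2 - 0.5 * float(z @ z)

def classify(x):
    # Assign x to the class whose generator explains it best.
    return int(np.argmax([class_score(x, W) for W in decoders]))

# Toy check: a sample produced by class 1's generator is assigned to class 1.
z_true = rng.normal(size=d_latent)
x = decoders[1] @ z_true + 0.05 * rng.normal(size=d_pixels)
print(classify(x))
```

The inner optimization is the point emphasized in the replies: scoring each class by how well its generative model can explain the input, rather than by a single feed-forward pass, is what reportedly keeps the decision from changing under small input perturbations.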
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SyxI1zFUkE", "SJxNatjURX", "BJx0OiRP1E", "iclr_2019_S1EHOsC9tX", "SJegx_sU0X", "SJegx_sU0X", "SyeyTVeH0Q", "r1eCatzaRm", "ByeIeWEi0m", "iclr_2019_S1EHOsC9tX", "B1laSlHch7", "SklgCVqq2Q", "iclr_2019_S1EHOsC9tX", "rylr8jLc3X", "H1lxliTMCX", "BygGXkaqT7", "iclr_2019_S1EHOsC9tX", "iclr_2019_S1EHOsC9tX", "iclr_2019_S1EHOsC9tX", "iclr_2019_S1EHOsC9tX", "H1xrRbe537", "B1gdc1g92m", "SylqQqa6jQ", "iclr_2019_S1EHOsC9tX", "rJxefCOMom", "iclr_2019_S1EHOsC9tX", "HJgxiO8b5m", "iclr_2019_S1EHOsC9tX" ]
iclr_2019_S1GkToR5tm
Discriminator Rejection Sampling
We propose a rejection sampling scheme using the discriminator of a GAN to approximately correct errors in the GAN generator distribution. We show that under quite strict assumptions, this will allow us to recover the data distribution exactly. We then examine where those strict assumptions break down and design a practical algorithm—called Discriminator Rejection Sampling (DRS)—that can be used on real data-sets. Finally, we demonstrate the efficacy of DRS on a mixture of Gaussians and on the state of the art SAGAN model. On ImageNet, we train an improved baseline that increases the best published Inception Score from 52.52 to 62.36 and reduces the Frechet Inception Distance from 18.65 to 14.79. We then use DRS to further improve on this baseline, improving the Inception Score to 76.08 and the FID to 13.75.
accepted-poster-papers
The paper proposes a discriminator dependent rejection sampling scheme for improving the quality of samples from a trained GAN. The paper is clearly written, presents an interesting idea and the authors extended and improved the experimental analyses as suggested by the reviewers.
train
[ "BkeHrKU1kE", "r1lSsOUk1E", "BkgY088kJV", "Ske86fcY0Q", "BJgZuI6m0m", "SyxH1nd7R7", "r1e5iSqf6X", "r1gYHA1-a7", "SkeXZRk-Tm", "SylUYa1bpQ", "SklRqc0yTQ", "rJgNPOjOnX" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Please see this comment: https://openreview.net/forum?id=S1GkToR5tm&noteId=SyxH1nd7R7 or the updated PDF for experimental results on (what we think is) the simpler rejection scheme you mention. \n\nPlease also let us know if there's anything else you think we can do to improve the paper quality.", "Thanks very much for the review, please see this comment: https://openreview.net/forum?id=S1GkToR5tm&noteId=SyxH1nd7R7 for some ablation experiments and comparisons with heuristic rejection schemes. \n\nLet us know if there's anything else you think we can do to improve the work.", "Thanks for bringing [1] to our attention; we hadn't seen it.\nWe'll first summarize our understanding of the algorithm from [1] (which we'll call IR for 'Importance Resampling')\nand then we'll discuss differences.\n\nIR somehow computes importance weights for a set of samples using the Discriminator/Critic from a trained GAN.\nA single sample is drawn as follows:\nN samples from the trained GAN are prepared and importance weights are computed.\nA single one of the samples is then 'accepted' using a categorical distribution over N categories parameterized by the importance weights.\n\nThe differences (between IR and DRS and between our scientific evaluation and theirs) are:\n\n1. [1] don't continue to train D to approximate D^*.\nWe theoretically motivate the importance of this, and we also show (in the new experiments we ran for the rebuttal) that this is important empirically.\nThis difference may explain the small improvement given by IR (see below).\n\n2. [1] sample one image at a time given a set of N candidates instead of the probabilistic sampling as in DRS.\nThat is, their acceptance ratio is controlled by N.\nI don't think that this procedure will recover p_data given finite N?\nIt's hard to say for sure without knowing more detail about how they are getting the importance weights.\n\n3. We add the 'gamma trick', which you already noted is crucial to making the algorithm work in practice.\nImagine that the weights of n-1 samples are tiny (e.g. e-10) and the weight of one sample is close to 1.\nNormalizing all of the samples by \\sum{w_i} does not make any difference in the weights and thus, this importance re-sampling would not do much.\nThe 'gamma trick' changes the acceptance probabilities such that they cover the whole range of 0 to 1 scores.\nThis effect was also illustrated in Figure 2-A.\nThis results in a more efficient sampling scheme when acceptance probabilities for most of the samples are very small,\nwhich happened in our ImageNet experiment (purple histogram of Figure 2-A).\n\n4. [1] don't really provide evidence that IR yields quantitative improvement.\nIn the supplementary material, they show a single run on which the Inception score is changed from 7.28 to 7.42, an improvement of less than 2%.\nOur work shows that DRS yields improvements of (61.44 / 52.34 ~ 17%) and (76.08 / 62.36 ~ 22%) respectively on the baseline and improved\nversions of SAGAN[2] we used for experiments.\nApart from [3] (a concurrent submission to ICLR), these results are the best achieved in the literature.\nWe think it's reasonably to expect that DRS could improve the results from [3] as well.\n\n5. 
[1] seem to compare IR to a weak baseline in the experiment from the Supplementary Material.\nThis experiment is (presumably) conducted on the unsupervised CIFAR-10 task.\n7.42 is not only far from the state of the art at the time [1] was written (this is important because it gives evidence about whether IR can be 'stacked'\nwith other improvements), but it's less than the reported performance of the main method from [1], which is given as 7.47 +/- 0.10.\nThis is strange, because it suggests that the baseline for this experiment was not trained as well as the model in the main text (its performance of 7.28 is nearly\n2 standard deviations worse).\nFootnote 1 in the main text says 'We used a less well-trained model and picked our samples based on the importance weights to highlight the difference.',\nbut it's unclear if this was also intentionally done in the supplementary material.\n\n6. [1] don't compute the FID of the accepted samples, so there is no way to know if diversity has been sacrificed for sample quality.\nWe compute the FID and show that it has improved after DRS.\n\n7. [1] don't provide any theoretical analysis of IR.\n\n8. [1] don't include any illustrative toy experiments that suggest why resampling might work.\nWe propose and give support (using the mixture of gaussians experiment) for the hypothesis that it's easier for the\ndiscriminator to tell that certain regions of X are 'bad' than it is for the Generator to avoid spitting out samples in that region.\n\n\nPS:\nWe don't mean to be overly negative about [1].\nWe understand that IR was not the primary contribution of that work.\nWe just wish to emphasize the scope of the difference between the fraction of that work focusing on IR and our work.\n\nPPS:\nWe saw this message after the deadline to modify the PDF.\nWe will of course add this discussion to the final copy of the PDF when the time comes.\n\n[1] Chi-square generative adversarial network. In ICML, 2018.\n[2] Self-Attention GAN\n[3] Large Scale GAN Training for High Fidelity Natural Image Synthesis\n", "Thanks for the interesting applications, which addressed my main concern. \n\nAlso, I recently found that the literature [1] has also mentioned a similar resampling idea. So a relative discussion should be added into the manuscript to make clear the difference.\n\n[1] C. Tao, L. Chen, R. Henao, J. Feng, and L. Carin. Chi-square generative adversarial network. In ICML, 2018.", "We have also added plots corresponding to the above values", "Reviewers 1 and 2 both mentioned that they would like to see comparisons to certain baselines. 
\nWe have now performed such comparisons.\nWe are working on adding them to the PDF, but I will discuss the results here in the meantime.\n\nWe evaluated 4 different rejection sampling schemes on the mixture-of-Gaussians dataset:\n\n(1) Always reject samples falling below a hard threshold and DO NOT train the Discriminator to 'convergence'.\n\n(2) Always reject samples falling below a hard threshold and train the Discriminator to convergence.\n\n(3) Use probabilistic sampling as in eq 8 and DO NOT train the Discriminator to convergence.\n\n(4) Our original DRS algorithm, in which we use probabilistic sampling and train the Discriminator to convergence.\n\nIn (1) and (2), we were careful to set the hard threshold so that the actual acceptance rate was the same as in (3) and (4).\n\nBroadly speaking:\n4 performs best\n3 performs OK but yields less 'good samples' than 4.\n2 yields the same number of 'good samples' as 3, but completely fails to sample from 5 of the 25 modes.\n1 actually yields the most 'good samples' for the modes it hits, but it only hits 4 modes!\n\nThese results show that\na) continuing to train D so that it can approximate D^* (which we have already motivated theoretically) is helpful in practice. \nb) performing sampling as in eq 8 (which we also motivated theoretically) is helpful in practice. \n\nBelow we provide, for each method, the number of samples within 1, 2, 3 and 4 std deviations and the number of modes hit.\nFor reference, we also compute these statistics for the ground truth distribution and the unfiltered samples from the GAN.\n\nWe would have liked to perform the same analysis on SAGAN, but we currently don't have access to resources that would\nallow us to do this before the response deadline.\n\nDRS ABLATION STUDY\nGROUND TRUTH\nCentroid coverage: 25\nwithin 1 std: 0.3934\nwithin 2 std: 0.8661\nwithin 3 std: 0.9891\nwithin 4 std: 0.9999\nVANILLA GAN\nCentroid coverage: 25\nwithin 1 std: 0.273\nwithin 2 std: 0.5308\nwithin 3 std: 0.6615\nwithin 4 std: 0.7561\n(1) THRESHOLD NO FT\nCentroid coverage: 4\nwithin 1 std: 0.3849\nwithin 2 std: 0.9255\nwithin 3 std: 0.9944\nwithin 4 std: 0.9982\nTHRESHOLD\n(2) Centroid coverage: 20\nwithin 1 std: 0.3478\nwithin 2 std: 0.7023\nwithin 3 std: 0.8359\nwithin 4 std: 0.8928\n(3) DRS NO FT\nCentroid coverage: 25\nwithin 1 std: 0.314962934062\nwithin 2 std: 0.601736246586\nwithin 3 std: 0.73585641826\nwithin 4 std: 0.811841591885\n(4) DRS\nCentroid coverage: 25\nwithin 1 std: 0.35277582572\nwithin 2 std: 0.657589599438\nwithin 3 std: 0.817463106114\nwithin 4 std: 0.897487702038\n", "This paper proposes a rejection sampling algorithm for sampling from the GAN generator. Authors establish a very clear connection between the optimal GAN discriminator and the rejection sampling acceptance probability. Then they explain very clearly that in practice the connection is not exact, and propose a practical algorithm. \n\nExperimental results suggest that the proposed algorithm helps the increase the accuracy of the generator, measured in terms of inception score and Frechet inception distance. \n\nIt would be interesting though to see if the proposed algorithm buys anything over a trivial rejection scheme such as looking at the discriminator values and rejecting the samples if they fall below a certain threshold. This being said, I do understand that the proposed practical acceptance ratio in equation (8) is 'close' to the theoretically justified acceptance ratio. 
Since in practice the learnt discriminator is not exactly the ideal discriminator D*(x), I think it is super okay to add a constant and optimize it on a validation set. (Equation (7) is off anyways since in practice the things (e.g. the discriminator) are not ideal). But again, I do think it would make the paper much stronger to compare equation (8) with some other heuristic based rejection schemes.\n\n ", "We have written individual replies to Reviews 2 and 3 (these are the only reviews at present). \n\nWe have also update the PDF to include a new figure (fig 6) on the effect of gamma. \n\nWe are working on making more updates to the draft for purposes of clarity.", "We thank the reviewer for his/her time and feedback. We appreciate the kind words relating to the clarity and comprehensiveness of our submission, and hope to address any remaining concerns the reviewer has here.\n\nOTHER APPLICATIONS:\n (a) Suppose we’re designing molecules for drug discovery purposes using a generative model. \nAt some point, we will have to physically test the molecules that we have designed, which could be costly.\n If the discriminator can throw out some obviously unrealistic molecule designs, this will save us money and time.\n(b) For text generation applications, a nonsensical generated sentence in a dialog system could be rejected by the discriminator, reducing the frequency of embarrassing mistakes. \n(c) In RL applications, if we are predicting future states with a generative model, we could use this technique to throw out silly predictions, reducing the risk of taking a silly action predicated on those predictions. \n(d) More generally, you could use DRS on models that are not GANs.\n\nADDRESSING D* ISSUE\n\nYou’re right about this - we will change the wording. We don’t do anything to *fix* the problem that we can’t actually compute D*, we just show that you don’t need to precisely recover D* to get good results. The first paragraph on page 5 speculates on why this might be so, and figures 4 and 5 provide evidence for this speculation.\n\nREGARDING GAMMA:\n\nWe agree that gamma is an important hyperparameter, because it modulates the acceptance rate. \nWe have already made the figure you propose and have updated the PDF to include it. It is now figure 6. \nPlease let us know if there are other experiments that you think would\nimprove the quality of the work.\n", "Thanks very much for the review. \nWe think that there have been two misunderstandings here, one about the Gaussian Mixture experiment and one about the purpose of the quantity F_hat(x).\nThese are our fault; we should have made the paper more clear and we are modifying the draft to do so.\nIn the meantime, we will address both issues here. We use > for quotes. \n\nGAUSSIAN MIXTURE EXPERIMENT:\n> - GAN setting: 10K examples are generated and reported in figure 3?\nThis much is true.\n\n\n> - DRS setting: 10K examples are generated, and submitted to algorithm in figure 1. For each batch, a line search sets gamma so that 95% of the examples are accepted. Thus only 9.5K are reported in figure 3.\nThis part is not true.\nYou probably got confused by the line 'We generate 10,000 samples from the generator with and without DRS.' which we agree is unclear. 
\n\nFirst, we generate as many samples as needed to yield 10K acceptances, so both plots have 10k dots on them.\n\nSecond, there is no line search.\nEach example is given an acceptance probability p that is generated from substituting F_hat from equation 8 for F in equation 6.\nThen, a pseudo-random number in [0,1] is compared with p to determine acceptance. \nThus, for any given batch, the number of examples accepted is non-deterministic.\nWe think that this point also relates to the misunderstanding regarding the purpose of F_hat.\n\nThird, gamma is subtracted from F.\nSo setting gamma equal to the 95th %-ile value of F means that an example where F(x) is at the 95th %-ile will have a 50% chance of being accepted, because\n1 / (1 + e^(-F_hat(x))) = 1 / (1 + e^0) = 1 / 2 in this case. \nThe result is that around 23% of samples drawn from the generator made it into the final DRS plot, which means we had to draw a little less than 50k samples from the generator. \n\n\n> If this is my understanding, then the comparison in Figure 3 in unfair, as DRS is allowed to pick and choose.\nWe're unsure what you mean here.\nIt's true in some sense that DRS is allowed to pick and choose, but from our perspective this is part of the definition of rejection sampling?\nThe generator can't figure out how to stop yielding bad samples, but the discriminator can tell which samples are bad, so we can\nthrow those out and get a distribution closer to the ground truth distribution at the cost of having to generate extra samples from the generator.\n\n\nPURPOSE OF F_HAT:\n> Let's jump to equation (8): compared to a simple use of the discriminator for rejection, it adds the term under the log\nWe don't think this is correct - the log already exists and we just add the gamma and epsilon terms.\nThe discussion after eq 5 shows that the acceptance probability p(x) is exp(D_tilde^*(x) - D_tilde^*(x^*)).\nThe tildes are important, because they mean that we are operating not on the sigmoid output of D but on the logit that is passed to the sigmoid output.\nThen we ask what F(x) would have to be s.t. 1 / (1 + e^(-F(x))) = p(x).\nThis results in equation 7, *which already has the log term*.\nThe only difference between F_hat and F is that we introduce the epsilon for numerical stability and the gamma to modulate the acceptance probability.\n\n> First order Taylor expansion of...\nWhat you say here is true, but we are not thresholding. \nWe think this is the root of the misunderstanding.\nWe don't consider the hard thresholding algorithm here because it might deterministically reject certain samples for which D^* is low,\nwhich means that we would never be able to actually draw samples from p_d, even in the idealized setting of section 3.1\n\nPlease let us know if this response answers all of your questions. \nWe are happy to expand.\n", "his paper assumes that, in a GAN, the generator is not perfect and some information is left in the discriminator, so that it can be used to 'reject' some of the 'fake' examples produced by the generator.\n\nThe introduction, problem statement and justification for rejection sampling are excellent, with a level of clarity that makes it understandable by non expert readers, and a wittiness that makes the paper fun to read. 
I assume this work is novel: the reviewer is more an expert in rejection than in GANs, and is aware how few publications rely on rejection.\n\nHowever, the authors fail to compare their algorithm to a much simpler rejection scheme, and a revised version should discuss this issue.\nLet's jump to equation (8): compared to a simple use of the discriminator for rejection, it adds the term under the log.\nThe basic rejection equation would read F(x) = D*(x) - gamma and one would adjust the threshold gamma to obtain the desired operating point. I am wondering why no comparison is provided with basic rejection? \n\nLet me try to understand the Gaussian mixture experiment, as the description is ambiguous:\n- GAN setting: 10K examples are generated and reported in figure 3?\n- DRS setting: 10K examples are generated, and submitted to the algorithm in figure 1. For each batch, a line search sets gamma so that 95% of the examples are accepted. Thus only 9.5K are reported in figure 3.\n- What about basic rejection using F(x) = D*(x) - gamma: how does it compare to DRS at the same 95% accept?\n\nIf this is my understanding, then the comparison in Figure 3 is unfair, as DRS is allowed to pick and choose.\nFor completeness, basic rejection should also be added.\n\nGoing back to Eq.(8), one realizes that the difference between DRS rejection and basic rejection may be negligible.\nA first-order Taylor expansion of log(1-x), which would apply to the case where the rejection probability is small, yields:\nF(x) = (D*(x) - D*_M) + exp(D*(x) - D*_M) \n\nx + exp(x) is monotonic, so thresholding over it is the same as thresholding over x: back to basic rejection!", "This paper proposed a post-processing rejection sampling scheme for GANs, named Discriminator Rejection Sampling (DRS), to help filter ‘good’ samples from GANs’ generator. More specifically, after training, GANs’ generator and discriminator are fixed; GANs’ discriminator is further exploited to design a rejection sampler, which is used to reject the ‘bad’ samples generated from the fixed generator; accordingly, the accepted generated samples have good quality (better IS and FID results). Experiments with the SAGAN model on GMM toys and the ImageNet dataset show that DRS helps further increase the IS and reduce the FID.\n\nThe paper is easy to follow, and the experimental results are convincing. However, I am curious about the following questions.\n\n(1)\tBesides helping generate better samples, could you list several other applications where the proposed technique is useful? \n\n(2)\tIn the last paragraph of Page 4, I don’t think the presented Discriminator Rejection Sampling “addresses” the issues in Sec 3.2, especially the first paragraph of Page 5.\n\n(3)\tThe hyperparameter gamma in Eq. (8) is of vital importance for the proposed DRS. Actually, it is believed to be the key to determining whether DRS works or not. Detailed analysis/experiments about hyperparameter gamma are considered missing. \n" ]
[ -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, 4 ]
[ "SklRqc0yTQ", "r1e5iSqf6X", "Ske86fcY0Q", "SkeXZRk-Tm", "SyxH1nd7R7", "iclr_2019_S1GkToR5tm", "iclr_2019_S1GkToR5tm", "iclr_2019_S1GkToR5tm", "rJgNPOjOnX", "SklRqc0yTQ", "iclr_2019_S1GkToR5tm", "iclr_2019_S1GkToR5tm" ]
iclr_2019_S1M6Z2Cctm
Harmonic Unpaired Image-to-image Translation
The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations. In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions to enforce consistent mappings during the translation. We develop HarmonicGAN to learn bi-directional translations between the source and the target domains. With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained. Distance metrics defined on two types of features including histogram and CNN are exploited. Under an identical problem setting as CycleGAN, without additional manual inputs and only at a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability. We show experimental results in a number of applications including medical imaging, object transfiguration, and semantic labeling. We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.
accepted-poster-papers
The paper introduces a method for unsupervised image-to-image mapping, adding a new term to the objective function that enforces consistency in similarity between image patches across domains. Reviewers left constructive and detailed comments, which the authors have made substantial efforts to address. Reviewers have ranked the paper as borderline, and in the Area Chair's opinion, most major issues have been addressed: - R3&R2: Novelty compared to DistanceGAN/CRF limited: authors have clarified contributions in reference to DistanceGAN/CRF and demonstrated improved performance on several datasets. - R3&R1: Evaluation on additional datasets required: authors added evaluation on 4 more tasks. - R3&R1: Details missing: authors added details.
train
[ "Sye1L1Bn37", "r1xN-MahTm", "S1xNXGahT7", "HJxUobThTQ", "SJeW-lphpQ", "Hkl52g6nam", "H1xHSb636X", "rJgUEgahpm", "H1lGEHeonm", "SylYVmZKnm", "B1xpkXvb2Q", "ryeFogT13Q", "H1l5el1hjX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "This paper proposes a method called HarmonicGAN for unpaired image-to-image translation. The key idea is to introduce a regularization term on the basis of CycleGAN, which encourages similar image patches to acquire similar transformations. Two feature domains are explored for evaluating the patch-level similarity, including soft RGB histogram and semantic features based on VGGNet. In fact, the key idea is very similar to that of DistanceGAN. The proposed method can be regarded as a combination of the advantages of DistanceGAN and CycleGAN. Thus, the technical novelty is very limited in my opinion. Some experimental results are provided to demonstrate the superiority of the proposed method over CycleGAN, DistanceGAN and UNIT.\n\nGiven the limited novelty and the inadequate number of experiments, I am leaning to reject this submission.\n\nMajor questions:\n1. Lots of method details are missing. In Section 3.3.2, what layers are chosen for computing the semantic features? What exactly is the metric for computing the distance between semantic features.\n2. The qualitative results on the task, Horse2Zebra and Zebra2Horse, are not impressive. Obvious artifacts can be observed in the results. Although the paper claims that the proposed method does not change the background and performs more complete transformations, the background is changed in the result for the Horse2Zebra case in Fig. 5. More qualitative results are needed to demonstrate the effectiveness of the proposed method.\n3. To demonstrate the effectiveness of a general unpaired image-to-image translation method, the proposed method is needed to be testified on more tasks.\n4. Implementation details are missing. I am not able to judge whether the comparisons are fair enough.\n\n[New comment:] I have read the authors' explanations and clarifications that make me increase my rating. Regarding the technical novelty, I still don't think this paper bears sufficient stuff. If there is extra quota, I would recommend Accept.\n", "Q1: The paper is lacking in technical details: a. what is the patch-size used for RGB-histogram? b. what features or conv-layers are used to get the features from VGG (19?) net? \n\nA1: For the RGB histogram, we set the patch size to 8 \\times 8. For the CNN features, we select the layer 4_3 after ReLU from VGG-16 network. Considering the limited space of ICLR submission, we put the demonstration of implementation details in the appendix; given that multiple reviewers pointed this out, we've moved the implementation details to the main paper and expanded the paper to 9 pages.\n\n\nQ2: Other than medical imaging where there isn't a variation in colors of the two domains, it is not clear why RGB-histogram would work?\n\nA2: The RGB-histogram for non-medical image cases is still useful as it captures the \"textureness\" of an image patch although it might not be a very rich representation.\nBased on our experiments, our framework learns translations of changing colors and textures. E.g. for the task of Horse2Zebra (or Zebra2Horse), regions of horse are brown and are expected to be translated to zebra-like texture with black and white stripes. At the same time, the background often shows different appearance for the horse or zebra. Therefore, two patches which are both from the horse or both from the background will have small distance in the RGB histogram, while two patches from horse and background respectively will have larger distance in the RGB histogram. 
This makes the RGB histogram useful for building a smoothness constraint in the proposed method to improve the translation results. In the task of Label2City, labels are shown with different colormaps, so here again it is reasonable to employ RGB histograms to represent the label patches. However, for the Photos2City, there are some categories which have variable colors and patterns and are not suitable to be represented by a RGB histogram, such as cars and humans. Therefore, using the RGB histogram may be damaging for the diversity of these categories, and this is why the RGB histogram shows a little lower performance than standard CycleGAN in Table 2. \n\n\nQ3: the current formulation can be thought as a variant of perceptual loss from Johnson et al. ECCV'16 (applied for the patches, or including pair of patches). In my opinion, implementing via perceptual loss formulation would have made the formulation cleaner and simpler? The authors might want to clarify as how it is different from adding perceptual loss over the pair of patches along with the adversarial loss. One would hope that a perceptual loss would help improve the performance. Also see, Chen and Koltun, ICCV'17.\n\nA3: The proposed smoothness term has a great difference compared with perceptual loss. A key and one-sentence summary would be: the perceptual loss preserves the ABSOLUTE high-level feature values for A pattern before and after the translation (therefore effective in style transfer to preserve the content part) whereas HarmonicGAN preserves the DIFFERENCE/DISTANCE of a PAIR of patterns before and after the translation.\n\nPerceptual loss is proposed for the style transfer task. It forces the result to maintain the content of the content target and preserve the style of the style target. Perceptual loss includes two parts, for content and style respectively, formulated as:\ncontent perceptual loss: L_{content}(x, y) = ||\\phi_j (y) - \\phi_j (x)||^2_2 / (C_j H_j W_j),\nstyle perceptual loss: L_{style}(x, y) = || G_j(y) - G_j(x) ||^2_F,\nwhere \\phi_j represents the activations of the jth layer in a pre-trained network (e.g. VGG-Net), and C_j, H_j, W_j are the channel, height, width of jth layer, G_j represents the Gram matrix computed on the jth layer. Therefore, perceptual loss enforces the output y to reconstruct the feature of the Gram matrix of the input x. \n\nIn contrast, the proposed smoothness term in HarmonicGAN aims to provide similarity-consistency between image patches during the translation, formulated in Eq. 6, 7, 8. The smoothness term is designed to build a graph Laplacian on all pairs of image patches, and the smoothness constraint preserves the overall integrity of the translation from the manifold learning perspective, rather than reconstructing the input sample directly. In addition, although the smoothness constraint in HarmonicGAN is measured on the features of each patches, including a RGB histogram or CNN features, it is not suitable to treat the smoothness constraint as a variant of perceptual loss: the CNN feature is only a kind of representation of image patches, not a major design part of the smoothness constraint. Other methods of representing image patches could also be employed in the smoothness constraint, such as RGB histogram.\n\n(continued below)", "(continued from above) \n\nQ4: The proposed approach is highly constrained to the settings where structure in input-output does not change. I am not sure how would this approach work if the settings from Gokaslan et al. 
ECCV'18 were considered (like cats to dogs where the structure changes while going from input to output)? \n\nA4: It is an interesting idea to change the shapes and structures of objects during translation. The proposed method is implemented based on CycleGAN, which doesn’t have the capacity to change structure. In this work, we focus on improving the translation by introducing the smoothness constraint to provide similarity-consistency between image patches during the translation. The application of changing structure could be considered in future work.\n\n\nQ5: Does the proposed approach also provide temporal smoothness in the output? E.g. Fig. 7 shows an example of man on horse being zebrafied. My guess is that input is a small video sequence, and I am wondering if it provides temporal smoothness in the output? The failure on human body makes me wonder that smoothness constraints are helping learn the edge discontinuities. What if the edges of the input (using an edge detection algorithm such as HED from Xie and Tu, ICCV'15) were concatenated to the input and used in formulation? This would be similar in spirit to the formulation of deep cascaded bi-networks from Zhu et al. ECCV'16.\n\nA5: We focus on image-to-image translation, so we have not considered temporal smoothness in the output, but we agree that would be an interesting topic to explore in future work. \nHarmonicGAN aims at preserving similarity from the overall view of the image manifold, rather than producing \"smoother\" images/labels in the translated domain. Thus, the smoothness constraint is not suitable for learning the edge discontinuities. For more analysis, please refer to the answer and experimental results comparing to CRF which are in our response to Question #1 of Reviewer #2. ", "(continued from above)\n\nA2: In eq. 6, 7, 8, the smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function. We define the set consisting of individual image patches as the nodes of the graph, and define the affinity measure (similarity) computed on image patches as the edges of the graph. Then the smoothness term acts as a graph Laplacian on all pairs of image patches. Our definition of harmonic function is consistent with what was defined in (Zhu et al. ICML 2003), where the smoothness term defines a graph Laplacian with the minimal value achieved at \\Delta f = 0 as a harmonic function. In our paper, the smoothness term (Eq. 6, 7, 8) defines a Laplacian \\Delta = D - W, where W is our weight matrix in Eq. 6 and D is a diagonal matrix with D_{i} = \\sum_j w_{ij}. In the implementation, the losses and gradients of the smoothness term are computed in parallel, which is efficient on GPUs. We also randomly sample the image pairs to further reduce computation complexity.\n\n\nQ3: Missing citations & term vs constraint\n\nA3: We have added citations to CRFs and other papers. About the term \"constraint\", you are right that we don't have an explicit equality or inequality to satisfy here. However, recent constrained optimization literature makes less distinction between the two. We have replaced \"constraint\" in most locations by \"term\", but in a few places calling it \"constraint\" is easier to understand.\n\n\nQ4: When using feature from pre-training (VGG) in the CRF loss, the comparison with unsupervised CycleGAN is not fair.\n\nA4: Firstly, the VGG model used to obtain semantic features of image patches is pre-trained on a large-scale classification dataset, e.g. 
ImageNet dataset. The VGG model has not seen the data of image-to-image translation during its training process, and the VGG model is fixed during extracting features in the training process of image-to-image translation. Therefore, the VGG model will not bring extra supervised information about the image-to-image translation datasets. Secondly, we only use the VGG model as a feature extractor during the training process. In the inference stage, the VGG model is removed along with all the constraints and the discriminator. That means the structure of models from CycleGAN and the proposed HarmonicGAN are exactly the same since they use the same structure for generator. We also provide alternative results using RGB histogram features. In conclusion, we think it is fair to employ VGG as a feature extractor in the training process of the proposed method. \n\n\nQ5: In Table 2 (Label translation on Cityscapes), CycleGAN outperforms the proposed method in all metrics when only unsupervised histogram features are used, which makes me doubt about the practical value of the proposed regularization in the context of image-translation tasks. Having said that, the histogram-based regularization is helping in the medical-imaging application (Table 1). By the way, the use of histograms (of patches or super-pixels) as unsupervised features in pairwise regularization is not new neither. Also, it might be better to use super-pixels instead of patches. \n\nA5: The main contribution of the proposed HarmonicGAN comes from the smoothness constraint which enforces consistent mappings during the translation. When computing the distance for the graph Laplacian, we adopt two types of feature measures, the RGB histogram and CNN features. These two feature measures could be selected according to the specialty of the domain. For example, for medical imaging, the major translation in images of two medical domains are colors. Thus, it is reasonable to use histogram features to represent the image patches, and histogram features improve the translation performance. However, for the task of label to city, regions of the same color should be translated to objects of the same category. Since objects of the same category may have different colors and appearances (e.g. cars of different colors and pedestrians wearing different clothes), the histogram feature is not suitable to represent the category information. This is why the results of the histogram feature for label to city task are unsatisfactory, and the CNN features are more suitable to represent the objects for this task. Results in Table 2 provide evidence for this explanation: the proposed method using the histogram performs slightly worse than CycleGAN, while the method using CNN features outperforms CycleGAN. In conclusion, selecting suitable feature measures for the smoothness constraint according to the image domains is important, and different domains benefit from different features.", "Q1: The key idea of this paper is very similar to that of DistanceGAN. The proposed method can be regarded as a combination of the advantages of DistanceGAN and CycleGAN.\n\nA1: There is a large difference between DistanceGAN and the proposed HarmonicGAN. First, DistanceGAN already included the CycleGAN loss. 
Second, DistanceGAN is about preserving the AVERAGED distance between the sample pairs from the source to the target domain, which is not sufficient to retain the underlying integrity and manifold structure.\n\nNext, we elaborate the key difference between DistanceGAN and HarmonicGAN. DistanceGAN encourages the distance of samples to be close to an ABSOLUTE MEAN during translation. In contrast, HarmonicGAN enforces a smoothness term naturally under the graph Laplacian, making the motivations of DistanceGAN and HarmonicGAN quite different.\n\nIn more detail, the distance constraint in DistanceGAN uses the expectation of the absolute differences between the distances in each domain, formulated as:\n\nL_{distance}(G, X) = E_{x_i, x_j \\in X} \\left| ( || x_i - x_j || - \\mu_X) / \\sigma_X + ( || G(x_i) - G(x_j) || - \\mu_Y ) / sigma_Y \\right|,\n\nwhere \\mu_X, \\mu_Y (\\sigma_X, \\sigma_Y) are the precomputed means (standard deviations) of pairwise distances in the training sets from domain X and Y.\nThis distance preserving is interesting but not strong enough to preserve the manifold structure. We suspect that it is probably the reason for DistanceGAN not performing well, as seen in the qualitative and quantitative measures.\n\nDifferently, HarmonicGAN introduces a smoothness constraint to provide similarity-consistency between image patches during the translation. The smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function. We define the set consisting of individual image patches as the nodes of the graph, and define the affinity measure (similarity) computed on image patches as the edges of the graph. The smoothness term acts as a graph Laplacian imposed on all pairs of image patches. For the translation from X to Y, the smoothness constraint is formulated as:\n\nL_{smooth} (G, X, Y) = E_{{\\bf x} \\in X} \\big [\\sum_{i,j} w_{ij}(X) \\times Dist[G(\\vec{x})(i), G(\\vec{x})(j)] + \\sum_{i,j} w_{ij}(G(X)) \\times Dist[F(G(\\vec{x}))(i), F(G(\\vec{x}))(j)]} \\big]\n\nwhere w_{ij}(X) = \\exp_{- Dist[\\vec{x}(i), \\vec{x}(j)] / \\sigma^2} defines the affinity between the two patches \\vec{x}(i) and \\vec{x}(j). Additionally, the similarity of a pair of patches is measured on the features of each patch, e.g. Histogram or CNN features. \n\nComparing the distance constraint in DistanceGAN and the smoothness constraint in HarmonicGAN, we can conclude the following main three differences between them:\n\n(1) They show different motivations and formulations. Most importantly, the loss term in DistanceGAN essentially matches the distance of sample pairs in the source domain to the AVERAGED distance in the target domain; it is not about preserving the distance of the individual sample pairs. From a manifold learning point of view, preserving the averaged distance is not sufficient for preserving the underlying manifold structure. In contrast, the smoothness constraint in our method is designed from a graph Laplacian to build the similarity-consistency between image patches. Thus, the smoothness constraint uses the affinity between two patches as weight to measure the similarity-consistency between two domains. Our approach is in the vein of manifold learning. The smoothness term defines a Laplacian \\Delta = D - W, where W is our weight matrix and D is a diagonal matrix with D_{i} = \\sum_j w_{ij}, thus, the smoothness term defines a graph Laplacian with the minimal value achieved as a harmonic function.\n\n(2) They are different in implementation. 
The smoothness term in HarmonicGAN is computed on image patches while the distance term in DistanceGAN is computed for the average. Therefore, the smoothness constraint is more fine-grained compared to the distance preserving term in DistanceGAN. Moreover, the distances in DistanceGAN is directly computed from the samples in each domain. They scale the distances with the precomputed means and stds of two domains to reduce the effect of gap between two domains. Differently, the smoothness constraint in HarmonicGAN is measured on the features (Histogram or CNN features) of each patches, which maps samples in two domains into the same feature space and removes the gap between two domains.\n\n(continued below)", "Q1: This paper adds a spatial regularization loss to the well-known CycleGAN loss for unpaired image-to-image translation (Zhu et al., ICCV17). Essentially, the regularization loss (Eq. 6) is similar to imposing a CRF (Conditional Random Field) term on the network outputs, encouraging spatial consistency between patches within each generated image. Imposing pairwise regularization on the outputs of modern deep networks has been investigated in a very large number of works recently, particularly in the context of weakly-supervised and supervised CNN segmentation, e.g., Tang et al., ECCV 18 , Lin et al. CVPR 2016, Chen et al. ICLR 2015 and Zheng et al., ICCV 2015. Very similar in spirit to this ICLR submission, these works impose within-image pairwise regularization (e.g., CRF) on the latent outputs of deep networks, with the main difference that these works use CNN semantic segmentation classifiers whereas here we have a CycleGAN for image generation. The manifold regularization terminology is misleading. The regularization is not over the feature space of image samples. It is within the spatial domain of each generated image (patch or pixel level); so, in my opinion, CRF (or spatial) regularization (instead of manifold regularization) is a much more appropriate terminology. \n\nA1: There are some fundamental differences between the CRF literature and our work. They differ in output space, mathematical formulation, application domain, effectiveness, and the role in the overall algorithm. The similarity between CRF and HarmonicGAN lies the adoption of a regularization term: a binary term in the CRF case and a Laplacian term in HarmonicGAN. The differences are detailed below:\n\n1. Label space vs. feature space\nThe key difference is the explicit graph Laplacian adopted in HarmonicGAN on vectorized representation on all pairs vs. a binary term for the neighboring labels on the scalar representation.\n\nHarmonicGAN is indeed formulated in the feature space, not just limited to patches within the single image. The CycleGAN implementation by Zhu et al. happens to include one image only in a batch for computational reason. We follow the standard pipeline of CycleGAN in HarmonicGAN and might have created a confusion here. The description has been clarified in the revised text and we have added citations to the mentioned papers.\n\n2. 
Mathematical formulation\n\nWhen learning a CRF model, the objective function often combines a unary term and binary term to minimize\n\\arg \\min_{w,a} - \\sum_{i} \\log p(y_i|X_i; w) + \\sum_{(i,j) \\in Neighborhood} a \\log p(y_i, y_j|X_i, X_j; w)\nwhere w and a are the parameters in CRF to be learned, and y_i and y_i are SCALAR \\in {1,...,k} for k-class labels.\nFor HarmonicGAN, the objective function includes bidirectional translation having the unary term (CycleGAN loss) and binary term. For simplicity we can look at one direction only:\n\\arg \\min_{G,F} \\sum_{i} |F(G(X))_i, x_i| + \\sum_{i,j \\in ImageLattice} w_{ij} Dist[F(y)(i), F(y)(j)]\nwhere w_{i,j} defines the similarity measure and F(y)(i) computes a feature VECTOR center at i.\nThe key difference lies in the explicit graph Laplacian defined with w_{ij} for Dist[F(y)(i), F(y)(j)] for all pairs whereas p(y_i, y_j|X_i, X_j; w) is a joint probability for the neighboring pixels i and j.\nIn both supervised CRF or weakly-supervised CRF, y_i and y_j are scalars, which are not applicable to the general image translation task for non-labeling tasks since the feature vector space is too high for CRF to model. In addition, the graph Laplacian term in HarmonicGAN is explicitly modeled, which is very different from a joint probability model on the labels (scalar) for the neighboring pixels. It is true that HarmonicGAN adopts a smoothness term but so do semi-supervised learning, manifold learning, Markov Random Fields, spectral clustering and normalized cuts, and Laplacian eigenmaps.\n\n3. Application domain\nCFR models are used in supervised and weakly-supervised image labeling task but HarmonicGAN, like CycleGAN, is applied to the generic image translation tasks where the output is beyond image labels. The reason we show the result on Cityscapes here is twofold: (1) it is shown in the original CyceleGAN paper and we want to have a direct comparison with, and (2) the labeling result can have a quantitative measures since the ground-truth labels are available. The family of unpaired image translation tasks can be quite broad, as seen in a number of applications following CycleGAN.\n\n(continued below)", "(continued from above)\n\n4. Effectiveness\nThe effect of the binary term in CRF is to encourage the joint probability to be faithful to the training labels: p(y_i, y_j|X_i, X_j; w)\nThis term itself is not necessarily about smoothness. It only happens to be the case that most of the time the ground-truth labels are mostly the same for the neighboring pixels. Importantly, the overall effect of the binary term in CRF has been widely observed being secondary for image labeling tasks, meaning it can help smooth the output boundaries, but the learning procedure is mostly dictated by the unary term. In fact, it is very difficult for a CRF model to fundamentally improve the wrong prediction for large areas. As shown in Fig 9, HarmonicGAN instead is able to almost completely correct the mistakes made by the unary term (the CycleGAN loss) for the BRATS experiment.\n\n5. Role in the algorithm\nAs stated by the reviewer, \"CRFs have made a significant impact when used as post-processing\", but the smoothness term in HarmonicGAN is not about post-processing, at which stage it may anyway be too late to correct large mistakes. The smoothness term in HarmonicGAN works closely with the CycleGAN loss to create meaningful translations while maintaining the overall integrity of the image contents. 
The improvement of HarmonicGAN over CycleGAN goes way beyond the 5-20% improvement of adopting CRF in the standard image labeling tasks. HarmonicGAN provides a significant boost over CycleGAN in all cases and turns a failure case in BRATS to a success.\nAs a matter of fact, the smoothness term in HarmonicGAN is not about obtaining \"smoother\" images/labels in the translated domain, as seen in the experiments; instead, HarmonicGAN is about preserving the overall integrity of the translation itself for the image manifold. This is the main reason for the large improvement of HarmonicGAN over CycleGAN.\n\nTo further demonstrate the difference of HarmonicGAN and CRF, we perform an experiment of applying the pairwise regularization of CRFs to the CycleGAN framework. For each pixel of the generated image, we compute the unary term and binary term with its 8 neighbors, and then minimize the objective function of CRF. The results are:\n\n Flair -> T1 T1 -> Flair\n MAE\\downarrow MSE\\downarrow MAE\\downarrow MSE\\downarrow\nCycleGAN 10.47 674.40 11.81 1026.19 \nCycleGAN+CRF 11.24 839.47 12.25 1138.42\nHarmonicGAN-Histogram 6.38 216.83 5.04 163.29\nHarmonicGAN-VGG 6.86 237.94 4.69 127.84\n\nAs shown in the the above quantitative results, the pairwise regularization of CRF is unable to handle the problem of CycleGAN illustrated in Fig. 1. What's worse, using the pairwise regularization may over-smooth the boundary of generated images, which results in extra artifacts. In contrast, HarmonicGAN aims at preserving similarity from the overall view of the image manifold, and thus exploit similarity-consistency of the generated images rather than over-smooth the boundary. We have added these results along with a comparison and discussion to Section 6.2 in the paper to clarify this.\n\nQ2: I did not get how the loss in (9) gives a harmonic function. Could you please clarify and give more details? In my understanding, the harmonic solution in [ Zhu and Ghahramani, ICML 2013] comes directly as a solution of the graph Laplacian (and it assumes some labeled points, i.e., a semi-supervised setting). Even, if the solution is correct (which I do not see how), I do not think it is an efficient way to handle pairwise-regularization problems in image processing, particularly when matrix W = [w_{ij}] is dense (which might be the case here, unless you are truncating the Gaussian kernel with some heuristics). In this case, back-propagating the proposed loss would be of quadratic complexity w.r.t the number of image patches.\n\n(continued below)", "(continued from above)\n\n(3) They show different results. We add Fig. 6 to show the qualitative results of CycleGAN, DistanceGAN and the proposed HarmonicGAN on the BRATS dataset. As shown in Fig. 6, the problem of randomly adding/removing tumors in the translation of CycleGAN is still present in the results of DistanceGAN, while the proposed method solves the problem and connecrts the location of the tumors. Table 1 shows the quantitative results on the whole test set, which also reach the same conclusion. The results of DistanceGAN on four metrics are even worse than CycleGAN, while HarmonicGAN yields a large improvement over CycleGAN.\n\nIn conclusion, the proposed method differs significantly from DistanceGAN in motivation, formulation, implementation and results. We have added a comparison and discussion about the differences between DistanceGAN and HarmonicGAN in Section 6.1 in the revision to make this clear.\n\n\nQ2: Lots of method details are missing. 
Implementation details are missing. In Section 3.3.2, what layers are chosen for computing the semantic features? What exactly is the metric for computing the distance between semantic features.\n\nA2: In the implementation we select the layer 4_3 after ReLU from the VGG-16 network for computing the semantic features. In Eq. 6, 7, 8, we first normalize the features to [0,1] and then use the L1 distance of normalized features as the Dist function (for both Histogram and CNN features). Considering the limited space in an ICLR submission, we had moved the implementation details to the appendix; we've now moved it back to the main paper and expanded the paper to 9 pages. Are there any other details in particular that you would like to know?\n\n\nQ3: The qualitative results on the task, Horse2Zebra and Zebra2Horse, are not impressive. Obvious artifacts can be observed in the results. Although the paper claims that the proposed method does not change the background and performs more complete transformations, the background is changed in the result for the Horse2Zebra case in Fig. 5. More qualitative results are needed to demonstrate the effectiveness of the proposed method.\n\nA3: The task of unpaired image-to-image translation is highly difficult due to the lack of paired training data. Although the proposed method could not generate “perfect” results on some samples, it shows significantly better performance compared to the standard state of the art CycleGAN framework. The result of the human perceptual study in Table 4 demonstrates that the proposed method achieves a higher Likert score and the larger percentage of user preference over CycleGAN and DistanceGAN. As shown in Table 4, the users give the highest score (72%) to the proposed method, significantly higher than CycleGAN (28%). Meanwhile, the average Likert score of our method was 3.60, outperforming 3.16 of CycleGAN and 1.08 of DistanceGAN. Both CycleGAN and our method may change the color or tone of background, which also looks realistic overall (such as translating the color of grass from green to yellow). However, sometimes CycleGAN may translate some parts of the background to zebra-like texture, which is an artifact. The proposed method performs better on preventing these zebra-like parts and makes the generated results more realistic as shown e.g. in the comparisons in Fig. 7 and Fig. 10. Considering the limited space in the paper, please see more qualitative results in Fig. 10 in the appendix.\n\n\nQ4: To demonstrate the effectiveness of a general unpaired image-to-image translation method, the proposed method is needed to be testified on more tasks.\n\nA4: As suggested, we apply the proposed method on 4 more tasks in Fig. 11, including translation between apples and oranges, facades and labels, aerials and maps, summer and winter, and compare these to CycleGAN. These results demonstrate that the proposed method generalizes well to these tasks and outperforms CycleGAN.", "This paper adds a spatial regularization loss to the well-known CycleGAN loss for unpaired image-to-image translation (Zhu et al., ICCV17). Essentially, the regularization loss (Eq. 6) is similar to imposing a CRF (Conditional Random Field) term on the network outputs, encouraging spatial consistency between patches within each generated image.\n\nThe paper is clear and well written.\n\nUnpaired Image-to-Image translation is an important problem. \n\nThe way the smoothness loss (Eq. 
6) is presented gives readers the impression that spatial pairwise regularization is new, ignoring its long history (e.g., CRFs) in computer vision (not a single classical paper on CRFs is cited). Putting aside classical spatial regularization works, imposing pairwise regularization on the outputs of modern deep networks has been investigated in a very large number of works recently, particularly in the context of weakly-supervised semantic CNN segmentation, e.g., [Tang et al., On Regularized Losses for Weakly-supervised CNN Segmentation, ECCV 18 ], [Lin et al. : Scribblesup: Scribble-supervised convolutional networks for semantic segmentation, CVPR 2016], among many other works. Very similar in spirit to this ICLR submission, these works impose within-image pairwise regularization (e.g., CRF) on the latent outputs of deep networks, with the main difference that these works use CNN semantic segmentation classifiers whereas here we have a CycleGAN for image generation.\n\nAlso, in the context of supervised CNN segmentation, CRFs have made a significant impact when used as post-processing step, e.g., very well known works such as [DeepLab by Chen et al. ICLR15] and [CRFs as recurrent Neural Networks by Zheng et al., ICCV 2015]. \n\nIt might be a valid contribution to evaluate spatial regularization (e.g., CRFs losses) on image generation tasks (such as CycleGAN), but the paper really needs to acknowledge very related prior works on regularization (at least in the context of deep networks).\n\nThere are also related pioneering semi-supervised deep learning works based on graph Laplacian regularization, e.g., [Westen et al., Deep Learning via Semi-supervised embedding, ICML 2008], which the paper does not acknowledge/discuss. \n\nThe manifold regularization terminology is misleading. The regularization is not over the feature space of image samples. It is within the spatial domain of each generated image (patch or pixel level); so, in my opinion, CRF (or spatial) regularization (instead of manifold regularization) is a much more appropriate terminology. \n\nAlso, I would not call this approach HarmonicGan. I would call it CRF-GAN or Spatially-Regularized GAN. The computation of harmonic functions is just one way, among many other (potentially better) ways to optimize pairwise smoothness terms (including the case of the used smoothness loss). And, by the way, I did not get how the loss in (9) gives a harmonic function. Could you please clarify and give more details? In my understanding, the harmonic solution in [ Zhu and Ghahramani, ICML 2013] comes directly as a solution of the graph Laplacian (and it assumes some labeled points, i.e., a semi-supervised setting). Even, if the solution is correct (which I do not see how), I do not think it is an efficient way to handle pairwise-regularization problems in image processing, particularly when matrix W = [w_{ij}] is dense (which might be the case here, unless you are truncating the Gaussian kernel with some heuristics). In this case, back-propagating the proposed loss would be of quadratic complexity w.r.t the number of image patches. Again, there is a long tradition in optimizing efficiently pairwise regularizers in vision/learning (even in the case of dense affinity matrices), and one very well-known work, which is currently being used a lot in the context imposing CRF structure on the outputs of deep networks, is [Krahenbuhl and Koltun, Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials], NIPS 2011. 
This highly related and widely used inference work for dense pairwise regulation is not cited/discussed neither. The Gaussian filtering ideas of the work of Krahenbuhl and Koltun, which ease optimizing dense pairwise terms (from quadratic to linear) are applicable here (as a Gaussian kernel is used), and are widely used in computer vision, including closely related works imposing spatial regularization losses on the outputs of deep networks, e.g., [Tang et al., On Regularized Losses for Weakly-supervised CNN Segmentation, ECCV 18], among many others. \n \nWhen using feature from pre-training (VGG) in the CRF loss, the comparison with unsupervised CycleGAN is not fair. In Table 2 (Label translation on Cityscapes), CycleGAN outperforms the proposed method in all metrics when only unsupervised histogram features are used, which makes me doubt about the practical value of the proposed regularization in the context of image-translation tasks. Having said that, the histogram-based regularization is helping in the medical-imaging application (Table 1). By the way, the use of histograms (of patches or super-pixels) as unsupervised features in pairwise regularization is not new neither; see for instance [Lin et al.: Scribblesup: Scribble-supervised convolutional networks for semantic segmentation, CVPR 2016]. Also, it might be better to use super-pixels instead of patches. \n\nSo, in summary, the technical contribution is minor, in my opinion (imposing pairwise regularization on the outputs of deep networks has been done in many works, but not for CycleGAN); optimization of the proposed loss as a harmonic function is not clear to me; using VGG in the comparisons with CycleGAN is not fair; and the long history of closely-related spatial regularization terms (e.g., CRFs) in computer vision is completely ignored.\n\nMinor: please use ‘term’ instead of ‘constraint’. These are unconstrained optimization problems and there are no equality or inequality constraints here. \n\n", "Summary: The paper proposes a new smoothness constraint in the original cycle-gan formulation. The cycle-gan formulation minimizes reconstruction error on the input, and there is no criterion other than the adversarial loss function to ensure that it produce a good output (this is in sync with the observations from Gokaslan et al. ECCV'18 and Bansal et al. ECCV'18). A smoothness constraint is defined across random patches in input image and corresponding patches in transformed image. This enables the translation network to preserve edge discontinuities and variation in the output, and leads to better outputs for medical imaging, image to labels task, and horse to zebra and vice versa.\n\nPros: \n\n1. Additional smoothness constraints help in improving the performance over multiple tasks. This constraint is intuitive.\n\n2. Impressive human studies for medical imaging.\n\n3. Improvement in the qualitative results for the shown examples in paper and appendix.\n\nThings not clear from the submission: \n\n1. The paper is lacking in technical details: \n\na. what is the patch-size used for RGB-histogram?\n\nb. what features or conv-layers are used to get the features from VGG (19?) net? \n\nc. other than medical imaging where there isn't a variation in colors of the two domains, it is not clear why RGB-histogram would work?\n\nd. the current formulation can be thought as a variant of perceptual loss from Johnson et al. ECCV'16 (applied for the patches, or including pair of patches). 
In my opinion, implementing via perceptual loss formulation would have made the formulation cleaner and simpler? The authors might want to clarify as how it is different from adding perceptual loss over the pair of patches along with the adversarial loss. One would hope that a perceptual loss would help improve the performance. Also see, Chen and Koltun, ICCV'17.\n\n2. The proposed approach is highly constrained to the settings where structure in input-output does not change. I am not sure how would this approach work if the settings from Gokaslan et al. ECCV'18 were considered (like cats to dogs where the structure changes while going from input to output)? \n\n3. Does the proposed approach also provide temporal smoothness in the output? E.g. Figure-6 shows an example of man on horse being zebrafied. My guess is that input is a small video sequence, and I am wondering if it provides temporal smoothness in the output? The failure on human body makes me wonder that smoothness constraints are helping learn the edge discontinuities. What if the edges of the input (using an edge detection algorithm such as HED from Xie and Tu, ICCV'15) were concatenated to the input and used in formulation? This would be similar in spirit to the formulation of deep cascaded bi-networks from Zhu et al . ECCV'16.", "Thanks for the detailed replies. Looking forward to the revised text.", "Thank you for the great suggestions on improving the writing! See our responses below. We will integrate these clarifications into the actual paper once it's editable.\n\n\nQ1: What is their distance (‘Dist’) function? Is it lower/upper-bounded?\n\nA1: We first normalize the features to [0,1] and then use the L1 distance of normalized features as the Dist function (for both Histogram and CNN features). Therefore the range of the 'Dist' function outputs is lower & upper-bounded within [0,1]. We will mention in the revision. \n\n\nQ2: How does Eq. 9 lead to a ‘harmonic function’? \n\nA2: The definition of a harmonic function is a twice continuously differentiable function f : \\mathbb{R}^n \\rightarrow \\mathbb{R} that satisfies Laplace's equation: \\Delta f = 0. Our definition of harmonic function is consistent with what was defined in (Zhu et al. ICML 2003) where the smoothness term defines a graph Laplacian with the minimal value achieved at \\Delta f = 0 as a harmonic function. In our paper, the smoothness term (Eq. 6, 7, 8) defines a Laplacian \\Delta = D - W, where W is our weight matrix in Eq. 6 and D is a diagonal matrix with D_{i} = \\sum_j w_{ij}. \n\n\nQ3: Have the authors performed any experiments with datasets in larger domains? The largest dataset used contains few thousand images, while much larger datasets are available. Does this mean that their method is not applicable in larger domains? \n\nA3: The datasets we evaluated on (BRATS, Cityscapes and horse/zebra) are all challenging benchmarks that have been commonly used for the task of unpaired image translation (Zhu et al. ICCV 2017). Note image translation performs dense pixel labeling/prediction which normally utilizes much smaller datasets than standard image classification tasks like ImageNet. It is primarily due to the difficulty of obtaining dense pixel-wise labeling for training and evaluation.\n\nOur method works very well on the standard benchmarks and there is no clear bottleneck for HarmonicGAN not to work on larger datasets. It is a good idea to be more ambitious and try to experiment on situations that are more complicated and on larger datasets. 
For example, the MSCOCO dataset for semantic and instance segmentation is becoming increasingly larger. Thanks for the suggestion.\n\n\nQ4: The whole idea is based on manifold learning but there are hardly few sentences for it in the whole manuscript. Even in related work, there is a one sentence reference; elaborating more on it would make it easier to follow the intuition and the claims (even in the appendix). \n\nA4: Thanks for the comment. We cited a number of references for manifold learning as well as the graph-based semi-supervised learning literature, but didn't go into details. We will provide more elaboration in the revision.\n\n\nQ5: What is the graph G suddenly mentioned in a single sentence in page 5?\n\nA5: We introduce the graph on page 5 section 3.1 and elaborate on it on the same page in section 3.3. We introduce smoothness constraints to unpaired image-to-image translation inspired by graph-based semi-supervised learning (Zhu et al. ICML 2003, Zhu 2006). Briefly, the graph is used by the smoothness constraint; its nodes are individual image patches and its edges are similarity computed for a pair of image patches. The smoothness term acts as a graph Laplacian imposed on all pairs of samples. We will clarify this earlier on in the paper.\n\n\nQ6: Are the arrows in Fig. 4 correct? For instance in (a) there are two arrows pointing to generator F, but zero arrows pointing out of it.\n\nA6: Thanks for pointing it out. The arrows in the figure are indeed a bit confusing. In (a) the arrow pointed from F(G(x)) to F should be horizontal flipped. Similarly, in (b) the arrow pointed from F(G(x)) to G should also be horizontally flipped. We will revise the direction of these two arrows.\n\n\nQ7: The way the patches are considered is also not explained. Are they overlapping? How are they considered during training? Dense patch extraction? \n\nA7: Yes, they are dense patches with overlaps. The Histogram/CNN features of patches are densely learned in parallel. In the implementation, the smoothness term is computed from patch pairs randomly selected from all pairs.", "Even though the new loss term seems interesting idea, the authors could improve their text to make it easier for the readers. Few questions from reading it: \n\n* What is their distance (‘Dist’) function? Is it lower/upper-bounded?\n* How does eq. 9 lead to a ‘harmonic function’? \n* Have the authors performed any experiments with datasets in larger domains? The largest dataset used contains few thousand images, while much larger datasets are available. Does this mean that their method is not applicable in larger domains? \n* Some text improvements that the authors might consider: \n - The whole idea is based on manifold learning but there are hardly few sentences for it in the whole manuscript. Even in related work, there is a one sentence reference; elaborating more on it would make it easier to follow the intuition and the claims (even in the appendix). \n - What is the graph G suddenly mentioned in a single sentence in page 5?\n - Are the arrows in Fig. 4 correct? For instance in (a) there are two arrows pointing to generator F, but zero arrows pointing out of it.\n* The way the patches are considered is also not explained. Are they overlapping? How are they considered during training? Dense patch extraction? \n \n" ]
[ 6, -1, -1, -1, -1, -1, -1, -1, 5, 4, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, -1, -1, -1 ]
[ "iclr_2019_S1M6Z2Cctm", "SylYVmZKnm", "r1xN-MahTm", "H1xHSb636X", "Sye1L1Bn37", "H1lGEHeonm", "Hkl52g6nam", "SJeW-lphpQ", "iclr_2019_S1M6Z2Cctm", "iclr_2019_S1M6Z2Cctm", "ryeFogT13Q", "H1l5el1hjX", "iclr_2019_S1M6Z2Cctm" ]
iclr_2019_S1VWjiRcKX
Universal Successor Features Approximators
The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; one of its most common form are universal value function approximators (UVFAs). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (SFs). Our proposed \emph{universal successor features approximators} (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA, its generalisation properties and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person perspective three-dimensional environment.
accepted-poster-papers
This paper addresses an importnant and more realistic setting of multi-task RL where the reward function changes; the approach is elegant, and empirical results are convincing. The paper presents an importnant contribution to the challenging multi-task RL problem.
train
[ "HJllwrS8aQ", "SJe_R952hm", "rJxFXMyF2X" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The goal here is multi-task learning and generalization, assuming that the expected one-step reward for any member of the task family can be written as $\\phi(s,a,s')^T w$. The authors propose universal successor features (USF) $\\psi$s, such that the action-value functions Q can be written as $Q(s,a,w,z)=\\psi(s,a,z)^T w$, generalizing over mutiple tasks each denoted by $w$, and multiple policies each denoted by $z$. Here, $z$ represents the optimal policy induced by a reward specified by $z$ (from the same set as $w$). Using USFs $\\psi$-s, the Q values can be interpolated across policies and tasks. Due to the disentangling of reward and policy generalizations, the training sets for $w$ and $z$ can be independently sampled. The authors further generalize a temporal difference error in these USFs $\\psi$s, using the TD error to learn to approximate the $\\psi$s by a network (USF Approximator i.e USFA). They then test the generalization capabilities of these USFAs on families of a simple task and a DeepMind Lab based task.\n\nI find this paper a good fit for ICLR as the paper significantly advances learning representations for Q values that generalize across policies and tasks.\n\nSome issues to consider:\n1. Given a policy, I would think that the reward function that induces this policy is not unique. This non-uniqueness probably doesn't matter for the USF development, since the policies are restricted to those induced by z-s (from the same set as w-s), but the authors should clarify this point.\n\n2. I suppose there are no convergence guarantees on the $\\psi$-learning?\n\n3. I do believe that this work goes reasonably beyond the Ma et al 2018 paper, and the authors do clarify their advance especially in incorporating generalized policy improvement. However, the authors way of writing makes it appear as if their work only differs in some details. I recommend to remove this unexplanatory sentence:\n\"Although this work is superficially similar to ours, it differs a lot in the details.\"\n\nMinor:\npage 3: last but one line: \"more clear\" --> \"clearer\"\npage 3: \"In contrast\" --> \"By contrast\" -- but this is not a hard rule\n", "This paper proposes new ideas in the context of deep multi-task learning for RL. Ideas seem to me to be a rather small (epsiilon) improvement over the cited works.\n\nThe main problem - to me - with described approach is that the Q* value now lives in a much higher dimensional space, levelling any advantage a subsequent heuristic might give. \n\nStatements as 'Although this work is superficially similar to ours, it differs a lot in the details' makes clear that this work is only of potential interest for a rather small audience, a tenet also supported by the density of presentation. I leave it to the AC to decide on relevance. \n\n", "Paper’s contributions:\nThis paper considers the challenging problem of generalizing well to new RL tasks, based on having learned on a set of previous related RL tasks. It considers tasks that differ only in their reward function (assume the dynamics are identical), and where the reward functions are constrained to be linear combinations over a set of given features. The main approach, Universal Successor Features Approximators (USFAs) is a combination of two recent approaches: Universal Value Function Approximators (UVFAs) and Generalized Policy Improvement (GPI). 
The main claim is that while each of these methods leverages different types of regularity when generalizing to new tasks, USFAs are able to jointly leverage both types (and elegantly have both other methods as special cases).\n\nSummary of evaluation:\nOverall the paper tackles an important problem, and provides careful explanation and reasonably extensive results showing the ability of USFA to leverage structure. I’m on the fence because I really wish the combination of generalization properties could be understood in a more intuitive way. There are some more minor issues, such as lack of complexity analysis and a few notation details, that can be easily fixed.\n\nPros:\n-\tThe problem of generalizing to new tasks in RL is an important open problem.\n-\tThe paper is carefully written and provides clear explanation of most of the methods & results.\n\nCons:\n-\tThe authors are diligent about trying to explain what type of regularities are exploited by each of UVFAs and GPI, and how this can be combined in USFAs. However despite reading these parts carefully, I could not get a really good intuition, either in the methods or in the results, for the nature of the regularities exploited, and how it really differs. Top of p.4 says that GPI generalizes well when the policy \\pi(s) does well on task w’. Can you give a specific MDP where Q is not smooth, but the policy does well?\n-\tThere is no complexity analysis. I would like to know the computational complexity of each of the key steps in Algorithm 1 (with comparison to simple UVFA and GPI).\n-\tIt would be useful to see the empirical comparison with the approach of Ma et al. (2018), which also combines SFs and UFVAs. I understand there are differences in the details, but I would like to see confirmation of whether the claims about USFA’s superior ability to exploit structure is supported by results.\n\nMinor comments:\n-\tThe limitation to linear rewards is a reasonably strong assumption. It would be good to support this, e.g. by references to domain that meet this assumption.\n-\tIt seems the mathematical properties in Sec.3.1 could be further developed.\n-\tP.4: “Given a deterministic policy \\pi, one can easily define a reward function r_\\pi”. I did not think this mapping was unique (see the literature on IRL, e.g. Ross et al.). Can you provide a proof or reference to support this statement?\n-\tThe definition of Q(s,a,w,z) is interesting. Can this be seen as a kernel between w and z?\n-\t\\theta suddenly shows up in Algorithm 1. I presume these are the parameters of Q? Should be defined.\n-\tThe distribution used to sample policies seems to be a key step of this approach, yet not much guidance is given on how to do this in general.\n" ]
[ 7, 5, 6 ]
[ 3, 2, 4 ]
[ "iclr_2019_S1VWjiRcKX", "iclr_2019_S1VWjiRcKX", "iclr_2019_S1VWjiRcKX" ]
iclr_2019_S1eK3i09YQ
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For an m hidden node shallow neural network with ReLU activation and n training data, we show as long as m is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first order methods.
accepted-poster-papers
This paper proves that gradient descent with random initialization converges to global minima for a squared loss penalty over a two layer ReLU network and arbitrarily labeled data. The paper has several weakness such as, 1) assuming top layer is fixed, 2) large number of hidden units 'm', 3) analysis is for squared loss. Despite these weaknesses the paper makes a novel contribution to a relatively challenging problem, and is able to show convergence results without strong assumptions on the input data or the model. Reviewers find the results mostly interesting and have some concerns about the \lambda_0 requirement. I believe the authors have sufficiently addressed this issue in their response and I suggest acceptance.
train
[ "HygVLQBmkV", "HJeK5b6f14", "HygOMNNqhQ", "S1es44k-JN", "ryeydjfWJE", "BJeygSVrTm", "SkxPEuIhCQ", "BkgphZviC7", "B1lTu7zsC7", "BylyV4KqRX", "r1gzcd-5aX", "H1llXbvcC7", "rJgDnCeICm", "Skeo1ukt07", "B1l4CQpeAm", "BkxSoWagAm", "rygEFV6l0m", "HylJ7E6xAm", "HJg5_7pe0m", "HJlyYM6gCQ", "rJeQQfalRm", "H1g-pI4Dhm", "SklA8vtVh7", "S1eU0bxSn7", "HklcRVD4hm", "S1xpytvfnX", "Hkgk1YwgnX", "HklKuGwacm", "ryxgnoz357", "SkgYjPMn97", "SklNmL-nq7", "BkeKdmWn9Q", "H1gUINXtqX", "B1gUqYQK5m", "SkxQT2fFcQ", "HygpnikO9X", "r1el9Jqv5X", "SyemVAXv9Q", "SJeNOPOU9Q", "HJei63YVq7" ]
[ "public", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "official_reviewer", "public", "public", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "public", "public", "public", "author", "public", "public", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public", "author", "public" ]
[ "Though assuming w fixed and only randomness of w(0), the random event still depends on w. I believe Lemma 3.2 actually proved that Prob[H(w) eigenvalues are lower bounded]>1 -delta, for any fixed w. But what is used in the latter proof seems to be Prob[for any fixed w, H(w) eigenvalues are lower bounded]>1 -delta.\n\nThink the following simple example. If Z is N(0,1), then E[(Z-1)^2] = 2 and E[(Z+1)^2] = 2. By Markov inequality, Prob[(Z-1)^2 >2/delta]<delta, Prob[(Z+1)^2 >2/delta]<delta, but '(Z-1)^2 >2/delta' and '(Z+1)^2 >2/delta' are certainly different random events.", "Thanks for increasing your score! We will fix the typo in our final version.", "This work considers optimizing a two-layer over-parameterized ReLU network with the squared loss and given a data set with arbitrary labels. It is shown that for a sufficiently large number of hidden neurons (polynomially in number of samples) gradient descent converges to a global minimum with a linear convergence rate. The proof idea is to show that a certain Gram matrix of the data, which depends also on the weights, has a lower bounded minimum eigenvalue throughout the optimization process. Then, it is shown that this property implies convergence of gradient descent.\n\nThis work is very interesting. Proving convergence of gradient descent for over-parameterized networks with ReLU activations and data with arbitrary labels is a major challenge. It is surprising that the authors found a relatively concise proof in the case of two-layer networks. The insight on the connection between the spectral properties of the Gram matrix and convergence of gradient descent is nice and seems to be a very promising technique for future work. One weakness of the result is the extremely large number of hidden neurons that are required to guarantee convergence.\n\nThe paper is clearly written in most parts. The statement of Lemma 3.2 and its application appear to be incorrect as mentioned in the comments. I am convinced by the authors' response and the current proof that it can be fixed by defining an event which is independent of t. Moreover, I think it would be nice to include experiments that corroborate the theoretical findings. Specifically, it would be interesting to see if in practice most of the patterns of ReLUs do not change or if there is some other phenomenon.\n\nAs mentioned in the comments, it would be good to add a discussion on the assumption of non-degeneracy of the H^{infty} matrix and include a proof (or exact reference) which shows under which conditions the minimum eigenvalue is positive.\n\n-------------Revision--------------\n\nI disagree with most of the points that AnonReviewer3 raised (e.g., second layer fixed is not hard, contribution is limited). I do agree that the main weakness is the number of neurons. However, I think that the result is significant nonetheless. I did not change my original score.\n", "This paper seems to be one of the most popular papers in ICLR though. It got a lot of attention on social media as well as in academia. The impact of the paper is definitely huge as it's closely correlated with the popularity. ", "The revised lemma is much clearer than the initial version. \n\nThe proof of Lemma 3.2 only uses the randomness of w_i(0) s, and the result holds for any weight vectors satisfying the distance assumption in the Lemma, including the setting where the weight vectors are random and dependent on w_i(0) s. \n\nTypo: In the first line of Lemma 3.2, 'w_1, ..., w_m' should be 'w_1(0), ..., w_m(0)'. 
\n\nI have adjusted my score accordingly. ", "This paper studies one hidden layer neural networks with square loss, where they show that in over-parameterized setting, random initialization + gradient descent gets to zero loss. The results depend on the property of data matrix, but not the output values.\n\nThe high level idea of the proof is quite different from recent papers, and it would be quite interesting to see how powerful this is for deep neural nets, and whether any insights could help practitioners in the future. \n\nSome discussions regarding the results: \n\nI would suggest the authors to be specific about ‘with high probability’, whether it is 1-c, or 1-n^{-c}. The proof step using Markov’s inequality gives 1-c probability, which is stated as ‘with high probability’. What about other ‘high probability’ statements?\n\nIn the statement of Theorem 3.1 and 4.1, please add ‘i.i.d.’ (independence) for generating w_r s.\n\nThe current statement of Lemma 3.2 is confusing. The authors state that given t, w.h.p. (let’s say 0.9 for now) over initialization, the minimum eigenvalue is lower bounded. This does not imply, for example, that there exists an initialization, such that for 20 different t s, the minimum eigenvalue is lower bounded. The proof uses Markov’s inequality for a single t. Therefore, I am slightly worried about its correctness. I hope the authors could address my concern. \n\nAlso, in the proof of Lemma 3.2, (just to improve the readability,) I would suggest the authors to make it clear that the expectation is taken over the initialization of the weights. \n\nSome typos: \n\n‘converges’ -> ‘converges to’ in the abstract\n‘close’ -> ‘close to’ on page 5\n‘a crucial’ -> ‘a crucial role’ on page 5\nIn the proof of Lemma 3.2, x_0 should be x_i\nwhether using boldface for H_{ij} should be consistent\n'The next lemma shows we show' in page 6\n'Markov inequality' -> ‘Markov’s inequality’\n‘a fixed a neural network architecture’ in page 8\n\nIt is good to see other comments and discussions on this paper. I believe the authors will make a revision and I would be happy to see the new version of the paper and re-evaluate if some of my comments are not correct. \n", "Thanks for your encouraging comments!", "This paper seems to be an interesting and important paper for neural networks theory. It gets rid of the distributional input assumption common in previous works. It also gives a linear convergence rate which could not be made possible solely by landscape analysis. \n\nIn the analysis of this paper, the H^infty matrix appears naturally and seems to reveal a connection between neural networks and kernels. Moreover, I would like to mention that the ideas presented in the current submission have recently been generalized to deal with multi-layer neural networks, which clearly illustrates the potential its proof structure and techniques.", "Thanks for your thorough reading! We will add more discussions on non-differentiability!\n\n1. Proof idea and experiments:\nWe think viewing our proof from a \"noisy\" linear regression perspective is an interesting observation. Indeed, analyzing a hard non-linear problem from a \"linear\" perspective is a common practice in mathematics.\n\nIn our proof, R' < R is a sufficient condition to show most patterns do not change, which we have verified in Figure 1 (b). It is possible that through other types of analysis, one can show most patterns do not change. 
For Figure 1 (c), we just want to verify that as $m$ becomes larger, the maximum distance becomes smaller.\n\n2. Network size.\nWe have discussed this point many times in the response. \nOur current bound requires m = \\Omega(n^6). In this paper, to present the cleanest proof, we only use the simplest concentration inequalities (Hoeffding and Markov). We do not think this bound is tight, and we believe using more advanced techniques from probability theory, this bound can be tightened. \n\n3. Dependency on lambda_0.\nFirst of all, your example is not valid in our setting. If x=0, y=1, it is not possible that ReLU(w*x) can achieve zero training error. \nFurthermore, it is easy to prove linear convergence for your example because we can just study the Gram matrix defined over other data points, which has a positive lambda_0. \nWe will add a remark about this in our final version. Thanks for pointing out.\n\n", "Thanks for your clarifications and we are happy to address your concerns.\n\n1. On H^{\\infty} and \\lambda_0. \nWe have discussed this in length in our Response to Common Questions and Summary of Revisions. In short, because Equation (7) is an equality, at least in the large $m$ regime, $H^{\\infty}$ determines the whole optimization dynamics, and as a consequence, $\\lambda_0$ is the correct complexity measure. See more discussions in Remark 3.1. \n\nWe are not hiding the difficulty because we have identified the correct complexity measure. We believe it is indeed an interesting problem about how the spectrum of $H^{\\infty}$ is related to other assumptions on the training data. We will list this problem in the Discussion section in our final version.\n\nAs a side note, before this paper, even if we allow $m$ to be exponential in $n$, there is no analysis showing that randomly initialized gradient descent can achieve zero training loss.\n\n2. On discrete time analysis.\nOur discrete time analysis follows closely to the continuous time analysis. Note we analyze $u(k+1) – u(k)$ which is analog to $du/dt$. Furthermore, in the equation in the middle of page 9, in the third equality, we decompose the loss at the (k+1)-th iteration into several terms. Note the second term just corresponds to $d(\\|y-u(t)\\|_2^2)/dt$ in the proof of Lemma 3.3 and the other terms are perturbations terms due to discretization. We will make the connection between continuous time analysis and discrete time analysis clearer in our final version. Thanks for pointing out!\n", "Additional Review\n\nThis paper did NOT handle the non-differentiability and non-linearity very well. We can see this from the following three perspectives:\n\n1. Proof idea: the proof of this paper is noisy version of the convergence analysis of a simple convex problem --it treats the contribution of the non-linearity and non-differentiability as bounded noise.\n2. The network size is of order n^6.\n3. Network size requirement is dependent on \\lambda_0. \n\n1.Proof idea: The proof is essentially a noisy version of the convergence analysis of a linear regression problem provided in Appendix (at the end of this updated review). The only difference between linear regression and the problem in this paper is the changing patterns due to the non-linearity of ReLU. However, this paper views the changing patterns as noises compared to those unchanging patterns (e.g., S_i v.s. S_i^\\perpendicular). 
The key trick is that if the actual trajectory radius (i.e.,the largest deviation from the initial point) R’ is much smaller than the desired trajectory radius R (given by a formula), then along the trajectory, the contribution of non-linearity is just O(n^2 R), which is small compared to the contribution of linearity, i.e., -\\lambda_0 (shown in proof on page 9). \n\nFollowing the above analysis, if the experiment shows that R’ is really small compared to R, then the approach of treating non-linearity as noise is fine. However, it is not the case for the problem studied in the experiments (Sec 5, Fig 1). In figure 1, we can easily see that the maximum distance R’ is O(1), which is far larger than R = c*\\lambda_0/n^2 =10^-6 when n=1k. Therefore, the proof idea used in this paper is fundamentally not able to explain the phenomenon shown in the experiment. In fact, to address this issue, authors need to consider significant contribution of non-linearity, instead of just viewing them as noises. \n\n2. The network size is too large. This paper requires O(n^6) neurons, that is 10^18 neurons for n=1000 samples used in the experiment. The theoretical trick to make R’< R is to note that R’ can be bounded by O(1/sqrt{m}) while R is independent of m, thus picking a sufficiently large m can make R’ very small. In a word, the reason that this paper requires so many neurons because of the inability of properly addressing non-linearity. \n\n3. I found the dependence of the network size on the least eigenvalue funny, although the authors claim this tool is elegant. After authors add Thm 3.1 in the revision, I realize that the dependence on \\lambda_0 might come from the fact that authors do NOT handle the issue of non-differentiability. \n\nLet us see a simple example. Assume I have a dataset with \\lambda_0 = 1. Now I am adding one more data point (x=0_d, y=1) to the dataset. After adding this sample, \\lambda_0 clearly becomes 0. It seems I am just adding a constant 1 to the loss function and the gradient descent can also converge to the global min with a linear convergence rate since the constant does NOT contribution to the gradient. However, it seems the proof does NOT work. This is due to the fact that the “gradient” of the non-differentiable points are NOT well defined. Here is a simple example: h(w)=(y-ReLU(w*x))^2, where x= 0, y =1. By the definition provided in this paper (Eq.4), we can easily see that dh/dw = 1 for any w, even if h(w) = 1 for any w. This means that the constant can provide “fake” gradient information and make the maximum distance become infinity, (R’=\\inf). Therefore, the whole proof collapses. In fact, changing the gradient definition from I{z>=0} to I{z>0} does not address the issue and we can see this from this example w=g(w)=Relu(w)-Relu(-w) has a zero gradient at w=0. \n\nIn summary, the problem considered in this paper where the size m=O(n^6), maximum distance R’= O(1/n^2) is too easy compared to most problems in practice where m=\\Theta(n), R’=O(1). To address the latter problem, we need a better definition of subgradient and need to analyze the significant contribution of non-linearity and non-differentiability, instead of just viewing them as noises. 
\n\n=================================Appendix===============================\n\nThe proof basically follows from the convergence analysis of the following linear regression problem (note that u_j is fixed):\n \\min_{w_1,...,w_m}\\sum_{i=1}^{n}(f(x_i;w_1,...,w_m)-y_i)^2 = L(w_1,...,w_m)\nwhere f(x;w_1,...,w_m)=1/\\sqrt{m}\\sum_{j=1}^{m} a_j*(w_j^T x)*1{u_j^T x>=0}\n\nGradient Descent Algorithm:\n-Initialization:\n-For each j=1,...,m: a_j ~ U({-1,1}), u_j~N(0, I)\n-Fix a_1,...,a_m, u_1,...,u_m\n-Update:\n-For t = 1,...,T\n w_j(t+1) = w_j(t) - \\eta* \\nabla_{w_j}L(w_1,...,w_m) for j=1,..., m.\n\nIn this problem, since a_j and u_j are fixed, then model f is just a linear model w.r.t. w_j’s and the above problem is just a simple linear regression problem. Therefore, it is not difficult to prove the linear convergence rate for the gradient descent for the above problem under some mild assumptions. Note that in this paper, u_j(t)= w_j(t) and are not fixed in iterations, i.e., patterns can change. \n\n=========================\nFirst, I apologize to the authors and ACs for the late review, since this paper desearves much more time to judge the quality. \n\nSummary: This paper proves that the gradient descent/flow converges to the global optimum with zero training error under the settings (1) the neural network is a heavily over-parameterized ReLU network (i.e., requiting Omega(n^6) neurons); (2) the algorithm update rule “ignores” the non-differentiable point; (3) the parameters in the output layer (i.e., a_i’s) are fixed; (4) the data set has some non-degenerate properties and comes from a unit ball. The proof relies on the fact that the Gram matrix is always positive definite on the converging trajectory. \n\nPros: The proof is simple and seems to be correct. The paper is paper is written clearly and easy to follow. \n\nCons:\n\nThe problem setting considered in this paper does not seem to be difficult enough. The difficulty of analyzing the landscape property of a ReLU network and proving the global convergence of the gradient descent mainly lies in the following three perspective and this paper does not try to tackle any one of them. \n\nFirst, it is very hard to characterize the landscape or the convergence trajectory at/ near the non-differentiable point and this paper fails to touch it. The parameter space is separated into several regions by the hyperplanes and the loss function is differentiable in the interior of each region and non-differentiable on the boundary. I believe the very first question authors need to answer is wether there are critical points on the boundary and why the sub-gradient descent escapes from any of these points. However, in this paper, authors avoid this problem by defining an update rule used in practice and this rule does not use the sub-gradient at the non-differentiable point. Thus, it is totally unclear to me wether this global convergence result comes from the fact that this update rule can generally avoid the non-differentiable points on the boundary or the fact that the landscape is so nice such that there are no critical points on the boundary or the fact that all points on the convergence trajectory is differentiable only in this unique problem.\n\nSecond, the problem is much easier if the loss is not jointly optimized over the parameters in the first and second layer. 
Having parameters in one layer fixed does not seem to be a big problem at first glance, but then I realize it indeed makes the problem much easier, which can be seen in the following example. If we randomly sample the weight vector w_i from N(0, I) and only optimize over the parameters in the second layer, then it is straightforward to show the following result.\n\nResult: If \\lambda_\\min(H^\\inf)>0 and m=\\Omega(n\\log n), then with high probability, the loss function L is strongly convex with respect to a=(a_1,…, a_m) and the loss function is zero at the global minimum.\n\nThe above result shows that if we fix the parameters in the first layer and only optimize the parameters in the second layer, it is easy to prove the global convergence with a linear convergence rate. In fact, this result does not require the samples coming from a unit ball and the network size is only slightly over-parameterized. Therefore, if we are allowed to fix the parameters in some layer, how are the result presented in this paper fundamentally different from the above result. \n\nAuthors may say that the loss is not convex with respect to the weights in the first layer even if the second layer is fixed. However, when the second layer is fixed, the loss function is smooth and convex in each parameter region and some recent works have shown that in this case, the loss function is a weakly global function. This means that the loss function is similar to a convex function except those plateaus and this further indicates that if the initial point is chosen in a strictly convex basin, the gradient descent is able to converge to a global min. However, the problem becomes far more difficult if the loss is jointly optimized over all parameters in the first and second layer. This can be easily seen since in each parameter region, the loss is no longer a convex function and this may lead to some high order saddle points such that the gradient descent cannot provably escape. Furthermore, the critical points on the boundary can be much more difficult to characterize for this joint optimization problem. \n\n\nThird, the dataset considered in this paper does not seem to be a fundamental pattern and it seems more like a technical condition required by the proof. It is easy to see that a linearly separable dataset does not necessarily satisfy the conditions that 1) the gram matrix is positive definite and that 2) samples come from the surface of a unit ball. Therefore, I do not understand the reason why we need to analyze this pattern. Clearly, in practice, the data samples is unlikely sampled from a ball surface and it is totally unclear to me why the gram matrix is necessarily positive definite. I understand that some technical assumptions are needed in a theoretical work, but I would like to see more discussions on the dataset, e.g., some necessary conditions on the dataset such that the global convergence is possible.\n\n\nLast, I understand that the over-parameterization assumption is needed. In fact, I expect the network size to be of the order Omega(n*ploylog(n)). I am wondering wether Omega(n^6) is a necessary condition or wether there exists a case such that Theta(n^6) is required. \n\n\nAbove all, I believe this paper is a half-baked paper with some interesting explorations. In summary, it cannot deal with non-differential points, which is considered a major difficulty for analyzing ReLU. In addition, it makes an un-justified assumption on some matrix, it requires too many neurons, and fixed 2nd layer. 
With so many strong assumptions, and compared to related works like [1], Mei et al., Bach and ..., its contribution is rather limited.\n\n[1] https://arxiv.org/abs/1702.05777\n", "Thanks so much for your response. I would like to clarify my concerns.\n\n1. I mean the number of hidden nodes m will depend on \\lambda_0.\n\n2. The current paper fails to give an explicit relationship between \\lambda_0 and n, thus the requirement of m may be meaningless. What if this dependence is exponential? The authors should at least prove that the dependence of \\lambda_0 on n is polynomial under some more natural assumptions on data distribution. If this cannot be proved, does it imply that this eigenvalue lower bound assumption hides the major difficulty of this problem?\n\n3. In the current paper, the authors provide (i) continuous time convergence result (ii) discrete time convergence result (iii) discussion on how the proof method for continuous case can be generalized to deep networks. However, the connection between the continuous time analysis and the discrete time analysis is unclear in the current paper. It seems that the current discrete time analysis is not really a discretization of the continuous time proof, and the proof method looks independent of the continuous time analysis. As a result, it is unclear if the current discrete time analysis can provide enough insight on the training of deep networks, especially since the non-smoothness of ReLU activation function is one of the major difficulties.\n", "Although this paper provides a theoretical guarantee for one hidden layer ReLU based neural networks, the proposed analysis seems very limited, and I’m wondering whether this analysis can give us some insights for analyzing deep networks to get meaningful results.\n\nIn detail, the lower bound assumption of H will introduce a quantity \\lambda_0 into the dependence of m. This quantity can be extremely small in the case of deep networks, which gives us meaningless requirement of the number of hidden nodes. Most part of the current paper discuss about the continuous time analysis. However, this kind of analysis can get rid of the smoothness requirement of the loss function, which is one of the biggest challenges for analyzing ReLU based networks. In addition, the discrete time analysis is based on some loss concentration bounds, which may lead to meaningless results for deep networks. \n\nI think the proposed analysis of the current paper looks very limited.", "Thanks for your comments. However, we disagree with your comments. \n\nFirst, this paper is only about the training error, so we are confused why you talked about sample complexity.\n\nSecond, you wrote, “\\lambda_0 can be extremely small in the case of deep networks”. However, you did not give any concrete evidence about this claim.\n\nThird, we are confused about why using continuous analysis to gain intuition is a wrong approach. Many previous papers used this approach to analyze convex optimization problems and deep learning optimization problems [1,2,3,4].\n\nFourth, you wrote “the discrete analysis based on some loss concentration bounds, which may lead to meaningless results for deep networks.” Again, you did not give any concrete evidence on why for deep networks our analysis will be meaningless, and we are confused about what are the “loss concentration bounds” you are referring to. \n\n[1] Ashia C Wilson, Benjamin Recht, and Michael I Jordan. A Lyapunov analysis of momentum methods in optimization. 
arXiv preprint arXiv:1611.02635, 2016.\n[2] Zhang, J., Mokhtari, A., Sra, S., & Jadbabaie, A. (2018). Direct Runge-Kutta discretization achieves acceleration. arXiv preprint arXiv:1805.00521.\n[3] S Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509, 201\n[4] Simon S Du, Wei Hu, and Jason D Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. arXiv preprint arXiv:1806.00900, 2018.", "We thank for your careful review. \n\nWe have modified our draft according to your suggestions:\n• We changed the statement of Lemma 3.2, and now it is independent of t.\n• We have added more discussions on how to generalize our technique to analyze multiple layers. In the conclusion section, we have described a concrete plan for analysis. \n• For all theorems and lemmas, we have added failure probability and how the amount of over-parameterization depends on this failure probability. \n• We have fixed the typos.\n• We have modified the statement of Theorem 3.1, 4.1 and the proof of Lemma 3.2 according to your suggestions. \n\nRegarding your question on how our insights could help practitioners in the future since we have characterized the convergence rate of gradient descent from the Gram matrix perspective, we believe our insights can inspire practitioners to design faster optimization algorithms from this perspective. \n\nWe kindly ask you to read our revised paper and our response to common questions and re-evaluate your comments. \n\nWe thank the reviewer again and welcome all further comments. \n\n", "Dear reviewers, \n\nWe thank for all your comments. Especially all reviewers agree our proof is simple. Here we address some common questions from reviews and other comments.\n\n1. H^{\\infty} matrix. \nMany comments asked when the least eigenvalue of H^{\\infty} is strictly positive and what is the intuition of H^{\\infty} matrix. We thank Dougal J Sutherland and Olivier Grisel for providing numerical evidence showing that on real datasets, this quantity is indeed strictly positive. \n\na. Theoretically, in our revised version, we give a theorem (c.f. Theorem 3.1) which shows if no two inputs are parallel, then the H^{\\infty} is full rank and thus it has a strictly positive eigenvalue. \n\nb. Here we also want to discuss informally on why we think H^{\\infty} is the fundamental quantity that determines the convergence rate. In Equation (7), the time derivative of the predictions u(t) is EQUAL to -H(t) (y-u(t)), i.e., the dynamics of the predictions is completely determined by H(t). Furthermore, in our analysis, we show if m -> \\infty, H(t) -> H^{\\infty} for all t >0. Therefore, the worst case scenario is that at the beginning y-u(0) is in the span of the eigenvector of H^{\\infty} that corresponds to the least eigenvalue of H^{\\infty}. In this case, y-u(t) will stay in this space and by one-dimensional linear ODE theory, we see that y-u(t) converges to 0 at a rate exp(-\\lambda_0 t). Also, see Remark 3.1.\n\n\n\n2. Why fixing the output layer and only training the first layer? The analysis will be much harder if one trains both the first and the output layer.\nThis is the concern raised by Reviewer 2 and Reviewer 3. 
In our original version, we only analyzed the convergence of gradient optimizing the first layer because we believe this problem already demonstrated the main challenge as many previous works tried to understand the same problem, but none of them has a polynomial time convergence guarantee towards zero training loss. For reviewers’ concern: \n\na. First, we disagree with Reviewer 3 that analyzing the case that only the first layer is trained is a trivial problem. For the same setting, there are many previous attempts to answer this question, but these results often rely upon strong assumptions on the labels and input distributions or do not imply why randomly initialized first order method can achieve zero training loss. Please see the second paragraph on Page 2 and Section 3 for detailed discussions. \n\nb. Second, if we fix the first layer and only train the second layer, the learned function is different from the function learned by fixing the second layer and training the first layer. We have added this point in footnote 3.\n\nc. Lastly, in our revised version, we added a new theorem (c.f. Theorem 3.3) which shows using gradient flow to train both layers jointly, we can still enjoy linear convergence rate towards zero loss. To prove Theorem 3.3, we use the same arguments as we used to prove Theorem 3.1 with slightly more calculations. Therefore, we have shown analyzing the case that both layers are trained is just as hard as analyzing the case where only the first layer is trained. \n\n\n3. Amount of over-parameterization. \nOur current bound requires m = \\Omega(n^6). In this paper, to present the cleanest proof, we only use the simplest concentration inequalities (Hoeffding and Markov). As we discussed in the conclusion section, we do not think this bound is tight, and we believe using more advanced techniques from probability theory, this bound can be tightened. \n\n\n\n4. Lemma 3.2. \nWe are sorry about the confusion in the statement in our original version. We have changed the statement, and the new statement is independent of t. \n\n\n\n5. Extending to more layers. \nWe have added more discussions in the conclusion section on how to extend our analysis to deeper neural networks, including a very concrete plan. In short, for deep neural networks, we can also consider the dynamics of the n predictions, and the dynamics are determined by the summation of H (number of layers) Gram matrices. We conjecture that 1) at the initialization phase as m -> \\infty, the summation converges to a fixed n by n matrix and 2) as m -> \\infty, these matrices do not change by much over iterations. Thus, as long as the least eigenvalue of that fixed matrix is strictly positive and m is large enough, we can still have linear convergence for deep neural networks.\n\n\n\n\nSummary of Revisions:\n1. We add a new theorem (Theorem 3.1) which shows as long as no two inputs are parallel, H^{\\infty} matrix is non-degenerate.\n2. We add a new theorem (Theorem 3.3) on the convergence of gradient flow for jointly training both layers.\n3. We add experimental results to verify our theoretical findings. \n4. More discussions on how to extend our analysis to more layers and why H^{\\infty} is a fundamental quantity.\n", "Thanks, Olivier. \n\nWe have acknowledged your experiments in our response.", "We thank for your suggestion. We have changed \"non-degenerate data\" to \"no parallel inputs\".", "Thank for your long review. Unfortunately, we disagree with most of your comments. 
First, we would like to point out two wrong statements in your review.\n\nFirst, the “result” you claim is wrong. If the first layer is fixed and m = \\Omega(n \\logn), and only a=(a_1,…, a_m) is being optimized, this is a linear regression problem with respect to a=(a_1,…, a_m). Since m > n, this problem has more features than the number of samples, and the covariance matrix (Hessian) is degenerate. There is no way this problem is a strongly convex one.\n\nSecond, you claimed there exists a linearly separable dataset whose corresponding H^{\\infty} is degenerate. However, we are considering a regression problem whereas linearly separable condition is only a favorable condition for classification problems. We don’t understand what does linearly separable mean for regression.\n\nNow regarding your main complaint that the problem is not difficult enough:\n\n1. This is not true at all. Reviewer #1 and Reviewer #2 both explicitly agreed this is a challenging/difficult problem and we have devoted a whole paragraph (second paragraph on page 2) and many sentences in Section 2 to describe the difficulty. \n\n\n2. You complained that we are not analyzing the landscape of this non-differentiable function and we are using the “practically used update rule instead of subgradient.” We don’t understand the point here. Our primary goal is to understand why practically used rule (gradient descent) can achieve zero training loss. We have stated our goal at the beginning of the abstract and the introduction. For the non-differentiability issue, in the revised version we have cited papers and added discussions in the fourth paragraph of Section 2 on recent progress in dealing with non-differentiability. \n\n\n3. You claimed fixing one layer and optimizing the other one is a trivial problem. We agree if one fixes the first layer and optimizes the output layer, then this is trivial because this is a convex problem. However, if one fixes the output layer and optimizes the first layer, the problem is significantly harder. You claimed in this case \n\n“the loss function is a weakly global function. This means that the loss function is similar to a convex function except those plateaus and this further indicates that if the initial point is chosen in a strictly convex basin, the gradient descent is able to converge to a global min. ”\n\nWe kindly ask for a reference and why it can imply the global convergence of gradient descent analyzed in our paper. To our knowledge, none of the previous results implies the global convergence of gradient descent in the setting we are analyzing. We have discussed this point in Section 2. Furthermore, we have never heard of the notion “weakly global function”. \n\n4. You believed that the inputs are generated from a unit sphere is a strong assumption. In our original version, we said making this assumption is only for simplicity. In our revised version, we added more details on this assumption. Please check footnote 7. \n\n5. For your other concerns, we kindly ask you to read our response to common questions. \n\n\nWe thank the reviewer again. We welcome all further comments!\n\n", "We thank for your careful and encouraging review. We believe our revised version has addressed most of your concerns. \n\n1. We have added discussions on the problem of fixing the first layer and only training the output layer in footnote 3. We believe the learned function is different from the function learned by fixing the output layer and only training the first layer. 
We would also like to point out that many previous papers considered the same setting but did not rigorously prove the global convergence of gradient descent. \n\n2. We have added a new theorem (Theorem 3.3) which shows applying gradient flow to optimize all variables still enjoys a linear convergence rate. To prove Theorem 3.3, we use the same arguments as we used to prove Theorem 3.1 with slightly more calculations. Therefore, we have shown analyzing the case that both layers are trained is just as hard as analyzing the case where only the first layer is trained.\n\n3. We have added a new theorem (Theorem 3.1) which shows as long as no two inputs are parallel, H^{\\infty} is non-degenerate. \n\nWe thank the reviewer again. We welcome all further comments! \n", "We thank for your encouraging review. \nWe have modified our paper according to your suggestions:\n•\tWe fixed lemma 3.2.\n•\tWe added a new theorem (Theorem 3.1) showing the non-degeneracy of H^{\\infty} matrix.\n•\tWe also added some experiments to corroborate our theoretical findings. Indeed, most of the patterns of ReLUs do not change. Furthermore, over-parameterization leads to faster convergence rate.\n\nWe thank the reviewer again. We welcome all further comments!\n", "This paper studies convergence of gradient descent on a two-layer fully connected ReLU network with binary output and square loss. The main result is that if the number of hidden units is polynomially large in terms of the number of training samples, then under suitable randomly initialization conditions and given that the output weights are fixed, gradient descent necessarily converge to zero training loss.\n\nPros:\nThe paper is presented clearly enough, but I still urge the authors to carefully check for typos and grammatical mistakes as they revise the paper. As far as I have checked, the proofs are correct. The analysis is quite simple and elegant. This is one thing that I really like about this paper compared to previous work. \n\nCons:\nThe current setting and conditions for the main result to hold are quite a bit limited. If one has polynomially large number of neurons (i.e. on the order of n^6 where n is number of training samples) as stated in the paper, then the weights of the hidden layer can be easily chosen so that the outputs of all training samples become linearly independent in the hidden layer (see e.g. [1] for the construction, which requires only n neurons even with weight sharing) , and thus fixing these weights and optimizing for the output weights would lead directly to a convex problem with the same theoretical guarantee. At this point, it would be good to explain why this paper is focusing on the opposite setting, namely fixing the output weights and learning just the hidden layer weights, because it seems that this just makes the problem become more non-trivial compared to the previous case while yielding almost the same results . Either way, this is not the way how practical neural networks are trained as only a subset of the weights are optimized. Thus it's hard to conclude from here why the commonly used GD w.r.t. all variables converges to zero loss as stated in the abstract.\n\nThe condition on the Gram matrix H_infty in Theorem 3.1 seems to be critical. I would like to see the proof that this condition can be fulfilled under certain conditions on the training data.\n\nIn Lemma 3.1, it seems that \"log^2(n/delta)\" should be \"log(n^2/delta)\"? 
\n\nDespite the above limitations, I think that the analysis in this paper is still interesting (mainly due to its simplicity) from a theoretical perspective. Given the difficulty of the problem, I'm happy to vote for its acceptance.\n\n[1] Optimization landscape and expressivity of deep CNNs", "I found the paper interesting to read (although I did not try to check the mathematical correctness of the results).\n\nOne point could be improved though: several times the text mentions that the main assumption is that \"data is non-degenerate\" without formally defining what is meant by this. The data matrix is not square so the traditional definition of non-degeneracy does not apply here.\n\nWhen reading the theorems, I believe that the informal \"non-degenerate data\" assumption of the main text corresponds to the double assumptions that each input vectors has unit norm and more importantly that the H_inf kernel matrix is full-rank (non-degenerate).\n\nIn practice, this full-rank H_inf kernel assumption is typically not met if there exists duplicated samples in the training set (if there are duplicated samples with different labels, it's not possible to have zero training loss for any model).\n\nI just read in your reply (https://openreview.net/forum?id=S1eK3i09YQ&noteId=SJeNOPOU9Q) that you can prove that this assumption is met as soon as there are no two parallel samples in the training set. But I assume that this is not necessarily a problem if the labels of such parallel samples are the same. Furthermore, since you also assume that all x_i have unit norm, a pair of parallel samples is actually a pair of duplicated samples.\n\nSo to conclude I would suggest editing your text to change the \"non-degenerate data\" phrase to something more specific (such as \"record-wise normed data without duplicated records\" or alternatively \"non-degenerate extended feature matrix\") so as to avoid any confusion.", "Thanks for your reply, but sorry, I couldn't see how it helps to answer my question. Looking forward to the revision though.", "Interesting numerical study. I did not know about the analytical relationship between H and the data Gram matrix. I did a more brute-force numerical study of H on a non-random toy dataset (8x8-pixels gray level digits, d=64, n~=1797) and found lambda_0 > 1.3e-2 which is in line with your random data study:\n\nhttps://gist.github.com/ogrisel/1b430b2bf1e83173f6061676c62b9f18", "Thanks for your question.\n\nWe proved that with high probability over initialization, for any weight matrix $W(t)$ that satisfies $w_r(t)$ is close to $w_r(0)$ for all $r \\in [m]$, the induced Gram matrix $H(t)$ has lower bounded eigenvalue. Here $t$ is just an index relating the weight matrix and the induced Gram matrix. Note there is only one event which is independent of $t$. \n\nWe are sorry about the confusion and we will modify the statement of the lemma to make it more clear in the revised version.\n", "Thanks for the inspiring work. I found something confusing about the probability part though.\n\nDenote B(t) = the event that at time/iteration t, || w_r(t)-w_r(0)||_2\\leq R for all r happens. Denote C(t) = the event that at time/iteration t, the smallest eigenvalue of H(t) is at least \\lambda_0/2 happens\n\nThen Lemma 3.2 states that the conditional probability Prob[C(k)|B(k)] is large( > 1- c) when c ~ R*n^2/\\lambda_0 for a fixed k. However, it is unclear whether C(k)|B(k) implies C(k+1)|B(k+1) from the paper. 
It is possible that Prob[\\cap_k=1^N (C(k)|B(k))] is not high at all, and even could be zero when N approaches infinity.\n\nIn the last few lines of proving the induction hypothesis on page 10, it uses Lemma 3.2 that C(k)|B(k) holds with high probability over initialization. But if we review the WHOLE process of proof by induction, in k=1,2.. till infinity, we assume different events hold (assume C(k)|B(k) when proving case k+1), and their relationships are unclear. Thus the \"with high probability\" statement seems to be not solid to me. No lower bound on Prob[\\cap_k=1^\\infty (C(k)|B(k))] is proved.\n\nI would really appreciate your answer to this!", "I see now; Lemma 3.2 says that the expected number of total changes is small, not zero. Whoops; thanks.", "Thanks for your comments and the numerical study! They are very inspiring!\n\nFor the analysis:\nYour intuition is basically correct. We want to clarify that our current proof cannot show for continuous time gradient flow there is no activation pattern change. What we can show is the number of pattern changes is small and only incur small perturbation on H. See Lemma 3.2 and its proof.\n\nExtension to deep neural networks:\nYes, it would be very interesting to investigate empirically whether there is only a small amount of pattern changes when training deeper models.\n\nOn lambda_0:\nThanks for your numerical study! We agree it would be very interesting to obtain some bounds on lambda_0 under certain distributional assumptions. ", "Thanks for your comments and questions!\n\nAs stated in our paper, the results in the two papers you mentioned do not imply why randomly initialized gradient descent can achieve 0 training loss with arbitrary labels. Furthermore, there are many subtle differences in the assumptions. We will definitely expand our discussions on these two papers in the revised version. \n\n\nDependency on d and n: our bound depends on lambda_0, which is a dataset-dependent quantity. In general, this quantity is related to d,n and the input distribution. \n\n\nOn generalization: in general, population risk bound can be obtained only if there are additional assumptions on the input distribution and labels. It is an interesting direction to extend our analysis to incorporate structures in the input distributions and labels.\n\n\nWhy using uniform random initialization for the second layer:\nThere are two purposes for using this initialization scheme.\nFirst, as already explained by Dougal, $a_r^2 =1$ makes H matrix independent of a_r and in turn, makes our calculation much easier.\nSecond, this initialization makes ||y-u(0)||_2 = O(\\sqrt{n}). If the output layer are all ones, then u(0) is of order \\sqrt{m} which makes ||y-u(0)||_2 be of order \\sqrt{mn}. In this case, R' cannot be smaller than R.", "Thanks for bringing concerns from others! We are happy to answer these concerns. In fact, point 3 and point 4 already resolved some of the issues. \n\n\nTo point 1: Comparison with the universal approximation theorem.\nResponse: The universal approximation theorem only establishes that there exists a wide neural network that can approximate continuous function on compact subsets of $R^d$. It does not imply a wide neural network trained by randomly initialized gradient descent has the same approximation property.\nWe will add more discussions on universal approximation theorem in the revised version.\n\nTo point 2: Is this a convex problem?\nResponse: because of the use of ReLU activation, this is not a convex problem. 
The l2 loss is convex with respect to the predictions but is not convex with respect to the parameters we are optimizing.\n\nTo point 5,6: Degeneracy of the Gram matrix:\nResponse: This has been addressed in our previous reply.", "Yes that's the correct formula. Thanks!", "Not an author and haven't super-carefully checked the proof, but the derivation of (5), at the start of Proof of Theorem 3.1, assumes that a_r^2 = 1. Otherwise H would contain an a_r^2 term multiplying the indicator; if you used a different distribution for a, then everything to do with H is going to depend on that too. That could make things a lot messier....\n\nBut that doesn't prevent you from choosing a_r as some weighted distribution on +-1. In particular, you could pick all of the a_r = 1. The only place I see that affecting the continuous-time proof is the Markov's inequality bound for ||y - u(0)|| at the end, which uses E[a_r] = 0. But if you had some other high-probability bound on ||y - u(0)||, which you could definitely get just based on the distribution of W, it seems that the rest of the proof carries through with possibly a bigger m. But that can't be right – if all the a_r = 1, f can't output negative values, and nothing else stops any of the y from being negative.... Authors, what am I missing here?", "Thank you Dougal. The assumption on a_r is stated in Theorem 3.1, that a_r \\in {-1, 1} (and hence a_r^2=1). This is a perfectly fine assumption for ReLU given its homogeneity. There is also randomization: a_r ~ Unif({-1, 1}). Somehow the role of randomness of a_r is not transparent in the proof. But I suspect it should be important: suppose that I use a_r = 1 for all r (hence no randomization), and since ReLU is non-negative, with strictly negative labels y, there is no way that the network can find y...\n\nP.S.: I somehow missed the second part of Dougal's reply, which pointed to the same concern.", "We discussed this in our reading group today, and I'd like to relay some of our thoughts to other readers.\n\nThe paper randomly initializes an extremely overparameterized network: m = Omega(n^6 / lambda_0^4), where lambda_0's dependence on n will vary with the dataset, but presumably it decays with n, making the overall rate for m worse than n^6. Then, here's another way to think about the results of the paper; with high probability:\n\n1. There is a global optimum without switching any of the activation patterns, i.e. keeping sign(w_r^T x_i) the same for all i, r. (This isn't directly shown as a separate step in the paper, but it's implied by Theorem 3.1.)\n\n2. Following a continuous-time gradient flow leads you to that global optimum, following a path that \"looks\" strongly convex as you follow it (so you get linear convergence), without ever switching any of the sign(w_r^T x_i), with high probability.\n\n3. Discrete-time gradient descent, for a small enough step size O(lambda_0 / n^2), does basically the same thing. It's allowed to switch some of the activation patterns, but only a few of them, S_i (or maybe S_i^\\perp, depending on if you go by the definition you give or the way you then use it...). Those ones don't affect the loss too much, and we still have convergence.\n\n\nGiven (1), (2) is maybe not super-surprising: the set of W with the same activation patterns is the intersection of m n linear constraints, and within that set, the objective function is a convex QP. 
Probably lambda_min(H(0)) is related to lambda_min of the quadratic term in the QP objective, though I couldn't immediately show that. Of course, this doesn't show a result as strong as (2)/(3) without additionally showing you don't happen to break the constraints in following the gradient flow, and and it's circular anyway in that it's not obvious how to show (1) other than through the proof via (2) given here.\n\n\nThe applicability of this approach to deeper networks, then, rests on how realistic the extreme overparameterization here is. Is it still the case that you can avoid switching too many activation patterns in training a deeper network? It would be interesting to track that empirically while training a practical deep net. If switching activation patterns is indeed rare, then this type of approach might be very fruitful for studying deeper nets. Even if not, though, this is an elegant solution to the 1-layer setting.\n\n\nOut of curiosity, I also tried to check numerically what the dependence of lambda_0 is on n for a uniform distribution of inputs. It seems like lambda_0 is about n^{-2} for d=2, n^{-1/2} for d = 5, and n^{-1/4} for d = 10 - https://gist.github.com/dougalsutherland/cc7d8b6d740c6c07d3c6081cfb42d191 . If that's correct, then in 2d the required m is Omega(n^14) (!) while in 10d it's only Omega(n^7), and presumably in very high dimensions it becomes omega(n^6). It might be interesting to try to actually bound lambda_0 in terms of n and d to see if these simulations are accurate. (It might very well be that lambda_0 has a different rate for very large n, with \"very large\" depending on d; I only ran up to n about 3,000 because I only wanted to run for a few minutes on my desktop.)", "I would like to give a comment on the relation of this paper and certain prior works. The paper by Chizat and Bach proves continuous-time gradient flow can converge to optimal population loss, in the limit of infinite number of neurons, under certain conditions (which include sigmoid activation, and ReLU at a formal level). Mei et al. proves that noisy SGD can optimize to near optimal population loss. In fact, Mei et al. provides a quantitative statement, that the continuous-time flow and the discrete-time one are close already when the number of neurons >> the dimension of the input (i.e. m>>d as in the notation of this paper). As such, these works already suggest that first-order methods can work well on neural nets with a single hidden layer (in terms of population loss), requiring m>>d.\n\nThese two works are briefly mentioned in the paper, but I think it is important to clarify the distinction. The paper, whose analytical approach aligns with many other papers, proves that gradient descent can optimize to optimal empirical loss, for the specific case of ReLU activation. The analysis is nice in its simplicity (and length!), and so I believe many will try to study this type of analysis. The key finding is that when m>>poly(n) (where n is the number of training samples) and when n is large, many things remain close to initialization at all iterations. As such, random initialization works to our advantage.\n\nInterestingly the aforementioned two works require m>>d, whereas here m>>poly(n). There is no contradiction since the former analyzes SGD, and this paper analyzes (full-batch) gradient descent. Yet this difference raises a question of whether there is an analysis to unify the picture. 
There is also a question of generalization performance, which is resolved in the aforementioned two works but not in this paper.\n\nI must admit that I have not verified the proof, so it remains to see whether the analysis is correct.\n\nAs a clarifying question, is it crucial that the output weight is initialized uniformly random? The role of random initialization for the output weight is not transparent at first glance.", "Interesting results! It seems to me that the definition of H^{\\infty}_{ij} in your main theorems could be simplified as (x_i^T x_j) * arccos(- x_i^T x_j) / (2 * pi) -- am I correct?", "Thanks for the author(s) reply.\n\nI've just seen some discussions about this paper on another website and here I wanna seen the official reply from the author(s) w.r.t. the following interesting comments, which is also somewhat the concerns of mine. (I simplily repost those discussions)\n\n1. \"One of the mystery in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks.\"\nHardly a mystery, Cybenko's paper back in 1989 pointed out NN with one hidden layer can approximate any continuous high-dimensional surface without higher degree of smoothness assumption nor being convex, optimization methods like gradient descent is but one of the methods can do the job.\n\n2.\"For an m hidden node shallow neural network with ReLU activation and n training data, we show as long as m is large enough and the data is non-degenerate, randomly initialized gradient descent converges a globally optimal solution with a linear convergence rate for the quadratic loss function.\"\nAnother falsehood, the assumption of surface with positive eigen-values i.e. non-degenerate (in theorem 3.1 and 4.1 for example) implies convexity of the solution landscape. When the data is non-convex, there is no guarantee nor proof that the gradient descent or other more powerful optimization methods can always find the global optimal. Non-convexity problems pose similar challenges like NP-hard problems: solutions stuck in local optimum and there is no way in general to convert locally best solution to global optimal.\n\n3.\"Cybenko's paper back in 1989 pointed out NN with one hidden layer can approximate any continuous high-dimensional surface without higher degree of smoothness assumption nor being convex.\"\nCybenko's paper only says that, for a given continuous function and epsilon, there exists a one-hidden-layer sigmoidal NN with less than epsilon maximum error. It says nothing about the learnability of this NN (nor even the number of neurons in it).\n\n4. \"the assumption of surface with positive eigen-values i.e. non-degenerate (in theorem 3.1 and 4.1 for example) implies convexity of the solution landscape.\"\nThe matrix H∞ is not the \"solution landscape\". It's a function of the data only, not the parameters. It is not the Hessian of the loss function, as you seem to think.\n\n\n5.\"The key assumption is the least eigenvalue of the matrix H∞ is strictly positive. Interestingly, various properties of this H∞ matrix has been thoroughly studied in previous work [Xie et al., 2017, Tsuchida et al., 2017]. In general, unless the data is degenerate, the smallest eigenvalue of H∞ is strictly positive.\"\nFor example, Xie's paper [1] focus most with spherical data, from section 3. 
Problem setting and preliminaries.\n\n6.\"We will focus on a special class of data distributions where the input x ∈ Rd is drawn uniformly from the unit sphere, and assume that |y| ≤ Y . We consider the following hypothesis class.\"\nMoreover, it also stated:\n\"Typically, gradient descent over L(f) is used to learn all the parameters in f, and a solution with small gradient is returned at the end. However, adjusting the bases {wk} leads to a non-convex optimization problem, and there is no theoretical guarantee that gradient descent can find global optima.\"\n\nIt said nothing how common or not a given data set is convex like the current paper claimed. We suspect not, in general. Xie mentioned nothing about such data being degenerate.\n\nNow to Cybenko's paper:\n\"Cybenko's paper only says that, for a given continuous function and epsilon, there exists a one-hidden-layer sigmoidal NN with less than epsilon maximum error. It says nothing about the learnability of this NN (nor even the number of neurons in it).\"\n\nOnce we know the objective function, and expression of functional form, then the number of hidden layer neurons is a matter of engineering as long as we know \"there exists a one-hidden-layer sigmoidal NN with less than epsilon maximum error.\", that's learnability of one hidden-layer NN.\n\nReferences:\n[1]. Xie, Bo, Yingyu Liang, and Le Song. \"Diverse neural network learns true target functions.\" arXiv preprint arXiv:1611.03131 (2016).\n\n ", "We thank for your comments and we are happy to answer your concerns.\n\n1) Adding a linear combination of existing features to the data set leads to a degenerate Gram matrix?\nThis is wrong. Every entry in our Gram matrix is not an inner product between two features, but the result of using a non-linear kernel acting on two features. Please check our definition of the Gram matrix (H^{\\infty}) more carefully (c.f. Theorem 3).\n\nFor data augmentation with a linear combination of other samples, here we provide a counterexample. \nWe have two features (1,0), (0,1) and we add a linear combination (1/\\sqrt{2},1/\\sqrt{2}).\nThe Gram matrix is \n[0.5000 0 0.2652; \n0 0.5000 0.2652; \n0.2652 0.2652 0.5000 ] \nwhich is not degenerate.\nIn general, only if the activation is linear, the Gram matrix becomes degenerate after adding a linear combination of existing features. \n\nIn fact, we can easily prove as long as no two features are parallel, H^\\infty is always non-degenerate. We will add the proof in the revised version.\n\n2) Is this a trivial paper?\nSimplicity is not equivalent to triviality. \n\nOur result is simple: we just prove randomly initialized gradient descent achieves zero training loss for over-parameterized neural networks with a linear convergence rate. However, why randomly initialized first order methods can fit all training data is one of the unsolved open problems in neural network research.\n\nFor the same setting (training two-layer ReLU activated neural networks), there are many previous attempts to answer this question but these results often rely upon strong assumptions on the labels and input distributions or do not imply why randomly initialized first order method can achieve zero training loss. Please see the second paragraph on Page 2 and Section 3 for detailed discussions.\n\nFor technical contributions, we do agree our analysis is simple but we think this is actually an advantage because it will be easier to generalize simple arguments instead of involved ones. 
Our proof does not require heavy calculations and reveals the intrinsic properties of over-parameterized neural networks and random initialization schemes. Please see Analysis Technique Overview paragraph on page 2.\n\nComparing with [2], except that we use the same property that the patterns do not change by much during training, our analysis is completely different from theirs and is significantly simpler and more transparent. We have devoted a whole paragraph in Section 3 discussing the differences with [2].\n\n3) Experiments\nWe would like to emphasize that this is pure theory paper and the theorem we proved (randomly initialized gradient descent achieves zero training loss) is a well known experimental fact in training neural networks. Nevertheless, we are happy to provide some experimental results in the revised version.", "The analysis in this paper seems technically sound. However, I have questions w.r.t. this paper: is there any experimental result to support the analysis in this paper? The results are quite simple, and I wish the author(s) could add some experimental validations, even a toy one, to support the theoretical results.\n\nBesides, the assumption on the least eigenvalue of the Gram matrix seems somewhat unreasonable, because if we use some data augmentation tricks, such as mix-up [1] (i.e. if there is a training sample that is the linear combination of other samples), the assumption apparently does not hold in this case in the sense that the least eigenvalue of this gram matrix will become zero. However, the adding of one more data seems have little influence on the training procedure.\n\nAnother concern is that the analysis and conclusion in this paper is somewhat trivial. There are not much technical contributions in this paper. The technical part follows closely to this work [2].\n\n\n[1] Zhang, Hongyi, et al. \"mixup: Beyond Empirical Risk Minimization.\" (2018).\n[2] Li, Yuanzhi, and Yingyu Liang. \"Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data.\" arXiv preprint arXiv:1808.01204 (2018)." ]
[ -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ryeydjfWJE", "ryeydjfWJE", "iclr_2019_S1eK3i09YQ", "rJgDnCeICm", "B1l4CQpeAm", "iclr_2019_S1eK3i09YQ", "BkgphZviC7", "iclr_2019_S1eK3i09YQ", "r1gzcd-5aX", "H1llXbvcC7", "iclr_2019_S1eK3i09YQ", "Skeo1ukt07", "iclr_2019_S1eK3i09YQ", "rJgDnCeICm", "BJeygSVrTm", "iclr_2019_S1eK3i09YQ", "HklcRVD4hm", "SklA8vtVh7", "r1gzcd-5aX", "H1g-pI4Dhm", "HygOMNNqhQ", "iclr_2019_S1eK3i09YQ", "iclr_2019_S1eK3i09YQ", "S1xpytvfnX", "SkxQT2fFcQ", "Hkgk1YwgnX", "iclr_2019_S1eK3i09YQ", "ryxgnoz357", "SkxQT2fFcQ", "HygpnikO9X", "SyemVAXv9Q", "r1el9Jqv5X", "HygpnikO9X", "H1gUINXtqX", "iclr_2019_S1eK3i09YQ", "iclr_2019_S1eK3i09YQ", "iclr_2019_S1eK3i09YQ", "SJeNOPOU9Q", "HJei63YVq7", "iclr_2019_S1eK3i09YQ" ]
iclr_2019_S1eOHo09KX
Opportunistic Learning: Budgeted Cost-Sensitive Learning from Data Streams
In many real-world learning scenarios, features are only acquirable at a cost constrained under a budget. In this paper, we propose a novel approach for cost-sensitive feature acquisition at the prediction-time. The suggested method acquires features incrementally based on a context-aware feature-value function. We formulate the problem in the reinforcement learning paradigm, and introduce a reward function based on the utility of each feature. Specifically, MC dropout sampling is used to measure expected variations of the model uncertainty which is used as a feature-value function. Furthermore, we suggest sharing representations between the class predictor and value function estimator networks. The suggested approach is completely online and is readily applicable to stream learning setups. The solution is evaluated on three different datasets including the well-known MNIST dataset as a benchmark as well as two cost-sensitive datasets: Yahoo Learning to Rank and a dataset in the medical domain for diabetes classification. According to the results, the proposed method is able to efficiently acquire features and make accurate predictions.
accepted-poster-papers
This paper presents a reinforcement learning approach for online cost-aware feature acquisition. The utility of each feature is measured in terms of expected variations of the model uncertainty (using MC dropout sampling as an estimate of certainty) which is subsequently used as a reward function in the reinforcement learning formulation. The empirical evaluations show improvements over prior approaches in terms of accuracy-cost trade-off on three datasets. AC can confirm that all three reviewers have read the author responses and have significantly contributed to the revision of the manuscript. Initially, R1 and R2 raised important concerns regarding low technical novelty. R1 requested an ablation study to understand which of the following components gives the most improvement: 1) using proper certainty estimation; 2) using immediate reward; 3) new policy architecture. Pleased to report that the authors addressed the ablation study in their rebuttal and confirmed that MC-dropout certainty plays a crucial rule in the performance of the proposed method. R1 subsequently increased the assigned score to 6. R2 raised concerns about related prior work Contardo et al 2016, which similarly evaluates the most informative features given budget constraints with a recurrent neural network approach. After a long discussion and a detailed rebuttal, R2 upgraded the rating from below the threshold to 7, albeit acknowledging an incremental technical contribution. R3 raised important concerns regarding presentation clarity that were subsequently addressed by the authors. In conclusion, all three reviewers were convinced by the authors rebuttal and have upgraded their initial rating, and AC recommends acceptance of this paper – congratulations to the authors!
train
[ "H1g0t0mMnQ", "rygaWIL0h7", "SkeepuOYC7", "Hkeo37zYAm", "HJeqWMc_0Q", "rkejR-qOCm", "HJgr7KZahQ", "H1g_9mF_07", "SJlpP-Kd0m", "BJg5AgFOC7", "HyeK0TddCm", "BJxJxn2l0Q", "HkgGRh3gAQ", "rJelzRnl0X", "HyeFo62e07", "HklsxJ6e07", "rJexCC2xR7" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper presents a novel method for budgeted cost sensitive learning from Data Streams.\nThis paper seems very similar to the work of Contrado’s RADIN algorithm which similarly evaluates sequential datapoints with a recurrent neural network by adaptively “purchasing” the most valuable features for the current datapoint under evaluation according to a budget. \n\nIn this process, a sample (S_i) with up to “d” features arrives for evaluation. A partially revealed feature vector x_i arrives at time “t” for consideration. There seems to exist a set of “known features” that that are revealed “for free” before the budget is considered (Algorithm 1). Then while either the budget is not exhausted or some other stopping condition is met features are sequentially revealed either randomly (an explore option with a decaying rate of probability) or according to their cost sensitive utility. When the stopping condition is reached, a prediction is made. After a prediction is made, a random mini-batch of the partially revealed features is pushed into replay memory along with the correct class label and the P. Q, and target Q networks are updated.\n\nThe ideas of using a sequentially revealed vector of features and sequentially training a network are in Contrado’s RADIN paper. The main novelty of the paper seems to be the use of MC dropout as an estimate of certainty in place of the softmax output layer and the methods of updating the P and Q networks.\nThe value of this paper is in the idea that we can learn online and in a cost sensitive way. The most compelling example of this is the idea that a patient shows up at time “t” and we would like to make a prediction of disease in a cost sensitive way. To this end I would have liked to have seen a chart on how well this algorithm performs across time/history. How well does the algorithm perform on the first 100 patients vs the last 91,962-91,062 patients at what point would it make sense to start to use the algorithm (how much history is needed).\n\nAm I correct in assuming there are some base features that are revealed “for free” for all samples? If so how are these chosen? If so how does the number of these impact the results? \n\nIn Contrado’s RADIN paper the authors explore both the MNIST dataset and others, including a medical dataset “cardio.” Why did you only use RADIN as a comparison for the MNIST dataset and not the LTRC or diabetes dataset? Did you actually re-implement RADIN or just take the numbers from their paper? In which case, are you certain which MNIST set was used in this paper? (it was not as well specified as in your paper).\n\nWith respect to the real world validity of the paper, given that the primary value of the paper has to do with cost sensitive online learning, it would have been better to talk more about the various cost structure and how those impact the value of your algorithm. For the first example, MNIST, the assumed uniform cost structure is a toy example that equates feature acquisition with cost. The second example uses computational cost vs relevance gain. This would just me a measure of computational efficiency, in which case all of the computational cost of running the updates to your networks should also be considered as cost. 
With respect to the third proprietary diabetes dataset, the costs are real and relevant, however there discussion of these are given except to say that you had a single person familiar with medical billing create them for you (also the web address you cite is a general address and does not go to the dataset you are using). \n\n In reality, these costs would be bundled. You say you estimate the cost in terms of overall financial burden, patient privacy and patient inconvenience. Usually if you ask the patient to fill out a survey it has multiple questions, so for the same cost you get all the answers. Similarly if you do a blood draw and test for multiple factors the cost to the patient and the hospital are paid for the most part upfront. It is not realistic to say that the cost of asking a patient a questions is 1/20th of the cost of the survey. The first survey question asked would be more likely 90-95% of the cost with each additional question some incremental percentage. To show the value of your work, a better discussion of the cost savings would be appreciated. \n", "The paper presents a RL approach for sequential feature acquisition in a budgeted learning setting, where each feature comes at some cost and the goal is to find a good trade-off between accuracy and cost. Starting with zero feature, the model sequentially acquires new features to update its prediction and stops when the budget is exhausted. The feature selection policy is learned by deep Q-learning. The authors have shown improvements over several prior approaches in terms of accuracy-cost trade-off on three datasets, including a real-world health dataset with real feature costs.\n\nWhile the results are nice, the novelty of this paper is limited. As mentioned in the paper, the RL framework for sequential feature acquisition has been explored multiple times. Compared to prior work, the main novelty in this paper is a reward function based on better calibrated classifier confidence. However, ablations study on the reward function is needed to understand to what extent is this helpful.\n\nI find the model description confusing. \n1. What is the loss function? In particular, how is the P-Network learned? It seems that the model is based on actor-critic algorithms, but this is not clear from the text.\n2. What is the reward function? Only immediate reward is given.\n3. What is the state representation? How do you represent features not acquired yet?\n\nIt is great that the authors have done extensive comparison with prior approaches; however, I find more ablation study needed to understand what made the model works better. There are at least 3 improvements: 1) using proper certainty estimation; 2) using immediate reward; 3) new policy architecture. Right now not clear which one gives the most improvement.\n\nOverall, this paper has done some nice improvement over prior work along similar lines, but novelty is limited and more analysis of the model is needed.", "\n* Comment: \"Ablation study Figure 6: Why is the training curve shown here? I believe the main objective is accuracy-cost trade-off so I was expecting a figure like Figure 5(b). The convergence rates look pretty similar to me anyway (considering the variance).\"\n\nIn Figure. 6, we show the speed of convergence with and without using the suggested representation sharing method. As it can be seen from this figure, the representation sharing would help the faster convergence. 
Here, after the convergence, the accuracy-cost curves would be very similar.\n\nRegarding your concern about the statistical significance of the results, the curves presented in this figure are average of 8 different randomly initialized runs. We believe that the representation sharing idea would help the convergence speed and worthy to be used in implementations of our work.\n\n------------------------------\n* Comment: \"Ablation study Figure 7: Which dataset is this on? It seems that on this dataset a small number of features yields accuracy close to the full-features setting. I wonder if a static feature selection method would do as well.\"\n\nAs noted in the first paragraph of the ablation study, Diabetes dataset was used here. The purpose of this figure is demonstrating the effect of different enforced budgets on the over performance. As it can be seen from this figure, having enforced budget does not have a considerable influence on the exploration and training of the P and Q networks.\n\nRegarding your concern about the easiness of the task, We compared our results with other baselines in Figure 4b. As it can be seen in this figure, many other works that are better than static methods are not able to converge as fast as OL.\n\n---------------------------\n* Comment: \"Also, just noticed, why are the baselines different for different datasets? i.e Figure 2, 3, 4(b) have different baselines.\"\n\nWe tried our best to use all the baselines on all figures; however, there are technical difficulties such as:\n- Exhaustive: this method is computationally very expensive, we could only run it on our smallest dataset.\n- Adapt-GBRT: the loss function and the source code provided by the authors are mostly appropriate for regression tasks or classification tasks with label ordering. \n- RADIN: this method has many hyper-parameters and we found it difficult to reimplement. We decided to report the results as presented in the original paper. \n- Tree-based approaches, such as EarlyExit, Cronus, GreedyMiser, are usually less powerful than the Adapt-GBRT, so we decided to omit these comparisons for the Diabetes dataset to enhance the readability.\n", "Thanks for the revision! I appreciate the authors' efforts addressing reviewers' comments and have updated my score.\n\nAblation study Figure 6:\nWhy is the training curve shown here? I believe the main objective is accuracy-cost trade-off so I was expecting a figure like Figure 5(b). The convergence rates look pretty similar to me anyway (considering the variance).\n\nAblation study Figure 7:\nWhich dataset is this on? It seems that on this dataset a small number of features yields accuracy close to the full-features setting. I wonder if a static feature selection method would do as well.\n\nAlso, just noticed, why are the baselines different for different datasets? i.e Figure 2, 3, 4(b) have different baselines.\n\n\n", "We agree with the reviewer that having efficient exploration algorithms would be crucial in this setting. We believe a more extensive study of different exploration techniques would be a great subject for any future work.", "\n* Comment: \"My concern is not about annealing per se. Rather, the literature of online learning focuses on cumulative regret, which the manuscript does not address. (\"Optimization\" != \"online learning\"). 
I suspect you mean \"online\" as in \"receiving examples one-at-a-time\", but this distinction should be clearer.\"\n\nAs requested by the reviewer, we have added the following clarification to the revised version (the first paragraph of the Introduction):\n\"Here, by online we mean processing samples one at a time as they are being received.\"\n\n--------------------------------------\n* Comment: \"The Achilles' Heel of reinforcement learning is sample complexity, and our (lack of good) exploration algorithms is central to the problem. By turning off the exploration for test you have elided this difficulty (\"in real life, when are you testing?\"). It's not a fatal flaw, because this paper uses the \"reinforcement learning as an optimization algorithm\" and \"online learning as sequential data presentation\" perspectives, but clarifying this would improve the exposition.\"\n\nThank you for pointing this out. We have included a clarification in the revised paper (see the second paragraph of Sec 4):\n\"In the current study, we use reinforcement learning as an optimization algorithm, while processing data in a sequential manner.\"\n\nWe have also added a brief discussion of using random exploration during the prediction (the second paragraph of Sec 4):\n\"However, intuitively we believe in datasets with non-stationary distributions, it may be helpful to use random exploration as it helps to capture concept drift.\"", "I like the approach, however: I consider the paper to be poorly written. The presentation needs to be improved for me to find it acceptable.\n\nIt presents a stream-oriented (aka online) version of the algorithm, but experiments treat the algorithm as an offline training algorithm. This is particularly critical in this area because feature acquisition costs during the \"warm-up\" phase are actual costs, and given the inherent sample complexity challenges of reinforcement learning, I would expect them to be significant in practice. This would be fine if the setup is \"we have a fixed offline set of examples where all features have been acquired (full cost paid) from which we will learn a selector+predictor for test time\".\n\nThe algorithm 1 float greatly helped intelligibility, but I'm left confused. \n * Is this underlying predictor trained simultaneously to the selector? \n * Exposition suggests yes (\"At the same time, learning should take place by updating the model while maintaining the budgets.\"), but algorithm block doesn't make it obvious.\n * Maybe line 21 reference to \"train data\" refers to the underlying predictor.\n * Line 16 pushes a value estimate into the replay buffer based upon the current underlying predictor, but:\n * this value will be stale when we dequeue from the replay buffer if the underlying predictor has changed, and \n * we have enough information stored in the replay buffer to recompute the value estimate using the new predictor, but\n * this is not discussed at all.\n\nAlso, I'm wondering about the annealing schedule for the exploration parameter (this is related to my concern that the\nalgorithm is not really an online algorithm). The experiments are all silent on the \"exploration\" feature acquisition cost. 
Furthermore I'm wondering: when you do the test evaluations, do you set exploration to 0?\n\nI also found the following disturbing: \"It is also worth noting that, as the proposed method is\nincremental, we continued feature acquisition until all features were acquired and reported the average\naccuracy corresponding to each feature acquisition budget.\" Does this mean the underlying predictor was trained on data \nthat it would not have if the budget constraint were strictly enforced?\n", "I had to write my response to your other response first.\n\nIf it doesn't happen for you this time, I recommending focusing more on the train-time behaviour going forward. I suspect you could discover an exploration algorithm that is empirically more effective than epsilon-greedy. Furthermore, as a practitioner I care most about 1) cumulative regret [total acquisition cost including learning], 2) tracking non-stationary environments. #2 includes the availability of new features as well as changes in cost or effectiveness of existing features.", ">> Regarding the reviewer’s concern about annealing, annealing is a standard approach widely used in the literature helping early steps of optimization. \n\nMy concern is not about annealing per se. Rather, the literature of online learning focuses on cumulative regret, which the manuscript does not address. (\"Optimization\" != \"online learning\"). I suspect you mean \"online\" as in \"receiving examples one-at-a-time\", but this distinction should be clearer.\n\n>> Regarding the exploration probability used in our experiments: during the training and validation phase, we use the random exploration mechanism. However, for the comparison of the results with other work in the literature, as they are all offline methods, we decided to not to do the exploration.\n\nThe Achilles' Heel of reinforcement learning is sample complexity, and our (lack of good) exploration algorithms is central to the problem. By turning off the exploration for test you have elided this difficulty (\"in real life, when are you testing?\"). It's not a fatal flaw, because this paper uses the \"reinforcement learning as an optimization algorithm\" and \"online learning as sequential data presentation\" perspectives, but clarifying this would improve the exposition.\n\n>> In order to address the reviewer’s concern, we conducted experiments using different enforced budgets (see Fig. 7). In summary, according to our experiments, the suggested method is able to efficiently operate at different enforced budget constraints.\n\nThis is great. What this tells me is that in the experiments most of the features are not useful, so placing a hard upper limit on feature acquisition during learning is not damaging. \n", "We are glad to hear that you found the revised version satisfactory!\nWe would appreciate if you could reconsider the decision or let us know if you have any additional concern.", "You have effectively incorporated the feedback from the points you quote in this response.", "\nWe thank the reviewers for the constructive comments and suggestions. 
We believe that the suggested revisions enhanced the scientific quality of the manuscript significantly.\n\nThe summary of revisions is as follows:\n\n- We added clarifications to different parts of the paper including: loss functions, algorithm box comments, explanation of the algorithm, implementation details, and so forth.\n\n- A new section is included in the revised version which is dedicated to ablation study of: (i) certainty measurement used in this paper, (ii) the suggested representation sharing method, (iii) the proposed reward function, and (iv) the performance of the proposed method under different enforced budgets (see Section 4.3.1).\n\n- We presented and discussed plots showing the performance of the algorithm during the operation at each episode.\n\n- We added discussions on how to handle bundled features, how to represent missing values, the impacts of the annealing strategy, etc.\n", "\nThank you for reviewing the manuscript and helpful comments. Please find a point-to-point response to your comments in the following.\n\n--------------------------------------------\n*Comment: “1. What is the loss function? In particular, how is the P-Network learned? It seems that the model is based on actor-critic algorithms, but this is not clear from the text.”\n\nThe loss function used for the P-Network is a cross-entropy loss as a typical loss used for classification tasks. For training the Q-Network we use mean squared error (MSE) between the estimated reward and the observed reward values. Please note that the replay buffer is used to sample batches of feature vectors, labels, actions, and reward values required to measure P- and Q-Network losses.\n\nWe added the following clarification to the revised paper (see Sec. 3.2):\n“Cross-entropy and mean squared error (MSE) loss functions were used as the objective functions for the P and Q networks, respectively.”\n\n--------------------------------------------\n*Comment: “2. What is the reward function? Only immediate reward is given.”\n\nThe reward function which is suggested by this paper is presented in Eq. (7). Here, we are using epsilon-greedy explorations and Bellman equations to fit the action-value function. It allows the general formulation of non-immediate and accumulated rewards through a discount factor.\nIntuitively, the suggested reward function in Eq. (7) measures the expected change of model hypothesis corresponding to each feature acquisition action.\n\n--------------------------------------------\n*Comment: “3. What is the state representation? How do you represent features not acquired yet?”\n\nIn this paper, each state is the current feature vector containing values for features that are acquired at that state. \nFrom the first paragraph of Section 3.1:\n“At each point, the current state is defined as the current realization of the feature vector (i.e., $\\bm{x}_i^t$) for a given instance.”\n\nWe use NaN (not a number) values to internally represent the features that are not available. However, the implementation we use replaces the NaN values with zeros during the forward/backward computation. We believe it is an efficient approach compared to using separate mask vectors to represent missing features as it reduces the memory and I/O overheads.\n\nWe included a brief explanation in the revised version (See Sec. 
3.2):\n“Note that, in our implementation, for efficiency reasons, we use NaN (not a number) values to represent features that are not available and impute them with zeros during the forward/backward computation.”\n\n--------------------------------------------\n*Comment: “It is great that the authors have done extensive comparison with prior approaches; however, I find more ablation study needed to understand what made the model works better.”\n\n\nThank you for suggesting this. In the revised version, we added a new subsection (Sec. 4.3.1) to the results section entitled “ablation study”. In summary, it presents an ablation study and comparisons of:\n\n- Using the MC-Dropout certainty versus the uncalibrated softmax estimates. We compared the accuracy of the estimated certainty values achieved as well as the overall impact on the feature acquisition performance (see Fig. 5a and Fig. 5b of the revised version). As it can be seen from these figures, the idea of using MC-Dropout certainty plays a crucial rule in the performance of the proposed method.\n\n- Demonstrating the effectiveness of the suggested representation sharing between the P and Q networks (see Fig. 6) demonstrating that the representation sharing would result in a faster convergence.\n\n- We added an analysis of the suggested method under different enforced budget constraints (see Fig. 7). According to results, the suggested method is able to efficiently operate at different enforced budget constraints.\n\n- Regarding other ablation analysis suggested by the reviewer, we have comparisons of the suggested approach (OL) and a basic reinforcement learning based method (RL-based) in the comparison results presented in Section 4.2. Due to space considerations, in the revised version, we discussed this case in the ablation study section without reiterating the plots and by referring to Fig. 2 and Fig. 4b (see Sec 4.3.1):\n“A comparison between the suggested feature-value function (OL) in this paper with a traditional feature-value function (RL-Based) was presented in Figure 2 and Figure 4b. Here, RL-Based method is using a similar architecture and learning algorithm as the OL, while the reward function is simply the negative of feature costs for acquiring each feature and a positive value for making correct predictions. As it can be seen from the comparison of these approaches, the reward function suggested in this paper results in a more efficient feature acquisition.“ \n", "\nThank you for reviewing the manuscript and helpful comments. Please find a point-to-point response to your comments in the following.\n-----------------------------------------------------------------------------------------------\n* Is this underlying predictor trained simultaneously to the selector?\n* Exposition suggests yes (\"At the same time, learning should take place by updating the model while maintaining the budgets.\"), but algorithm block doesn't make it obvious.\n\nYes, the predictor is trained jointly with the feature-value estimator. In algorithm block, Line 22 is related to updating P and Q networks at the same time on a training batch sampled from the relay memory. 
In order to clarify this, we have added the following comment to the algorithm box in the revised version:\n“update P, Q, and target Q networks using train-batch // Jointly train P & Q”\n\n-----------------------------------------------------------------------------------------------\n* Maybe line 21 reference to \"train data\" refers to the underlying predictor.\n* Line 16 pushes a value estimate into the replay buffer based upon the current underlying predictor, but:\n * this value will be stale when we dequeue from the replay buffer if the underlying predictor has changed, and\n * we have enough information stored in the replay buffer to recompute the value estimate using the new predictor, but\n * this is not discussed at all.\n\nHere, train data refers to a batch of samples from the experience replay memory. Each item in the replay memory is a tuple of: a feature vector before and after the acquisition, the action taken, reward received for that action, and the ground-truth label corresponding to that feature vector (see Line 16 of the algorithm box).\n\nRegarding the reviewer’s concern about issues with having stale predicted values, in this paper, we prevent this by storing the ground-truth label in the replay buffer and by recomputing the predictions before updating the parameters. This way, we always use the most up-to-date results. However, as the ground truth label may not be available during the feature acquisition, in our final implementation, we use a temporary buffer to store experiences without the label and we push them along with the ground truth label as soon as the feature acquisition is finished and the ground-truth label is available. To simplify the presentation of the algorithm, we decided to omit the temporary buffering trick from the algorithm box and assumed that the labels are available. If the reviewer believes that including this in the algorithm would be helpful, we would be glad to include this.\n\nWe have discussed this in the second-to-last paragraph of Section 3.1:\n“It is worth noting that, in Algorithm 1, to simplify the presentation, we assumed that ground-truth labels are available at the beginning of each episode. However, in the actual implementation, we store experiences within an episode in a temporary buffer, excluding the label. At last, after the termination of the feature acquisition procedure, a prediction is being made and upon the availability of label for that sample, the temporary experiences along with the ground-truth label are pushed to the experience replay memory.”\n", "\n*Comment: Also, I'm wondering about the annealing schedule for the exploration parameter (this is related to my concern that the algorithm is not really an online algorithm). The experiments are all silent on the \"exploration\" feature acquisition cost. Furthermore I'm wondering: when you do the test evaluations, do you set exploration to 0?\n\nIn an online learning setup, data becomes available sequentially and the goal for an online learner is to update its hypothesis as more data is being observed. There are two main considerations for an online method. First, data is not provided or can be stored as a batch. Second, the hypothesis should be refined incrementally as more observations take place. \n\nRegarding the reviewer’s concern about annealing, annealing is a standard approach widely used in the literature helping early steps of optimization. 
We believe that the suggested algorithm is online because, initially, there is no viable alternative strategy to follow due to the limited number of samples. However, as we observe more samples, we anneal the random decisions and try to use the captured knowledge instead. In this respect, the suggested algorithm is online according to the definition above.\n\nRegarding the exploration probability used in our experiments: during the training and validation phase, we use the random exploration mechanism. However, for the comparison of the results with other work in the literature, as they are all offline methods, we decided to not to do the exploration.\n\nIn order to address the reviewer’s comment, we added the following explanation to the revised paper:\n“During the training and validation phase, we use the random exploration mechanism. However, for the comparison of the results with other work in the literature, as they are all offline methods, the random exploration is not used during the feature acquisition.”\n\n-----------------------------------------------------------------------------------------------\n*Comment: I also found the following disturbing: \"It is also worth noting that, as the proposed method is incremental, we continued feature acquisition until all features were acquired and reported the average accuracy corresponding to each feature acquisition budget.\" Does this mean the underlying predictor was trained on data that it would not have if the budget constraint were strictly enforced?\n\nIn order to address the reviewer’s concern, we conducted experiments using different enforced budgets (see Fig. 7). In summary, according to our experiments, the suggested method is able to efficiently operate at different enforced budget constraints.\n\nWe have also included the following discussion to the paper:\n“Figure 7 shows the performance of the OL method having various limited budgets during the operation. Here, we report the accuracy-cost curves for 25%, 50%, 75%, and 100% of the budget required to acquire all features. As it can be inferred from this figure, the suggested method is able to efficiently operate at different enforced budget constraints.”\n", "\nThank you for reviewing the manuscript and helpful comments. Please find a point-to-point response to your comments in the following.\n\n-----------------------------------------------------------------------------------------------\n* Comment: “The ideas of using a sequentially revealed vector of features and sequentially training a network are in Contrado’s RADIN paper.”\n\nWe agree with the reviewer that having sequentially revealed vectors is in common between the earlier work by Contrado (RADIN) and the current study (OL). However, we believe that RADIN and OL are significantly different from each other in the idea, architecture, and implementation. Specifically:\n\n- RADIN approaches the problem by looking into the feature acquisition process as a time sequence of acquisitions. However, the suggested method is modeling the utility of actions given the current state regardless of the previous actions. From this perspective, RADIN can be considered a time-series approach, while OL is a reinforcement learning approach using a time-invariant policy.\n\n- RADIN defines a cost function consisting of two terms weighted by a hyper-parameter: a classification loss and a feature acquisition cost. 
However, the introduced method in this paper is using the variations of model uncertainty as a value function of eq (7) being used in making decisions.\n\n- RADIN is using a recurrent neural network (RNN) architecture, while OL is based on reinforcement learning and deep Q learning algorithms.\n\n- The suggested method is designed to operate as an online learning algorithm, while RADIN is not studying this case.\n\n-----------------------------------------------------------------------------------------------\n* Comment: “I would have liked to have seen a chart on how well this algorithm performs across time/history. How well does the algorithm perform on the first 100 patients vs the last 91,962-91,062 patients at what point would it make sense to start to use the algorithm (how much history is needed).”\n\nThank you for suggesting this. We have included a new section to discuss this (see Section 4.3.2 and Fig. 8ab). \n\nWe have also added the following explanation in the results section (see Section 4.3.2):\n“Figure 8a and 8b demonstrate the validation accuracy and AUACC values measured during the processing of the data stream at each episode for the MNIST and Diabetes datasets, respectively. As it can be seen from this figure, as the algorithm observes more data samples, it achieves higher validation accuracy/AUACC values, and it eventually converges after a certain number of episodes. It should be noted that, in general, convergence in reinforcement learning setups is dependent on the training algorithm and parameters used. For instance, the random exploration strategy, the update condition, and the update strategy for the target Q network would influence the overall time behavior of the algorithm. In this paper, we use conservative and reasonable strategies as reported in Section 3.2 that results in stable results across a wide range of experiments.”\n\n-----------------------------------------------------------------------------------------------\n*Comment: “Am I correct in assuming there are some base features that are revealed “for free” for all samples? If so how are these chosen? If so how does the number of these impact the results?”\n\nIn our experiments, we are not assuming any feature will be available for free. However, the formulation presented in this paper accommodates the case where features are available for free. In order to clarify this issue and prevent any confusion to our readers, we added the following explanation to the paper:\n\n“In this algorithm, if any features are available for free we include them in the initial feature vector; otherwise, we start with all features not being available initially.”\nAlso, the algorithm box is revised by adding a comment to Line 4:\n“$x_i^t$ <- known features of S_i // if there are any features available”\n", "\n*Comment: “In Contrado’s RADIN paper the authors explore both the MNIST dataset and others, including a medical dataset “cardio.” Why did you only use RADIN as a comparison for the MNIST dataset and not the LTRC or diabetes dataset? Did you actually re-implement RADIN or just take the numbers from their paper? In which case, are you certain which MNIST set was used in this paper? (it was not as well specified as in your paper).”\n\nWe have compared our results with Contrado’s results as reported on the RADIN paper. The reason behind this was the fact that RADIN is consisting of many components and parameters which makes reproducing their results for our comparisons with RADIN very difficult. 
We would be glad to include comparisons with RADIN on other datasets, if the reviewer could point us to an open source implementation of RADIN.\nRegarding the reviewer’s comment on which samples of the MNIST was used for training/validation/test: we use the standard MNIST separation using the provided train set for our train and validation, and the MNIST test set is used for testing the suggested algorithm.\n\n-----------------------------------------------------------------------------------------------\n*Comment: “With respect to the real world validity of the paper, given that the primary value of the paper has to do with cost sensitive online learning, it would have been better to talk more about the various cost structure and how those impact the value of your algorithm...”\n\nWe agree with the reviewer that it is very important to consider cost structures in real-world scenarios. However, a deep study of any specific cost structure (e.g., in specific healthcare problems) is itself an area of research and any problem would require an in-depth study. In this paper, we introduced a general formulation for the problem for cost-sensitive feature acquisition from stream data that is evaluated on different applications. However, a deeper study of any specific cost structure would require integrating domain expertise and is out of the scope of this study.\n\n-----------------------------------------------------------------------------------------------\n*Comment: “the web address you cite is a general address and does not go to the dataset you are using”\n\nThe web address provided contains links to the dataset download page (the “Questionnaires, Datasets, and Related Documentation” option on the left sidebar). Additionally, we plan to publish the dataset preprocessing source code to help other future work to reproduce and compare our results.\n\n\n-----------------------------------------------------------------------------------------------\n*Comment: “In reality, these costs would be bundled...To show the value of your work, a better discussion of the cost savings would be appreciated.”\n\nThe current formulation presented in this paper allows for having bundled feature sets. In this case, each action would be acquiring a bundle and the reward function is evaluated for the acquisition of this bundle by measuring the variations of uncertainty before and after acquiring the bundle. As suggested by the reviewer, we have added a discussion on this to the revised paper:\n“In our experiments, for the simplicity of presentation, we assume that all features are independently acquirable at a certain cost, while in many scenarios, features are bundled and acquired together (e.g., certain clinical measurements). However, it should be noted that the current formulation presented in this paper allows for having bundled feature sets. In this case, each action would be acquiring each bundle and the reward function is evaluated for the acquisition of the bundle by measuring the variations of uncertainty before and after acquiring the bundle.”\n" ]
[ 7, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_S1eOHo09KX", "iclr_2019_S1eOHo09KX", "Hkeo37zYAm", "HkgGRh3gAQ", "H1g_9mF_07", "SJlpP-Kd0m", "iclr_2019_S1eOHo09KX", "BJg5AgFOC7", "HyeFo62e07", "HyeK0TddCm", "rJelzRnl0X", "iclr_2019_S1eOHo09KX", "rygaWIL0h7", "HJgr7KZahQ", "HJgr7KZahQ", "H1g0t0mMnQ", "H1g0t0mMnQ" ]
iclr_2019_S1eYHoC5FX
DARTS: Differentiable Architecture Search
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
accepted-poster-papers
This paper introduces a very simple but effective method for the neural architecture search problem. The key idea of the method is a particular continuous relaxation of the architecture representation to enable gradient descent-like differentiable optimization. Results are quite good. Source code is also available. A concern of the approach is the (possibly large) integrality gap between the continuous solution and the discretized architecture. The solution provided in the paper is a heuristic without guarantees. Overall, this is a good paper. I recommend acceptance.
train
[ "HklMs3OuxE", "H1l5BS-ueV", "Syed9oKDgV", "SJesF7dIg4", "HklGrBN8gE", "B1l8_f9Qx4", "H1eNn09uAm", "HyxF8gidA7", "H1gA2JsOAQ", "rkgEU1jdCX", "r1lmv05d0Q", "Byea04oQ0Q", "S1gIGlGbT7", "rJeh6xB5nQ", "r1ekErZ53Q", "HJg9ETOFn7", "HyeSHfVcnQ", "B1eNZMVcnX", "SkxmNjGq27", "HJek5PHFh7", "r1e0Zyrt3X", "r1lYDFO_37", "rkx7zlubn7", "SkgH5LvMo7", "rkxzCbNziX", "B1xH2iLk2m", "Syla8iL1hQ", "S1l9xl2Rim", "ryef0ciRjQ", "r1lW2xUCiX", "Hkxeq34sjQ", "r1x3m94ssX", "Bylnj2t5j7", "ryxFEjq_sQ", "rklh2Hfvim", "BkllDdQUo7", "H1gSF654im", "H1gHvB8Wi7", "S1eKIEw55Q", "HkxGFGpfcX", "BkeB4E5f97", "Hkxx5G9f57", "BygL-vufqQ" ]
[ "public", "author", "public", "author", "public", "public", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "author", "public", "public", "author", "public", "author", "author", "public", "public", "author", "public", "public", "author", "public", "public", "author", "public", "author", "public", "author", "author", "public", "public" ]
[ "At least, you should provide experimental results without the wired strategy. I think it is a big problem for the literature, it will make the future NAS work confuses on whether to use your \"strategy\".", "Dear Reviewers,\n\nIn response to the negative anonymous comments that we have received, we would like to reiterate that our claims are valid, and the publicly available implementation is correct. Throughout the reviewing process, we have done our best to address all questions we have received, and we will strive to continue improving the paper.", "1. \"This is not a bug, but a strategy to reduce the memory consumption\": \n a)The size of Ws is only 300x300, it is not a big matrix that can cause OOM error on GPU. Why do you need to reduce the memory?\n b) I have checked the implementation of ENAS (https://github.com/melodyguan/enas). For the RNN search, ENAS uses different W_{i,j} for different previous node j of node i. In ENAS, it even uses different W for different activations. That is to say, for node i, and there are 4 activations (Relu, Tanh, Sigmoid, Identity), there are (i-1)x4 connection weights for node i, as there are i-1 previous nodes and 4 activation functions. The implementation of ENAS makes sense. Since the weights should not be shared by different activations. So do the previous nodes. It is very very wired that different inputs use the same weights. If they are shared, there are no difference for the connection for different previous nodes, except the node itself. \n c) If the implementation of ENAS do not OOM (I tested ENAS code and it works), why do you use this wired strategy?\n\n2. \" It has been mentioned in sect. A.1.2\"\n In this section, you mention: \"The linear transformation parameters across all candidate operations on the SAME EDGE are shared\". What does the \"same edge\" mean? I think the connection between node i and node j is an edge, and between node i and node k is an different edge (see figure 1). Edge (i-j) and Edge(i-k) are not the same edge, right? So in Sect. A. 1.2, I think the sentence means different ACTIVATION FUNCTIONS use the same connection matrix, and it does NOT mean any connection to node i uses the same weights. Hence, it is misleading.\n\n\n ", "This is not a bug, but a strategy to reduce the memory consumption when (1) parameters within all incoming ops are of the same shape and (2) we know that for each node only one of its predecessors will be retained (as in the case of RNNs) and the algorithm always has the option to zero out the others. It has been mentioned in sect. A.1.2, and we will explain it in more detail in the next revision.\n\n> \"In the code, there are only N connection weight Ws\"\nLike ENAS, each node in our derived recurrent cell has only a single predecessor, hence there should be N ops (W's) in total.", "Hi,\n\nThe work is quite interesting! After reading the paper carefully, I read the code provided by the authors on Github (https://github.com/quark0/darts). However, I found that the code for RNN searching is WRONG!\n\nIn ENAS (https://arxiv.org/abs/1802.03268), the paper mentioned: \"In the example above, we note that for each pair of nodes j < ℓ, there is an independent parameter matrix W^(h)_{ℓ,j} . As shown in the example, by choosing the previous indices,\nthe controller also decides which parameter matrices are used. 
Therefore, in ENAS, all recurrent cells in a search space share the same set of parameters.\" That means if there are N nodes, there should be (1+2+...+N-1) weights, one for each pair of nodes j<ℓ. This setting is reasonable: if node ℓ is connected to different nodes, the connection matrices should be different. \n\nHOWEVER, I find that DARTS uses the SAME Weights if node ℓ is connected to different nodes!! In the code, there are only N connection weight Ws (https://github.com/quark0/darts/blob/master/rnn/model.py#L26), and the correct number of Ws should be (1+2+...+N-1). This error is also confirmed by https://github.com/quark0/darts/blob/master/rnn/model_search.py#L28, where all the previous nodes j of node i share the same Ws[i], not Ws[i][j]. Using the SAME Weights is really weird and does not make sense! But the authors did not mention this point at all! \n\nI think this is a bug in the code and the authors did not notice it. So the results are not convincing! I hope the authors will fix it and redo the experiments ASAP!", "One relevant NeurIPS paper this year, which shares the same high-level idea as this work and searches architectures in a continuous and differentiable space, is missing.\n\nNeural Architecture Optimization\nhttps://nips.cc/Conferences/2018/Schedule?showEvent=11750 \n", "Thank you for the feedback.\n\n> “It seems that the justification of equations (3) and (4) is not immediately obvious”\nIn this work we treat \\alpha as a high-dimensional hyperparameter. The bilevel formulation offers a mathematical characterization of the standard hyperparameter tuning procedure, namely to find the hyperparameter \\alpha that leads to the best validation performance (eq. (4)) after the regular parameters w are trained until convergence on the training set (eq. (3)) given \\alpha.\n\n> \"it is not that clear why iterating between test and validation set is the right thing to do\"\nUsing two separate data splits for \\alpha and w, as in the bilevel formulation, should effectively prevent the hyperparameter/architecture from overfitting the training data. The advantage of doing so has also been empirically verified by our experiments. Please refer to “Alternative Optimization Strategies” in sect. 3.3 of the revised draft.\n\nFrom the algorithmic point of view, each architecture gradient step consists of two subroutines:\n(i) Obtaining w^*(\\alpha), namely weights trained until convergence for the given architecture, by solving the inner optimization eq (4). This can normally be achieved by taking a large number of gradient descent steps of w wrt the training loss.\n(ii) Descending \\alpha wrt the validation loss defined based on w^*(\\alpha). \nOur iterative algorithm is a truncated version of the above, approximating the optimization procedure in (i) using only a single gradient step.\n\n> “I think architecture pruning literature is relevant too”\nYes, network pruning and (differentiable) architecture search are related despite somewhat different goals. The former aims to learn fine-grained sparsity patterns (e.g. which neurons or channels should be kept) that best approximate a given unpruned network. The latter aims to learn macro-level sparsity patterns that represent an architecture.", "Thank you for the feedback.\n\n> Regarding the initialization of \\alpha\nWe use zero initialization, which implies an equal amount of attention (after taking the softmax) over all possible ops. At the early stage this ensures that weights in every candidate op receive sufficient learning signal (more exploration). 
This detail has been added to the revised draft.\n\n> “I think (5) is misleading as it is because of k-1.”\nThank you for the suggestion. This has been fixed in the revised sect. 2.3.", "> “how did you choose the hyperparameters of DARTS” (Q8)\nWhile Adam with a small learning rate (3e-4) and the default first momentum 0.9 works well for recurrent cells, the same setup leads to slow progress for conv cells (\\alpha would remain near-uniform in 50 epochs). We thus (1) increased the learning rate by an order of magnitude to 3e-3 and (2) lowered the momentum from 0.9 to 0.5 to alleviate instability due to the increased learning rate.\n\nTo better understand the effect of different momentums, we have now repeated our CIFAR-10 experiments using momentum 0.9 instead. The newly obtained cells achieve 2.89% test error with 3.5M params (1st order) and 2.91% with 3.3M params (2nd order). These are comparable with our previous results based on momentum 0.5. \n\n> “I am wondering whether the authors have a reply to this” (Q9)\nIn DARTS we use a deterministic architecture encoding, where \\alpha is a continuous variable with well-defined gradients. While being conceptually simple, the method may suffer from bias due the discrepancy between \\alpha and the derived discrete architecture.\n\nThe key idea of SNAS is to replace the deterministic encoding in DARTS with a stochastic one. This modification makes architecture derivation more straightforward as \\alpha is now a discrete random variable by definition. Unlike DARTS, gradients wrt (the distribution of) \\alpha are no longer well-defined, hence Gumbel-softmax estimator is used to enable a differentiable optimization procedure. As a result, the estimated gradients are biased as long as the temperature is not zero.\n\nAs far as the empirical results are concerned, the two methods perform similarly on CIFAR-10, though the DARTS cell transfers slightly better to ImageNet. The ability of DARTS to learn the architectures of recurrent cells has also been empirically verified by its strong performance for language modeling (Table 2), whereas that of SNAS requires future investigation.\n\n> “A derivation, or at least a clearer motivation for the algorithm would be useful.” (2nd part of Q9)\nPlease refer to our response to AnonReviewer1 and our revised sect. 2.3.", "Thank you for the detailed comments and questions. We have fixed the missing references (Q2) and presentational issues (Q4, Q10) in the revision. Below we focus on the major points:\n\n> Regarding discretization schemes (Q1)\nThe current discretization scheme can be viewed as a heuristic to minimize the per-node rounding error, as described in the revised sect. 2.4. While refining this part was not our primary focus, it indeed deserves further study. We have also added a remark in the draft to make readers aware of this potential limitation. \n\nTo reduce the rounding error, in our preliminary experiments we tried annealing the softmax temperature to enforce one-hot selection, but did not observe clear differences in terms of the quality of the derived cells. Note that a large rounding error does not necessarily imply poor performance, since the current discretization mechanism only depends on the ranking among the strengths of the incoming edges.\n\n> “since ENAS is 8 times faster one could even run it 8 times” (Q3)\nWe agree it would be informative to compare DARTS and ENAS given the same search cost (e.g., 4 GPU days). 
Following your suggestion, we repeated the search process of ENAS for 8 times on CIFAR-10 using the authors' implementation and their best setup. We then used the same selection protocol as for DARTS by training the candidate cells for 100 epochs using half of the CIFAR-10 training data to get the validation performance on the other half. The best ENAS cell out of 8 runs achieves 2.91% test error using 4.2M params in the final evaluation, which is slightly worse than 4 runs of DARTS (2.76% error using 3.3M params). These new results have been included in Table 1 of the revised draft.\n\n> “One big question I have is where the hyperparameters come from” (Q5, Q6).\nLet us explain our reasoning for each of these hyperparameters in detail:\n\nFor convolutional cells:\n\nOur setup of #cells (8->20), #epochs (600) and weight for the auxiliary head (0.4) in the final evaluation exactly follows Zoph et al., 2018. The #init_channels is enlarged from 16 to 36 to ensure a comparable model size (~3M) with other baselines. Given those settings, we then use the largest possible batch size (96) for a single GPU. The drop path probability was tuned wrt the validation set among the choices of (0.1, 0.2, 0.3) given the best cell learned by DARTS.\n\nWe treat droppath, auxiliary towers and cutout as additional augmentations only for the final evaluation. Learnable affine parameters in the batch normalisation are disabled during the search phase to avoid arbitrary rescaling of the nodes, as explained in sect A.1.1. They are enabled in the evaluation phase to ensure fair comparison with other baseline networks.\n\nFor recurrent cells:\n\nWe always use the same #units for both embedding and hidden layers, which is enlarged from 300 to 850 in the final evaluation to make our #params (~23M) comparable with other models in the literature. We then use the largest possible batch size (64) to fit our model in a single GPU. The l2 weight decay was tuned on the validation set given the best recurrent cell. We do not trigger ASGD during the search phase for simplicity and also to accommodate our current approximation scheme which does not take into account model averaging (though it can be modified to support it).\n\nBatch normalisation is useful during architecture search to prevent gradient explosion (Sect 3.1.2). Similar to the case of convnets, learnable affine params are disabled to avoid node rescaling, as explained in A.1.1 and A.1.2. Once the cell is learned, batch normalisation layers are omitted in the final evaluation for fair comparison with existing language models which usually do not involve normalisation. Our usage of batch normalisation for RNN architecture search follows ENAS.\n\n> “how the best of the 24 random samples in random search is evaluated” (Q7):\nThe same script is used for cell selection of DARTS and random search. All the hyperparameters, except #epochs, are identical to those in our final evaluation pipeline.", "We thank all reviewers and public commenters for their feedback. The draft has been updated and major changes include:\n+ Fixed some claims, typos and missing references.\n+ Revised sect. 2.3 to better explain the motivation of our algorithm.\n+ Revised sect. 2.4 to make the description of our discretization scheme more intuitive.\n+ Highlighted the selection and evaluation costs on top of Table 1 & 2.\n+ Added results of repeating ENAS for 8 times in Table 1.\n+ Added results of simultaneously optimizing w and \\alpha over the same set instead of two separate data splits in sect. 
3.3.\n+ Changes addressing the public comments.", "Hi authors!\n\nI enjoy your paper with awesome codes.\n\nHere I have one question about FLOPS of DARTS on ImageNet in the mobile setting. \nI have come to the conclusion that FLOPS of DARTS on ImageNet in the mobile setting is 585M/s, which conflicts with 574M/s provided in Table 3 of your paper. \n\nCan the authors clarify this doubt? Thank you.", "Hello, authors!\nTo begin with, I am so impressed by this work because it is both simple and powerful.\nHowever, I am curious about some details on your model evaluation.\n\nAccording to this paper, there are 7 nodes within a cell for both search and evaluation, and 8 cells were used for search and 20 cells were used for evaluation. Here, could you specify derived model for evaluation in terms of the number of initial channels? Evaluated model in DARTS for state-of-the-art comparison for CIFAR10 does not seem to match with its reported number of parameters(2.9M or 3.3M) if the number of initial channels was kept the same with architecture search as 16. I think it should have fewer parameters than 2.9M or 3.3M if the number of initial channels was kept the same. Could you answer this?\n\nAnyway, thanks for this amazing work!", "(Disclaimers: I am not not active in the sub-field, just generally interested in the topic, it is easy however to find this paper in the wild and references to it, so I accidentally found out the name of the authors, but had not heard about them before reviewing this, so I do not think this biased my review).\n\nDARTS, the algorithm described in this paper, is part of the one-shot family of architecture search algorithms. In practice this means training an over-parameterized architecture is, of which the architectures being searched for are sub-graphs. Once this bigger network is trained it is pruned into the desired sub-graph. DARTS has \"indicator\" weights that indicate how active components are during training, and then alternatively trains these weights (using the validation sets), and all other weights (using the training set). Those indicators are then chosen to select the final sub-graph.\n\nMore detailed comments:\n\nIt seems that the justification of equations (3) and (4) is not immediately obvious, in particular, from an abstract point of view, splitting the weights into w, and \\eta to perform the bi-level optimizations appears somewhat arbitrary. It almost looks like optimizing the second over the validation could be interpreted as some form of regularization. 
Is there a stronger motivation than that is similar to more classical model/architecture selection?\n\nThere are some papers that seem to be pretty relevant and are worth looking at and that are not in the references:\n\nhttp://proceedings.mlr.press/v80/bender18a.html \nhttps://openreview.net/forum?id=HylVB3AqYm (under parallel review at ICLR, WARNIGN TO REVIEWERS: contains references to a non anonymized version of this paper )\n\nI think architecture pruning literature is relevant too, it would be nice to discuss the connection between NAS and this sub-field, as I think there are very strong similarity between the two.\n\nPros:\n* available source code\n* good experimental results\n* easy to read\n* interesting idea of encoding how active the various possible operations are with special weights\n\nCons\n* tested on a limited amount of settings, for something that claims that helps to automate the creation of architecture, in particular it was tested on two data set on which they train DARTS models, which they then show to transfer to two other data sets, respectively\n* shared with most NAS papers: does not really find novel architectures in a broad sense, instead only looks for variations of a fairly limited class of architectures\n* theoretically not very strong, the derivation of the bi-level optimization is interesting, but I believe it is not that clear why iterating between test and validation set is the right thing to do, although admittedly it leads to good results in the settings tested\n", "This paper proposes a novel way to formulate neural architecture search as a differentiable problem.\nIt uses the idea of weight sharing introduced in previous papers (convolutional neural fabrics, ENAS, and Bender et al's one shot model) and combines this with a relaxation of discrete choices between k operators into k continuous weights. Then, it uses methods based on hyperparameter gradient search methods to optimize in this space and in the end removes the relaxation by dropping weak connections and selecting the single choice of the k options with the highest weight. This leads to an efficient solution for architecture search. Overall, this is a very interesting paper that has already created quite a buzz due to the simplicity of the methods and the strong results. It is a huge plus that there is code with the paper! This will dramatically increase the paper's impact. \nIn my first read through, I thought this might be a candidate for an award paper, but the more time I spent with it the more issues I found. I still think the paper should be accepted, but I do have several points of criticism / questions I detail below, and to which I would appreciate a response.\n\nSome criticisms / questions:\n\n1. The last step of how to move from the one-shot model to a single model is in a sense the most interesting aspect of this work, but also the one that leaves the most questions open: Why does this work? Are there cases where we lose arbitrarily badly by rounding the solution to the closest discrete value or is the performance loss bounded? How would other ways of moving from the relaxation to a discrete choice work? I don't expect the paper to answer all of these questions, but it would be useful if the authors acknowledge that this is a critical part of the work that deserves further study. Any insights from other approaches the authors may have tried before the mechanism in Section 2.4 would also be useful.\n\n2. 
The related work is missing several papers, namely the entire category of work on using network morphisms to speed up the optimization process, Bender et al's one shot model, and several early papers on neural architecture search (work on NAS did not only start in 2017 but goes back to work in the 1990s on neuroevolution that is very similar to the evolution approach by Real). This is a useful survey useful for further references: https://arxiv.org/abs/1808.05377\n\n3. I find a few of the claims to be a bit too strong. In the introduction, the paper claims to outperform ENAS, but really the paper doesn't give a head-to-head comparison. In the experiments, ENAS is faster and gives slightly worse results. The authors state explicitly that their method is slower because they run it 4 times and pick the best result. One could obviously also do that with ENAS, and since ENAS is 8 times faster one could even run it 8 times! This is unfair and should be fixed. I don't really care even if it turns out that ENAS performs a bit better with the same budget, but comparisons should be fair and on even ground in order to help our science advance -- something that is far too often ignored in the ML literature in order to obtain a table with bold numbers in one's own row.\nLikewise, why is ENAS missing in the Figure 3 plots for CIFAR, and why is its performance not plotted over time like that of DARTS?\n\n4. The paper is not really forthcoming about clearly stating the time required to obtain the results:\n- On CIFAR, there are 4 DARTS run of 1 day each\n- Then, the result of each of these is evaluated for 100 epochs (which is only stated in the caption of Figure 3) to pick the best. Each of these validation runs takes 4 hours (which, again, one has to be inferred from the fact that random search can do 24 such evaluations in 4 GPU days), so this step takes another 16 GPU hours.\n- Then, one needs to train the final network for 600 epochs; this is a larger network, so this should take another 2-3 GPU days.\nSo, overall, to obtain the result on CIFAR-10 requires about one GPU week. That's still cheap, but it's a different story than 1 day.\nLikewise, DARTS is *not* able to obtain 55.7 perplexity on PTB in 6 hours with 4 GPUs; again, there is the selection step (probably another 4*6 hours?) and I think training the final model takes about 2 GPU days. These numbers should be stated prominently next to the stated \"search times\" to not mislead the reader.\n\n5. One big question I have is where the hyperparameters come from, for both the training pipeline and the final evaluation pipeline (which actually differ a lot!).\nFor example, here are the hyperparameters for CIFAR, in this format: training pipeline value -> final evaluation pipeline value:\n#cells: 8 -> 20\nbatch size: 64 -> 96\ninitial channels: 16 -> 36\n#epochs: 50 -> 600\ndroppath: no -> yes (with probability 0.2)\nauxiliary head: no -> yes (with weight 0.4)\nBatchNorm: enabled (no learnable parameters) -> enabled\n\nThe situation is similar for PTB:\nembedding size: 300 -> 850\nhidden units per RNN layer: 300 -> 850\n#epochs: 500 -> 8000\nbatch size: 256 (SGD) -> 64 (ASGD), sped up by starting with SGD\nweight decay: 5e-7 -> 8e-7\nBatchNorm: enabled (no learnable parameters) -> disabled\n\nThe fact that there are so many differences in the pipelines is disconcerting, since it looks like a lot of manual work is required to get these right. Now you need to tune hyperparameters for both the training and the final evaluation pipeline? 
If you have to tune them for the final evaluation pipeline, then you can't capitalize at all on the fact that DARTS is fast, since hyperparameter optimization on the full final evaluation pipeline will be order of magnitudes more expensive than running DARTS.\n\n6. How was the final evaluation pipeline chosen? Before running DARTS the first time, or was it chosen to be tuned for architectures found by DARTS?\n\n7. A question about how the best of 4 DARTS runs is selected, and how the best of the 24 random samples in random search is evaluated: is this based on 100 epochs using the *training* procedure or the *final evaluation* procedure? Seeing how different the hyperparameters are above, this should be stated.\n\n8. A few questions to the authors related to the above: how did you choose the hyperparameters of DARTS? The DARTS learning rate for PTB is 10 times higher than for CIFAR-10, and the momentum also differs a lot (0.9 vs. 0.5). Did you ever consider different hyperparameters for DARTS? If so, how did you decide on the ones used? Is it sensitive to the choice of hyperparameters? In the author response period, could you please report the \n(1) result of running DARTS on PTB using the same DARTS hyperparameters as used for CIFAR-10 (learning rate 3*e-4 and momentum (0.5,0.999)) and\n(2) result of running DARTS on CIFAR-10 using the same DARTS hyperparameters as used for PTB (learning rate 3*e-3 and momentum (0.9,0.999))?\n\n9. DARTS is being critizized in https://openreview.net/pdf?id=rylqooRqK7#page=10&zoom=180,-16,84\nI am wondering whether the authors have a reply to this.\nThe algorithm for solving the relaxed problem is also not mathematically derived from the optimization problem to be solved (equations 3,4), but it is more a heuristic. A derivation, or at least a clearer motivation for the algorithm would be useful.\n\n10. Further comments:\n- Equation 1: This looks like a typo, shouldn't this be x(j) = \\sum_{i<j} o(i,j) x(i) ? Even if the authors wanted to use the non-intuitive way of edges going from j to i, then o(i,j) should still be o(j,i).\n- Just above Equation 5: \"the the\"\n- Equation 5: I would have found it more intuitive had \\alpha_{k-1} already just been a generic \\alpha here.\n- It would be nice if the authors gave the explicit equations for the extension with momentum in the appendix for completeness.\n- The authors should include citations for techniques such as batch normalization, Adam, and cosine annealing.\n\n\nDespite these issues (which I hope the authors will address in the author response and the final version), as stated above, I'm arguing for accepting the paper, due to the simplicity of the method combined with its very promising results and the direct availability of code.", "The authors introduce a continuous relaxation for categorical variables so as to utilize the gradient descent to optimize the connection weights and the network architecture. It is a cool idea and I enjoyed the paper. \n\nOne question, which I think is relevant in practice, is the initialization of the architecture parameters. I might be just missing, but I couldn't find description of the initial parameter values. As it is gradient based, it might be sensitive to the initial value of alpha. \n\nIn (5), the subscript for alpha should be removed as it defines a function of alpha. I think (5) is misleading as it is because of k-1. 
(and remove one \"the\" in \"minimize the the validation\" in the sentence above (5))", "> \"Could you please give an explanation?\" (on the role of zero ops for edge selection)\nSince the zero op has been taken into account in the denominator of the edge strength (defined in sect. 2.4), edges with large weights on the zero ops are less likely to be selected.\n\nOur implementation follows the intuitions above. In particular, strengths of the zero ops are included for row-wise normalization of W (L154-155). The normalized W will then affect the output of L142 to determine the selected edges.\n\n> \"Do you have some thoughts on this phenomenon?\"\nIt is tempting to replace our current discretization scheme with temperature annealing + argmax. However, we found it nontrivial to come up with a suitable annealing schedule to simultaneously ensure (1) the temperature is low enough to yield a near-discrete architecture (thus getting rid of the “mixing effect” that you are referring to) (2) the temperature is high enough so that \\alpha does not get stuck at some suboptimal region, e.g., solution with lots of zeros. We leave more investigations on this direction as an interesting future work.", "Thanks for the suggestion. We will revise our writing accordingly.", "Hi! \n\nInteresting work! This question is more on the particular writing style and not on the method. \n\n About the claim of being \"able to discover both convolutional and recurrent networks\" as mentioned in text, I don't think it's an accurate way to remark that. Given that here you search for a computation cell, and quoting from section 2.1 \"The learned cell could either be stacked to form a convolutional network or recursively connected to form a recurrent network.\". \n\nIn my opinion, this doesn't imply that the method discovered recurrence or convolutional architecture, but instead it was explicitly done by stacking cells in a recurrent manner or providing a convolution as candidate operation. I would request the authors to reconsider their way of writing this and maybe say something like, \"able to discover effective cells for use in convolutional and recurrent networks\". \n\nThanks!\n", "Thank you for your reply. \n\n1) The claim that ZERO is omitted in edge selection is based on my understanding of your code at darts/cnn/search_model.pg:142. But I am not sure whether I comprehend it correctly. Could you please give an explanation? \n\n2) The explanation that ZERO does not play a role in the relative value of a feature map makes a lot of sense. But it also puts lots of weights on the fact that the mix op is continuous. If the softmax is annealed, this mixing effect is supposed to be diminishing. Rather than ZERO, a truly effective operation should comes up as max. However, in my experiment as I depicted in last comment, when the temperature is low, ZERO still has the largest logit. Do you have some thoughts on this phenomenon? ", "Thank you for the questions.\n\n> \"why ZERO operation is omitted in both edge and operation selection?\"\nWe’d like to point out that the zero op does play a role in determining the predecessors for each node (edge selection). 
Please refer to the edge strength defined in sect 2.4.\n\nOnce the predecessors are determined, the zero operations are no longer used in argmax (op selection) for two reasons:\n(1) To make our derived networks comparable with NAS/PNAS/ENAS/AmoebaNets, which all assume a fixed sparsity level, i.e., exactly two predecessors per node via *non-zero* ops.\n(2) The strengths of zero ops can be underdetermined, as will be explained below.\n\n> \"why ZERO operation tends to have largest logit?\"\nNote the behavior of the network is not sensitive to the output scale of the mixed ops due to the presence of batchnorm. This makes the strength of the zero operation underdetermined, because we can always add some incremental value to the logit of a zero op (which is equivalent to rescaling the mixed op it belongs to, according to eq (2) in sect. 2.2) with a little effect on the final classification outcome. \n\nThe above is not an issue with our current discretization scheme, which is based on the relative importance among non-zero ops only (once the active predecessors are decided). We will add more discussions on this topic in the revised paper.", "I have tried your suggestion on annealing the softmax to see the effect of discretization, and there seems to be little difference in the derived network. However, put it aside, when I inspect into the derivation method provided in your implementation, I find that the operation ZERO is omitted (the final operation is selected from any ops but ZERO), as in your code darts/cnn/model_search.py:146. Then I go back to check the logit of ZERO operation, and find it is actually the largest in almost every edge of the normal cell. \n\nTo exclude the effect of annealing, I run your original implementation for three times with different random seeds. And it seems ZERO is still the one with largest logit. If the ZERO operation is playing the role as you stated in Sec. 2.1, it should be the argmax that is supposed to be selected as you stated in Sec. 2.4, resulting in an extremely sparse graph rather than the one you provided. Could you please give an explanation to\n1) why ZERO operation is omitted in both edge and operation selection? \n2) why ZERO operation tends to have largest logit?", "Does the \"2.88 +/- 0.09%\" come from DARTS (first order) or DARTS (second order)?\n\nIn addition, would you mind to report the results of DARTS (first order) on WT2?", "First, we'd like to emphasize that the hyperparameters provided in our scripts were chosen based on a random subset of the training data (as the validation set) rather than the test data, though we used the 50K/10K training/test split in our released code (i.e., cnn/train.py for the final run) and printed out the errors on both sets. This is to make it easier for people to reproduce the expected test learning curves and the reported test error of *the model at the very end of training*.\n\nSecondly, we'd like to point out that training the final model using all the 50K images to obtain the test error on the 10K images is a common practice. Please refer to ResNet [1] (Sect. 4.2), DenseNet [2] (Sect. 4.1: “For the final run we use all 50,000 training images and report the final test error at the end of training”), their official implementations, as well as the codebases of NAS and ENAS. Note the 45K/5K split is recommended for model selection (architecture search and hyperparameter tuning) but not for the final run.\n\nFinally, we agree that this is an important detail that should be included in the paper. 
We also plan to refactor our code to ensure the users do not mistakenly tune their models wrt the test set. Thanks for bringing it up and please let us know if you have any other concerns.\n\n[1] He, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\n[2] Huang, Gao, et al. \"Densely Connected Convolutional Networks.\" CVPR. Vol. 1. No. 2. 2017.", "Dear authors,\n\nI noticed that in your architecture evaluation script (cnn/train.py) for CIFAR-10, you use the whole training set of 50k images for training and declare the test set as your validation set. To my knowledge, this is not common practice and will result in a lower test error compared to others who split the training set into 45k/5k train/validation (as, for example, in the Resnet and Densenet papers), while evaluating on the test set only once at the very end of the training procedure.\n\nI suggest you rerun your experiments with a 45k/5k train/validation split to ensure a fair comparison, or please clarify if there is a misunderstanding.\n\nThank you.", "Thank you for the comments. We respectfully disagree with your statement that “the loss is wrong.”. The reasons are as follows:\n\n(1) Our architecture encoding is deterministic and we don’t maintain any probability distribution over architectures. Hence “expectation of loss over all possible architectures” in your statement is not even well-defined, not to mention the statistical consistency.\n(2) eq. 3 is just the paraphrase of “finding a (deterministic) architecture that minimizes its final validation loss.”. No stochasticity is involved.\n(3) The continuous architecture \\alpha is nothing but a high-dimensional hyperparameter. While bi-level optimization is new to the field of architecture search, formulations similar to eq. 3 have been well-studied for hyperparameter search [1,2,3].\n(4) Effectiveness of eq. 3 has been empirically verified by extensive experiments.\n\n[1] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, pp. 2113–2122, 2015.\n[2] Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In ICML, 2016.\n[3] Luca Franceschi, Paolo Frasconi, Saverio Salzo, and Massimilano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. ICML, 2018.\n", "Thank you for the comments.\n\n> “Selecting the k strongest predecessors to derive the final architecture cannot ensure the discrete one is the best”\nWe retained 2 predecessors per node in order to make our derived cells comparable with the ones in prior works (NAS/PNAS/ENAS/AmoebaNets). This is for fair comparison but by no means the optimal discretization strategy. \n\n> “Actually, the \"quantization error\" might leads the final architecture to be totally different with the one from the training procedure.”\nIt’s expected that continuous relaxation would come with a tradeoff between efficiency and bias. Quantization error of such kind can be reduced, e.g., by annealing the softmax temperature throughout the search process, forcing the \\alpha’s to approach one-hot vectors. Improving our current discretization strategy at the end of search is an interesting direction orthogonal to our main focus, i.e. the overall framework of differentiable architecture search.\n\n> “ALL of ops for reduce cell is max pooling and most of ops for normal cell is sep_conv_3x3”\nFirst, this is incorrect. 
Our learned reduction cell contains not only max pooling but also skip connections; our learned normal cell contains not only sep_conv_3x3, but also skip connections and dilated convs. Please refer to Figure 4 & 5.\n\nSecondly, it’s actually interesting that the algorithm learns to introduce more translation invariance in the reduction cell (through multiple pooling ops) and to come up with a densely connected normal cell (through 3x3 sep convs and skip connections). Both design patterns are existent in successful architectures designed by human experts.\n\n> “It is really wired.”\nWhile visual judgements about cells in Figure 4 & 5 can be subjective, please note (1) effectiveness of those cells has been quantitatively verified by their competitive performance on both CIFAR-10 and ImageNet; (2) the algorithm can learn to leverage a more diverse set of ops when necessary. Please refer to our recurrent cell in Figure 6 with strong results on PTB. ", "Hello,\n\nThe key contribution of this work is to propose that architecture search can be carried on by gradient decent. That is great!\n\nThe solution of this work lies on relaxing \"the categorical choice of a particular operation as a softmax over all possible operations\". However, the objective (eq. 3) based on this relaxation is not equivalent to expectation of loss over all possible architectures. But the expectation of loss over all possible architectures should be the correct metric to be optimized. Hence, I think the loss of DARTS does not make sense.\n\nI do not have a hard feeling on this work. Instead, I appreciate the work. However, I just think the loss is not correct and want to discuss it here to make it clearer.\n\n", "The way to derive the final discrete architectures is to select the k strongest predecessors according to Sec. 2.4. However, it seems it is not consistent with the training objective. Specifically, in training, the model is optimized in the condition that all possible ops for each edge are summarized according to the weights by the softmax of alpha. Selecting the k strongest predecessors to derive the final architecture cannot ensure the discrete one is the best. Actually, the \"quantization error\" might leads the final architecture to be totally different with the one from the training procedure. \n\nI have run the code, and I also find the alphas seems quite wired: Most of them have max value on the same op. This is also confirm by the figure 4 and 5 of the paper, i.e. ALL of ops for reduce cell is max pooling and most of ops for normal cell is sep_conv_3x3. It is really wired.\n\nCan you prove that the discrete one is the best architecture? And Could you provide the values of alphas for the normal and reduce cell for figure 4 and 5?", "Thank you for the questions.\n\n> \"it would be better to provide results on using the same set to optimize w and alpha\"\nThe results using this strategy are already presented in the 2nd paragraph of sect 3.3. The corresponding cell yielded 4.16 ± 0.16% test error.\n\n> \"also compare the alternating update manner with the simultaneous updating\"\nFollowing your suggestion, we further treated \\alpha as part of conventional parameters and optimized it simultaneously with w. The resulting cell yielded 3.56 ± 0.10% test error. \n\nTo summarize, both schemes are worse than the original bilevel formulation (2.76 ± 0.09% test error), which we attribute to overfitting — note \\alpha is \"tuned\" directly on the training set in the suggested heuristics. 
We will expand our sect 3.3 to include more discussions.", "Hi,\n\nI have some questions about the optimization of DARTS.\n\n1. In Equation 3 and 4, the w*(alpha) is obtained from the training set and alpha is optimized on validation set. I know it makes sense to use validation set for alpha, as is discussed in the paper. However, I wonder what the performance will be if you optimize w and alpha both on training set? If you split the training set to a \"train\" and a \"valid\" set half and half, as you did in the code, the generalization would be better, but less samples is used to train alpha and w. However, if you use the whole training set, more samples are seen by the model to optimize alpha and w, and the performance might also be better. In my opinion, this should also be a baseline for completeness.\n\n2. This question is associated with the above one. In algorithm 1, the alpha and w are optimized alternatively. My question is: If w and alpha are both optimized on the same training set, can we optimize alpha and w simultaneously without alternating?\n\nIn summary, I think it would be better to provide results on using the same set to optimize w and alpha, and also compare the alternating update manner with the simultaneous updating when the same set is used to optimize w and alpha.\n\nThanks ", "Cool!!\nThanks for your kind reply", "Thanks for the question. We will include complexity analysis in the revised paper.\n\nAs for ConvNets, each of our discretized cell allows \\prod_{k=1}^4 ((k+1)*k)/2)*(7^2) = ~10^9 possible DAGs (recall we have 7 non-zero ops, 2 input nodes, 4 intermediate nodes with 2 predecessors each) without considering graph isomorphism. Since we jointly learn both normal and reduction cells, the total #architectures is approximately (10^9)^2 = 10^18. This is greater than the ~5.6*10^14 of PNAS (reported in their sect 3.1) which learns only a single type of cell.\n\nAlso note that we retained the top-2 predecessors per node only in the very end, and our continuous search space before this final discretization step is even larger. Specifically, each relaxed cell (a fully connected graph) contains 2+3+4+5 = 14 learnable edges, allowing (7+1)^14 = ~4*10^12 possible configurations (+1 to include the zero op indicating a lack of connection). Again, since we are learning both normal and reduction cells, the total number of architectures covered by the continuous space before discretization is (4*10^12)^2 = ~10^25. The above assumes that we retain only 1 of the 8 ops per edge, as done in our experiments. The search space can be substantially enlarged without additional computation overhead by retaining multiple ops per edge (e.g. by replacing the current argmax during discretization with top-K selection). We leave the exploration of this enriched space as our future work.\n\n> \"networks searched per GPU hour?\"\nThis metric is not directly applicable to DARTS, which optimizes architectures in continuous space in contrast to most prior works that enumerate architecture samples.", "Hi there,\n\nIn ENAS, they show that their search space can realize 1.3*10^11 final networks in section 2.4, and that of PNAS is ~10^12 as calculated in section 3.1, what is the complexity of the search space in this paper? Could you add a table to compare the complexity of search space in these papers, or add a column in table 1&2 to show the efficiency i.e networks searched per GPU hour? 
It seems to be more convincing if the comparison of complexity of search space could be provided.", "Thank you for your kind reply, it is indeed a very good paper worth reading and reflecting.", "> “there is a paper about NAS not mentioned”\nThanks for mentioning about the BlockQNN paper. We will cite it as a method under the RL category.\n\n> Regarding definitions of w’ and w_k\nw’ means the one-step unrolled w, whose definition is given underneath eq (6). w_k means the actual numerical value of w at step k. We’ll make these more clear in the revision. \n\n> “I am kind of curious about the motivation of formula 5”\nPlease refer to section 2.3. The motivation is to descent the architecture wrt the optimal w* instead of the current suboptimal w. The former is expensive but can be approximated by the latter after taking a gradient step. While the idea of unrolling is new to the NAS literature, similar techniques can be found in unrolled GAN [1] and MAML [2].\n\n> “the comparison between the vanilla GD and current formula 5?”\nWe do have provided results to compare formula 5 (DARTS 2nd order in Table 1 & 2), vanilla GD (DARTS 1st order in Table 1 & 2) and coordinate descent (2nd paragraph in section 3.3).\n\n[1] Metz, Luke, et al. \"Unrolled generative adversarial networks.\" arXiv preprint arXiv:1611.02163 (2016).\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \"Model-agnostic meta-learning for fast adaptation of deep networks.\" arXiv preprint arXiv:1703.03400 (2017).", "Hi there,\n\nThis is absolutely a good work, however, there might be some small questions.\n\nFirstly, there is a paper about NAS not mentioned, accepted by CVPR 2018. By using Q-Learning, it achieves comparable results on ImageNet within 3 days on 32 GPU. It might be better to mention and add comparison with this work. The link is here https://arxiv.org/abs/1708.05552.\n\nSecondly, the algorithm 1 with the formula 5 seems a little bit confusing. Would it be more clear and distinguishable to give complete expression or formula about w_k, w_k-1, and w_prime in algorithm 1?\n\nLastly, I am kind of curious about the motivation of formula 5, could you give more detailed demonstration or experiment results about the comparison between the vanilla GD and current formula 5? ", "You are welcome. We also conducted architecture search using 20 cells (with initial #channels reduced from 16 to 6 due to memory budget) without adjusting other hyperparameters. The resulting cell achieved 2.88 +/- 0.09% test error on CIFAR-10. We will include those additional results and related discussion in the revised paper.", "Thanks for your kind reply.", "Thank you for the comments.\n\n>> Regarding the number of operations\nThe #ops in our convnet experiments is the same (eight) as in PNAS [1] and is greater than 6 used in ENAS [2]. We didn’t try larger numbers due to the memory constraints of a single GPU. We will include the #ops as a column in our Tables to better reflect these details.\n\n>> \"the search space is much larger than DARTS\"\nThis is not correct. While the controller in NAS must sample exactly 2 connections per node, DARTS is simultaneously exploring all possible connections within a fully-connected supergraph. Although we kept the top-k (k=2) connections in the derived discrete architecture (sect. 2.4) for fair comparison with NAS, with DARTS k could be other numbers greater than 2.\n\n>> \"Most previous NAS works seem not to use dilated convolutions.\"\nThis is not correct. Dilated convolutions are used in most prior works. 
Please refer to NASNets [3], AmoebaNets [4] and PNASNets [1]. \n\n>> \"Would you mind to discuss the effect of the network depth during searching?\"\nSince \\alpha is shared among cells at different layers, backprop wrt \\alpha behaves similarly to BPTT. Searching with a deeper network might thus require different hyper-parameters due to the increased number of layers (steps) to back-prop through. \n\n[1] Liu, Chenxi, et al. \"Progressive neural architecture search.\" arXiv preprint arXiv:1712.00559 (2017).\n[2] Pham, Hieu, et al. \"Efficient Neural Architecture Search via Parameter Sharing.\" arXiv preprint arXiv:1802.03268 (2018).\n[3] Zoph, Barret, et al. \"Learning transferable architectures for scalable image recognition.\" arXiv preprint arXiv:1707.070122.6 (2017).\n[4] Real, Esteban, et al. \"Regularized evolution for image classifier architecture search.\" arXiv preprint arXiv:1802.01548(2018).", "SMBO (used in PNAS) and MCTS are discrete search algorithms. Both do not offer an explicit notion of gradient over (the continuous representation of) the architecture as in DARTS.\n\nThe goal of the performance predictor/surrogate model in SMBO is to guide the search within the discrete space. This alone does not make the search algorithm itself differentiable.\n", "This is an interesting work with awesome codes. I have a few questions about the experimental comparison.\n\n1. This paper uses a different search space than NAS/PNAS/ENAS, i.e., 8 different operations with only 4 steps. Is it unfair to compare the search cost with those methods? For example, NAS uses 13 operators and tries 2 connections, the search space is much larger than DARTS. Would it be better to use the same search space for comparison?\n\n2. Why use dilated convolution? Most previous NAS works seem not to use dilated convolutions.\n\n3. Would you mind to discuss the effect of the network depth during searching? In A.1.1, the network with 8 cells is used to search the best cell. I try the released code and use a deeper network (20 cells) for searching, but obtain much worse results than DARTS. Is there any explanation?", "Hello there,\n\nIt is apparently an interesting work with solid results on a variety of dataset.\n\nI have a quick question, the paper tries to model the architecture design domain as a function, then the agent searches for the promising architectures with the gradient descent.\n\nSo, what's the key difference between the surrogate function in Progressive Neural Architecture Search? The surrogate model is also differentiable, and the idea, in my perspective, would be similar.\n\nAlso the simulation model in \"AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search\" is also differentiable, and potentially to achieve the same goal.\n\nCould you please clarify these points? Thank you." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "H1l5BS-ueV", "iclr_2019_S1eYHoC5FX", "SJesF7dIg4", "HklGrBN8gE", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "rJeh6xB5nQ", "HJg9ETOFn7", "rkgEU1jdCX", "r1ekErZ53Q", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "HJek5PHFh7", "SkxmNjGq27", "iclr_2019_S1eYHoC5FX", "r1e0Zyrt3X", "r1lYDFO_37", "Syla8iL1hQ", "H1gHvB8Wi7", "rkxzCbNziX", "iclr_2019_S1eYHoC5FX", "S1l9xl2Rim", "ryef0ciRjQ", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX", "Hkxeq34sjQ", "iclr_2019_S1eYHoC5FX", "Bylnj2t5j7", "ryxFEjq_sQ", "iclr_2019_S1eYHoC5FX", "BkllDdQUo7", "H1gSF654im", "iclr_2019_S1eYHoC5FX", "S1eKIEw55Q", "HkxGFGpfcX", "Hkxx5G9f57", "BygL-vufqQ", "iclr_2019_S1eYHoC5FX", "iclr_2019_S1eYHoC5FX" ]
iclr_2019_S1ecm2C9K7
Feature-Wise Bias Amplification
We study the phenomenon of bias amplification in classifiers, wherein a machine learning model learns to predict classes with a greater disparity than the underlying ground truth. We demonstrate that bias amplification can arise via inductive bias in gradient descent methods, resulting in overestimation of the importance of moderately-predictive ``weak'' features if insufficient training data is available. This overestimation gives rise to feature-wise bias amplification -- a previously unreported form of bias that can be traced back to the features of a trained model. Through analysis and experiments, we show that while some bias cannot be mitigated without sacrificing accuracy, feature-wise bias amplification can be mitigated through targeted feature selection. We present two new feature selection algorithms for mitigating bias amplification in linear models, and show how they can be adapted to convolutional neural networks efficiently. Our experiments on synthetic and real data demonstrate that these algorithms consistently lead to reduced bias without harming accuracy, in some cases eliminating predictive bias altogether while providing modest gains in accuracy.
accepted-poster-papers
The authors identify a source of bias that occurs when a model overestimates the importance of weak features in the regime where sufficient training data is not available. The bias is characterized theoretically, and demonstrated on synthetic and real datasets. The authors then present two algorithms to mitigate this bias, and demonstrate that they are effective in experimental evaluations. As noted by the reviewers, the work is well-motivated and clearly presented. Given the generally positive reviews, the AC recommends that the work be accepted. The authors should consider adding additional text describing the details concerning Figure 3 in the appendix.
test
[ "Hyxcbl9SyV", "SJlKHfoNJN", "SkeSFyHzyN", "B1l3c_5e14", "rkek5d6R0Q", "r1ei7VroCX", "r1xgFKumnQ", "S1x3UjW7RQ", "ByelNylQ0m", "S1xctjvOaQ", "SJxsEivOaQ", "S1g-livup7", "H1lSw16ch7", "BkldIeZq27" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your continued feedback. We ran the experiment suggested, where \\mu_1 = (1,0,1,0,1,...,0), and this results in no systematic bias (with a setup similar to that of Figure 2(a), but with 200 weak features - 100 per class - and N=1000, the average bias over 100 trials was 0.00031, which would round to 0.0% using the same precision as in Table 1). We believe this result makes sense: since permuting the order of the features does not affect the result, it would not be possible to have bias when the features are entirely symmetric, because the orientation of the features can be reversed simply by permuting them when there are the same number of features oriented in each direction. We show empirically that the weaker features are more likely to be overestimated by SGD, but without any asymmetry, we would expect that this would affect both classes equally in expectation.\n\nWe agree that the asymmetry need not be precisely in the *number* of weak features, as it was in the synthetic data. For example, some weak features may be weaker than others, and there may be a disparity in the total strength of the features for each class. Thus, more complicated cases may be slightly harder to analyze. In this vein, on the real data, feature parity is likely often overly simple, as it doesn’t necessarily balance the total strength of the features for each class. Experts are more targeted towards balancing the strength of the features rather than only the number, which was likely more appropriate for most of the real datasets.\n\nThank you for your suggestion regarding the additional information on the datasets. In Section 5 we briefly note that we selected datasets based on high feature dimensionality, but we can also include a table with further details in the appendix.\n\nWe agree that overfitting likely happens to some extent on these datasets during training. What is interesting is that our techniques are post-hoc, meaning that models are not retrained following feature selection, they are simply pruned. Intuitively, this could perhaps be interpreted to mean the strong features were learned well and the overfitting happens primarily with the weak features. Aside from being interesting from the perspective of understanding the bias/overfitting, our techniques are specifically targeted towards removing bias when improving accuracy, while we see, e.g., L1 cannot typically accomplish both of these goals. Comparison to some of the methods in Li et al. would be interesting in the context of bias. However, even if the performance of other methods were the same, our methods could still be preferable in some contexts because they are easily and quickly applied post-hoc, and are easily extended to deep networks.\n\nWe apologize, the equation bias + accuracy <= 100 is correct; upon reviewing our data it appears we rounded 0.0980 incorrectly to 0.0100 when writing it into Table 1 (the bias and accuracy were 0.0980 -> 9.8 and 0.9019 -> 90.2 respectively). We will update Table 1 with this correction.", "Apologies; you are correct that the direction of the bias is not immediately clear from Table 1, as Table 1 reports absolute bias (since this is what we would like to minimize). 
In our experiments we observed that the bias was in fact in the same direction as the feature asymmetry in prostate (i.e., bias with sign is -47.3); while we do not highlight this fact in the paper specifically, we will update the table to include the sign of the bias so the direction agreement is also clear.", "Thanks for your answer and the revision. The writing and the structure of the paper are much better now. I still have two issues with the paper.\n\n1) You’ve suggested asymmetry of the features is one of the reasons that SGD leads to systematic bias (e.g., you have written: “When the data is distributed asymmetrically with respect to features’ orientation towards a class, gradient descent may lead to systematic bias”). I’m wondering what the reason behind this claim is?\n\nIn your synthetic dataset (Figure 2) all the features are asymmetric, and you did not study the presence of bias when there are lots of symmetric weak features (e.g., instead of \\mu_1 = (1,0,1,1,...1), assuming \\mu_1 = (1,0,1,-1,1,-1,…, 1)).\n\nIn the experiment section, building upon this claim, you introduced feature parity to mitigate bias; however, feature parity does not perform very well in comparison to the other method (Experts). So, I’m not sure how much I can believe that asymmetry causes bias.\n\n\n2) I took a look at the statistics of some of the datasets in your experiment (datasets from Li et al., 2016), and I realized in some datasets there are 10X to 100X more features than instances. E.g., prostate has 100 instances but 50K features (I would suggest adding a small table with the statistics of the datasets).\nGiven these statistics, it is fairly clear that overfitting happens during training; therefore, the improvement in accuracy is not surprising (note that in prostate the increase in accuracy causes the reduction in bias; all the errors (all of the 10%) are still toward one of the classes).\n\nI’m wondering if there is anything special about your feature selection methods. I mean, if I use other feature selection methods, how do they perform regarding bias reduction? As I checked (in Li et al., 2016), some other feature selection methods increase the accuracy comparably to or sometimes better than your methods.\n\nAgain, I would like to mention that I really liked the idea of showing that weak features cause systematic bias, and I liked that you experimentally showed that even with p*=0.5, SGD leads to systematic bias.\n\n\nMinor:\nIs the equation below right?\nBias <= 100 - accuracy\nWhy does this not hold for the prostate dataset?", "The rewritten sections are much clearer. The comparison between LR/SVM & L-BFGS/SGD is really impressive. The comparison between LR/SVM without SGD makes it even more interesting to identify when feature asymmetry will be linked to bias, and so when feature re-balancing helps.\n\n\"namely, the bias is typically in the direction of the feature imbalance, even when this is at odds with the prior bias (as is the case in prostate).\" I am confused: prostate has asymm<0.5 and bias>0. Is it the same direction?\n", "I really like the statement: \"Rather, we aim to point out that in the case of “avoidable” bias, there is no such trade-off, as bias and accuracy are not in conflict.\" It's entirely possible that I just missed it, but I think a statement of this type and some discussion of the broader trade-offs would go very well in the introduction. 
People are thinking a lot about this issue and I think this paper makes a good argument that, in fact, there may be some low hanging fruit where there is basically no trade-off at all.", "Thank you for your further feedback on the story of the paper. To answer your specific questions: we do not believe that the form of bias amplification identified in this paper as “avoidable” occurs whenever unbalanced features are present, as we observed that linear SVM models trained using SMO do not exhibit it (Figure 3 in the appendix); this form of bias amplification does not just occur in linear models, as we observed it in the two deep convolutional networks presented in our evaluation (Table 1). We agree that a more general result that pinpoints why SGD overestimates weak features is interesting and an avenue of future work. We see the contributions in this paper as a necessary first step towards answering these more general “why” questions, and look forward to further analysis of this phenomenon as future work. \n\nWe appreciate your suggestions on framing our claims, and will revise the writing accordingly prior to future submission or publication to ensure that our precise claims are clear and not overstated.\n\nWe certainly agree that in some cases we may reasonably want to sacrifice accuracy for bias. In these cases we might, e.g., use a notion of fairness to guide how we handle the trade-off. It was not our intention to take a specific position on this trade-off, or to weigh in on defining fairness. Rather, we aim to point out that in the case of “avoidable” bias, there is no such trade-off, as bias and accuracy are not in conflict. Mitigating feature-wise bias may be used in conjunction with other techniques in the context of fairness.", "update: The authors' feedback has addressed some of my concerns. I update my rating to 6.\n=================\noriginal:\nThis paper provides some new insights into classification bias. On top of the well known unbalanced group size, it shows that a large number of weak but asymmetry weak features also leads to bias. This paper also provides a method to reduces bias and remain the prediction accuracy.\n\nIn general, the paper is well written, but some description can be clearer. Some notation seems inconsistent. For example, D in equation (1) denotes the joint distribution (x,y), but it also refers to the marginal distribution of x somewhere else. \n\nIn the high level, I am not totally convinced of how significant the result is. In particular, the bias this paper defines is on the probability (softmax) scale, but logistic regression is on logit scale-- not even aimed at the unbiasedness in the original scale. So the result in section 2 seems to be expected. Given the fact that unbiasedness is not invariant under transformation, I am wondering why it should be the main target in the first place. \n\nIn the bias reduction methods in equation 5 and 6, both the objective function and the constraint are empirical estimations. Will it be too noisy to adapt to the high dimensional setting? On the other hand, adding some sparsity regularization improves prediction seems well known in practice.\n\nI would also encourage the authors to have extended work both theoretically and experimentally. The asymmetry feature is only illustrated by a single logistic regression. Is it a problem of weak features, or indeed a problem of logistic regression? What will happen in a more general case beyond mean-field Gaussian? 
I would imagine in this simple case the authors may even derive the closed form expression to verify their heuristics. \n\nBased on the evaluations above, I would recommend a weak reject. \n", "I think I should clarify a bit what I mean when I say \"this would be a much more general result\" and why I think it would make the paper better. As I see it, the main contribution of the paper is an observation that weak features lead to bias amplification in logistic regression models when the parameters are estimated using SGD. To be clear, I think this is a valuable observation in and of itself, and the authors are rigorous in confirming and describing this observation (section 3.2 of the updated paper); however, the scope of this observation is unclear. For example, does bias amplification occur in any setting with weak features regardless of the model and optimization method used (I assume not, but this is not evaluated)? Does bias amplification occur in any classification model trained using SGD or only linear models? At the core of these questions is the \"why\" question: \"what are the properties of LR, SGD, or their combination that lead to bias amplification in the presence of weak features?\" Answering this question would be a more general result because it would let us identify the problem in other settings without the need for experimentation and would allow us to propose fixes that are based on addressing the root cause rather than heuristics.", "RE: The source of bias - In light of this comment, I think you need to be *very* careful about how you describe the sources of bias in the paper. For example, the second paragraph of section 3.2 in the updated paper says \"Logistic regression models make fewer assumptions about the data and are therefore more widely-applicable, but as we demonstrate in this section, this flexibility comes at the expense of an inductive bias that can lead to systematic bias in predictions.\" I read this as implying that LR is the source of the bias which your experiments seems to suggest it isn't. As another example, the last paragraph of section 3.2.1 in the updated paper says \"Figure 2c suggests that overestimation of weak features is precisely the form of inductive bias exhibited by gradient descent when learning logistic classifiers.\" Your analysis suggests that it is largely due to SGD rather than general gradient descent so I would replace any mention of \"gradient descent\" with \"SGD\". In light of the updates, I think this paper would be a lot stronger if it focused on identifying and describing the source of the bias (this would be a much more general result), but is still worth publishing if the authors are careful about the scope of their claims.\n\nRE: \"it can be considered equally problematic to sabotage accuracy in order to reduce bias\" - I would argue that this is exactly what we want to do in many settings where we care about bias. For example, we should be willing to sacrifice accuracy in recidivism prediction in order to avoid racial bias. A focus on accuracy first is exactly the mindset that has led to algorithmic fairness becoming a serious issue.", "While logistic regression is often on the logit scale, we tried to consistently use the probability scale in our analysis and experiments. If the paper contains any inconsistencies on this matter, we would appreciate knowing where they appeared so that we can address them. 
However, we would like to better understand the reviewer’s concern about unbiasedness failing to be invariant under transformation, and how we could have otherwise targeted our approach to better address the problem. With additional details, we hope to be able to address your concern.\n\nIn (7) (formerly 6), we are minimizing the bias of the model over the choices of alpha and beta subject to not harming accuracy. It is true that when optimizing, the bias and accuracy of the model are necessarily obtained via an empirical estimation, so it is possible that the alpha and beta chosen wouldn’t generalize well to the test data. We treated these as normal hyperparameters in our experiments. The numbers reported in Table 1 report the bias and accuracy on the test data, while the optimization problem from (7) was solved on the training data, so we are reasonably confident that in practice the optimal alpha and beta generalize well, even in high-dimensional settings.\n\nOur aim was to identify the phenomenon of feature-wise bias on a class of problems that are sufficiently controlled so that we can make reasonable conclusions about the source of the bias. In the general case, beyond mean-field Gaussian, it may be harder to identify the source of the bias, as many sources may be interacting at once (e.g., feature-wise, class-imbalance, correlated features, etc.). We believe the results in Table 1 shed some light on the general case, namely, the bias is typically in the direction of the feature imbalance, even when this is at odds with the prior bias (as is the case in prostate). Furthermore, on some of the datasets (arcene in particular), balancing the number of features was quite effective at removing bias while improving accuracy, suggesting that a reasonable portion of the bias was caused by feature asymmetry.", "We agree that the results we have presented do not indicate that SGD is the exclusive cause of the bias-inducing behavior examined in the paper. We note that LR will, given enough data, converge to the Bayes-optimal classifier, and because the data used in Figure 2 has an unbiased prior, we would expect no bias in the predictions according to Thm. 1. However, we posit that feature-wise bias occurs when the learner has not seen enough data to converge. While we observed this consistently with models trained using SGD, it may indeed happen when other methods are used to learn the coefficients from insufficient data. On the other hand, different methods may yield different models when training ends prior to convergence.\n\nWe have updated the paper with additional results that shed more light on the sources of bias in linear models. Figure 3 in the appendix depicts the bias of classifiers trained using the same data as in Figure 2, including LR trained with either L-BFGS or SGD, linear SVM trained with either SMO or SGD, and SGD using modified Huber and squared hinge losses. In short, while LR trained with L-BFGS does exhibit some bias, it is not as pronounced or consistent as it is in models trained with SGD, whereas all the models trained with SGD exhibited nearly identical bias trends. In slightly more detail, LR trained without SGD was less sensitive to the number of weak features, i.e., there was less bias than LR trained with SGD until there was a sufficiently high number of weak features, and even then, the effect was not as strong. Furthermore, SVM trained without SGD exhibited essentially no such bias, while SVM trained with SGD exhibited the same bias as LR with SGD. 
These results suggest that while the bias-inducing behavior may occur when other methods are used, they consistently follow from the use of SGD.\n\nThank you for your feedback on the related work section, we have moved it to the front of the paper as suggested.\n\nThank you for your comment about L1 versus experts method parameters--upon review, the wording in the experiments section is not clear. We did use the same procedure for finding the hyperparameter for L1 regularization as for the experts technique, i.e. we optimized for minimizing bias subject to the constraint that accuracy should not decrease from the original model. You may have noticed that on the glioma dataset, the accuracy goes down for L1. We conjecture that this is caused by the hyperparameter not generalizing well to the test data, as we evaluated hyperparameters on the training data. We have updated the writing in Section 4 to clarify this.\n\nIt’s not immediately clear what distinguishes the prostate data from the others, but upon inspection, prostate has a rather high Mahalanobis distance between classes compared to many of the other datasets. This might suggest there was a lot of room for improvement on this dataset (i.e., the bias was largely preventable because the classes are well-separated). Like most of the other datasets, prostate had a huge disparity in the number of data points (small) to features (large), so it is perhaps unsurprising that despite having the classes fairly well-separated in its feature space, a model with no regularization was unable to generalize well on it. Furthermore, prostate was the only dataset for which the feature disparity opposed the prior bias (and moreover the bias went in the direction of the features rather than the prior), so perhaps the feature-wise bias was the most significant source of bias in this example. It may be an interesting avenue for future work to investigate whether, e.g., Mahalanobis distance between classes, is a good predictor for the effectiveness of our techniques on real data.\n\nIn Section 3 (previously Section 2), paragraph 2, we state the goal (minimizing 0-1 loss) of the “standard binary classification problem,” not the overall goal of our paper. In fact, our goal is not exactly to generally minimize bias along with loss; we note that there are multiple possible sources of bias, only some of which are avoidable when optimizing accuracy. Namely, as stated in Theorem 1, an optimal classifier may necessarily be biased in some cases. Our goal is to remove bias that is not “necessary” in this way, which is not easily captured by additional terms in the training objective. Our work identifies feature-wise bias as one type of preventable or “unnecessary” bias, and attempts to remove it in a targeted fashion with post-hoc feature selection. In other words, we want our model to be no more biased than the most accurate predictor, which may still have some bias according to Theorem 1 (but we consider this bias unavoidable because it can be considered equally problematic to sabotage accuracy in order to reduce bias).\n\nThank you for your minor comments as well, we have addressed them in the updated paper.", "Thank you for your comments regarding the previous work section. We have included a more in-depth comparison to other work around bias in GNB in our update to the paper. \n\nWe have updated Section 2.2 (now Section 3.2) with a more precise description of the data used in that section, which was constructed to exemplify the feature asymmetry we describe. 
We hope that it clears up some of the confusion in that part of the paper, and are willing to revise with additional clarifications if needed.\n\nRegarding the claim that bias follows from an inductive bias of SGD, the argument is that because we see bias when we train SGD-LR in a setting where the Bayes-optimal classifier would have no bias, the bias cannot be explained by Theorem 1 (i.e., as bias that is inevitable when optimizing accuracy), hence we conclude the bias must have been caused by the learning rule (SGD-LR). While the inductive bias may not be uniquely attributable to SGD, and instead may be a consequence of using LR regardless of how the coefficients were obtained, we found that LR models trained on the same data using other methods, such as L-BFGS, did not result as much consistent bias as LR trained with SGD. Moreover, training with SGD using other loss functions, such as hinge, modified-Huber, and perceptron, resulted in the same bias characteristics as shown in Figure 2. Thus, linear classifiers trained with SGD consistently show the inductive bias we describe, whereas comparable classifiers trained using other methods may not. We have included an additional figure (Fig. 3 in the appendix) that details these results.\n\nIn our experiments we compare our feature selection method targeted at feature-wise bias to L1 regularization. We are not aware of other feature selection methods intended to mitigate the bias we target in the paper, but are willing to include additional comparisons if there are comparable approaches that we missed.\n\nWe additionally added results for L1 regularization on CIFAR. In general, L1 is harder to apply to the deep network scenarios because training takes a long time, making the hyperparameters hard to tune.\n\nThank you also for your formatting comments; we have addressed them in the updated version of the paper.", "In this paper, the authors studied bias amplification. They showed in some situations bias is unavoidable; however, there exist some situations in which bias is a consequence of weak features (features with low influence to the classifier and high variance). Therefore, they used some feature selection methods to remove weak features; by removing weak features, they reduced the bias substantially while maintaining accuracy (In many cases they even improved accuracy). Showing that weak features cause bias is very interesting, especially in their real-world dataset in which they improved bias and accuracy simultaneously. \n\n\nMy main concerns about this paper are its related work and its writing.\nAuthors did a great job in reviewing related work for bias amplification in NLP or vision. \nHowever, they studied bias amplification in binary classification, in particular, they looked at GNB; and they did not review the related work about bias in GNB. I think it is clear that using MAP causes bias amplification. Therefore, I think changing theorem 1 to a proposition and shifting the focus of the paper to section 2.2 would be better. Right now, I found feature orientation and feature asymmetry section confusing and hard to understand. In the paper, the authors claimed bias is a consequence of gradient descent’s inductive bias, but they did not expound on the reasoning behind this claim. Although the authors ran their model on many datasets, there is no comparison with previous work. So it is hard to understand the significance of their work. 
It is also not clear why they don’t compare their model with \\ell_1 regularization in CIFAR.\n\n\nMinor:\n\nPaper has some typos that can be resolved.\nCitations have some errors, for example, Some of the references in the text does not have the year, One paper has been cited twice in two different ways, For more than two authors you should use et al., sometimes \\citet and \\citep are used instead of each other.\nAuthors sometimes refer to the real-world experiment without first explaining the data which I found confusing.", "Summary:\n\nIn this paper the authors identify a specific source of marginal class probability bias that occurs when using logistic regression models. Using synthetic and real datasets they demonstrate this bias and explore characteristics of the data that exacerbate the issue. Finally, they propose two methods for correcting this bias in logistic regression models and neural network models with logistic output layers and evaluate these methods on several benchmark datasets.\n\nReview:\n\nOverall, I found the paper well-written, the problem well-motivated, and the proposed methods clear and reasonable. While I have a few concerns about presentation and experimentation, these are issues that can easily be remedied and I recommend acceptance.\n\nMajor comments:\n\n- The authors repeatedly say that gradient descent is the cause of the bias amplification (e.g. Section 2.2 title, \"...features that are systematically overestimated by gradient descent.\", \"... i.e., a consequence of gradient descent's inductive bias.\", \"... gradient descent may lead to systematic bias...\"). The inductive bias they describe is coming from the use of logistic regression, not the use of gradient descent. Specifically, a logistic regression model has a convex likelihood, which means that regardless of what algorithm is used to maximize the likelihood, it should converge to the same point. In fact, most off-the-shelf implementations of logistic regression do not use vanilla gradient descent. Further, gradient descent may be used to estimate the parameters of any number of models which may or may not have the same inductive bias the authors describe.\n\n- I thought the related work section was well-written and would strongly recommend moving it to the beginning of the paper as it motivates the entire problem. I also think it could be helpful to ground the technical definitions of bias amplification in a meaningful example.\n\n- I think that the experimental setup for comparing \\ell_1 regularization to the proposed feature selection methods is not quite fair. In particular, the hyperparameters of the \"expert\" method are selected to minimize bias subject to the constraint that loss not increase. In contrast, the \\ell_1 regularization hyperparameter is selected purely to minimize bias. Instead, I would select the \\ell_1 regularization hyperparameter in the same way as the expert method, that is, to minimize bias subject to a constraint on loss. In general, I think hyperparameters should be selected using the same criterion for all methods.\n\n- The authors make a point of highlighting results on the \"prostate\" which showed a large increase in accuracy along with a large decrease in bias. I think the paper would benefit from some exploration of why this happened. 
Specifically, it would be valuable to answer the question: what are the properties of the \"prostate\" dataset that make this method so effective and are these properties general and identifiable a priori?\n\n- Section 2, paragraph 2, line 5: The stated goal in this paragraph is \"minimizing 0-1 loss on unknown future i.i.d. samples\". As stated in the introduction, this is, in fact, not the goal. The goal is to minimize loss while also minimizing bias. A larger criticism that I would have of this work is: if minimizing bias is a first order goal, then why are we using empirical risk minimization in the first place? Put another way, why use post-hoc correction for an objective function that does not match our actual stated goals rather than using an objective function that does?\n\nMinor comments:\n\n- Section 1, paragraph 4, line 2: \"Weak\" is not clearly defined here. Is it different than \"moderately-predictive\"?\n\n- Section 2.1, last paragraph, line 1: I understand what the authors are saying when they say \"Bias amplification is unavoidable\", but it is avoidable by changing our objective function. I would consider rewording this statement to something like \"Using an ERM objective will lead to bias amplification when the learning rule...\"\n\n- Equation 4: I believe h should be changed to f in this equation.\n\n- Equation 6: L is not defined anywhere.\n\n- Table 1: As defined in equation 1, B_D(h_s) should be between 0 and 1. Also, the accuracy results for the glioma dataset have the wrong result in bold.\n\n- Section 4, methodology paragraph, line 5: forthe --> for the\n\n- Section 5, paragraph 5, lines 5-6: Feature selection is not used \"only to improve accuracy\". For example, Kim, Shah, and Doshi-Valez (2015) use feature selection to improve interpretability (https://beenkim.github.io/papers/BKim2015NIPS.pdf)." ]
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "SkeSFyHzyN", "B1l3c_5e14", "S1g-livup7", "S1xctjvOaQ", "r1ei7VroCX", "S1x3UjW7RQ", "iclr_2019_S1ecm2C9K7", "ByelNylQ0m", "SJxsEivOaQ", "r1xgFKumnQ", "BkldIeZq27", "H1lSw16ch7", "iclr_2019_S1ecm2C9K7", "iclr_2019_S1ecm2C9K7" ]
iclr_2019_S1erHoR5t7
The relativistic discriminator: a key element missing from standard GAN
In the standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimates the probability that the given real data is more realistic than randomly sampled fake data. We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken to reach state-of-the-art performance by 400%), and 3) RaGANs are able to generate plausible high-resolution images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization. The code is freely available at https://github.com/AlexiaJM/RelativisticGAN.
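For readers who want to map the abstract above onto concrete objectives, the following is a minimal PyTorch-style sketch of the RSGAN and RaSGAN losses as we read them from the abstract: the discriminator acts on the difference between critic outputs, or on the difference against the average critic output of the opposite mini-batch. The function and variable names here are our own illustrative choices, not the authors' released implementation (which is available at the repository linked in the abstract).

import torch
import torch.nn.functional as F

def rsgan_losses(c_real, c_fake):
    # Hypothetical helper: c_real and c_fake are raw critic outputs C(x_r), C(x_f).
    # RSGAN: the discriminator estimates P(real is more realistic than fake)
    # via sigmoid(C(x_r) - C(x_f)); the generator reverses the comparison.
    d_loss = -F.logsigmoid(c_real - c_fake).mean()
    g_loss = -F.logsigmoid(c_fake - c_real).mean()
    return d_loss, g_loss

def rasgan_losses(c_real, c_fake):
    # Relativistic average variant: each sample is compared against the average
    # critic output of the opposite mini-batch rather than a single sample.
    d_loss = -(F.logsigmoid(c_real - c_fake.mean()).mean()
               + F.logsigmoid(-(c_fake - c_real.mean())).mean())
    g_loss = -(F.logsigmoid(c_fake - c_real.mean()).mean()
               + F.logsigmoid(-(c_real - c_fake.mean())).mean())
    return d_loss, g_loss

In this reading, replacing the sigmoid comparison with the identity function recovers the difference-of-critics form that the abstract relates to IPM-based GANs.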
accepted-poster-papers
All reviewers agree that the relativistic discriminator is an interesting idea, and a useful proposal to improve the stability and sample quality of GANs. In earlier drafts there were some clarity issues and missing details, but those have been fixed to the satisfaction of the reviewers. Both R1 and R3 expressed a desire for a more theoretical justification of why the relativistic discriminator should work better, but the empirical results are strong enough that this can be left for future work.
train
[ "HyxBUhUonX", "HJxZFCitTQ", "BJxS-DnO6m", "HJej_y3d6Q", "Bkl-U0WwT7", "SkxVOxiLpX", "ryeqtnJonQ", "SJxhZqNKn7" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "The paper describes an interesting tweak of the standard GAN model (inspired by IPM based GANs) where both the generator and the discriminator optimize relative realness (and fakeness) of the (real, fake) image pairs. The authors give some intuition for this tweak and ran experiments with CIFAR10 and CAT datasets. Different variants of the standard GAN and the new tweak were compared under the FID metric. The experimental setup and details are provided; and the code is made publicly available. \n\nThe results are good and their tweak seems to help in most of the cases. The paper, however, is not very well written and is not of publication quality. All the insights given in Section 3 are wrong, incomplete and unsatisfying. For example, in Section 3.4, the authors suggest that gradient dynamics of the tweaked model (with some unrealistic and infeasible assumptions) is same as that of an IPM-GAN and contribute to stability. This is wrong. Similar dynamics (even under the unrealistic assumption), does not imply similar performance. In fact, if one is trying to move towards IPM dynamics, then one should try to tweak an IPM model directly. Section 3.2 also seems wrong from my understanding of GAN training. Section 3.3 could also be improved. In fact, any explanations based on minimizing JS divergence is incomplete without answering as to why JS divergence minimizing is the best thing to do. \n\nThe author should have provided more comparison images to rule out the fact that the tweak is not overfitting for the FID metric. The benchmarks are also weak and more experiments need to be done (Eg, CelebA). ", "Dear Reviewer 1,\n\nThank you for your comments. \n\nWe hope that this message will find you well. We really took the time to review all your comments and in doing so we significantly improved the paper. As you suggested, one aspect (the gradient argument) was relying on unrealistic assumptions (that G would be trained to optimality). We believe that we were able to make the paper of much higher quality so please consider this response in your assessment of the paper.\n\nYou mention that the paper is “not well written” and a lot of your emphasis is on Section 3. To remedy your concerns, we spent a lot of time to rewrite parts of it in a way that is much clearer. Also, as suggested by Reviewer 3, we reviewed corrected spelling mistakes and removed contractions to make it less familiar.\n\nNote that we removed section 3.1 since it was not a real subsection.\n\nRegarding Section 3.2 (which is now section 3.1), we rewrote it because it was somewhat unclear after we removed so much text to fit the 8 pages limit.\n\nRegarding Section (3.3, which is now section 3.2), we clarified that JSD is not the only divergence where we see something like Figure 1a, this is true for most divergences. Thus, our explanation is not incomplete. See below:\n“Note that although specific to the JSD, similar dynamics are true for other divergences; when the divergence is maximal, D(x_r) and D(x_f) are very far from one another, but they converge to the same value as the divergence approach zero. Thus, this argument applies to other divergences.”\n\nRegarding Section (3.4, which is now section 3.3), we agree that one assumption was unrealistic. The problematic assumption was assuming that both D and G are trained to optimality. In practice, certain GANs (mostly IPM-based GANs) train D multiple times. However, no GANs to our knowledge train G multiple times since GANs do not converge when doing so. 
G can only take a small step at a time; otherwise, the generator will collapse early on. Note that Reviewer 3 suggested that we do some experiments regarding the gradient argument and we did (the full experiment described below is in Appendix E). We observed that we do not reach D(x_r)=0 using relativistic GANs when n_G = 1 (the number of generator updates per critic update). If using n_G = 2, it does sometimes happen that D(x_r)=0. Either way, we have that RSGAN significantly increases the proportion of low D(x_r) even if it rarely reaches 0. Thus, although we cannot make SGAN equivalent to IPM-based GANs, we can make them more similar. We rectify this in p4.\n\nTo respond to your comments about IPMs, we seek to find a GAN with a similar dynamic to IPM-based GANs without actually using IPMs. We want this because IPM-based GANs have an important drawback: they tend to be very computationally demanding (not always, but more often than not). In the introduction, we now mention that IPM-based GANs tend to take longer to train. Thus, finding an approach with similar stability but which requires less training time would be useful.\nThe added paragraph is:\n\" Note that although powerful, IPM-based GANs tend to be more computationally demanding than other GANs. Certain IPM-based GANs use a gradient penalty (e.g. WGAN-GP, Sobolev GAN) which is very computationally costly and most IPM-based GANs need more than one discriminator update per generator update (WGAN-GP requires at least 5 \\citep{WGAN-GP}). Assuming equal training time for D and G, every additional discriminator update increases training time by a significant 50\\%.”\n\nWe do provide more comparison images in the linked GitHub. However, the link (the footnote on p18) is hidden to retain anonymity for the review process. We transferred the GitHub to an anonymous version for the reviewers. Here is the full minibatch for the models generating 256x256 cats:\nhttps://github.com/anonymousconference/RGAN/tree/master/images/full_minibatch.\n\nWe would like to note that we ran additional stability analyses for CIFAR-10 in the appendix. We will consider using more benchmarks next time. We are very limited in our computing capability; thus, we decided to only use CIFAR-10 and CAT. Next time, we will consider using CAT and CelebA instead.\n", "Dear Reviewer 2, \n\nThank you for your comments.\n\nWe are in agreement about the fact that IPM-based GANs are different from Relativistic GANs. They are similar, yet different enough that they are not of the same class. Although our paper mentioned the similarity, it did not mention the difference, which could lead to readers thinking that IPM-Based GANs are a subset of Relativistic GANs (and we talked to people who thought this was the case after reading our paper). In section 4.2 p5, we now highlight better the differences and similarities:\n“If one uses the identity function (i.e., f_1(y)=g_2(y)=-y, f_2(y)=g_1(y)=y), this results in a degenerate case since there is no supremum/maximum. However, if one adds a constraint so that C(x_r)-C(x_f) is bounded, then there is a supremum and one arrives at IPM-based GANs. Thus, although different, IPM-based GANs share a very similar loss function focused on the difference in critics.”\n\nAs you suggested, we ran some additional experiments focused on testing the gradient argument (see Appendix E, p13-14). Although the gradient argument applies if we train G to optimality, in practice we do not train G to optimality. 
Thus, we observed that RSGAN/RaSGAN are not equivalent to IPM-based GANs in real-world scenarios. However, they act in a way that is somewhere in-between the dynamics of SGAN and IPM-based GANs. In addition to Appendix E, we now also mention that training G to optimality is an unrealistic assumption in Section 3.3 p4.\n\nThe main intuition that led to Relativistic average GANs was actually in our initial paper version, but it was removed due to space constraints (8 pages max). Given your comment, we decided to relegate it to the Appendix rather than completely removing it; it is now in Appendix B p11-12. Additionally, we added the following sentence at the beginning of section 4.3 p5:\n“The discriminator has a very different interpretation in SGAN compared to RSGAN. In SGAN, D(x) estimates the probability that x is real, while in RGANs, D(x_r,x_f) estimates the probability that x_r is more realistic than x_f. As a middle ground, we developed an alternative to the Relativistic Discriminator, which retains approximately the same interpretation as the discriminator in SGAN while still being relativistic.”\n\nThis explains why we created RaGANs, but it does not explain why they generally perform better than RGANs. We are still uncertain as to why RGANs perform less well than RaGANs, given that both approaches improve stability.\n", "Dear Reviewer 3, \n\nThank you for your comments.\n\nWe reviewed the paper to correct spelling mistakes and to make it less informal (by removing contractions). Following Reviewer 1's suggestions, we also revised Section 3 to improve the wording and explanations.\n\nThe code has already been released through GitHub. To retain anonymity, we re-uploaded the GitHub repository without any information relating to the authors: https://github.com/anonymousconference/RGAN.\n", "This comment appears to be written in bad faith to negatively influence the reviewers. Even your title is a fake question suggesting that relativistic GANs are not useful. If it wasn't your intention, then this shows a lack of judgment, as you could have sent me (the first author) an email as everyone does. I will answer you, but only once.\n\nFirst, the results, as you even show, point out that relativistic average variants are almost always better than their non-relativistic counterparts.\n\nBoth sets of hyper-parameters are stable; set 1 is the DCGAN hyper-parameters and is what most people use. What differentiates the second set of hyper-parameters is that it uses 5 discriminator updates per generator update (n_d = 5). These settings are needed to make WGAN-GP perform properly. However, in practice, very few people use n_d = 5 because it would take forever to train. Considering that researchers and AI engineers want to apply GANs to real-world hard problems in high dimensions, they cannot afford to wait 3 times longer (instead of 1 D update and 1 G update, we have 5 D updates and 1 G update; 3 times more) for the model to finish training and not even necessarily reach better results. 
This is why Self-attention GANs and BigGANs use Hinge loss with n_d = 1 or 2.\n\nYou fail to mention that our approach reached better results than WGAN-GP while using only n_d = 1 (thus 3 times faster).\n\nThe only scenario where we could not show better results when using relativistic GANs was in the challenging experiments with extremely unstable hyper-parameters (that no one uses in practice; see Appendix), in which Relativistic GANs didn't seem to perform better or worse on average.\n\nHowever, in very realistic and meaningful scenarios where one has high-resolution images and a small sample size (as companies generally do), Relativistic GANs perform amazingly well when non-relativistic GANs cannot even train past generating pure noise. This is why we were told by many engineers and practitioners that without relativistic GANs, they would not have been able to achieve their goals. See for example ESRGAN (https://github.com/xinntao/ESRGAN), which won a competition because of the use of Relativistic GANs. \n\nThis shows that yes, \"(average) relativistic really matter\".", "I was attracted by your work since you put your paper on arXiv (and code on GitHub).\nOne primary concern: although you presented quite a lot of experiments around relativistic loss functions, it seems hard to prove that the relativistic formulation helps generally.\n\nAs shown in Tab. 1, with the 1st hyper-parameters (which seem less stable than the 2nd ones), RSGAN+GP>LSGAN>RaLSGAN>RaSGAN>RSGAN>RaHingeGAN>SGAN>HingeGAN>WGAN+GP>RaSGAN+GP; this only demonstrates that R/Ra sometimes work well but sometimes don't, and when to apply the average to the loss function is really a mystery.\nIn a set of more stable hyper-parameters, you get a totally different order (where WGAN+GP is the best one).\n\nIt seems R/Ra is very sensitive to hyper-parameters; hence, in my reproduction, training of RGAN/RaGAN is very unstable and the results are worse than standard GAN(s).", "The paper proposes a “relativistic discriminator” which has the property that the probability of real data being real decreases as the probability of fake data being real increases. \n\nThe paper is very well-written. I particularly liked Section 3 which motivates the key idea through multiple viewpoints. The experiments show that the relativistic discriminator helps in some settings, although it does seem a bit sensitive to hyperparameters, architectures and datasets.\n\nI found the argument about connections to IPM-GANs a bit confusing. In a couple of places in Section 4, the relativistic loss is motivated by showing that the relativistic discriminator makes SGANs more like IPM-GANs. However, not all IPM-GANs are the same, e.g. the experiments show performance gaps between RSGAN, RaSGAN, and WGAN-GP, which suggests there could be other confounding factors. \n\nCould you devise experiments on synthetic datasets where the different hypotheses in Section 3 might lead to different solutions? Would be very interesting to see which hypothesis best explains why the relativistic discriminator helps!\n\nSection 4.3: How do you justify the averaging? While the relativistic GAN is well-explained, section 4.3 only briefly mentions the averaging idea. Given that averaging seems to help a lot in some of the experiments, it’d be great to see further discussion of why this helps.\n", "\nIn this work, the authors consider a variation of GAN in which the generator also simultaneously decreases the probability that real data is real. 
To include such a property, the authors propose a relativistic discriminator which estimates the probability that the given real data is more realistic than the fake data. Numerical experiments are performed to show that the proposed methods are effective, and the resulting GANs are relatively more stable and generate higher quality data samples than their non-relativistic counterparts.\n\nOverall the paper is well written and the rationale behind the proposed modification is clear. In particular, the authors use three different perspectives (the prior knowledge, the divergence minimization, and the gradient expressions) to explain what they thought was missing in the state-of-the-art. By proposing to utilize the information about both real and fake data in the discriminator definition, the authors have (to some extent) alleviated the above shortcoming of the state-of-the-art. Unfortunately, like almost all papers related to the field, there is no rigorous justification behind the proposed methods. \n\nThe English of the paper has to be significantly improved. For example, there are grammar errors like “this mean….” and “didn’t converge, …”.\n\nUnfortunately, the code of the paper is not released; I encourage the authors to release it. \n" ]
[ 6, -1, -1, -1, -1, -1, 6, 7 ]
[ 2, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_S1erHoR5t7", "HyxBUhUonX", "ryeqtnJonQ", "SJxhZqNKn7", "SkxVOxiLpX", "iclr_2019_S1erHoR5t7", "iclr_2019_S1erHoR5t7", "iclr_2019_S1erHoR5t7" ]
iclr_2019_S1fQSiCcYm
Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.
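To make the regularizer described above concrete, here is a minimal PyTorch-style sketch of the training losses as the abstract and the reviewer discussion below describe them: a critic is trained to recover the mixing coefficient from decoded interpolants, and the autoencoder is trained to fool it into predicting zero. The pairing of examples by flipping the batch, the flat latent code, and the default values of lam and gamma are illustrative assumptions on our part, not the authors' reference implementation (the discussion below notes that their code is publicly available).

import torch

def acai_losses(encode, decode, critic, x, lam=0.5, gamma=0.2):
    # Pair each example with another example from the same mini-batch
    # (pairing by flipping the batch is an illustrative choice).
    x2 = x.flip(0)
    z, z2 = encode(x), encode(x2)  # assumed flat codes of shape (N, d_z)
    x_hat = decode(z)

    # Decode a random convex combination of the two latent codes;
    # alpha is drawn from [0, 0.5) since interpolation is symmetric.
    alpha = 0.5 * torch.rand(x.size(0), 1, device=x.device)
    x_interp = decode(alpha * z + (1 - alpha) * z2)

    # The critic regresses the mixing coefficient from the interpolant; the
    # gamma term encourages it to output 0 on realistic, non-interpolated inputs.
    critic_loss = (((critic(x_interp.detach()) - alpha.squeeze(1)) ** 2).mean()
                   + (critic(gamma * x + (1 - gamma) * x_hat.detach()) ** 2).mean())

    # The autoencoder reconstructs x and tries to fool the critic into
    # predicting alpha = 0 for the interpolant.
    ae_loss = ((x - x_hat) ** 2).mean() + lam * (critic(x_interp) ** 2).mean()
    return ae_loss, critic_loss

The gamma term mirrors the second term of the paper's Equation 1, which, as the reviews below discuss, is not crucial but was reported to help stabilize adversarial training.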
accepted-poster-papers
The reviewers have reached a consensus that this paper is very interesting and adds insights into interpolation in autoencoders.
train
[ "H1egXXwd14", "HJeuQMG5n7", "H1l1iK8OyE", "B1gHm_rw14", "SyefNNz8kN", "S1eavUPSJE", "rJlGBqmBy4", "H1g-QchEJE", "H1gUty5Ey4", "BJgfoaTGkE", "SJgQLPtKTm", "r1g4OITYpX", "HkxOaxgtp7", "BJl0L1xt6m", "S1e8zyxY67", "ryx6nAJFaQ", "rJlQxJGchX", "B1gmB4Kv2m" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for engaging in discussion with us, suggesting additional experiments, and being open to updating your review.", "Main idea:\nThis paper investigates the desiderata for a successful interpolation:\n1) Interpolation looks realistic;\n2) The interpolation path is semantically smooth. \nAn adversarial regularizer is proposed to achieve 1), and in practice 2) may also satisfied. \nTo evaluate the method, they introduce a synthetic dataset with line images and compare with different autoencoder methods without the interpolation regularization.\nFor real data, they show that the interpolation regularized autoencoder (i.e. ACAI) leads to a better unsupervised representation.\n\nQuestions:\n1. Do we really need every interpolated point to be realistic (i.e. similar to a data point in the train-set)? I believe that there exists an interpolation between two totally different objects can never be observed. \n2. Do we need interpolation points to form a semantically smooth morphing? I guess this is a desired property for continuous generators, but it seems not necessary in general.\n3. The gamma in the 2nd term in (1) is confusing. If gamma = 1, I understand it forces to predict alpha = 0 since x is real. But if gamma < 1, the average in data space may be very blurry thus not realistic at all. How does gamma affect the optimization?\n4. ACAI looks very similar to LSGAN: by giving \"0\" label to real data and \"alpha\" label to fake data; in LSGAN, alpha = 1.\nHave you tested a LSGAN like regularizer? \n5. The baselines are not representative: since ACAI introduces an adversarial regularizer, you should compare with other GAN techniques induced regularizers, such as WGAN regularized autoencoder. \n\nAfter rebuttal:\nSee the long discussion below. I tend to believe that a good interpolation is not only a way to do sanity check but also a nice property to explicitly control in representation learning.", "I appreciate your quick experiments for addressing my concerns!\n\nNow I'm convinced ACAI is a quite interesting method: \nIt seems very important for the critic to see reconstructions and interpolants only. I tend to believe this somehow smooth the latent space, while LRAE doesn't make a full use of the encoder since by increasing d_z the performance only increases marginally. \n\nI believe ACAI deserves more visibility to our community. ", "We have now tested an autoencoder using this regularizer on our representation learning experiments. As a reminder, we first train an autoencoder on MNIST, SVHN, and CIFAR-10. We then use the latent codes as a learned representation for a single-layer classifier and report the accuracy of the classifier on the test set. Denoting the LSGAN Regularized Autoencoder as \"LRAE\", we obtained the following results (with Baseline and ACAI results included for reference):\n\n | Baseline | ACAI | LRAE\nMNIST, d_z = 32 | 94.90 | 98.25 | 95.66 \nMNIST, d_z = 256 | 93.94 | 99.00 | 96.94\nSVHN, d_z = 32 | 26.21 | 34.47 | 22.49\nSVHN, d_z = 256 | 22.74 | 85.14 | 30.77\nCIFAR-10, d_z = 256 | 47.92 | 52.77 | 47.99\nCIFAR-10, d_z = 1024 | 51.62 | 63.99 | 50.26\n\nThe LRAE improves over the baseline in some cases, but not consistently. Since the additional loss term/critic in LRAE is satisfied by making reconstructions more realistic, we hypothesize that it does not change the structure of the latent space. This would explain why it does not generally improve representation learning performance. 
In contrast, ACAI has the specific goal of modifying the structure of the latent space by making interpolants appear more like reconstructions. This results in improved representation learning performance.\n\nWe believe that these additional experiments further strengthen our claim that improving interpolation behavior can also produced a better learned representation. We also believe this addresses your main concern: \n> My main concern still remains: is the good representation coming from a GAN regularized autoencoder (since your interpolation formulation is very similar to that of LSGAN) or because of the improved interpolation (then it's your contribution)? \nThe results of these experiments definitively show that the improved performance comes from the specific form and goal of ACAI and not simply from that our approach uses a critic. We hope this convinces you of the merit of our submission.", "We have implemented the LSGAN regularized AE as you described and have (anonymously) pushed the code to https://github.com/anonymous-iclr-2019/acai-iclr-2019/blob/master/lrae.py \nThe loss is implemented here: https://github.com/anonymous-iclr-2019/acai-iclr-2019/blob/master/lrae.py#L61\n\nWe have run this autoencoder on the lines dataset. We tried lambda in {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0}. The best setting of lambda achieved a Mean Distance of 3.62e-3 and a Smoothness of 0.51. For high settings of lambda, the autoencoder collapses to producing a single output. For comparison, the baseline autoencoder (equivalent to setting lambda = 0) achieved Mean Distance of 6.88e-3 and a Smoothness of 0.44 (lower is better for both). It appears qualitatively and quantitatively that (on this task) including this additional loss term improves reconstruction quality (lowering the Mean Distance) but makes the interpolation quality slightly worse (lowering the Smoothness). The interpolations exhibit sudden jumps (similar to the VAE), hence the poor smoothness score. This follows our intuition - the regularizer you suggested will make reconstructions closer to real data (i.e., more realistic) but doesn't have a mechanism to improve interpolations or change the structure of the latent space. For comparison, the ACAI regularized autoencoder achieves a Mean Distance 0.24e-3 and a Smoothness of 0.10.\n\nWe will now run the autoencoder on our representation learning experiments on real datasets and will report back with results.", "Thanks very much for your clarifications. We now understand what you were describing as a baseline. A few comments on this -\n\n1) We will implement this approach and update this comment thread with the results. Unfortunately, we cannot update the paper draft anymore during the review period, so we will have to just copy results here and update the paper in subsequent drafts. We will also push the code for this approach to our anonymous repository so that you can verify that we are implementing what you've described.\n\n2) The difference between what you are describing and what we propose is that our critic only ever sees reconstructions and interpolants - it never sees real points. In what you described, we understand the goal to be to make reconstructions more realistic. We instead enforce that interpolants look like reconstructions, which we could expect to have a very different impact. 
Our paper is focused on improving interpolation quality rather than reconstruction quality - we do not expect our approach to improve reconstruction quality (compared to a baseline without the regularizer).\n\n3) We want to point out a distinction between pix2pix/cycleGAN and autoencoders. For completeness, we first define our understanding of pix2pix, CycleGAN, and an autoencoder below.\n- pix2pix consists of a generator which maps an input x to an output \\hat{y} = G(x). The discriminator tries to distinguish between pairs of (x, \\hat{y}) (generated pair) and (x, y) (real pairs).\n- CycleGAN contains two generators, one to map from x -> y and one for y -> x. Call the first one G(x) and the second F(y). Two discriminators D_x and D_y are trained to distinguish between outputs of F(x) vs. real x's and outputs of G(y) and real y's. The CycleGAN loss enforces that F(G(x)) = x and G(F(y)) = y and that G and F fool D_y and D_x respectively.\n- An autoencoder uses an encoder to map x to a latent z, and then from z back to the latent space x. It's typically trained to reconstruct x accurately. The latent z can be used for representation learning and semantic manipulation of data (such as interpolation). We introduce a regularizer which also encourages interpolants to appear similar to reconstructions.\nWe want to point out that neither pix2pix nor CycleGAN contain a latent code or an encoder/decoder, so we don't think of them as autoencoders. While CycleGAN does include a loss which encourages cycle-consistency, there is no latent code, and so there is no opportunity for interpolation or representation learning. We believe the primary similarity between CycleGAN and ACAI is that both use a discriminator to learn and minimize a divergence between implicit distributions, but to us this is a commonality to any model using a critic. We have some discussion of this in section 2.1 of our current draft, but we can expand this discussion to include a comparison to CycleGAN and pix2pix in future drafts.", "To improve AE by GAN is quite common due to CycleGAN (Zhu et al. 2017, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks) and pix2pix (Isola et al. 2016, Image-to-Image Translation with Conditional Adversarial Nets). A LSGAN regularized AE is almost equivalent to a CycleGAN except that you only do a half cycle here:\n\nGiven a AE: x -> z -> \\hat{x} with parameterization \\hat{x} = G(z), z = F(x). The critic is trained by minimizing\nL_critic = (D(x) - 0)^2 + (D(\\hat{x}) - 1)^2\nand the AE is trained by minimizing\nL_AE = || x - \\hat{x} || + lambda * (D(\\hat{x}) - 0)^2\n\nThis is fairly similar to your objective function in my opinion. So I was asking for a comparison.\n", "R3, thank you for noticing the comments were not public and making the discussion public.\n\n> I like your question: \"given an autoencoder which reconstructs well but interpolations poorly (our Baseline), can we improve the quality of its interpolations, and does improving the interpolation quality improve the representation learned?\"\n> This should be added to the paper with an emphasis.\n\nWe are glad this question clarified your understanding of the paper. Unfortunately, the time period for us to be able to make revisions to the paper is over, so we can't update the PDF. However, we can assure you we will include and emphasize (e.g. 
boldface) this text in an updated draft.\n\n> My main concern still remains: is the good representation coming from a GAN regularized autoencoder (since your interpolation formulation is very similar to that of LSGAN) or because of the improved interpolation (then it's your contribution)? \n> I found the experiments insufficient unless you compared with such a baseline (e.g. LSGAN regularized autoencoder) on representation learning.\n\nCan you describe in more detail what you mean by an LSGAN regularized autoencoder? Our model is quite different from a GAN, since it is an autoencoder and not a generative model (there is no way to draw samples from it). While it uses a critic and an adversarial learning process, it otherwise has very little in common with GANs. If you mean an autoencoder whose latent space is regularized by a critic, I think that baseline is represented by our inclusion of an AAE. If you have a specific model architecture or loss function in mind, we would be happy to include it in our experiments.", "Sorry that I didn't realize the discussion between the authors and me was private! I replied to AC's question which was private making everything private afterwards.\nI think it is worth an open discussion by more people. So I post the discussion here.\n\n*** By reviewer 3 ***\nThis is an interesting idea, but I'm still not sure its practicality for autoencoders. I will rephrase and elaborate my concerns:\n\n> R3: \"The baselines are not representative; you should compare with other GAN techniques induced regularizers, such as WGAN regularized autoencoder.\" \nA: \"WAE = AAE; Our paper includes the adversarial autoencoder as a baseline.\"\n\nI'm sorry the question was not clear. In fact, I meant to compare other GAN regularizers for the output of the decoder (AAE regularizes the code), which is quite common due to the popularity of CycleGAN, and it indeed improves significantly the quality of the outputs. \nAs I asked: should we \"regularize the interpolation or regularize the image of the decoder\"?\nI think the latter is the main desideratum for autoencoders. \nInterpolation regularizer is one way to achieve that; and the proposed ACAI in my opinion is a generalized LSGAN regularizer (may be the motivations are different). But since there is no comparison between that and ACAI, I'm not sure if this interpolation extension plays an important role. \n\n> Regarding the philosophy of interpolation: \n1) Interpolation looks realistic;\n2) The interpolation path is semantically smooth,\n\nI am not sure if there is a clear connection between a good interpolation and a good representation learning, since there are good discrete representation learning and as the authors mentioned the denoising AE could perform better despite producing bad interpolations.\nMore experiments are needed to gain a deeper understanding. The evaluation of interpolation on a toy dataset is far from satisfactory. \n\n\n*** By authors ***\nR3, thanks for clarifying your initial comments and for your additional discussion. We think there remains some misunderstanding about the scope and claims of our paper.\n\n1) Our paper focuses on autoencoders and representation learning, not generative models. GANs are generative models, and ACAI is a regularizer for autoencoders. While ACAI includes an adversarial training process and a \"critic\", it otherwise has very little in common with GANs and the resulting autoencoder is not a generative model. 
Similarly, while the loss function has some similarities with the LSGAN loss function (i.e., they both use a least-squared error loss), it has very little in common with an LSGAN because an LSGAN is a generative model and not an autoencoder or a technique for learning representations. We agree that it would be interesting to study the effect of regularizing the decoder of an autoencoder in similar ways to the generator in a GAN, but this is outside the scope of our paper. More specifically, GANs are not representation learning techniques, they are generative models; so, there is no way to test their representation learning capabilities (as is the focus of our paper).\n\n2) We do not claim that \"good interpolation implies a good representation and a good representation implies good interpolation\". In contrast, we ask \"given an autoencoder which reconstructs well but interpolations poorly (our Baseline), can we improve the quality of its interpolations, and does improving the interpolation quality improve the representation learned?\" Note that the first is a claim of causality, ours is a test of an intervention. As an aside, we are not the first to study or point out this potential connection; see e.g. \"Better Mixing via Deep Representations\" by Bengio et al.\n\nBased on this discussion, we have included some additional statements in our updated draft to make it clear what the scope and claims of our paper are. We hope this clarifies the intention of our paper.\n\n\n*** By authors ***\nR3, we believe have addressed your concerns and clarified some of your points. Do you have an updated impression of our paper? Thanks for your consideration.\n\n\n*** By reviewer 3: EXPERIMENT REQUEST ***\nI like your question: \"given an autoencoder which reconstructs well but interpolations poorly (our Baseline), can we improve the quality of its interpolations, and does improving the interpolation quality improve the representation learned?\"\nThis should be added to the paper with an emphasis.\n\nMy main concern still remains: is the good representation coming from a GAN regularized autoencoder (since your interpolation formulation is very similar to that of LSGAN) or because of the improved interpolation (then it's your contribution)? \nI found the experiments insufficient unless you compared with such a baseline (e.g. LSGAN regularized autoencoder) on representation learning.", "I would like to thank the authors for addressing my feedback. This comforts me in the rating I gave to their paper. \nGood luck.", "For example, if the proposed regularizer is applied to a VAE, does it help in getting better random samples by decoding z ~ N(0, 1)?\n\n", "Thanks for your question. In general we do not expect this regularizer to improve the sample quality of a given autoencoder, since the critic's primary objective is to discriminate between interpolants and reconstructions (not interpolants and \"real\" data). The goal instead is to take an autoencoder which already reconstructs well but interpolates poorly and improve the quality of the interpolations. The VAE typically has the opposite problem - it reconstructs poorly but interpolates smoothly. In other words, the latent space of the VAE is already \"continuous\" in some sense (due to the enforcement of the prior) but many regions in latent space map to \"unrealistic\" (i.e. blurry) outputs. So, we aren't sure whether our regularizer would improve VAE reconstructions. 
It would be pretty straightforward to try using our publicly-available code, though!", "Thanks to all of the reviewers for their feedback on our paper. We have addressed each reviewer's comments individually and have also uploaded an updated draft based on the suggestions. The changes include the following:\n- Clarified why smooth and realistic interpolations may potentially lead to better reconstructions in the introduction\n- Framed our objective as minimizing an adversarial divergence between reconstructions and interpolants\n- Clarified the second term of the critic loss involving \\gamma and gave additional justification for this term\n- Added comparison of the ACAI and LSGAN critic losses\n- Gave additional intuition as to how the critic could potentially regress \\alpha when only being shown a single image at a time\n- Referred to our lines images as \"greyscale\" rather than \"black and white\"\n- Noted that the AAE baseline we included has also subsequently been referred to as Wasserstein Autoencoder\n- Pointed out some cases where interpolations can be smooth and realistic despite interpolating between dissimilar points\n\nWe hope these changes address any concerns the reviewers have.", "Thanks for your review and thoughtful analysis. To address each of your cons in turn:\n\n> The interplay of the adversarial network (between AE and critic) isn’t very clear and can be improved.\n\nThe goal of the critic is to predict the interpolation mixing coefficient \\alpha; the goal of the autoencoder is to \"fool\" the critic into outputting \\alpha = 0. It can be useful to think of the critic as estimating a divergence between real and interpolated datapoints, and the autoencoder is trying to minimize this divergence. We have added some discussion of this to our paper.\n\n> Eq. 1, should x be x_1 or a new data other than x1 and x2?\n\nIt actually can be any real datapoint x - the second term can be computed separately from the first. We have clarified this in our updated draft.\n\n> The paper states that the 2nd term of Eq. 1 isn’t crucial. If x is a new data (other than x1 or x2), how can the critic infer \\alpha without a reference to x1 or x2?\n\nThe critic must infer \\alpha from common artifacts of interpolated datapoints alone. This is best illustrated in Figure 3(a) - note that as the interpolation morphs from one endpoint to the other, the image becomes dimmer and closer to a \"dot\" in the middle of the image. In this case, it is easy to infer \\alpha based on the length and brightness of the line. This is exactly the kind of behavior that ACAI seeks to discourage, and we find it's effective in practice. We have added some additional discussion of this point to our paper.\n\n> The paper states that “encouraging this behavior also produce semantically smooth interpolation …”. Besides the empirical evidences from data, it would be better to any some theoretical justifications.\n\nOur approach can be viewed in the framework of adversarial divergences, where the critic network is being used to estimate a divergence . Of course, the exact form of this divergence is not clear, but it does provide a connection to the GAN theory literature. We have made this connection explicit in our updated draft.", "Thanks for your review, we are glad you found the paper interesting and significant. 
To address your questions and comments:\n\n> For the critic Loss L_d in equation (1) , the authors mention that the \\gamma based second term (that should ensure that the critic outputs 0 for non-interpolated inputs and expose the critic to realistic data even if the AE reconstruction is poor) does not seem to be crucial in your approach but stabilized the adversarial training. Could you somehow quantify this. It seems like stability of the adversarial training should be paramount to your method to make sure the AE learns a better latent representation. This comment, even though I assume it well-founded, seems a bit of a contradiction.\n\nWe agree that this comment should be expanded on, and we have done so in our updated draft. To clarify, when we say it \"helped stabilize the adversarial learning process\", we mean that a) it allowed us to use the same value of \\lambda across all of our experiments and still achieve good results and b) it resulted in smooth convergence of the autoencoder's loss. We note that stability of the adversarial learning process was not an issue in general, in the sense that stability across runs was not an issue and our model never \"collapsed\" to a bad solution.\n\n> For the Lines synthetic data. It was chosen to use a 32x32 image size with 16 points length lines. This configuration does quantize directly the angles your measures can distinguish. Below a certain angle differences (or delta), 2 angles must have the same pixel representation, i.e. exact overlapping lines. My question is simple: What is the smallest angle you can use/distinguish or, how many exact unique lines can you have? \n\nOur code for synthesizing line images uses anti-aliasing, so for example a line with angle 0.3 and another with angle 0.300001 will be rendered differently. As a result, the number of unique lines is actually up to floating point precision. We think some confusion about this probably stems from the fact that we referred to the line images as \"black-and-white\"; we have updated the language in the paper to say \"grayscale\".", "Thanks for your thorough review and questions. We've answered your questions below and have updated our draft to clarify.\n\n> Do we really need every interpolated point to be realistic (i.e. similar to a data point in the train-set)? I believe that there exists an interpolation between two totally different objects can never be observed. \n\nWe are interested in latent spaces where interpolations produce realistic outputs across the entirety of the interpolation because this suggests some form of continuity in the latent space (as illustrated in FIgure 1). Our paper asks whether this property also results in an improved representation for downstream tasks. If an intermediate point was not realistic, the latent space might not have this property.\nThanks for pointing out that in some cases it's not obvious that there is a smooth and realistic path between two datapoints. We think two good examples of this are in Figure 6, bottom, where we interpolate between different MNIST digits. We find that even though there is no real digit which is at the midpoint of, for example, a 2 and a 9, the midpoint of ACAI's interpolation still appears realistic. We have added a note about this to our paper.\n\n> Do we need interpolation points to form a semantically smooth morphing? 
I guess this is a desired property for continuous generators, but it seems not necessary in general.\n\nWe agree that smoothness is not required for high-quality learned features -- for example, the Denoising Autoencoder fared well on our classification experiments despite producing poor interpolations. However, we are interested in the opposite, namely whether the ability to perform latent-space manipulations like interpolation suggest a better learned representation. We have added some clarification of this point in our paper.\n\n> The gamma in the 2nd term in (1) is confusing. If gamma = 1, I understand it forces to predict alpha = 0 since x is real. But if gamma < 1, the average in data space may be very blurry thus not realistic at all. How does gamma affect the optimization?\n\nNote \\hat{x} is a reconstruction of x, so in practice \\gamma*x + (1 − \\gamma)*\\hat{x} will be quite similar to x as long as \\hat{x} is a reasonable reconstruction. In other words, we are not interpolating between two totally different points, so typically the blurriness you might expect from pixel-space mixing won't be present. We have added some additional discussion of gamma and this term to our paper.\n\n> ACAI looks very similar to LSGAN: by giving \"0\" label to real data and \"alpha\" label to fake data; in LSGAN, alpha = 1. Have you tested a LSGAN like regularizer? \n\nYou're right that the LSGAN loss function and our regularization term are similar in the sense that both measure a squared error between the critic's output and a scalar. The difference is that the LSGAN is designed for use on a GAN-based generative model; our regularizer is designed as a regularizer for an autoencoder. As a result, the scalar in the LSGAN objective is a fixed hyperparameter whereas we regress the interpolation amount \\alpha. We added some discussion of the LSGAN objective to our paper.\n\n> The baselines are not representative: since ACAI introduces an adversarial regularizer, you should compare with other GAN techniques induced regularizers, such as WGAN regularized autoencoder. \n\nNote that the Wasserstein Autoencoder (WAE) is actually equivalent to an adversarial autoencoder when using a GAN loss; in the WAE paper [1] they write \"When c is the squared cost and D_Z is the GAN objective, WAE coincides with adversarial auto-encoders\". Our paper includes the adversarial autoencoder as a baseline (labeled AAE in tables and described in Section 3.2, paragraph 4). We added a citation to [1] to clarify this.\n\n[1] Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly and Bernhard Schoelkopf. \"Wasserstein Auto-Encoders\", ICLR 2017.\n", "Summary: The authors propose a new approach to encourage valid interpolation in Auto-Encoders (AE). It is based on a regularization procedure involving a critic network judging the realistic nature of reconstructed data point from its mixed latent representations by recovering the mixing coefficient. The authors show that this approach does indeed improve the quality of interpolated samples on few tasks. A synthetic tasks of lines interpolation (proposing new Mean Distance and Smoothness metric for this task), classification task (with a single-layer classifier) from the latent space representation and finally a clustering accuracy on the latent space. 
On the proposed regularization method seems to help significantly compared to commonly used AE architectures (Basic AE, Denoising AE, Variational AE, Adversarial AE and VQ-VAE).\n\nThis paper was a very interesting read, and the work seems to be of significance for the unsupervised learning community.\nIt was clearly written and conveys the contributions clearly and the experimental results and their interpretations seem valid.\n\nThe proposed approach of a critic based regularizer is a simple but seemingly important addition that contributes to improving interpolation in AE significantly and even show impact \"downstream tasks\" as the authors put it.\n\nFew comments/questions come to mind:\n\n- For the critic Loss L_d in equation (1) , the authors mention that the \\gamma based second term (that should ensure that the critic outputs 0 for non-interpolated inputs and expose the critic to realistic data even if the AE reconstruction is poor) does not seem to be crucial in your approach but stabilized the adversarial training. Could you somehow quantify this. It seems like stability of the adversarial training should be paramount to your method to make sure the AE learns a better latent representation. This comment, even though I assume it well-founded, seems a bit of a contradiction.\n\n- For the Lines synthetic data. It was chosen to use a 32x32 image size with 16 points length lines. This configuration does quantize directly the angles your measures can distinguish. Below a certain angle differences (or delta), 2 angles must have the same pixel representation, i.e. exact overlapping lines. My question is simple: What is the smallest angle you can use/distinguish or, how many exact unique lines can you have? \n\nOverall this is a good paper that deserves publications.", "This paper proposed an adversarially regularized AE algorithm that improve interpolation in latent space. Specifically, a critic is used to predict the interpolation weight \\alpha and encourage the interpolated images to be more realistic. The paper verified the method on a newly proposed synthetic line benchmark and on downstream classification and clustering tasks.\n\nPros:\n1.\tA novel algorithm that promotes the interpolation ability of AE\n2.\tA new synthesized line benchmark to verify the interpolation ability of different AE variants\n3.\tStrong results on downstream classification and clustering tasks\n\nCons: \n1.\tThe interplay of the adversarial network (between AE and critic) isn’t very clear and can be improved\n2.\tEq. 1, should x be x_1 or a new data other than x1 and x2?\n3.\tThe paper states that the 2nd term of Eq. 1 isn’t crucial. If x is a new data (other than x1 or x2), how can the critic infer \\alpha without a reference to x1 or x2?\n4.\tThe paper states that “encouraging this behavior also produce semantically smooth interpolation …”. Besides the empirical evidences from data, it would be better to any some theoretical justifications.\n" ]
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 9 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "H1l1iK8OyE", "iclr_2019_S1fQSiCcYm", "B1gHm_rw14", "SyefNNz8kN", "S1eavUPSJE", "rJlGBqmBy4", "H1g-QchEJE", "H1gUty5Ey4", "ryx6nAJFaQ", "S1e8zyxY67", "iclr_2019_S1fQSiCcYm", "SJgQLPtKTm", "iclr_2019_S1fQSiCcYm", "B1gmB4Kv2m", "rJlQxJGchX", "HJeuQMG5n7", "iclr_2019_S1fQSiCcYm", "iclr_2019_S1fQSiCcYm" ]
iclr_2019_S1fUpoR5FQ
Quasi-hyperbolic momentum and Adam for deep learning
Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely simple alteration of momentum SGD, averaging a plain SGD step with a momentum step. We describe numerous connections to and identities with other algorithms, and we characterize the set of two-state optimization algorithms that QHM can recover. Finally, we propose a QH variant of Adam called QHAdam, and we empirically demonstrate that our algorithms lead to significantly improved training in a variety of settings, including a new state-of-the-art result on WMT16 EN-DE. We hope that these empirical results, combined with the conceptual and practical simplicity of QHM and QHAdam, will spur interest from both practitioners and researchers. Code is immediately available.
accepted-poster-papers
This paper presents quasi-hyperbolic momentum, a generalization of Nesterov Accelerated Gradient. The method can be seen as adding an additional hyperparameter to NAG corresponding to the weighting of the direct gradient term in the update. The contribution is pretty simple, but the paper has good discussion of the relationships with other momentum methods, careful theoretical analysis, and fairly strong experimental results. All the reviewers believe this is a strong paper and should be accepted, and I concur.
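Concretely, the update described in the abstract and meta-review, a plain SGD step and a momentum step mixed by a weight nu on the momentum term, can be sketched as follows. This is a reader's sketch consistent with that description and with the beta and nu values quoted in the reviews below; the paper's Eqs. (1)-(4) remain the authoritative statement of QHM, and the lr/beta/nu defaults here are placeholders rather than recommendations.

```python
import numpy as np

def qhm_step(theta, buf, grad, lr=0.1, beta=0.999, nu=0.7):
    """One QHM step: a nu-weighted average of a plain SGD step and a momentum step.

    nu = 0 recovers plain SGD; nu = 1 recovers momentum with a (1 - beta)-damped
    buffer.
    """
    buf = beta * buf + (1.0 - beta) * grad                 # momentum buffer
    theta = theta - lr * ((1.0 - nu) * grad + nu * buf)    # mixed update
    return theta, buf

# Toy usage on f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself:
theta, buf = np.ones(3), np.zeros(3)
for _ in range(100):
    theta, buf = qhm_step(theta, buf, grad=theta)
print(theta)  # slowly shrinks toward 0
```

Setting nu strictly between 0 and 1 decouples the buffer's decay rate beta from the weight 1 - nu*beta placed on the current gradient, which is the extra degree of freedom the review thread below turns on.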
train
[ "Bklzj4_9n7", "BygWhwctA7", "ryxZv8qKAm", "ByeKaCR-07", "rygjk-aiaX", "H1eadSvGpX", "Bket2ivu6Q", "H1lwIdcuTQ", "BylCh-9OTm", "Hyxyo6TDT7", "B1x95wpPaQ", "ryg5hkUDTX", "rJe2cWND67", "rkxTldzvT7", "HkebiJLLT7", "SyeWFkLL67", "rJlL-18UTX", "SkezCLH-6m", "SklIQuS-Tm", "HklAFD5lpX", "BJxvbVG9h7" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "author", "public", "author", "public", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "Update after the author response: I am changing my rating from 6 to 7. The authors did a good job at clarifying where the gain might be coming from, and even though I maintain that decoupling the two variables is a simple modification, it leads to some valuable insights and good results which would of interest to the larger research community.\n\n-------\nIn this paper the authors propose simple modifications to SGD and Adam, called QH-variants, that can not only recover the “parent” method but a host of other optimization tricks that are widely used in the applied deep learning community. Furthermore, the resulting method achieves better performance on a suit of different tasks making it an appealing choice over the competing methods. \n\nTraining a DNN can be tricky and substantial efforts have been made to improve on the popular SGD baseline with the goal of making training faster or reaching a better minima of the loss surface. The paper introduces a very simple modification to existing algorithms with surprisingly promising results. For example, on the face of it, QHM which is the modification of SGD, is exactly like momentum except we replace \\beta in eq. 1 to \\nu*\\beta. Without any analysis, I am not sure how such a change leads to dramatic difference in performance like the first subfigure in Fig. 2. The authors say that the performance of SGD was similar to that of momentum, but performance of momentum with \\beta = 0.7*0.999 should be the same as that of QHM. So where is the gain coming from? What am I missing here? Outside of that, the results are impressive and the simplicity of the method quite appealing. The authors put in substantial efforts to run a large number of experiments and providing a lot of extra material in the appendix for those looking to dive into all the details which is appreciated. \n\n\nIn summary, there are a few results that I don’t quite follow, but the rest of the paper is well organized and the method shows promise in practice. My only concern is the incremental nature of the method, which is only partly offset by the good presentation. ", "Thanks for the follow-up, and we are glad that the reviewer enjoyed the paper!", "We thank the reviewer for their generous revisiting of their assessment! Our latest update to the manuscript addresses the reviewer's remaining concerns as follows:\n\n- We have explicitly stated in Section 5 of the main text that the stability properties of QHAdam discussed come from the tighter step size bound.\n- We have briefly elaborated on the need for large beta_2 in Appendix F.\n- We have moved the proof of Fact F.1 inline.", "Thanks for your clarifications. I am retaining my rating; I maintain that this is a good paper and endorse it for publication.", "#1 This looks good!\n\n#2 I think that the new additions to the paper do a great job of distinguishing QHM and AggMo while exposing their similarities. I am not sure that I agree with the two works being entirely orthogonal, but I think that the revision is more than fair in its comparison of the two.\n\n#3 I understand what you are saying. While you should weigh your presentation against the opinions of the many, as a reviewer it is my job to give feedback from my position. I still believe that the main paper struggles from some of the issues I presented in my initial review. However, the appendix does seem easier to read and the additions to the AccSGD section are good. 
Though we are left in disagreement, I think overall that my issue is a minor point which has been addressed to some extent.\n\nI don't have any specific recommendations beyond what I said in my initial review. However, I respect that my bias may be in conflict with other feedback you have received.\n\n#4 & 5 In my initial review I did not have time to explore Appendix F. I must confess that I still have not been able to cover all of the details. However, I am still not completely convinced by some aspects of QHAdam. In particular, some of the theoretical arguments in the appendix. Disproving the step size bound seems interesting, though I do not entirely understand the significance. It seems the key theoretical argument for QHAdam over Adam is the ability to recover a tighter step size bound. Perhaps this should be made clear in the main text (expanding on \"it is possible that setting v_2 < 1 can improve stability\"). Moreover, why is this method of reducing the step size more effective than simple reducing beta_2 in Adam? You claim that small beta_2 values can lead to slow convergence, how does reducing v_2 instead correct this?\n\nThank you for clarifying the empirical results. After taking a more careful look, I agree that QHAdam seems worthwhile to include. I am not familiar with NMT optimization, is the idea of a spiky gradient distribution well established? While I acknowledge QHAdam gives a significant win on this task, I am not yet convinced by the proposed explanation. However, I do not see this as a critical component of the paper.\n\nTo summarize, with your explanation here I am more convinced by the empirical results than on my first reading.\n\n#6 It is impossible to get a second first-impression, but I feel that in general the clarity has been improved. \n\n- Minor point: Why relegate Fact F1 proof to appendix G?\n\nThank you for addressing the points I raised. After reading your response, I am more convinced that the paper should be accepted and have thus increased my original score from 6 to 7.\n", "Edit: Following response, I have updated my score from 6 to 7.\n\nI completed this review as an emergency reviewer - meaning that I had little time to complete the review. I did not have time to cover all of the material in the lengthy appendix but hope that I explored the parts most relevant to my comments below.\n\nPaper summary: The paper introduces QHM, a simple variant of classical momentum which takes a weighted average of the momentum and gradient update. The authors comprehensively analyze the relationships between QHM and other momentum based optimization schemes. The authors present an empirical evaluation of QHM and QHAdam showing comparable performance with existing approaches.\n\nDetailed comments:\n\nI'll use CM to denote classical momentum, referred to as \"momentum\" in the paper.\n\n\n1) In the introduction, you reference gradient variance reduction as a motivation for QHM. But in Section 3 you defer readers to the appendix for the motivation of QHM. I think that the main paper should include a brief explanation of this motivation.\n\n2) The proposed QHM looks quite similar to a special case of Aggregated Momentum [1]. It seems that the key difference is with the use of damping but I suspect that this can be largely eliminated by using different learning rates for each velocity (as in Section 4 of [1]) and/or adopting damping in AggMo. In fact, Section 4.1 in your paper recovers Nesterov momentum in a very similar way. 
More generally, could one think of AggMo as a generalization of QHM? It averages plain SGD and several momentum steps on different time scales.\n\n3) I thought that some of the surprising relations to other momentum based optimizers was the most interesting part of the paper. However, I found the presentation a little difficult. There are many algorithms presented but none are explored fully in the main paper. I had to flick between the main paper and appendix to uncover the information I wanted most from the paper.\n\nMoreover, I found some of the arguments in the appendix a little tough to follow. For example, with AccSGD you should specify that epsilon is a constant typically chosen to be 0.7. When the correspondence to QHM is presented it is not obvious that QHM -> AccSGD but not the other way around. I would suggest that you present a few algorithms in greater detail, and list the other algorithms you explore at the end of Section 4 with pointers to the appendix.\n\n4) I am not sure that the QHAdam algorithm adds much to the paper. It is not explored theoretically and I found the empirical analysis fairly limited.\n\n5) In general, the empirical results support QHM as an improvement on SGD/NAG. But I have some (fairly minor) concerns.\n\n a) For Figure 1, it looks like QHM beats QHAdam on MLP-EMNIST. Why not show these on the same plot? This goes back to my point 4 - it does not look like QHAdam improves on QHM and so I am not sure why it is included. The idea of averaging gradients and momentum is general - why explore QHAdam in particular?\n\n b) For Figure 2, while I certainly appreciate the inclusion of error bars, they suggest that the performance of all methods are very similar. In Table 3, QH and the baselines are often not just within a standard deviation of eachother but also have very close means (relatively).\n\n6) I feel that some of the claims made in the paper are a little strong. E.g. \"our algorithms lead to significantly improved training in a variety of settings\". I felt that the evidence for this was lacking.\n\n\nOverall, I felt that the paper offered many interesting results but clarity could be improved. I have some questions about the empirical results but felt that the overall story was strong. I hope that the issues I presented above can be easily addressed by the authors.\n\n\nMinor comments:\n\n- I thought the use of bold text in the introduction was unnecessary\n- Some summary of the less common tasks in Table 2 should be given in the main paper\n\n\nClarity: I found the paper quite difficult to follow in places and found myself bouncing around the appendix frequently. While the writing is good I think that some light restructuring would improve the flow.\n\nSignificance: The paper presents a simple tweak to classical momentum but takes care to identify its relation to existing algorithms. The empirical results are not overwhelming but at least show QHM as competitive with CM on tasks and architecture where SGD is typically dominant.\n\nOriginality: To my knowledge, the paper presents original findings and places itself well amongst existing work.\n\n\nReferences:\n\n[1] Lucas et al. \"Aggregated Momentum: Stability Through Passive Damping\" https://arxiv.org/pdf/1804.00325.pdf", "Firstly, QHAdam is in Figure 5 -- specifically, (d), (e), (f), (j), (k), (l).\n\nThere is an intuitive advantage and a theoretically grounded advantage.\n\nThe intuitive advantage is that whatever benefits interpolation provides for non-adaptive methods (i.e. 
all the theoretical results for QHM) translate to adaptive methods. This is strictly intuitive for now -- we do not provide any theoretical demonstrations of accelerated QHAdam convergence, only empirical results.\n\nThe theoretically grounded advantage is stability. Adam's updates to the parameters can be much larger than can be dealt with during training. In fact, they can be much larger than previously believed -- in our manuscript, we disprove the step size bound claimed in [5] (the original Adam paper), which had been taken as fact in subsequent literature. QHAdam offers a way to mitigate this without simply cutting the learning rate and thus making training slower; this is discussed in much theoretical depth in Appendix F, and empirically validated primarily by the NMT case study.\n\n[5] Kingma & Ba, https://arxiv.org/abs/1412.6980", "We are aware of the observed poor generalization ability of Adam, and we note that quite a few manuscripts submitted to this conference seek to address this issue. This issue is out of scope for our manuscript, but we note that our results extend beyond the training dataset, as depicted by the figures.\n\nWe note that Figs 5abc and 5def use identical settings, as do 5ghi and 5jkl. For our rationale for not showing them side-by-side, please refer to our response to AnonReviewer2.\n\nWe (the authors) are not qualified to intelligently comment on or compare to AM methods, as we are only familiar in passing with the relevant modern literature. We suspect that you are in a much better position to speak to your question :)", "Dear Authors:\n Thank you for illustration. I need to read Appendix F in detail before making comments. As for Figure 5, I just wonder why Adam disappears in some of sub-figures like Figure 5(a),(b) and (c). Did Adam outperform QHAdam in these figures or some other reasons? I am just curious about it. One more recommendation is that authors can also show the performance of the QHMAdam on the test data. The reason is that the Adam works well in training data, but may generalize poorly on the test set. See Figure 2 in [1]. \n Let us explore a bit further. As you know, SGD is a dominant method for deep learning. However, recently, alternating minimization(AM) is also attracting researchers' interest because AM can avoid gradient explosion and provide convergence guarantees[2][3]. It is easy to implement AM in parallel, and it allows for non-differentiable activation functions like Relu. AM includes the Alternating Direction Method of Multipliers(ADMM) and Block Coordinate Descent(BCD). What is your opinion on the comparison between SGD and AM? \n Finally, thank you again for patient explanation and hope this paper will be accepted in the ICLR conference.\n Sincerely yours\n[1] Zhang, Guoqiang, and W. Bastiaan Kleijn. \"Training Deep Neural Networks via Optimization Over Graphs.\" 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. https://arxiv.org/pdf/1702.03380.pdf\n[2] Taylor, Gavin, et al. \"Training neural networks without gradients: A scalable admm approach.\" International Conference on Machine Learning. 2016.\n[3] Global Convergence in Deep Learning with Variable Splitting via the Kurdyka-Łojasiewicz Property. https://arxiv.org/abs/1803.00225", "Dear Authors:\n Thank you very much for providing useful learning materials. I really appreciate it. One question is about the comparison between QHMAdam and Adam. 
You have conducted various experiments to illustrate the effectiveness of QHAdam. Some figures (e.g. Figure 1) show that QHAdam outperformed Adam. But Adam did not appear in other figures (e.g. Figure 5). Could you please explain what the advantages of QHAdam over Adam are? Thank you very much.", "Demonstrations of critical point convergence for GD methods (in the general smooth+non-convex setting) are most likely absent from recent literature. We recommend various online course materials, such as [1] and [2].\n\nOf course, there are various restrictions one can impose for non-convex settings that will yield more interesting results (e.g. convergence to a local optimum with known rate) -- this is the focus of much recent literature! As a sampler, you might check out [3] and [4].\n\n[1] D. Papailiopoulos, http://papail.io/teaching/901/scribe_09.pdf\n[2] C. Sa, http://www.cs.cornell.edu/courses/cs6787/2017fa/Lecture7.pdf\n[3] Ge et al., https://arxiv.org/abs/1503.02101\n[4] Lee et al., https://arxiv.org/abs/1602.04915", "Thank you for the feedback. I appreciate it. I am interested in the global convergence of critical points. Could you then recommend some literature on critical point convergence of SGD in the nonconvex setting? I have gone through most SGD papers but did not find any literature related to this field. Thank you.", "Thanks for the interest in our paper!\n\nWe are not aware of any compelling convergence results for gradient descent and momentum (and other common algorithms) in a general non-convex setting — the best one can do is critical point convergence.\n\nAs QHM is a simple interpolation between the two, QHM similarly does not have any compelling convergence results in a general non-convex setting.", "Dear Authors:\n Thank you for presenting an interesting work on the optimization of deep learning problems. Could you please provide the convergence analysis of your proposed QHM in the nonconvex deep learning setting? This is because, for SGD-related methods such as Adam, convergence seems to be proved only in the convex case. Thank you very much.\n Sincerely yours", "\n# 4 & 5\n\nWe acknowledge that formal convergence analysis is not provided for QHAdam. Nevertheless, we believe that the contradiction of the widely-accepted Adam step size bound from Kingma & Ba (2015) and QHAdam's theoretically grounded ability to tighten this bound is of substantial interest. We believe that we have indeed demonstrated the empirical usefulness of this with the NMT case study. Increasing from 60% to 100% robustness is a large improvement, and an increase of 0.3 BLEU from an optimizer change alone is viewed as fairly significant in the NMT community.\n\nWith regard to the EMNIST classification parameter sweeps, we seek to compare our algorithms with their own vanilla counterparts (i.e. QHAdam > Adam), without meticulously tuning the QHAdam and QHM curves to look comparable with one another. We note that there is a certain non-standard LR schedule for (QH)Adam which surpasses the results shown for QHM. However, for the purposes of this study, we believe it best to stick to the standard Adam LR. More generally, we lament the trend of comparing adaptive and non-adaptive methods side-by-side when the terms of comparison are questionable at best. Fair comparison of adaptive and non-adaptive methods is likely a suitable subject for an entirely new paper.\n\nFinally, we wish to make a broader point regarding the “case study” experiments. 
Our primary goal in performing these case studies is to demonstrate practically realistic scenarios. Thus, we did not perform systematic sweeps to squeeze all possible performance out of the algorithms. Rather, we approached the case studies as we felt a practitioner would, relying on intuition to translate the vanilla optimizer to the QH optimizer. In that light, we believe that the case study results as a whole are compelling:\n- We observe *much* faster convergence in image recognition and marginal/neutral results in final validation accuracy. In general, one should not expect significant differences in final validation accuracy on the standard ResNet+ImageNet combo, assuming that the optimizer has trained the model to convergence.\n- We observe respectably lower perplexity in language modeling. Note that though the SD bars overlap here, the results are still statistically significant (at the 0.1% confidence level) since using 10 seeds results in a reduced standard error.\n- We observe neutral results in reinforcement learning.\n- We observe notable robustness and performance improvements in NMT, as discussed above. The graph is primarily for illustrative purposes, since the metric of interest is BLEU (which is only highlighted in the table).\n\n# 6/Overall\n\nWe hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the incredibly thorough commentary of our manuscript.\n", "We thank the reviewer for their encouraging and constructive feedback.\n\nThe reviewer has offered a large number of insightful comments, which is particularly appreciated given the exigence of the review request. For convenience, we address them by number:\n\n# 1\n\nWe concur with the reviewer's suggestion and have updated Section 3 of the manuscript to provide this brief summary.\n\n# 2\n\nWe appreciate the pointer to the AggMo algorithm (Lucas et al., 2018), which proposes the additive use of many momentum buffers with different values of beta (the momentum constant). We had tried this in independent preliminary experimentation (toward analyzing many-state optimization), and we found that using multiple momentum buffers yields negligible value over using a single slow-decaying momentum buffer and setting an appropriate immediate discount (i.e. QHM with high beta and appropriate nu). Given the added costs and complexity of using multiple momentum buffers, we decided against discussing many-state optimization.\n\nWe believe that the two papers are largely orthogonal, as one paper focuses in depth on two-state optimization, while the other more broadly explores many-state optimization. However, in light of AggMo's existence, we believe it is valuable to comment on the relationship between QHM and AggMo. Specifically, we have updated the manuscript as follows:\n- In section 4.5, we briefly connect QHM to AggMo.\n- In Appendix H, we provide a supplemental discussion and comparison with AggMo. Specifically, we perform the autoencoder study from Appendix D.1 of Lucas et al. (2018) with both algorithms, using the EMNIST dataset. In short, we believe that the results of this comparison support the above notion from our preliminary experimentation.\n\n# 3\n\nWe appreciate the feedback on the presentation of Section 4. 
We have attempted to cater to a diverse audience across the practitioner-theorist spectrum, and the strongest feedback we received pre-submission is that many readers on both ends of the spectrum appreciate to have in the main text only:\n- The analytical form (i.e. update rule) of the discussed algorithm, and brief efficiency discussion\n- The succinct “upshot” as it relates to QHM (i.e. narrative summary of the recovery result)\n\nand for the mathematical derivations and specific recovery parameterizations to be relegated to the appendix. In particular, we have received feedback that the matrix machinery required for most of the recoveries detracts from the main text, and any detailed derivations depend on this machinery.\n\nIn recognition of the reviewer's concerns, we have updated Appendix C of the manuscript to be more structured and self-contained (essentially, a more detailed version of Sections 4.2 through 4.4), so that the more theory-minded audience might have an easier time reading without having to switch back-and-forth between Appendix C and the main text.\n\nWe would very much welcome suggestions on what specific facts merit inclusion in the main paper, besides the analytical forms of the update rules and narrative relation to QHM.\n\nRegarding AccSGD specifically, we have updated the manuscript to more clearly explain the one-way nonrecovery (both in the main text and in the appendix). We believe that our current method of showing this nonrecovery (via NAG) is the most accessible, while revealing a useful erratum in the prior work of Kidambi et al. (2018).\n", "We thank the reviewer for their encouraging and constructive feedback. We are heartened that the reviewer has found the algorithms useful for their own applications!\n\n# Using multiple momentum buffers\n\nWe appreciate the pointer to the AggMo algorithm (Lucas et al., 2018), which proposes the additive use of many momentum buffers with different values of beta (the momentum constant). We had tried this in independent preliminary experimentation (toward analyzing many-state optimization), and we found that using multiple momentum buffers yields negligible value over using a single slow-decaying momentum buffer and setting an appropriate immediate discount (i.e. QHM with high beta and appropriate nu). Given the added costs and complexity of using multiple momentum buffers, we decided against discussing many-state optimization.\n\nWe believe that the two papers are largely orthogonal, as one paper focuses in depth on two-state optimization, while the other more broadly explores many-state optimization. However, in light of AggMo's existence, we believe it is valuable to comment on the relationship between QHM and AggMo. Specifically, we have updated the manuscript as follows:\n- In section 4.5, we briefly connect QHM to AggMo.\n- In Appendix H, we provide a supplemental discussion and comparison with AggMo. Specifically, we perform the autoencoder study from Appendix D.1 of Lucas et al. (2018) with both algorithms, using the EMNIST dataset. In short, we believe that the results of this comparison support the above notion from our preliminary experimentation.\n", "We thank the reviewer for their encouraging and constructive feedback.\n\n# QHM vs. 
momentum\n\nWe appreciate the reviewer raising this potential point of confusion, and we would like to emphasize that replacing beta with (nu * beta) in momentum *does not* recover QHM.\n\nAnalytically, we note that replacing beta with (nu * beta) in Equation 2 propagates nu into the momentum buffer (g_t) via Equation 1, ultimately changing the decay rate of the momentum buffer from beta to (nu * beta). Intuitively, we note that QHM constitutes the *complete* decoupling of the momentum buffer's decay rate (beta) from the current gradient's contribution to the update rule (1 - nu * beta). In contrast, momentum tightly couples the decay rate (beta) and the current gradient's contribution (1 - beta).\n\nIt is crucial to understand this difference as it reveals QHM's added expressivity over momentum, and we concur that more explicit discussion of this difference would be beneficial. We have updated the manuscript as follows:\n- Appendix A.8 analytically demonstrates the difference between the two, in terms of the weight on each past gradient.\n- Section 3 of the main text briefly and intuitively describes the added expressive power of QHM over momentum, in line with the above explanation.\n\n# Incrementality\n\nWe appreciate the reviewer's honest assessment of the incrementality of the approach, but respectfully disagree. In the interest of accessibility, we have intentionally presented the simplest possible exposition of the algorithm, rather than the various more complex formulations possible with our original motivation. On first principles, we believe that this simplicity is a benefit rather than a disadvantage. Yet this simplicity belies both theoretical and practical power. Theoretically, we have demonstrated that many powerful but opaque optimization algorithms (essentially, all two-state linear first-order optimizers) boil down to decoupling the momentum buffer's decay rate from the current gradient's weight, and we have presented the most direct and efficient method to do so. And practically, we have demonstrated improvements that are at least as significant as the improvement between plain SGD and momentum/NAG.\n\nAlthough we wish to err toward understating rather than overstating our contributions, we would be deeply appreciative of any suggestions the reviewer could offer to improve the articulation of these points in the manuscript.", "Thanks for the interest in our paper!\n\nIn short, momentum cannot recover QHM via this rewriting. Please refer to the discussion thread under AnonReviewer3 for further details.", "Hi,\n\nI'm confused by the update rule of QHM. What's the difference between QHM and plain momentum method? From my perspective, we can rewrite eqn (3) and (4) with eqn (1) and (2) but change *beta* to *v beta*. If so, what's the advantage of QHM as we can always tune *beta*.", "The authors introduce a class of quasi-hyperbolic algorithms that mix SGD with SGDM (or similar with Adam) and show improved empirical results. They also prove theoretical convergence of the methods and motivate the design well. The paper is well-written and contained the necessary references. Although I did feel that the authors could have better compared their method against the recent AggMom (Aggregated Momentum: Stability Through Passive Damping by Lucas et al.). Seems like there are a few similarities there. \n\nI enjoyed reading this paper and endorse it for acceptance. The theoretical results presented and easy to follow and state the assumptions clearly. 
I appreciated the fact that the authors aimed to keep the paper self-contained in its theory. The numerical experiments are thorough and fair. The authors test the algorithms on an extremely wide set of problems ranging from image recognition (including CIFAR and ImageNet), natural language processing (including the state-of-the-art machine translation model), and reinforcement learning (including MuJoCo). I have not seen such a wide comparison in any paper proposing training algorithms before. Further, the numerical experiments are well-designed and also fair. The hyperparameters are chosen carefully, and both training and validation errors are presented. I also appreciate that the authors made the code available during the reviewing phase. Out of curiosity, I ran the code on some of my workflows and found that there was some improvement in performance as well. \n\n\n" ]
[ 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2019_S1fUpoR5FQ", "ByeKaCR-07", "rygjk-aiaX", "rJlL-18UTX", "HkebiJLLT7", "iclr_2019_S1fUpoR5FQ", "Hyxyo6TDT7", "BylCh-9OTm", "Bket2ivu6Q", "B1x95wpPaQ", "ryg5hkUDTX", "rJe2cWND67", "rkxTldzvT7", "iclr_2019_S1fUpoR5FQ", "SyeWFkLL67", "H1eadSvGpX", "BJxvbVG9h7", "Bklzj4_9n7", "HklAFD5lpX", "iclr_2019_S1fUpoR5FQ", "iclr_2019_S1fUpoR5FQ" ]