Dataset schema:

  paper_id            string   (lengths 19–21)
  paper_title         string   (lengths 8–170)
  paper_abstract      string   (lengths 8–5.01k)
  paper_acceptance    string   (18 classes)
  meta_review         string   (lengths 29–10k)
  label               string   (3 classes)
  review_ids          sequence
  review_writers      sequence
  review_contents     sequence
  review_ratings      sequence
  review_confidences  sequence
  review_reply_tos    sequence
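A minimal sketch of how rows with this schema could be loaded and inspected; the usage is the standard Hugging Face `datasets` API, but the dataset identifier below is a placeholder, not the actual repository name:

```python
from datasets import load_dataset

# "user/iclr-meta-reviews" is a hypothetical placeholder path.
ds = load_dataset("user/iclr-meta-reviews", split="train")

for row in ds.select(range(2)):
    print(row["paper_id"], "-", row["paper_acceptance"])
    # The review_* columns are parallel sequences: index i of each list
    # describes the same thread item (review, author reply, or comment).
    for rid, writer, rating in zip(row["review_ids"],
                                   row["review_writers"],
                                   row["review_ratings"]):
        if rating != -1:  # -1 marks replies/comments that carry no score
            print(f"  review {rid} by {writer}: rating {rating}")
```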
iclr_2018_SkF2D7g0b
Exploring the Space of Black-box Attacks on Deep Neural Networks
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses.
rejected-papers
The paper explores an increasingly important question, especially in showing the attack on existing APIs. The update to the paper has also improved it, but the paper is still not as impactful as it could be and needs a much more comprehensive analysis to correctly appreciate its benefits and role.
train
[ "Syh_3H0VM", "Hk96V1clf", "rJGGOrcxz", "B10Nn-jlf", "Hkx9uTl4G", "BkWKdLPGM", "H1ddlLvzf", "BJ0xxIwMM", "H1FQ2HPfG", "ry7lNQexM", "ryDHmbY1G", "SkYyvAX1G" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "Thank you for your revised review. Regarding the higher value of distortion for SPSA, we would like to refer you to the second column of Table 2 titled 'Attack success'. The numbers in parentheses in this column provide the average distortion value for each type of attack. Since the earlier table (Table 1) of results had mentioned that numbers in parentheses represent average distortion, we omitted that detail here. We apologize for the confusion.", "This paper generates adversarial examples using the fast gradient sign (FGS) and iterated fast gradient sign (IFGS) methods, but replacing the gradient computation with finite differences or another gradient approximation method. Since finite differences is expensive in high dimensions, the authors propose using directional derivatives based on random feature groupings or PCA. \n\nThis paper would be much stronger if it surveyed a wider variety of gradient-free optimization methods. Notably, there's two important black-box optimization baselines that were not included: simultaneous perturbation stochastic approximation ( https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation), which avoids computing the gradient explicitly, and evolutionary strategies ( https://blog.openai.com/evolution-strategies/ ), a similar method that uses several random directions to estimate a better descent direction.\n\nThe gradient approximation methods proposed in this paper may or may not be better than SPSA or ES. Without a direct comparison, it's hard to know. Thus, the main contribution of this paper is in demonstrating that gradient approximation methods are sufficient for generating good adversarial attacks and applying those attacks to Clarifai models. That's interesting and useful to know, but is still a relatively small contribution, making this paper borderline. I lean towards rejection, since the paper proposes new methods without comparing to or even mentioning well-known alternatives.\n\nREVISION: Thank you for your response! The additional material does strengthen the paper. There is now some discussion of how Chen et al. differs, and an explicit comparison to SPSA and PSO. I think there are some interesting results here, including attacks on Clarifai. However, the additional evaluations are not thorough. This is understandable (given the limited time frame), but unfortunate. SPSA is only evaluated on MNIST, and while the paper claims its distortion is greater, this is never shown explicitly (or was too difficult for me to find, even when searching through the revised paper). Chen et al. is only compared in terms of time, not on success rate, distortion, or number of queries. These timing results aren't necessarily comparable, since the experiments were done under different conditions. Overall, the new experiments and discussion are a step towards a thorough analysis of zero-order attacks, but they're not there yet. I've increased my rating from 4 to 5, but this is still below the bar for me.", "\nQuality: The paper studies an important problem given that public ML APIs are now becoming available. More specifically, the authors study black-box attacks based on gradient estimation. This means that adversaries have no access to the underlying model.\n\nClarity: The paper is clear and well-written. Some parts are a bit redundant, so more space of the main body of the paper could be devoted information provided in the appendix and would help with the flow (e.g., description of the models A, B, C; logit-based loss; etc.). 
This would also provide room for discussing the targeted attacks and the transferability-based attacks.\n\nOriginality: While black-box attacks are of greater interest than white-box attacks, I found the case considered here of modest interest. The assumption that the loss would be known, but not the gradient, is relatively narrow. And why is it not possible to compute the gradient exactly in this case? Also, it was not clear how \\delta can be chosen in practice to increase the performance of the attack. Could the authors comment on that?\n\nSignificance: The results in the paper are encouraging, but it is not clear whether the setting is realistic. The main weakness of this paper is that it does not state the assumptions made and under which conditions these attacks are valid. Those have to be deduced from the main text, and not all are clear and many questions remain, making it difficult to see when such an attack is a risk and what is the actual experimental set-up. For example, what does it mean that attackers have access to the training set and when does that occur? Is it assumed that the API uses the adversarial example for training as well or not? How are the surrogate models trained and what are they trying to optimize and/or what do they match? In which situations do attackers have access to the loss, but not the gradient? How sensitive are the results to a loss mismatch? Finally, I do not understand the performance metric proposed by the authors. It is always possible to get an arbitrarily high success rate unless one fixes the distortion. What would be the success rate if the distortion was equal to the distortion of white-box attacks? And how sensitive are the results to \\epsilon (and how can it be chosen by an attacker in practice)?\n", "The authors consider new attacks for generating adversarial samples against neural networks. In particular, they are interested in approximating gradient-based white-box attacks such as FGSM in a black-box setting by estimating gradients from queries to the classifier. They assume that the attacker is able to query, for any example x, the vector of probabilities p(x) corresponding to each class.\n\nGiven such query access, it’s trivial to estimate the gradients of p using finite differences. As a consequence one can implement FGSM using these estimates assuming cross-entropy loss, as well as a logit-based loss. They consider both iterative and single-step FGSM attacks in the targeted (i.e. the adversary’s goal is to switch the example’s label to a specific alternative label) and un-targeted settings (any mislabelling is a success). They compare themselves to transfer black-box attacks, where the adversary trains a proxy model and generates the adversarial sample by running a white-box attack on that model. For a number of target classifiers on both MNIST and CIFAR-10, they show that these attacks outperform the transfer-based attacks, and are comparable to white-box attacks, while maintaining low distortion on the attack samples. \n\nOne drawback of estimating gradients using finite differences is that the number of queries required scales with the dimensionality of the examples, which can be prohibitive in the case of images. They therefore describe two practical approaches for query reduction — one based on random feature grouping, and the other on PCA (which requires access to training data). 
They once again demonstrate the effectiveness of these methods across a number of models and datasets, including models deploying adversarially trained defenses. \n\nFinally, they demonstrate compelling real-world deployment against Clarifai classification models designed to flag “Not Safe for Work” content. \n\nOverall, the paper provides a very thorough experimental examination of a practical black-box attack that can be deployed against real-world systems. While there are some similarities with Chen et al. with respect to utilizing finite-differences to estimate gradients, I believe the work is still valuable for its very thorough experimental verification, as well as the practicality of their methods. The authors may want to be more explicit about their claim in the Related Work section that the running time of their attack is “40x” less than that of Chen et al. While this is believable, there is no running time comparison in the body of the paper. ", "Thank you for your clarifications and changes. Most of my concerns were addressed. I appreciated the comparison to similar work [1] and the additional experiments comparing to Chen et al. Overall this is an important problem, so I am happy to bump up my score.", "We have uploaded a revised version of the paper. In particular, the revised version contains the following changes:\n\n1. Shortened the Introduction by removing the bulleted list of contributions\n2. Clarified the practical relevance of situations where adversaries can have access to the loss of a model but not the gradient in the first paragraph of the Introduction\n3. Provided a quantitative comparison with the running time of the closest related work by Chen et al. in Appendix I.6\n4. Updated the anonymous link to more Clarifai model attack samples. It now directs to a zipped folder of attack images\n5. Added Section 3.4 and Table 2 which provide both a quantitative and qualitative comparison of a range of gradient-free optimization methods to generate adversarial samples. In particular, we compare Gradient Estimation with Particle Swarm Optimization and Simultaneous Perturbation Stochastic Approximation\n6. Description and results for the attacks on defenses and on the real-world Clarifai models are now in two separate sections for clarity\n\nA number of other minor writing and presentation changes have also been made to improve the flow of the paper. We welcome further comments!", "We thank the reviewer for the insightful suggestions. We did experiment with other gradient-free optimization methods including Particle Swarm Optimization (PSO) and Simultaneous Perturbation Stochastic Approximation (SPSA). We found that the Gradient Estimation method achieves the highest attack success rates and, due to the limitation of space, these results were not included in the initial submitted version. In the updated version, we have added Section 3.4 which contains a discussion of these two methods. Table 2 contains a quantitative comparative evaluation of all the types of query-based black-box attacks we tried.\n\nWe found Particle Swarm Optimization to perform very poorly as it ran much slower than the other methods without achieving success rates comparable to the other methods. SPSA ran faster and was able to achieve attack success rates matching those of the Gradient Estimation attacks. However, we had to experiment with a large number of parameters in order for SPSA to be effective, unlike Gradient Estimation which required very little parameter tuning. 
Further, the adversarial samples found using SPSA had almost twice the distortion of those found using Gradient Estimation based attacks. ", "We thank the reviewer for the nice suggestions. We provide details for aspects of the paper the reviewer found unclear and update the paper for clarity.\n\nIn the black-box threat model considered in our paper, the model is not known, and without access to the model, automatic differentiation methods cannot be used to obtain the true gradient by backpropagation. Attackers would have access only to the loss and not the gradient of the model if they were able to query the target model for its classification output, consisting of class probabilities, but did not have access to the model itself. Both the cross-entropy and logit losses we consider can be easily computed from the output probabilities, which are provided for deployed classification systems by a number of MLaaS companies. \nSince an estimate of the true gradient of a function can be calculated with access to just the function values, we compute the loss from the output probabilities and use this to estimate the gradient of the loss. The estimate of the gradient of the loss is then used to compute an adversarial perturbation. This is explained in detail in Section 3.1.1. To summarize, computing the true gradient of the loss needs access to the entire model, while an estimate of the gradient of the loss can be computed with just access to model probabilities. \n\nWith regards to the practicality of our attacks and settings in which they could represent a threat, we emphasize that we demonstrate a real-world attack on Clarifai's Content Moderation and NSFW models. These are known to be deployed by Clarifai's clients. Using the access to class probabilities provided through a public API, we were able to create adversarial inputs with barely perceptible perturbations. An example is given in Figure 1 of the paper.\n\nTo choose \\delta, we performed a line search over a range of \\delta values in order to estimate the gradient. Although a small value of \\delta would give the best approximation, in reality, using a very small value of \\delta ends up with a bad approximation of the gradient, because the value of the cross-entropy loss does not change enough to be able to estimate the gradient at all. The logit loss is much more sensitive, and thus accurate estimates of it can be found using a smaller \\delta.\n\nWe clarify the threat model in Section 3 on Page 5 in the updated version. The only assumption we needed to perform a large majority of our attacks is access to the target model’s class probabilities. Only the PCA-based query reduction technique needs the extra assumption of access to a dataset representative of the training data.\n\nWe do not make any assumptions on the training data of the public API, since we do not know what the model or data behind the API is. However, we were able to attack it in spite of this lack of knowledge. \n\nThe models that use adversarial training were trained by us to evaluate the robustness of the proposed attack further. Even for our local models, we only assume black-box access, and we attack them without knowing the true gradient. For clarity’s sake, we have separated these two sets of experiments and clarified it in the updated version. The surrogate models used in the transferability attack are standard CNNs trained with the objective of minimizing the loss on the training set. This is the typical attack model assumed for transferability. 
In the case of both MNIST and CIFAR-10, the surrogate models used to transfer samples achieve classification accuracies close to those of the model being attacked. The architecture and accuracy on benign data of all models are given in Appendix C.2.\n \nWe do evaluate the attack success rate under a constraint on the maximum distortion possible. The maximum possible distortion is fixed by an L_{\\infty} constraint on each pixel, as is commonly done in the literature. However, most attacks don't perturb all the pixels, leading to a lower distortion than the maximum possible. Figure 4 in the Appendix shows some representative adversarial samples generated from our attacks with an L_{\\infty} constraint of 0.3 for the MNIST data and 16 for the CIFAR-10 data. For comparisons with white-box attacks given fixed distortion, we can compare the distortion for our attacks in Table 1 and that for white-box attacks in Table 7 in the Appendix. The distortion levels match almost exactly; thus, the success rates are eminently comparable.\n \nThe sensitivity of the attack success to \\epsilon is shown in Figure 2 as well as in Figure 5 in the Appendix. The attack success increases as \\epsilon increases. Figure 4 in the Appendix demonstrates that perturbation values of 0.3 for MNIST and 16 for CIFAR-10 do not cause significant difficulty in perception for humans, and can thus be safely chosen by an attacker. In the attack on Clarifai shown in Figure 1, we show that even an \\epsilon value of 32 can be used safely by an attacker. We observe high attack success rates even at these perturbation limits.", "Hi, so it seems that the claimed novelty here is a way of reducing the number of queries using finite differences; however, the ZOO attack also uses some novel techniques to reduce queries. It would be extremely useful to provide a side-by-side attack comparison with ZOO so we can infer which attack is more effective under various settings. You can find their code here https://github.com/huanzhang12/ZOO-Attack ", "The concurrent work from Chen et al. also proposes a black-box attack that uses queries from a model that exposes confidence scores. As you rightly note, both ZOO and our proposed methods do have in common that they use finite differences to estimate the derivative of a function. This shared part is a well-known method, for which we provide a citation (Spall, 2005).\n\nBeyond that, the attack methods proceed differently. We propose attacks that compute an adversarial perturbation, approximating FGSM and iterative FGS. On the other hand, ZOO approximates the Adam optimizer, while trying to perform coordinate descent on the loss function proposed by Carlini and Wagner (2016).\n\nWe further provide new ways of reducing the number of queries required. Thus, our claim to novelty is not in using finite differences to estimate the gradient of a model, but in the idea of estimating the gradient in a number of new query-reduced ways. 
As an additional contribution, our work evaluates new attacks that use these estimates, as well as known black-box attacks.\n\nBecause of the relevance of Chen et al.’s work to the threat model, we will add a clarification in the “Related Work” section of the Introduction, as well as in Section 3, noting the fact that Chen et al. used the finite difference technique in a similar setting.\n\nThank you for the comment!\n", "The submission cites the paper by Chen et al. (2017), which proposes \"ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models\". However, the submission goes on to claim the method of finite differences as a novel contribution, even though the cited paper by Chen et al. has already proposed it.\n\nThe \"Gradient Estimation black-box attack based on the method of finite differences\" presented in Section 3 and Section 3.1 of this submission, using a \"two-sided approximation of the gradient\", is identical to what is proposed in ZOO, which uses the \"symmetric difference quotient to estimate the gradient\" (Chen et al. 2017, equation 6).\n" ]
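The thread above turns on estimating gradients with the two-sided finite-difference (symmetric difference quotient) approximation and plugging the estimate into FGSM-style attacks. A minimal NumPy sketch of that shared idea, assuming a caller-supplied `loss_fn` that queries the target model's class probabilities and returns a scalar loss; this is illustrative only, not the authors' or Chen et al.'s code:

```python
import numpy as np

def estimate_gradient(loss_fn, x, delta=0.01):
    """Two-sided finite-difference estimate of d(loss)/dx.

    Each input dimension costs two model queries, which is why the
    paper pairs this estimator with query-reduction strategies.
    """
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e.flat[i] = delta
        grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * delta)
    return grad

def fgs_black_box(loss_fn, x, eps=0.3, delta=0.01):
    """Single-step FGSM-style perturbation using the estimated gradient."""
    g = estimate_gradient(loss_fn, x, delta)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)
```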
[ -1, 5, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Hk96V1clf", "iclr_2018_SkF2D7g0b", "iclr_2018_SkF2D7g0b", "iclr_2018_SkF2D7g0b", "BJ0xxIwMM", "iclr_2018_SkF2D7g0b", "Hk96V1clf", "rJGGOrcxz", "B10Nn-jlf", "ryDHmbY1G", "SkYyvAX1G", "iclr_2018_SkF2D7g0b" ]
iclr_2018_r1RF3ExCb
Transformation Autoregressive Networks
The fundamental task of general density estimation has been of keen interest to machine learning. Recent advances in density estimation have either: a) proposed using a flexible model to estimate the conditional factors of the chain rule; or b) used flexible, non-linear transformations of variables of a simple base distribution. Instead, this work jointly leverages transformations of variables and autoregressive conditional models, and proposes novel methods for both. We provide a deeper understanding of our models, showing a considerable improvement with our methods through a comprehensive study over both real-world and synthetic data. Moreover, we illustrate the use of our models in outlier detection and image modeling tasks.
rejected-papers
This paper looks at building new density estimation methods, including new methods for transformations and autoregressive models. The comparisons requested by reviewers improve the paper. These models have seen a wide range of applications and have been highly successful, but the added benefits shown and their potential impact need to be expanded further.
train
[ "S1FCACYeG", "By_sZWcgz", "HkZ8Gb9eG", "B1naFJL7z", "SkEr0orXM", "r1E9brOZf", "r1A1-Bd-f", "H1Ubxrd-f", "Bkw504ubz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose to combine nonlinear bijective transformations and flexible density models for density estimation. In terms of bijective change of variables transformations, they propose linear triangular transformations and recurrent transformations. They also propose to use as base transformation an autoregressive distribution with mixture of gaussians emissions.\nComparing with the Masked Autoregressive Flows (Papamakarios et al., 2017) paper, it seems that the true difference is using the linear autoregressive transformation (LAM) and recurrent autoregressive transformation (RAM), already present in the Inverse Autoregressive Flow (Kingma et al., 2016) paper they cite, instead of the masked feedforward architecture used Papamakarios et al. (2017).\nGiven that, the most important part of the paper would be to demonstrate how it performs compared to Masked Autoregressive Flows. A comparison with MAF/MADE is lacking in Table 1 and 2. Nonetheless, the comparison between models in flexible density models, change of variables transformations and combinations of both remain relevant.\n\nDiederik P. Kingma, Tim Salimans, Rafal Józefowicz, Xi Chen, Ilya Sutskever, Max Welling: Improving Variational Autoencoders with Inverse Autoregressive Flow. NIPS 2016\nGeorge Papamakarios, Theo Pavlakou, Iain Murray: Masked Autoregressive Flow for Density Estimation. NIPS 2017\n", "This paper offers an extension to density estimation networks that makes them better able to learn dependencies between covariates of a distribution.\n\nThis work does not seem particularly original as applying transformations to input is done in most AR estimators.\n\nUnfortunately, it's not clear if the work is better than the state-of-the-art. Most results in the paper are comparisons of toy conditional models. The paper does not compare to work for example from Papamakarios et al. on the same datasets. The one Table that lists other work showed LAM and RAM to be comparable. Many of the experiments are on synthetic results, and the paper would have benefited from concentrating on more real-world datasets.", "This paper is well constructed and written. It consists of a number of broad ideas regarding density estimation using transformations of autoregressive networks. Specifically, the authors examine models involving linear maps from past states (LAM) and recurrence relationships (RAM). \n\nThe critical insight is that the hidden states in the LAM are not coupled allowing considerable flexibility between consecutive conditional distributions. This is at the expense of an increased number of parameters and a lack of information sharing. In contrast, the RAM transfers information between conditional densities via the coupled hidden states allowing for more constrained smooth transitions.\n\nThe authors then explored a variety of transformations designed to increase the expressiveness of LAM and RAM. The authors importantly note that one important restriction on the class of transformations is the ability to evaluate the Jacobian of the transformation efficiently. 
A composite of transformations coupled with the LAM/RAM networks provides a highly expressive model for modelling arbitrary joint densities while retaining interpretable conditional structure.\n\nThere is a rich variety of synthetic and real data studies which demonstrate that LAM and RAM consistently rank amongst the top models, demonstrating potential utility for this class of models.\n\nWhilst the paper provides no definitive solutions, this is not the point of the work, which seeks to provide a description of a general class of potentially useful models.\n\n\n", "This paper introduces *multiple* new methods for both a conditional model for factors of the chain rule and a transformation of variables: the LAM conditional model, the RAM conditional model, the LU linear transformation, the recurrent transformation, and recurrent shift transformation.\n\nOur extensive empirical study shows the fundamental result that modern density estimation methods should employ *both* a flexible conditional model and a flexible transformation (e.g. using a MAF transform with MADE MoG conditional). Moreover, these new comparisons of TANs to MADE, Real NVP, MAF, and MAF MoG methods show that the combination of our proposed transformations and conditional models is superior.\n\nWe have better emphasized our contributions in our revised introduction section (see page one). \n", "The MAF paper uses MADE, which already models conditional distributions using Mixture of Gaussians, hence the \"MAF MoG\" label in the experiments (which you copied). \nYour contributions need to be better emphasized.", "Thank you for your comments and suggestions. Please see our general reply where we address comparisons to MAF. Also, we would like to emphasize that LAM and RAM components are not deterministic transformations like IAF and MAF, but are modeling the conditional distribution of covariates using a mixture of gaussians.", "Thank you for your comments and suggestions. Please see our general reply where we address comparisons to MAF and our contributions. Also, we would like to emphasize that we have increased the number of real-world datasets used to evaluate performance from 9 to 14.", "Thank you for your time and insightful comments.", "We would like to thank all the reviewers for their time and helpful comments.\n\nWe agree with reviewers that comparing to MAFs strengthens our paper. At the time of writing we were unaware that the work by Papamakarios et al. (2017) was to be published and hence did not extensively compare to those results; we now revise with added comparisons.\n\nWorking off of both the paper and code in https://github.com/gpapamak/maf, we carefully preprocessed the datasets found in (Papamakarios et al. 2017) to work over the same instances/covariates. As can be seen in Section 4.2.1 Table 2 and Section 4.4 Table 4, Table 5, we are considerably beating the MAF, MADE, and Real NVP models in every dataset used by Papamakarios et al. (2017) (POWER, GAS, HEPMASS, MINIBOONE, BSDS300, MNIST, CIFAR-10).\n\nWe also wish to reemphasize the extent of our contribution. Whereas many modern density estimation works introduce a single new conditional model for factors of the chain rule or a single new transformation of variables, this paper introduces *multiple* new methods for each of these components: the LAM conditional model, the RAM conditional model, the LU linear transformation, the recurrent transformation, and recurrent shift transformation. 
\n\nIn addition, our extensive empirical study shows the fundamental result that modern density estimation methods should employ *both* a flexible conditional model and a flexible transformation. Our extensive original experiments, coupled with these new comparisons of TANs to MADE, Real NVP, and MAF, make a very strong case for using TANs for density estimation.\n" ]
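The reviews above revolve around pairing flexible conditional models with change-of-variables transformations whose Jacobians are cheap to evaluate. A toy sketch of that change-of-variables computation with a lower-triangular linear map, where the log-determinant reduces to a sum over the diagonal; this is illustrative, not the authors' LAM/RAM implementation:

```python
import numpy as np
from scipy.stats import norm

def log_density(x, L, b):
    """log p(x) under z = L @ x + b with lower-triangular L and a
    standard-normal base density:
        log p(x) = log p_base(z) + log |det dz/dx|.
    Triangularity makes log|det| the sum of log|diag(L)|, an O(d) cost.
    """
    z = L @ x + b
    log_det = np.sum(np.log(np.abs(np.diag(L))))
    return norm.logpdf(z).sum() + log_det

# Example: a 3-d density induced by a random triangular transform.
rng = np.random.default_rng(0)
L = np.tril(rng.normal(size=(3, 3))) + 3.0 * np.eye(3)
b = rng.normal(size=3)
print(log_density(rng.normal(size=3), L, b))
```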
[ 5, 5, 8, -1, -1, -1, -1, -1, -1 ]
[ 3, 2, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1RF3ExCb", "iclr_2018_r1RF3ExCb", "iclr_2018_r1RF3ExCb", "SkEr0orXM", "r1E9brOZf", "S1FCACYeG", "By_sZWcgz", "HkZ8Gb9eG", "iclr_2018_r1RF3ExCb" ]
iclr_2018_rkONG0xAW
Recursive Binary Neural Network Learning Model with 2-bit/weight Storage Requirement
This paper presents a storage-efficient learning model titled Recursive Binary Neural Networks for embedded and mobile devices having a limited amount of on-chip data storage, such as hundreds of kilobytes. The main idea of the proposed model is to recursively recycle data storage of weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on a chip, achieving better classification accuracy. Such efficient use of on-chip storage reduces off-chip storage accesses, improving energy-efficiency and speed of training. We verified the proposed training model with deep and convolutional neural network classifiers on the MNIST and voice activity detection benchmarks. For the deep neural network, our model achieves a data storage requirement of as low as 2 bits/weight, whereas the conventional binary neural network learning models require data storage of 8 to 32 bits/weight. With the same amount of data storage, our model can train a bigger network having more weights, achieving 1% less test error than the conventional binary neural network learning model. To achieve a similar classification error, the conventional binary neural network model requires 4× more data storage for weights than our proposed model. For the convolutional neural network classifier, the proposed model achieves 2.4% less test error for the same on-chip storage or 6× storage savings to achieve a similar accuracy.
rejected-papers
This is an interesting paper and addresses an important problem of neural networks with memory constraints. New experiments have been added that strengthen the paper, but the full impact of the paper is not yet realised, needing further exploration of models in current practice, a wider set of experiments and analysis, and additional clarifying discussion.
train
[ "BkYwge9ef", "SkMJBHOez", "H11OyNqgM", "HyA3vW57z", "H1suSbq7z", "HyJRxZ9mf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "There could be an interesting idea here, but the limitations and applicability of the proposed approach are not clear yet. More analysis should be done to clarify its potential. Besides, the paper seriously needs to be reworked. The text in general, but also the notation, should be improved.\n\nIn my opinion, the authors should explain how to apply their algorithm to more general network architectures, and test it, in particular to convnets. An experiment on a modern dataset beyond MNIST would also be a welcome addition.\n\nSome comments:\n- The method is present as a fully-connected network training procedure. But the resulting network is not really fully-connected, but modular. This is clear in Fig. 1 and in the explanation in Sect. 3.1. The newly added hidden neurons at every iteration do not project to the previous pool of hidden neurons. It should be stressed that the networks end up with this non-conventional “tiled” architecture. Are there studies where the capacity of such networks is investigated, when all the weights are trained concurrently.\n\n- It wasn’t clear to me whether the memory reallocation could be easily implemented in hardware. A few references or remarks on this issue would be welcome.\n\n- The work “Efficient supervised learning in networks with binary synapses” by Baldassi et al. (PNAS 2007) should be cited. Although usually ignored by the deep learning community, it actually was a pioneering study on the use of low resolution weights during inference while allowing for auxiliary variables during learning.\n\n- Coming back my main point above, I didn’t really get the discussion on Sect. 5.3. Why didn’t the authors test their algorithm on a convnet? Are there any obstacles in doing so? It seems quite important to understand this point, as the paper appeals to technical applications and convolution seems hard to sidestep currently.\n\n- Fig. 3: xx-axis: define storage efficiency and storage requirement.\n\n- Fig. 4: What’s an RSBL? Acronyms should be defined.\n\n- Overall, language and notation should really be refined. I had a hard time reading Algorithm 1, as the notation is not even defined anywhere. And this problem extends throughout the paper.\nFor example, just looking at Sect. 4.1, “training and testing data x is normalized…”, if x is not properly defined, it’s best to omit it; “… 2-dimentonal…”, at least major typos should be scanned and corrected.", "The idea of this work is fairly simple. Two main problems exist in end devices for deep learning: power and memory. There have been a series of works showing how to discretisize neural networks. This work, discretisize a NN incrementally. It does so in the following way: First, we train the network with the memory we have. Once we train and achieve a network with best performance under this constraint, we take the sign of each weight (and leave them intact), and use the remaining n-1 bits of each weight in order to add some new connections to the network. Now, we do not change the sign weights, only the new n-1 bits. We continue with this process (recursively) until we don't get any improvement in performance. \n\nBased on experiments done by the authors, on MNIST, having this procedure gives the same performance with 3-4 times less memory or increase in performance of 1% for the same memory as regular network. \n\nI like the idea, and I think it is indeed a good idea for IoT and end devices. The main problem with this method that there is undiscussed payment with current hardware architectures. 
I think there is a problem with optimizing the memory after each stage is trained. Also, current architectures do not support single-bit manipulations, but are much more efficient on wide registers. So, in theory this might be a good idea, but I think this idea is not an out-of-the-box method for implementation.\n\nAlso, as the authors say, more experiments are needed in order to understand the regime in which this method is efficient. To summarize, I like this idea, but more experiments are needed in order to understand this method's merits. ", "Summary: The paper addresses the issue of training feed-forward neural networks with memory constraints. The idea is to start by training a very small network, binarise this network, then reuse the non-signed bits of the binarised weights to add/store new weights, and recursively repeat these steps. The cost of reducing the memory storage is the extra computation. An experiment on MNIST shows the efficacy of the proposed recursive scheme.\n\nQuality and significance: The proposed method is a combination of the binarised neural network (BNN) architecture of Courbariaux et al. (2015; 2016) with a network growing scheme to reduce the number of bits per weight. However, the computational complexity is significantly larger. The pitch of the paper is to reduce the \"high overhead of data access\" when training NNs on small devices and indeed this seems to be the case as shown in the experiment. However, if the computation is that large compared to the standard BNNs, I wonder if training is viable on small devices after all. Perhaps all aspects (training cost [computation + time], accuracy and storage) should be plotted together to see what methods form the frontier. This is probably out of scope for ICLR but to really test these methods, they should be trained/stored on a real small device and trained/fine-tuned using user data to see what would work best.\n\nThe experiment is also limited to MNIST and fully connected neural networks.", "Thank you for your insightful reviews helping us improve our paper. More analysis and discussion about computation cost and hardware implementation have been added in the revised paper. We answer your questions as follows. \n\nThe computation cost of the proposed RBNN is very small compared to the energy saving it brings. First of all, from the results on arithmetic complexity in Table 2, it can be seen that the extra computations brought by the proposed RBNN are shifts and adds, while the number of multiplications is the same for the RBNN and the conventional BNN. Since multiplication has much more overhead than add and shift, the final computation increase is not significant. Secondly, it has been proved that for fully connected NN systems, data access accounts for the majority of the energy overhead. The proposed RBNN model reduces the data storage requirement so the system only needs to fetch data from on-chip SRAM during training. According to the quantitative analysis added in Table 2 and Section 5.3, this saves around 100x energy compared to the conventional BNN, which has to fetch weights from off-chip DRAM. \n\nThe single-bit manipulation can be implemented by very simple hardware logic. We added Appendix A to the revised paper to illustrate the implementation of bit-wise operations on weights. The main idea is to fetch complete weights from weight storage and use a mask to separate fixed bits and plastic bits. After the plastic bits are updated, they are concatenated to the fixed bits through an XOR operation and written back to data storage. 
This implementation only requires simple AND and XOR operations at the very beginning and end of each training epoch, so the extra energy consumption is very small. \n\nThe results of applying the proposed RBNN model to CNNs on the MNIST benchmark and to MLP-like DNNs on the AURORA 4 benchmark are added in Appendices B and C, respectively. We really appreciate your suggestions to validate the proposed RBNN model more.", "Thank you very much for your insightful comments. We really appreciate your comments, which helped us to improve the draft. We fixed typos, revised the paper so as to reduce confusion, and also added relevant references, including those suggested by the reviewer. Below are our answers to your other questions.\n\nAbout the topology of the generated neural network, firstly, we corrected our presentation of \"fully-connected\" based on the fact that our RBNN trained the fully-connected structure for the 1-hidden layer case and the tiled structure for the 2-hidden layer case. However, we'd like to point out that in all the experiments in the paper, the results of conventional BNNs that are compared to those of the proposed model are all fully-connected. We tested the RBNN to train the fully-connected structure for the 2-hidden layer case, but we do not see much difference in terms of the accuracy and storage-requirement trade-off. Still, we added these results to Fig. 8. \n\nFor hardware implementation, we added Appendix A to illustrate the hardware implementation of memory reallocation. It describes multi-weight operations where each weight takes one to k bits during the training process based on the RBNN model. The main idea is to fetch multiple weights packed in one 8-bit word from data storage (SRAM) and to use a mask to separate already-trained weights (bits) and plastic weights (bits). After finishing training, we use an XOR operation to pack once-plastic bits and the fixed bits into a word and store it in the data storage. This mapping requires bit-wise AND and XOR operations, which are supported in CPUs, GPUs, custom circuits, and also FPGAs, at the very beginning and the end of each training epoch. Therefore, the extra energy consumption is minimal. It also allows us to use the existing SRAM macros without modification. We added this discussion to Sec. 5.3 in the revised paper. \n\nThe CNN with the proposed RBNN model is tested and the results are shown in the revised paper (Appendices B and C). We added an experimental result on the application of our RBNN to the LeNet CNN performing the MNIST benchmark. We also added the experimental result on the application of our RBNN to the MLP-like DNN performing the voice activity detection benchmark (AURORA 4). These new results confirm that our RBNN can improve the trade-off between weight storage requirement and accuracy by a similar amount as the original results from the MLP and the MNIST test case. \n", "Thank you for your insightful comments. Your concern on the extra computation overhead of our proposed model is valid. However, we'd like to point out that it is not significant compared to our benefit in data access. In terms of energy, compared to the conventional BNN, the proposed model needs a notably smaller amount of off-chip data storage access, which easily offsets the extra computation cost by a large margin. To elaborate on this issue more, we added quantitative analysis in Sec. 5.3, Table 2 and Table 3 in the revised paper.\n\nFirst, in a BNN, the main bottleneck is data access overhead rather than computation. 
This is because the use of binary information of weights reduces computational complexity. The proposed model reduces the data storage size so that it can store all the weights in the on-chip SRAM. This reduces energy consumption significantly because accessing data from off-chip DRAM and FLASH consumes at least 2 orders of magnitude more energy than SRAM. Conventional BNN systems have to store and fetch data from off-chip DRAM and FLASH. Our quantitative energy analysis, added in Sec. 5.3, shows the proposed RBNN can save at least 100X training energy compared to the conventional BNN.\n\nSecond, the proposed model only increases the number of add and shift operations roughly two times for the neural networks having the same number of hidden units (Table 2), whereas it does not increase the number of multi-bit multiplications as compared to conventional BNNs. Note that this multi-bit multiplication is used to calculate gradients. In both RBNN and BNN, the multiplications between inputs/activations and weights are replaced with sign change operations. Multiplication is much more costly than add and shift operations. Thus, it is important not to increase the number of multiplications. \n\nThe evaluation of the proposed model on a CNN classifying the MNIST benchmark and a DNN classifying the AURORA 4 VAD benchmark has been added in Appendices B and C of the revised paper, respectively.\n" ]
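The mask-and-XOR weight packing described in the responses above (Appendix A of the paper) is easy to illustrate in software. A hedged sketch using an 8-bit word whose upper bits are frozen; the bit layout is illustrative, not the paper's exact hardware mapping:

```python
def split_word(word, fixed_mask):
    """Separate frozen (already-trained) bits from plastic bits."""
    fixed = word & fixed_mask       # trained bits, kept unchanged
    plastic = word & ~fixed_mask    # bits still being updated
    return fixed, plastic

def merge_word(fixed, plastic, fixed_mask):
    """Recombine updated plastic bits with the frozen bits.

    XOR acts as concatenation here because, after masking, the two
    operands occupy disjoint bit positions.
    """
    return fixed ^ (plastic & ~fixed_mask)

# 8-bit word: top 4 bits frozen, bottom 4 bits plastic.
word = 0b1011_0110
fixed_mask = 0b1111_0000
fixed, plastic = split_word(word, fixed_mask)
updated = 0b0000_0011               # plastic bits after a training step
assert merge_word(fixed, updated, fixed_mask) == 0b1011_0011
```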
[ 6, 7, 5, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1 ]
[ "iclr_2018_rkONG0xAW", "iclr_2018_rkONG0xAW", "iclr_2018_rkONG0xAW", "SkMJBHOez", "BkYwge9ef", "H11OyNqgM" ]
iclr_2018_SJIA6ZWC-
Stochastic Hyperparameter Optimization through Hypernetworks
Machine learning models are usually tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of both weights and hyperparameters. Our method trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our method converges to locally optimal weights and hyperparameters for sufficiently large hypernets. We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters.
rejected-papers
The paper is interesting, and the update to the paper and the additional experiments have already improved it in many ways, but the paper still does not have as much impact as it could; the comparisons and its usefulness in many situations of current practice need further strengthening.
train
[ "Bk_UdcKxf", "ryb9D_Bxf", "r1dLqgZWM", "BJH3BITQG", "rJ6qV8pXf", "ryTNBU6mM", "Sygmdl-WG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\n\nThe paper proposes to use hyper-networks [Ha et al. 2016] for the tuning of hyper-parameters, along the lines of [Brock et al. 2017]. The core idea is to have a side neural network sufficiently expressive to learn the (large-scale, matrix-valued) mapping from a given configuration of hyper-parameters to the weights of the model we wish to tune.\nThe paper gives a theoretical justification of its approach, and then describes several variants of its core algorithm which mix the training of the hyper-networks together with the optimization of the hyper-parameters themselves. Finally, experiments based on MNIST illustrate the properties of the proposed approach.\n\nWhile the core idea may appear as appealing, the paper suffers from several flaws (as further detailed afterwards):\n-Insufficient related work\n-Correctness/rigor of Theorem 2.1\n-Clarity of the paper (e.g., Sec. 2.4)\n-Experiments look somewhat artificial\n-How scalable is the proposed approach in the perspective of tuning models way larger/more complex than those treated in the experiments?\n\n*Detailed comments*\n\n-\"...and training the model to completion.\" and \"This is wasteful, since it trains the model from scratch each time...\" (and similar statement in Sec. 2.1): Those statements are quite debatable. There are lines of work, e.g., in Bayesian optimization, to model early stopping/learning curves (e.g., Domhan2014, Klein2017 and references therein) and where training procedures are explicitly resumed (e.g., Swersky2014, Li2016). The paper should reformulate its statements in the light of this literature.\n\n-\"Uncertainty could conceivably be incorporated into the hypernet...\". This seems indeed an important point, but it does not appear as clear how to proceed (e.g., uncertainty on w_phi(lambda) which later needs to propagated to L_val); could the authors perhaps further elaborate?\n\n-I am concerned about the rigor/correctness of Theorem 2.1; for instance, how is the continuity of the best-response exploited? Also, throughout the paper, the argmin is defined as if it was a singleton while in practice it is rather a set-valued mapping (except if there is a unique minimizer for L_train(., lambda), which is unlikely to be the case given the nature of the considered neural-net model). In the same vein, Jensen's inequality states that Expectation[g(X)] >= g(Expectation[X]) for some convex function g and random variable X; how does it precisely translate into the paper's setting (convexity, which function g, etc.)? \n\n-Specify in Alg. 1 that \"hyperopt\" refers to a generic hyper-parameter procedure.\n\n-More details should be provided to better understand Sec. 2.4. At the moment, it is difficult to figure out (and potentially reproduce) the model which is proposed.\n\n-The training procedure in Sec. 4.2 seems quite ad hoc; how sensitive was the overall performance with respect to the optimization strategy? For instance, in 4.2 and 4.3, different optimization parameters are chosen.\n\n-typo: \"weight decay is applied the...\" --> \"weight decay is applied to the...\"\n\n-\"a standard Bayesian optimization implementation from sklearn\": Could more details be provided? (there does not seem to be implementation there http://scikit-learn.org/stable/model_selection.html to the best of my knowledge)\n\n-The experimental set up looks a bit far-fetched and unrealistic: first scalar, than diagonal and finally matrix-weighted regularization schemes. 
While the first two may be used in practice, the third scheme is not used in practice to the best of my knowledge.\n\n-typo: \"fit a hypernet same dataset.\" --> \"fit a hypernet on the same dataset.\"\n\n-(Franceschi2017) could be added to the related work section.\n\n*References*\n\n(Domhan2014) Domhan, T.; Springenberg, T. & Hutter, F. Extrapolating learning curves of deep neural networks ICML 2014 AutoML Workshop, 2014\n\n(Franceschi2017) Franceschi, L.; Donini, M.; Frasconi, P. & Pontil, M. Forward and Reverse Gradient-Based Hyperparameter Optimization preprint arXiv:1703.01785, 2017\n\n(Klein2017) Klein, A.; Falkner, S.; Springenberg, J. T. & Hutter, F. Learning curve prediction with Bayesian neural networks International Conference on Learning Representations (ICLR), 2017, 17\n\n(Li2016) Li, L.; Jamieson, K.; DeSalvo, G.; Rostamizadeh, A. & Talwalkar, A. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization preprint arXiv:1603.06560, 2016\n\n(Swersky2014) Swersky, K.; Snoek, J. & Adams, R. P. Freeze-Thaw Bayesian Optimization preprint arXiv:1406.3896, 2014\n\n*********\nUpdate post rebuttal\n*********\n\nI acknowledge the fact that I read the rebuttal of the authors, whom I thank for their detailed answers.\n\nMy minor concerns have been clarified. Regarding the correctness of the proof, I am still unsure about the applicability of Jensen's inequality; provided it is true, then it is important to see that the results seem to hold only for particular hyperparameters, namely regularization parameters (as explained in the new updated proof). This limitation should be exposed transparently upfront in the paper/abstract. \nTogether with the new experiments and comparisons, I have therefore updated my rating from 5 to 6.\n", "This paper introduces the use of hyper-networks for hyper-parameter optimization in the context of neural networks. A hyper-network is a network that has been trained to find optimal weights for another neural network on a particular learning task. This hyper-network can also be trained using gradient descent, and then can be optimized with respect to its inputs (hyper-parameters) to find optimal hyper-parameters. Of course, for this to be feasible, training the hyper-network has to be efficient. For this, the authors suggest using a linear hyper-network. The use of this approach for hyper-parameter optimization is illustrated in several experiments considering a linear model on the MNIST dataset.\n\nThe paper is clearly written with only a few typing errors.\n\nAs far as I know this work is original. This is the first time that hyper-networks are used for hyper-parameter optimization.\n\nThe significance of the work can, however, be questioned. To begin with, the models considered by the authors are rather small. They are simply linear models in which the number of weights is not very big. In particular, only 7,850 weights. The corresponding hyper-net has around 15,000 weights, which is twice as big. Furthermore, the authors say that they train the hyper-network 10 times more than standard gradient descent on the hyper-parameter. This accounts for training the original model 20 times more. \n\nIf the original model is a deep neural network with several hidden layers and several hidden units in each layer, it is not clear if the proposed approach will be feasible. That is my main concern with this paper. 
The lack of representative models such as the ones used in practical applications.\n\nAnother limitation is that the proposed approach seems to be limited to neural network models. The other techniques the authors compare with are more general and can optimize the hyper-parameters of other models.\n\nSomething strange is that the authors claim in Figure 6 that the proposed method is able to optimize 7,850 hyper-parameters. However, it is not clear to what extent this is true. To begin with, it seems that the performance obtained is worse than with 10 hyper-parameters (shown on the right). Since the left case is a generalization of the right case (having only 10 hyper-parameters different), it is strange that worse results are obtained in the left case. It seems that the optimization process is reaching sub-optimal solutions.\n\nThe experiments shown in Figure 7 are strange. I have not fully understood their importance or what conclusions the authors can extract from them.\n\nI have also missed a comparison with related techniques such as Maclaurin et al., 2015.\n\nSumming up, this seems to be an interesting paper proposing an interesting idea. However, it seems the practical utility of the method described is limited to small models only, which questions the overall significance.\n\n", "[Apologies for the short review, I got called in late. Marking my review as \"educated guess\" since I didn't have time for a detailed review]\n\nThe authors model the function mapping hyperparameters to parameter values using a neural network. This is similar to the Bayesian optimization setting but with some advantages such as the ability to evaluate the function stochastically.\n\nI find the approach to be interesting and the paper to be well written. However, I found the theoretical results have unrealistic assumptions on the size of the network (i.e., they rely on networks being universal approximators, whose number of parameters scales exponentially with the dimension) and as such are no more than a curiosity. Also, the authors compare their approach (Fig. 6) vs Bayesian optimization and random search, which are approaches known to perform extremely poorly on high-dimensional datasets. Comparison with other gradient-based approaches (Maclaurin 2015, Pedregosa 2016, Franceschi 2017) is lacking.\n", "Thank you for your detailed comments. I will address your comments in the order in which you wrote them. For clarity, I have placed your comments in [brackets] and my responses will follow in plain text.\n\n[The significance of the work can, however, be questioned. To begin with, the models considered by the authors are rather small. They are simply linear models in which the number of weights is not very big. In particular, only 7,850 weights. The corresponding hyper-net has around 15,000 weights, which is twice as big. Furthermore, the authors say that they train the hyper-network 10 times more than standard gradient descent on the hyper-parameter. This accounts for training the original model 20 times more. ]\n\nWe have added graphs showing performance on deeper models with more weights. It is a good point that each iteration of the hypernet is more expensive than an iteration of optimizing the elementary model. For a linear hypernet it is about (# hyperparameters * # parameters) / (# parameters) = # hyperparameters times more expensive. 
For our experiment where # hyperparameters = # parameters with 10 hidden units it is (# hyperparameters * 10 + 10 * # parameters) / (# parameters) = 20 times more expensive.\n\n\n[If the original model is a deep neural network with several hidden layers and several hidden units in each layer, it is not clear if the proposed approach will be feasible. That is my main concern with this paper. The lack of representative models such as the ones used in practical applications.]\n\n\nWe have added comparisons to MLPs with several hidden layers and several hidden units in each layer. Further experiments should still be conducted on convolutional or recurrent networks.\n\n\n[Another limitation is that the proposed approach seems to be limited to neural network models. The other techniques the authors compare with are more general and can optimize the hyper-parameters of other models.]\n\n\nA good point is raised in that there are hyperparameters this algorithm cannot optimize. We cannot optimize hyperparameters of the optimization procedure, because there is no inner optimization loop. These methods can be applied to any (unconstrained) bi-level optimization, where we learn the inner parameters' best-response to the outer parameters with a neural network. The method is applied to a specific case often encountered in machine learning where the inner parameters are model weights, while the outer parameters are hyperparameters. SMASH uses a similar algorithm to do architecture search of neural networks.\n\n\n[Something strange is that the authors claim in Figure 6 that the proposed method is able to optimize 7,850 hyper-parameters. However, it is not clear to what extent this is true. To begin with, it seems that the performance obtained is worse than with 10 hyper-parameters (shown on the right). Since the left case is a generalization of the right case (having only 10 hyper-parameters different), it is strange that worse results are obtained in the left case. It seems that the optimization process is reaching sub-optimal solutions.]\n\n\nThis is a good point! We have emphasized that the algorithm is reaching sub-optimal solutions due to limited capacity and sampling an insufficient number of hyperparameters on each iteration. This is shown in the new experiment where we compare our algorithm to differentiating through optimization, which is slower but finds better solutions.\n\n\n[The experiments shown in Figure 7 are strange. I have not fully understood their importance or what conclusions the authors can extract from them.]\n\nThe main takeaway from this experiment is that a hypernet can learn more accurate surrogate functions than a GP for equal compute budgets because it views (noisy) evaluations of more points. This has been added to the experiment.\n\n\n\n[I have also missed a comparison with related techniques such as Maclaurin et al., 2015.]\n\nWe have added a comparison with differentiating through optimization from Maclaurin et al., 2015.", "Dear Reviewer 2,\n\nThank you for your comments. We have added a comparison to Maclaurin 2015, instead of Bayesian optimization/random search, showing that our algorithm reaches sub-optimal solutions faster than differentiating through unrolled optimization.", "Dear Reviewer 1,\n\nThank you for your detailed comments. I will address your comments in the order in which you wrote them. 
For clarity, I have placed your comments in [brackets] and my responses will follow in plain text.\n\n\n[-\"...and training the model to completion.\" and \"This is wasteful, since it trains the model from scratch each time...\" (and similar statement in Sec. 2.1): Those statements are quite debatable. There are lines of work, e.g., in Bayesian optimization, to model early stopping/learning curves (e.g., Domhan2014, Klein2017 and references therein) and where training procedures are explicitly resumed (e.g., Swersky2014, Li2016). The paper should reformulate its statements in the light of this literature.]\n\n\nGood point! We have fixed the introduction to include the lines of work exploring resumed training.\n\n\n[-\"Uncertainty could conceivably be incorporated into the hypernet...\". This seems indeed an important point, but it is not clear how to proceed (e.g., uncertainty on w_phi(lambda) which later needs to be propagated to L_val); could the authors perhaps further elaborate?]\n\nWe believe stochastic variational inference, as in the Bayes by Backprop paper, may be leveraged to incorporate uncertainty into the hypernet. This is now mentioned in the paper.\n\n\n[-I am concerned about the rigor/correctness of Theorem 2.1; for instance, how is the continuity of the best-response exploited? Also, throughout the paper, the argmin is defined as if it were a singleton while in practice it is rather a set-valued mapping (except if there is a unique minimizer for L_train(., lambda), which is unlikely to be the case given the nature of the considered neural-net model). In the same vein, Jensen's inequality states that Expectation[g(X)] >= g(Expectation[X]) for some convex function g and random variable X; how does it precisely translate into the paper's setting (convexity, which function g, etc.)?]\n\nContinuity of the best-response is exploited to guarantee that a universal approximator can approximate the best-response. It is a good point that the solutions are almost certainly set-valued for \\phi and w. We now mention this, but leave the notation as if it were a singleton for simplicity. The convex function g is min_{\\phi}, whose argument is a random variable that is a function, L_\\lambda(w_\\phi) for \\lambda in supp(p(\\lambda)); samples of L_\\lambda(w_\\phi) are drawn by sampling \\lambda ~ p(\\lambda) and currying it into L(w_\\phi, \\lambda).\n\n\n[-More details should be provided to better understand Sec. 2.4. At the moment, it is difficult to figure out (and potentially reproduce) the model which is proposed.]\n\n\nWe added a line mentioning that the simplest way to implement Sec. 2.4 is to use a linear network, a randomly initialized current hyperparameter, and a conditional hyperparameter distribution that is a normal distribution centered on the current hyperparameter.\n\n\n[-The training procedure in Sec. 4.2 seems quite ad hoc; how sensitive was the overall performance with respect to the optimization strategy? For instance, in 4.2 and 4.3, different optimization parameters are chosen.]\n\n\nWe now use Adam as the optimizer for both the hypernet and the hyperparameters, with identical optimizer parameters in all experiments. The algorithm is easy to tune with Adam, but more sensitive when using SGD.\n\n\n[-\"a standard Bayesian optimization implementation from sklearn\": Could more details be provided? 
(there does not seem to be an implementation there, http://scikit-learn.org/stable/model_selection.html, to the best of my knowledge)]\n\nWe have added a link to the GitHub repository of the Bayesian optimization implementation we use, but have also moved all comparisons with Bayesian optimization to the appendix.\n\n\n[-The experimental set up looks a bit far-fetched and unrealistic: first scalar, then diagonal, and finally matrix-weighted regularization schemes. While the first two may be used in practice, the third scheme is not used in practice to the best of my knowledge.]\n\nThis is true - we wanted an experiment with an excessive number of hyperparameters, and this one was defined in the related paper Maclaurin 2015.\n", "Franceschi 2017 should be cited as an ICML 2017 paper and not as a preprint" ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 4, 3, 1, -1, -1, -1, -1 ]
[ "iclr_2018_SJIA6ZWC-", "iclr_2018_SJIA6ZWC-", "iclr_2018_SJIA6ZWC-", "ryb9D_Bxf", "r1dLqgZWM", "Bk_UdcKxf", "Bk_UdcKxf" ]
iclr_2018_ByW5yxgA-
Multiscale Hidden Markov Models For Covariance Prediction
This paper presents a novel variant of hierarchical hidden Markov models (HMMs), the multiscale hidden Markov model (MSHMM), and an associated spectral estimation and prediction scheme that is consistent, finds global optima, and is computationally efficient. Our MSHMM is a generative model of multiple HMMs evolving at different rates where the observation is a result of the additive emissions of the HMMs. While estimation is relatively straightforward, prediction for the MSHMM poses a unique challenge, which we address in this paper. Further, we show that spectral estimation of the MSHMM outperforms standard methods of predicting the asset covariance of stock prices, a widely addressed problem that is multiscale, non-stationary, and requires processing huge amounts of data.
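As a rough illustration of the additive multiscale generative process described above (only the forward simulation; the spectral estimation and prediction scheme is not sketched here), the following minimal numpy snippet samples two independent HMMs evolving at different rates and combines their emissions additively. The state counts, stickiness, rate ratio and noise level are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sample_chain(T, K, trans, rng):
    # Sample a length-T state sequence from a K-state Markov chain.
    s = np.empty(T, dtype=int)
    s[0] = rng.integers(K)
    for t in range(1, T):
        s[t] = rng.choice(K, p=trans[s[t - 1]])
    return s

T, K = 500, 3
trans = np.full((K, K), 0.1) + 0.7 * np.eye(K)  # sticky transition matrix (rows sum to 1)
means = np.array([-1.0, 0.0, 1.0])              # per-state emission means

fast = sample_chain(T, K, trans, rng)                      # evolves every step
slow = np.repeat(sample_chain(T // 10, K, trans, rng), 10)  # holds each state for 10 steps

obs = means[fast] + means[slow] + 0.1 * rng.normal(size=T)  # additive emissions plus noise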
rejected-papers
The paper addresses an interesting problem, but the reviewers found that it is not as strong as it could be: broadening the range of evaluated data would significantly improve the convincingness of the experiments, and any alternatives should be clearly addressed, both in terms of their limitations and as baselines.
val
[ "HyUR-6Oez", "Bk-DjW5ef", "r1hXsPf-G", "SJ7sJ_pmz", "BkXd1O6mM", "HJYNyupmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper focuses on a very particular HMM structure which involves multiple, independent HMMs. Each HMM emits an unobserved output with an explicit duration period. This explicit duration modelling captures multiple scale of temporal resolution. The actual observations are a weighted linear combination of the emissions from each latent HMM. The structure allows for fast inference using a spectral approach.\n\nI found the paper unclear and lacking in detail in several key aspects:\n\n1. It is unclear to me from Algorithm 2 how the weight vectors w are estimated. This is not adequately explained in the section on estimation.\n\n2. The authors make the assumption that each HMM injects noise into the unobserved output which then gets propagated into the overall observation. What are reasons for his choice of model over a simpler model where the output of each HMM is uncorrupted?\n\n3. The simulation example does not really demonstrate the ability of the MSHMM to do anything other than recover structure from data simulated under an MSHMM. It would be more interesting to apply to data simulated under non-Markovian or other setups that would enable richer frequency structures to be included and the ability of MSHMM to capture these.\n\n4. The real data experiments shows some improvements in predictive accuracy with fast inference. However, the authors do not give a sufficiently broad exploration of the representations learnt by the model which allows us to understand the regimes in which the model would be advantageous.\n\nOverall, the paper presents an interesting approach but the work lacks maturity. Furthermore, simulation and real data examples to explore the properties and utility of the method are required. \n", "This paper proposes a variant of hierarchical hidden Markov Models (HMMs) where the chains operate at different time-scales with an associate d spectral estimation procedure that is computationally efficient.\n\nThe model is applied to artificially generated data and to high-frequency equity data showing promising results.\n\nThe proposed model and method are reasonably original and novel.\n\nThe paper is well written and the method reasonably well explained (I would add an explanation of the spectral estimation in the Appendix, rather than just citing Rodu et al. 2013).\n\nAdditional experimental results would make it a stronger paper.\n\nIt would be great if the authors could include the code that implements the model.", "The paper presents an interesting spectral algorithm for multiscale hmm. The derivation and analysis seems correct. However, it is well-known that spectral algorithm is not robust to model mis-specification. It is not clear whether the proposed algorithm will be useful in practice. How will the method compare to EM algorithms and neural network based approaches? ", "Thank you for your comments. We respond to the questions below.\n\nRegarding point 1. In our work we use linear regression. We will specify this in the paper. However, other methods can also easily be used to estimate w.\n\nRegarding point 2: From the perspective of the application, the slower time horizons are not deterministic, so we feel that this model better reflects the underlying data generation process. In the continuous emission HMM there is a noise term, so this was natural both from the application perspective and the model perspective.\n\nRegarding point 3: For other types of problems other than covariance prediction, model misspecification is more important. 
We didn’t intend to address that in this work. We look forward to understanding model misspecification in future work.\n\nRegarding point 4: HMMs are widely used even though there is model misspecification. In our simulation we evaluated model misspecification, and in practice it is possible to assess the model's predictive value by examining the realized error. For covariance prediction, we do not believe that the true process is an MSHMM, but our model is sufficient to predict the data. \nCan you provide further guidance as to which “regimes in which the model would be advantageous” would be interesting to test? \nWe believe that the MSHMM is useful in the class of problems where there are multiple processes and the ratio of \\delta^{(i)} and \\delta^{(i+1)} is sufficiently small that one cannot detrend and sufficiently large that a single HMM or LSTM model is insufficient. Problems such as \n\nWe will run more synthetic-data experiments where the data generating process comes only from the slowest HMM process, and another where it comes only from the fastest process. \n", "Thank you for your comments, we will certainly release the code upon acceptance of the paper.\n\nWe have added further explanation of Rodu et al. 2013 in the Appendix and posted an updated version of the paper.\n\nWe are definitely open to running further experiments. We could try synthetic data where the data generating process comes only from the slowest HMM process, and another where it comes only from the fastest process. \n\nIn our update to the paper, we have compared our results to both an LSTM and the very recent State Frequency Memory (SFM) recurrent network [Hu and Qi, 2017].\n", "Thank you for your review and comments.\n\nIs it possible to provide a citation regarding the instability issue with model mis-specification?\n\nRegarding model robustness, as shown in [Tran et al., 2016, “Spectral M-estimation with Applications to Hidden Markov Models”], with sufficient data the misspecification does not produce significantly worse results. From both our simulated experiments as well as the results for covariance prediction on real data, we believe that model misspecification is not an issue for this type of problem. It is possible that regularization of the spectral algorithm would lead to more robust results; however, we leave this for future work.\n\nWe believe that the MSHMM is useful in the class of problems where there are multiple processes and the ratio of \\delta^{(i)} and \\delta^{(i+1)} is sufficiently small that one cannot detrend and sufficiently large that a single HMM or LSTM model is insufficient.\n\nEM is prohibitively slow for all but the daily covariance estimation, and is thus excluded from the analysis. As stated in the paper, “For comparison, a simple HMM with 5 hidden states using EM required 1255 seconds to estimate parameters for 900,000 observations while our MSHMM-3-5 took 25 seconds.” \n\nWe ran experiments to assess the LSTM performance. \nOn the synthetic data the relative RMSE was 1.76, which, while worse than the MSHMM-3-5 and MSHMM-3-10, is better than both MSHMM-3-3 and HMM-15. It is expected that it should be better than HMM-15, but interesting that the LSTM exceeds MSHMM-3-3.\nWe also ran experiments with the LSTM and an extremely recent variant, SFM. We found that the MSHMM outperforms the LSTM.\n\nWe are interested in suggestions on further synthetic data experiments. We changed the data generating process to come only from the slowest HMM process, and in another experiment only from the fastest process. The performance is only slightly worse than with the single HMM. 
Furthermore, the other HMM processes yield nearly 0 load in the regression.\n" ]
[ 5, 6, 6, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_ByW5yxgA-", "iclr_2018_ByW5yxgA-", "iclr_2018_ByW5yxgA-", "HyUR-6Oez", "Bk-DjW5ef", "r1hXsPf-G" ]
iclr_2018_r1uOhfb0W
Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning
An ensemble of neural networks is known to be more robust and accurate than an individual network, however usually with linearly-increased cost in both training and testing. In this work, we propose a two-stage method to learn Sparse Structured Ensembles (SSEs) for neural networks. In the first stage, we run SG-MCMC with group sparse priors to draw an ensemble of samples from the posterior distribution of network parameters. In the second stage, we apply weight-pruning to each sampled network and then perform retraining over the remaining connections. In this way of learning SSEs with SG-MCMC and pruning, we not only achieve high prediction accuracy, since SG-MCMC enhances exploration of the model-parameter space, but also reduce memory and computation cost significantly in both training and testing of NN ensembles. This is thoroughly evaluated in experiments learning SSEs of both FNNs and LSTMs. For example, in LSTM based language modeling (LM), we obtain a 21\% relative reduction in LM perplexity by learning an SSE of 4 large LSTM models, which has only 30\% of the model parameters and 70\% of the computations in total, as compared to the baseline large LSTM LM. To the best of our knowledge, this work represents the first methodology and empirical study of integrating SG-MCMC, group sparse priors and network pruning together for learning NN ensembles.
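To make the two-stage recipe in this abstract concrete, here is a minimal numpy-style sketch of stage one (SGLD sampling under a group-lasso prior, log p(theta) proportional to -lam * sum_g ||theta_g||_2) and the pruning step of stage two. The helper grad_nll, the grouping scheme, and all constants are placeholders introduced for illustration; retraining each pruned sample and averaging the ensemble's predictions are not shown.

import numpy as np

def sgld_samples(theta, grad_nll, groups, lam, eps, n_steps, thin, rng):
    # Stage 1: draw parameter samples with SGLD; the group sparse prior
    # contributes lam * theta_g / ||theta_g|| to the gradient of -log posterior.
    samples = []
    for step in range(1, n_steps + 1):
        g = grad_nll(theta)  # stochastic gradient of the negative log-likelihood
        for idx in groups:
            g[idx] += lam * theta[idx] / (np.linalg.norm(theta[idx]) + 1e-12)
        theta = theta - 0.5 * eps * g + np.sqrt(eps) * rng.normal(size=theta.shape)
        if step % thin == 0:
            samples.append(theta.copy())  # thinning lowers correlation between samples
    return samples

def prune(theta, groups, tau):
    # Stage 2a: zero out groups (e.g., all weights tied to one hidden unit)
    # whose norm is below the threshold tau; retraining then runs only over
    # the surviving connections.
    theta = theta.copy()
    for idx in groups:
        if np.linalg.norm(theta[idx]) < tau:
            theta[idx] = 0.0
    return theta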
rejected-papers
This paper is interesting since it goes toward showing the role of model averaging. The clarifications made improve the paper, but its impact is still not realised: the common confusion about the retraining should be re-examined, the methodology and evaluation clarified, and the work more deeply contextualised within the wider literature.
train
[ "B1A7YkceM", "BJt3Bg5gM", "Hy6mmeCgf", "S14r4n5fz", "B1OxK2cMG", "HkmrOnqzG", "BJJWN39zf", "BkgBDh9GG", "HysnNFwA-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "The authors propose a procedure to generate an ensemble of sparse structured models. To do this, the authors propose to (1) sample models using SG-MCMC with group sparse prior, (2) prune hidden units with small weights, (3) and retrain weights by optimizing each pruned model. The ensemble is applied to MNIST classification and language modelling on PTB dataset. \n\nI have two major concerns on the paper. First, the proposed procedure is quite empirically designed. So, it is difficult to understand why it works well in some problems. Particularly. the justification on the retraining phase is weak. It seems more like to use SG-MCMC to *initialize* models which will then be *optimized* to find MAP with the sparse-model constraints. The second problem is about the baselines in the MNIST experiments. The FNN-300-100 model without dropout, batch-norm, etc. seems unreasonably weak baseline. So, the results on Table 1 on this small network is not much informative practically. Lastly, I also found a significant effort is also desired to improve the writing. \n\nThe following reference also needs to be discussed in the context of using SG-MCMC in RNN.\n- \"Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling\", Zhe Gan*, Chunyuan Li*, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin", "In this paper, the authors present a new framework for training ensemble of neural networks. The approach is based on the recent scalable MCMC methods, namely the stochastic gradient Langevin dynamics.\n\nThe paper is overall well-written and ideas are clear. The main contributions of the paper, namely using SG-MCMC methods within deep learning, and then increasing the computational efficiency by group sparsity+pruning are valuable and can have a significant impact in the domain. Besides, the proposed approach is more elegant the competing ones, while still not being theoretically justified completely. \n\nI have the following minor comments:\n\n1) The authors mention that retraining significantly improves the performance, even without pruning. What is the explanation for this? If there is no pruning, I would expect that all the samples would converge to the same minimum after retraining. Therefore, the reason why retraining improves the performance in all cases is not clear to me.\n\n2) The notation |\\theta_g| is confusing, the authors should use a different symbol.\n\n3) After section 4, the language becomes quite informal sometimes, the authors should check the sentences once again.\n\n4) The results with SGD (1 model) + GSP + PR should be added in order to have a better understanding of the improvements provided by the ensemble networks. \n\n5) Why does the performance get worse \"obviously\" when the pruning is 95% and why is it not obvious when the pruning is 90%?\n\n6) There are several typos\n\npg7: drew -> drawn\npg7: detail -> detailed\npg7: changing -> challenging\npg9: is strongly depend on -> depends on\npg9: two curve -> two curves", "The authors note that several recent papers have shown that bayesian model averaging is an effective and universal way to improve hold-out performance, but unfortunately are limited by increased computational costs. 
Towards that end, the authors of this manuscript propose several modifications to this procedure to make it computationally feasible and indeed improve performance.\n\nPros:\nThe authors demonstrate an effective procedure for FNNs and LSTMs that makes model averaging improve performance.\nEmpirical evidence is convincing on the utility of the approach.\n\nCons:\nIt is not clear how this approach would be used with convolutional structures.\nMuch of the benefit appears to come from the sparse prior, pruning, and retraining (Figure 3). The model averaging seems to have a smaller contribution. Due to that, it seems that the nature of the contribution needs to be clarified compared to the large literature on sparsifying neural networks, and the introductory comments of the paper should be rewritten to reflect that reality.", "Dear Reviewers,\nWe greatly appreciate your helpful and constructive comments on the paper. We have carefully revised the paper to incorporate your comments, adding some new results and polishing the writing for clarification. As a result, we believe that the paper has been substantially improved and strengthened.\nIn the following, we provide our responses to your specific points.\nPlease see the updated paper for more details. ", "Thanks for your comment.\nAs said in your comment, these previous works apply group Lasso with SGD to learn structurally sparse DNNs. They focus on point estimates and are not in the context of learning ensembles. We have added this discussion in Related Work.", "Thank you very much for reviewing the paper.\n\n> Particularly, the justification of the retraining phase is weak. \n\nThanks for your note. As stated at the end of Section 3.2, there are two justifications for the retraining phase: First, theoretically (namely with infinite samples), model averaging does not need retraining. However, the actual number of samples used in practice is rather small for computational efficiency, so retraining essentially compensates for the limited number of samples used for model averaging. Second, the MAP estimate is more likely than the network obtained just after pruning but before retraining. Retraining increases the posterior probabilities of the networks in the ensemble and hopefully improves the prediction performance of the networks in the ensemble.\n\n> The second problem is about the baselines in the MNIST experiments. The FNN-300-100 model without dropout, batch-norm, etc. seems an unreasonably weak baseline. So, the results in Table 1 on this small network are not very informative practically. \n\nSuch a basic setting in the MNIST FNN experiments allows easy reproduction of the results.\nStrong results are reported on the more challenging LSTM LM task.\n\n> Lastly, I found that a significant effort is desired to improve the writing.\n\nWe have polished the paper and especially rewritten the parts after Section 4.\n\n> The following reference also needs to be discussed in the context of using SG-MCMC in RNNs. - \"Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling\", Zhe Gan*, Chunyuan Li*, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin\n\nThis work pioneers the application of SG-MCMC to Bayesian learning of RNNs, but without considering model pruning and the cost of model averaging. 
We have added this discussion in Related Work.", "Thank you very much for reviewing the paper.\n\n> It is not clear how this approach would be used with convolutional structures.\n\nIt has been shown in [1] that group Lasso regularization is effective for structured-sparsity SGD learning of convolutional structures (filters, channels, filter shapes, and layer depth). It is conceivable that group Lasso used with SGLD can work for convolutional structures, by employing proper groupings like those in [1].\n[1] Wen, Wu, Wang, Chen and Li. Learning structured sparsity in deep neural networks, NIPS 2016.\n\n> The model averaging seems to have a smaller contribution. \n\nIt can be seen from Figure 3(a) that as the training proceeds, more models are averaged, which consistently improves the PPLs. Also, the relationship between the performance of an ensemble and the number of models in the ensemble is examined in Figure 3(b), which clearly shows the contribution of model averaging.\n\n> Due to that, it seems that the nature of the contribution needs to be clarified compared to the large literature on sparsifying neural networks, and the introductory comments of the paper should be rewritten to reflect that reality.\n\nThe literature review with regard to NN sparse structure learning and NN compression has been rewritten and is presented in Related Work.", "Thank you very much for reviewing the paper.\n\n> 1) \nAs stated at the end of Section 3.2, there are two justifications for the retraining phase: First, theoretically (namely with infinite samples), model averaging does not need retraining. However, the actual number of samples used in practice is rather small for computational efficiency, so retraining essentially compensates for the limited number of samples used for model averaging. Second, the MAP estimate is more likely than the network obtained just after pruning but before retraining. Retraining increases the posterior probabilities of the networks in the ensemble and hopefully improves the prediction performance of the networks in the ensemble.\n\nNote that running SGLD enhances exploration of the model-parameter space, and we take a thinned collection of samples so that there are low correlations between the samples. So, contrary to converging to the same minimum after retraining, thinned samples from SGLD lead to neighborhoods of different local minima, and retraining further fine-tunes the parameters to reach different minima.\n\n> 2) \nThanks for your suggestion, we have changed the notation to dim(\\theta_g).\n\n> 3)\nWe have polished the paper and especially rewritten the parts after Section 4.\n\n> 4)\nThanks for your suggestion. The results of SGD (1 model) + GSP + PR and SGD (ensemble) + GSP + PR have been added to Table 5, with the discussion in the paragraph before the last paragraph in Section 5.2.\nSGD (1 model)+GSP+PR can reduce the model size but the PPL is much worse than the ensemble's, which clearly shows the improvement provided by the ensemble. Additionally, we compare SGLD (4 models)+GSP+PR with SGD (4 models)+GSP+PR. The two ensembles achieve close PPLs. However, SGLD ensemble learning reduces training time by about 30%.\n\n> 5)\nWe empirically find that 90% is the highest pruning rate that does not hurt performance for LSTMs.\n\n> 6)\nTypos have been fixed.", "In one category of your related work -- \"Sparse structure learning\" -- some previous works [1][2] used group Lasso regularization during SGD to directly learn structurally sparse DNNs for computation efficiency and memory saving. 
Comparing with them or clarifying the difference might make this work more comprehensive.\n\n[1] http://papers.nips.cc/paper/6504-learning-structured-sparsity-in-deep-neural-networks.pdf\n[2] http://papers.nips.cc/paper/6372-learning-the-number-of-neurons-in-deep-networks.pdf" ]
[ 4, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1uOhfb0W", "iclr_2018_r1uOhfb0W", "iclr_2018_r1uOhfb0W", "iclr_2018_r1uOhfb0W", "HysnNFwA-", "B1A7YkceM", "Hy6mmeCgf", "BJt3Bg5gM", "iclr_2018_r1uOhfb0W" ]
iclr_2018_HJJ0w--0W
Long-term Forecasting using Tensor-Train RNNs
We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well as on real-world climate and traffic data.
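For readers parsing the notation debated in the reviews below, one self-consistent way to write the high-order transition this abstract alludes to (a reconstruction from the reviews, not the paper's own display; f is the activation, H the hidden size, L the lag) is

\[
h_{t,\alpha} = f\Big( [W^{hx} x_t]_{\alpha} + \sum_{i_1,\dots,i_P} W_{\alpha i_1 \cdots i_P} \prod_{p=1}^{P} s_{t-1, i_p} \Big),
\qquad
s_{t-1} = [\,1,\; h_{t-1}^{\top}, \dots, h_{t-L}^{\top}\,]^{\top} \in \mathbb{R}^{LH+1},
\]

with the tensor-train factorization constraining the weight tensor to a product of small cores,

\[
W_{\alpha i_1 \cdots i_P} \approx G^{0}[\alpha]\, G^{1}[i_1] \cdots G^{P}[i_P],
\]

where each G^{p}[\cdot] is an r_{p-1} \times r_{p} matrix with boundary ranks r_{-1} = r_{P} = 1, so the parameter count drops from exponential in P to linear in P for fixed TT ranks.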
rejected-papers
This paper addresses the increasingly studied problem of prediction over long-term horizons. Despite this, and the important updates from the authors, the paper is not yet ready; the improvements identified include more control over fair comparisons and improved clarity of exposition.
train
[ "B1BulASgf", "SJfyCxYgG", "HJv0cb5xG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes Tensor-Train RNN and Tensor-Train LSTM (TT-RNN/TLSTM), a RNN/LSTM architecture whose hidden unit at time t h_t is computed from the tensor-vector product between a tensor of weights and a concatenation of hidden units from the previous L time steps. The motivation is to incorporate previous hidden states and high-order correlations among them to better predict long-term temporal dependencies for seq2seq problems. To address the issue of the number of parameters growing exponentially in the rank of the tensor, the model uses a low rank decomposition called the ‘tensor-train decomposition’ to make the number of parameters linear in the rank. Some theoretical analysis on the number of hidden units required for a given estimation error, and experimental results have been provided for synthetic and real sequential data.\n\nFirst of all, the presentation of the method in section 2.1 is confusing and there seem to be various ambiguities in the notation that harms understanding of the method. The tensor-vector product in equation (6) appears problematic. The notation that I think is standard is as follows: given a tensor W \\in R^{n_1 \\times … \\times n_P} and vectors v_p \\in R^{n_p}, the tensor-vector product W \\times_{p=1}^P v_p = vec(W) \\otimes_{p=1}^P v_p = \\sum{i_1,...,i_P} \\prod_{p=1}^P v_{p,i_p}. So I’m guessing you want to get rid of the \\otimes signs (the kronecker products) in (6) or you want to remove the summation and write W \\times_{p=1}^P s_{t-1}. Also \\alpha that appears in (6) is never defined. Is it another index? This is confusing because you say W is P-dimensional but have P+1 indices for it including alpha (W_{\\alpha i_1 … i_p}). Moreover the dimensionality of W^{hx} x_t in (6) is R^H judging from the notation in page 2, but isn’t the tensor-vector product a scalar? Also am I correct in thinking that s_{t-1} should be [1, h_{t-1}^T, …, h_{t-L}^T], i.e. a vector of length LH+1 rather than a matrix? The notation from page 2 implies that you are using column vectors, so the definition of s_{t-1} makes it appear as an (L+1) by H matrix, which could make the reader interpret s_{t-1;i_1} in (6) as vectors instead of scalars (this is reinforced by the kronecker product between these s_{t-1;i_p}). I had to work this out from the number of parameters (HL+1)^P in section 2.2. The diagram of s_{t-1} in Figure 3 is also confusing, because it isn’t obvious that the unlabelled grey bars are copies of s_{t-1}. Also I notice that the name ‘Tensor Train RNN/LSTM’ has been used in Yang et al, 2017. You probably want to avoid using the same name since the models are different. It would be nice if you could explain in a bit more detail about how they are different in the related work section.\n\nAssuming I have understood the method correctly, the idea of using tensor products to incorporate higher order interactions between the hidden states at different times appears sensible. From the theoretical analysis, you claim that 1) smoother f is easier to approximate, and 2) polynomial interactions are more efficient than linear ones. The first point seems fairly self-explanatory and doesn’t seem to require a proof. The second point isn’t so convincing because you have two additive terms on the right hand side of the inequality in Theorem 3.1 (btw I’m guessing you want the inequality to be the other way round): the first term is independent of p, and the second decreases exponentially with p. 
Your second point would only hold if this first term is reasonably small, but this doesn’t seem obvious to me.\n\nRegarding the experiments, I’m sceptical as to whether a grid search over hyperparameters for TLSTM vs grid search over the same hyperparameters for (M)LSTM provides a fair comparison. You probably want to compare the models given the same number of parameters, since given the same state size, TLSTM will have many more parameters than (M)LSTM. A plot of x-axis: # parameters, y-axis: average RMSE at convergence would be informative. Moreover, for Figure 8, you probably want to control the time taken for training instead of just comparing validation loss at the same number of steps. I imagine the best performing TLSTM model will have many more parameters and hence take much longer to train than the best performing LSTM model. \nMoreover, it seems as though the increase in prediction accuracy over LSTM is marginal considering you have 3 more hyperparameters to tune (L,S,P - what was the value of P used for the experiments?) and that tuning them is important to prevent overfitting.\n\nI’m also curious as to how TLSTM compares to hierarchical RNN approaches for modelling long-term dependencies. It would be interesting to compare against models like Stacked LSTM (Graves, 2013), Grid LSTM (Kalchbrenner, 2015) and HM LSTM (Chung, 2017). These models have mostly been evaluated on text, but I don’t see any reason they can’t be extended to sequential forecasting on time series data. Also, regularisation techniques such as batch-norm for LSTMs (Cooijmans et al., 2016) and layer-norm (Ba et al., 2016) seem to help a lot for increasing prediction accuracy. Did you investigate these techniques to control overfitting?\n\nOther minor comments on presentation:\nFor Figure 6, the legends are inconsistent with the caption. Also you might want to overlay predictions on top of the ground truth for better comparison and also to save space.\n\nOverall, I think there is vast scope for improvement in presentation and comparisons with other methods, and hence find the paper not yet ready for publication.\n", "For method: \nthough it is known that RNNs lack the ability to capture long-term dependencies, they are designed to take an infinite order of history (e.g., if the dimension of h is large enough, or f(x, h_{t-1}) is flexible enough). So the claim that an RNN only learns a Markov model is improper. For example, in “Recurrent Marked Temporal Point Processes: Embedding Event History to Vector”, it is shown that an RNN has the ability to fit the intensity function of a Hawkes process (which has infinite order dependency).\n\nDecomposing a tensor operator as a layer in a neural network is not new. For example, “Tensor Contraction Layers for Parsimonious Deep Nets”. The technique used in this paper is tensor-trains, which was also proposed previously.\n\nAlso, the authors only talked about the # parameters. A more important issue is the time cost. The authors should also explicitly analyze the computation cost. \n\nFor writing: \n\nSome sections need more explanation. In Section 2.1, it seems S_{t-1} is a (1 + L x H)-dimensional vector, according to your definition. Then how is S_{t-1; i_1} defined? Figure 2 has little information about the proposed architecture, while Figure 3 is also very vague. \n\nThe notation is not quite consistent. In Figure 3, the S_{t-1} contains K history vectors. What is K here? I suppose it is the same as L. In Figure 6, the legend says the red curve is LSTM, but the caption says the green one is LSTM. 
\n\nFor experiment:\n\nThe datasets used are small, with a small number of not very long sequences. But for demonstration purposes this might be OK. The question is whether this method is scalable to large datasets. Analysis of time cost and memory consumption needs to be included in order for people to get an idea of its scalability. \n\nFigure 8 shows the convergence. I would say the difference is not significant. Considering its computation cost, I doubt the ‘much faster’ claim on page 7. \n\nAlso, it seems the proposed method has more parameters than a traditional RNN. To get a fair comparison, a higher-dimensional latent state should be used in the LSTM. \n\nOverall, the paper tries to tackle an important problem, which is good. However, both methods and experiments need improvement.", "This work addresses an important and outstanding problem: accurate long-term forecasting using deep recurrent networks. The technical approach seems well motivated, plausible, and potentially a good contribution, but the experimental work has numerous weaknesses which limit the significance of the work in current form.\n\nFor one, the 3 datasets tested are not established as among the most suitable, well-recognized benchmarks for evaluating long-term forecasting. It would be far more convincing if the authors used well-established benchmark data, for which existing best methods have already been well-tuned to get their best results. Otherwise, the reader is left with concerns that the authors may not have used the best settings for the baseline method results reported, which indeed is a concern here (see below).\n\nOne weakness with the experiments is that it is not clear that they were fair to RNN or LSTM, for example, in terms of giving them the same computation as the TT-RNNs. The section “Hyper-parameter Analysis” on page 7 explains that they determined the best TT rank and lags via grid search. But presumably larger values for rank and lag require more computation, so to be fair to RNN and LSTM they should be given more computation as well, for example allowing them more hidden units than TT-RNNs get, so that overall computation cost is the same for all 3 methods. As far as this reviewer can tell, the authors offer no experiments to show that a larger number of units for RNN or LSTM would not have helped them in improving long-term forecasting accuracies, so this seems like a very serious and plausible concern.\n\nAlso, on page 6 the authors say that they tried ARMA but that it performed about 5% worse than LSTM, thus dismissing direct comparisons of ARMA against TT-RNN. But it is unclear whether they gave ARMA as much hyper-parameter tuning (e.g. for the number of lags) via grid search as their proposed TT-RNN benefited from. Again, the concern here is that the experiments are plausibly not being fair to all methods equally.\n\nSo, due to the weaknesses in the experimental work, this work seems a bit premature. It should more clearly establish that the proposed TT-RNNs are indeed performing well compared to the existing SOTA.\n" ]
[ 4, 5, 6 ]
[ 4, 3, 4 ]
[ "iclr_2018_HJJ0w--0W", "iclr_2018_HJJ0w--0W", "iclr_2018_HJJ0w--0W" ]
iclr_2018_Skx5txzb0W
A Boo(n) for Evaluating Architecture Performance
We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling. Reporting the best single model performance does not appropriately address this stochasticity. We propose a normalized expected best-out-of-n performance (Boo_n) as a way to correct these problems.
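One simple non-parametric way to estimate the expected best-out-of-n score from m >= n recorded runs is the closed-form average, over all n-element subsets of the runs, of the subset's best score (a standard order-statistics identity; the parametric Gaussian variant and the normalization in Boo_n discussed by the reviewers below are not shown here). A sketch:

from math import comb

def boo_n(scores, n, higher_is_better=True):
    # Average of the best score over all n-subsets of the observed runs:
    # with xs sorted so the best score comes last, xs[i] (0-based) is the
    # subset's best in comb(i, n-1) of the comb(m, n) subsets.
    xs = sorted(scores, reverse=not higher_is_better)
    m = len(xs)
    return sum(comb(i, n - 1) * xs[i] for i in range(n - 1, m)) / comb(m, n)

For example, boo_n([0.91, 0.93, 0.92, 0.95, 0.90], n=2) averages the better of two runs over all ten pairs; for losses such as perplexity, pass higher_is_better=False.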
rejected-papers
The subject of model evaluation will always be a contentious one, and the reviewers were not yet fully convinced by the discussion. The points you bring up at the end of your response already point to directions for improvement, as well as to a greater degree of precision and control.
val
[ "H1otcvggM", "BknlT5Bez", "rynGrnpeM", "rkvaNxw-G", "H1Nhrlvbz", "SyJ-SgPWG", "HyRyZFL-G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors propose a new measure to capture the inherent randomness of the performance of a neural net under different random initialisations and/or data inputs. Just reporting the best performance among many random realisations is clearly flawed yet still widely adopted. Instead, the authors propose to compute the so-called best-out-of-n performance, which is the expected best performance under n random initialisations. \n\nPros:\n- The widespread reporting of just the best model is clearly leading to very biased results and does not help with reproducibility. Any effort to mitigate this problem is thus welcome.\n- The proposed quantity is simple to compute if we have m realisations of the same model under different random inputs (random initialisation or random data) and will converge to a stable limit even if m is very large. \n\nCons:\n- The best-out-of-n performance is well grounded if we have different random inputs such as random initial parameters or random batch processing. Arguably, there is even larger variance if the model parameters such as number of layers, layer size etc are varied. Yet these variations cannot really be captured by the best-out-of-n performance indicator unless modelled as random variables (which would lead to different sorts of problems).\n- Computationally it requires to have a large number m of replications which is not always feasible. \n- Most importantly: the proposed way is just one of many ways to reduce the distribution of performances to a single scalar quantity. Why is it better than just reporting a specific quantile, for example? Perhaps any such attempt to reduce to a single scalar is flawed and we should report the full distribution (or first and second moment, or several quantiles). For example: the boo-n performance gets better if the outcome is highly variable compared to a model where the mean performance is identical but the outcome much less variable. High variance of the performance can be negative or positive, depending on the application and the choice of boo-n is making a singular choice just as if we chose the mean or min or max or a specific quantile. \n\n\n\n\n\n\n\n\n\n\n\n\n", "This manuscript raises an important issue regarding the current lack of standardization regarding methods for evaluating and reporting algorithm performance in deep learning research. While I believe that raising this issue is important and that the method proposed is a step in the right direction, I have a number of concerns which I will list below. One risk is that if the proposed solution is not adequate or widely agreeable then we may find a proliferation of solutions from which different groups might pick and choose as it suits their results!\n\nThe method of choosing the best model under 'internal' cross-validation to take through to 'external' cross-validation against a second hold-out set should be regarded as one possible stochastic solution to the optimisation problem of hyper-parameter selection. The authors are right to emphasize that this should be considered part of the cost of the technique, but I would not suggest that one specify a 'benchmark' number of trials (n=5) for comparison. Rather I would suggest that this is a decision that needs to be explored and understood by the researchers presenting the method in order to understand the cost/benefit ratio for their algorithm provided by attempting to refine their guess of the optimal hyperparameters. 
This would then allow for other methods not based on internal cross-validation to be compared on a level footing.\n\nI think that the fundamental issue of stochasticity, as it concerns the repeatability and generalisability of these performance evaluation exercises, is not in the stochastic optimisation search but in the use of a single hold-out sample. Would it not be wise to insist on a mean performance (a mean Boo_n or other) over multiple random partitions of the entire dataset into training and hold-out? I wonder if in theory both the effect of increasing n and the mean hold-out performance could be learnt efficiently with a clever experimental design. \n\nFinally, I am concerned with the issue of how to compute the suggested Boo_n score. Use of a parametric Gaussian approximation is a strong assumption, while bootstrap methods for order statistics can be rather noisy. It would be interesting to see a comparison of the results from the parametric and non-parametric Boo_n versions applied to the test problems. ", "This paper addresses multiple issues arising from the fact that commonly reported best model performance numbers are a single sample from a performance distribution. These problems are very real, and they deserve significant attention from the ML community. However, I feel that the proposed solution may actually compound the issues highlighted.\n\nFirstly, the proposed metric requires calculation of multiple test set experiments for every evaluation. In the paper up to 100 experiments were used. This may be reasonable in scenarios where the test set is hidden, and individual test numbers are never revealed. It also may be reasonable if we cynically assume that researchers are already running many test-set evaluations. But I am very opposed to any suggestion that we should relax the maxim that the test set should be used only once, or as close to once as is possible. Even the idea of researchers knowing their test set variance makes me very uneasy.\n\nSecondly, this paper tries to account for variation in results due to different degrees of hyper-parameter tuning. This is certainly an admirable aim, since different research groups have access to very different types of resources. However, the suggested approach relies on randomly picking hyper-parameters from \"a range that we previously found to work reasonably well\". This randomization does not account for the many experiments that were required to find this range. And the randomization is also not extended to parameters controlling the model architecture (I suspect that a number of experiments went into picking the 32 layers in the ResNet used by this paper). Without a solid and consistent basis for these hyper-parameter perturbations, I worry that this approach will fail to normalize the effect of experiment numbers while also giving researchers an excuse to avoid reporting their experimental process.\n\nI think this is a nice idea and the metric does merge the stability and low variance of the mean score with the aspirations of the best score. The metric may be very useful at development time in helping researchers build a reasonable expectation of test-time performance in cases where the dev and test sets are strongly correlated. However, for the reasons outlined above, I don't think the proposed approach solves the problems that it addresses. Ultimately, the decision about this paper is a subjective one. 
Are we willing to increase the risk of inadvertent hyper-parameter tuning on the test set for the sake of a more stable metric?", "- We address the first point in the common response above.\n\n- Yes, usage of the proposed method may require a large number of replications. However, this requirement stems from the degree of stochasticity in training. If we used any other statistical technique, we believe it would require a comparable number of replications. If one really cannot afford to run so many replications, one should still try to estimate the resulting confidence interval and hence at least disclose the uncertainty in the reported results. Today this uncertainty is still there, just unreported.\n\n- Yes, we agree that researchers should publish as much information as possible about the performance distribution of their architecture, which may allow the reader to calculate the characteristic that interests her the most (whether it be the mean, Boo_n, or a quantile). However, we believe that scalar metrics do have their value as proxies for comparing models - this usage now has an important place in Machine Learning research. This is why we are trying to propose an improvement in this area. \nAs to why we consider Boo_n better than the alternatives (e.g. mean, quantile), we believe that it best captures what may interest a practitioner intending to deploy the model: he may have the capacity to train n models and deploy the best one. Our score directly captures what performance to expect under such a scenario. ", "Regarding your concern with multiple test set evaluations: Yes, there are risks associated with it; however, we do not think using Boo_n would significantly change the current situation, which already mostly relies on the honesty of researchers - this is to some extent unavoidable in science. \nEven with Boo_n, researchers can still keep the test scores hidden and calculate the final Boo_n score only when they have finished tuning the architecture and running the experiments (or even run evaluations with the trained models only at the end). \n\nMore importantly, we argue in the paper that reporting the test performance of a single model does not have that much scientific value. It's just a single sample drawn from a distribution, so we think it is not an appropriate way to characterize the performance distribution.\n\nTo some extent, this draws on a distinction between doing science and simply competing on a challenge, e.g. on Kaggle. In the latter case, we should focus on fair conditions for competitors and your objections would be very appropriate. In the former case, we should mainly try to characterize well the behaviour of the model on which the researchers are publishing their findings. There, we believe that multiple test evaluations bring better insight.\n\nWe address hyper-parameter tuning in the common response above. ", "\n- We admit that we add \"yet another\" method of evaluation. This does indeed create an opportunity for cherry-picking. However, we believe it has value to expand the pool of options in a situation where a large part of the community has settled on a standard we consider flawed. The situation may temporarily become messier, but we consider this a necessary step towards a new equilibrium, hopefully with better evaluation standards.\n \n- We will add the parametric/non-parametric estimator comparison to the paper soon.\n\nWe add some additional remarks in the common response above. ", "Dear reviewers, \n\nthank you for your insights. 
We all seem to agree that the problem of evaluation methodology is important. What your objections point to are flaws in our particular solution. We certainly admit that the solution is not flawless; however, we see purpose in starting a discussion on this important topic, which is largely absent from Deep Learning conferences while hundreds of papers continue reporting only the best single model performance, which is clearly inappropriate. 'Boon' is the best solution we have so far been able to come up with, so we consider it a step towards improving the current situation.\n\nLet us now address some of the issues you pointed out. Firstly, the method does not appropriately account for the effects of hyperparameter tuning. That is true, and we probably didn't point it out clearly enough in the paper. What it was primarily developed to address is the problem of stochastic variation due to random initialization and data shuffling - that is, randomness which appears in repeated training with fixed hyperparameters. In the paper, we show that this problem alone is non-negligible. \nIf we pick hyperparameters randomly, it produces the same effect and hence the problem can be solved using the same method. \nHowever, normally, hyperparameters are chosen using either some Bayesian optimization or manual tuning. The improvement of results due to using these techniques over more experiment runs is much harder to model and discount in the evaluation procedure. It would be useful if we were able to do so; however, it is beyond the scope of this paper. Still, these techniques are supposedly used because they are considered more efficient than simple random search, hence our method to correct for random search could be considered a lower bound on the correction due to more advanced hyperparameter tuning. \nAs a takeaway for us, we will try to make it clearer in the paper which problems we are addressing and which we are not.\n\nWe respond to some more specific points below each review. \n\nFinally, given the reviews, the paper in its current form may well be rejected. We consider this issue important and hence may want to try to publish a new version elsewhere. What approach would you find most sensible? We could simply publish an analysis of the problems of the current practice, which we do in the first part of this paper, possibly with a review of existing alternatives. Or do you think the direction of Boo_n is generally right and would you encourage further work in that direction (in which case, which points would you consider crucial to fix)?\n\nThank you for your opinions. " ]
[ 4, 6, 4, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_Skx5txzb0W", "iclr_2018_Skx5txzb0W", "iclr_2018_Skx5txzb0W", "H1otcvggM", "rynGrnpeM", "BknlT5Bez", "iclr_2018_Skx5txzb0W" ]
iclr_2018_SkwAEQbAb
A novel method to determine the number of latent dimensions with SVD
Determining the number of latent dimensions is a ubiquitous problem in machine learning. In this study, we introduce a novel method that relies on SVD to discover the number of latent dimensions. The general principle behind the method is to compare the curve of singular values of the SVD decomposition of a data set with the curve for the randomized data set. The inferred number of latent dimensions corresponds to the crossing point of the two curves. To evaluate our methodology, we compare it with competing methods such as Kaiser's eigenvalue-greater-than-one rule (K1), Parallel Analysis (PA), and Velicer's MAP test (Minimum Average Partial). We also compare our method with the Silhouette Width (SW) technique, which is used in different clustering methods to determine the optimal number of clusters. The results on synthetic data show that Parallel Analysis and our method perform similarly and are more accurate than the other methods, and that our method gives slightly better results than Parallel Analysis on sparse data sets.
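The core comparison described above fits in a few lines of numpy. In this sketch (the names and the averaging over several bootstrap replicates are my own choices, not taken from the paper), the estimated number of latent dimensions is the count of singular values of the data that sit above the averaged singular-value curve of entry-resampled data:

import numpy as np

def n_latent_dims(X, n_boot=20, seed=0):
    # Singular values of X versus those of matrices whose entries are
    # resampled with replacement from X; the curves' crossing point gives
    # the inferred number of latent dimensions.
    rng = np.random.default_rng(seed)
    sv = np.linalg.svd(X, compute_uv=False)
    boot = np.zeros_like(sv)
    for _ in range(n_boot):
        Xb = rng.choice(X.ravel(), size=X.shape, replace=True)
        boot += np.linalg.svd(Xb, compute_uv=False)
    return int(np.sum(sv > boot / n_boot))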
rejected-papers
The paper addresses the important question of determining the intrinsic dimensionality, but there remain several issues which make the paper not ready at this point: unclear exposition, lack of contextualisation with respect to existing work, and seemingly limited insights. The reviewers have provided many suggestions which we hope will be useful for improving the paper.
train
[ "ByPKCgNgG", "r1N3gmtlz", "HJAPXrtgM", "B1hFqIXgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "The manuscript proposes to estimate the number of components in SVD by comparing the eigenvalues to those obtained on bootstrapped version of the input.\n\nThe paper has numerous flaws and is clearly below acceptance threshold for any scientific forum. Some of the more obvious issues, each alone sufficient for rejection, include:\n\n1. Discrepancy between motivation and actual work. The method is specifically about determining the rank of a matrix, but the authors motivate it with way too general and vague relationships, such as \"determining the number of nodes in neural networks\". Somewhat oddly, the problem is highlighted to be of interest in supervised problems even though one would expect it to be much more important in unsupervised ones.\n\n2. Complete lack of details for related work. Methods such as PA and MAP are described with vague one-sentences summaries that tell nothing about how they actually work. There would have been ample space to provide the mathematical formulations.\n\n3. No technical contribution. The proposed method is trivial variant of randomised testing, described with single sentence \"Bootstrapped samples R_B are simply generated through random sampling with replacement of the values of R.\" with literally no attempt of providing any sort of justification why this kind of random sampling would be good for the proposed task or what kind of assumptions it builds on.\n\n4. Poor experiments using really tiny artificial data sets, reported in unprofessional manner (visual style in plots changes from figure to figure, tables report irrelevant numbers in hard-to-read format etc). No real improvement over the somewhat random choice of comparison methods that do not even represent the techniques people would typically use for this problem.", "The authors propose a bootstrap-based test for determining the number of latent dimensions to retain for linear dimensionality reduction (SVD/PCA). The idea is to retain eigenvectors which are larger than a bootstrap average. The resulting approach is evaluated on two simulated datasets (dense and sparse)as compared to common baselines and evaluated. The results suggest improved performance.\n\nThe paper addresses an important problem, but does not seem ready for publication:\n - The evaluation only uses simulated data. Ideally, the authors can evaluate the approach on real data -- perhaps using out of sample variance explained as a criterion?\n - There is limited technical novelty. The bootstrap is well known already. The authors do not provide additional insight, or provide any theory justifying the technique.\n - It's not clear if the results are new:\nPaper with related discussion: http://jackson.eeb.utoronto.ca/files/2012/10/Jackson1995.pdf\nand a blog post:\nhttps://stats.stackexchange.com/questions/33917/how-to-determine-significant-principal-components-using-bootstrapping-or-monte-c", "The authors propose the use of bootstrapping the data (random sampling entries with replacement) to form surrogate data for which they can evaluate the singular value spectrum of the SVD of the matrix to the singular values of the bootstrapped data, thereby determining the number of latent dimensions in PCA by the point in which the singular values are no greater than the bootstrapped sampled values. 
The procedure is contrasted with some existing methods for determining the number of latent components and found to perform similarly to another procedure based on bootstrapping correlation matrices, the PA procedure.\n\nPros:\nDetermining the number of components is an important problem that the authors here address.\n\nCons:\nI find the paper poorly written and the methodology not sufficiently rooted in the existing literature. There are many approaches to determining the number of latent components in PCA that need to be discussed and contrasted, including:\nCross-validation:\nhttp://scholar.google.dk/scholar_url?url=http%3A%2F%2Fwww.academia.edu%2Fdownload%2F43416804%2FGeneralizable_Patterns_in_Neuroimaging_H20160306-9605-1xf9c9h.pdf&hl=da&sa=T&oi=gga&ct=gga&cd=0&ei=rjkXWrzKKImMmAH-xo7gBw&scisig=AAGBfm2iRQhmI2EHEO7Cl6UZoRbfAxDRng&nossl=1&ws=1728x1023\nVariational Bayesian PCA:\nhttps://www.microsoft.com/en-us/research/publication/variational-principal-components/\nFurthermore, the idea of bootstrapping for the SVD has been discussed in prior publications and the present work needs to be related to these prior works. These include:\n\nMilan, Luis, and Joe Whittaker. “Application of the Parametric Bootstrap to Models That Incorporate a Singular Value Decomposition.” Journal of the Royal Statistical Society. Series C (Applied Statistics), vol. 44, no. 1, 1995, pp. 31–49. JSTOR, JSTOR, www.jstor.org/stable/2986193.\n\nFisher A, Caffo B, Schwartz B, Zipunnikov V. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million. Journal of the American Statistical Association. 2016;111(514):846-860. doi:10.1080/01621459.2015.1062383.\n\nThey also include the following package in R for performing bootstrapped SVD: https://cran.r-project.org/web/packages/bootSVD/bootSVD.pdf\n\nThe novelty of the present approach is therefore unclear given the prior works on bootstrapping SVD/PCA.\n\nFurthermore, for sparse data with missing entries there are specialized algorithms handling sparsity either using imputation or marginalization, which would be more principled for estimating the PCA parameters. \n\nFinally, the performance appears almost identical to that of the PA procedure. In fact, it seems bootstrapping the correlation matrix has a very similar effect to the proposed bootstrapping procedure. Thus, it seems the proposed procedure, which is very similar in spirit to PA, does not have much benefit over that procedure.\n\nMinor comments:\nExplain what SW abbreviates when it is first introduced.\nWe will see that it PA a close relationship with BSVD-> We will see that PA is closely related to BSVD\n\nmore effective than SVD under certain conditions (?). – please provide a reference instead of ?\n\nBut table 4 that shows -> But table 4 shows that\n\nWe can sum up with that the result seems ->To summarize, the result seems\n", "I think this is quite a nice idea.\n\nOne thing that would make the experiments more complete would be a comparison to the bi-cross-validation method that is sometimes used in statistics (statweb.stanford.edu/~owen/reports/AOAS227.pdf). The BCV method can be quite slow for large matrices, whereas it seems like the method proposed here can be efficiently parallelized." ]
[ 1, 2, 3, -1 ]
[ 4, 5, 4, -1 ]
[ "iclr_2018_SkwAEQbAb", "iclr_2018_SkwAEQbAb", "iclr_2018_SkwAEQbAb", "iclr_2018_SkwAEQbAb" ]
iclr_2018_rJ5C67-C-
Hyperedge2vec: Distributed Representations for Hyperedges
Data structured in the form of overlapping or non-overlapping sets is found in a variety of domains, sometimes explicitly but often subtly. For example, teams, which are of prime importance in social science studies, are \enquote{sets of individuals}; \enquote{item sets} in pattern mining are sets; and for various types of analysis in language studies a sentence can be considered as a \enquote{set or bag of words}. Although building models and inference algorithms for structured data has been an important task in the fields of machine learning and statistics, research on \enquote{set-like} data remains less explored. Relationships between pairs of elements can be modeled as edges in a graph. However, for modeling relationships that involve all members of a set, a hyperedge is a more natural representation. In this work, we focus on the problem of embedding hyperedges in a hypergraph (a network of overlapping sets) into a low-dimensional vector space. We propose a probabilistic deep-learning based method as well as a tensor-based algebraic model, both of which capture the hypergraph structure in a principled manner without losing set-level information. Our central focus is to highlight the connection between hypergraphs (topology), tensors (algebra) and probabilistic models. We present a number of interesting baselines, some of which adapt existing node-level embedding models to the hyperedge level, as well as sequence-based language techniques which are adapted for set-structured hypergraph topology. The performance is evaluated with a network of social groups and a network of word phrases. Our experiments show that, accuracy-wise, our methods perform similarly to baselines which are not designed for hypergraphs. Moreover, our tensor-based method is quite efficient compared to the deep-learning based auto-encoder method. We therefore argue that we have proposed more general methods which are suited for hypergraphs (and therefore also for graphs) while maintaining accuracy and efficiency.
rejected-papers
While there are some interesting and novel aspects in this paper, none of the reviewers recommends acceptance.
train
[ "H1kAEtYlz", "rJvDxGceG", "S1teFU6gG", "ryF8mLfNM", "SyHasMG4M", "HyhIPu67f", "Sk1QH_aQM", "Skqb4upmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "The paper studies different methods for defining hypergraph embeddings, i.e. defining vectorial representations of the set of hyperedges of a given hypergraph. It should be noted that the framework does not allow to compute a vectorial representation of a set of nodes not already given as an hyperedge. A set of methods is presented : the first one is based on an auto-encoder technique ; the second one is based on tensor decomposition ; the third one derives from sentence embedding methods. The fourth one extends over node embedding techniques and the last one use spectral methods. The two first methods use plainly the set structure of hyperedges. Experimental results are provided on semi-supervised regression tasks. They show very similar performance for all methods and variants. Also run-times are compared and the results are expected. In conclusion, the paper gives an overview of methods for computing hypernode embeddings. This is interesting in its own. Nevertheless, as the target problem on hypergraphs is left unspecified, it is difficult to infer conclusions from the study. Therefore, I am not convinced that the paper should be published in ICLR'18.\n\n* typos\n* Recent surveys on graph embeddings have been published in 2017 and should be cited as \"A comprehensive survey of graph embedding ...\" by Cai et al\n* Preliminaries. The occurrence number R(g_i) are not modeled in the hypergraphs. A graph N_a is defined but not used in the paper.\n* Section 3.1. the procedure for sampling hyperedges in the lattice shoud be given. At least, you should explain how it is made efficient when the number of nodes is large.\n* Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. This is discussed in Section 3.6 but the discussion is not convincing enough.\n* Section 3.3 The term Sen2vec is not common knowledge\n* Section 3.3 The length of the sentences depends on the number of permutations of $k$ elements. How can you deal with large k ?\n* Section 3.4 and Section 3.5. The methods proposed in these two sections should be related with previous works on hypergraph kernels. I.e. there should be mentions on the clique expansion and star expansion of hypergraphs. This leads to the question why graph embeddings methods on these expansions have not be considered in the paper.\n* Section 4.1. Only hyperedeges of cardinality in [2,6] are considered. This seems a rather strong limitation and this hypothesis does not seem pertinent in many applications. \n* Section 4. For online multi-player games, hypernode embeddings only allow to evaluate existing teams, i.e. already existing as hyperedges in the input hypergraph. One of the most important problem for multi-player games is team making where team evaluation should be made for all possible teams.\n* Section 5. Seems redundant with the Introduction.", "This paper studies the problem of representation learning in hyperedges. The author claims their novelty for using several different models to build hyperedge representations. To generate representations for hyperedge, this paper proposes to use several different models such as Denoising AutoEncoder, tensor decomposition, word2vec or spectral embeddings. Experimental results show the effectiveness of these models in several different datasets. \n\nThe author uses several different models (both recent studies like Node2Vec / sen2vec, and older results like spectral or tensor decomposition). 
The idea of studying embeddings of a hypergraph is interesting and novel, and the results show that several different kinds of methods can all provide meaningful results for realistic applications. \n\nDespite the novel idea about hyperedge embedding generation, the paper is not easy to follow. \nThe introduction of ``hypergraph`` takes too much space in the preliminaries, while the problem of generating hyperedge embeddings is the key of the paper. \n\nFurther, the experiments only present the several models this paper describes. \nSome recent papers about hypergraph and graph structure (even though they cannot generate embeddings directly) are still worth mentioning and comparing in the experimental section. It would be persuasive to mention related methods on similar tasks. \n\nIt would be better if the authors could add some related work about hypergraph studies. ", "This paper addresses the problem of embedding sets into a finite-dimensional vector space where the sets have the structure that they are hyper-edges of a hypergraph. It presents a collection of methods for solving this problem and most of these methods are only adaptations of existing techniques to the hypergraph setting. The only novelty I find is in applying node2vec (an existing technique) on the dual of the hypergraph to get an embedding for hyperedges. \n\nFor several of the proposed methods, they have to rely on unexplained heuristics (or graph approximations) for the adaptation to work. For example, why does taking the average in line 9 of Algorithm 1 solve problem (5) with the additional constraint that the \\mathbf{U}s are the same? Problem 5 is also not clearly defined: why is there a superscript $k$ on the optimization variable when the objective is a sum over all degrees $k$?\n\nIt is not clear why it makes sense to adapt sen2vec (where sequence matters) for the problem of embedding hyperedges (which is just a set). To get a sequence-independent embedding, they again have to rely on heuristics.\n\nOverall, the paper only tries to use all the techniques developed for learning on hypergraphs (e.g., tensor decomposition for k-uniform hypergraphs, approximating a hypergraph with a clique graph, etc.) to develop the embedding methods for hyperedges. It also does not show/discuss which method is more suitable to a given setting. In the experiments, they show very similar results for all methods. A comparison of the proposed methods against a baseline is missing. \n\n\n", "Dear Reviewer, \n\nThanks a lot for your response.\n\nWe have submitted a newer version of the paper. In section 4.3 we have made further changes and tried to address the scalability and choice-of-methods aspects more specifically.\n\nWe hope that you find our modifications convincing enough.\nPlease do let us know.\n\nSincere Thanks. ", "Thanks for the rebuttal and modifications to the submitted version. Experimental results do not help to choose between the different methods. Large hyperedges exist in real applications for social networks.", "Dear Reviewer,\n\nPlease find our replies in-line below.\n \n* typos\n* Recent surveys on graph embeddings have been published in 2017 and should be cited as \"A comprehensive survey of graph embedding ...\" by Cai et al.\n\nØ This we have taken care of. \n\n* Preliminaries. The occurrence number R(g_i) are not modeled in the hypergraphs. A graph N_a is defined but not used in the paper.\n\nØ We have used it in our methods.\n\n* Section 3.1. the procedure for sampling hyperedges in the lattice should be given. 
At least, you should explain how it is made efficient when the number of nodes is large.\n \nØ We have taken care of this by explaining the procedure in further detail.\n \n* Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. This is discussed in Section 3.6 but the discussion is not convincing enough.\n \nØ Exact prediction of a “large” set is rarely found in real-world applications. If we need to go beyond size six we can always employ distributed hypergraph computation frameworks like MESH [2] or HyperX [3].\n \n* Section 3.3 The term Sen2vec is not common knowledge\n* Section 3.3 The length of the sentences depends on the number of permutations of $k$ elements. How can you deal with large k ?\n \t\nØ This is a baseline method, as we have clarified, and therefore this is an inherent limitation of the baseline which we have adapted for comparison purposes. However, if we still need to use this baseline with large ‘k’ we can again use distributed hypergraph computation frameworks like MESH [2] or HyperX [3], or in general any other distributed computation for scalable enumeration. \n \t\n* Section 3.4 and Section 3.5. The methods proposed in these two sections should be related with previous works on hypergraph kernels. I.e. there should be mentioned on the clique expansion and star expansion of hypergraphs. This leads to the question why graph embeddings methods on these expansions have not be considered in the paper.\n \nØ We have considered the most popular normalized hypergraph Laplacian, which has been extensively used in various application domains and is considered state of the art. Not just clique/star expansion: there are a number of other hypergraph Laplacians that can be employed. The paper by Agarwal et al. [1] lists several such Laplacians, and in fact shows that a number of them are actually equivalent. In particular, the clique/star expansion ones are shown to be equivalent to the normalized Laplacian we have employed in our work. We therefore leave the exploration of such hypergraph Laplacians, which actually work on a proxy graph, for future work. \n \n* Section 4.1. Only hyperedges of cardinality in [2,6] are considered. This seems a rather strong limitation and this hypothesis does not seem pertinent in many applications.\n \nØ Our algorithms in general work for any given cardinality range [c_min,c_max]. In the datasets used we found that a large portion of the hyperedges were found in the range [2,6]. Therefore, for our experimentation purposes this was a suitable choice. If we need to go beyond size six, or any larger c_max, we can always use distributed hypergraph computation frameworks like MESH [2] or HyperX [3].\n \n* Section 4. For online multi-player games, hypernode embeddings only allow to evaluate existing teams, i.e. already existing as hyperedges in the input hypergraph. One of the most important problem for multi-player games is team making where team evaluation should be made for all possible teams.\n \nØ We think that team formation is a separate problem in its own right. Team performance is one of the problems we have chosen to illustrate the use of hyperedge embeddings as a social science application. \n \n* Section 5. Seems redundant with the Introduction.\n \nØ We have taken care of this.\n\nReferences:\n\n[1] Agarwal, Sameer, Kristin Branson, and Serge Belongie. 
\"Higher order learning with graphs.\" In Proceedings of the 23rd international conference on Machine learning, pp. 17-24. ACM, 2006.\n\n[2] Enabling Scalable Social Group Analytics via Hypergraph Analysis Systems. Benjamin Heintz and Abhishek Chandra. In the USENIX Workshop on Hot Topics in Cloud Computing (HotCloud). Santa Clara, CA. July, 2015.\n\n[3] J. Huang, R. Zhang, and J. X. Yu, “Scalable hypergraph learning and processing,” in Proc. of ICDM, Nov 2015, pp. 775–780.", "Dear Reviewer, \n\n> This paper studies the problem of representation learning in hyperedges. The author claims their novelty for using several different models to build hyperedge representations. To generate representations for hyperedge, this paper proposes to use several different models such as Denoising AutoEncoder, tensor decomposition, word2vec or spectral embeddings. Experimental results show the effectiveness of these models in several different datasets.\n \n> The author uses several different models (both recent studies like Node2Vec / sen2vec, and older results like spectral or tensor decomposition). The idea of studying embedding of a hypergraph is interesting and novel, and the results show that several different kinds of methods can all provide meaningful results for realistic applications.\n \n> Despite the novel idea about hyperedge embedding generation, the paper is not easy to follow. The introduction of ``hypergraph`` takes too much space in preliminary, while the problem for generating embeddings of hyperedge is the key of paper.\n\nØ We have taken care these. \n\n> Further, the experiments only present several models this paper described. Some recent papers about hypergraph and graph structure (even though cannot generate embeddings directly) are still worth mention and compare in the experimental section. It will be persuasive to mention related methods in similar tasks. it would better better if the author can add some related work about hyperedge graph studies.\n\n Ø We have revised our related work.", "Dear Reviewer, \n\nThanks for you valuable comments, we have responded to them in-line below. \n\n> This paper addresses the problem of embedding sets into a finite dimensional vector space where the sets have the structure that they are hyper-edges of a hyper graph. It presents a collection of methods for solving this problem and most of these methods are only adaptation of existing techniques to the hypergraph setting. \n\nØ We agree our paper lacked a comprehensive clarification of the baseline methods, leading to some confusion. We have revised the paper with this clarification. Specifically, we propose two methods: hypergraph tensor decomposition and hypergraph auto-encoder, as our main contributions. Both these methods are designed to take into account the hypergraph structure in a principled manner. Rest all the methods are adaptations of existing graph or language models which have to be adapted by use of proxy or heuristics as they are not designed for hypergraphs. In this sense, our proposed techniques are more general. 
\n\n> The only novelty I find is in applying node2vec (an existing technique) on the dual of the hypergraph to get an embedding for hyperedges.\n \nØ Regarding the novelty aspect, we have clearly listed the novelties we claim in the paper’s introduction, which we again summarize as follows: \no We propose the concept of a dual tensor, which is itself novel and allows us to get hyperedge embeddings directly.\no Our proposed hypergraph tensor decomposition method is designed for general hypergraphs (containing different cardinality hyperedges). Therefore, this tensor decomposition is different from simple uniform hypergraph tensor decomposition, which is restricted to fixed-cardinality hyperedges (i.e. uniform hypergraphs). \no The use of a de-noising auto-encoder in a hypergraph setting is novel. The idea of creating noise using random walks over the Hasse diagram topology is original and unique.\n\nApart from the methods we propose, we have used several interesting tricks and heuristics in our baselines while adapting them for the hypergraph setting. \no Use of node2vec over the hypergraph dual (the reviewer has pointed this out himself).\n\no Using hyperedges to model sentences is a novel idea and opens up possibilities of various applications using higher-order topological methods for modeling language structure. We show one possible application. \n\no Adapting set-structured data to fit a sequence-based language model using proxy text is an interesting idea.\n \n> For several methods proposed, they have to rely on unexplained heuristics (or graph approximations) for the adaptation to work. For example, why taking average line 9 Algorithm 1 solves problem (5) with an additional constraint that \\mathbf{U}s are same? \n\nØ Although it is a heuristic, in our implementation we empirically observe that our algorithm converges successfully. Averaging can be interpreted as equal contribution from the latent factors learned from different cardinality (uniform) sub-hypergraphs. Also, the optimization objective of problem (5) is unweighted. \n\n> Problem 5 is also not clearly defined: why is there superscript $k$ on the optimization variable when the objective is sum over all degrees $k$?\n \nØ We have clarified the problem definition more precisely.\n \n> It is not clear why it makes sense to adapt sen2vec (where sequence matters) for the problem of embedding hyperedges (which is just a set). To get a sequence independent embedding, they again have to rely on heuristics.\n \nØ As clarified above, hyperedge2vec using sen2vec is a baseline method. Given that sen2vec is for sequences, we have to generate a proxy node sequence, i.e. proxy text, to be used as input for sen2vec.\n \n> Overall, the paper only tries to use all the techniques developed for learning on hypergraphs (e.g., tensor decomposition for k-uniform hypergraphs, approximating a hypergraph with a clique graph etc.) to develop the embedding methods for hyperedges. It also does not show/discuss which method is more suitable to a given setting. In the experiments, they show very similar results for all methods. Comparison of proposed methods against a baseline is missing.\n \nØ As pointed out previously, overall we propose two methods which are principally designed to handle hypergraph-structured data. Our experiments show that, accuracy-wise, our methods perform similarly to baselines which are not designed for hypergraphs. Moreover, our tensor-based method is quite efficient compared to the deep-learning based auto-encoder method. 
We therefore argue that we have proposed more general methods which are suited for hypergraphs (and therefore also for graphs) while maintaining accuracy and efficiency. " ]
[ 5, 5, 5, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJ5C67-C-", "iclr_2018_rJ5C67-C-", "iclr_2018_rJ5C67-C-", "SyHasMG4M", "HyhIPu67f", "H1kAEtYlz", "rJvDxGceG", "S1teFU6gG" ]
iclr_2019_B1gabhRcYX
BA-Net: Dense Bundle Adjustment Networks
This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method.
accepted-oral-papers
The first reviewer summarizes the contribution well: This paper combines [a CNN that computes both a multi-scale feature pyramid and a depth prediction, which is expressed as a linear combination of "depth bases"]. This is used to [define a dense re-projection error over the images, akin to that of dense or semi-dense methods]. [Then, this error is optimized with respect to the camera parameters and depth linear combination coefficients using Levenberg-Marquardt (LM). By unrolling 5 iterations of LM and expressing the damping parameter lambda as the output of an MLP, the optimization process is made differentiable, allowing back-propagation and thus learning of the networks' parameters.] Strengths: While combining deep learning methods with bundle adjustment is not new, reviewers generally agree that the particular way in which that is achieved in this paper is novel and interesting. The authors accounted for reviewer feedback during the review cycle and improved the manuscript, leading to an increased rating. Weaknesses: Weaknesses were addressed during the rebuttal, including better evaluation of their predicted lambda and comparison with CodeSLAM. Contention: This paper was not particularly contentious; there was a score upgrade due to the efforts of the authors during the rebuttal period. Consensus: This paper addresses an interesting area of research at the intersection of geometric computer vision and deep learning and should be of considerable interest to many within the ICLR community. The discussion of the paper highlighted some important nuances of terminology regarding the characterization of different methods. This paper was also rated the highest in my batch. As such, I recommend this paper for an oral presentation.
test
[ "r1x8O_Sw3X", "SylPHRPDnQ", "BkgvvbtzkN", "H1ljP2vqAQ", "r1xqEgFcCX", "H1gAMXd90X", "rkxhFe_qAm", "HkeWLII90Q", "SJx-VMJcnm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "edit: the authors added several experiments (better evaluation of the predicted lambda, comparison with CodeSLAM), which address my concerns. I think the paper is much more convincing now. I am happy to increase my rating to clear accept.\n\nI also agree with the introduction of the Chi vector, and with the use of the term of \"photometric BA\", since it was used before, even if it is unfortunate in my opinion. I thank the authors to replace reprojection by alignment, which is much clearer.\n\n---------------\n\n\nThis paper presents a method for dense Structure-from-Motion using Deep Learning:\nThe input is a set of images; the output is the camera poses and the depth maps for all the images.\nThe approach is inspired by Levenberg-Marquardt optimization (LM): A pipeline extracting image features computes the Jacobian of an error function. This Jacobian is used to update an estimate of the camera poses. As in LM optimization, this update is done based on a factor lambda, weighting a gradient descent step and a Gauss-Newton step. In LM optimization, this lambda evolves with the improvement of the estimate. Here lambda is also predicted using a network based on the feature difference.\n\nIf I understand correctly, what is learned is how to compute image features that provide good updates, how to predict the depth maps from the features, and how to predict lambda.\n\nThe method is compared against DeMoN and other baselines with good results.\n\nI like the fact that the method is based on LM optimization, which is the standard method in 'geometric bundle adjustment', while related works consider Gauss-Newton-like optimization steps. The key was to include a network to predict lambda as well.\n\nHowever, I have several concerns:\n\n* the ablation study designed to compare with a Gauss-Newton-like approach does not seem correct. The image features learned with the proposed method are re-used in an approach using a fixed lambda. If I understand correctly, there are 2 things wrong with that:\n- for GN optimization, lambda should be set to 0 - not a constant value. Several constant values should also have been tried.\n- the image features should be re-trained for the GN framework: Since the features are learned for the LM iteration, they are adapted to the use of the predicted lambda, but they are not necessarily suitable to GN optimization.\nThus, the advantage of using a LM optimization scheme is not very convincing.\n\nSince the LM-like approach is the main contribution, and the reported experiments do not show an advantage over GN-like approaches (already taken by previous work), this is my main reason for proposing rejection.\n\n* CodeSLAM (best paper at CVPR'18) is referenced but there is no comparison with it, while a comparison on the EuRoC dataset should be possible.\n\nLess critical concerns that still should be taken into account if the paper is accepted:\n\n- the state vector Chi is not defined for the proposed method, only for the standard bundle adjustment approach. If I understand correctly is made of the camera poses.\n\n- the name 'Bundle Adjustment' is actually not adapted to the proposed method. 'Bundle Adjustment' in 'geometric computer vision' comes from the optimization of several rays to intersect at the same 3D point, which is done by minimizing the reprojection errors. Here the objective function is based on image feature differences. I thus find the name misleading. 
The end of Section 3 also encourages the reader to think that the proposed method is based on the reprojection error. The proposed method is more about dense alignment for multiple images.\n\n\nMore minor points:\n\n1st paragraph: Marquet -> Marquardt\ntitle of Section 3: revisitED\n1st paragraph of Section 3: audience -> reader\ncaption of Fig 1: extractS\nEq (2) cannot have Delta Chi on the two sides. Typically, the left side should be \\hat{\\Delta \\Chi}\nbefore Eq (3): the 'photometric ..' -> a 'photometric ..'\n1st paragraph of Section 4.3: difficulties -> reason\ntypo in absolute in caption of Fig 4\nEq (6): Is B the same for all scenes? It would be interesting to visualize it.\nSection 4.5: applies -> apply\n", "I believe that the authors have a solid contribution that can be interesting for the ICLR community.\nTherefore, I recommend accepting the paper, but after revision, because the presentation and explanation of the ideas contain multiple typos and lack some details (see below). \n\nSummary:\nThe authors propose a new method called BA-Net to solve the SfM problem by explicitly incorporating geometry priors into a machine learning task. The authors focus on the Bundle Adjustment process. \n\nGiven several successive frames of a video sequence (2 frames but can be extended up to 5), BA-Net jointly estimates the depth of the first frame and the relative camera motion (between the first frame and the next one).\nThe method is based on a convolutional neural network which extracts the features of the different pyramid levels of the two images and in parallel computes the depth map of the first frame. The proposed network is based on the DRN-54 (Yu et al., 2017) as a feature extractor. \n\nThis is complemented by the linear combination of depth bases obtained from the first image.\nThe features and the initial depth are then passed to the optimization layer called the BA-layer, where the feature re-projection error is minimized by the modified LM algorithm. \n\nThe authors adapt the standard multi-view geometry constraints by a new concept of feature re-projection error in the BA framework (BA-layer), which they made differentiable. \nDifferentiable optimization of camera motion and image depth via the LM algorithm is now possible and can be used in various other DL architectures (e.g. MVS-Net can probably benefit from the BA-layer).\n\nThe authors also propose a novel depth parametrization in the form of a linear combination of depth bases, which reduces the number of parameters for the learning task, \nenables integration into the same backbone net as used for feature pyramids, and makes it possible to jointly train the depth generator and the BA-layer. \n\nAs originally proposed, the approach depicts the network operating in the two-view setting. The extensibility to more views is also possible and, as shown by the authors, proved to improve performance. It is, however, limited by the GPU capacity. \n\nOverall, the authors came up with an interesting approach to the standard BA problem. They have managed to inject the multi-view geometry priors and BA into the DL architecture. \n\nMajor comments regarding the paper:\n\nIt would be interesting to know the evaluation times for BA-Net and, more importantly, to have some implementation details to ensure reproducibility.\n\nMinor comments regarding the paper:\n\n-\tThe spacing between sections is not consistent. \n-\tFigure 1 is way too abstract given the complicated set-up of the proposed architecture. 
It would be nice to see more details on the subnet for the depth estimator and the output of the net. \nOverall, it would be helpful for reproducibility if the authors visualized all the layers of all the different parts of the network, as is commonly done in DL papers.\n-\tWhen talking about the proposed formulation of BA, use one of the following and be consistent across the paper:\nFeaturemetric BA / Feature-metric BA / Featuremetric BA / ‘Feature-metric BA’\n-\tWhen talking about the depth parametrization, use ‘basis’ or ‘bases’, not both, and clearly define the meaning of this important notion.\n-\tAttention should be given to the notation in formulas (3) and (4). The projection function there no longer accepts a 3D point parametrized by 3 variables; instead only the depth is provided. \nIn addition, the subindex ‘1’ of the point ‘q’ is not explained. \n-\tMore attention should be given to the evaluation section. Specifically, to the tables (1 and 2) with quantitative results showing the comparison to other methods. \nIt is not clear how the depth error is measured, and it would be nicer to have the other errors explained exactly as they are referred to in the tables (e.g. ATE?).\n-\tHow is the first camera pose initialized?\n-\tIn Figure 2.b I’m surprised by the difference obtained in the feature maps for images which seem very similar (only the lighting seems to be different). Are these three consecutive frames?\n-\tAttention should be given to the grammar and formatting, in particular the bibliography. \n\n\n\n", "The response has addressed enough of my concerns and I have decided to increase my rating from 6 to 7.", "\nWe thank the reviewer for the comments and appreciation, and would like to answer the reviewer’s questions as follows:\n\nQ1. The use of the word “guarantees” is imprecise:\nThanks for pointing this out. We have adjusted the wording. A theoretical analysis will be interesting future work.\n\nQ2. Whole sequence reconstruction results:\nOur current implementation only allows up to 5 images on a single 2015 TITANX GPU with 12GB of memory. This is because we implemented the whole pipeline using TensorFlow in Python, which is memory inefficient, especially during training. Each image takes about 2.3GB of memory on average, and most of the memory is consumed by the CNN features and matrix operations. But it is straightforward to concatenate multiple 5-frame segments to reconstruct a complete sequence, which is demonstrated in the comparison with CodeSLAM in Figure 7 of the revised version. It is also straightforward to implement our BA-Layer in CUDA directly to reduce the memory consumption of matrix operations and increase the number of frames.\n", "We thank the reviewer for raising the score.\n\nWe submitted the response and the revision at the last minute because a lot of extra work was done for the revision, and we wanted to ensure correctness and completeness.\n\nBut we will have a better-planned schedule for the next ICLR to fit the purpose of OpenReview. \n\n\n\n", "We thank the reviewer for the comments and appreciate that the reviewer likes our idea of including optimization in the network. But our contribution is beyond adopting Levenberg-Marquardt instead of Gauss-Newton. We would like to clarify several things to address the reviewer's concerns:\n\nQ1. The advantages of Levenberg-Marquardt over Gauss-Newton are unclear (the main reason for rejection):\n\nFirstly, we want to clarify that our contribution goes beyond replacing Gauss-Newton optimization with Levenberg-Marquardt. 
More importantly, our contribution is the combination of conventional multi-view geometry (i.e. joint optimization of depth and camera poses) and end-to-end deep learning (i.e. depth basis generator learning and feature learning). This contribution is achieved by our differentiable LM optimization that allows end-to-end training. \n\nSecondly, we agree with the reviewer that comparing with the Gauss-Newton algorithm would be interesting and have added such a comparison in Appendix B of the revised version according to the reviewer’s suggestions: \n\n 1. We retrained the whole pipeline with Gauss-Newton, to make sure the features are learned specifically for Gauss-Newton.\n\n 2. We compared with various constant lambda values to see how the performance varies along with lambda. Note that we also fine-tune the network to make sure the features fit different lambda values. \n\nIn Table 4 of the revised version (Appendix B), our method outperforms the Gauss-Newton algorithm in the last column. This is because the objective function to be optimized is non-convex, and the vanilla Gauss-Newton method might get stuck at a saddle point or local minimum. This is why the Levenberg-Marquardt algorithm is the standard choice for conventional bundle adjustment.\n\nIn Figure 6 of the revised version (Appendix B), our method also consistently performs better than different constant lambda values. This is because the value of lambda should be adapted to different data and optimization iterations. There is no ‘optimal’ constant lambda for all data and iterations.\n\n\nQ2. Comparison with CodeSLAM:\nWe have included that in Figure 7 of the revised version (Appendix E). Since there is no public code for CodeSLAM, we cite its results directly from the CodeSLAM paper.\n\nQ3. The state vector Chi is not defined for the proposed method.\nThe Chi is defined in Section 3 as the vector containing all camera poses and point depths. Since our method also solves for these unknowns as in classic methods, we did not redefine the Chi. But in the revised version we have recapped the definition of Chi when introducing our method at the beginning of Section 4.\n\nQ4. Should the paper be called Bundle Adjustment?:\nThe term ‘Bundle Adjustment’ is originally used to refer to the joint optimization of 3D scene points and camera poses by minimizing the reprojection error. The keyword Bundle comes from the fact that a bundle of camera view rays pass through each of the 3D scene points. Multiple recent works, e.g. [Engel et al., 2017; Delaunoy and Pollefeys, 2014], have generalized it to “photometric BA” where scene points and camera poses are optimized together by minimizing the photometric error. Our method is along this line. But we further improve the photometric error to a feature-metric error. Each 3D scene point is still constrained by a bundle of camera view rays, though the error function has been changed. So we believe it is justified to call this method feature-metric BA. \n\nBut we agree with the reviewer that the word ‘reprojection’ is misleading when we introduce our feature-metric BA and the photometric BA. So we use the word ‘align’ as the reviewer suggested and use ‘reprojection’ only for the geometric BA.\n\nQ5. Is B the same for all scenes?:\nIn the revised version, we added Figure 8 to visualize the term B in Equation 7 (page 6) for different scenes. We can clearly see that it is scene-dependent. 
\n\nQ6. Typos:\nWe have fixed all the typos as suggested in the revised version.\n", "\nWe thank the reviewer for the comments. We have revised the paper according to the suggestions and would like to clarify several things:\n\nQ1. Evaluation Time: \nWe have added the detailed running time for each component in Table 3 in Appendix A of the revised version.\n\nQ2. Implementation Details: \nWe will share all the source code to make sure it is reproducible. Meanwhile, we have included more details as suggested in Appendix A, including a visualization of all layers of the different parts of the network. If 1-2 extra pages are allowed, we can include those details in the paper.\n\nQ3. Figure 1 is too abstract:\nWe have updated the figure to make it more intuitive and include more details.\n\nQ4. The top row of Figure 2b is confusing:\nWe apologize for the confusion caused. Shown in the top row of Figure 2b are not three consecutive frames. They are the R, G, B channels of a single frame. To avoid confusion, we use different colors for them and explain this in the figure.\n\nQ5. How is the first camera pose initialized?:\nAll the camera poses, including the first one, are initialized with identity rotation and zero translation, aligned with the coordinate system of the first camera. We clarified this at the end of Section 4.3 in the revised version.\n\nQ6. Evaluation metrics are not clear:\nTo facilitate comparisons with other methods, we use the evaluation metrics from previous works in Tables 1 and 2, so that we can cite the results of previous methods. As we described in the paper, the depth metrics are the same as in Eigen and Fergus (2015). The translation metrics (ATE) are the same as in [Wang et al. 2018, Zhou et al. 2017]. In the revised version, we briefly introduce the definition of these metrics at the beginning of each paragraph in Section 5.2.\n\nQ7. Attention should be given to the notation in formulas (3) and (4):\nWe changed the parameters from ‘d’ to ‘d \\cdot p’ which is a 3D point. We also removed the redundant subindex ‘1’, because all points ‘q’ are on the first frame.\n \nQ8. Terminology consistency throughout the paper:\nThanks for the suggestion. We consistently use the terms “feature-metric BA” and “basis depth maps” throughout the paper now.\n\nQ9. Typos, Grammar, Format, and Bibliography:\nThanks for pointing them out. We have revised the paper to fix these problems.", "We thank all the reviewers for their insightful comments. We have revised the paper as suggested by the reviewers, and summarize the major changes as follows:\n\n* Network architecture details and evaluation times requested by Reviewer 2 are added as Appendix A.\n\n* Figure 1 is updated to include more details, as requested by Reviewer 2.\n\n* Ablation study comparisons with Gauss-Newton and different constant lambda values requested by Reviewer 3 are added in Appendix B.\n\n* The comparison with CodeSLAM on EuRoC requested by Reviewer 3 is added in Appendix E.\n\nWe would also like to ask the reviewers whether one extra page is allowed, so that we can include more details and comparisons and make the paper more informative to ensure reproducibility. 
We targeted 8 pages in the initial submission, but according to the reviewers’ comments, it would be helpful to have more details in the main text.\n\nThe other concerns raised by the reviewers have also been addressed individually.", "This paper presents a novel approach to bundle adjustment, where traditional geometric optimization is paired with deep learning.\nSpecifically, a CNN computes both a multi-scale feature pyramid and a depth prediction, expressed as a linear combination of \"depth bases\".\nThese values are used to define a dense re-projection error over the images, akin to that of dense or semi-dense methods.\nThen, this error is optimized with respect to the camera parameters and depth linear combination coefficients using Levenberg-Marquardt (LM).\nBy unrolling 5 iterations of LM and expressing the damping parameter lambda as the output of an MLP, the optimization process is made differentiable, allowing back-propagation and thus learning of the networks' parameters.\n\nThe paper is clear, well organized, well written and easy to follow.\nEven if the idea of joining BA / SfM and deep learning is not new, the authors propose an interesting novel formulation.\nIn particular, being able to train the CNN with a supervision signal coming directly from the same geometric optimization process that will be used at test time allows it to produce features that will make the optimization smoother and the convergence easier.\nThe experiments are quite convincing and seem to clearly support the efficacy of the proposed method.\n\nI don't really have any major criticism, but I would like to hear the authors' opinions on the following two points:\n\n1) On page 5, the authors write \"learns to predict a better damping factor lambda, which gaurantees that the optimziation will converged to a better solution within limited iterations\".\nI don't really understand how learning lambda would _guarantee_ that the optimization will converge to a better solution.\nThe word \"guarantee\" usually implies that the effect can be somehow mathematically proved, which is not done in the paper.\n\n2) As far as I can understand, once the networks are learned, possibly on pairs of images due to GPU memory limitations, the proposed approach can be easily applied to sets of images of any size, as the features and depth predictions can be pre-computed and stored in main system memory.\nGiven this, I wonder why all experiments are conducted on sets of two to five images, even for KITTI where standard evaluation protocols would demand predicting entire sequences." ]
[ 8, 7, -1, -1, -1, -1, -1, -1, 9 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_B1gabhRcYX", "iclr_2019_B1gabhRcYX", "rkxhFe_qAm", "SJx-VMJcnm", "r1x8O_Sw3X", "r1x8O_Sw3X", "SylPHRPDnQ", "iclr_2019_B1gabhRcYX", "iclr_2019_B1gabhRcYX" ]
iclr_2019_B1l08oAct7
Deterministic Variational Inference for Robust Bayesian Neural Networks
Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal with uncertainty when learning from finite data. Among approaches to realize probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. With wide recognition of potential advantages, why is it that variational Bayes has seen very limited practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as controlling the variance of Monte Carlo gradient estimates. We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On the application of heteroscedastic regression we demonstrate good predictive performance over alternative approaches.
accepted-oral-papers
The manuscript proposes deterministic approximations for Bayesian neural networks as an alternative to the standard Monte-Carlo approach. The results suggest that the deterministic approximation can be more accurate than previous methods. Some explicit contributions include efficient moment estimates and empirical Bayes procedures. The reviewers and ACs note a weakness in the breadth and complexity of models evaluated, particularly with regard to ablation studies. This issue seems to have been addressed to the reviewers' satisfaction by the rebuttal. The updated manuscript also improves references to related prior work. Overall, reviewers and AC agree that the general problem statement is timely and interesting, and the work is well executed. We recommend acceptance.
train
[ "H1eOIrXYhm", "HyeV1yHgAm", "HJxcV9EgRX", "rJex4YNeCQ", "H1g0a1ir2Q", "rJexO5ZynQ" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a new approach to perform deterministic variational inference for feed-forward BNN with specific nonlinear activation functions by approximating layerwise moments. Under certain conditions, the authors show that the proposed method achieves better performance than existing Monte Carlo variational inference. This paper is interesting since most of the existing works focus on Monte Carlo variational inference. The main contribution of this paper is to perform Gaussian approximation. The authors show that for specific activation functions, the Gaussian approximation is reasonable. The main concern is the cumulative error due to the Gaussian approximation. Since the authors argue that the proposed method fixes the issues of stochastic VI for BNN, the authors should also investigate/clarify the following cases. \n(1) A deep BNN to show that the cumulative error is negligible as the number of the hidden layers increases \n(2) Small latent dimension since CLT may not hold\n(3) A heavy-tailed variational distribution since the second moment may not be finite \n(4) Other nonlinear activations since the Gaussian approximation may not be accurate due to (generalized) Berry-Esseen theorem\n(5) A BNN with skip connections since a Bayesian multiplayer perceptron with skip connections is also a feed-forward BNN\n \nAmong these cases, I am eager to see some results on a deep thin BNN. For example, a BNN with 5 hidden layers, where the latent dimension at each layer is less than 32. \nFurthermore, I would like to see some empirical comparison on real-world datasets between DVI and MCVI under a *fixed* prior since such comparison demonstrates the approximation accuracy of DVI and rule out the confounding factor introduced by the empirical Bayes approach.\n\n", "Thank you for your detailed and enthusiastic review. We have updated the paper and address specific questions below\n\n> The term \"fixing VB\" and some of the intro is not really supported… the authors may tone down their language a bit.\n\nWe have removed “fixing VB” from the title and removed strong phrases in the abstract and introduction.\n\n> While Barber&BIshop 98 is cited, they miss the expression for <h_j h_l> in there. Now, what is done here, is more elegant, does not need 1D quadrature.\n\nWe have added these comments to our related work section.\n\n> Can you do something with your posterior that normal DNN methods cannot do?\n\nStandard training of DNNs which returns point-parameter estimates (i) results in poorly calibrated predictive uncertainty estimates (notably predictions are often confidently wrong), (ii) does not support model-based sequential decision making (e.g. active learning), and (iii) suffers from catastrophic forgetting when trained in the continual learning setting. BNNs have been shown to substantially improve upon standard models / training in these three settings (see e.g. [1,2,3]). The new innovations proposed in this paper will be applied to these areas in future work. In the second two application areas – sequential decision making and continual learning – approximate Bayesian inference must be run as an inner loop of a larger algorithm. This requires a robust and automated version of BNN training: this is precisely where we believe the innovations in this paper will have large impact since they pave the way to automated and robust deployment of BBNs that do not involve an expert in-the-loop. 
We have included these points as motivational future work items in the paper conclusion.\n\n> what is q(w) [the variational family used in the experiments]?\n\nOur method is not limited to fully factorized Gaussian variational distributions (any distribution with a tractable first and second moment could be used). However, for computational simplicity, our experiments do use a fully factorized Gaussian q(w). We have added this detail to the experimental section.\n\n> Why not evaluate at least dDVI with diagonal q(w) on some much larger models and datasets? \n\nWe have added an appendix C that evaluates the performance of DVI in larger models including deep networks with skip connections. Regarding larger datasets, our evaluation focuses on assessing the robustness of the new methods and how automatic they are. The experiments do consider nine different datasets, following established practice for evaluating new approximate inference methods for BNNs (see e.g. [4,5,6]). We evaluate the proposed methods using many different model variants (hetero vs homoscedastic, MC vs different deterministic approximations, different prior settings, various methods for parameterising the variance, etc.). In this way we have prioritized a comprehensive assessment of the myriad design decisions, rather than assessing a relatively small number of design decisions on a larger number of datasets. Whilst we acknowledge that, since the benchmarks are relatively simple, this work is just a first step towards a completely comprehensive evaluation, we believe that the experiments provide a solid foundation for this longer-term enterprise. \n\n> Was MCVI run with re-parameterization? \n\nWe run vanilla MCVI; re-parameterization is discussed in section E of the appendix, and results using re-parameterization appear in Table 3. We have added this clarification and pointers to section E in the main text.\n\n> Relation to PBP: Note that dDVI has an advantage in practice… Why not show the PBP-1 results, comparing to dDVI, in the main text? Are they obtained with the same model? dDVI is doing better.\n\nTable 3 is too large to be included in the main text and, although we perform the comparison with PBP using the same model, we don’t want to move the results to the main text because our method has clear qualitative advantages over PBP, as you highlight: 1) we handle batches of data and do not have to process one data point at a time, 2) we account for correlations in the forward pass and in the posterior distribution, and 3) we can account for heteroskedastic noise. We have clarified these advantages in the related work section.\n\n> Compare against [dropout-like methods], and show it really does not work?\n\nOur extended results table (Table 3) in the appendix includes results using dropout.\n\n[1] Known Unknowns: Uncertainty Quality in BNNs, R Oliveira et al., NIPS BDL workshop, 2016 \n[2] Deep Bayesian Active Learning with Image Data, Y Gal et al., PMLR 70:1183-1192, 2017\n[3] Variational Continual Learning, CV Nguyen et al., ICLR 2018\n[4] Deep Gaussian Processes for Regression using Approximate Expectation Propagation, T Bui et al., ICML 2016 \n[5] Black-box alpha-divergence minimization, JM Hernández-Lobato et al., ICML 2016\n[6] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, B Lakshminarayanan et al., NIPS 2017
We address specific questions below with reference to new sections in the paper.\n\n> I wonder if the Gaussian and moment propagation approximations cause difficulty when applied repeatedly in deeper networks\n\nWe have added a new Appendix C that studies deeper neural networks. We also include a 5-layer, 125-unit network (“deep, wide”), a 5-layer, 25-unit network (“deep, narrow”) and a 5-layer, 5-unit network (“deep and impractically narrow”). For the practically relevant 25- and 125-unit cases, we observe good fits to data and qualitatively good agreement of our approximation with Monte Carlo (MC) simulation using 20k samples. For the extremely narrow 5-unit case we see significantly non-Gaussian output distributions in MC simulation due to failure of the central limit theorem underlying our approximations. These experiments cover the range of behaviour expected in our method and demonstrate that our method does work for deep networks in the practically relevant regime where at least a few tens of hidden units are used. In addition, we have derived the required results to incorporate skip connections into our method to help training on even deeper networks. All of these results are summarized in figure 6.\n\n> Are the problems with MCVI and high gradient variance most serious for large datasets and more complex models? If so, a comparison of DVI with MCVI in a more complex example is of interest.\n\nOur evaluation focuses on assessing the robustness of the new methods and how automatic they are. The experiments do consider nine different datasets (containing up to 45k examples), in accordance with established practice for evaluating new approximate inference methods for BNNs (see e.g. [1,2,3]). Crucially, we evaluate the proposed methods using many different model variants (hetero vs homoscedastic, MC vs different deterministic approximations, different prior settings, various methods for parameterising the variance, etc.). In this way we have prioritized a comprehensive assessment of the myriad design decisions, rather than assessing a relatively small number of design decisions on a larger number of datasets. Whilst we acknowledge that, since the benchmarks are relatively simple, this work is just a first step towards a completely comprehensive evaluation, we believe that the experiments provide a solid foundation for this longer-term enterprise. \n\n> I don't feel there is much to compare the proposed EB approximations to, although a comparison with manual tuning is given in Section 6.\n\nTo complement our comparison with manual tuning, we have added section D.1 and Table 5 to the appendix to give an ablation study corresponding to all combinations of DVI or MCVI with fixed or EB priors. Note that when running with a fixed prior, we select the best prior variance by a separate hyperparameter sweep on each dataset (cf. figure 5). Besides eliminating this tuning overhead, EB maintains a small performance advantage over manual tuning because it automatically finds different prior variances for each weight matrix, whereas we only manually tune the global fixed prior variance.\n\n[1] Deep Gaussian Processes for Regression using Approximate Expectation Propagation. 
Thang Bui, José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, and Rich Turner \nICML 2016 \n[2] Black-box alpha-divergence minimization José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, and Rich Turner \nICML 2016\n[3] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell, NIPS 2017\n", "Thank you for your great recommendations for additional studies. In replying to your questions we created some new sections that are good improvements to the paper:\n\n> (1) A deep BNN to show that the cumulative error is negligible as the number of hidden layers increases\n> (2) Small latent dimension since the CLT may not hold … I am eager to see some results on a deep thin BNN. For example, a BNN with 5 hidden layers, where the latent dimension at each layer is less than 32\n\nWe have added a new Appendix C that studies deeper neural networks:\nTo address request (1), we studied a 5-layer, 125-unit network, and we observe good fits to data and qualitatively good agreement of our Gaussian approximation with Monte Carlo (MC) simulation using 20k samples (see the new figure 6a).\nTo address request (2), we additionally study a 5-layer, 25-unit network, and we again see good fits and qualitatively reasonable performance of our approximation (figure 6b). For completeness, we include an extreme case: 5 layers, 5 units. In this extreme case we do see significantly non-Gaussian output distributions in MC simulation due to failure of the central limit theorem underlying our approximations. Since such narrow networks are not of significant practical importance, we do not see this as a major problem with our method.\nWe thank the reviewer for recommending these studies – including a demonstration that failure cases only arise in impractically narrow architectures helps to justify our use of the CLT.\n\n> (3) A heavy-tailed variational distribution since the second moment may not be finite\n\nThe reviewer is correct that our method relies on a variational distribution with finite first and second moments, for which the CLT holds. We have added clarification of these necessary conditions in the text. Note that *only* the first and second moments of the variational distribution are required to compute the reconstruction log-probability in the ELBO (i.e. only <W> and Cov(W,W) appear in equation 3). The precise form of the variational distribution is only required to evaluate the KL term in the ELBO, and therefore it is easy to apply our method to any variational family with finite moments which has a closed form KL with a suitable prior.\n\n> (4) Other nonlinear activations since the Gaussian approximation may not be accurate due to the (generalized) Berry-Esseen theorem\n\nWe provide results on the Heaviside and ReLU nonlinearities. Other useful and commonly deployed nonlinearities are (generally speaking) “softened” and translated versions of either a Heaviside or ReLU nonlinearity (e.g. tanh is a soft Heaviside and elu is a soft ReLU). Note that the nonlinearity only appears in equations 4 and 5, where it is being convolved with the Gaussian activation distribution. Since this convolution already softens the hard nonlinearities (e.g. see the smooth functions plotted in figure 2), changes in the intrinsic softness of the underlying nonlinearity are qualitatively equivalent to using a hard nonlinearity and adjusting the convolving Gaussian covariance. 
For this reason, we do not think there will be considerable benefit from exploring other nonlinearities. We believe that any gain is likely not worth the considerable work required to find closed-form approximations for the integrals in equations 4 and 5 for arbitrary nonlinearities.\n\n> (5) A BNN with skip connections since a Bayesian multilayer perceptron with skip connections is also a feed-forward BNN\n\nThis was fairly simple to add to our method, and we thank the reviewer for suggesting this nice addition. Specifically, we have added derivations of the integral results required to implement a network with skip connections in a new Appendix C.1 and include a figure showing that our approximation works in a deep network with skip connections in Fig 6d.\n\n> I would like to see some empirical comparison on real-world datasets between DVI and MCVI under a *fixed* prior since such comparison demonstrates the approximation accuracy of DVI and rules out the confounding factor introduced by the empirical Bayes approach.\n\nWe have added section D.1 and Table 5 to the appendix to give an ablation study corresponding to all combinations of DVI or MCVI with fixed or EB priors. Note that when running with a fixed prior, we select the best prior variance by a separate hyperparameter sweep on each dataset (cf. figure 5). Besides eliminating this tuning overhead, EB maintains a small performance advantage over manual tuning because it automatically finds different prior variances for each weight matrix, whereas we only manually tune the global fixed prior variance.\n", "This paper considers a purely deterministic approach to learning variational posterior approximations for Bayesian neural networks. Variational lower bound gradients are obtained by approximating the lower bound using Gaussian approximations and moment propagation for network activations, and using a closed-form expression for the variational expectation of the log-likelihood, the latter being available for the models considered in the paper. \n\nThis is an interesting paper. The Gaussian approximations and moment propagation approximations are clever and highly original, although the derivation is rather heuristic. There is some empirical support that the approximations work well. The paper is generally well written and clearly motivated in the context of the existing literature.\n\nThe approximations work well for the examples presented in the paper. The experiments are for rather small datasets and, for the DVI method, if I understand correctly only models with a single hidden layer are considered. I wonder if the Gaussian and moment propagation approximations cause difficulty when applied repeatedly in deeper networks. Are the problems with MCVI and high gradient variance most serious for large datasets and more complex models? If so, a comparison of DVI with MCVI in a more complex example is of interest. The empirical Bayes approximations are interesting - I would have thought similar approximations had been used in the literature before, in addition to the work you mention in Section 5? I don't feel there is much to compare the proposed EB approximations to, although a comparison with manual tuning is given in Section 6. \n\n", "Summary:\n\nThis work is tackling two difficulties in current VB applied to DNNs (\"Bayes by backprop\"). First, MC approximations of intractable expectations are replaced by deterministic approximations. While this has been done before, the solution here is new and very interesting. 
Second, a Gaussian prior with length scales is learned by VB empirical Bayes alongside the normal training, which is also very useful.\n\nThe term \"fixing VB\" and some of the intro are not really supported by the rather weak experiments, done on small datasets and networks, where much older work like Barber&Bishop would apply without any problems. While interesting and potentially very useful novelties are presented, and the writing is excellent, both experiments and motivation can be improved.\n\n- Quality: Extremely well written paper, I learned a lot from it. Approximations are\n tested, great figures to explain things. And the major technical novelty, the\n expression for <h_j h_l>, is really interesting and useful.\n- Clarity: Excellent writing until it comes to the experiments. Here, important\n details are just missing, for example what q(w) is (fully factorized Gaussian?).\n Very nice literature review, also historical.\n- Originality: The idea of matching Gaussian moments along the network graph was\n previously done in PBP (Lobato, Adams), as acknowledged here. Porting this from\n ADF to VB gives dDVI. PBP also has the property that a DL system gives you the\n gradients. Having said that, I think dDVI may be more useful than PBP.\n While Barber&Bishop 98 is cited, they miss the expression for <h_j h_l> in\n there. Now, what is done here is more elegant, does not need 1D quadrature.\n- Significance: Judging from the existing experiments, the significance may be\n rather small, *if one only looks at test log likelihood*. I'd still give this the\n benefit of the doubt, as in particular dDVI could be really interesting at large\n scale as well. But the authors may tone down their language a bit.\n To increase significance, I recommend commenting beyond just test log\n likelihood scores. For example:\n - Does the optimization become simpler, less tuning required, more automatic?\n Would one not expect so, given you make a big point out of reducing variance?\n Does it converge faster?\n - Can you do something with your posterior that normal DNN methods cannot\n do? Better decisions (bandits, active learning, HPO)? Continual learning?\n In the end, who really cares about test log likelihood?\n\nExperiments:\n- What is the q(w) family being used here? Fully factorized Gaussian? I\n suppose so for dDVI. But for DVI? Not said anywhere, in main paper or\n Appendix\n- A bit disappointing. Why not evaluate at least dDVI with diagonal q(w) on\n some much larger models and datasets? Why not quote numbers on speed\n and robustness of learning, etc.? Show what you really gain by reducing the\n variance.\n- Experiments are OK, but on pretty small datasets, and for single hidden\n layer NNs. On such data and models, the Barber&Bishop 98 method could\n be run as well\n- Was MCVI run with re-parameterization? This is really important. If not,\n this would be an important missing comparison. Please be clear in the main\n text\n- Advantages over MCVI are not very large. At least, dDVI should be faster to\n converge than MCVI.\n Can you say something about robustness of training? Is it easier to train\n dDVI than MCVI?\n- Why not show the PBP-1 results, comparing to dDVI, in the main text? Are they\n obtained with the same model? dDVI is doing better.\n\nOther points:\n- Please acknowledge the <h_j h_l> expression in Barber&Bishop 98. Yours is\n more elegant and faster (does not need 1D quadrature)\n- Relation to PBP: Note that dDVI has an advantage in practice. 
With PBP, I need\n to compute gradients for every datapoint. In dDVI, I can do mini-batch\n updates.\n- I just *love* the header \"Wild approximations\". I tend to refer to this kind of work\n as \"weak analogies\". Why do you not also compare against this, and show it really\n does not work?\n" ]
[ 7, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, -1, 3, 5 ]
[ "iclr_2019_B1l08oAct7", "rJexO5ZynQ", "H1g0a1ir2Q", "H1eOIrXYhm", "iclr_2019_B1l08oAct7", "iclr_2019_B1l08oAct7" ]
iclr_2019_B1l6qiR5F7
Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
accepted-oral-papers
This paper presents a substantially new way of introducing a syntax-oriented inductive bias into sentence-level models for NLP without explicitly injecting linguistic knowledge. This is a major topic of research in representation learning for NLP, so to see something genuinely original work well is significant. All three reviewers were impressed by the breadth of the experiments and by the results, and this will clearly be among the more ambitious papers presented at this conference. In preparing a final version of this paper, though, I'd urge the authors to put serious further effort into the writing and presentation. All three reviewers had concerns about confusing or misleading passages, including the title and the discussion of the performance of tree-structured models so far.
train
[ "B1gIbtAKRm", "BkgiNwT7h7", "B1xh_mvdRX", "HkgTokYQCX", "Bkgp-SFxRQ", "Skxnyrtx0m", "SyeNaVYxCm", "Bygp1Apv2m", "H1ewsJDcjm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Regarding “LSTM’s performance consistently lags behind that of tree-based models”. \nOn sentence embedding tasks (e.g SNLI) and sequential labeling tasks (e.g sentiment analysis), TreeLSTM has shown better performance compared to vanilla LSTM. We’ve also updated the abstract according to the reviews.\n\nRegarding “the performance difference for different layers”.\nThanks for the suggestion. The main reason for choosing the number of layers was to compare with the AWD-LSTM model, so we followed the hyperparameters used there as closely as possible. We will further investigate the relationship between varying the number of layers and its effect on the parsing performance. Our hypothesis is that the first and last layer focus on low level or short term information, while the middle also include longer term information. Similar results can be found in [1].\n[1]Blevins, Terra, Omer Levy, and Luke Zettlemoyer. \"Deep RNNs Encode Soft Hierarchical Syntax.\" arXiv preprint arXiv:1805.04218 (2018).\n\nRegarding the “logical inference experiment”. \nWe follow the same experiment setting in previous paper. It's designed to test the generalizability of model.", "The paper proposes a new RNN unit: ON-LSTM. The idea is to explicitly integrates the latent tree structure into recurrent models. Experiments are conducted to evaluate performances on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference. Good results on unsupervised parsing show that the model learns something close to human judgments of the sentence parses.\n\nThe paper is clearly written, and the experiments seem planned well.\nThe language modeling results are not state-of-the-art, but the unsupervised parsing results of layer 2 are quite impressive. The analyses are reasonable.\n\nOverall, the paper seems worthy of being accepted.", "Overall, it seems that all reviewers agree that this paper is quite interesting. However, since my original review was rather short, I wanted to give some additional notes/suggestions:\n- In the abstract you say that the LSTM's \"performance consistently lags behind that of tree-based models\". Is that actually true? If so, could you maybe give examples for tasks where this is the case? (Otherwise, maybe weaken that statement.)\n- I felt that the explanation for the performance difference between the different layers for unsupervised parsing is rather vague, given that this is arguably the most important part of the paper. You mentioned earlier work which found something similar in another response and said that parsing wasn't as good for one or two layers. However, there are some additional interesting questions to ask, e.g., what happens with 4 layers? And is there a performance difference for parsing between the two layers when using two?\n- Concerning the logical inference experiment: Is there a reason why you train on <=6 operations only, but evaluate on longer sequences?\n- Finally, I agree with the other reviewers that the paper needs some editing. More examples are, e.g., Section 5.2.: \"latent stree structure\" -> \"latent tree structures\"; Section 5.3.: \"ON-LSTM perform\" -> \"ON-LSTM performs\"; Section 5.4.: \"given pair of sentences\" -> \"given pairs of sentences\", etc.\n", "Small changes look good to me.", "Regarding “The discussion of the motivation for unsupervised structure induction in the introduction is somewhat confused”. \nThanks for this suggestion. We have modified our introduction accordingly. 
\n\nRegarding “the author discuss hierarchy in terms of syntactic structure alone”. \nIt’s possible that the learned hierarchy also reflects topic and other structures. However, to our knowledge, it may be hard to quantitatively measure whether the model captures such semantic-level structures. We will further study the relationship between the induced structure and semantic units.\n\nRegarding “Why does the second layer show better unsupervised parsing performance than the third layer?”. \nOur hypothesis is that the first and last layers focus on low-level or short-term information, while the middle layer also includes longer-term information. Similar results can be found in [1].\n[1] Blevins, Terra, Omer Levy, and Luke Zettlemoyer. \"Deep RNNs Encode Soft Hierarchical Syntax.\" arXiv preprint arXiv:1805.04218 (2018).\n\nRegarding “Have you tried using only the \"master\" gates?”\nWe did try this. We found that for better language modelling performance, we still needed to use the unit-specific gates. The unsupervised parsing capability of a master-gate-only model was similar. (A minimal sketch of the cumax master-gate activation is shown below, after this record.)\n\nRegarding “Did the language model have 1150 units in each layer or in total? Why did you use exactly three layers? Did you try one, two and four?”\nThe main reason for these choices was to compare with the AWD-LSTM model, so we followed the hyperparameters used there as closely as possible. The first and second layers use 1150 units each, and the last layer has 400 units. Having tried the one- and two-layer settings, we find that the hyperparameters do not result in similar parsing performance. \n\n\nRegarding “It's not clear if the results in Table 2 reflect the best seed out of five”\nThe “max” column provides the best result among different random seeds, while the “µ (σ)” column provides the mean and standard deviation.", "It is true that our language modeling results are not state-of-the-art. The ordered neurons primarily focus on inducing the latent structure of sequential data. We wanted to demonstrate that the model is able to give good parsing results while remaining in an acceptable range in terms of perplexity. One future research direction is to improve current SOTA models with the ordered neurons.", "
We have reworded the introduction to make it clear that the overtly sequential form is an essential characteristic for natural language, not just a conventional presentation format. In addition, we have changed the sentence to say that RNN explicitly imposes a chain structure.", "Quality\n - Pro:\n o This paper was in general a quality effort. It had a thorough bibliography of both older and recent relevant research contributions\n o Providing useful, well done experimental results on four tasks was also a sign of this good thoroughness\n - Con: none observed\n\nClarity\n - Pro:\n o The paper was generally well-written and clear. Results were clearly presented.\n - Con:\n o Notwithstanding the half page of explanation of the intuition behind the new ON-LSTM update rules (top of p.5), it wasn't really enough for my old brain to get a good sense of what was going on – though I'm sure younger, smarter people will have made more sense of it. :) It would really help to try to provide more intuition and understanding here. Things that would probably really help include a worked example and diagrams.\n o There were minor English/copyediting problems, but nothing that interfered with understanding. E.g., \"monotonously\" on p.4 should be \"monotonically\" (twice).\n\nOriginality\n - Pro\n o This was REALLY NEAT! This paper had a real, clear, different idea that appeared interesting and promising. That puts it into the top half of accepted papers right there.\n o The basic idea of the different update time scales, done flexibly, controlled by the master forget/input gates seemed original, flexible, and good.\n - Con: Nothing really observed; there are clearly a bunch of slightly related ideas, well referenced in this paper.\n\nSignificance\n - Pro\n o If this idea pans out well, it would be a really interesting new structural prior to add to the somewhat impoverished vocabulary of successful techniques for building deep learning systems.\n o Has an original, promising approach. That has the opportunity for impact and significance.\n - Con:\n o The results so far are interesting, and in places promising, but not so clearly good that this idea doesn't need further evaluation of its usefulness.\n o All the results presented are on small datasets (Penn Treebank WSJ (1 million words) size or smaller). What are the prospects on bigger datasets? It looks like in principle this shouldn't be a big obstacle – except for not having a highly tuned CuDNN implementation, it looks like this should basically be fairly efficient like an LSTM and not hard to scale like, e.g., an RNNG.\n\nOther comments:\n - Some of the wording on page 1 seemed strange to me. Natural language has a linear overt form as spoken and (hence) written. It's really not that the sequential form is just how people conventionally \"present\" it. That is, it's not akin to a chemical compound which is really 3 dimensional but commonly \"presented\" by chemists in a convenient sequential notation.\n - p.2 2nd paragraph: Don't RNNs \"explicitly impose a chain structure\" not \"implicitly\"?!?\n - I wasn't sure I was sold on the name \"Ordered Neurons\". I'm not sure I have the perfect answer here, but it feels more like \"multi-timescale units\" is what is going on.\n - The LM results look good.\n - Because of all the different datasets, etc. 
it was a little hard to call the grammar induction results, but they at least look competently strong.\n - The stronger results on long dependencies in targeted syntactic evaluation look promising, but maybe you need a bigger hidden size so you can also do as well on short dependencies?\n - The logical inference results were promising – they seem to suggest that you capture some but not all of the value of explicit tree structure (a TreeLSTM) on a task like this.\n - The tree structures in Appendix A look promisingly good.\n", "Language is hierarchically structured: smaller units (e.g., noun phrases) are nested within larger units (e.g., clauses). This is a strict hierarchy: when a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the different units of an LSTM can learn to track information at different time scales, the standard architecture does not impose this sort of strict hierarchy. This paper proposes to add this constraint to the system by ordering the units; a vector of \"master\" input and forget gates ensures that when a given unit is reset all of the units that follow it in the ordering are also reset.\n\nStrengths:\n* The paper introduces an elegant way of adding a hierarchical inductive bias; the intuition behind this idea is explained clearly.\n* The evaluation tasks are very sensible. It's good that the model is shown to obtain good perplexity and slightly improve over an LSTM baseline; it's not the state of the art, but that's not the point of the paper (in fact, I would emphasize that even more than the authors do). The unsupervised parse evaluation (Table 2) is the heart of the paper, in my opinion (and should probably be emphasized more) -- the results from the second layer are quite impressive.\n* The (mildly) better performance than LSTMs on long-distance dependencies, and (mildly) worse performance on local dependencies, in the Marvin & Linzen dataset, is interesting (and merits additional analysis).\n\nWeaknesses:\n* The discussion of the motivation for unsupervised structure induction in the introduction is somewhat confused. I am not sure that neural networks with latent syntactic structures can really address the seemingly very fundamental question mentioned in the first paragraph (whether syntax is related to \"an underlying mechanism of human cognition\") - I would suggest eliminating this part. At the same time, the authors might want to add another motivation for studying architectures that discover latent structure (as opposed to being given that structure) - this setting corresponds more closely to human language acquisition, where children aren't given annotated parse trees.\n* The authors discuss hierarchy in terms of syntactic structure alone, but it would seem to me that the hierarchy that the LSTM is inducing could just as well include topic shifts, speech acts and others, especially if the network is trained across sentences.\n* There is limited analysis of the model. Why does the second layer show better unsupervised parsing performance than the third layer? (Could this be related to syntactic vs. semantic/discourse units I mention in the previous bullet?) Why is the model better at ADJP boundaries than NP boundaries? It would have been more useful to report less experiments but analyze the results of each experiment in greater depth.\n* In this vein, I am not sure it's useful to include WSJ10 in Table 2, which is busy as it is. 
These sentences are clearly too easy, as the right branching baseline shows, and require additional POS tagging.\n* I found it difficult to read Figure A.2: could you help us understand what we should take away from it? \n* It is not entirely clear why the model needs both unit-specific forget/input gates and the \"master\" forget/input gates, and there is no discussion of this issue. Have you tried using only the \"master\" gates?\n\nMinor notes:\n* RNNGs are described as having an explicit bias to model syntactic structure; this is an arguably confusing use of the word \"bias\", in that the architecture has a hard constraint enforcing syntactic structures (bias implies a soft constraint).\n* There are some language issues: agreement errors (e.g. \"have\" in the sentence that starts with \"Developing\" in the introduction), typos (\"A order should exist\", \"co-occurance\"), determiner issues (\"values in [the] master forget gate\", \"when the overlap exists\") - I would suggest going through and copy editing the paper.\n* \"cummax\" seems like a better choice of name for cumulative maximum than \"cumax\".\n* It may be helpful to remind the reader of the update equation for c_t in a standard LSTM.\n* Did the language model have 1150 units in each layer or in total? Why did you use exactly three layers? Did you try one, two and four?\n* It's not clear if the results in Table 2 reflect the best seed out of five (as the title of the column \"max\" indicates) or the average (as the caption says).\n" ]
[ -1, 7, -1, -1, -1, -1, -1, 9, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "B1xh_mvdRX", "iclr_2019_B1l6qiR5F7", "BkgiNwT7h7", "SyeNaVYxCm", "H1ewsJDcjm", "BkgiNwT7h7", "Bygp1Apv2m", "iclr_2019_B1l6qiR5F7", "iclr_2019_B1l6qiR5F7" ]
iclr_2019_B1xsqj09Fm
Large Scale GAN Training for High Fidelity Natural Image Synthesis
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick", allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Frechet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.
accepted-oral-papers
The paper proposes a set of tricks leading to a new SOTA for sampling high resolution images. It is clearly written and the presented contribution will be of high interest for practitioners.
train
[ "SJl68_Hx37", "SkgkCbBm0Q", "Syxd9-HXAQ", "r1gI_-SQAm", "BJeJx-H7RQ", "rJx99xSXAX", "S1gaWerP2X", "HklmZ1xqhm", "SkgcCLXypQ", "Hkgd30pT27", "BJgFGkiT2Q", "rJgBuz5a3Q", "rJlaYkcTnX", "BklSXtmL2X", "Sklp_OFLjm", "SyesNhmUjm", "Hke0IlKSim", "S1xw0OrXqm", "SJlWU-HGcm", "rkxcudXfq7", "SJgVJ8n1q7", "rJl92Zo197", "rJlFIzT0YX", "rJesh9ORFm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public", "public", "author", "public", "author", "public", "author", "public", "public", "author", "public", "author", "public", "author", "public" ]
[ "This paper present extensions of the Self-Attention Generative Adversarial Network approach SAGAN, leading to impressive images generations conditioned on imagenet classes. \nThe key components of the approach are :\n- increasing the batch size by a factor 8\n- augmenting the width of the networks by 50% \nThese first two elements result in an Inception score (IS) boost from 52 to 93. \n- the use of shared embeddings for the class conditioned batch norm layers, orthonormal regularization and hierarchical latent space bring an additional boost of IS 99.\nThe core novel element of the paper is the truncation trick: At train time, the input z is sampled from a normal distribution but at test time, a truncated normal distribution is used: when the magnitude of elements of z are above a certain threshold, they are re-sampled.\nVariations of this threshold lead to variations in FD and IS, as shown in insightful experiments. The comments that more data helps (internal dataset experiments) is also informative. \nVery nice to have included negative results and detailed parameter sweeps.\n\nThis is a very nice work with impressive results, a great progress achievement in the field of image generation. \nVery well written.\n\nSuggestions/questions: \n- it would be nice to also propose unconditioned experiments. \nIt would be good to give an idea in the text of TPU-GPU equivalence in terms of feasibility of a standard GPU implementation - computation time it would involve. \n- I understand that no data augmentation was used during training? \n- clarification of the truncation trick: if the elements of z are re-sampled and are still above the threshold, are they re-sampled again and again until they are all below the given threshold?\n- A sentence could be added to explain the truncation trick in the abstract directly since it is simple to understand and is key to the quality of the results.\n- A reference to Appendix C could be given at the beginning of the Experiments section to help the reader find these details more easily.\n- It would be nice to display more Nearest neighbors for the dog image.\n- It would be nice to add a figure of random generations.\n- make the bib uniform: remove unnecessary doi - url - cvpr page numbers\n", "We would like to thank Reviewer 3 for the review and constructive suggestions. Our responses inline:\n\n>it would be nice to also propose unconditioned experiments. \n-We agree; this was simply not within the scope of the work we conducted.\n\n>I understand that no data augmentation was used during training? \n-This is correct, and consistent with previous works (Spectral Normalization and WGAN-GP). We briefly experimented with data augmentation (random crops and horizontal flips) but did not notice any measurable performance difference.\n \n>clarification of the truncation trick: if the elements of z are re-sampled and are still above the threshold, are they re-sampled again and again until they are all below the given threshold?\n-Yes, this can effectively be seen as modifying the PDF of z to have no mass outside of the truncation threshold. 
TensorFlow offers a built-in implementation with tf.random.truncated_normal.\n\n>A sentence could be added to explain the truncation trick in the abstract directly since it is simple to understand and is key to the quality of the results.\n-We have revised the abstract to explain the truncation trick as controlling the tradeoff between fidelity and diversity by reducing the variance of the Generator’s input.\n\n>A reference to Appendix C could be given at the beginning of the Experiments section to help the reader find these details more easily.\n-Thanks for the pointer! We have added this reference.\n\n>It would be nice to add a figure of random generations.\n-In the caption of Figure 5, we include a link to an anonymous drive folder with sample sheets at different resolutions and truncation values, with 12 random images per class.\n\n>make the bib uniform: remove unnecessary doi - url - cvpr page numbers\n-Thanks, we have fixed this.", "> In Section 3.1 : “Across runs in Table 1, we observe that without Orthogonal Regularization, only 16% of models are amenable to truncation compared to 60% with Orthogonal Regularization.” For me, this is not particularly clear. Is this something the reader should understand from Table 1?\n-This means that of all the models we trained for the study presented in Table 1 which did not use Orthogonal Regularization, only 16% were amenable to truncation. Of all the models which we trained for the study presented in Table 2 which did use Orthogonal Regularization, 60% were amenable to truncation. This is not reflected in Table 1, which is merely a presentation of how the introduced modifications impact performance.\n\n>I question the choice of sections chosen to be in the main paper/appendices. I greatly appreciated the negative results reported in the main text as well as in the appendices and this has significant value. However, as this is to me mostly a detailed empirical investigation and presentation of high-performance GANs on large scales, I would be likely to share this with colleagues who want to tackle similar problems. In this case, if future readers limit themselves to the main text, I think it can have more value to present some content form Appendix B and C than to have more than a full page on stability investigations and attempted tricks that turned out not to be used to reach maximal performance. However I do not want to discourage publishing of negative results, and I definitely wish to see this investigation in the paper, but I merely question the positioning of such information. With regard to my first negative point above about the lack of discussions, it seems the analysis of Section 4 is disproportionate compared to other places.\n-We appreciate this suggestion. While we recognize that this paper generally has a strong focus on implementation details, we felt that this instability was one of the most salient behaviors we observed, and that future work would be best served by presenting our investigations and attempts to understand its source, even if these methods did not improve performance. The information in Appendix B and C is intended to be of interest to those who want to reproduce our experiments, so it largely comprises hyperparameters and architectural details that we felt were not necessary to understand the main results of the paper. \n\n>In Appendix F, Figure 20 d), the title seems wrong. It seems to report sigma^2 values, but the title says “losses”.\n-Thanks! 
This was indeed an error, which we’ve corrected in the updated draft.\n\n>I would also be curious to see the proposed techniques applied on simpler datasets. Can this be useful for someone having less compute power and working on something similar to CelebA? \n-The goal of this work is to explore GANs at large scale; the exploration of small or medium scale models would indeed be interesting for another study. Having said that, we do evaluate BigGAN on conditional CIFAR-10 (mentioned briefly in Appendix C.2) and obtain an IS of 9.22 and an FID of 14.73 without truncation, which to our knowledge are better than any published results.", "We would like to thank Reviewer 2 for their review and constructive suggestions. Our responses inline:\n\n>Discussions sometimes lack depth or are absent.\n-We have added an additional section (Appendix G) expanding on our discussion and providing additional insight into the observed instabilities.\n\n>For example, it is unclear to me why some larger models are not amenable to truncation. Besides visible artifacts, what does it mean? Why does a smoother G reduce those artifacts?\n-Truncation introduces a train-test disparity in G’s inputs--at sampling time, G is given a distribution it has effectively never seen in training. The observation that imposing orthogonality constraints improves amenability to truncation is empirical. Our suspicion is that if G is not encouraged to be “smooth” in some sense, then it is likely that G will only properly generate images given points from the untruncated distribution. We hypothesize that models which are not amenable end up learning mappings which, when given truncated noise, either attenuate or amplify certain activation pathways, leading to extreme output values (hence the observed saturation artifacts). We speculate that encouraging G’s filters to have minimum pairwise cosine similarity means that, when exposed to distribution shift, the network’s features are less correlated and less likely to align and amplify an activation path it would otherwise have learned to scale properly. \n \n\n>Were samples from those networks better without using truncation? Why would this be?\n-Samples from those networks without truncation do not have measurably different quality, and their training metrics (losses, singular values) show no differences. Aside from empirically testing each network individually for amenability to truncation, we found no other way to check for that amenability.\n\n> Authors report how wider networks perform best, and how deeper networks degrade performance. Again, discussions are lacking, and it doesn’t seem the authors tried to understand why such behaviors were shown. Even though this is mostly an empirical investigation, I think some more effort should be put into understanding and explaining why some of those behaviors are shown, as I think it can bootstrap future work more easily.\n-We are wary of explanations for which we do not have evidence. For each of the modifications introduced in Section 3, we offer a succinct conjecture as to why that change improves performance, but we are not aware of any existing reliable, informative metric which we could employ to understand or trace the source of each observed behavior, particularly with respect to GAN stability or performance.\nRegarding depth vs width: This paper is empirical, and we only briefly experimented with increasing depth analogously to increasing width. 
While increasing width provided an immediate measurable benefit, increasing depth did not. We felt that it was better to report the results of this brief investigation than to omit it for a lack of investigatory depth.", "We would like to thank Reviewer 1 for their review and constructive suggestions. Our responses inline:\n\n>Can you elaborate more on why BatchNorm statistics are computed across all devices as opposed to per-device? Was this crucial for best performance?\n-The primary reason is to ensure that training is invariant to the per-device batch size. When scaling from resolution 128x128 to 256x256, we increase the number of devices but maintain the same overall batch size, reducing the per-device batch size. Cross-replica BatchNorm ensures that the smaller per-device batch size does not affect training. Switching to per-device BatchNorm at 128x128 results in a performance drop, albeit not a crippling one: for a model which would otherwise get an IS of 92 and an FID of 9.5, switching to per-device BatchNorm results in an IS of 78 and FID of 13.\n\n>It is not clear if the provided analysis for large-scale GANs applies to small-medium sized GANs. Providing such analysis would also be helpful for the community.\n-The goal of this work is to explore GANs at large scale; the exploration of small or medium scale models would indeed be interesting for another study. Having said that, we do evaluate BigGAN on conditional CIFAR-10 (mentioned briefly in Appendix C.2) and obtain an IS of 9.22 and an FID of 14.73 without truncation, which to our knowledge are better than any published results.\n\n>How do you see the impact of the suggested techniques on tackling harder data-modalities for GANs, e.g. text or sequential data in general?\n-Any of the proposed techniques could be applied to standard GANs for text or other sequential data in principle, but we have not experimented with these applications ourselves.\n", "We would like to thank all reviewers for their reviews. We have uploaded a revised draft incorporating this feedback. Specifically:\n-We have added references to the two papers mentioned in an earlier comment, as well as “The Unusual Effectiveness of Averaging in GAN Training,” Yazici et al., arXiv:1806.04498.\n-Added Appendix G expanding on our discussion, and referenced this appendix at the end of section 4.2\n-Fixed typos in captions\n-Added a brief section on pitfalls of negative results in the negative results appendix\n", "Summary:\nThe authors present an empirical investigation of methods for scaling GANs to complex datasets, such as ImageNet, for class-conditioned image generation. They first build and describe a strong baseline based on recently proposed techniques for GANs and push the performance on large datasets with several modifications presented sequentially, to obtain strong state-of-the-art IS/FID scores, as well as impressive visual results. The authors propose a simple truncation trick to control the fidelity/variance which is interesting on its own but cannot always scale with the architecture. The authors further propose an orthogonalization-based regularization to mitigate this problem. An investigation of training collapse at large scale is also performed; the authors investigate some regularization schemes based on gathered empirical evidence. 
As a result, they explore and discard Spectral Normalization of the generator as a way to prevent collapse and show that a severe tradeoff between stability and quality can be controlled when using zero-centered gradient penalties in the Discriminator. In the end, no solution that can ensure quality and stability is found, except having prohibitively large amounts of data (~300M images). Models are evaluated on ImageNet and on this internal, bigger dataset.\n\nPros:\n- This investigation gives a significant amount of insights on GAN stability and performance at large scales, which should be useful for anyone working with GANs on complex datasets (and who have access to great computational resources).\n\n- Even though commonly used evaluation metrics for GANs are still not fully adequate, the authors obtain quantitative performance significantly beyond previous work, which seems indeed correlated with remarkable visual results.\n\n- The baseline and added modifications are well presented and clearly explained. The Appendices also have great value in that regard.\n\n\nCons:\n- Discussions sometimes lack depth or are absent.\nFor example, it is unclear to me why some larger models are not amenable to truncation. Besides visible artifacts, what does it mean? Why does a smoother G reduce those artifacts? Were samples from those networks better without using truncation? Why would this be?\n\nAuthors report how wider networks perform best, and how deeper networks degrade performance. Again, discussions are lacking, and it doesn’t seem the authors tried to understand why such behaviors were shown.\n\nEven though this is mostly an empirical investigation, I think some more effort should be put into understanding and explaining why some of those behaviors are shown, as I think it can bootstrap future work more easily.\n\n- In Section 3.1 : “Across runs in Table 1, we observe that without Orthogonal Regularization, only 16% of models are amenable to truncation compared to 60% with Orthogonal Regularization.” For me, this is not particularly clear. Is this something the reader should understand from Table 1? \n\n- I question the choice of sections chosen to be in the main paper/appendices. I greatly appreciated the negative results reported in the main text as well as in the appendices and this has significant value. However, as this is to me mostly a detailed empirical investigation and presentation of high-performance GANs on large scales, I would be likely to share this with colleagues who want to tackle similar problems. In this case, if future readers limit themselves to the main text, I think it can have more value to present some content from Appendix B and C than to have more than a full page on stability investigations and attempted tricks that turned out not to be used to reach maximal performance. However I do not want to discourage publishing of negative results, and I definitely wish to see this investigation in the paper, but I merely question the positioning of such information. 
With regard to my first negative point above about the lack of discussions, it seems the analysis of Section 4 is disproportionate compared to other places.\n\n\nSuggestions/Comments:\n\n- Regarding the diversity/fidelity tradeoff using different truncation thresholds, I think constraining the norm of the sampled noise vectors to the exact threshold value (by projecting the samples on the 0-centered hyper-sphere of radius = threshold) could yield even more interesting or more informative Figures, as obtained scores or samples on the edge of that hyper-sphere might provide information on the ‘guaranteed’ (not proven) quality/fidelity of samples mapped from inside that hyper-sphere. \n\n- In Appendix D, the Figures could be slightly clarified by using a colored heatmap to color the curve, with colors corresponding to the threshold values. Similar curves could also be produced with the hyper-sphere projection proposed above to have a slightly clearer idea of the behavior on the limit of that hyper-sphere.\n\n- In Section 4.2, in the second paragraph, you refer to Appendix F and describe “sharp upward jump at collapse” in D’s loss. However, it seems the only Figure showing D’s loss when unconstrained is Figure 26, in which it is hard to notice any significant jump in the loss.\n\n- In Appendix F, Figure 20 d), the title seems wrong. It seems to report sigma^2 values, but the title says “losses”.\n\n\nThis investigation of GAN scalability is successful results-wise even though the inability to stabilize training without sacrificing great performance on ImageNet is disappointing. The improvement over previous SOTA is definitely significant. This work thus shows a modern GAN architecture for complex datasets that could be a strong basis for future work. However, I think the paper could and should be improved with some more detailed analysis and discussions of exhibited behaviors in order to further guide and encourage future work. It could also be clarified on some aspects, and potentially re-structured a bit to better align with its probable impact directions. I would also be curious to see the proposed techniques applied on simpler datasets. Can this be useful for someone having less compute power and working on something similar to CelebA? \n", "Summary:\nThis paper proposes a suite of tricks for training large-scale GANs, and obtaining state-of-the-art results for high-resolution images. The paper starts from a self-attention GAN baseline (Zhang 2018), and proposes:\n-\tIncreasing batch size (8x) and model size (2x)\n-\tSplitting noise z into multiple chunks, and injecting it in multiple layers of the generator\n-\tSampling from a truncated normal distribution, where samples with norms that exceed a specific threshold are resampled. This seems to be used only at test-time and is used to control the variety-fidelity tradeoff. The generator is encouraged to be smooth using an orthogonal regularization term.\nIn addition, the paper proposes practical recipes for characterizing collapse in GANs. In the generator, the exploding of the top 3 singular values of each weight matrix seems to indicate collapse. In the discriminator, the sudden increase of the ratio of first/second singular value of weight matrices indicates collapse in GANs. 
Interestingly, the paper suggests that various regularization methods which can improve stability in GAN training do not necessarily correspond to improvement in performance.\n\nStrengths:\n-\tProposed techniques are intuitive and very well motivated\n-\tOne of the big pluses of this work is that authors try to \"quantify\" each proposed technique with training speed and/or performance improvement. This is really a good practice.\n-\tDetailed analysis for detecting collapse and improving stability in large-scale GANs\n-\tProbably no need to mention that, but results are quite impressive\n\nWeaknesses:\n-\tComputational budget required is massive. The paper mentions models using 128-256 TPUs, which severely limits reproducibility of results.\n\nComments/Questions:\n-\tCan you elaborate more on why BatchNorm statistics are computed across all devices as opposed to per-device? Was this crucial for best performance? \n-\tIt is not clear if the provided analysis for large-scale GANs applies to small-medium sized GANs. Providing such analysis would also be helpful for the community.\n-\tHow do you see the impact of the suggested techniques on tackling harder data-modalities for GANs, e.g. text or sequential data in general?\n\nOverall recommendation:\nThe paper is well written, ideas are well motivated/justified and results are very compelling. This is a good paper and I highly recommend acceptance.\n", "Hi, I would like to ask some details about calculating inception scores.\n\nHow did you calculate the inception score for images of 128x128 and 512x512 resolutions?\nDid you just resize the images to 299x299 and feed them into the inception-v3 model, which was pretrained on the 299x299 ImageNet dataset?\n\nFor calculating IS, did you use the code provided by openai? \nhttps://github.com/openai/improved-gan\n\nThanks in advance.", "The samples look extremely good. Have you tried to calculate intra-class FID like the cGANs with Projection Discriminator did? Also, have you tried training your model on any unlabelled data set?", "Thanks for sharing the details!", "1. Yes, the non-local blocks have spectral normalization applied to the convolutional weights, as in SA-GAN.\n\n2. No, following SN-GAN there is no BatchNorm in D.\n \n3. We do not apply BatchNorm or ReLU before the non-local block--it takes in the output of the previous residual block. Please see https://github.com/brain-research/self-attention-gan for a reference implementation of non-local blocks.\n\n4. The sign of gamma is arbitrary (the output of the block before being multiplied by gamma can take on either sign), and we observe both positive and negative gammas in our models. Gamma is a freely learned scalar parameter.\n\nThanks.\n", "Hi, I would like to ask some more details about your architecture.\n\n1. Do you apply spectral normalization in the attention layer (non-local block)?\n\n2. Do you apply batch normalization in the discriminator?\n\n3. Do you apply batch normalization or a nonlinearity (ReLU) to the input of the attention layer (non-local block) before transforming the input into the feature spaces f, g?\n\n4. In the non-local block, the weight gamma is initialized as 0. Did you observe that gamma becomes negative during training? Or did you force gamma to be non-negative?\n\nThanks!", "We've recently been made aware of two prior works that observe a correlation between the variance of the latent noise and the variety/quality of the Generator outputs. We will be adding references accordingly.\n\n[1] Marco Marchesi. 
Megapixel Size Image Creation using Generative Adversarial Networks. arXiv preprint arXiv:1706.00082.\n[2] Mathijs Pieters and Marco Wiering. Comparing Generative Adversarial Network Techniques for Image Creation and Modification. arXiv preprint arXiv:1803.09093.", "Thank you for the clarification! It's very interesting that the pre-SN singular values of some layers' weights keep growing. That seems to suggest that the outputs of the layer lie in a very low-dimensional subspace.", "Hi,\n\nAs mentioned in the first paragraph of Section 3, we use Spectral norm in both G and D. As mentioned in the caption of Figure 3, the spectra we plot are before spectral normalization, so the actual values will be normalized by the first singular value. We plot the unnormalized values to show how the spectra of the underlying weights change over time.\n\nThanks.", "Thank you for great insights into instabilities of the generator and discriminator! I'm a little bit confused though. Do you employ Spectral Normalization in the generator and discriminator? Spectral normalization should make the largest singular value of the weight matrix around 1 but Figure 3 shows very large eigenvalues. Am I missing something?", "I have a question about saturation artifacts you mention in section 3.\n\n>> The distribution shift caused by sampling with different latents than those seen in training is problematic for many models. \n>> Some of our larger models are not amenable to truncation, producing saturation artifacts (Figure 2(b)) when fed truncated noise. \n\nI wonder why larger models produce saturation artifacts when fed truncated noise.\nThe noise inside the truncation range can also be sampled from N(0, 1), so I believe such models would produce saturation artifacts even without the truncation trick.\nI believe that the reason why the truncation trick produces saturation artifacts needs more clarification.\n \nRegards.", "Hi Jaonary,\n\nIt may be possible to get similar results using gradient aggregation, but it's tough to say--we use cross-replica BatchNorm in the Generator, so aggregating gradients with a smaller batch size will not be exactly equivalent. In our ablations, using per-device BatchNorm reduced performance but still trained, so perhaps aggregating gradients with cross-replica BatchNorm and multiple GPUs will work (albeit it will be quite slow and not exactly equivalent to what we've done).\n\nThe architectural difference is in the channel pattern of the Discriminator, where each residual block takes in a tensor with num_in channels and outputs a tensor with num_out channels. In (Miyato, 2018) the first convolution in the residual block has num_in outputs, and the second convolution has num_out outputs. In (Zhang, 2018), however, the first convolution in the residual block has num_out outputs instead of num_in outputs, which results in the Discriminator having more parameters and more capacity. We use the channel pattern from (Zhang, 2018).\n\nThanks.", "One of the most striking results of your paper is the effect of the batch size. In your experiment you use some TPU cores, so I guess that you have enough memory to store all of your batch. Do you think that it is possible to get the same result if you use multiple GPUs instead, with a reduced batch size and an algorithm such as all-reduce to aggregate the gradients?\nOne more thing, it's not really clear what the difference is between your architecture and the one used by Miyato (2018). 
You said in Appendix B that the number of filters of the first conv layer of each block is equal to the number of the output filters but not the number of the input filters. Can you explain in more detail what this means?", "Hi Sheng,\n\nThe score reported in \"A note on the Inception Score\" is for ImageNet at 64x64 resolution. We get approximately the same number using our code.\n\nThanks.", "I have a question about Inception Score. You mention in APPENDIX C \"We compute the IS for both the training and validation sets. At 128×128 the training data has an IS of 233, and the validation data has an IS of 166...\"\nHowever, in Table 1 of \"A Note on the Inception Score\", which is referenced by your paper, the Inception Score of the ImageNet validation set is around 63.\nI wonder what causes the gap between these two scores.", "Hi Mert,\n\ni/ii):\nPlease see our appendix for further details. A chunk refers to a subset of the dimensions of z in the channel dimension; if z is a 100 x 128-dimensional tensor (batch size x channels) sampled from N(0,1), then splitting it into 8 chunks would result in 8 tensors (z_i for i = 1 to 8) each of dimension 100 x 16. E.g.\nz = tf.random_normal((100,128))\nz_chunks = tf.split(z, 8, axis=1)\n\niii / iv):\nIn previous works on conditional GANs, the conditional batchnorm gains and biases are implemented as embeddings, similar to word embeddings in language models, with one embedding per layer. We replace this with a single embedding which we pass through a single linear transform to get the batchnorm parameters. We describe this in the appendix, but here's some pseudocode:\nembedding_weights = matrix in (num_classes, embedding_dimension)\nbias_projection = matrices in (embedding_dimension, batchnorm_channels_dimension)\ngain_projection = matrices in (embedding_dimension, batchnorm_channels_dimension)\n\nshared_embedding = embedding_weights * one_hot(class index)\nbias_i = bias_projection_i * shared_embedding\ngain_i = 1 + gain_projection_i * shared_embedding\n\nIf you're using hierarchical latents, use this instead:\nbias_i = bias_projection_i * concatenate(shared_embedding, z_chunks_i)\ngain_i = 1 + gain_projection_i * concatenate(shared_embedding, z_chunks_i)\n\nHope that helps! (A runnable sketch of this pseudocode is shown just below.)", "Thank you for all your efforts towards understanding training dynamics in large-scale GANs. I have a question about conditional batch-norms. You mention these in Section 3: \n* \" Instead of having a separate layer for each embedding (Miyato et al., 2018; Zhang et al., 2018), we opt to use a shared embedding, which is linearly projected to each layer's gains and biases (Perez et al., 2018).\" \n* \"For our architecture, this is easily accomplished by splitting z into one chunk per resolution, and concatenating each chunk to the conditional vector c which gets projected to the BatchNorm gains and biases. \".\n\nI believe that these statements need more clarification. i) How do you define a chunk? ii) How is z split into chunks? iii) How do you compute the shared embedding? iv) How are the parameters of the affine transformation for each layer constructed from the shared embedding?\n\nRegards." ]
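A runnable NumPy rendering of the shared-embedding pseudocode in the reply above (see the note there). Shapes and names are illustrative, the per-layer projections (the _i subscripts) are collapsed to a single layer for brevity, and this is a sketch of the described scheme rather than the authors' code.

import numpy as np

num_classes, emb_dim, bn_channels, z_chunk_dim = 1000, 128, 96, 20

embedding_weights = 0.02 * np.random.randn(num_classes, emb_dim)
bias_projection = 0.02 * np.random.randn(emb_dim + z_chunk_dim, bn_channels)
gain_projection = 0.02 * np.random.randn(emb_dim + z_chunk_dim, bn_channels)

def conditional_bn_params(class_index, z_chunk):
    # One shared class embedding (equivalent to one_hot(class) @ embedding_weights),
    # concatenated with this resolution's z chunk and linearly projected to the
    # BatchNorm bias and gain (gain centered at 1), as in the hierarchical variant.
    shared_embedding = embedding_weights[class_index]
    cond = np.concatenate([shared_embedding, z_chunk])
    bias = cond @ bias_projection
    gain = 1.0 + cond @ gain_projection
    return gain, bias

gain, bias = conditional_bn_params(class_index=3, z_chunk=np.random.randn(z_chunk_dim))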
[ 9, -1, -1, -1, -1, -1, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_B1xsqj09Fm", "SJl68_Hx37", "r1gI_-SQAm", "S1gaWerP2X", "HklmZ1xqhm", "iclr_2019_B1xsqj09Fm", "iclr_2019_B1xsqj09Fm", "iclr_2019_B1xsqj09Fm", "iclr_2019_B1xsqj09Fm", "iclr_2019_B1xsqj09Fm", "rJgBuz5a3Q", "rJlaYkcTnX", "iclr_2019_B1xsqj09Fm", "iclr_2019_B1xsqj09Fm", "SyesNhmUjm", "Hke0IlKSim", "iclr_2019_B1xsqj09Fm", "iclr_2019_B1xsqj09Fm", "rkxcudXfq7", "iclr_2019_B1xsqj09Fm", "rJl92Zo197", "iclr_2019_B1xsqj09Fm", "rJesh9ORFm", "iclr_2019_B1xsqj09Fm" ]
iclr_2019_Bklr3j0cKX
Learning deep representations by mutual information estimation and maximization
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
accepted-oral-papers
This paper proposes a new unsupervised learning approach based on maximizing the mutual information between the input and the representation. The results are strong across several image datasets. Essentially all of the reviewers' concerns were directly addressed in revisions of the paper, including additional experiments. The only weakness is that experiments were carried out only on image datasets; however, the image-based experiments and comparisons are extensive. The reviewers and I all agree that the paper should be accepted, and I think it should be considered for an oral presentation.
train
[ "rkgBfaIahQ", "SkxJeJTX07", "SJxEJLX2CQ", "B1gEEtgAjm", "ryxmvC2mC7", "rJekflpQCX", "SJxOvyaQCm", "B1lMGywi6Q", "SJxqzmcVp7", "BkxA0Kt3nQ", "rygYR8UMoQ", "rklhZYMJom" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "This paper proposes Deep InfoMax (DIM), for learning representations by maximizing the mutual information between the input and a deep representation. By structuring the network and objectives to encode input locality or priors on the representation, DIM learns features that are useful for downstream tasks without relying on reconstruction or a generative model. DIM is evaluated on a number of standard image datasets and shown to learn features that outperform prior approaches based on autoencoders at classification.\n\nRepresentation learning without generative models is an interesting research direction, and this paper represents a nice contribution toward this goal. The experiments demonstrate wins over some autoencoder baselines, but the reported numbers are far worse than old unsupervised feature learning results on e.g. CIFAR-10. There are also a few technical inaccuracies and an insufficient discussion of prior work (CPC). I don't think this paper should be accepted in its current state, but could be persuaded if the authors address my concerns.\n\nStrengths:\n+ Interesting new objectives for representation learning based on increasing the JS divergence between joint and product distributions\n+ Good set of ablation experiments looking at local vs global approach and layer-dependence of classification accuracy\n+ Large set of experiments on image datasets with different evaluation metrics for comparing representations\n\nWeaknesses:\n- No comparison to autoencoding approaches that explicitly maximize information in the latent variable, e.g. InfoVAE, beta-VAE with small beta, an autoencoeder with no regularization, invertible models like real NVP that throws out no information. Additionally, the results on CIFAR-10 are worse than a carefully tuned single-layer feature extractor (k-means is 75%+, see Coates et al., 2011). \n- Based off Table 9, it looks like DIM is very sensitive to hyperparameters like gamma for classification. Please discuss how you selected hyperparameters and whether you performed a similar scale sweep for your baselines.\n- The comparison with and discussion of CPC is lacking. CPC outperforms JSD in almost all settings, and CPC also proposed a \"local\" approach to information maximization. I do not agree with renaming CPC to NCE and calling it DIM(L) (NCE) as the CPC and NCE loss are not the same. Please elaborate on the similarties and differences!\n- The clarity of the text could be improved, with more space in the main text devoted to analyzing the results. Right now the paper has an overwhelming number of experiments that don't fit concisely together (e.g. an entirely new generative model experimentsin the appendix).\n\nMinor comments:\n- As noted by a commenter, it is known that MI maximization without constraints is insufficient for learning good representations. Please cite and discuss.\n- Define local/global earlier in the paper (intro?). I found it hard to follow the first time.\n- Why can't SOMs represent complex relationships?\n- \"models with reconstruction-type objectives provide some guarantees on the amount of information encoded\": what do you mean by this? VAEs have issues with posterior collapse where the latents are ignored, but they have a reconstruction term in the objective.\n- \"JS should behave similarly as the DV-based objective\" - do you have any evidence (empirical or theoretical) to back up this statement? 
As you're maximizing JSD and not KL, it's not clear that DIM can be thought of as maximizing MI.\n- Have you tried stochastic encoders? This would make matching to a prior much easier and prevent the introduction of another discriminator.\n- I'm surprised NDM is much smaller than MINE given that your encoder is deterministic and thus shouldn't throw out any information. Do you have an explanation for this gap?\n- There's a trivial solution to local DIM where the global feature can directly memorize everything about the local features, as the global feature depends on *all* local features, including the one you're trying to maximize information with. Have you considered masking each individual local feature before computing the global feature to avoid this trivial solution? \n\n-----------------------\n\nUpdate: Apologies for the slow response. The new version with more baselines, comparisons to CPC, discussion of NCE, and comparisons between JS and MI greatly improves the paper! I've increased my score (5 -> 7) to reflect the improved clarity and experiments. ", "Thank you for your detailed review, and we hope that our revisions address your concerns.\n\nKey points:\n- No comparison to autoencoders/beta-VAE/etc.: See our discussions above under “New baselines”. We’ve now added these comparisons for classification results. \n- DIM vs CPC: See our previous comments on differences between CPC and DIM, as well as usage of the softmax-type “NCE”.\n- Comparison to CPC: See discussion above under “Comparisons to CPC”.\n- Weak performance compared to older methods on CIFAR-10 (e.g. Coates et al., 2011): See discussions above under “On architectures and baselines”. We have added some more details w.r.t. other models in Section 4.2, in “classification comparisons”. Also see “Comparisons to CPC” for improved results on CIFAR-10.\n- NCE versus JSD: It would be difficult to conclude that NCE is uniformly superior. While NCE tends to be superior with a large number of negative samples, the differences diminish with larger datasets (Table 2). In addition, JSD outperforms NCE as you reduce the number of images used as negative samples (Figure 9). This will be a factor when choosing the right loss, as more negative samples mean more computation / more memory in order to compute the softmax.\n- Sensitivity of the beta term: There was an error in the ranges presented in Figure 9 (accidentally cropped). The last subfigure shows that there is relative insensitivity to gamma (prior term), and much more sensitivity to beta (local term). The performance variation is only ~1% across the gamma range, which is not enough to change the conclusions of baseline comparisons.\n- We modified the text to improve clarity w.r.t. comments from all reviewers. Many of the experiments we put into the Appendix were related to questions we had about the model / representation, and we excluded them from the main text precisely because they do not relate directly to the main story. However, we chose to keep them in the Appendix as we found them interesting and informative. \n\nMinor comments:\n- On mutual information and constraints: See the updated version of Section 2, next to last paragraph. 
\n- On local definition: We have modified the text in the first paragraph to help define the “local” MI objective earlier.\n- On SOM: We modified this sentence to read \"generally lack the representational capacity of deep neural networks\".\n- On reconstruction and MI / VAEs: There was an error in Equation (1), which has now been fixed.\n- On the JSD and the mutual information: This is an important point, and we added a discussion in Appendix A1 to show that the JSD between the joint and the product of marginals is related to the PMI, as well as some empirical analysis under a discrete setting.\n- Stochasticity: We have tried dropout as a form of stochasticity, and this does not significantly change classification performance, though it is reasonable to posit this might affect the encoder’s ability to match the marginal output to a given prior.\n- NDM vs MINE: NDM is small as the prior term is adversarial and is encouraging the aggregated posterior to match the prior. Small NDM indicates more independence / disentanglement, which is the desired effect (see Figure 12 for the study with beta VAE). DIM encourages the MINE measure to be large, though a combined global / local objective works best (Table 3). There is no straightforward direct relationship between disentanglement and mutual information.\n- Trivial solutions: Trivial solutions are a possibility, and surely this risk increases as the size of the global vector increases, though we never ran across this issue in our experiments (the dimension of 64 was chosen somewhat arbitrarily and to match other latent space sizes, such as those found in GANs; our limited experiments with larger global vectors had no issues). The experiment you describe is nearly identical to our occlusion experiments (Table 5), which do indeed improve classification performance. It is reasonable to posit that other occlusion-type tasks would modify the representation in desirable ways.\n", "Thank you for your updated review. We actually had an internal debate about how to best phrase this, as we don't want to overclaim anything. Your suggested edit is better, and we will change the sentence at the next revision opportunity. ", "Revision 2: The new comparisons with CPC are very helpful. Most of my other comments are addressed in the response and paper revision. I am still uncomfortable with the sentence \"Our method ... compares favorably with fully-supervised learning on several classification tasks in the settings studied.\" This strongly suggests to me that you are claiming to be competitive with SOTA supervised methods. The paper does not contain supervised results for the resnet-50 architecture. I would recommend that this sentence either be dropped from the abstract or have the phrase \"in the settings studied\" replaced by \"for an alexnet architecture\". If you have supervised results for resnet-50, they should be added to table 3 and the abstract could be adjusted to that. I apologize that this is coming after the update deadline (I have been traveling). The authors should simply consider the reaction of the community to over-claiming. Because of the new comparisons with CPC on resnet-50 I am upping my score. My confidence is low only because the real significance can only be judged over time.\n\nRevision 1: This is a revision of my earlier review. My overly-excited earlier rating was based on tables 1 and 2 and the claim to have unsupervised features that are competitive with fully-supervised features. 
(I also am subject to an a-priori bias in favor of mutual information methods.) I took the authors' word for their claim and submitted the review without investigating existing results on CIFAR10. It seems that tables 1 and 2 are presenting extremely weak fully supervised baselines. If DIM(L) can indeed produce features that are competitive with state-of-the-art fully supervised features, the result is extremely important. But this claim seems misrepresented in the paper.\n\nOriginal review:\n\nThere is a lot of material in this paper and I respect this group's\nhigh research-to-publication ratio. However, it might be nice to have\nthe paper more focused on the subset of ideas that seem to matter.\n\nMy biggest comment is that the top level spin seems wrong.\nSpecifically, the paper focuses on the two bullets on page 3 ---\nmutual information and statistical constraints. Here mutual\ninformation is interpreted as the information between the input and\noutput of a feature encoder. Clearly this has a trivial solution\nwhere the input equals the output, so the second bullet --- statistical\nconstraints --- is required. But the empirical content of the paper\nstrongly undermines these top level bullets. Setting the training\nobjective to be a balance of MI between input and output under a\nstatistical constraint leads to DIM(G) which, according to the results in\nthe paper, is an empirical disaster. DIM(L) is the main result and\nsomething else seems to be going on there (more later). Furthermore,\nthe empirical results suggest that the second bullet --- statistical\nconstraints --- is of very little value for DIM(L). The key ablation\nstudy here seems to be missing from the paper. Appendix A.4 states\nthat \"a small amount of the [statistical constraint] helps improve\nclassification results when used with the [local information\nobjective].\" No quantitative ablation number is given. Other measures\nof the statistical constraint seem to simply measure to what extent\nthe constraint has been successfully enforced. But the results\nsuggest that even successfully enforcing the constraint is of little,\nif any, value for the ability of the features to be effective in\nprediction. So, it seems to me, the paper is really just about the\nlocal information objective.\n\nThe real powerhouse of the paper --- the local information objective\n--- seems related to mutual information predictive coding as\nformalized in the recent paper from DeepMind by van den Oord et al.\nand also an earlier arXiv paper by McAllester on information-theoretic\nco-training. In these other papers one assumes a signal x_1, ..., x_T\nand tries to extract low dimensional features F(x_t) such that F(x_1),\n..., F(x_t) carry large mutual information with F(x_{t+1}). The\nlocal objective of this paper takes a signal x_1, ..., x_k (n x n\nsubimages) and extracts local features F(x_1), ..., F(x_k) and a global\nfeature Y(F(x_1), ..., F(x_k)) such that Y carries large mutual\ninformation with each of the features F(x_i). These seem different\nbut related. The first seems more \"online\" while the second seems\nmore \"batch\", but both seem to be getting at the same thing, especially\nwhen Y is low dimensional.\n\nAnother comment about top level spin involves the Donsker-Varadhan\nrepresentation of KL divergence (equation (2) in the paper). The\npaper states that this is not used in the experiments. This suggests\nthat it was tried and failed. 
If so, it would be good to report this.\nAnother contribution of the paper seems to be that the mutual\ninformation estimators (4) and (5) dominate (2) in practice. This\nseems important.\n\n", "We thank the reviewers for providing productive comments and critiques. We believe this input has improved the quality of our work. We first address key shared concerns, and then respond to specific points from individual reviews.\n\nOn architectures and baselines:\nOur baselines and architectures were chosen to provide a level comparison across methods, rather than to maximize performance of our method. We tried to stay true to common / popular architectures from papers on unsupervised representation learning -- namely, DCGAN- and Alexnet-type encoders. We did not perform significant hyper-optimization on these architectures. For the classification results, our method and all baselines were trained in the same setting with the same architecture. The CIFAR-10 supervised results are poor compared to SOTA results that rely on data augmentation and more sophisticated architectures. We did not intend to mislead. We modified Section 4.2 to help readers correctly interpret comparisons with supervised results. To our knowledge, our STL-10 results are SOTA for the unsupervised setting.\n\nNew baselines:\nWe have included new baselines to address concerns from Reviewer 1: CPC, beta VAE with low beta, and an unregularized autoencoder. See Tables 1, 2, and 3 in the revision. We did not implement NICE or real NVP as these involve specialized architectures. Following the same settings as our existing baselines, DIM(L) significantly outperformed all new baselines in classification results. The overall effect of beta in beta VAE is unremarkable. We report results for beta=0.5, which performed best, but also tested beta in {0.01, 0.1, 0.2, 0.5}.\n\nComparisons to CPC:\nWe spent considerable time implementing a CPC baseline, and had difficulty getting results that were significantly better than even BiGAN in our test setting. To achieve strong results with CPC, we needed to use an encoder architecture closer to that in the CPC paper. Specifically, we extract each local feature from a patch cropped from the full image. The patches form a 7x7 grid and have 50% overlap between neighboring patches. With this architecture, DIM(L) outperforms CPC on CIFAR-10 using a ResNet-type encoder for the cropped patches. When classifying based on the full 7x7 grid of local features, DIM(L) achieves 80.9% accuracy and CPC achieves 77.5%. When strided crops were used with data augmentation on STL-10, DIM(L) and CPC performed comparably, both achieving ~77% without fine-tuning through the Alexnet encoder. When we used a version of DIM with multiple global representations using a single convolutional layer, DIM got over 78%. Some of these differences could be architectural, so DIM and CPC are at worst comparable in this setting, but we can conclude that the complex, strictly ordered autoregression in CPC is unnecessary. We have added a paragraph to Section 4.2, in “classification comparisons”, to discuss these comparisons.\n", "We’re delighted that this approach excites you, and hopefully the comments above and revision address your previous and latest concerns.\n\n- On baselines: See \"On architectures and baselines\" and \"Comparisons to CPC\" above.\n- Overall spin: We never meant to introduce the prior as a means of addressing trivial solutions to the first bullet point. 
Rather, the prior term is meant to impose constraints on the marginal distribution of the representation. Disentanglement, for example, is an important property in many fields (neuroscience or RL, for instance), and prior matching is a common method for this (e.g., ICA). \n- Ablation studies: See Figure 10, last subfigure, in the revision for the ablation study you requested. The prior term has only a small effect on classification accuracy, yet has a strong effect on dependence (it decreases it), according to the NDM measure. If you feel this should be included in the main text, we can add it before the final revision deadline.\n- On the role of the global term: it is true that the global term alone can exhibit some degenerate behavior, and this is especially apparent in classification results. However, its use depends on what the end-goal of the representation is. For example, a combined global-local version of DIM improves both reconstruction and mutual information estimates considerably over one or the other (Table 4 in the revision). We feel that the global term can still be useful, but it does seem like the global objective without the local objective is not useful.\n- On the DV representation: our initial experiments showed very poor DV performance, but this changed recently when we adopted the strategy of using a very large number of negative samples as in NCE. However, this approach performs only comparably to or worse than using the JSD (Tables 1 and 2 in the revision), supporting our claim that the JSD is better for this task. In addition, we added DV to Figure 9, which shows that DV performance decays quickly as fewer images are used in negative sampling. \n", "Key points:\n- Image only: As the structural assumptions are important to the MI maximization task of DIM, we wanted to do an in-depth analysis and comparison in this setting. The core ideas of DIM transfer very easily, however, and we anticipate these ideas being successful in the NLP, graph, and RL settings, for example.\n\nMinor comments:\n- Trivial solutions: This is true (see discussion with Reviewer 1), and obviously we need a bottleneck or noise in the global variable. One potential solution to this is presented in our occlusion experiments (Table 5 in the revision), where some local features are masked out from computation of the global objective.\n- Using DIM with supervised learning: It sounds reasonable to use DIM directly as a regularizer for supervised learning, and our fine-tuning experiments for STL-10 support this. However, we have not tried this experiment specifically.\n- C and X: C_i is the feature map location that corresponds to the receptive field X_i.", "While searching for more prior work based on different versions of the original \"binary\" form of NCE, we found an explicit presentation of the \"multinomial\" NCE used in CPC and DIM.\n \nThe loss presented in CPC is less novel than we previously thought. The multinomial version of NCE is precisely described in Section 3 of [1]. A rigorous analysis of the relation between binary and multinomial NCE was also recently published in [2, page 3], which was submitted for review prior to CPC's appearance on arXiv. 
\n\n[1] \"Exploring the Limits of Language Modeling\" (Jozefcowicz et al., 2016)\n[2] \"Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency\" (Ma and Collins, EMNLP 2018),", "We will provide a complete rebuttal soon, but first we address some concerns about our use of the terms DIM/CPC/NCE etc.\n\nDIM(L) and CPC have many similarities, but they are not the same. The key difference between CPC and DIM is the strict way in which CPC structures its predictions, as illustrated in Figure 1 of [1]. CPC processes local features sequentially (fixed-order autoregressive style) to build a partial “summary feature”, then makes separate predictions about several specific local features that weren’t included in the summary feature. \n\nFor DIM (without occlusions), the summary feature is a function of all local features, and this “global” feature predicts all of those features simultaneously in a single step, rather than forming separate predictions for a few specific features as in CPC. A consequence of this difference is that DIM is more easily able to perform prediction across all local inputs, as the predictor feature (global) is allowed to be a function of the predicted features (local). DIM with occlusions shares more similarities with CPC, as it mixes self-prediction for the observed local features with orderless autoregression for the occluded local features (see [6] for further discussion of ordered vs orderless autoregression).\n\nUsing Noise Contrastive Estimation (NCE) to estimate and maximize mutual information was first proposed in [1], and we credit them in the manuscript (and we will further emphasize this in the revision). While there are a variety of NCE-based losses [2, 3, 4], they all revolve around training a classifier to distinguish between samples from the intractable target distribution and a proposal noise distribution. E.g., [5] uses NCE based on an unbalanced binary classification task, and the loss in [1] is a direct extension of this approach. While novel to [1], we do not consider this NCE-based loss the defining characteristic of CPC, which could instead use, e.g. the DV-based estimator proposed in [7]. The authors of [1] specifically mention this as a reasonable alternative. Due to significant differences in which mutual informations they choose to estimate and maximize, we think it would be ungenerous to consider our method equivalent to CPC whenever we use this estimator.\n\n[1] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. \"Representation learning with contrastive predictive coding.\" arXiv preprint arXiv:1807.03748 (2018).\n[2] Gutmann, Michael, and Aapo Hyvärinen. \"Noise-contrastive estimation: A new estimation principle for unnormalized statistical models.\" Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. 2010\n[3] Gutmann, Michael U., and Aapo Hyvärinen. \"Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics.\" Journal of Machine Learning Research 13.Feb (2012): 307-361.\n[4] Mnih, Andriy, and Yee Whye Teh. \"A fast and simple algorithm for training neural probabilistic language models.\" arXiv preprint arXiv:1206.6426 (2012).\n[5] Mikolov, Tomas, et al. \"Distributed representations of words and phrases and their compositionality.\" Advances in neural information processing systems. 2013.\n[6] Benigno Uria, Marc-Alexandre Cote, Karol Gregor, Iain Murray, and Hugo Larochelle. 
“Neural Autoregressive Distribution Estimation.” arXiv preprint arXiv:1605.02226 (2016).\n[7] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, Devon Hjelm. \"Mutual Information Neural Estimation.\" Proceedings of the 35th International Conference on Machine Learning, PMLR 80:531-540, 2018.", "This paper presents a representation learning approach based on mutual information maximization. \nThe authors propose the use of local structures and distribution matching for better acquisition of representations (especially) for images.\n\nStrong points of the paper are: \n* It gives a principled design of the objective function based on the mutual information between the input data point and the output representation. \n* Performance is gained by incorporating local structures and matching the representation distribution to a certain target (called a prior).\n\nA weak point I found was: \nThe local structure and evaluation are specialized for the classification task on images. \n\nQuestions and comments:\n* Local mutual information in (6) may trivially be maximized if the summarizer f (E(x) = f \\circ C(x) with \\psi omitted for brevity) concatenates all local features into the global one.\nHow was f implemented? Did you compare with this concatenation approach?\n* Can we add DIM as a regularizer to the objective of a downstream task? \nIt would be very useful if combining an objective of classification/regression or reinforcement learning with the proposed (8) were able to improve the performance of the given task.\n* C^(i)_\\psi(X) in (6), but X^(i) in (8): are they the same thing?", "Thank you for the references. The works cited essentially do the global version of DIM, but with discrete representations rather than continuous ones. Solutions for \"global\" infomax become degenerate, which motivates the use of regularization in the encoder. Regularization such as that used in the referenced works (weight decay in [1] and data augmentation in [2]) is essential for these approaches to work. This problem also affects us, and this probably is the reason for the poor performance of \"global DIM\" with deterministic input->representation mappings.\n\nWe find that the regularization used in [2] is far more relevant to our work, as it \"regularizes\" the model by making it more robust to data augmentation / sensible transformation in the input space. This is similar in spirit to what we do in the occlusion experiments, where augmentation is done by removing part of the input when computing the global vector. Overall, [2] is essentially equivalent to adding data augmentation to the global version of DIM in the discrete setting. While the goal of the local version of DIM is to improve generalization via spatial consistency across features, the connection to data augmentation in [2] is not as clear-cut. We do agree that [2] is highly relevant to our work and will add it to the related works on the topic of \"leveraging known structure\" / data augmentation.", "It has already been pointed out that InfoMax alone is not enough to learn useful representations [1][2]. [1][2] apply regularization to resolve this problem, and your method can also be regarded as (a different kind of) regularization.\n\n[1] Gomes, R., Krause, A., and Perona, P. Discriminative clustering by regularized information maximization. In NIPS, 2010.\n[2] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., and Sugiyama, M. Learning discrete representations via information maximizing self-augmented training. In ICML, 2017." ]
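The "multinomial" NCE debated in the exchange above is the softmax-based contrastive loss in which each positive pair must be picked out against in-batch negatives. As a rough, self-contained illustration (not code from any of the cited papers; all names are ours), it can be written in NumPy as follows.

import numpy as np

def multinomial_nce_loss(global_feats, local_feats):
    # Each global feature must identify its own local feature (the diagonal
    # score) against the other batch entries, which act as negative samples.
    scores = global_feats @ local_feats.T                # (batch, batch)
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))

rng = np.random.default_rng(0)
loss = multinomial_nce_loss(rng.normal(size=(32, 64)), rng.normal(size=(32, 64)))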
[ 7, -1, -1, 9, -1, -1, -1, -1, -1, 7, -1, -1 ]
[ 5, -1, -1, 3, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2019_Bklr3j0cKX", "rkgBfaIahQ", "B1gEEtgAjm", "iclr_2019_Bklr3j0cKX", "iclr_2019_Bklr3j0cKX", "B1gEEtgAjm", "BkxA0Kt3nQ", "rkgBfaIahQ", "rkgBfaIahQ", "iclr_2019_Bklr3j0cKX", "rklhZYMJom", "iclr_2019_Bklr3j0cKX" ]
iclr_2019_ByeZ5jC5YQ
KnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks
Feature selection is a pervasive problem. The discovery of relevant features can be as important for performing a particular task (such as to avoid overfitting in prediction) as it can be for understanding the underlying processes governing the true label (such as discovering relevant genetic factors for a disease). Machine learning driven feature selection can enable discovery from large, high-dimensional, non-linear observational datasets by creating a subset of features for experts to focus on. In order to use expert time most efficiently, we need a principled methodology capable of controlling the False Discovery Rate. In this work, we build on the promising Knockoff framework by developing a flexible knockoff generation model. We adapt the Generative Adversarial Networks framework to allow us to generate knockoffs with no assumptions on the feature distribution. Our model consists of 4 networks, a generator, a discriminator, a stability network and a power network. We demonstrate the capability of our model to perform feature selection, showing that it performs as well as the originally proposed knockoff generation model in the Gaussian setting and that it outperforms the original model in non-Gaussian settings, including on a real-world dataset.
accepted-oral-papers
The paper presents a novel strategy for statistically motivated feature selection, i.e., one aimed at controlling the false discovery rate. This is achieved by extending knockoffs to complex predictive models and complex distributions via (multiple) generative adversarial networks. The reviewers and ACs noted weaknesses in the original submission which seem to have been fixed after the rebuttal period -- primarily related to missing experimental details. There was also some concern (as is common with inferential papers) that the claims are difficult to evaluate on real data, as the ground truth is unknown. To this end, the authors provide empirical results with simulated data that address this issue. There is also some concern that more complex predictive models are not evaluated. Overall the reviewers and AC have a positive opinion of this paper and recommend acceptance.
train
[ "S1lT0N1cTQ", "Hkx5lH19T7", "B1xSOmkcpQ", "B1gBqxk9pm", "H1eAulfwpm", "HklHlOUPnQ", "HkeTawVrhX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nA5: This was indeed an oversight and we will correct the text. We will change “trivial” to “well-known” and hopefully that will make clearer our point. The asterisk will be removed as we do not feel it helps provide any clarity.\n\nA6: The citations found in Table 1 are in fact citations to relevant PubMed literature who describe associations related to those found by our analysis. We will make this clearer by adding two columns to the table, each one containing the citations for the corresponding features.\nWe certainly concede that this type of evaluation is qualitative at best, but is the best we can do on real data given that we don’t have access to the ground truth. We would point to the synthetic data experiments as the main source of evidence for the efficacy of our method.\n\nA7: For reference, the dataset we used is 387-dimensional in the real-world experiments. Of course, upon an acceptance decision we will be able to disclose more complete details of the dataset which will be included in the paper.\n\nA8: We will correct these, thank you.", "Thank you for your insightful comments.\n\nA1: Thank you for your insightful comments. Actually, the power network and modified discriminator do not directly conflict - the modified discriminator requires that a feature and its knockoff have the same conditional distribution given the other features, but can be independent otherwise. This “room for independence” is what the power network will be minimizing. Of course, this is not the same as being able to be fully independent and so there is some trade-off, but we found that in practice setting the trade-off parameter (lambda) to 1 worked well. \nIt should be noted that hyper-parameter selection cannot be performed using cross-validation as we do not have access to ground truth and so the hyperparameters must be fixed a priori. For this we believe that 1 is a natural choice and as noted performed well in practice. Below are the results of varying lambda in the Y-Logit setting with features distributed as auto-regressive uniform.\n---------------------------------------------------------------------------------------------------\nPhi / Lambda | 0 | 0.1 | 0.5 | 1 | 5 | 10 |\n---------------------------------------------------------------------------------------------------\n 0.1 | 70.3 | 76.2 | 75.7 | 77.0 | 75.1 | 76.0 |\n 0.2 | 68.7 | 72.6 | 71.9 | 72.7 | 72.9 | 73.7 |\n 0.4 | 42.4 | 55.1 | 51.1 | 53.1 | 53.8 | 53.2 |\n 0.8 | 22.4 | 27.4 | 25.3 | 27.7 | 27.5 | 27.6 |\n---------------------------------------------------------------------------------------------------\n\nA2: Specific design details can be found in the Appendix under the “Implementation of KnockoffGAN” section. The training of the power, discriminator and WGAN discriminator is in fact independent (i.e. the updates of each of these do not depend on the weights of the other networks), and as such these 3 can be trained in any order (or even in parallel). Following standard practices in the WGAN literature we trained both discriminators and the power network for 5 iterations per generator iteration.\n\nA3: We were not aware of this paper, thank you. While this method does propose a more general method for generating knockoffs, it still has the same limitations of the original knockoff framework in that it is required to know that the distribution is from the class of distributions discussed in the paper (which includes Gaussian Mixture Model (GMM)). 
We would also note that, although we do perform experiments in a GMM setting, this was not a cherry-picked distribution, and the main goal here was to demonstrate results on non-Gaussian distributions for comparison with the original knockoff framework. This, we believe, demonstrates the more general result that when the distributions are mis-specified, the knockoff method struggles to give good performance, and this would apply to the above-mentioned paper in non-GMM settings as well. We will revise the manuscript to include a citation of this paper in the related works section and a brief discussion.\n\nA4: We will include the results of the regularization effects (mu = 0 and mu = 1) in terms of the final values of the other loss terms (L_D and L_P) in the revised supplementary materials. For easy reference, the results are given below for the auto-regressive uniform case with Y-Logit:\n--------------------------------------------------------------------------------------------\n | L_D | L_P |\n Phi | Mu = 0 | Mu = 1 | Mu = 0 | Mu = 1 | \n--------------------------------------------------------------------------------------------\n 0.1 | 0.6894 | 0.6962 | 0.0014 | 0.0203 | \n 0.2 | 0.7005 | 0.6964 | 0.0018 | 0.0115 | \n 0.4 | 0.6919 | 0.6955 | 0.0046 | 0.0180 | \n 0.8 | 0.7013 | 0.6960 | 0.0068 | 0.0311 | \n--------------------------------------------------------------------------------------------", "Thank you for your insightful comments.\n\nA1: We would first like to note that selecting hyper-parameters in this setting is not possible through conventional means, and they typically need to be selected in advance, as we do not have access to any ground truth that allows us to perform cross-validation for hyper-parameter optimization.\nAs for this hyper-parameter (B_i) in particular, we note that it is used to trade off between speed of learning and optimality of the learned solution. A low probability makes for fast but suboptimal convergence, whereas a high probability makes for slow convergence (but a “more” optimal solution). We chose 0.9 to balance this, following the implementation of [36]. To demonstrate this trade-off, we will include results for our method in which we vary this hyper-parameter (from 0 to 0.9) in the revised manuscript. The results below are for the auto-regressive Uniform distribution with Y-Logit, FDR set to 10%, for various values of the success probability of B_i:\n---------------------------------------------------------------------------------------------\nPhi / B_i | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 0.9 |\n---------------------------------------------------------------------------------------------\n 0.1 | 73.3 | 74.7 | 75.0 | 75.3 | 75.7 | 77.0 |\n 0.2 | 67.7 | 68.4 | 69.1 | 69.7 | 72.4 | 72.7 |\n 0.4 | 51.2 | 51.6 | 51.7 | 51.9 | 52.9 | 53.1 |\n 0.8 | 18.0 | 51.7 | 21.4 | 22.4 | 25.4 | 27.7 |\n---------------------------------------------------------------------------------------------\n\nA2: As noted above, it is not possible to tune hyperparameters in this setting as we do not have access to any ground truth; however, we will include results for various hyper-parameter settings (various lambda and mu) to evaluate their sensitivity. \nSetting eta = 10 is standard practice in the WGAN literature; thus, we fixed it. 
Finally, we would like to reassure you that we did not cherry-pick these hyperparameters (lambda and mu) - they were set to 1, which is perhaps the most “canonical” choice we could have made.\nBelow, we give the results for varying lambda and mu in the auto-regressive Uniform setting with Y-Logit and FDR set to 10%:\n---------------------------------------------------------------------------------------------------\nPhi / Lambda | 0 | 0.1 | 0.5 | 1 | 5 | 10 |\n---------------------------------------------------------------------------------------------------\n 0.1 | 70.3 | 76.2 | 75.7 | 77.0 | 75.1 | 76.0 |\n 0.2 | 68.7 | 72.6 | 71.9 | 72.7 | 72.9 | 73.7 |\n 0.4 | 42.4 | 55.1 | 51.1 | 53.1 | 53.8 | 53.2 |\n 0.8 | 22.4 | 27.4 | 25.3 | 27.7 | 27.5 | 27.6 |\n---------------------------------------------------------------------------------------------------\nPhi / mu | 0 | 0.1 | 0.5 | 1 | 5 | 10 |\n---------------------------------------------------------------------------------------------------\n 0.1 | 70.0 | 79.3 | 79.0 | 77.0 | 77.3 | 79.0 |\n 0.2 | 63.4 | 74.4 | 73.4 | 72.7 | 74.0 | 73.7 |\n 0.4 | 42.1 | 49.8 | 50.1 | 53.1 | 52.8 | 50.8 |\n 0.8 | 22.0 | 27.7 | 29.7 | 27.7 | 29.4 | 23.4 |\n---------------------------------------------------------------------------------------------------\n\nA3: We agree that the real data does not provide strong evidence for the efficacy of our method, and would point to the synthetic data as the main source of evidence. However, we performed the real data experiment as a qualitative experiment, hoping to demonstrate that at the very least the method is capable of discovering known relevant features (according to PubMed).\nWhile we could use the predictive power of the selected features as a metric, we do not believe this would be a meaningful metric here. The focus of this method (and Knockoffs in general) is on discovery of relevant variables and not on selecting variables for prediction. The predictive power of a set of features, while (most likely) correlated with TPR, will not necessarily increase monotonically with TPR.", "Thank you for the insightful comments.\n\nA1: We would first like to note that our synthetic experiments consist of Gaussian distributed features vs non-Gaussian distributed features (which include uniform, auto-regressive uniform and Dirichlet, as well as mixtures of Gaussians). Second, to make the FDR control clearer, we will switch the FDR graphs from a 0 to 100% scale to a 0 to 20% scale in the revised manuscript. Third, we will add additional experiments with the FDR threshold set to 5% to align with the real-world experiments (we note that FDR was set to 5% in the real data experiment so that the list of discovered true positives was manageable for cross-reference with PubMed). 
Below we give the results for the autoregressive Uniform distribution in the Y-Logit setting at 5% FDR; further results will be added to the revised manuscript.\n\n-----------------------------------------------------------------------------------\nPhi/Methods (TPR) | KnockoffGAN | Knockoff | BHq Max Lik |\n-----------------------------------------------------------------------------------\n 0.1 | 0.718 | 0.562 | 0.040 |\n 0.2 | 0.616 | 0.550 | 0.028 |\n 0.4 | 0.380 | 0.205 | 0.027 |\n 0.8 | 0.171 | 0.012 | 0.017 |\n------------------------------------------------------------------------------------\nPhi/Methods (FDR) | KnockoffGAN | Knockoff | BHq Max Lik |\n-----------------------------------------------------------------------------------\n 0.1 | 0.038 | 0.034 | 0.053 |\n 0.2 | 0.049 | 0.035 | 0.012 |\n 0.4 | 0.042 | 0.027 | 0.062 |\n 0.8 | 0.044 | 0.011 | 0.019 |\n------------------------------------------------------------------------------------\n\nA2: (i) The dataset details are not given at this time as we wish to remain anonymous for the review process. Upon an acceptance decision, we will include all details of the dataset in the paper.\n(ii) Our key contribution in this work is in having provided a new method for generating the knockoffs, and both the Lasso-based and Random Forest-based statistics we used are from existing knockoff works. In the synthetic data settings, the relationships are known to be linear in both the Y-Logit and Y-Gaussian settings, while in the real data setting the relationships are unknown. For this reason it makes sense to use the Lasso-based statistic on the synthetic data and the non-parametric RF-based estimator on the real data. \n(iii) The results above demonstrate FDR control at 5%, and further results will be added to the main manuscript (on more distributions than the AR-uniform given above).", "The paper presents a deep-learning-based version of the knockoff method by Candes et al. for FDR control in feature selection problems, to avoid assumptions posed on the distribution of features by the original method. In a supervised feature selection setting, the goal of the knockoff framework is to select a set of input features that are statistically associated with an output variable Y, while controlling the FDR. The basic idea behind knockoffs is to generate artificial input feature vectors (i.e. knockoffs) that are independent of Y when conditioned on the real feature vector X, but, after swapping arbitrary elements with X, are distributed as X. Sets of associated features and FDR estimates are obtained by contrasting suitable feature selection criteria that measure associations of knockoffs and real features with the target Y.\nLasso coefficients and random forests are used in the paper.\nThe main contribution of the current paper is the use of a GAN to generate knockoffs and, in particular according to the paper, the use of a discriminator that tries to identify the positions of knockoff features that have been swapped into real feature vectors X, to control equality in distribution between knockoffs and feature vectors. Additionally, a Wasserstein discriminator and a MINE loss are used to control the knockoff distribution. Otherwise, the paper follows the standard knockoff procedure.\n\nThe approach is evaluated in simulations, varying two degrees of freedom: i) Gaussian distributed features vs. features that follow mixtures of Gaussians. 
ii) Gaussian and logit distributions of Y conditioned on linear functions of a subset of X features.\nUsing a Lasso-based feature selection criterion, the GAN knockoff method achieves the highest TP rates among a number of methods that are empirically shown to roughly control a target FDR of 10%. However, the figures are too small to judge FDR control at a finer grain than 10% +-5%. Here, I would have wished for i) a higher resolution to demonstrate FDR control, as well as ii) an evaluation of different FDR cutoffs, especially including smaller cutoffs.\n\nAdditionally, an application to real data is performed. However, this evaluation is not very informative, for several reasons. i) The dataset is not specified, making the experiment non-transparent and non-reproducible. ii) A different feature selection criterion, based on random forests, is used, compared to the Lasso-based criterion in the synthetic experiments. iii) A different FDR cutoff of 5% has been used compared to the simulations. It is not clear if the method shows FDR control in synthetic settings at 5%. For these reasons, the real-world experiment is hardly comparable to the synthetic settings.\n\n\nThe paper is relatively well-written and clear. Discussion of related work is appropriate.\n\nIn sum, the paper has some limitations in the empirical evaluation, but nonetheless the use of a GAN promises significant gains in statistical power.\n", "This manuscript describes an extension of the knockoff framework, which is designed to carry out feature selection while controlling the FDR among selected features, to settings in which the generative distribution of the features is not Gaussian. Specifically, the authors employ a GAN (with several modifications and additions) in which the generator produces knockoffs and the discriminator attempts to identify which features have been swapped between the original and the knockoff.\n\nThe method works as follows: (1) A conditional generator takes random noise and the real features as input, and outputs knockoff features. (2) A modified discriminator is used in such a way that the generator learns to generate knockoffs satisfying the necessary swap condition, so as to control the FDR of the knockoff procedure. (3) A power network uses Mutual Information Neural Estimation (MINE) to estimate the mutual information between each feature and its knockoff counterpart, so as to maximize the power of the knockoff procedure.\n\nResults are provided on synthetic and real data. As for the synthetic data, when the underlying feature distribution is Gaussian, the proposed method, KnockoffGAN, performs almost as well as the original knockoff and outperforms the BHq method; when the underlying feature distribution is non-Gaussian, KnockoffGAN dominates both the original knockoff and BHq methods. As for the real data, the authors claim to identify nine relevant features for cardiovascular disease and eight relevant features for diabetes, whereas the original knockoff procedure identifies zero features from the same data.\n\nGeneral comments: \n\nThis is an extremely impressive piece of work. 
The manuscript itself is a pleasure to read, and the results clearly demonstrate that the proposed KnockoffGAN both controls FDR and achieves power comparable to the original knockoff procedure in the Gaussian setting and much better than the original knockoff when the underlying distribution is not Gaussian.\n\nStrengths: \n\nThe combination of GANs and the knockoff filter is a very promising and intriguing idea.\n\nThe use of the modified discriminator to ensure that the generated knockoffs satisfy the necessary swap condition is novel and intuitively sound.\n\nThe use of MINE to maximize power by maximizing the mutual information between each feature and its knockoff counterpart is also interesting.\n\nThe paper is well written, reads smoothly, and the ideas are well presented.\n\nThe illustrative figure is straightforward.\n\nWeaknesses:\n\nIntuitively, the modified discriminator and the power network should conflict with each other. I expect it was tricky to achieve a good tradeoff between the two, but the authors failed to elaborate on these details.\n\nThe authors do not provide the design details of the neural networks. How dependent on the specific parametrization of the network architecture is the performance? How does the training order of the four networks matter to the performance? \n\nThe manuscript should cite [[ Jaime Roquero Gimenez, Amirata Ghorbani, and James Zou. \"Knockoffs for the mass: new feature importance statistics with false discovery guarantees.\" arXiv:1807.06214, 2018. ]] which proposes a way to generate knockoffs for a Gaussian mixture model, and this method should be included in the relevant supplementary figure.\n\nIn Section 5.1.4, I would like to know, for a fixed data set, how the regularization affects the final values of the other loss terms.\n\nThe analysis of real data in Section 5.2 is unsatisfying in several respects.\n\nFirst, there is an unfortunate oversight in Table 1: the text refers to three features that are \"trivial,\" but only one of these is marked with an asterisk. This leaves open the question of whether there are other trivial features beyond the three mentioned in the text. In addition, it is not clear exactly what it means for a feature to be \"trivial\" in this context.\n\nThis point gets to a deeper problem with the evaluation, which is that we are told, with no evidence, that these features are supported by literature in PubMed. I would like to see two things here. First, it seems obvious to me that if you are going to say that there is support in PubMed, you are obliged to actually report the citations that supposedly give this support. This could be done in the appendix. Equally importantly, there is a potential here for ascertainment bias, which should be combated in some fashion. Presumably, some human expert had to do the PubMed searches to make this assessment. I would like to know how \"permissive\" this assessor is. To assess this, one could give the assessor a collection of terms, some of which were selected by KnockoffGAN and some at random, and then report the results. Obviously, some features that are significant may not be in the list of selected terms (because KnockoffGAN does not achieve 100% power) and so may appear as false negatives. But without some assessment like this, I have trouble believing this evaluation.\n\nA related point is that it seems quite unfortunate that the authors chose a data set that cannot be described at all due to the anonymity constraint. 
At the very least, it seems that we should be told the dimensionality of the data set. The knockoff literature contains real data sets that could have been used here.\n\nMinor comments:\n\nOn the first page, the sentence beginning "On the other hand," should clarify that this is only in expectation.\n\np. 3: Missing right paren after [7].\n\np. 5: Write out “Gaussian process.”\n\np. 5: \"as little\" -> \"as little as possible\"\n\np. 6: \"to show that in\" -> \"to show, in\"\n\nIn Figures 2-5, add a horizontal line at 10% FDR for reference.\n\np. 10: \"features ones\" -> \"features\"\n\nNote to program committee:\n\nI did not review the technical details of the proof in the appendix.\n\n\n", "This paper introduces a novel feature selection method by utilizing a GAN to learn the distributions. The novelty of this paper is to incorporate two recent works, i.e. knockoffs for feature selection and W-GAN for generative models. Compared to the latest knockoff work, which requires a known multivariate Gaussian distribution for the feature distribution, the proposed work is able to generate knockoffs for any distribution, without any prior knowledge of it.\n\nPros: This paper is very well written. I enjoyed reading this paper. It is novel and addresses an important problem. The numerical study clearly shows the advantage of the proposed work. \n\nCons:\n\nQ1: In the discriminator, instead of training with respect to the full loss, the authors consider masking some information by using a multivariate Bernoulli random variable $B$ with success probability 0.9. Then the discriminator needs to predict only when $B_i = 0$. Can the authors provide some justification for such a choice of the parameters? This choice is a little bit mysterious to me.\n\nQ2: How sensitive are the hyper-parameters $\\eta$ (set to 10 in the experiments), $\\lambda$, and $\\mu$ (set to 1 in the experiments)?\n\nQ3: In the real data example, the feature selection performance is less justified as there is no ground truth. One suggestion is to evaluate the prediction errors using the selected features and compare with the benchmarks.\n\n\n\n" ]
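For readers less familiar with the knockoff machinery discussed in the reviews above: once feature statistics W_j have been computed (large positive W_j favoring the real feature over its knockoff), selection with FDR control uses the knockoff+ threshold of Candes et al. A minimal NumPy sketch follows; the function name and the toy data are our own illustrative assumptions, not the paper's code.

import numpy as np

def knockoff_plus_select(W, q=0.1):
    # Pick the smallest threshold t whose estimated false discovery
    # proportion (1 + #{W <= -t}) / #{W >= t} is at most q, then select
    # the features with W >= t.
    for t in np.sort(np.abs(W[W != 0])):
        if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)

rng = np.random.default_rng(0)
W = np.concatenate([rng.normal(2.0, 1.0, 30), rng.normal(0.0, 1.0, 370)])  # toy statistics
selected = knockoff_plus_select(W, q=0.1)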
[ -1, -1, -1, -1, 6, 10, 7 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "HklHlOUPnQ", "HklHlOUPnQ", "HkeTawVrhX", "H1eAulfwpm", "iclr_2019_ByeZ5jC5YQ", "iclr_2019_ByeZ5jC5YQ", "iclr_2019_ByeZ5jC5YQ" ]
iclr_2019_Byg3y3C9Km
Learning Protein Structure with a Differentiable Simulator
The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time. This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles. In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information. We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures.
accepted-oral-papers
This paper presents a differentiable simulator for protein structure prediction that can be trained end-to-end. It makes several contributions to this research area. In particular, training a differentiable sampling simulator could be of interest to a wider community. The main criticisms concern clarity for the machine learning community and the empirical comparison with state-of-the-art methods. The authors' feedback addressed a few points of confusion in the description, and I recommend that the authors further polish the text for better readability. R4 argues that a good comparison with the state-of-the-art methods in this field would be difficult, and that the comparison with an RNN baseline is rigorously carried out. After discussion, all reviewers agree that this paper deserves publication at ICLR.
train
[ "SJlUZjqEim", "B1lRoQMLTX", "HylvOjIjCQ", "r1etop_q0X", "SkxK8JG507", "HJlD-yM90X", "rJeaJ0bc0X", "Hyxj9pbc0Q", "Skx-IPBvTQ", "Hyl_nFNX6m", "Hkgk3b343m", "ryeM3Sm4im", "ByeS8UUb5X", "Ske1rVBZcX", "B1eOXxVb9X" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "public", "author", "public" ]
[ "Post-rebuttal revision: The authors have adressed my concerns sufficiently. The paper still has issues with presentation, and weak comparisons to earlier methods. However, the field is currently rapidly developing, and comparing to earlier works is often difficult. I believe the Langevin-based prediction is a significant and clever contribution. I'm raising my score to 6.\n\n------\n\nThe paper proposes an end-to-end neural architecture for learning protein structures from sequences. The problem is highly important. The method proposes to use a Langevin simulator to fold the protein ‘in silico’ from some initial state, proposes numerous tricks for the optimisation, and proposes neural networks to extract information from both the sequence and the fold state (energy function). The system works on internal coordinates, which are conditioned and integrated on the fly. The method seems to perform very well, improving upon their baseline model considerably.\n\nIn spite of the paper being an outstanding work, I have two criticisms about the accessibility and impact of the paper on the broader ICLR audience. In its current form and complexity, the paper feels accessible mostly to a narrow audience.\n\nFirst, the framework proposed in the paper is massive, containing a large amount of components, neural networks, simulators, integrators, optimisation tricks, alignments, profiles, stabilizations, etc. The amount of work done in the manuscript is staggering, but the method is also difficult to understand from reading the main manuscript alone. The 10+ page appendix is critical for understanding (for instance, the appendix reveals that MSA is used to generate more data), and even with it the method is difficult to grasp as a whole. This paper should be presented in a journal form with a presentation not hindered by page limits, while currently one needs to jump between the main text and appendix to get the whole picture. I also wonder if some parts of the system have already been published, and perhaps the presentation could be condensed that way. \n\nSecond, the introduction lists numerous competing methods both on the protein modelling side and on the MCMC vs optimisation side. The paper does not compare to any of these, which is strange, and makes it difficult to assess how much this paper improves upon state-of-the-art. Right now its unclear what is state-of-the-art in general. No bigger context of protein folding is given either, for instance, how well the method fares against purely alignment based approaches, or against purely physics-based simulators. Finally, the experimental section poorly describes how all the pieces of the system affect the final predictions. The discussion on the exploding gradients and dampening is excellent however. The only baseline is one with the simulator replaced by an RNN. There does not seem to be any running time analyses. As such, it is hard to interpret the current system, and it feels like a black box.", "Overall this is an important piece of work that deserves publication at ICLR. I recommend to the authors revise their manuscript to make it more accessible to the machine learning community and that they provide better context to allow them to assess the relative quality of the work compared to state of the art results.\n\n# Quality\n\nThe hypothesis that the authors set out to resolve is whether there is an advantage in using an energy function sampled by Langevin dynamics versus simply using a neural network to regress shape from sequence. 
They construct a flexible deep energy model where the sequence- and structure-dependent parts are separated in such a way that fast rollouts are possible. They also adapt the learning algorithm to ensure that long rollouts can be carried out and present a clever trick for integrating internal coordinates efficiently on a GPU. \n\nThe only criticism in terms of quality of work is that it somewhat lacks putting in context with results from the larger community, for example how well does the model compare in terms of speed and accuracy with co-evolutionary approaches? I realise it will not be possible to give a completely fair like-to-like comparison, but it will help readers put the results in context if they understood, for example, what the average TM score for CASP12 results was, as summarized in this paper for example: https://onlinelibrary.wiley.com/doi/full/10.1002/prot.25423. Similarly, it would be useful to compare the baseline - at least qualitatively - with the results from AlQuraishi et al. whose model seems very similar in spirit.\n\n# Clarity\n\nI think in terms of clarity, the paper could be improved a little to take into account the audience of ICLR. In particular:\n\n* It may be useful to add a sentence of how profiles have been found to improve secondary structure prediction greatly. Currently the text makes it sound as though they constitute a sort of 'data augmentation', whereas in my opinion they add information compared to the sequence alone. In fact a brief explanation of the importance of homology might help the reader understand the relevance of the hierarchical approach taken to splitting the training set.\n\n* Fig. 2 caption. Could add some information to explain what panel B is showing. I think this would go a long way to explain why both Cartesian and internal coordinates are important.\n\n* Fig. 4 second panel. The x axis should be labeled fraction or be numbered 0-100.\n\n* Fig. 4 caption. The figure does not have a caption explaining what the graphs are showing. This would be a good place to explain that the colors refer to test sets that overlap with the training set in the full CATH code (black), overlap only in the CAT code (orange) etc. I admit I had found the explanation of the test/train/validation split rather confusing. It is not clear what the validation set is used for, i.e. which hyper-parameters have been tuned on it etc.\n\n* The nature of the loss. The appendix does a good job in describing each term in the loss function, but does not explain how the empirical loss function and the log-likelihood terms are mixed together. \n\n# Originality\n\nThe work is original and references the relevant literature.\n", "We apologize and feel that we may have been unclear in how we are framing this work. Our primary goal is to explore the utility of a learned, differentiable simulator in the context of the challenging problem of protein folding and to show that it can have better inductive bias than an approach based directly on angle-prediction (i.e. AlQuraishi and our baseline RNN). 
To be clear, we do not expect that this method would be competitive at CASP (where all forms of sequence and structure data before a certain date are fair game), as the cost of scaling training limited us to a medium-sized dataset of proteins of limited length (200 AAs) from which many topologies and architectures have additionally been held out.\n\nWe feel that it is still useful to present this novel methodology before it can scale to the challenge of CASP, because we are trying to argue about the inductive bias of simulators and not to make a general claim about deep methods versus conventional methods. Both our method and AlQuraishi’s can create models with hundreds to thousands of atoms in milliseconds to seconds, which is far off from the timescales of physics-based approaches (even the very impressive work of Jumper et al measures simulation times on the order of CPU-days), and we consider that sufficiently interesting to motivate research on ‘deep’ approaches.\n\nWith that in mind, we believe that the RNN baseline is a meaningful comparison and not ‘artificial’. Within the very recently emerging field of end-to-end models of protein structure, the idea of directly predicting internal coordinates (AlQuraishi) may be considered the other established paradigm (In Anand et al’s currently available manuscript, they focus on structural imputation & generation rather than prediction). Like AlQuraishi’s work, our RNN baseline composes a multilayer, bidirectional LSTM that predicts internal coordinates (in our case, coarse) with a scoring function on the resulting atomic Cartesian structure (in our case, after imputation). If we were to directly retrain the AlQuraishi model on our dataset, there are many possible explanations for performance differences such as different losses, the imputation network, training details, etc. We designed the experiments around our baseline because it allowed us to keep those factors constant while replacing only the simulator portion of the model (this is what we meant by “where differences in performance come from”). Regarding the methods of Jumper and Krupa, we certainly want to acknowledge their contributions and related ideas but find that the costs of training (much longer simulations per protein) would be very challenging to scale to our dataset of 35k proteins.\n\nTo your other suggestions, we will add per-protein running time statistics and specific schematics for the 1D and 2D neural network modules upon revision. \n\nWe thank you for your patience with this work, and hope that the ICLR community can find value in the methodological and conceptual contributions.\n", "Thanks for your response.\n\nI am still concerned by the lack of comparisons to earlier methods. As described very nicely in the introduction, the main energy-based methods to compare to are those of Jumper and Krupa, while AlQuraishi and Anand are the competing deep models. Both AlQuraishi and Anand are very recent, and there does not seem to be a public manuscript of the Anand work yet, hence comparing to them would be unreasonable. However, both the Jumper and Krupa models seem simple enough to be implemented and compared to (neither seems to have an implementation available). Jumper's method claims 3 days of computation per protein, and in that case the key characterisation would be accuracy vs speed of different methods.\n\nFurthermore, in AlQuraishi they explicitly compare to CASP 7-12 top methods, which seem to be the classic conventional methods Rosetta, I-TASSER, HHPRED, etc. 
They also seem to recreate the conditions of earlier CASPs to get comparable results, which is a great idea. \n\nOne needs to lay out the baselines and compare to earlier methods, especially when one is proposing a novel paradigm (deep learning). Using only the RNN as a baseline is not sufficient to evaluate how the proposed method performs in different cases. You claim to focus on \"comparing to end-to-end approaches\", however there seem to be none currently except for the somewhat artificial RNN baseline. CASP methods or standard protein servers as baselines are then necessary to show that the new (deep) paradigm has merit. I would also encourage the authors to check the developments in CASP13. I think comparisons are the main issue with the paper, and need to be properly addressed. \n\nI'm also not sure how the RNN baseline shows \"where differences in performance come from\". Can you explicate what parts of NEMO cause its performance to be better than the RNN's, and why? \n\nThe running times could be more specific by detailing the processing time required per protein.\n\nOn the presentation, I think the paper mostly needs a \"big picture\" figure that shows how the neural networks play into the simulation and energy systems. For instance, AlQuraishi's figs 1+2 are very illuminating in this respect. ", "Thank you for your constructive review of this work. We hope that with improved presentation and contextualization, it can be relevant to a broad audience at ICLR.\n\n> In its current form and complexity, the paper feels accessible mostly to a narrow audience.\n> The amount of work done in the manuscript is staggering, but the method is also difficult to understand from reading the main manuscript alone. The 10+ page appendix is critical for understanding … This paper should be presented in a journal form with a presentation not hindered by page limits… I also wonder if some parts of the system have already been published, and perhaps the presentation could be condensed that way.\n \nWhile a journal might better accommodate the appendix, we believe that the complete system can be of interest to a general audience at ICLR because it connects recent interesting ideas in machine learning (e.g. differentiable simulators & meta-learning) to a challenging and well-known application domain with novel methodology (transform integrator, stabilization strategies, etc). To make these contributions more accessible, we have simplified the presentation of the model in the main text and added previously missing legends and overview paragraphs. (Lastly, in case it's relevant, no parts of the model have previously been published.)\n\n> The paper does not compare to any of these, which is strange, and makes it difficult to assess how much this paper improves upon state-of-the-art. Right now it's unclear what is state-of-the-art in general.\n… The only baseline is one with the simulator replaced by an RNN\n\nWe focus on comparing to end-to-end approaches with a controlled dataset, largely due to the computational challenges of training the differentiable simulator model. We have added a discussion of Advantages and Disadvantages that more explicitly makes the connection to recent end-to-end methods for predicting protein structure in terms of angles (AlQuraishi et al) and our baseline RNN. We focus on comparing to the RNN baseline because it shares the same loss and data augmentation strategies as our simulator, thus making clearer where differences in performance come from. 
While we do see that our differentiable simulator model can generalize more effectively to distant folds on our controlled dataset of ~35k proteins < 200AAs, the dataset splitting and significant cost of training (2 months on 2 GPUs) mean that it is difficult to evaluate the approach's performance on large proteins. Nevertheless, new models in machine learning with better inductive biases but greater computational demands often get their start via medium-sized controlled datasets.\n\n> There do not seem to be any running time analyses\n \nThanks for pointing this out. We have added a table of qualitative running times in the results section.\n \n> the experimental section poorly describes how all the pieces of the system affect the final predictions\n> it feels like a black box.\n \nWhile we could not afford to perform ablation studies of individual components given the long training time, we believe that the structured nature of a differentiable simulator can make it easier to interpret and engineer than purely neural architectures. For example, the Markov Random Field formulation of the energy function means the sequence and structure features can be interpreted separately, and both the efficacy of Langevin dynamics and the benefits of alternative coordinate parameterizations to sampling are well-understood phenomena.", "Thank you for your positive comments and constructive suggestions.\n\n> The paper is clearly written, however the description of the method can be confusing. …\nFig. 6 in the appendix helps, however it would be better to have a (perhaps more concise) overview in the main text.\n \nTo improve accessibility, we have simplified explanations of individual components and added overviews to the main text and Figure 6.\n \n> it would be interesting to see whether NEMO outperforms a baseline trained on profile features\n \nWhile it was not clear from the text, the original RNN was indeed trained on profiles, and we apologize for the confusion (please see our previous comment).\n \n> it would be interesting to see some generated atomic substructures from the imputation network, in particular an analysis of how diverse the generated atom positions are and whether they depend on the local environment.\n \nWe agree that this is an interesting question, though our current model will likely not give an interesting answer since the imputation network is a deterministic mapping (ignoring dropout). That said, all secondary structure calls in the visualizations come directly from hydrogen bonding calls in pymol (default settings), which suggests that the model can capture some aspects of (locally) orientation-dependent atom placement.", "We thank the reviewer for the positive comments and suggestions for how to improve the presentation.\n\n> It may be useful to add a sentence of how profiles have been found to improve secondary structure prediction greatly. Currently the text makes it sound as though they constitute a sort of 'data augmentation', whereas in my opinion they add information compared to the sequence alone.\n \nWe agree that evolutionary profiles add far more information than data augmentation, and have added an explicit point of comparison in the results section to draw the connection to SS prediction. We have also clarified the distinction between data augmentation and profiles in Appendix B.\n \n> lacks putting in context with results from the larger community, for example how well does the model compare in terms of speed and accuracy with co-evolutionary approaches? 
… Similarly, it would be useful to compare the baseline - at least qualitatively - with the results from AlQuraishi et al. whose model seems very similar in spirit.\n \nWe agree that the paper needs to provide better context in the landscape of methods for protein structure prediction and have tried to address this by adding an ‘Advantages and Disadvantages’ paragraph to the Results. Since scaling our method to larger datasets of proteins remains difficult with current computational resources, we focus primarily on comparing to other end-to-end approaches, of which an RNN-based angle prediction (of AlQuraishi et al) is the other major approach. Hopefully our updated text and figure legends can clarify this.\n \n> [Fig 2., Fig 4 second panel and caption]\n\nThank you for these suggestions. We have fixed these figure legends to be clearer.\n \n> does not explain how the empirical loss function and the log-likelihood terms are mixed together.\n \nThanks for pointing this out. Our loss involves a simple sum of all terms without weights, and we have added a sentence to clarify this. \n\n> I admit I had found the explanation of the test/train/validation split rather confusing. It is not clear what the validation set is used for, i.e. which hyper-parameters have been tuned on it etc. \n \nWe have improved the explanation of how we split the dataset hierarchically and temporally to capture different generalization difficulties. We did not explicitly tune hyperparameters on the validation set (in part due to the long training time, for which 200k iterations was what we could afford), but we did allow ourselves to look at the validation set during model development and thus refer to it as such.\n", "We thank the reviewer for the extensive comments as well as suggestions for improving the presentation and evaluation.\n\n> Figure 6 presents a scheme of the entire system, but it lacks details about the different modules, and it is not clear how they interact and how their training together is performed. \n\nWe apologize for the lack of clarity and have added a legend to this figure that walks through the complete sampling process (which is, in turn, backpropagated through).\n\n> The pseudo-code boxes describing Algorithms 1-4, and Table 2 describing the representation are informative and helpful, and more descriptions of this type would help.\n> what do 'CartesianStep' and 'ClippedInternalStep' mean? where are they described?\n \nWe are glad that these algorithm boxes are helpful, and while we have not made them for 'CartesianStep' and 'ClippedInternalStep', these computations refer to the Langevin Dynamics and Speed Clipping paragraphs in Appendix A.\n \n> I didn't see an Algorithm describing the atomic imputation part.\n \nThe atomic imputation was small and potentially easy to miss, but is defined in Section 2.4 ‘atomic imputation’. We have modified the formatting and added an overview paragraph to highlight its importance.\n \n> There is also no single place where all the parameters used by the authors\n\nWhile the complexity of the model makes presenting the hyperparameters in table form cumbersome, we intend to release the code, which includes hyperparameters as structured objects.\n\n> if and how constants multiplying them are chosen to give lower/higher weights to some of the losses etc\n \nRegarding the loss, we simply sum the individual loss terms and have not explored weighting (owing to the costly training time). We have clarified this in the text. 
\n\n> It is not clear to me how good these results are, except that they are shown to be better than a simple baseline model. How well does the author's model compare to other recently suggested end-to-end models? \n(the authors mention AlQuraishi, Anand&Huang, papers). How do they compare to state-of-the-art structure prediction programs? (e.g. CASP winners)?\nI realize giving an automatic end-to-end solution is interesting even if performance is below that of the best programs, but still it would be good to know the gaps.\nIf such comparisons are less meaningful/not practical to perform this should be argued convincingly. \n \nTo better contextualize our model, we have added an Advantages and Disadvantages discussion as well as an improved explanation of the baseline. The RNN baseline method is similar to the approach of AlQuraishi (though differing in the use of coarse-to-fine reconstruction as well as our loss terms). We focus on comparing to the baseline model because it uses the same loss and imputation network, thus isolating the differences to the simulator itself. Regarding CASP: Although our method was able to scale to training on a database of ~35k protein domains up to length 200 (on 2 GPUs & 2 months), this particular dataset excludes the longer proteins and more diverse templates that would be necessary to be relevant to CASP.\n \n> It would also be useful to add some metrics of running time\n \nWe have added a qualitative table of approximate running times for our methods as well as conventional protein folding approaches.\n \n> typos and inconsistent notations\n\nThank you for pointing these typos out, we believe they are now fixed.", "The paper proposes a new end-to-end training framework for computational prediction of protein structure from sequence. \nThis is a very important problem and any progress due to new data and/or methods for utilizing it may have high impact. \n\nThe paper presents several technical contributions in the modelling and training procedure - for example, automatic transformation between Cartesian and angular coordinates, using Langevin dynamics, and an imputation method to get fine atomic coordinates. \n\nThe overall breadth and depth of the methods presented in the paper are impressive. The paper describes a quite complicated system with multiple interacting modules. The paper doesn't describe the system in enough detail, although many of the details are given in the appendix. \nFigure 6 presents a scheme of the entire system, but it lacks details about the different modules, and it is not clear how they interact and how their training together is performed. \nThe pseudo-code boxes describing Algorithms 1-4, and Table 2 describing the representation are informative and helpful, and more descriptions of this type would help. \nFor example: - In Algorithm 3, what do 'CartesianStep' and 'ClippedInternalStep' mean? where are they described? (should have their own boxes/description). \n\t\t- I didn't see an Algorithm describing the atomic imputation part. \n\t\t- It would be good to add a high-level pseudo-code for the entire end-to-end training algorithm. In it there could be calls to Algorithms 1-4 when needed. \n\nThere is also no single place where all the parameters used by the authors to achieve their empirical results are presented \n(e.g. learning rates, Gaussian kernel widths, how the random time steps for enforcing the Lipschitz condition are chosen, etc.). \nIn addition, the empirical loss defined in eq. (8) is a sum of 6 different losses. 
It is not clear how these very different losses are scaled to the same 'units', which ones are more important, \nif and how constants multiplying them are chosen to give lower/higher weights to some of the losses etc. - I guess these choices will have a large effect on the training. \n\nThe authors present generalization results of their trained model in predicting 3D structures from CATH at different generalization levels\n(i.e. different similarity levels to the training set proteins). It is not clear to me how good these results are, except that they are shown \nto be better than a simple baseline model. How well does the author's model compare to other recently suggested end-to-end models? \n(the authors mention AlQuraishi, Anand&Huang, papers). How do they compare to state-of-the-art structure prediction programs? (e.g. CASP winners)? \nI realize giving an automatic end-to-end solution is interesting even if performance is below that of the best programs, but still it would be good to know the gaps.\nIf such comparisons are less meaningful/not practical to perform this should be argued convincingly. \nIt would also be useful to add some metrics of running time - it is not clear how computationally heavy and scalable the author's model and training are, compared to other methods. \n\nThere are many typos and inconsistent notations which make it harder for the reader to understand the paper. \nFor example, 'Figure ??' in multiple locations, wrong Figure referenced, using s vs. S for sequence - S is defined as an L*20 matrix but in the appendix there are\n3 indices: s_{i,l,j} and it looks like different sequences in the alignment should be denoted s_i. \nEquation for M_{l,j} isn't clear: j is used both as a fixed index and as a summation index. \nThe indexing in the 'orientation vectors' v-hat_ij definition seems off (the formula for the base vectors gives 0/0).\n", "Thank you for your review and positive words about the idea and approach. While we will respond in full later, we wanted to briefly clarify that all RNN models and NEMO results of Fig 4,5 were trained on profiles. The sentence \"We report the results of a sequence-only model in Table 1 and Figure 4\" is a figure-link typo and should instead read \"We also report the results of a sequence-only NEMO model in Table 1 and Figure 9.\" We apologize for the confusion and will make these points clearer upon revision.\n\nIn the meantime, we hope that this can clarify that our main claim about generalization is based on comparing profile-based NEMO to profile-based baselines.", "This paper presents an end-to-end differentiable model (NEMO) for protein structure prediction. I found this paper very interesting and the idea of training the network through the sampling procedure promising. The authors present the challenges and techniques (damping, Lyapunov regularization etc) in detail.\n\nThe paper is clearly written, however the description of the method can be confusing. This stems in part from the many components of the network as well as the fact that the protein is represented using various coordinate systems and features, so that it is not easy to follow which applies at each stage. Fig. 6 in the appendix helps, however it would be better to have a (perhaps more concise) overview in the main text.\n\nIn the evaluation, the NEMO method is compared to a baseline approach using RNNs. While NEMO trained on profile features performs best, the baseline is trained on sequences only. 
However, it outperforms the NEMO model trained on sequence-only in every category. Therefore, it would be interesting to see whether NEMO outperforms a baseline trained on profile features. Otherwise, I am not certain whether I can follow the conclusion that \"NEMO generalizes more effectively\". Beyond that, it would be interesting to see some generated atomic substructures from the imputation network, in particular an analysis of how diverse the generated atom positions are and whether they depend on the local environment.\n\nOverall, I appreciate the general idea and find the proposed approach very interesting. The contribution could have been stronger with a more detailed evaluation and better presentation.", "Thanks for catching the fraction (0-1) / percent (0-100) mislabeling, we will fix it.\n\nIt is correct that we split train and validation based on CATH 4.1 (hierarchical split) and tested on everything that was new to CATH 4.2 (temporal split), with the test set stratified into subsets of varying difficulty, from the very easy to the very difficult, and clearly labelled as such. While we currently stratify by CATH similarity between {test} and {train} (to evaluate more fold types), we can also include results that stratify by CATH similarity between {test} and {train+validation} (at the cost of reduced fold diversity). Since we subject all models and baselines to the same splits either way, all of these are interpretable measures of generalization.", "Thanks for the clarifications.\n\nMinor point, but that Fig. 4B x-axis is definitely incorrect. The scale goes from 0-1 and the axis label states that the units are % sequence ID (i.e. the maximum value is 100).\n\n\"Hierarchical purging\" like that is commonly practised in the comp. biol. community but it's unclear how that purging process has been extended to the _test_ set, which is based on CATH 4.2. I understand that you have split your _train_ and _validation_ sets according to a randomly selected subset of C, CA, CAT numbers found in CATH 4.1 - that's fine - I get that bit - but after training with that validation set, you cannot then have the same purged CAT numbers present in your test set because you will have fitted your model to your training set and then selected a model which does well on the domains in the validation set.\n\nIn reality you need to produce three splits of the CATH classification - train, validation and test. The test CAT codes should not have been used for either training or validation. Possibly this is exactly what you did, but it really could do with some clarification in the text.\n\nHowever, if that is what you did, then I don't understand how you can end up with the plot in Fig. 4B - because that would suggest that you have a lot of protein domains which have _different_ topologies (T in CATH) but which still have very high sequence similarity. That just can't be right. There are virtually no known examples of proteins with high sequence similarity which do not have the same topology. Even a sequence identity of just 30% is typically enough to guarantee that the two structures will have the same topology (TM-score > 0.5).\n\nFrom what you say, though, it sounds like your test set was simply all the new domains added to CATH 4.2, and that some happen to overlap with the training set domains and some don't. That would explain what I see in Figure 4B alright, but it would pretty much invalidate your final results as your test set would be contaminated. 
Surely you don't mean that?\n\nSorry to bang on, but if it wasn't such a potentially interesting paper I wouldn't care enough to ask!\n", "Thank you for your question about the training and test splits; we will do our best to clarify briefly here as well as update the manuscript at the next opportunity.\n\nWe very much agree that careful analysis of test and train overlap is one of the key issues when interpreting the results on protein structure prediction and we created our dataset in an attempt to frame this problem in terms of widely used (ie CATH) fold classifications.\n\nTo clarify we *did not* train on all of CATH 4.1 but rather intentionally hierarchically purged the training set at multiple levels of A, T and H. So, after collecting all folds in 4.1 (subject to max length 200 and class 1,2,3), we then randomly selected a subset of A classifications and purged all folds descending from these into the A-level validation set. We repeated this process two more times for T and H. While the specific domain-level splits are contained in a large table not feasible for attaching to an openreview submission, we intend to make these available and will try to summarize the high level held out classifications at next update to the manuscript.\n\nRegarding the middle panel of Figure 4, the x axis is correct and our test set from CATH 4.2 does (naturally) contain some folds of very high (sometimes complete) sequence overlap. However, because we purged the training set (as just described above), the test set also contains many sequences of low/random sequence overlap (left cluster in middle panel) and low classification overlap. The color coding on this scatter plot indicates how close the given test domain is to the training set, where blue means that the ATH classifications were not present in train (column C in table 1), magenta means TH were not present in train (column A), orange means H was not present in train (column T) and gray (column H) means that the full CATH classification *was* present in train. The showreel plot contains only folds from the A and T columns (magenta and orange). \n\nWe hope that hierarchical purging approaches to evaluation such as this will be more widely used in the future because they allow testing fold generalization more systematically across thousands of domains (rather than only doing temporal purging).\n\nTo conclude, our main claim is that the model is able to (sometimes) produce reasonable (TM>0.5) predictions for these difficult ATH, TH, and H problems created by our purging process and that it does so more accurately than a baseline that predicts angles directly without a folding process.", "I've really enjoyed reading your paper, but I'm left very confused about the testing procedures and the quality of the results obtained. The key issue when reviewing papers on protein structure prediction is whether or not there is any overlap between the training/validation data sets and testing data. Here it seems that you have tried to be very stringent about this by splitting your dataset of domains along CATH boundaries - so two domains with different CATH codes should in theory be unrelated and thus have no detectable sequence similarity. Likewise two proteins with different CAT codes should have different folds, CA different architectures and C having different classes. 
Having trained and validated on the domains in CATH release 4.1 you then tested on domains in CATH release 4.2, but only those that are unrelated to the training set (different CATH numbers) or have different folds (different CAT numbers). Assuming that was done without bugs, I find it hard to understand the middle panel of Figure 4. Unless I’m misunderstanding what’s going on there (the x-axis scale is wrong by the way), it seems to suggest that a large proportion of your testing set is actually quite similar to your training set. In some cases identical (100% identical protein sequence). Does that not indicate that there is sequence overlap between training and test data? Of course it’s impossible to know whether the overlap is with the validation set rather than the training set, but that would still be problematic.\n\nIf those highly similar sequences were included in the statistics shown in Table 1, for example, it would make the results there very difficult to interpret.\n\nLooking at some of the specific examples of folded domains shown, it would have been useful to know what the sequence similarity is between the target and the most similar protein domain in the training/validation set. For example, I note that domain 2oy8A03 shares 100% sequence identity with domain 3ckgA02 (one is simply a deletion mutant relative to the other), which was already present in CATH release 4.1 and so must have been either in the training set or at least the validation set. If this is true, then the network has simply recapitulated what it has already seen and hasn’t actually predicted anything. Other examples shown have similar issues e.g. 4ykaC00, which is identical to 2wa9B00 in CATH 4.1 and indeed has the exact same CATH code, and so I don’t think it should be included in the test set at all. Probably I have just misunderstood the exact way you’ve effected your training/test split, so I’d welcome any clarification you can give.\n" ]
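The reviews and responses above repeatedly discuss unrolling Langevin dynamics through a learned energy function and backpropagating through the rollout. As a point of reference for readers, here is a minimal, illustrative PyTorch sketch of that general pattern. It is not the authors' NEMO implementation: names such as `energy_net`, `n_steps`, and `step_size` are assumptions, and details discussed above (speed clipping, the internal-coordinate transform integrator, damping) are omitted.

```python
import torch
import torch.nn as nn

def unrolled_langevin(energy_net: nn.Module, x0: torch.Tensor,
                      n_steps: int = 50, step_size: float = 1e-2) -> torch.Tensor:
    """Differentiable Langevin rollout: x <- x - (eps/2) dE/dx + sqrt(eps) * noise."""
    # Make the state differentiable so the energy gradient dE/dx exists.
    x = x0.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_net(x).sum()
        # create_graph=True keeps each step on the autograd tape, so a loss
        # on the final state backpropagates through the whole simulation
        # into the parameters of energy_net.
        grad, = torch.autograd.grad(energy, x, create_graph=True)
        x = x - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(x)
    return x

# Usage sketch: fit the final simulated state to a target structure.
energy_net = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 1))
x0, target = torch.randn(4, 8), torch.randn(4, 8)
loss = ((unrolled_langevin(energy_net, x0) - target) ** 2).mean()
loss.backward()  # gradients flow through the unrolled simulation
```

This toy version also makes the exploding-gradient issue raised in the reviews concrete: with many steps, the repeated `create_graph=True` gradients compound, which is what motivates the stabilization tricks the discussion refers to.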
[ 6, 7, -1, -1, -1, -1, -1, -1, 6, -1, 7, -1, -1, -1, -1 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, 5, -1, 3, -1, -1, -1, -1 ]
[ "iclr_2019_Byg3y3C9Km", "iclr_2019_Byg3y3C9Km", "r1etop_q0X", "SkxK8JG507", "SJlUZjqEim", "Hkgk3b343m", "B1lRoQMLTX", "Skx-IPBvTQ", "iclr_2019_Byg3y3C9Km", "Hkgk3b343m", "iclr_2019_Byg3y3C9Km", "ByeS8UUb5X", "Ske1rVBZcX", "B1eOXxVb9X", "iclr_2019_Byg3y3C9Km" ]
iclr_2019_Bygh9j09KX
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on 'Stylized-ImageNet', a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.
accepted-oral-papers
This paper proposes a hypothesis about the kinds of visual information for which popular neural networks are most selective. It then proposes a series of empirical experiments on synthetically modified training sets to test this and related hypotheses. The main conclusions of the paper are contained in the title, and the presentation was consistently rated as very clear. As such, it is both interesting to a relatively wide audience and accessible. Although the paper is comparatively limited in theoretical or algorithmic contribution, the empirical results and experimental design are of sufficient quality to inform design choices of future neural networks, and to better understand the reasons for their current behavior. The reviewers were unanimous in their appreciation of the contributions, and all recommended that the paper be accepted.
train
[ "BklS3JIeR7", "r1lJ3Mke0X", "S1eErXNKC7", "rJlAnqlH2X", "HylJK7nYAm", "HJxSI4x527", "BkeaMFAK3m", "r1xmGpubRX", "HygXM-Yb07", "SygjL7t-0m", "S1g75tNxTQ", "B1x2vGlKhm", "BJghHSvNh7", "Bkea9yCbiQ" ]
[ "author", "public", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "public" ]
[ "Thanks for your interest in our work!\n\nAs mentioned in the paper, we used the PyTorch implementation from [1]. The degree of stylization (parameter \"alpha\" in the implementation) was kept at the default value of 1.0; it might be interesting to explore whether a lower coefficient still nudges a model towards a shape bias.\n\nWe used training with one stylization per image since this allows to pre-process ImageNet once rather than on-the-fly, which is desirable for faster training. In principle, our approach enables up to 79,434 different stylizations of a single image (this is the size of our style dataset), and we would expect that using more stylizations leads to even better results in terms of both SIN and IN accuracy.\n\nOnce the anonymous review period ends, we will release all of our trained model weights to facilitate comparisons to other models and other data sets like the one mentioned in your comment.\n\n[1] https://github.com/naoto0804/pytorch-AdaIN", "When constructing Stylized-ImageNet, does content-style trade-off coefficient for AdaIN vary? If not, what is the coefficient? Does training with several stylizations per image work better than just one stylization per image?\nAlso could the authors show that networks trained with SIN generalize to the set of corruptions from https://openreview.net/forum?id=HJz6tiCqYm so that we know that the distortions are not cherry-picked?", "We would like to thank all reviewers for their valuable feedback and we very much appreciate their assessment of our work as \"surprising\" & \"very cool\" (R1), \"inspiring\" (R2) and \"well-written\" (R3).\n\nThis is a summary of main concerns and how we addressed them.\n\n- additional control experiments (R2, R3): We have conducted the requested control experiments (texture bias in wider & deeper networks and different training dataset). Results consistently support original findings.\n\n- rephrasing some claims (R1, R2): we provided a listing of changed statements in the detailed rebuttals.\n\n- novelty (R3): we believe this is due to a misunderstanding; we have written a clarification and stated our contributions more clearly.\n\n- further clarifications (R1), release of dataset (R2), plotting improvements (R3), etc.: we have addressed all of them and updated the paper accordingly.\n\nR1, R2 and R3 have already taken the time to assess our changes and indicated being happy with the updated version.", "Review of ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. \n\nIn this submission, the authors provide evidence through clever image manipulations and psychophysical experiments that CNNs image recognition is strongly influenced by texture identification as opposed the global object shape (as opposed to humans). The authors attempt to address this problem by using image stylization to augment the training data. The resulting networks appear much more aligned with human judgements and less biased towards image textures.\n\nIf the authors address my major concerns, I would increasing my rating 1-2 points.\n\nMajor Comments:\n\nThe results of this paper are quite compelling and address some underlying challenges in the literature on how CNN's function. I particularly appreciated Figure 5 demonstrating how the resulting stylized-augmented networks more closely align with human judgements. 
Additionally, it is surprising to me how poorly BagNet performs on Stylized-ImageNet (SIN), implying that ResNet-50 trained on Stylized-ImageNet may be better perceptually aligned with global object structure. Very cool.\n\n1. Please make sure to tone down the claims in your manuscript. Although I share enthusiasm for your results, please recognize that stating that your results are 'conclusive' is premature and not appropriate. (Conclusive evidence requires more papers and much work by the larger scientific community before a hypothesis becomes readily accepted.) Some sentences of concern include:\n\n --> \"These experiments provide conclusive behavioural evidence in favour of the texture hypothesis\"\n --> \"we conclude the following: Textures, not object shapes, are the most important cues for CNN object recognition.\"\n\nI would prefer to see language such as \"We provide evidence that textures provide a more powerful statistical signal than global object shape for CNNs.\" or \"We provide evidence that CNNs are overly sensitive to textures in comparison to humans' perceptual judgements\". This would be more measured and better reflect what has been accomplished in this study. Please do a thorough read of the rest of your manuscript and identify other text accordingly.\n\n2. Domain shifts and data augmentation. I agree with your comment that domain shifts present the largest confound to Figure 2. The results of Geirhos et al, 2018 (Figure 4) indicate that individual image augmentations/distortions do not generalize well. Given these results, I would like to understand what image distortions were used in training each and all of your networks. Did you try a baseline with no image distortions (and/or just Stylized-ImageNet)?\n\nAlthough the robustness results in Figure 6 are great, how much of this can be attributed solely to Stylized-ImageNet versus the other types of image distortions/augmentations in each network? For instance, would contrast-insensitivity in Stylized-ImageNet diminish substantially if no contrast image distortion were used during training?\n\n3. Semantics of 'object shape'. I suspect that others in the field of computer vision may take issue with your definition of 'object shape'. Please provide a crisp definition of what you test for as 'object shape' in each of your experiments (i.e. \"the convex outline of object segmentation\", etc.).\n\nMinor Comments:\n\n- Writing style in introduction. Rather than quoting phrases from individual papers, I would rather see you summarize their ideas in your own language and cite accordingly. This would demonstrate your regard for their ideas and how these ideas fit together.\n\n- Figure 2. Are people forced to select a choice or could they select 'I don't know'? Did you monitor response times to see if the manipulated images required longer times for individuals to reach decisions? I would expect that for some of the image manipulations humans would have less confidence about their choices and that to be reflected in this study above and beyond an accuracy score.\n\n- In your human studies, please provide some discussion about how you monitored performance to guard against human fatigue or lack of interest.\n\n- Why did you use AdaIN instead of the original Gatys et al optimization method for image stylization? Was there some requirement/need for fast image stylization?\n\n- Do you have any comment on the large variations in the results across class labels in Figure 4? 
Are there any easy explanations for this variation across class labels?\n\n- Please use names of Shape-ResNet, etc. in Table 2.\n\n- Are Pascal-VOC mAP results with fixed image features or did you fine-tune (back-propagate the errors to update the image features) during training? The latter would be particularly interesting as this would indicate that the resulting network features are better generic features as opposed to having used better data augmentation techniques.\n\n- A.2. \"not not used in the experiment\" --> \"not used in the experiment\"\n", "I have read all of the other reviewer comments as well as the author responses to my original comments. The authors have addressed all of my primary concerns and clarified several issues that were not clear in the original manuscript. The additional results provided during the rebuttal (as requested by the other reviewers) provide even stronger evidence in favor of the central result. Based on this rebuttal and updates to the manuscript, I am upgrading my score from 6 to 8 as I think this will be an important piece of evidence in the design and analysis of deep network architectures for vision.", "The paper is well written and easy to follow. It was a nice read for me.\n\nThe paper studies CNNs like AlexNet, VGG, GoogleNet, ResNet50 and shows that these models are heavily biased towards texture when trained on ImageNet. The paper shows human evaluations and compares model accuracies when various transformations (related to the cue-conflict and texture hypotheses, terms coined in the paper) are applied to study texture vs shape importance. The paper shows various results on different models clearly and the results are easily interpretable. The paper then proposes a new ImageNet dataset, called Stylized-ImageNet (SIN), where the texture is replaced with a randomly selected painting style.\n\nI believe that this is a good empirical study which is needed to understand why the ImageNet features are good (supervised training) and this can inform research in self-supervision and few-shot learning domains.\n\nThe paper is an empirical paper and presents a quantitative study of the role of texture which others have already presented, like Gatys et al. 2017. The paper itself has no novel contributions. The paper notes \"novel Stylized-ImageNet dataset\" and shows that models can learn both shape and texture features, but there is not much detail/explanation on why \"Stylized\" is the novel approach, and the methodology of constructing data by replacing textures with paintings via AdaIN style transfer (Huang & Belongie, 2017) is not discussed/explored. More specifically, there is no ablation on other ways this dataset could have been constructed, why style transfer was picked as the choice, and why AdaIN was chosen. While the choice is valid, I think these questions need to be answered if we have to consider it \"novel\". Additionally, I would like answers to the following questions:\n\n1. In Figure 4, ResNet50 results are missing. I would be very interested in seeing those results. Can the authors show those results?\n2. Did the authors study deeper networks like RN101/152 and do the observations about texture still hold?\n3. Did the authors consider inspecting if the models have the same texture biases when trained on other datasets like COCO? If yes, can you share your results?\n4. In Figure 5, can the authors also show the results of training VGG, AlexNet, GoogleNet models on the SIN dataset? I believe otherwise the results are incomplete since Fig. 
4 shows the biases of these models on the IN dataset but doesn't show if these biases are removed by training on SIN.\n5. In Section 3.3, Transfer learning, the authors show an improvement on VOC 2007 with Faster R-CNN. Do the authors have an explanation for why this gain happens? How is the texture learning in the pretext task (like image classification training on the SIN dataset) tied to transfer learning on a different dataset?\n6. What are the results of transfer learning on other datasets like COCO with Faster R-CNN?", "This paper talks about the behavioural bias between humans and advanced CNN classifiers when classifying objects. A clear conclusion is that DNN classifiers lean on texture cues more than humans do, in contrast to human behavioural evidence. The experimental results are enlightening and convincing to some extent. This paper is also inspiring and potentially useful for interpreting how CNNs work in object classification tasks. \n\nNevertheless, I have several small issues: \n-\tI like the writing of this paper, fluent description and clear topic. Besides, it provides sufficient information about experiment details, thus I think the experiments are fully reproducible. But I want to remind the authors to downplay their claims. Some sentences lack academic rigor, e.g. “Textures, not object shapes, are the most important cues for CNN object recognition.” I don’t think it is a good idea to claim textures as the “most important” cue.\n-\tAlthough adequate experiments are conducted on ResNet-50 on ImageNet, I miss experiments on a different object classification dataset, e.g. PASCAL VOC, and a different network backbone such as the very deep ResNet-152 or the wider DenseNet. This stems from the concern that a different (deeper or wider) framework may behave quite differently and also that the slightly shifted data distribution may induce conflicting results. The adopted networks ResNet-50, AlexNet, VGG-16 and GoogLeNet are not as deep, or as wide, as DenseNet. Although a transfer learning experiment is carried out on PASCAL VOC, it is indirect and not entirely telling. We’re curious about universal conclusions rather than ones based on a single dataset or a single category of network architecture. As a matter of fact, I’m nearly convinced by the provided results. But I think the requested experiments would make the conclusions more solid. \n\nBesides, I think the constructed dataset is beneficial to further research and fair comparison of future works, and I wonder whether the authors intend to publish such a dataset in the future. \n\nI would raise my scores if the aforementioned problems are convincingly checked and solved.", "Dear Reviewer 3,\n\nThank you for your review and feedback. We appreciate your assessment of our work as a \"good empirical study\" and \"well-written\" paper. We provide a point-by-point response to individual concerns below.\n\n1. ResNet-50 results not included in Figure 4.\nThank you for pointing this out. Originally, results for ResNet-50 were included in Figure 5 but not in Figure 4. We have now added them to Figure 4 as well (and likewise, to Figure 2).\n\n2. Does the texture bias exist in deeper networks like ResNet-152?\nWe have addressed this interesting question by investigating the texture bias in three additional networks: a very deep network (ResNet-152), a very wide network (DenseNet-121) and a highly compressed network (SqueezeNet1_1). The results are reported in the Appendix, Figure 13 (right). All three networks show a strong texture bias.\n\n3. 
Texture bias when trained on a different data set.\nWe appreciate this suggestion and validated our results on a different training data set. We have investigated the texture bias of a ResNet-101 architecture trained on a different data set, namely Open Images. In Figure 13 of the Appendix, the left plot shows that the texture bias is equally prominent (if not stronger) in a network trained on the Open Images classification data set, thus the texture bias is not specific to training on ImageNet.\n\n4. Training other models on Stylized-ImageNet.\nWe have additionally trained VGG-16 and AlexNet on Stylized-ImageNet (SIN) as suggested. The results are available in the Appendix, Figure 11 and linked from the caption of Figure 5 (in which we only show results for one network, ResNet-50, in order to avoid a cluttered plot). Training on SIN does remove the texture bias in these other networks just like it does remove it for ResNet-50.\n\n5. Explanation for improved object detection performance.\nWe have added the following sentence to the \"transfer learning\" section: \"This [the improved detection performance] is in line with the intuition that for object detection, a shape-based representation is more beneficial than a texture-based representation, since the ground truth rectangles encompassing an object are by design aligned with global object shape.\"\n\n6. Transfer learning on different object detection data set.\nWe believe that validating our transfer learning results on a different data set (MS COCO) is an important suggestion. We are working towards including these results as an additional column in Table 2. We will not be able to provide results by the end of the rebuttal period but we will make sure to include them once completed. We have inquired with the ICLR organizers who have assured us it will be possible to make minor changes after the rebuttal period.\n\nNovelty.\nWe have written a fast response to this point (\"Clarification concerning novelty\": https://openreview.net/forum?id=Bygh9j09KX&noteId=S1g75tNxTQ ) in the hope that this clarifies the issue. We have identified the following sentences that were misleading in our original submission, and we have changed them as described below to avoid any misunderstandings by future readers who might get the idea that the data set itself is being described as our core novel contribution:\na) In the abstract, we replaced \"our novel Stylized-ImageNet dataset\" with \"Stylized-ImageNet, a stylized version of ImageNet\".\nb) In the Introduction, we replaced \"Utilising style transfer (Gatys 2015), we devised a novel way to create images with a texture-shape cue conflict\" with \"Utilising style transfer (Gatys 2015), we created images with a texture-shape cue conflict\".\nc) We adapted the last paragraph of our Introduction such that our contributions, as we see them, are explicitly mentioned (\"Beyond quantifying existing biases, we subsequently present results for our two other main contributions: changing biases, and discovering emergent benefits of changed biases.\").\nd) and e) In Section 3.2 and in the Summary, we replaced \"a novel Stylized-ImageNet (SIN) data set\" with \"our Stylized-ImageNet (SIN) data set\".\n\nAgain, thank you very much for reviewing our paper and for your valuable suggestions!", "Dear Reviewer 2,\n\nThank you very much for your valuable feedback. We appreciate your assessment of our work as an \"inspiring\" paper. 
We provide a point-by-point response to your three suggestions below.\n\nWriting: claims.\nWe have identified a number of sentences where our excitement about the results has biased our writing. We made the following changes:\n1.) \"the texture hypothesis: object textures, not object shapes as commonly assumed, are the most important cues for CNN object recognition.\" changed to \"the texture hypothesis: in contrast to the common assumption, object textures are more important than global object shapes for CNN object recognition\".\n2.) \"conclusive behavioural evidence\" replaced with \"behavioural evidence\".\n3.) In the Discussion, \"we found strong evidence\" replaced with \"we provide evidence\".\n4.) In the Discussion, \"we conclude the following: Textures, not object shapes, are the most important cues for CNN object recognition\" replaced with \"this highlights the special role that local cues such as textures seem to play in CNN object recognition\".\n5.) In the Summary, \"we showed that machine recognition today primarily relies on texture rather than shape cues\" replaced with \"we provided evidence that machine recognition today overly relies on object textures rather than global object shapes as commonly assumed\".\n\nControl experiments with different networks and data sets.\nYou were expressing concern that deeper or wider networks and training on a different object classification data set may lead to different results. To address your concerns we have collected results for the requested control experiments. In Figure 13 of the Appendix, the left plot shows that the texture bias is equally prominent (if not stronger) in a network trained on the Open Images classification data set, thus the texture bias is not specific to training on ImageNet. The right plot of Figure 13 shows that ImageNet-trained networks ResNet-152 (a very deep network), DenseNet-121 (a very wide network) and for comparison also Squeezenet1_1 (a highly compressed network) all have a strong texture bias. Method details are reported in Section A.5 of the Appendix. Furthermore, we have started to train a ResNet-152 architecture on Stylized-ImageNet to use it as an additional \"deep\" backbone for our object detection experiments. Since network training and fine-tuning takes a lot of time, we will not be able to provide results by the end of the rebuttal period but we will make sure to include them afterwards. We have inquired with the ICLR organizers who have assured us it will be possible to make minor changes after the rebuttal period.\n\nRelease of data sets.\nWe are determined to release our cue conflict images (such as the cat with elephant skin) along with our raw data, image manipulation code, data analysis scripts, psychophysical experiment code and links to trained model weights in a github repository at the end of the anonymous review process. Furthermore, we will release code to create Stylized-ImageNet in a separate github repository along with a docker image; given two directory paths to ImageNet images (available from the ImageNet website [1]) and to the paintings used as a style source (available from Kaggle's painter-by-numbers website [2]) a shell script then creates Stylized-ImageNet.\n\nAgain, thank you very much for reviewing our paper and for your valuable suggestions!\n\n[1] http://www.image-net.org/\n[2] https://www.kaggle.com/c/painter-by-numbers/data\n\n", "Dear Reviewer 1,\n\nThank you for reviewing our paper, and for your helpful suggestions. 
We appreciate your assessment of our results as \"very cool\" and \"surprising\". We are happy to address your detailed suggestions and questions in a point-by-point response below.\n\nWriting: claims.\nWe have addressed this concern, which was shared by Reviewer 2:\nhttps://openreview.net/forum?id=Bygh9j09KX&noteId=HygXM-Yb07\n\nRobustness and data augmentation.\nWe made sure not to include any image distortions in the training data for any of our networks. Both models displayed in Figure 6 (ResNet-50 trained on ImageNet and ResNet-50 trained on Stylized-ImageNet) were trained under identical circumstances with respect to data augmentation (none apart from random resizing and flipping), hyperparameter settings, number of epochs, etc. Hence, any changes in the distortion robustness between these two models can be attributed solely to the changed training data (inducing different biases). We made this more clear in the Introduction as well as in Section 3.3 and the Discussion by writing that the SIN-trained network is more robust \"despite never being trained on any of the distortions\".\n\nSemantics of object shape.\nWe have added the requested definitions in Section 2.2. We define \"silhouette\" as the bounding contour of an object in 2D (i.e., the outline of object segmentation). When mentioning \"object shape\", we use a definition that is broader than just the silhouette of an object: we refer to the set of contours that describe the 3D form of an object, i.e. including those contours that are not part of the silhouette.\n\nQuotes in Introduction.\nIn general we agree that it is preferable to summarise ideas in one's own language. In this particular case, quotes are used with the aim to convince the reader that the \"shape hypothesis\" is not merely a straw man.\n\nResponse times / forced choice.\nIf human observers are allowed to select \"I don't know\", comparing results across participants with different confidence thresholds becomes very difficult. We therefore followed a standard psychophysical paradigm, namely an identification task with \"forced choice\" in the sense that they had to select a category even if unsure. However, observer confidence is typically correlated with reaction times (rapid responses for confident decisions), and we have thus added median reaction times across experiments as a column in Table 3 of the Appendix. This indeed shows that reaction times are longer for experiments with manipulated images.\n\nGuard against human fatigue / lack of interest.\nWe have added the following explanation to Section A.4 of the Appendix:\n\"Overall, we took the following steps to prevent low quality human data: 1., using a controlled lab environment instead of an online crowdsourcing platform; 2. the payment motivation scheme as explained above [i.e., better payment for better performance in experiments with a unique ground truth category]; 3. displaying observer performance on the screen at regular intervals during the practice session; and 4. splitting longer experiments into five blocks, where participants could take a break in between blocks.\"\n\nReason for fast image stylization.\nWe have added the following explanation to Section 2.3: \"We used AdaIN fast style transfer rather than iterative stylization (e.g. Gatys et al., 2016) for two reasons: Firstly, to ensure that training on SIN and testing on cue conflict stimuli is done using different stylization techniques, such that the results do not rely on a single stylization method. 
Secondly, to enable processing entire ImageNet, which would take prohibitively long with an iterative approach.\"\n\nExplanation for large variations in results across labels.\nOne possible explanation for the large variation in CNN results across labels may be that they use different strategies for different categories, i.e. that they sometimes rely solely on the texture (e.g. for category bear), and sometimes more on other cues. This is supported by the fact that there is a negative Spearman correlation between accuracy in our edge experiment, and texture bias in our cue conflict experiment (AlexNet: -0.582, GoogLeNet: -0.508, VGG-16: -0.238, ResNet-50: -0.014, human observers: -0.621): if a certain category seems hard to recognize from edges and contours, most networks are more likely to show a stronger class-conditional texture bias.\n\nFine-tuning image features for Pascal VOC.\nWe followed standard best practices for using image features in an object detection setting, which includes fine-tuning the image features. Importantly, this was done for all networks equally (which are trained on ImageNet and Stylized-ImageNet respectively), and the networks were trained under identical circumstances w.r.t. data augmentation. Thus, improved object detection performance can be attributed directly to better generic features induced by Stylized-ImageNet.\n\nAgain, thank you very much for your review and suggestions!", "Dear Reviewer 3,\n\nThank you very much for reviewing our paper. Please allow us a quick and important clarification regarding your most important criticism (\"The paper itself has no novel contributions\") as we believe this may be due to a misunderstanding:\n\nFirst, we fully agree with you that one of the datasets we created, Stylized-ImageNet, is in itself not a major contribution - we use an existing fast style transfer method to strip ImageNet images of their original texture to replace it with the uninformative texture of a painting. Stylized-ImageNet is, for us, merely a means to an end, enabling us to make three (novel) core contributions:\n\n1. Quantifying existing texture vs. shape biases.\nMany of the most influential explanations of CNN object recognition [1-3] describe it as a process of recognizing parts of objects / object shapes (the shape hypothesis). We contrast this with our carefully collected evidence for the texture hypothesis, offering an entirely different explanation. Furthermore, [e.g. 2,4-5] argue that CNNs closely mirror human object recognition and human shape perception. We here provide insights into a core difference of human and machine vision by comparing both under fair circumstances. To the best of our knowledge, our work is the first to systematically pitch shape against texture cues to investigate CNN biases and compare them to the human visual system.\n\n2. Overcoming the texture bias in CNNs.\nBased on our texture hypothesis, we hypothesized that a CNN texture bias might be changed towards a shape bias if trained on a suitable dataset. We demonstrate the effectiveness of this approach, which shows that the texture bias in standard CNNs is not an inherent property of the architecture but rather induced by the training data. To the best of our knowledge, this is a novel finding and has never been attempted before. (The method of creating a suitable dataset, AdaIN style transfer, is not new.)\n\n3. 
Showing emergent benefits of changed CNN biases.\nWe demonstrate substantial advantages of a shape-based over a texture-based representation in CNNs, most importantly better features for transfer learning (object detection) and a previously unmatched robustness against a number of image distortions - despite never being trained on any of them. To the best of our knowledge, ShapeResNet is the first network to approach human-level distortion robustness on distortions that were not part of the training data.\n\nWe believe that describing Stylized-ImageNet as \"novel\" - which we did in our original manuscript, e.g. in our abstract - was misleading since it is, as mentioned above and pointed out by you in your review, not a substantial contribution and, more importantly, merely a means to achieve our main and truly novel contributions. We will thus rephrase our description of Stylized-ImageNet throughout the paper to avoid any misunderstandings by future readers who might get the idea that the dataset itself is being described as our core novel contribution. Apologies for any confusion this issue may have caused in the original submission.\n\nWe would appreciate it if you could let us know whether this clarifies the issue, and changes your assessment of the novelty of our work.\n\n[1] Goodfellow, I., Bengio, Y., Courville, A., & Bengio, Y. (2016). Deep learning (Vol. 1). Cambridge: MIT press.\n[2] Kriegeskorte, N. (2015). Deep neural networks: a new framework for modeling biological vision and brain information processing. Annual review of vision science, 1, 417-446.\n[3] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436.\n[4] Kubilius, J., Bracci, S., & de Beeck, H. P. O. (2016). Deep neural networks as a computational model for human shape sensitivity. PLoS computational biology, 12(4), e1004896.\n[5] Cadieu, C. F., Hong, H., Yamins, D. L., Pinto, N., Ardila, D., Solomon, E. A., ... & DiCarlo, J. J. (2014). Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS computational biology, 10(12), e1003963.", "Thanks for the reply. ", "Thank you for your interest in our work, and for your comment!\nWe are happy to clarify the questions, and will make these points more clear in the next version of our paper.\n\n1) All networks were trained on full ImageNet (or full Stylized-ImageNet); they thus recognize 1,000 classes. Concerning the mapping from 1,000 classes to 16 classes, we followed the procedure introduced in [1] (the paper where 16-class-ImageNet was proposed). In order to achieve a fair comparison to the forced-choice paradigm for human observers (who were given a choice of 16 categories on the lab response screen), only those ImageNet categories corresponding to one of the 16 entry-level categories were considered for the network response. The mapping between ImageNet and 16-class-ImageNet categories was achieved via the WordNet hierarchy [2] - e.g. ImageNet category \"tabby cat\" would be mapped to \"cat\".\n\n2) We used the 16-class-ImageNet categories introduced in [1]. These are the 16 entry-level categories from MS COCO that have the highest number of ImageNet classes mapped to them via the WordNet hierarchy.\n\n[1] Geirhos, Temme, Rauber, Schütt, Bethge, Wichmann: Generalisation in humans and deep neural networks, NIPS 2018, https://arxiv.org/pdf/1808.08750.pdf\n[2] George A Miller. Wordnet: a lexical database for English. 
Communications of the ACM, 38(11):39–41, 1995.", "Great idea to scrutinize the shape hypothesis. \n\nHowever, as this work is breaking some strong assumptions of the community, the following points were not clear: \n\n1) Were the networks trained to recognize only the 16 classes or the typical 1,000 of ImageNet? If the latter, how was a random prediction restricted to the 16 classes? \n2) How were these categories selected? They have both distinct appearance and texture in most cases. \n\nOnce again, very promising work from the authors; looking forward to the code and implementations.\n" ]
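The 1,000-to-16 class mapping described in the exchange above is simple enough to sketch. The Python fragment below is our own minimal illustration, not the authors' released code: the IMAGENET_TO_16 dictionary is a hypothetical placeholder for the full WordNet-derived mapping, and whether per-category probability mass is summed or averaged is our assumption.

import numpy as np

# Hypothetical mapping from ImageNet class indices to entry-level
# categories (e.g. index 281 "tabby cat" -> "cat"); in the real mapping,
# classes with no entry-level parent in WordNet are simply absent.
IMAGENET_TO_16 = {281: "cat", 282: "cat", 294: "bear"}  # placeholder

def sixteen_class_decision(logits_1000):
    # Softmax over the full 1,000-way output.
    probs = np.exp(logits_1000 - logits_1000.max())
    probs /= probs.sum()
    # Only classes that map to one of the 16 categories are considered,
    # mirroring the forced-choice setup used for the human observers.
    scores = {}
    for idx, category in IMAGENET_TO_16.items():
        scores[category] = scores.get(category, 0.0) + probs[idx]
    return max(scores, key=scores.get)

Called on a length-1000 logit vector, sixteen_class_decision returns the entry-level category holding the most of the restricted probability mass.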
[ -1, -1, -1, 8, -1, 7, 8, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "r1lJ3Mke0X", "iclr_2019_Bygh9j09KX", "iclr_2019_Bygh9j09KX", "iclr_2019_Bygh9j09KX", "SygjL7t-0m", "iclr_2019_Bygh9j09KX", "iclr_2019_Bygh9j09KX", "HJxSI4x527", "BkeaMFAK3m", "rJlAnqlH2X", "HJxSI4x527", "BJghHSvNh7", "Bkea9yCbiQ", "iclr_2019_Bygh9j09KX" ]
iclr_2019_H1xSNiRcF7
Smoothing the Geometry of Probabilistic Box Embeddings
There is growing interest in geometrically-inspired embeddings for learning hierarchies, partial orders, and lattice structures, with natural applications to transitive relational data such as entailment graphs. Recent work has extended these ideas beyond deterministic hierarchies to probabilistically calibrated models, which enable learning from uncertain supervision and inferring soft-inclusions among concepts, while maintaining the geometric inductive bias of hierarchical embedding models. We build on the Box Lattice model of Vilnis et al. (2018), which showed promising results in modeling soft-inclusions through an overlapping hierarchy of sets, parameterized as high-dimensional hyperrectangles (boxes). However, the hard edges of the boxes present difficulties for standard gradient based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile. In this work, we present a novel hierarchical embedding model, inspired by a relaxation of box embeddings into parameterized density functions using Gaussian convolutions over the boxes. Our approach provides an alternative surrogate to the original lattice measure that improves the robustness of optimization in the disjoint case, while also preserving the desirable properties with respect to the original lattice. We demonstrate increased or matching performance on WordNet hypernymy prediction, Flickr caption entailment, and a MovieLens-based market basket dataset. We show especially marked improvements in the case of sparse data, where many conditional probabilities should be low, and thus boxes should be nearly disjoint.
accepted-oral-papers
The manuscript presents a promising new algorithm for learning geometrically-inspired embeddings of hierarchies, partial orders, and lattice structures. The manuscript builds on the box lattice model, extending prior work by relaxing the box embeddings via Gaussian convolutions. This is shown to be particularly effective for non-overlapping boxes, where the previous method fails. The primary weakness identified by reviewers was the writing, which was thought to be lacking some context, and may be difficult to approach for the non-domain expert. This can be improved by including an additional general introduction. Otherwise, the manuscript was well written. Overall, reviewers and AC agree that the general problem statement is timely and interesting, and well executed. In our opinion, this paper is a clear accept.
val
[ "H1xPEJOVsm", "Bye2jf25Rm", "rJeo2-39Rm", "SylyqW25Cm", "BkejJghqAQ", "rJglCnOshm", "SylZ79N5hm", "HylK0EWqhX", "r1lircRdnQ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Post-rebuttal revision: All my concerns were adressed by the authors. This is a great paper and should be accepted.\n\n------\n\nThe paper presents smoothing probabilistic box embeddings with softplus functions, which make the optimization landscape continuous, while also presenting the theoretical background of the proposed method well. The paper presents the overall idea beautifully and is very easy to follow. The overall idea of smoothed sotfplus boxes is well-founded, elegant and practical. The results on standard WordNet do not improve upon state-of-the-art, however imbalanced WordNet with abundance of negative examples gain remarkable improvements. Similarly in Flickr and MovieLens the method performs well. This paper presents a novel, theoretically well-justified idea with excellent results, and is likely going to be a high-impact paper. \n\nAn illustrating figure would still be nice to include, also for the convolutions of eq 2. The paper does not comment on running times, some kind of scalability comparison should be included since the paper claims that the model is easier to train.\n\nThe paper should clarify that the \\prod in 3.3. meet and join definitions seems to refer to a set product, while the p(a) equation has a standard product (or does it?). What is the “a” in the p(a), should it be \"p(x)” ? \n\nI have trouble understanding eq 1: the difference inside the function is always negative, while the hinge function seems to clip negative values away. The definition of the m(x) is too clever, please clarify the function in more conventional notation. ", "Thank you for your thoughtful review. Responses are included inline:\n\n> An illustrating figure would still be nice to include, also for the convolutions of eq 2. \n\nWe agree that such a rendering will be helpful, and will add it to the paper.\n\n> The paper does not comment on running times, some kind of scalability comparison should be included since the paper claims that the model is easier to train.\n\nThe ease of training leads to better results on certain data, rather than increased scalability --- both methods are applicable to large scale data, similar to other embedding methods. We added a new series of experiments testing robustness to different initialization regimes for the two models, which are included in the draft and detailed in our response to Reviewer #2.\n\n> The paper should clarify that the \\prod in 3.3. meet and join definitions seems to refer to a set product, while the p(a) equation has a standard product (or does it?). What is the “a” in the p(a), should it be \"p(x)” ? \n\nYour interpretation of the products is correct, and \"a\" was indeed a typo for \"x.\" Thanks! We have fixed this in the draft and changed the definition to clarify the meaning of the products.\n\n> I have trouble understanding eq 1: the difference inside the function is always negative, while the hinge function seems to clip negative values away. \n> The definition of the m(x) is too clever, please clarify the function in more conventional notation. \n\nThank you, there was a sign error. In the updated formula, the quantity inside the function can be positive or negative (negative if the hard boundaries of the boxes don't overlap at all). We've also switched the definition to use “min” and “max” rather than \\wedge and \\vee symbols, so it should be much clearer.\n\n", "\n> There's a strong emphasis on how smoothing makes training easier. 
Do you have any metrics to directly support this, such as variance under random restarts?\n\nWe do not see much variance in terms of outcome when changing only the random seed. In terms of ease/robustness of training, our experiments on imbalanced wordnet give evidence that the soft box model is more robust in the regime of sparse *training* data. However, we have updated the draft with a new series of experiments on MovieLens. In the appendix, we’ve added experiments that demonstrate the greatly decreased sensitivity of the soft box model when picking distributions for box initialization such that the boxes start off with roughly 0%, 20%, 50%, and 100% of boxes disjoint --- regimes in which the hard box model experiences much greater degradation in performance. Although we can control our initialization, we can't necessarily control the intermediate stages of learning, during which boxes may become disjoint, so this may give some useful insight.\n\nNOTE: When performing this comparison, we found a difference between the criteria to establish development set convergence in the POE, box, and soft box experiments on MovieLens and the criteria used by the (complex) bilinear baseline models. These criteria (number of steps without development set improvement) are given in the appendix. This led us to update the results (in Table 5) for POE, box, and soft box, with the best performing model (our proposed soft box model) improving by an absolute point of Spearman and Pearson's rho compared to the old tuning regime. Additionally, the hard box model outperforms all other models besides the soft box model. The soft box model outperforms it in KL and Pearson by a similar absolute margin as before, but its previous advantage of ~2.9 points of Spearman's rho over the hard box model is now only ~2.1 points. This difference in development set stopping criteria was not present in any other experiments.\n\n\n> In the abstract and introduction, it's easy to gloss over \"inspired by\" and assume that the actual model is a Gaussian convolution. Could be more direct here that it's a softplus approximation.\n\nThe model is also modified to take pointwise min and max inside the softplus, in order to maintain idempotency, as described in the second half of section 5.2. We updated this section with a clearer description. Since we not only approximate the Gaussian with a logistic, but also modify the equation to preserve the necessary idempotency (by analogy to the zero-temperature limit), \"softplus approximation\" might not be sufficient to describe the entire model. We should still try to make this part of the abstract clearer in some way.", "Thank you for the review. We will reply in detail to each point inline:\n\n> Missing citation / comparison: https://arxiv.org/pdf/1804.01882.pdf (Ganea et al. 2018) is an alternative way of generalizing order embeddings. \n> They also report very high numbers on WordNet, though I'm not sure they are directly comparable.\n\nThis is indeed a very related paper. Our work differs from hyperbolic embeddings in a couple of ways. First, by virtue of being a probabilistic model, the box model can score complex multivariate queries including negated variables. Secondly, the box structure is more suitable for general DAG embedding, as opposed to a hyperbolic model where the constant negative curvature strongly biases the model towards trees. The numbers are not directly comparable, but we will add this to related work, thank you.\n\n> The Gaussian relaxation (Eq. 
(2) and (3)) defines a particular length scale, \\sigma. \n> It's not clear if this is also implicit in the softplus derivation (by analogy with Eq. (4), should we assume that it approximates the \\sigma = 1 case?). \n> What effect does this have on the embedding space? Without it, it would seem that the normal BL model is scale invariant, which might be a desirable property for representing hierarchical data.\n\nThe \\sigma parameter is absorbed into the constant \\rho in the softplus approximation to the Gaussian (Proposition 1), which differs from \\sigma by the factor 1/1.702 given there. In practice, this is tuned as a global temperature for the softplus, but it is not particularly important when normalizing the space by the global coordinatewise minimum and maximum, as explained at the end of section 4.2 (this detail is probably the most important practical answer to your question). The scale invariance question is interesting. In order to solve the problem of sparse gradients, our solution sacrifices scale invariance. While scale invariance is desirable in theory, it has been known to cause instability in other contexts, such as perceptron vs. hinge loss learning, and perhaps the “scale” of the “soft edges” could be viewed as a type of margin, as well as solving the problem of sparse gradients.\n\n> The main thrust of section 5.2 is that smoothed box embeddings retain better performance with increasing numbers of negatives. \n> Could you include the ratio of positive / negative examples on the Flickr dataset, and some measure of the distribution of P(A|B) values on MovieLens to get a sense of how these datasets compare?\n\nSince the Flickr dataset consists of denotational entailment probabilities between (possibly unseen) pairs of sentences, none of the train or test probabilities are exactly 0 (negative examples). However, many such pairs have a conditional probability below 0.1, with a ratio of about 13:1. Movielens is similarly pseudosparse, not truly sparse, with a similarly large majority of its probabilities taking values below 0.1. We have added a histogram showing the distribution of these probabilities in the appendix.\n\n> Flickr data: what is the encoder model that produces the embeddings here, and how does it handle unseen captions? (Why would we expect the smoothed box model to handle unseen captions better?)\n\nThe encoder model is a single-layer LSTM with the same specifications as used in Lai and Hockenmaier 2017 and Vilnis et al. 2018. It handles unseen captions by composing token embeddings with the RNN. We have updated the draft to make this clear. As for why the soft box model improves on unseen captions more than the other tasks, it may simply be a question of there being more room to improve (the previous SOTA held-out KL divergence is about twice as large for unseen captions than for the other categories, for example.) It would be interesting to explore this further.\n\n(continued in next comment)", "Thank you for the thoughtful review.\n\n - We use Adam to perform the optimization, using the default settings given in the Adam paper for momentum / decay / ridge terms, with learning rates given in the appendix of the submission. We have also updated the appendix with more hyperparameter details, and plan to release code before publication. \n\n - The temperature / bandwidth hyperparameter is always set equal to 1.0. 
We address this also in our response to Reviewer #2 --- since in all experiments aside from Flickr, we divide each dimension by the global maximum across boxes, this seems to avoid scale issues. \n\n - We agree that a figure illustrating the geometric intuition would be helpful, and will add a rendering in a future draft.", "The paper proposes a method for learning embeddings of hierarchies. Specifically, the paper builds on a geometrically inspired embedding method using box representations. The key contribution of the paper is facilitating the optimization of these models by gradient-based methods, which eventually leads to improved accuracy on relevant benchmark data (on par with or beyond SOTA). The observation is that when two boxes are disjoint in the model but have overlap in the ground truth, no gradient can flow to the model to correct the problem (which happens in the case of sparse data).\n\nTo alleviate the above problem, the paper proposes smoothing the model. That is, transforming the original model constructed from indicator functions (hence difficult to optimize by gradient-based methods) into a smooth differentiable function by diffusing the landscape. The diffusion process corresponds to convolving the objective function with the Gaussian kernel.\n\nI find the idea of converting such combinatorial problems to differentiable ones, especially when gradient methods can succeed in optimizing them afterward, very fascinating. I believe this paper is taking a theoretically sound path to construct the differentiable form of the originally non-differentiable problem. As the authors find, the smoothed function leads to improved performance against SOTA on relevant benchmark data such as WordNet hypernymy, Flickr caption entailment and the MovieLens market basket data.\n\nOne downside of the current submission is that the details of optimization are not provided at all. What algorithm do you use to optimize the objective function? What are the hyperparameters? What value of sigma (for diffusion) do you use (or maybe you use the continuation method to gradually anneal sigma from large toward zero?). These are important details that I ask the authors to include.\n\nAlso, I think some graphical illustration of the embedding would be very helpful, perhaps something like Figure 2 of \"Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures\". I hope such an illustration is added to the submission.", "This paper proposes a soft relaxation of the box lattice (BL) model of Vilnis et al. 2018 and applies it to several graph prediction tasks. Results are comparable to the BL model on existing artificially-balanced data but significantly better on more natural unbalanced data with a large number of negatives. The paper assumes some familiarity with the problem domain and existing works (there is not a lot of exposition for an unfamiliar reader), but should be of strong interest to anyone working on embeddings or graph prediction.\n\nThe paper is well-written, with clear explanations of the desired properties of the model and a concise set of experiments that are easy to follow. The strongest result is that on unbalanced WordNet, while the Flickr and MovieLens results are a little less clear but do show that this technique does not cause any loss in performance.\n\nA few points of feedback:\n\n- Missing citation / comparison: https://arxiv.org/pdf/1804.01882.pdf (Ganea et al. 2018) is an alternative way of generalizing order embeddings. 
They also report very high numbers on WordNet, though I'm not sure they are directly comparable.\n\n- The Gaussian relaxation (Eq. (2) and (3)) defines a particular length scale, \sigma. It's not clear if this is also implicit in the softplus derivation (by analogy with Eq. (4), should we assume that it approximates the \sigma = 1 case?). What effect does this have on the embedding space? Without it, it would seem that the normal BL model is scale invariant, which might be a desirable property for representing hierarchical data.\n\n- The main thrust of section 5.2 is that smoothed box embeddings retain better performance with increasing numbers of negatives. Could you include the ratio of positive / negative examples on the Flickr dataset, and some measure of the distribution of P(A|B) values on MovieLens to get a sense of how these datasets compare?\n\n- Flickr data: what is the encoder model that produces the embeddings here, and how does it handle unseen captions? (Why would we expect the smoothed box model to handle unseen captions better?)\n\n- There's a strong emphasis on how smoothing makes training easier. Do you have any metrics to directly support this, such as variance under random restarts?\n\n- In the abstract and introduction, it's easy to gloss over \"inspired by\" and assume that the actual model is a Gaussian convolution. Could be more direct here that it's a softplus approximation.", "Hi, thanks for the comment! We actually do cite the Hierarchical Density Order Embedding paper from Athiwaratkun et al. in the introduction section. The original box lattice paper from Vilnis et al. also reports the density model result of 92.3 accuracy in their WordNet table. Box embeddings get a very similar score on this task, so we only include that result, since the aim of the experiment is to compare the soft box and hard box models and not to demonstrate a new state of the art. There are also some questions about whether the density embeddings use exactly the same dataset split, or just the same method of generating negative examples, which we have not been able to determine. Hope this helps! ", "This paper seems like a great idea. However, I believe the paper misses an important reference on the WordNet task. According to Hierarchical Density Order Embeddings (Athiwaratkun, 2018) https://arxiv.org/pdf/1804.09843.pdf, their score for hypernym prediction on the WordNet test split is 92.3, which is a bit higher than the paper's reported scores. " ]
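The eq 1 exchange above (the sign error, and min/max replacing the lattice symbols to keep the measure idempotent) pins down enough of the smoothed measure to sketch in code. This is our reading of the discussion, not the authors' implementation; the temperature argument and the final conditional-probability form are assumptions based on the responses above, which report the temperature being set to 1.0 in experiments.

import numpy as np

def softplus(z, t=1.0):
    # Tempered softplus; t plays the role of the global temperature
    # discussed in the responses above.
    return t * np.log1p(np.exp(z / t))

def soft_volume(box_min, box_max, t=1.0):
    # Product over dimensions of the smoothed side lengths.
    return np.prod(softplus(box_max - box_min, t))

def soft_overlap(a_min, a_max, b_min, b_max, t=1.0):
    # Pointwise max of lower corners and min of upper corners keeps the
    # measure idempotent (soft_overlap(a, a) == soft_volume(a)); the
    # difference goes negative when the hard boxes are disjoint, yet
    # softplus still passes a nonzero gradient, unlike a hinge.
    lower = np.maximum(a_min, b_min)
    upper = np.minimum(a_max, b_max)
    return np.prod(softplus(upper - lower, t))

def cond_prob(a_min, a_max, b_min, b_max, t=1.0):
    # P(a | b) = vol(a meet b) / vol(b), as in the box lattice model.
    return (soft_overlap(a_min, a_max, b_min, b_max, t)
            / soft_volume(b_min, b_max, t))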
[ 7, -1, -1, -1, -1, 8, 8, -1, -1 ]
[ 3, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2019_H1xSNiRcF7", "H1xPEJOVsm", "SylyqW25Cm", "SylZ79N5hm", "rJglCnOshm", "iclr_2019_H1xSNiRcF7", "iclr_2019_H1xSNiRcF7", "r1lircRdnQ", "iclr_2019_H1xSNiRcF7" ]
iclr_2019_HJx54i05tX
On Random Deep Weight-Tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, and Implications to Training
We study the behavior of weight-tied multilayer vanilla autoencoders under the assumption of random weights. Via an exact characterization in the limit of large dimensions, our analysis reveals interesting phase transition phenomena when the depth becomes large. This, in particular, provides quantitative answers and insights to three questions that were not yet fully understood in the literature. Firstly, we provide a precise answer on how the random deep weight-tied autoencoder model performs “approximate inference” as posed by Scellier et al. (2018), and its connection to reversibility considered by several theoretical studies. Secondly, we show that deep autoencoders display a higher degree of sensitivity to perturbations in the parameters, distinct from their shallow counterparts. Thirdly, we obtain insights on pitfalls in training initialization practice, and demonstrate experimentally that it is possible to train a deep autoencoder, even with the tanh activation and a depth as large as 200 layers, without resorting to techniques such as layer-wise pre-training or batch normalization. Our analysis is not specific to any depths or any Lipschitz activations, and our analytical techniques may have broader applicability.
accepted-oral-papers
This paper analyzes random autoencoders in the infinite-dimension limit, under the assumption that the weights are tied between the encoder and decoder. In this limit, the paper is able to show that the random autoencoder transformation performs approximate inference on the data. The paper is able to obtain principled initialization strategies for training deep autoencoders using this analysis, showing its usefulness. Even though there are limitations of the paper, such as studying only random models and characterizing them only in the limit, all the reviewers agree that the analysis is novel and gives insights on an interesting problem.
train
[ "rJeRWADXyE", "B1lULFW9hm", "S1lmapk71V", "SJe6Z_6dAm", "B1lLKO6dAX", "rJehwu6dRm", "BkeCavpORQ", "rklt1LauAQ", "SygkNeB92Q", "Skeq2IQ7h7" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your reply. We are happy to know that.", "This work applies infinite width limit random network framework (a.k.a. Mean field analysis) to study deep autoencoders when weights are tied between encoder and decoder. Random network analysis allows to have exact analysis of asymptotic behaviour where the network is infinitely deep (but width taken to infinite first). This exact analysis allows to answer some theoretical questions from previous works to varying degrees of success. \n\nBuilding on the techniques from Poole et al (2016) [1], Schoenholz et al (2017) [2], the theoretical analysis to deep autoencoder with weight tied encoder/decoder shows interesting properties. The fact that the network component are split into encoder/decoder architecture choice along with weight tying shows various interesting phase of network configuration. \n\nMain concern with this work is applicability of the theoretical analysis to real networks. The autoencoding samples on MNIST provided in the Appendix at least visually do not seem to be a competitive autoencoder (e.g. blurry and irrelevant pixels showing up). \n\nAlso the empirical study with various schemes is little hard to parse and digest. It would be better to restructure this section so that the messages from theoretical analysis in the earlier section can be clearly seen in the experiments.\n\nThe experiments done on fixed learning rate should not be compare to other architectures in terms of training speed as learning rates are sensitive to the architecture choice and speed may be not directly comparable. \n\nQuestions/Comments\n- Without weight tying the whole study is not much different from just the feedforward networks. However, as noted by the authors Vincent et al (2010) showed that empirically autoencoders with or without weight tying performs comparably. What is the benefit of analyzing more complicated case where we do not get a clear benefit from? \n\n- Many auto encoding networks benefit from either bottleneck or varying the widths. The author’s regime is when all of the hidden layers grows to infinity at the same order. Would this limit capture interesting properties of autoencoders?\n\n- When analysis is for weight tied networks, why is encoder and decoder assume to have different non-linearity? It does show interesting analysis but is it a practical choice? From this work, would you recommend using different non-linearities?\n\n- It would be interesting to see how this analysis is applied to Denoising Autoencoders [3], which should be straightforward to apply similar to dropout analysis appeared in Schoenholz et al [2].\n\n[1] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential\nexpressivity in deep neural networks through transient chaos. In Advances in neural information\nprocessing systems, pp. 3360–3368, 2016.\n[2] S.S. Schoenholz, J. Gilmer, S. Ganguli, and J. Sohl-Dickstein. Deep information propagation. 5th International Conference on Learning Representations, 2017.\n[3] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. 
Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.\n", "Thank you for responding to the comments and concerns.\n\nI appreciate the contribution of the paper more now and have reflected that in an increased score.", "Question: “Without weight tying, the whole study is not much different from just the feedforward networks. However, as noted by the authors, Vincent et al (2010) showed empirically that autoencoders with or without weight tying perform comparably. What is the benefit of analyzing the more complicated case when we do not get a clear benefit from it?”\n\nAnswer: We thank the reviewer for raising this important remark by Vincent et al (2010). Since this remark, the weight-tied autoencoder has become standard. Many subsequent experimental works use it. Interestingly, all recent theoretical works directly concerning autoencoders also assume weight-tying. In fact, experiments in the paper Vincent et al (2010) are done on weight-tied autoencoders. As such, we believe it is imperative to thoroughly investigate this weight-tied architecture.\n\nWe also would like to emphasize that the weight-tying assumption is very critical to the “approximate inference” notion and the concept of reversibility (which we consider in Section 3.3). Without it, there would be no signal component (i.e. S_sig=0) for any activation and any depth, and so the random autoencoder without weight-tying cannot be said to perform approximate inference.\n\n\n********************************************\nQuestion: “Many autoencoding networks benefit from either a bottleneck or varying widths. The authors’ regime is when all of the hidden layer widths grow to infinity at the same order. Would this limit capture interesting properties of autoencoders?”\n\nAnswer: While we require that all dimensions go to infinity, we allow their ratios to be arbitrary constants in the main theoretical result, Theorem 1. For instance, an autoencoder with dimensions 1000 - 100 - 1000 would fit in this description, in that the dimensions are large, with the dimension ratio being 0.1. Likewise, in the infinite-depth analysis, we also allow the ratio \alpha to be an arbitrary constant.\n\n\n********************************************\nQuestion: “When the analysis is for weight-tied networks, why are the encoder and decoder assumed to have different non-linearities? It does show interesting analysis, but is it a practical choice? From this work, would you recommend using different non-linearities?”\n\nAnswer: For the purpose of this work, the case of different encoder/decoder activations is a test case for our hypothesis, which does not place any restriction on whether the activations are identical. It makes an interesting test case because having different activations is not yet considered in the theories for feedforward networks, but might be natural for the autoencoder setup.\n\nOn the other hand, our experiments indeed show something interesting about making \varphi (decoder activation) different from \sigma (encoder activation)! Let us draw attention to Scheme 4 (\varphi=ReLU, \sigma=tanh) and Scheme 1 (\varphi=\sigma=ReLU) in Fig 4. Note that the tanh activation is typically considered to be rather difficult. Scheme 4 is not even an EOC initialization with respect to \sigma, whereas Scheme 1 is. Despite this, we observe that:\n- Scheme 4 has lower test loss consistently.\n- Scheme 4 has a smoother learning curve, suggesting that it is possible to use a higher learning rate on this scheme.\n\nThese observations are intriguing. 
As such, we believe your question of whether having different activations is better would make a good research direction.\n\n\n********************************************\nQuestion: “It would be interesting to see how this analysis is applied to Denoising Autoencoders [3], which should be straightforward to apply, similar to the dropout analysis that appeared in Schoenholz et al [2].”\n\nAnswer: We totally agree on this! We believe it would be interesting to examine the behavior of various variants of the autoencoders, which warrant multiple serious investigations.\n", "Question: “In Section 3.5, the authors should make clear from the beginning why they are running those specific simulations. What hypotheses are they trying to check? I finally concluded that they are running simulations to check if the hypotheses they make in the first paragraph are true. They also want to compare with some other criteria in the literature, named EOC, that also gives insights about the trainability of the network. However, they could explicitly say in the beginning of the second paragraph that this is the goal.\n\nIn a similar spirit, the authors should end Section 3.5 with a clear conclusion on whether or not the framework enables us to predict the trainability of the autoencoder. “\n\nAnswer: Thank you for the comment. Your understanding is correct. We have made efforts to restructure this section. In particular, we include some headings to signal the readers what to expect. We also refine the descriptions to make the goals and the conclusions clearer.\n\n\n********************************************\nQuestion: “Typo: last but one paragraph of the introduction: \"whose analysis is typically more straighforwards\" -> \"straightforward”.”\n\nAnswer: This is fixed. Thanks!\n\n\n********************************************\nQuestion: “At the end of Section 3.2: what can be proved about the behavior of \gamma / \sqrt{\rho}? It is obviously a central quantity and the authors do not say what happens in the phases where \gamma and \rho go to infinity for instance. Is it because it is hard to analyse?”\n\nAnswer: It is indeed hard to prove non-trivial statements on this ratio. We could make a few in the case where the decoder activation \varphi is ReLU, but they are trivial. The case where \varphi is tanh is much more difficult (note that in this case, \varphi and \rho are bounded).\n\nThe proven statements for \gamma and \rho are only intended to verify that we are not missing interesting behaviors by doing numerical simulations. In general, their behaviors, including the ratio \gamma/\sqrt{\rho}, can be conveniently simulated, so we place less emphasis on having rigorous proofs for the observations we make.\n", "We very much appreciate that you find our work interesting, as well as your suggestions to improve readability. We also agree that the analysis here has the potential to lead to more exciting research directions.\n\nWe provide a specific reply to each comment in the following.\n\n\n********************************************\nQuestion: “In Section 2.1, a large number of notations are introduced. It would help a lot if the authors made a graphical representation of these. For instance, a diagram where every linearity / non-linearity is a box, and the different variables $x_l$, $\hat{x}_l$ appear would help a lot.”\n\nAnswer: Thank you for the suggestion. We have updated the paper with a schematic diagram in Figure 5 in the appendix. 
Admittedly we could not find a way to fit the diagram within the main 10 pages.\n\n\n********************************************\nQuestion: “Section 2.2 is rather technical. The authors could try to give some more intuition of what's happening. For instance, they could spend more time after the theorem explaining what $\tau_l, \gamma_l$ and $\rho_l$ mean. They could also introduce the notation S_sig and S_var early in this section (and not in Section 3), because it helps interpreting the parameters. It would also help if they could write a heuristic derivation of the state-evolution-like equations. From the paper, the only way the reader can understand the intuition behind those complicated equations is to look at the proof of Theorem 1 (which is rather technical).”\n\nAnswer: We thank you for the suggestions. We agree that this is quite technical. In Appendix A.2, we give an outline of the proof and high-level descriptions of the ideas, from which the meanings of \gamma_l, \rho_l, and the intuition behind the state-evolution-like equations are clearer. The reason that we have to defer this task to the appendix is that the task necessarily requires stating a result that is in line with Proposition 5 and Corollary 6. By stating such a result, we face a risk of creating a distraction to many readers of ICLR, while we want to focus more on the implications of Theorem 1 within the main 10 pages.\n\nNevertheless, we have added a short paragraph at the beginning of Section 2.2, performing heuristic calculations on a special case to motivate the result in Theorem 1. We hope this will make the theorem easier to interpret.\n\nRegarding the notation S_sig and S_var, we did not introduce them earlier since, in our opinion, they are not as central as \tau_l, \gamma_l and \rho_l, whose one-dimensional evolutions inherit nice visualizable properties (as shown in Fig 2) that make the phase transition phenomena more intuitive.\n\n\n********************************************\nQuestion: “In Section 3.1, I did not understand the difference between interpretations 1 and 2. Could the authors clarify?”\n\nAnswer: The two interpretations are closely related. Interpretation 1 states that for most intermediate layers (i.e. 1<< \ell << L), the decoder outputs \hat{x}_\ell can be described with \gamma and \rho. In Interpretation 2, we extend this property to outermost layers (e.g. \ell = 1), which now include the final output of the autoencoder. This requires an extra mild assumption on the normalization of the input x.\n\n\n********************************************\nQuestion: “In Section 3.4, I did not understand the sentence: \"In particular, near the phase transition of \gamma, S_sig/S_var = \Omega(\beta^{1.5})\". If one uses the \Omega notation, it means that some parameter is converging to something. What is the parameter? As a consequence, I did not understand this paragraph.”\n\nAnswer: We thank you for pointing this out. We include a footnote saying that by f(\beta)=\Omega(g(\beta)) in a certain range of \beta, we mean d[f(\beta)/g(\beta)]/d\beta>0 on this range. We believe this better reflects the sensitive-to-perturbation behavior. We also remove the use of the big \Theta notation in the previous paragraph, and instead use the proportional relation.", "We would like to thank the reviewer for providing thoughtful comments. 
We reply to each comment in the following.\n\n********************************************\nQuestion: “Building on the techniques from Poole et al (2016) [1] and Schoenholz et al (2017) [2], the theoretical analysis of deep autoencoders with a weight-tied encoder/decoder shows interesting properties.”\n\nAnswer: We agree that our setting is very related to the recent works by Poole, Schoenholz, Pennington, Ganguli and many others in that we consider random weights. On the other hand, we would like to emphasize a key difference: our analysis for the weight-tied case heavily depends on the Gaussian conditioning technique from the TAP theory / approximate message passing (AMP) literature. The analysis is challenging due to the weight-tied constraint, as acknowledged by previous works (Arora et al 2015, Chen et al 2018). In fact, the work Chen et al 2018 follows the same mean-field framework as Poole et al (2016) and Schoenholz et al (2017), but has to assume a weight-untied analysis for weight-tied structures. Admittedly, we choose not to detail in the main 10 pages how crucial the use of this technique is, and only mention this fact briefly at the end of the second last paragraph of Section 1. This is because doing so would be technically involved, probably lengthy, and will be a distraction to many ICLR readers. We leave that to the appendix for theoretically inclined readers.\n\n\n********************************************\nQuestion: “My main concern with this work is the applicability of the theoretical analysis to real networks. The autoencoding samples on MNIST provided in the Appendix at least visually do not seem to come from a competitive autoencoder (e.g. blurry and irrelevant pixels showing up).”\n\nAnswer: We acknowledge this valid point. We focus solely on the effect of depth and analytical tractability, and so a simple setup (vanilla autoencoders) is much desired. This is analogous to the setting in Schoenholz et al 2017, which considered vanilla feedforward networks - with the key difference that we consider weight-tied structures. It would be very interesting to extend the analysis to more complex setups. Nevertheless, this simple setup is sufficiently challenging, yet already yields interesting implications, considering the fact that this is the first work on weight-tied autoencoders in the considered directions.\n\n\n********************************************\nQuestion: “Also the empirical study with various schemes is a little hard to parse and digest. It would be better to restructure this section so that the messages from the theoretical analysis in the earlier section can be clearly seen in the experiments.”\n\nAnswer: We thank you for raising this point. We have made efforts to restructure this section. In particular, we include some headings to signal the readers what to expect. We also refine the descriptions to make the goals and the conclusions clearer.\n\n\n********************************************\nQuestion: “The experiments done with a fixed learning rate should not be compared to other architectures in terms of training speed, as learning rates are sensitive to the architecture choice and speed may not be directly comparable.”\n\nAnswer: We would like to clarify that the comparison is made for different initialization schemes (in Table 1) of the same pair of encoder-decoder activations and hence the same architecture. Since the architecture is the same for each pair, we believe this is a fair comparison. We have added a sentence in the revision to remind readers about this. 
Thanks.\n\n\n********************************************\nReferences:\nSanjeev Arora, Yingyu Liang, and Tengyu Ma. Why are deep nets reversible: A simple theory, with implications for training. arXiv preprint arXiv:1511.05653, 2015.\n\nMinmin Chen, Jeffrey Pennington, and Samuel S Schoenholz. Dynamical isometry and a mean field theory of rnns: Gating enables signal propagation in recurrent neural networks. arXiv preprint arXiv:1806.05394, 2018.\n\nS.S. Schoenholz, J. Gilmer, S. Ganguli, and J. Sohl-Dickstein. Deep information propagation. 5th International Conference on Learning Representations, 2017.", "We thank the reviewer for finding our work interesting! We completely agree that one key contribution of our work is the application of the Gaussian conditioning technique, borrowed from the TAP theory / approximate message passing literature, to the random deep weight-tied autoencoder setting.\n\nQuestion: “A minor comment: the fact that the DAE is \"weight-tied\" is fundamental in this analysis. It actually should be mentioned in the title!”\n\nAnswer: Thank you for the suggestion. We agree with that and have revised the title of the paper accordingly.\n", "Building on recent progress in the analysis of random high-dimensional statistics problems, and in particular of message passing algorithms, this paper analyses the performance of weight-tied auto-encoders. Technically, the paper is using the state evolution formalism. In particular, the main theorem uses the analysis of the multi-layer version of these algorithms, the so-called state evolution techniques, in order to analyse the behaviour of optimal decoding in a weight-tied decoder. It is based on a clever trick: the behaviour of the decoding is similar to that of the reconstruction in a multilayer estimation problem. This is a very original use of these techniques.\n\nThe results are threefold: (i) a deep analysis of the limitations of weight-tied DAEs in the random setting, (ii) the demonstration of the sensitivity to perturbations, and (iii) a clever initialisation method that allows training a DAE.\n\nPro: a rigorous work, a clever use of recent progress in the rigorous analysis of random neural nets, and a very deep answer to interesting questions.\nCon: I do not see much against the paper. A minor comment: the fact that the DAE is \"weight-tied\" is fundamental in this analysis. It actually should be mentioned in the title!\n\n", "This paper studies auto-encoders under several assumptions: (a) the auto-encoder's layers are fully connected, with random weights, (b) the auto-encoder is weight-tied, (c) the dimensions of the layers go to infinity with fixed ratios. The main contribution of the paper is to point out that this model of random autoencoder can be elegantly and rigorously analysed with one-dimensional equations. The idea is original and will probably lead to new directions of research. Already the first applications that the paper suggests are exciting.\n\nThe paper does a good job in justifying assumptions (a), (b) and (c) in the introduction. It is convincing in the fact that this point of view may bring practical insights on training initialization for real-world autoencoders. Thus my opinion is that this paper brings original and significant ideas to the field.\n\nOne flaw of this paper is that the writing could be clearer. For instance, when presenting the technical theorem (Theorem 1), it would be useful to have an intuitive explanation for the theorem and the state-evolution-like equations. 
However, I believe that there are some easy fixes that would greatly improve the clarity of the exposition. Here is a list of suggestions: \n\n- In Section 2.1, a large number of notations are introduced. It would help a lot if the authors made a graphical representation of these. For instance, a diagram where every linearity / non-linearity is a box, and where the different variables $x_l$, $\\hat{x}_l$ appear, would help a lot. \n\n- Section 2.2 is rather technical. The authors could try to give some more intuition about what's happening. For instance, they could spend more time after the theorem explaining what $\\tau_l, \\gamma_l$ and $\\rho_l$ mean. They could also introduce the notation S_sig and S_var early in this section (and not in Section 3), because it helps in interpreting the parameters. It would also help if they could write a heuristic derivation of the state-evolution-like equations. From the paper, the only way the reader can understand the intuition behind those complicated equations is to look at the proof of Theorem 1 (which is rather technical). \n\n- In Section 3.1, I did not understand the difference between interpretations 1 and 2. Could the authors clarify? \n\n- In Section 3.4, I did not understand the sentence: \"In particular, near the phase transition of \\gamma, S_sig/S_var = \\Omega(\\beta^{1.5})\". If one uses the \\Omega notation, it means that some parameter is converging to something. What is the parameter? As a consequence, I did not understand this paragraph. \n\n- In Section 3.5, the authors should make clear from the beginning why they are running those specific simulations. What hypotheses are they trying to check? I finally concluded that they are running simulations to check whether the hypotheses they make in the first paragraph are true. They also want to compare with another criterion from the literature, named EOC, that also gives insights about the trainability of the network. However, they could explicitly say at the beginning of the second paragraph that this is the goal.\n\n- In a similar spirit, the authors should end Section 3.5 with a clear conclusion on whether or not the framework enables us to predict the trainability of the autoencoder. \n\n\n\nMinor edits / remarks: \n\n- Typo: last but one paragraph of the introduction: \"whose analysis is typically more straighforwards\" -> \"straightforward\".\n\n- At the end of Section 3.2: what can be proved about the behavior of \\gamma / \\sqrt{\\rho}? It is obviously a central quantity, and the authors do not say what happens in the phases where \\gamma and \\rho go to infinity, for instance. Is it because it is hard to analyse?\n\n" ]
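Several of the comments above ask for intuition about the state-evolution-like equations. As a rough pointer for readers of this record, recursions of this kind typically track a few scalar statistics of a wide random layer. The following is a heuristic sketch in the style of the mean-field analyses of Poole et al. (2016) and Schoenholz et al. (2017), cited in the rebuttal above; it is not the paper's exact equations. Here $\phi$ is the activation, $\sigma_w^2$ and $\sigma_b^2$ are weight and bias variances, and $q^l$ is the pre-activation variance at layer $l$:

```latex
% Heuristic state-evolution-style recursion (illustrative sketch only):
% a wide random layer is summarized by one scalar q^l, which is updated
% by a one-dimensional Gaussian average over the activation.
\begin{align*}
  q^{l+1} = \sigma_w^2 \, \mathbb{E}_{z \sim \mathcal{N}(0,\, q^l)}\big[\phi(z)^2\big] + \sigma_b^2 .
\end{align*}
```

The weight-tied analysis discussed in the replies above is more involved (the Gaussian conditioning technique is what deals with the reuse of the same weight matrix), but the equations retain this flavor: a small set of scalars per layer, evolved by one-dimensional expectations.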
[ -1, 8, -1, -1, -1, -1, -1, -1, 9, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "S1lmapk71V", "iclr_2019_HJx54i05tX", "BkeCavpORQ", "B1lULFW9hm", "Skeq2IQ7h7", "Skeq2IQ7h7", "B1lULFW9hm", "SygkNeB92Q", "iclr_2019_HJx54i05tX", "iclr_2019_HJx54i05tX" ]
iclr_2019_HkNDsiC9KQ
Meta-Learning Update Rules for Unsupervised Representation Learning
A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.
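To make the abstract's "biologically-motivated, neuron-local function" concrete for readers of this record, here is a minimal sketch of what such an update rule could look like. Everything in it is an illustrative assumption rather than the paper's actual architecture: the choice of per-synapse features (pre-synaptic activity, post-synaptic activity, a top-down error signal), the tiny-MLP parameterization, and all names and sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder meta-parameters of the update MLP; in the paper's setting these
# would be meta-learned, here they are random for illustration.
H = 16
W1, b1 = 0.1 * rng.normal(size=(3, H)), np.zeros(H)
W2, b2 = 0.1 * rng.normal(size=(H, 1)), np.zeros(1)

def local_update(pre, post, err):
    """Per-synapse weight update from neuron-local signals.

    Each synapse (i, j) sees only (pre_j, post_i, err_i), and the same tiny
    MLP is applied everywhere, so the rule is independent of layer width.
    """
    n_out, n_in = post.shape[0], pre.shape[0]
    feats = np.stack([
        np.tile(pre, (n_out, 1)),           # pre-synaptic activity
        np.tile(post[:, None], (1, n_in)),  # post-synaptic activity
        np.tile(err[:, None], (1, n_in)),   # top-down error signal
    ], axis=-1)                             # shape (n_out, n_in, 3)
    h = np.tanh(feats @ W1 + b1)
    return (h @ W2 + b2)[..., 0]            # (n_out, n_in) update for W

pre, post, err = rng.normal(size=8), rng.normal(size=4), rng.normal(size=4)
print(local_update(pre, post, err).shape)   # -> (4, 8)
```

Because the same small function is applied at every synapse, a rule of this form is agnostic to layer width and depth, which is the property the abstract leans on for generalization across architectures.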
accepted-oral-papers
The reviewers all agree that the idea is interesting, the writing clear and the experiments sufficient. To improve the paper, the authors should consider better discussing their meta-objective and some of the algorithmic choices.
train
[ "SJeJvkj5hX", "rJgRNO1Kp7", "HJeD-O1Yam", "r1eK0P1F67", "Bkgckkbah7", "r1eIOmju3m" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work brings a novel meta-learning approach that learns unsupervised learning rules for learning representations across different modalities, datasets, input permutation, and neural network architectures. The meta-objectives consist of few shot learning scores from several supervised tasks. The idea of using meta-objectives to learn unsupervised representation learning is a very interesting idea.\n\nAuthors mentioned that the creation of an unsupervised update rule is treated as a transfer learning problem, and this work is focused on learning a learning algorithm as opposed to structures of feature extractors. Can you elaborate on what aspect of learning rules and why they can be transferable among different modalities and datasets? For this type of meta-learning to be successful, can you discuss the requirements on the type of meta-objectives? Besides saving computational cost, does using smaller input dimensions favor your method over reconstruction type of semi-supervised learning, e.g. VAE?\n\nIn the section \"generalizing over datasets and domains\", the accuracy of supervised methods and VAE method are very close. This indicates those datasets may not be ideal to evaluate semi-supervised training.\n\nIn the section \"generalizing over network architectures\", what is the corresponding supervised/VAE learning accuracy?\n\nIn the experimentation section, can you describe in more details how input permutations are conducted? Are they re-sampled for each training session for tasks? If the input permutations are not conducted, will the comparison between this method, supervised and VAE be different?\n\nAfter reviewing the author response, I adjusted the rating up to focus more on novelty and less on polished results.", "Thank you for your thoughtful review! Comments below:\n\n\"The section 5.4 is a bit hard to understand, with very very small images.\"\nWe apologize for the lack of clarity. We will improve this section and will increase the image size!\n\n\"cons only very modestly better than other methods. I would like to get a feel for why VAE is so good tbh (though the authors show that VAE has a problem with objective function mismatch).\"\nIn generative modeling, understanding what design principles lead to reusable representations is a huge open field of study, but many people have promoted compositional generative models[1,2] and information theoretic measures of how well the model captures structure in the data [3,4]. VAEs possess both of these attributes.\n\n\"One comment: the update rule takes as inputs pre and post activity and a backpropagated error; it seems natural to also use the local gradient of the neuron's transfer function here, as many three or four factor learning rules do.\"\nThis is a great suggestion! Thanks.\n\n[1]Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. 2013.\n[2] Kingma, Diederik P., and Max Welling. \"Auto-encoding variational bayes.\" arXiv preprint arXiv:1312.6114 (2013).\n[2]Hinton, Geoffrey E., et al. \"The\" wake-sleep\" algorithm for unsupervised neural networks.\" Science 268.5214 (1995): 1158-1161.\n[3]Roweis, Sam T. \"EM algorithms for PCA and SPCA.\" Advances in neural information processing systems. 1998.", "Thank you for your thoughtful review! 
Comments below:\n\n\"Can you elaborate on what aspect of learning rules and why they can be transferable among different modalities and datasets?\"\nThis is a hypothesis based on the observation that hand designed learning rules transfer across modalities and datasets. We structure our learning rule in such a way as to aid this generalization. The specifics are largely inspired by biological neural networks--for instance the use of a neuron-local learning rule, and by the challenges associated with making meta-training stable--for instance, the use of normalization in almost every part of the system was found to be necessary to prevent meta-training from diverging. A better understanding of what aspects of learned learning rules transfer across datasets is a fascinating question and definitely something we are pursuing in future work.\n\n\"For this type of meta-learning to be successful, can you discuss the requirements on the type of meta-objectives?\"\nIn general, the meta-objective has to be easily tractable and have a well defined derivative with respect to the final layer (e.g. from backpropagation during meta-training). It should also reflect, as well as possible, performance on the eventual task. In our case, we wanted the base network to learn a representation in an unsupervised way which easily exposed class labels or other high level attributes, so we chose our meta-objective to reward few-shot learning performance using the unsupervised representation. In early experiments, we explored a number of variations on our eventual meta-objective (e.g. clustering and softmax regression). We found similar performance for these variants, and chose the meta-objective we describe in the paper (least squares) because we believed it to be the simplest.\n\n\"Besides saving computational cost, does using smaller input dimensions favor your method over reconstruction type of semi-supervised learning, e.g. VAE?\"\nWe only meta-train on the datasets with the smaller input size, but we test on both sizes (Figure 4). The VAE performance is comparable for the two input sizes, while the learned optimizer decreases in performance on mnist and remains constant on fashion mnist.\n\n\"In the section \"generalizing over network architectures\", what is the corresponding supervised/VAE learning accuracy?\"\nWe have not run these experiments, but we would expect the performance of the VAE to go up with increased model size.\n\n\"In the experimentation section, can you describe in more details how input permutations are conducted? Are they re-sampled for each training session for tasks? If the input permutations are not conducted, will the comparison between this method, supervised and VAE be different?\"\nThey are re-sampled for each new instantiation of an inner problem and kept constant while training that task. While we have not removed them, if we did we would expect the learned update rule to overfit to the meta-training distribution, causing improved performance on non-permuted image tasks, but extremely poor performance on permuted image tasks. Doing this would make comparisons to VAEs and supervised learning misleading however, as these two methods have no notion of spatial locality (whereas the learned optimizer now would). As a result, the learned optimizer’s relative performance would probably be a lot stronger. It would be very interesting in future work to use convnets for the base model--both for the learned update rule and the baselines. 
However, doing so would be a fairly involved process, requiring changes to the architecture of the learned update rule.\n", "Thank you for your thoughtful review! Comments below:\n\n\"Motivations are not very clear in some parts. E.g., the reason for learning backward weights (V), and the choice of meta-objective.\"\nOriginally, we did not learn backward weights, but in an effort to make the learning rule more biologically inspired we removed the transposed weights in favor of learned backward weights [1]. In practice, performance is surprisingly quite similar with both versions.\n\nAs per meta-objective: Exploring alternative meta-objectives would be very interesting! We choose the least squares meta-objective as it is allows us to compute the optimal final layer weights in closed form. This is important in that it allows us to easily differentiate the meta-objective with respect to the representation at the final layer (necessary for meta-training). We have explored alternative few-shot classification objectives (e.g. logistic regression, using implicit differentiation to get the appropriate derivative) but found performance to be similar and thus stuck with the simpler meta-objective.\n\n\"Experimental evaluation is limited to few-shot classification, which is very close to the meta-learning objective used in this paper. \"\nFor simplicity, we used the same meta-objective at evaluation time. The use of different meta-objectives (at both meta-train and meta-test) is also very interesting to us and is something we would pursue in future work.\n\n\"The result of text classification is interesting, but not so informative given no further analysis. E.g., why domain mismatch does not occur in this case?\"\nDomain mismatch does occur--just later in meta-training. Because we are learning a learning rule, as opposed to features, we expect some generalization, after all, hand designed learning rules generalize across datasets. We get some transfer performance early in meta-training, but the meta-objective on text tasks diverges later in training. We will add a few sentences to this effect. Better understanding out of domain generalization is definitely of interest to us and we are pursuing it in future work.\n\nPaper Title: This is a good point and we plan to change the paper title to: \"Meta-Learning Update Rules for Unsupervised Representation Learning\".\n\n\n[1] Crick, F. The recent excitement about neural networks. Nature 337, 129–132 (1989).", "This paper introduces a novel meta-learning approach to unsupervised representation learning where an update rule for a base model (i.e., an MLP) is meta-learned using a supervised meta-objective (i.e., a few-shot linear regression from the learned representation to classification GTs). Unlike previous approaches, it meta-learns an update rule by directly optimizing the utility of the unsupervised representation using the meta-objective. In the phase of unsupervised representation learning, the learned update rule is used for optimizing a base model without using any other base model objective. Experimental evaluations on few-shot classification demonstrate its generalization performance over different base architectures, datasets, and even domains. \n\n+ Novel and interesting formulation of meta-learning by learning an unsupervised update rule for representation learning. \n+ Technically sound, and well organized overall with details documented in appendixes. 
\n+ Clearly written overall with helpful schematic illustrations and, in particular, a good survey of related work. \n+ Good generalization performance over different (larger and deeper) base models, activation functions, datasets, and even a different modality (text classification).\n\n- Motivations are not very clear in some parts, e.g., the reason for learning backward weights (V) and the choice of meta-objective. \n- Experimental evaluation is limited to few-shot classification, which is very close to the meta-learning objective used in this paper. \n- The result of text classification is interesting, but not so informative given no further analysis, e.g., why does domain mismatch not occur in this case?\n\nI enjoyed reading this paper, and am happy to recommend it as a clear accept. The idea of meta-learning update networks looks like a promising direction worth exploring, indeed. \nI hope the authors will clarify the things I mentioned above. The experimental results are sufficient considering the space limit, but not great. Since the current evaluation task is quite similar to the meta-objective, evaluations on more diverse tasks would strengthen this paper. \n\nFinally, this paper aims at unsupervised representation learning, but that is not clear from the current title, which is somewhat misleading. I think that's quite an important feature of this paper, so I highly recommend the authors consider a more informative title, e.g., `Learning Rules for Unsupervised Representation Learning’ or else. ", "The paper describes unsupervised learning as a meta-learning problem: the observation is that unsupervised learning rules are effectively supervised by the quality of the representations that they yield relative to subsequent semi-supervised (or RL) learning. The learning-to-learn algorithm allows for learning network architecture parameters, and also 'network-in-networks' that determine the unsupervised learning signal based on pre and post activations. \n\nQuality \nThe proposed algorithm is well defined, and it is compared against relevant competing algorithms on relevant problems. \nThe results show that the algorithm is competitive with other approaches like the VAE (very slightly outperforming them).\n\nClarity\nThe paper is well written and clearly structured. The section 5.4 is a bit hard to understand, with very very small images. \n\nOriginality\nThere is an extensive literature on meta-learning, which is expanded upon in Appendix A. The main innovation in this work is the parametric update rule for outer loop updates, which does have some similarity to the old work by Bengio in 1990 and 1992. \n\nSignificance\n- Pros: clear and seemingly state-of-the-art results, intuitive approach. \n- Cons: only very modestly better than other methods. I would like to get a feel for why VAE is so good tbh (though the authors show that VAE has a problem with objective function mismatch).\n\nOne comment: the update rule takes as inputs pre and post activity and a backpropagated error; it seems natural to also use the local gradient of the neuron's transfer function here, as many three or four factor learning rules do. " ]
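The exchange above mentions that the least-squares meta-objective was chosen because the optimal final-layer weights have a closed form that is easy to differentiate through. Here is a minimal sketch of that idea; the ridge coefficient, shapes, and names are placeholders, not the paper's settings.

```python
import numpy as np

def least_squares_meta_loss(feats, labels, l2=1e-3):
    """Closed-form ridge readout on top of a learned representation.

    Solve for the optimal linear readout in closed form, then report its
    loss. Because the solve is differentiable w.r.t. `feats`, a meta-learner
    could in principle backpropagate through it into the representation.
    """
    X = np.concatenate([feats, np.ones((feats.shape[0], 1))], axis=1)  # bias column
    Y = np.eye(labels.max() + 1)[labels]                               # one-hot targets
    W = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)    # ridge solution
    return np.mean((X @ W - Y) ** 2)

feats = np.random.default_rng(0).normal(size=(20, 8))
labels = np.arange(20) % 4
print(least_squares_meta_loss(feats, labels))
```

In a full meta-training loop, the gradient of this loss with respect to `feats` would be what flows back into whatever produced the representation.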
[ 8, -1, -1, -1, 8, 8 ]
[ 3, -1, -1, -1, 4, 3 ]
[ "iclr_2019_HkNDsiC9KQ", "r1eIOmju3m", "SJeJvkj5hX", "Bkgckkbah7", "iclr_2019_HkNDsiC9KQ", "iclr_2019_HkNDsiC9KQ" ]
iclr_2019_HygBZnRctX
Transferring Knowledge across Learning Processes
In complex transfer learning scenarios, new tasks might not be tightly linked to previous tasks. Approaches that transfer information contained only in the final parameters of a source model will therefore struggle. Instead, transfer learning at a higher level of abstraction is needed. We propose Leap, a framework that achieves this by transferring knowledge across learning processes. We associate each task with a manifold on which the training process travels from initialization to final parameters and construct a meta-learning objective that minimizes the expected length of this path. Our framework leverages only information obtained during training and can be computed on the fly at negligible cost. We demonstrate that our framework outperforms competing methods, both in meta-learning and transfer learning, on a set of computer vision tasks. Finally, we demonstrate that Leap can transfer knowledge across learning processes in demanding reinforcement learning environments (Atari) that involve millions of gradient steps.
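As a concrete reading of "minimizes the expected length of this path": given parameter snapshots from one task's training run, the discrete energy of the path traced on the manifold $(\theta, f(\theta))$ can be sketched as below. The variable names are ours, not the authors' code, and the reviews below also discuss a length variant that sums square roots of the per-step terms instead.

```python
import numpy as np

def gradient_path_energy(thetas, losses):
    """Discrete energy of the path (theta_i, f(theta_i)), i = 0..K.

    `thetas` are flattened parameter snapshots along one task's training run,
    `losses` the matching loss values. Leap's meta-objective averages a
    quantity of this kind over tasks; this is a sketch, not the paper's code.
    """
    energy = 0.0
    for i in range(len(thetas) - 1):
        step = np.sum((thetas[i + 1] - thetas[i]) ** 2)   # parameter term
        step += (losses[i + 1] - losses[i]) ** 2          # loss term
        energy += step
    return energy

# Toy trajectory: three snapshots of a 2-parameter model.
thetas = [np.array([0.0, 0.0]), np.array([0.5, 0.1]), np.array([0.7, 0.3])]
losses = [2.0, 1.2, 0.9]
print(gradient_path_energy(thetas, losses))
```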
accepted-oral-papers
This paper proposes an approach for learning to transfer knowledge across multiple tasks. It develops a principled approach for an important problem in meta-learning (short horizon bias). Nearly all of the reviewer's concerns were addressed throughout the discussion phase. The main weakness is that the experimental settings are somewhat non-standard (i.e. the Omniglot protocol in the paper is not at all standard). I would encourage the authors to mention the discrepancies from more standard protocols in the paper, to inform the reader. The results are strong nonetheless, evaluating in settings where typical meta-learning algorithms would struggle. The reviewers and I all agree that the paper should be accepted, and I think it should be considered for an oral presentation.
val
[ "Hye_eUxDk4", "BkemR3cMkE", "rkezle-C0X", "HJgKnIJY27", "BkextEnBCQ", "Byxo0m2rRm", "Bkevhwv927", "Skgx-yRE0Q", "SyxtTCp4Am", "SJgQTudgAX", "BkeSD__xR7", "HJeAEd_eR7", "SylzzddgRm", "HyeuqwdeCm", "HkeqwPulCQ", "H1xWiy_q2Q" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Dear reviewer,\n \nThank you for taking the time to consider our rebuttal and revised manuscript.\n \nYou raise good points and we will address these in a final version of the paper; we have added a sentence following the stabilizer describing how it affects the meta gradient, and to answer your question about the norm in the Jacobian approximation, it is indeed the Schatten 1-norm.\n", "Dear reviewer,\n \nFollowing our rebuttal and discussion with R1 and R2, we hope that you find your main concerns addressed. Please let us know if there are any other questions we can answer.", "The manuscript has been improved substantially thus I updated my score.\n\n1. On page 6, there is still no explicit formula to show how \\mu (the stabilizer) is applied to the meta-gradient.\n\n2. In Appendix B, what is the ||_1 norm on the Jacobian? We need to be clear about matrix norms, because _1 can mean Schatten 1-norm, vector-induced 1-norm etc.\n\nMinor\n- As in Algo.1, the meta-gradient is applied to theta not psi, so it would make more sense for Thm.1 (and the proof in Appendix A) to use theta instead of psi (also to avoid potential confusion).\n- Correct me if I am wrong, for general p, in the the meta-gradient (Eq.8), the last term should have a single exponent (p-2) on the L_2 norm instead of p(p-2). Moreover, the coefficient before the expectation should be p instead of 2 (this does not affect the algorithm though since we have \\beta to control step size). \n- In Appendix B, the first equation, \\alpha^{i^2} is misleading, maybe use (\\alpha^i)^2\n- In Appendix A, right before \"with p = 2 defining...\", there is a \\psi^0_{s,+1} that should be \\psi^0_{s+1},", "This paper proposes Leap, a meta-learning procedure that finds better initialization for new tasks. Leap is based on past training/optimization trajectories and updates the initialization to minimize the total trajectory lengths. Experiments show that Leap outperforms popular alternatives like MAML and Reptile.\n\nPros\n- Novel idea\n- Relatively well-written\n- Sufficient experiment evidence\n\nCons\n- There exist several gaps between the theory and the algorithm\n\nI have several concerns.\n1. The idea is clearly delivered, but there are several practical treatments that are questionable. The first special treatment is that on page 5, when the objective is increased instead of decreased, the sign of the f part is flipped, which is not theoretically sound. It is basically saying that when we move from psi^i to psi^{i+1} with increased objective, we lie to the meta-learner that it is decreasing. The optimization trajectory is what it is. It would be beneficial to see the effect of removing this trick, at least in the experiments. Second, replacing the Jacobian with the identity matrix is also questionable. Suppose we use a very small but constant learning rate alpha for a convex problem. Then J^i=(I-G)^i goes to the zero matrix as i increases (G is small positive). However, instead, the paper uses J^i=I for all i. This means that the contributions for all i are the same, which is unsubstantiated.\n\n2. The proof of Thm1 in Appendix A is not complete. For example, \"By assumption, beta is sufficiently small to satisfy F\", which I do not understand the inequality. Is there a missing i superscript? Isn't this the exact inequality we are trying to prove for i=0? As another example, \"if the right-most term is positive in expectation, we are done\", how so? BTW, the right-most term is a vector so there must be something missing. 
It would be more understandable if the proof included a high-level roadmap and frequently reminded the reader where we are in the overall proof.\n\n3. The set \\Theta is not very well-defined, and sometimes misleading. Above Eq.(6), \\Theta is mathematically defined as the intersection of points whose final solutions are within a tolerance of the *global* optimum, which is in fact unknown. As a result, finding a good initialization in \\Theta for all the tasks as in Eq.(5) is not well-defined.\n\n4. About the experiments. What is the \"Finetuning\" in Table 1? Presumably it is multi-headed, but this should be made explicit. What is the standard deviation for Fig.4? The claim that \"Leap learns faster than a random initialization\" for Breakout is not convincing at all.\n\nMinor\n- In Eq.(4), f is a scalar so abs should suffice. This also applies to subsequent formulations.\n- \\mu is introduced above Eq.(8) but never used in the gradient formula.\n- On p6, there is a missing norm notation when introducing the Reptile algorithm.", "Dear reviewers, \n\nPlease note that we have made minor revisions since our initial rebuttal, summarized below:\n\n- Revision 2 added details to experiments, as requested by R3\n- Revision 3 fixed some typos, improved the explanation of the stabilizer, and addressed R1's comment w.r.t. the proof", "We are very impressed with your diligence and grateful for your input – your comments are very helpful in improving our manuscript! We are further grateful for your willingness to engage in the rebuttal and revise your review. Please see below for responses to your comments and questions.\n \n> Further baselines on Omniglot, miniImagenet would strengthen the paper\n \nWe respect your position, and given time and resources we would have been happy to oblige. We respectfully disagree with regard to miniImagenet being “less favourable” to Leap. Since Reptile outperforms MAML on miniImagenet, and Leap can be reduced to Reptile, Leap’s performance is “lower bounded” by Reptile. While we take your point that it would be interesting to see how much of a boost other configurations could provide in a few-shot setting, given the feedback we received, we chose to prioritize other parts of the paper as we felt that would add more value.\n \n> Pareto optimality on convex loss surfaces\n \nWe have found that giving people a visual crutch helps them to understand how Leap behaves. More generally, Leap converges to a locally Pareto optimal point. As a property though, we agree that it’s not particularly interesting, which is why we don’t emphasize it in the manuscript.\n \n> the stabilizer is still in the meta-gradient\n \nWe noticed this as well and have fixed it: we agree that it should not be part of equation 8.\n \n> confusing claim: “the stabilizer reduces emphasis on the gradient of f(\\theta)”\n \nApologies for the confusion, our choice of words was somewhat unfortunate. We have revised the manuscript to clarify this point. The weight placed on the task gradient is indeed larger, but as you note, \\mu guarantees we follow the descent direction. What we meant here is that \\mu reduces the weight placed on following that anomalous line segment, instead attempting to avoid that neighborhood in the updated gradient path.\n \n> I believe there might be a small mistake in the proof of Theorem 1. 
Nevertheless, even if this were the case, I think it would not affect the conclusion.\n \nThank you for pointing this out, we overloaded the definition of g (note the use of g(z), as opposed to g(^z) in the inner product). We have removed this overloading and explicitly define g(z), largely as you propose.\n \nThank you for a careful read, all typos fixed.\n", "\\documentclass[10pt]{article}\n\\usepackage{geometry}[1in]\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{enumerate}\n\\usepackage{indentfirst}\n\n\\begin{document}\n\t\n\t\\section*{SUMMARY}\n\t\n\tThe article proposes Leap, a novel meta-learning objective aimed at outperforming state-of-the-art approaches when dealing with collections of tasks that exhibit substantial between-task diversity.\n\t\n\tSimilarly to prior work such as MAML [1] or Reptile [2], the goal of Leap is to learn an initialization $\\theta_{0}$ for the model parameters, shared across tasks, which leads to good and data-efficient generalization performance when fine-tuning the model on a set of held-out tasks. In a nutshell, what sets Leap apart from MAML or Reptile is its cost function, which explicitly accounts for the entire path traversed by the model parameters during task-specific fine-tuning -- i.e., ``inner loop'' optimization --, rather than mainly focusing on the final value attained by the model parameters after fine-tuning. More precisely, Leap looks for an initialization $\\theta_{0}$ of the model parameters such that the energy of the path traversed by $\\gamma_{\\tau}(\\theta) = (\\theta, f_{\\tau}(\\theta))$ while fine-tuning $\\theta$ to optimize the loss $f_{\\tau}(\\theta)$ of a task $\\tau$ is minimized, on average, across $\\tau \\sim p(\\tau)$. Thus, it could be argued that Leap extends Reptile, which can be informally understood as seeking an initialization $\\theta_{0}$ that minimizes the average squared Euclidean distance between $\\theta_{0}$ and the model parameters after fine-tuning on each task $\\tau \\sim p(\\tau)$ [2, Section 5.2], by using a distance function between initial and final model parameters that accounts for the geometry of the loss surface of each task during optimization. \n\t\n\tThe final algorithm introduced in the paper considers however a variant of the aforementioned cost function, motivated by its authors on the basis of stabilising learning and eliminating the need for Hessian-vector products. The resulting approach is then evaluated on image recognition tasks (Omniglot plus a set of six additional computer vision datasets) as well as reinforcement learning tasks (Atari games).\n\t\n\t\\section*{HIGH-LEVEL ASSESSMENT}\n\t\n\tThe article proposes an interesting extension of existing work in meta-learning. In a slightly different context (meta-optimization), recent work [3] pointed out the existence of a ``short-horizon bias'' which could arise when using meta-learning objectives that apply only a small number of updates during ``inner-loop'' optimization. This observation is well-aligned with the motivation of this article, in which the authors attempt to complement successful methods like MAML or Reptile to perform well also in situations where a large number of gradient descent-based updates are applied during task-specific fine-tuning. 
Consequently, I believe the article is timely and relevant.\n\t\n\tUnfortunately, I have some concerns with the current version of the manuscript regarding (i) the proposed approach and the way it is motivated, (ii) the underlying theoretical results and, perhaps most importantly, (iii) the experimental evaluation. In my opinion, these should ideally be tackled prior to publication. Nonetheless, I believe that the proposed approach is promising and that these concerns can be either addressed or clarified. Thus I look forward to the rebuttal.\n\t\n\t\\section*{MAJOR POINTS}\n\t\n\t\\subsection*{1. Issues regarding proposed approach and its motivation/derivation}\n\t\n\t\\textbf{1.a} Section 2.1 argues in favour of studying the path traversed by $\\gamma_{\\tau}(\\theta) = (\\theta, f_{\\tau}(\\theta))$ rather than the path traversed by the model parameters $\\theta$ alone. However, this could in turn exacerbate the difficulty in dealing with collections of tasks for which the loss functions have highly diverse scales. For instance, taking the situation to the extreme, one could define an equivalence class of tasks $[\\tau] = \\left\\{\\tau \\mid f_{\\tau}(\\theta) = g(\\theta) + \\mathrm{constant} \\right\\}$ such that any two tasks $\\tau_{1}, \\tau_{2} \\in [\\tau]$ would essentially represent the same underlying task, but could lead to arbitrarily different values of the Leap cost function. \n\t\n\tGiven that Leap is a model-agnostic approach, like MAML or Reptile, and thus could be potentially applied in many different settings and domains, I believe the authors should study and discuss (theoretically or experimentally) the robustness of Leap with respect to between-task variation in the scale of the loss functions and, in case the method is indeed sensitive to those, propose an effective scheme to normalize them.\n\t\n\t\\textbf{1.b} The current version of the manuscript motivates defining the cost function in terms of $\\gamma_{\\tau}(\\theta) = (\\theta, f_{\\tau}(\\theta))$ rather than the model parameters $\\theta$ alone in order to ``avoid information loss'', making it seem that this modification is ``optional'' or, at least, not critical. Nevertheless, taking a closer look at the Leap objective and the meta-updates it induces, I believe it might actually be essential for the correctness of the approach. I elaborate this view in what follows. Let us write the Leap objective for a task $\\tau$ as\n\t\\[\n\tF_{\\tau}(\\theta_{0},\\widetilde{\\theta}_{0}) = \\underbrace{\\sum_{i=0}^{K_{\\tau} - 1}{\\left\\vert\\left\\vert u^{(i+1)}_{\\tau}(\\widetilde{\\theta}_{0}) - u^{(i)}_{\\tau}(\\theta_{0}) \\right\\vert\\right\\vert^{2}}}_{C_{\\tau, 1}(\\theta_{0},\\widetilde{\\theta}_{0})} + \\underbrace{\\sum_{i=0}^{K_{\\tau} - 1}{\\left( f_{\\tau}\\left(u^{(i+1)}_{\\tau}(\\widetilde{\\theta}_{0})\\right) - f_{\\tau}\\left(u^{(i)}_{\\tau}(\\theta_{0})\\right) \\right)^{2}}}_{C_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0})},\n\t\\]\n\twhere $\\widetilde{\\theta}_{0}$ denotes a ``frozen'' or ``detached'' copy of $\\theta_{0}$ and $u^{(i)}_{\\tau}$ maps $\\theta_{0}$ to $\\theta_{i}$, the model parameters after applying $i$ gradient descent updates to $f_{\\tau}$ according to Equation (1) in the manuscript. 
Then, differentiating $C_{\\tau, 1}(\\theta_{0},\\widetilde{\\theta}_{0})$ and $C_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0})$ with respect to $\\theta_{0}$ separately yields:\n\t\\begin{align*}\n\t\\nabla_{\\theta_{0}} C_{\\tau, 1}(\\theta_{0},\\widetilde{\\theta}_{0}) &= -2 \\sum_{i=0}^{K_{\\tau} - 1}{J_{i}^{T}\\left(\\theta_{i+1} - \\theta_{i} \\right)} = -2 \\alpha \\sum_{i=0}^{K_{\\tau} - 1}{J_{i}^{T} g_{i}} \\\\\n\t\\nabla_{\\theta_{0}} C_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0}) &= -2 \\sum_{i=0}^{K_{\\tau} - 1}{\\left(f_{\\tau}(\\theta_{i+1}) - f_{\\tau}(\\theta_{i})\\right) J_{i}^{T}g_{i}} = -2 \\sum_{i=0}^{K_{\\tau} - 1}{\\Delta f^{i}_{\\tau} J_{i}^{T}g_{i}}\n\t\\end{align*}\n\twhere $J_{i} = J_{\\theta_{0}}u^{(i)}_{\\tau}(\\theta_{0})$ denotes the Jacobian of $u^{(i)}_{\\tau}$ with respect to $\\theta_{0}$, $g_{i} = \\left. \\nabla_{\\theta} f_{\\tau}(\\theta)\\right\\rvert_{\\theta=\\theta_{i}}$ denotes the gradient of the loss function $f_{\\tau}$ evaluated at $\\theta_{i}$ and $\\Delta f^{i}_{\\tau} = f_{\\tau}(\\theta_{i+1}) - f_{\\tau}(\\theta_{i})$ stands for the change in the loss function after the $i$-th update. To simplify the exposition, a constant ``inner-loop'' learning rate and no preconditioning were assumed, i.e., $\\alpha_{i} = \\alpha$ and $S_{i} = I$.\n\t\n\tFurthermore, the article claims that all Jacobian terms are approximated by identity matrices (i.e., $J_{i} = I$) as suggested in Section 5.2 of [1], leading to the following approximations:\n\t\\begin{align*}\n\t\t\\nabla_{\\theta_{0}} C_{\\tau, 1}(\\theta_{0},\\widetilde{\\theta}_{0}) \\approx -2 \\alpha \\sum_{i=0}^{K_{\\tau} - 1}{ g_{i}} \\\\\n\t\t\\nabla_{\\theta_{0}} C_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0}) \\approx -2 \\sum_{i=0}^{K_{\\tau} - 1}{\\Delta f^{i}_{\\tau} g_{i}}\n\t\\end{align*}\n\t\n\tInterestingly, it can be seen that the contribution to the meta-update of the energy of the path traversed by the model parameters $\\theta$, $g_{\\mathrm{Leap},1} =\\nabla_{\\theta_{0}} C_{\\tau, 1}(\\theta_{0},\\widetilde{\\theta}_{0})$, actually points in exactly the opposite direction than the meta-update of Reptile, given by $g_{\\mathrm{Reptile}} = \\sum_{i=0}^{K_{\\tau} - 1}{g_{i}}$ (e.g. Equation (27) in [2]). In summary, if the Leap objective was defined in terms of $\\theta$ rather than $(\\theta, f_{\\tau}(\\theta))$, minimising the Leap cost function should maximise Reptile's cost function and viceversa. It is only the term $g_{\\mathrm{Leap},2} =\\nabla_{\\theta_{0}} C_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0})$ that presumably ``re-aligns'' $g_{\\mathrm{Reptile}}$ and $g_{\\mathrm{Leap}} = g_{\\mathrm{Leap},1} + g_{\\mathrm{Leap},2}$. Indeed, \n\t\\[\n\tg_{\\mathrm{Leap}} = 2 \\sum_{i=0}^{K_{\\tau} - 1}{\\left(-\\Delta f^{i}_{\\tau} - \\alpha \\right) g_{i}}\n\t\\]\n\twill have positive inner product with $g_{\\mathrm{Reptile}}$ if each gradient update yields a sufficient decrease in the loss $f_{\\tau}$, that is, $\\Delta f^{i}_{\\tau} < -\\alpha$.\n\t\n\tMoreover, I also wonder if this is the reason why the authors introduce the ``regularization'' term $\\mu_{\\tau}^{i}$, which as it currently stands in the manuscript, does not seem to relate in a particularly intuitive manner to the original objective of minimising the energy of $\\gamma(t)$. 
By introducing $\\mu_{\\tau}^{i}$, the term $C_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0})$ becomes\n\t\\[\n\t\tC^{\\prime}_{\\tau, 2}(\\theta_{0},\\widetilde{\\theta}_{0}) = \\sum_{i=0}^{K_{\\tau} - 1}{-\\mathrm{sign} \\left( f_{\\tau}\\left(u^{(i+1)}_{\\tau}(\\widetilde{\\theta}_{0})\\right) - f_{\\tau}\\left(u^{(i)}_{\\tau}(\\theta_{0})\\right) \\right) \\left( f_{\\tau}\\left(u^{(i+1)}_{\\tau}(\\widetilde{\\theta}_{0})\\right) - f_{\\tau}\\left(u^{(i)}_{\\tau}(\\theta_{0})\\right) \\right)^{2}},\n\t\\]\n\tleading to $g^{\\prime}_{\\mathrm{Leap},2} = 2 \\sum_{i=0}^{K_{\\tau} - 1}{\\vert \\Delta f^{i}_{\\tau} \\vert g_{i}}$ and \n\t\\[\n\tg^{\\prime}_{\\mathrm{Leap}} = 2 \\sum_{i=0}^{K_{\\tau} - 1}{\\left(\\vert \\Delta f^{i}_{\\tau} \\vert - \\alpha \\right) g_{i}}.\n\t\\]\n\tIn turn, this relaxes the sufficient condition under which Leap and Reptile lead to meta-updates with positive inner product, namely, it changes the condition $\\Delta f^{i}_{\\tau} < -\\alpha$ by a less restrictive counterpart $\\vert \\Delta f^{i}_{\\tau} \\vert \\ge \\alpha$.\n\t\n\tIf these derivations happen to be correct, then I believe the way Leap is currently motivated in the article could be argued to be slightly misleading. What seems to be its main inspiration, accounting for the path that the model parameters traverse during fine-tuning, does not seem to be what drives the meta-updates towards the ``correct'' direction. Instead, the component of the objective due to the path traversed by the loss function values appears to be more important or, at least, not optional. Furthermore, I believe the regularization term $\\mu_{\\tau}^{i}$ should be better motivated, as the current version of the manuscript does not seem to justify its need clearly enough.\n\t\n\tFinally, under the assumption that the above is not mistaken, I wonder whether further tweaks to the meta-update, such as $g^{\\prime\\prime}_{\\mathrm{Leap}} = 2 \\sum_{i=0}^{K_{\\tau} - 1}{\\mathrm{max}\\left(\\vert \\Delta f^{i}_{\\tau} \\vert - \\alpha, 0 \\right) g_{i}}$, could perhaps turn out to be helpful as well.\n\n\t\\subsection*{2. Theoretical results}\n\t\n\t\\textbf{2.a} Theorem 1 currently claims that the Pull-Forward algorithm converges to a local minimum of Equation (5). However, due to the non-convexity of the objective function, only convergence to a stationary point is established.\n\t\n\t\\textbf{2.b} Most importantly, I am not entirely certain that the proof of Theorem 1 is complete in its current form. As I understand it, using the notation introduced by the authors in Appendix A, the following identities hold:\n\t\\begin{align*}\n\t\tF(\\psi_{s};\\Psi_{s}) &= \\mathbb{E}_{\\tau,i} \\vert\\vert h_{\\tau}^{i} - z_{\\tau}^{i} \\vert\\vert^{2} \\\\\n\t\tF(\\psi_{s+1};\\Psi_{s}) &= \\mathbb{E}_{\\tau,i} \\vert\\vert h_{\\tau}^{i} - x_{\\tau}^{i} \\vert\\vert^{2} \\\\\n\t\tF(\\psi_{s};\\Psi_{s+1}) &= \\mathbb{E}_{\\tau,i} \\vert\\vert y_{\\tau}^{i} - z_{\\tau}^{i} \\vert\\vert^{2} \\\\\n\t\tF(\\psi_{s+1};\\Psi_{s+1}) &= \\mathbb{E}_{\\tau,i} \\vert\\vert y_{\\tau}^{i} - x_{\\tau}^{i} \\vert\\vert^{2}.\n\t\\end{align*}\n\t\n\tThe bulk of the proof is then devoted to show that $\\mathbb{E}_{\\tau,i} \\vert\\vert y_{\\tau}^{i} - z_{\\tau}^{i} \\vert\\vert^{2} = F(\\psi_{s};\\Psi_{s+1}) \\ge \\mathbb{E}_{\\tau,i} \\vert\\vert y_{\\tau}^{i} - x_{\\tau}^{i} \\vert\\vert^{2} = F(\\psi_{s+1};\\Psi_{s+1})$. 
However, I do not immediately see how to make the final ``leap'' from $F(\\psi_{s+1};\\Psi_{s+1}) \\le F(\\psi_{s};\\Psi_{s+1})$ to the actual claim of the Theorem, $F(\\psi_{s+1};\\Psi_{s+1}) \\le F(\\psi_{s};\\Psi_{s})$.\n\t\n\t\\subsection*{3. Experimental evaluation}\n\t\n\t\\textbf{3.a} The experimental setup of Section 4.1 closely resembles experiments described in articles that introduced continual learning approaches, such as [4]. However, rather than including [4] as a baseline, the current manuscript compares against meta-learning approaches typically used for few-shot learning, such as MAML and Reptile. Consequently, I would argue the combination of experimental setup and selection of baselines is not entirely fair or, at least, it is incomplete.\n\t\n\tTo this end, I would suggest (i) including [4] (or a related continual learning approach) as an additional baseline in the experiments currently described in Section 4.1, as well as (ii) performing a new experiment to compare the performance of Leap to that of MAML and Reptile in few-shot classification tasks using OmniGlot and/or Mini-ImageNet as datasets.\n\t\n\t\\textbf{3.b} The Multi-CV experiment described in Section 4.2 currently does not have strong baselines other than Leap. If possible, I would suggest including [5] in the comparison, as it is the article which inspired this particular experiment.\n\t\n\t\\textbf{3.c} Likewise, the same holds for the experiment described in Section 4.3. In this case, I would suggest comparing to [4] for the same reason described above.\n\t\n\t\\section*{MINOR POINTS}\n\t\n\t\\begin{enumerate}\n\t\n\t\\item In Section 2.1, it is claimed that \"gradients that largely point in the same direction indicate a convex loss surface, whereas gradients with frequently opposing directions indicate an ill-conditioned loss landscape\". Nevertheless, convex loss surfaces can in principle be ill-conditioned as well.\n\t\n\t\\item Introducing a mathematical definition for the metric \"area under the training curve\" could make the experiment in Section 4.1 more self-contained.\n\t\n\t\\item Several references are outdated, as they cite preprints that have since been accepted at peer-reviewed venues.\n\t\n\t\\item The reinforcement learning experiments in Section 4.3 would benefit from additional runs with multiple seeds, and the subsequent inclusion of confidence intervals.\n\t\n\t\\item I believe certain additional experiments could be insightful. For example, (i) studying how sensitive the performance of Leap is to parameters of the ``inner-loop'' optimizer (e.g.
choice of \n\toptimizer, learning rate, batch size) or (ii) describing how the introduction of $\\mu_{\\tau}^{i}$ affects the performance of Leap.\n\t\n\t\\end{enumerate}\n\t\n\t\\section*{TYPOS}\n\t\n\t\\begin{enumerate}\n\t\n\t\\item The first sentence entirely in page 6 appears to have a superfluous word.\n\t\n\t\\item The Taylor series expansion in the proof of Theorem 1 is missing the $O(\\bullet)$ terms (or a $\\approx$ sign).\n\t\n\t\\item Also in the proof of Theorem 1, if $c_{\\tau}^{i} = (\\delta_{\\tau}^{i})^{2} - \\alpha_{\\tau}^{i}\\xi_{\\tau}^{i}\\delta_{\\tau}^{i}$, wouldn't $\\omega = \\underset{\\tau, i}{\\mathrm{sup}} \\langle \\hat{x}^{i}_{\\tau} - \\hat{z}^{i}_{\\tau}, g(\\hat{x}^{i}_{\\tau}) - g(\\hat{z}^{i}_{\\tau})\\rangle + \\xi_{\\tau}^{i}\\delta_{\\tau}^{i}$ instead?\n\t\n\t\\end{enumerate}\n\n \\section*{ANSWER TO REBUTTAL}\n Please see comments in the thread.\n\n\t\n\t\\section*{REFERENCES}\n\t\n\t\\begin{enumerate}[ {[}1{]} ]\n\t\t\\item Finn et al. ``Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.'' International Conference on Machine Learning. 2017.\n\t\t\\item Nichol et al. ``On First-Order Meta-Learning Algorithms.'' arXiv preprint. 2018\n\t\t\\item Wu et al. ``Understanding Short-Horizon Bias in Stochastic Meta-Optimization.'' International Conference on Learning Representations. 2018.\n\t\t\\item Schwarz et al. ``Progress \\& Compress: A scalable framework for continual learning.'' International Conference on Machine Learning. 2018.\n\t\t\\item Serr{\\`a} et al. ``Overcoming Catastrophic Forgetting with Hard Attention to the Task.'' International Conference on Machine Learning. 2018.\n\t\\end{enumerate}\t\n\\end{document}", "# MINOR POINTS ABOUT THE REVISED MANUSCRIPT\n\n1. The claim that \"if both tasks have convex loss surfaces there is a unique optimal initialization that achieves Pareto optimality in terms o total path distance\", while true, might not be so helpful, since initialization should in theory be irrelevant for convex losses.\n\n2. Currently, Eq. 8 is derived assuming the stabilizer $\\mu$ is included in the loss. However, the stabilizer is only introduced afterwards. This might be confusing for some readers if they attempt to derive Eq. 8 themselves when they first encounter it, prior to finishing reading page 6 entirely.\n\n3. While I think the role of the stabilizer heuristic is much more clearly explained now, there is a claim that still confuses me slightly. In the last paragraph of page 6, it is said that \"The stabilizer ... reduces the weight placed on the gradient of $f_{\\tau}(\\theta_{\\tau}^{i})$\". However, under the simplifying assumption $S_{\\tau}^{i} = I$, one would have $g_{i} := \\nabla f_{\\tau}(\\theta_{\\tau}^{i})$ and $\\Delta \\theta_{\\tau}^{i} = -\\alpha_{\\tau}^{i} g_{i}$. Then, without stabilizer, the \"weight\" of $g_{i}$ would be $\\alpha_{\\tau}^{i} - \\Delta f_{\\tau}^{i}$ while with stabilizer, the \"weight\" of $g_{i}$ would then be $\\alpha_{\\tau}^{i} + \\vert \\Delta f_{\\tau}^{i} \\vert$, which in principle could be larger (in magnitude) than the weight without stabilizer. Nonetheless, if this is not mistaken, it would be clear that the stabilizer ensures $g_{i}$ is never effectively followed in the ascent direction in rare cases when $\\Delta f_{\\tau}^{i}$ is large and positive.\n\n4. I believe there might be a small mistake in the proof of Theorem 1. 
Nevertheless, even if this were the case, I think it would not affect the conclusion.\n\nIn the middle of page 15, a derivation implies that $\\langle h_{\\tau}^{i} - z_{\\tau}^{i}, z_{\\tau}^{i} - x_{\\tau}^{i} \\rangle = -\\alpha_{\\tau}^{i} \\langle g(z_{\\tau}^{i}), z_{\\tau}^{i} - x_{\\tau}^{i}\\rangle$. However, I believe this ignores the contribution of the extra dimension corresponding to the loss function values. That is, $\\langle h_{\\tau}^{i} - z_{\\tau}^{i}, z_{\\tau}^{i} - x_{\\tau}^{i} \\rangle = \\langle \\hat{h}_{\\tau}^{i} - \\hat{z}_{\\tau}^{i}, \\hat{z}_{\\tau}^{i} - \\hat{x}_{\\tau}^{i} \\rangle + \\left(f_{\\tau}(\\hat{h}_{\\tau}^{i}) - f_{\\tau}(\\hat{z}_{\\tau}^{i})\\right)\\left(f_{\\tau}(\\hat{z}_{\\tau}^{i}) - f_{\\tau}(\\hat{x}_{\\tau}^{i})\\right)$. Nevertheless, I think that using $\\langle \\hat{h}_{\\tau}^{i} - \\hat{z}_{\\tau}^{i}, \\hat{z}_{\\tau}^{i} - \\hat{x}_{\\tau}^{i} \\rangle = -\\alpha_{\\tau}^{i} \\langle g(\\hat{z}_{\\tau}^{i}), \\hat{z}_{\\tau}^{i} - \\hat{x}_{\\tau}^{i}\\rangle$ and $\\left(f_{\\tau}(\\hat{h}_{\\tau}^{i}) - f_{\\tau}(\\hat{z}_{\\tau}^{i})\\right)\\left(f_{\\tau}(\\hat{z}_{\\tau}^{i}) - f_{\\tau}(\\hat{x}_{\\tau}^{i})\\right) = \\left(-\\alpha_{\\tau}^{i} {\\nabla f_{\\tau}^{i}(\\hat{z}_{\\tau}^{i})}^{T} g(\\hat{z}_{\\tau}^{i}) + O(\\alpha_{\\tau}^{i})\\right) \\left(f_{\\tau}(\\hat{z}_{\\tau}^{i}) - f_{\\tau}(\\hat{x}_{\\tau}^{i})\\right)$ should still allow bounding $\\alpha_{\\tau}^{i}$ from above to ensure the objective function decreases.\n\nI believe a similar issue (the contribution of the extra dimension not being explicitly shown) might also have occurred in the last step of the proof, when bounding $\\vert\\vert h_{\\tau}^{i} - z_{\\tau}^{i} \\vert\\vert^{p}$ from above by ${\\alpha_{\\tau}^{i}}^{p} \\vert\\vert g(\\hat{z}_{\\tau}^{i}) \\vert\\vert^{p}$. But as with the previous case, I don't think this would affect the actual argument being made.\n\n\n# TYPOS\n\nPage 2:\n\n\"... our framework can be _extend_ to learn ...\"\n\"... initialization. _Differences_ schemes represent ...\"\n\nPage 4:\n\n\"... Leap converges on _an_ locally Pareto optimal ...\"\n\"... and _progress_ via ...\"\n\nPage 5:\n\n\"... and _construct_ baseline gradient ... \"\n\nPages 8 and 9:\n\nlength metric ($d_{2}$) and energy metric ($d_{1}$) -> length metric ($d_{1}$) and energy metric ($d_{2}$)\n\nPage 9:\n\n\"27 games that _has_ an action space ...\"\n\nPages 9 and 19 (Tables 1 and 3):\n\nNo pre-training AUC for the Facescrub task in bold, but the value for PNs is smaller.\n\nPage 18:\n\n\"... (until _convergenve_) ...\"", "# HIGH-LEVEL ASSESSMENT (UPDATED)\n\nAfter reading the author rebuttal and going through the revised manuscript, I believe the authors have successfully addressed the vast majority of concerns I had about the original version of the paper. \n\nBased on the current version of the article, I lean strongly towards acceptance and have modified my score accordingly.\n\n# STATE OF PREVIOUSLY RAISED MAJOR POINTS\n\n1. In my original review, I raised issues regarding the way LEAP was motivated and derived; an opinion also voiced by Reviewer 2. \n\nI believe Section 2 of the revised manuscript has greatly improved in terms of clarity while simultaneously being more general.\n\nI apologise for the mistaken sign in $\\Delta \\theta_{\\tau}^{i}$ in the subsequent analysis. In hindsight, I should have definitely caught that error based on the very unintuitive conclusions that ensue!. 
The fact that LEAP reduces to Reptile when minimising the expected energy of the \"non-augmented\" gradient flow makes perfect sense and helps understand what LEAP's \"place\" is alongside MAML and Reptile. \n\nThe authors have also extended LEAP to minimise either the length or the energy of the gradient path, rather than minimising only the energy. This possibility was loosely mentioned in the original manuscript, but not implemented. As pointed out in their rebuttal, minimising the length of the gradient path instead of the energy implicitly \"normalises\" the magnitude of the gradient w.r.t. the initialisation $\\theta_{0}$ across tasks (Eq. 8), which might make LEAP more robust against heterogeneity in the scale of task losses.\n\nThe new ablation studies included in Sections B and C of the Appendix are also a great addition to study/justify empirically some of the more heuristic aspects of the paper.\n\n2. The original review also raised some concerns regarding Theorem 1 and its proof; a point also raised by Reviewer 2.\n\nThe statement of Theorem 1 and, most importantly, its proof, have been almost entirely rewritten. To the best of my knowledge, I believe the revised version is correct (potential minor inconsequential caveats described below), and is now much clearer and easier to follow.\n\n3. Besides carrying out the new ablation studies, the authors have introduced two additional baselines in Section 4.2 and now report aggregated results for 10 different seeds in Section 4.3.\n\nI still believe that having included additional baselines also in Sections 4.1 and 4.3, as well as evaluating LEAP in a \"less favourable\" few-shot learning scenario, could have further strengthened the paper. Nevertheless, given the time (and possibly compute) constraints, the revised manuscript also improved considerably in terms of experimental results and, most importantly, already provides sufficient evidence that LEAP can outperform existing approaches when tasks are sufficiently diverse.", "[This is a top-level reply with only a summary of our changes; please see our answers in the individual reviewer threads for details]\n\nDear Reviewers, thank you for thorough and thoughtful feedback and for being overall positive about our work. We have worked through our manuscript and have made several additions (including new experiments) that clarify the link between the theory and the algorithm, provide further insight into both, and significantly strengthen our experimental results. We hope these additions address any questions raised and resolve any concerns you may have. In particular, we have:\n\n- Expanded section 2 to provide further insights into the framework and our proposed solution algorithm.\n\n- Generalized Leap to allow for the use of either the energy or length metric as a measure of gradient path distance.\n\n- Re-organized the proof of theorem 1 to address concerns about completeness and clarity.\n\n- Added an ablation study with respect to (a) the inclusion of the loss in the task manifold, (b) the use of the energy or length metric, and (c) the use of a regularizer/stabilizer. In short, the more sophisticated the meta objective, the better Leap performs. The length metric converges faster, but final performance is largely equivalent. Adding the loss to the task manifold improves performance, while the stabilizer speeds up convergence. \n\n- Added an ablation study with respect to the Jacobian approximation, as a function of the learning rate. 
We find that we can use relatively large learning rates without significant deterioration of the approximation. \n\n- Added HAT and Progressive Nets as baselines on Multi-CV. Neither of them outperforms Leap.\n\n- Report confidence intervals on Atari games. We find that Leap does better than a random initialization by more consistently exploring useful parts of parameter space.\n\nPlease see our answers to each individual reviewer below for specific comments.\n", "We thank you for your review. We understand your sentiment and hope that our revised paper will alleviate any concerns you may have. More specifically,\n\n> 1) The details of the experiments such as parameter configurations are missing\n\nThank you for pointing out that further experimental details are needed. We will add further details this week to ensure our results are fully replicable. \n\n> 2) Include more state-of-the-art transfer learning methods\n\nWe have added results for Progressive Nets (Rusu et al., 2017), which is a rather demanding baseline as it has more than 8 times as many parameters as Leap, and HAT (Serra et al., 2018), whose paper inspired our setup. We find that they do not change any of our conclusions.\n\n> 3) use some commonly used datasets\n\nWe would like to point out that all datasets used in our paper are common in transfer learning work of various kinds; the point we are making here is that Leap is a general-purpose framework that can tackle any of them. In particular, Omniglot is frequently used in few-shot learning (Vinyals et al., 2016, Snell et al., 2017, Finn et al., 2017, Nichol et al., 2017), while all datasets in the Multi-CV experiment are common in various forms of transfer learning (Serra et al., 2018, Zenke et al., 2018, Zagoruyko et al., 2017 (https://arxiv.org/abs/1612.03928)). Similarly, Atari is a notoriously difficult transfer learning problem (Schwarz et al., 2018, Rusu et al., 2017).\n\nWe appreciate the sentiment, and in an ideal world we would be happy to add further datasets and baselines to our experiments. However, given time and resource constraints, running multiple large-scale experiments is not feasible. In this paper, we chose Atari as our large-scale experiment. Please also note that we have made further additions to our experimental section (as per our top-level reply) as requested by other reviewers.\n", "> 3) \\Theta is not very well-defined\n\nWe fully understand your sentiment; we think it is caused by a misunderstanding stemming from the way we describe this constraint. We have updated the paper (section 2.2) to make the following explanations clearer. Intuitively, the purpose of \\Theta is to provide an upper bound on what we, as modellers, consider good performance. Mathematically, we characterize this as some \\epsilon bound on the global optimum. However, the only relevant bound is the level of performance we could achieve through our second-best option, i.e. starting from a random initialization or from fine-tuning. This level of performance is what \\Theta is about. As such, the global minimum is redundant in the definition, and we have revised the definition of \\Theta to avoid it, instead emphasizing that \\Theta is defined by the performance we could otherwise achieve.\n\n> 4) Experiments\n\nThank you for these comments. We were aware of the need for multiple seeds for the RL experiments and have updated our results with averages over 10 seeds. 
Notably, we find that Leap outperforms a random initialization because it more consistently finds good exploration spaces. \n\nPlease also note that we have made further additions to our experimental section (as per our top-level reply) as requested by other reviewers.", "We are grateful for your insightful comments and glad that you like many aspects of the paper. We understand that your concerns are related to some theoretical parts, so we hope that our clarifications below, extra experiments and appropriate amendments to the paper will resolve your concerns fully. \n\n> 1a) the sign of the f part is flipped which is not sound\n\nWe believe that this concern comes from the fact that we omitted clarifying the role of this term, which is only a practical regularizer. Hence we apologize and sympathize with your comment. To resolve your concern, let us first say that the regularizer is not essential but, rather, only an optional stabilizer that practically allows for the use of larger step sizes. We have added an ablation study (appendix C) where we show that the regularizer yields faster convergence in terms of meta gradient steps; however, the final performance is largely equivalent.\n\nOur revised manuscript provides a more thorough motivation (section 2.3) that we hope you agree with: in short, the motivation for the stabilizer is that in stochastic gradient descent, the gradient path can be rather volatile. As you say, the gradient path is what it is. As long as it converges, so will Leap (with or without the regularizer). But if we could reduce the noise inherent in SGD, Leap could converge faster, and the stabilizer is a heuristic to do that. Other heuristics can certainly be used, or none at all.\n\n\n> 1b) Second, replacing the Jacobian with the identity matrix is also questionable\n\nWe have added a new ablation study (appendix B) which shows that the approximation is quite tight, even for relatively large learning rates. With our best-performing (inner loop) learning rate, we find the approximation to be accurate to the fourth decimal. We hope that you will find this study satisfying, although we also hope that you appreciate that the question of which Jacobian approximation is better to use is out of the scope of our paper and does not affect the main point of our work. \n\nMore generally, any meta-learner that optimizes over the inner training process must approximate the Jacobian in order to scale, and the identity assumption is a frequently used approach that works well in practice. The purpose of this paper is to present a new way of framing meta-learning such that it can scale, leveraging existing approaches to the approximations we must make. Our approach relies on prior work by Finn et al. (2017), who found that the assumption works well, and Nichol et al. (2017), who found a similar empirical result and further showed formally that detaching the Jacobians still optimizes the original objective (approximately).\n\nImportantly, we can control the precision of this approximation through the learning rate and the number of gradient steps: for any given number of gradient steps (yielding an upper bound on i), we can choose \\alpha so as to ensure the approximation is sufficiently accurate to allow meta learning. 
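As a rough sketch of why the learning rate controls the quality of this approximation (schematic notation, assuming plain SGD and bounded Hessians; this is a paraphrase rather than the paper's exact statement):

```latex
% Each inner SGD step contributes a Jacobian close to the identity when alpha is small.
\begin{align*}
\theta_{i+1} = \theta_i - \alpha \nabla L(\theta_i)
  \quad&\Rightarrow\quad
  \frac{\partial \theta_{i+1}}{\partial \theta_i} = I - \alpha \nabla^2 L(\theta_i), \\
\frac{\partial \theta_K}{\partial \theta_0}
  = \prod_{i=0}^{K-1} \left( I - \alpha \nabla^2 L(\theta_i) \right)
  \quad&=\quad I + \mathcal{O}\!\left( \alpha K H_{\max} \right),
  \qquad H_{\max} := \max_i \left\lVert \nabla^2 L(\theta_i) \right\rVert .
\end{align*}
```

For a fixed number of inner steps K, shrinking \\alpha therefore drives the product of per-step Jacobians toward the identity. 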
Our ablation study (appendix B) shows that the resulting restriction on \\alpha is not severe.\n\nIn summary, we believe that our (admittedly sub-optimal) treatment of the Jacobian is well motivated and in line with existing methods; we do agree that it can be improved, but this is out of our paper's scope. \n\n\n> 2) The proof of Theorem 1 is not complete\n\nWe sympathize with your concern and agree that the presentation of the proof can be made clearer. We have taken your suggestions into account and re-organized the proof to directly establish the desired result, d(\\psi^0_{s+1}) < d(\\psi^0_s), and have included several commentaries to ensure each step of the proof is clearly linked to the overall objective. \nAs for your specific questions:\n\\beta_s is assumed to be sufficiently small to allow for gradient descent on the current baseline. The proof needs to establish that a new baseline generated from the updated initialization has a shorter gradient path length.\nThere was indeed a typo in the theorem, leading to a missed term (hence the vector).", "> 2a) Theorem 1 only asserts convergence to a stationary point\n \nCorrect, apologies for our imprecision: we have updated the paper to reflect that Leap converges to a limit point in \\Theta. Our point was that gradient descent on the pull-forward objective is equivalent to gradient descent on the original objective, which we now state explicitly in section 2.3.\n \n> 2b) The proof of Theorem 1 may be incomplete\n \nThe final \"leap\" is implicit and unfortunately not clearly explained. We have substantially re-organized the proof to prove the desired inequality, d(\\psi^0_{s+1}) < d(\\psi^0_s), directly. We have also added commentaries to more clearly explain each step of the proof, to avoid any confusion as to what is being established.\n \n> 3) Unfair comparisons / lack of baselines\n\nTo address concerns about lacking strong baselines, we have added as baselines two related methods that do not regularize with respect to previous tasks, HAT (Serra et al., 2018) and Progressive Nets (Rusu et al., 2017), to the Multi-CV experiment. We found that neither HAT nor Progressive Nets matches Leap's performance. \n\nWe hope that you appreciate that the continual learning problem is very different from the type of multitask learning we are considering here. The point we are trying to make in our paper is that MAML, Reptile, and similar methods cannot scale to problems that require more than a handful of gradient steps, while Leap can. As such, we believe that treating Omniglot, a standard few-shot learning problem where meta learning does well, as a multi-shot learning problem is highly relevant. We are not arguing that Leap is superior at few-shot learning, though it could be.\n \nPlease also note that we have made further additions to our experimental section (as per our top-level reply) as requested by other reviewers.", "Thank you for such a thorough review! We are very grateful for your feedback and are excited to use it to improve our paper. Together with our added clarifications and new experiments, we hope that we now address your concerns in full. If not, we look forward to discussing this further. Please see details below. \n \n> 1a) including the loss in the task manifold can make learning unstable if tasks have losses on different magnitudes.\n \nLeap is indeed sensitive to differing scales across task objective functions. However, this sensitivity is not due to incorporating the loss in the task manifold, and would exist even if it were omitted. 
It arises from the fact that the meta gradient is an average over task gradients, which gives tasks with larger gradients (on average) greater influence. As such, this is a general problem applying equally to similar methods, like MAML and Reptile.\n \nWe were aware of this, and after submission we have experimented with formulations that alleviate this issue. In fact, using the approximate gradient path length (as opposed to the energy) yields a meta gradient that scales all task gradients by a task-specific norm that avoids this issue. This is an important improvement, and we are grateful for your insight here. We have generalized Leap (section 2) to allow for a meta learning objective under either the energy metric or the length metric. In appendix C, we have added a new thorough ablation study across design choices and find that while Leap converges faster under the length metric (in terms of meta training steps), final performance is equivalent.\n \n> 1b) Including the loss in the task manifold is not optional, as suggested by the paper, but essential, to produce loss-minimizing meta-gradients.\n \nWe are very grateful for the time you have taken to investigate this issue! Unfortunately, your argument is based on an incorrect derivation, as there is a small mistake in the second inequality on C_1: you replace \\theta_{i+1} - \\theta_i with \\alpha g_i, but that would imply gradient ascent. The right identity is \\theta_{i+1} - \\theta_i = - \\alpha g_i (see eq. 1).\n \nCorrecting for this, C_1 is not only aligned with the Reptile gradient, it *is* the Reptile gradient (this exact equivalence breaks down when we use the length metric as the meta objective, or if we were to jointly learn other aspects of the gradient update, e.g. learning rate / preconditioning). \n\nOur newly added ablation study in appendix C shows that Leap can converge even if we remove the loss from the task manifold, but does so at a significantly slower rate and learns a less useful initialization. Including the loss is a key feature of our framework, because it tells Leap how impactful a gradient step is: a gradient step that has a large influence on the loss will be given greater attention, allowing Leap to \"prioritize\". The importance of this information is clearly illustrated in the Omniglot experiment, where Leap does significantly better than Reptile.\n \nThis ability to prioritize is also what motivated us to add a regularizer, which perhaps is better called a stabilizer. Leap prioritizes large loss deltas, so if the learning rate is too large, or the gradient estimator very noisy, it could happen that we get a large increase in the loss, which would then be prioritized by Leap. Being an anomaly, this doesn't derail Leap; in the end, Leap follows the entire gradient path (see appendix C). As such, the stabilizer is not critical, but it does speed up training and allows the use of more aggressive learning rates. Finally, as you point out, our formulation is just one heuristic; others may be better. \n", "In this paper, the authors study an important transfer learning problem, i.e., knowledge transfer between distinct tasks, which is usually called 'far transfer' (instead of 'near transfer'). Specifically, the authors propose a lightweight framework called Leap, which aims to achieve knowledge transfer 'across learning processes'. In particular, a method for meta-learning (see Algorithm 1) is developed, which focuses on minimizing 'the expected length of the path' (see the corresponding term in Eqs.(4-6)). 
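For concreteness, here is a schematic paraphrase of that objective (the exact definitions are in Eqs. (4)-(6) of the paper; the notation below is illustrative only, with \\gamma_\\tau denoting the inner-loop gradient path of task \\tau on the manifold of parameters and loss):

```latex
% Schematic paraphrase of the gradient-path objective; see Eqs. (4)-(6) for the exact form.
\begin{align*}
\text{energy:}\quad d_E(\gamma_\tau) &= \sum_i \Bigl( \lVert \theta_{i+1} - \theta_i \rVert^2
    + \bigl( L(\theta_{i+1}) - L(\theta_i) \bigr)^2 \Bigr), \\
\text{length:}\quad d_L(\gamma_\tau) &= \sum_i \sqrt{ \lVert \theta_{i+1} - \theta_i \rVert^2
    + \bigl( L(\theta_{i+1}) - L(\theta_i) \bigr)^2 }, \\
\min_{\theta_0}\quad & \mathbb{E}_\tau \bigl[ d\bigl( \gamma_\tau(\theta_0) \bigr) \bigr],
    \qquad d \in \{ d_E, d_L \}.
\end{align*}
```

The per-step square root in the length version is what implicitly normalizes each task's contribution by its own step magnitude, as discussed in the rebuttal. 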
Empirical studies on three public datasets show the effectiveness of the proposed method. Overall, the paper is well presented.\n\nSome comments/suggestions:\n(i) The details of the experiments, such as parameter configurations, are missing, which makes the results difficult to reproduce.\n\n(ii) For the baseline methods used in the experiments, the authors are encouraged to include more state-of-the-art transfer learning methods in order to make the results more convincing.\n\n(iii) Finally, if the authors could use some commonly used datasets from existing transfer learning works, the comparative results would be more interesting. \n" ]
[ -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "rkezle-C0X", "H1xWiy_q2Q", "HJeAEd_eR7", "iclr_2019_HygBZnRctX", "iclr_2019_HygBZnRctX", "Skgx-yRE0Q", "iclr_2019_HygBZnRctX", "SyxtTCp4Am", "HyeuqwdeCm", "iclr_2019_HygBZnRctX", "H1xWiy_q2Q", "SylzzddgRm", "HJgKnIJY27", "HkeqwPulCQ", "Bkevhwv927", "iclr_2019_HygBZnRctX" ]
iclr_2019_HylzTiC5Km
Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling
The unconditional generation of high fidelity images is a longstanding benchmark for testing the performance of image decoders. Autoregressive image models have been able to generate small images unconditionally, but the extension of these methods to large images where fidelity can be more readily assessed has remained an open problem. Among the major challenges are the capacity to encode the vast previous context and the sheer difficulty of learning a distribution that preserves both global semantic coherence and exactness of detail. To address the former challenge, we propose the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size. The SPN compactly captures image-wide spatial dependencies and requires a fraction of the memory and the computation. To address the latter challenge, we propose to use multidimensional upscaling to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs. We evaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32 to 128. We achieve state-of-the-art likelihood results in multiple settings, set up new benchmark results in previously unexplored settings and are able to generate very high fidelity large scale samples on the basis of both datasets.
accepted-oral-papers
All reviewers recommend acceptance, with two reviewers in agreement that the results represent a significant advance for autoregressive generative models. The AC concurs.
val
[ "Bkx8AurWeN", "rJgX0gv814", "SJeStxgqRm", "HJgrrcCO2X", "H1gOyezcaX", "H1llqKX56m", "r1xv8KGcp7", "H1lNzuZ9a7", "Bkgl4P-cpm", "B1eI3IJ5pQ", "rkeURDYKpQ", "S1gjDP-927", "HkxN569T2X", "Bke6VkQAY7", "SJgqbZTaYQ" ]
[ "public", "public", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "https://arxiv.org/abs/1109.4389 seems to be another relevant reference for AR models using multiple scales", "Dear authors:\n\nThank you for your really interesting and impressive ideas. The idea is really amazing and experimental results are sound. Generating 256x256 imagenet images in the auto-regressive manner is really difficult and your paper gives a really solid solution.\n\nHowever, about this paper, I have a concern about the depth-upscaling part. In your experimental results, the bits/dim of SPN and SPN+ depth-upscaling is of no difference for most datasets, and sometiems the SPN+depth-upscaling even performs poorly compared to simply SPN. However, with the depth-upscaling, the sampling time is doubled: every dimension should be sampled twice compared SPN. Can you give more explanations on the benefits of depth-upscaling? Do we really need it given the really impressive results of SPN?\n\nAnyway, this is a really solid paper and congratulations.", "To our reviewers,\n\nPlease find our latest revision uploaded. We believe it addresses most of the comments including:\n- Detailing the number of parameters for each architecture (Table 4)\n- Clarifying exactly how the slice embedder conditions the decoder\n- Clarifying depth-upscaling with SPN\n- Including information about the use of TPU (Appendix C)\n- Supplying details about the nature of temperature adjustments during sampling\n- Adding references\n- Various writing improvements\n\nWe are also currently running our setup for the 64x64 and 256x256 samples with the intent to include them shortly in our revision.\n\nWe kindly thank our reviewers for their insightful comments which have substantially improved the exposition.", "General:\nThe paper tackles a problem of learning long-range dependencies in images in order to obtain high fidelity images. The authors propose to use a specific architecture that utilizes three main components: (i) a decoder for sliced small images, (ii) a size-upscaling decoder for large image generation, (iii) a depth-upscaling decoder for generating high-res image. The main idea of the approach is slicing a high-res original image and a new factorization of the joint distribution over pixels. In this model various well-known blocks are used like 1D Transformer and Gated PixelCNN. The obtained results are impressive, the generated images are large and contain realistic details.\n\nIn my opinion the paper would be interesting for the ICLR audience.\n\nPros:\n+ The paper is very technical but well-written.\n+ The obtained results constitute new state-of-the-art on HQ image datasets.\n+ Modeling long-range dependencies among pixels is definitely one of the most important topics in image modeling. The proposed approach is a very interesting step towards this direction.\n\nCons:\n- The authors claim that the proposed approach is more memory efficient than other methods. However, I wonder how many parameters the proposed approach requires comparing to others. It would be highly beneficial to have an additional column in Table 1 that would contain number of parameters for each model.\n- All samples are take either at an extremely high temperature (i.e., 0.99) or at the temperature equal 1. How do the samples look for smaller temperatures? Sampling at very high temperature is a nice trick for generating nicely looking images, however, it could hide typical problems of generative models (e.g., see Rezende & Viola, “Taming VAEs”, 2018).\n\n--REVISION--\nI would like to thank the authors for their response. 
I highly appreciate their clear explanation of both issues raised by me. I am especially thankful for the second point (about the temperature) because indeed I interpreted it as in the GLOW paper. Since both my concerns have been answered, I decided to raise the final score (+2).", "Thank you for your comments. \n\n--\n- The authors claim that the proposed approach is more memory efficient than other methods. However, I wonder how many parameters the proposed approach requires compared to others. It would be highly beneficial to have an additional column in Table 1 that would contain the number of parameters for each model.\n--\n\nAs discussed with AnonReviewer2, we will include a table with the number of parameters for each model. Briefly, the models in the paper have between ~50M params and ~650M params in the most extreme case of full multidimensional upscaling on ImageNet 128.\n\nIn the case of 256x256 CelebA-HQ, we use a total of ~100M parameters to produce the depth-upscaled 8bit samples in Figure 5 and ~50M parameters to produce the 5bit samples in Figure 7. Compare this to Glow [1], whose blog post [2] indicates that up to 200M parameters are used for 5bit Celeb-A. Thus we have a ~4x reduction in the number of parameters vs Glow, with decisively improved likelihoods (see Table 3); I think this should address your concern about parameter efficiency. We also note that autoregressive (and other) models are highly compressible at little to no loss (see e.g. [3]), which makes the absolute number of parameters only an initial, rough measure of parameter efficiency.\n\n--\n- All samples are taken either at an extremely high temperature (i.e., 0.99) or at a temperature equal to 1. How do the samples look for smaller temperatures? Sampling at a very high temperature is a nice trick for generating nice-looking images; however, it could hide typical problems of generative models (e.g., see Rezende & Viola, “Taming VAEs”, 2018).\n--\n\nI believe there is a misunderstanding here. What we call temperature is a division of the logits of the softmax output distribution. Temperature 1.0 in our case means that the distribution of the trained model is used exactly as predicted by the model, with no adjustments or tweaks during sampling time. *Reducing* the temperature (less than 1.0) is what can hide problems, because it artificially reduces the entropy in the distribution parameterized by the model during sampling time. \n\nAs we sample at temperatures of 0.95, 0.99, and 1.0 in the paper, we respectively *slightly*, *barely*, and *do-not-at-all* reduce the entropy in the model's distribution. Thus this concern does not apply, and we are actually being comparatively transparent about our model’s samples (note that Glow shows its best samples at temperature 0.7, but that “temperature” has a different operational meaning in that case).\n\n[1] - Kingma et al. https://arxiv.org/abs/1807.03039\n[2] - https://blog.openai.com/glow/\n[3] - Kalchbrenner et al. https://arxiv.org/abs/1802.08435", "Thanks for the clarification. Agreed, a separate release isn't needed. It wasn't clear if you were using the same split, as Reed et al. doesn't talk about it. ", "Thanks for your thoroughness, AnonReviewer2.\n\nThe ImageNet dataset we use for 128x128 and 256x256 generation is the standard ILSVRC [1] benchmark used by classification models. We report final numbers on the official validation set consisting of 50k examples. 
We hold out 10k examples from the official training set for cross-validation, and train on the remaining 1271167 points.\n\nI don’t believe a separate release is necessary here, as the data is freely available [2] and our downsampling scheme is easily reproducible (simply tf.resize_area). \n\nWe can be more explicit about this split in the experiment section.\n\n[1] - Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.\n[2] - http://www.image-net.org/challenges/LSVRC/2014/", "--\n4. Can you clarify how you condition the self-attention + Gated PixelCNN block on the previous slice embedding you get out of the above convnet? There are two embeddings passed in if I understand correctly: (1) All previous slices, (2) Tiled meta-position of current slice. It is not clear to me how the conditioning is done for the transformer pixelcnn on this auxiliary embedding. The way you condition matters a lot for good performance, so it would be helpful for people to replicate your results if you provide all details. \n--\n\nThe output of the slice embedder -- that receives as input all previous slices and the tiled meta-position of the current slice -- is concatenated channel-wise with the 2D-reshaped output of the masked 1D transformer (which in turn receives as input only the current target slice). The resulting concatenated tensor conditions the PixelCNN decoder like $s$ in equation (5) of the Conditional PixelCNN paper [2]. That is, the tensor $s$ maps, via 1x1 convolutions, to units which bias the masked convolution output for each layer in PixelCNN. The number of hidden units in this pathway is what is referred to as \"decoder residual channels\" in Appendix B. We will add this description to Section 3.2.\n\n--\n5. I also don't understand the depth upscaling architecture completely. Could you provide a diagram clarifying how the conditioning is done there given that you have access to all pixels' salient bits now and not just meta-positions prior to this slice? \n--\n\nThe SPN which models the low-bit-depth image is identical to the exposition in section 3.2, except that the data it operates on has only 3 bits of depth. As we mention in section 3.4, the depth-upscaling SPN achieves its conditioning by concatenating (again channel-wise) the full low-bit-depth image, organised into its constituent slices, to the rest of the slice-embedder's inputs. So no matter which target slice is being modelled (for the finest 5 bits), all slices of the 3bit data can be seen by the slice embedder when it produces context for the fine bits of a target slice. We will clarify this further and see how to add a diagram for it.\n\n--\n6. It is really cool that you don't lose out in bits/dim after depth upscaling that much. If you take Grayscale PixelCNN (pointed out in the anonymous comment), the bits/dim isn't as good as PixelCNN though samples are more structured. There is a 0.04 b.p.d. difference in 256x256, but no difference in 128x128. Would be nice to explain this when you add the citation.\n--\n\nThanks for the observation, we will note this. The ordering in the SPN, considering both subscaling and upscaling, is indeed quite different from the vanilla ordering and it's nice to see that the NLL values are negligibly affected.\n\n--\n7. The architecture in the Appendix can be improved. It is hard to understand the notations. What are residual channels, attention channels, attention ffn layer, \"parameter attention\", conv channels? \n--\n\nThanks for bringing this to our attention. 
We will add the figures/explanations discussed and reference this hyperparameter table so that it's all clear. The attention parameters listed are configurable hyperparameters of the open source Transformer implementation in tensor2tensor [3] on github.\n\n\n[1] - Kalchbrenner et al. https://arxiv.org/abs/1802.08435\n[2] - Oord et al. https://arxiv.org/abs/1606.05328 \n[3] - https://github.com/tensorflow/tensor2tensor\n\nAnd we'll fix that typo too.", "Thanks for your thorough review. Addressing your comments will improve the paper.\n\n--\n-1. Can you point out the total number of parameters in the models?\n--\n\nDepending on the dataset, each SPN network has between ~50M (CelebA) and ~250M parameters (ImageNet 128/256). ImageNet64 uses ~150M weights. With depth upscaling, two separate SPNs with non-shared weights model P(3bit) and P(rest given 3bit) respectively, doubling the number of parameters. With explicit size upscaling for ImageNet128, there is a third network (decoder-only) with ~150M parameters which generates the first 3 bits of the first slice. So the maximal number of parameters used to generate a sample in the paper is full multidimensional upscaling on ImageNet 128, where the total parameter count reaches ~650M. We will include the number of parameters for each model in the table, as requested.\n\n--\n-1. Also would be good to know what hardware accelerators were used. The batch sizes mentioned in the Appendix (2048 for 256x256 Imagenet) are too big and need TPUs? If TPU pods, which version (how many cores)?\n--\n\nTo reach batch size 2048 we used 256 TPUv3 cores. We will clarify this in the paper.\n\n--\n0. I would really like to know the sampling times.\n--\n \nOur current implementation performs only naive sampling, where the outputs of the decoder are recomputed for all positions in a slice to generate each sample. This is convenient but time-consuming, and allows us to inspect the samples coming from our model only rarely. The techniques for speeding up AR inference - such as caching of states, low-level custom implementation, sparsification and multi-output generation [1] - are equally applicable to SPNs and would make sampling reasonably fast; on the order of a handful of seconds for a 256 x 256 x 3 image.\n\n--\n1. Any reason why 256x256 Imagenet samples are not included in the paper? Given that you did show 256x256 CelebA samples, sampling time can't be an issue for you to not show Imagenet 256x256. So, it would be nice to include them. I don't think any paper so far has shown good 256x256 unconditional samples. So showing this will make the paper even stronger.\n--\n\nThanks! We’ll aim to add 64 x 64 and 256 x 256 samples in our revision.\n\n--\n2. Until now I have seen no good 64x64 Imagenet samples from a density model. PixelRNN samples are funky (colorful but no global structure). So I am curious if this model can get that. It may be the case that it doesn't, given that subscale ordering didn't really help on 32x32. It would be nice to see both 5-bit and 8-bit, and for 8-bit, both versions: with and without depth upscaling.\n--\n\nThe 64x64 samples look much better with SPNs. We will aim to include some of the variants that you asked for in our revision.\n\n--\n3. I didn't quite understand the architecture in slice encoding (Sec 3.2). 
Especially the part about using a residual block convnet to encode the previous slices with padding, and to preserve relative meta-position of the slices. The part I get is that you concatenate the 32x32 slices along the channel dimension, with padded slices. I also get that padding is necessary to have the same channel dimension for any intermediate slice. Not sure if I see the whole point of preserving ordering. Isn't it just normal padding -> space to depth in a structured block-wise fashion? \n--\n\nIt’s like a meta-convolution: the relative ordering ensures that slices are embedded with weights that depend on the relative 2d distance to the slice that is being generated. Suppose we are predicting the target slice at meta-position (i,j), so that previous slices in the 2d ordering are presented to the slice embedder. For any previous slice (m,n), the weights applied to it are a function of the offset (i-m,j-n), as opposed to their absolute positions (m,n). We will add this clarification to the paper.", "Thank you for the detailed feedback. In the next revision, we will make height, width, and channel indices in equation 1 explicit and make a thorough sweep over the rest of the equations to check for any other undefined parameters. \n\nWe will ensure that all figures are referenced, and in the correct order.", "Request the authors to provide details for the train/val split for Imagenet 128x128 and Imagenet 256x256 density estimation benchmarks. I wasn't able to find the details in Reed et al. (https://arxiv.org/pdf/1703.03664.pdf). Would be ideal if the authors released the splits as done for 32x32 and 64x64 from PixelRNN in http://image-net.org/small/download.php to encourage more people to push on this benchmark.", "Summary: \nThis paper addresses an important problem in density estimation, which is to scale the generation to high fidelity images. Till now, there have been no good density modeling results on large images when taking into account large datasets like Imagenet (there have been encouraging results like with Glow, but on 5-bit color intensities and simpler datasets like CelebA). This paper is the first to successfully show convincing Imagenet samples with 128x128 resolution for a likelihood density model, which is hard even for a GAN (only one GAN paper (SAGAN) prior to this conference has managed to show unconditional 128x128 Imagenet samples). The ideas in this paper to pick an ordering scheme at subsampled slices uniformly interleaved in the image and condition slice generation in an autoregressive way are very likely to be adopted/adapted to more high fidelity density modeling like videos. Another important idea in this paper is to do depth upscaling, focusing on salient color intensity bits first (first 3 bits per color channel) before generating the remaining bits. The color intensity dependency structure is also neat: The non-salient bits per channel are conditioned on all previously generated color bits (for all spatial locations). Overall, I think this paper is a huge advance in density modeling, deserves an oral presentation and deserves as much credit as BigGAN, probably more, given that it is doing unconditional generation. \n\nDetails:\nMajor:\n-1. Can you point out the total number of parameters in the models? Also would be good to know what hardware accelerators were used. The batch sizes mentioned in the Appendix (2048 for 256x256 Imagenet) are too big and need TPUs? If TPU pods, which version (how many cores)? 
If not, I am curious to know how many GPUs were used.\n0. I would really like to know the sampling times. The model still generates the image pixel by pixel. Would be good to have a number for future papers to reference this.\n1. Any reason why 256x256 Imagenet samples are not included in the paper? Given that you did show 256x256 CelebA samples, sampling time can't be an issue for you to not show Imagenet 256x256. So, it would be nice to include them. I don't think any paper so far has shown good 256x256 unconditional samples. So showing this will make the paper even stronger.\n2. Until now I have seen no good 64x64 Imagenet samples from a density model. PixelRNN samples are funky (colorful but no global structure). So I am curious if this model can get that. It may be the case that it doesn't, given that subscale ordering didn't really help on 32x32. It would be nice to see both 5-bit and 8-bit, and for 8-bit, both versions: with and without depth upscaling.\n3. I didn't quite understand the architecture in slice encoding (Sec 3.2). Especially the part about using a residual block convnet to encode the previous slices with padding, and to preserve relative meta-position of the slices. The part I get is that you concatenate the 32x32 slices along the channel dimension, with padded slices. I also get that padding is necessary to have the same channel dimension for any intermediate slice. Not sure if I see the whole point of preserving ordering. Isn't it just normal padding -> space to depth in a structured block-wise fashion? \n4. Can you clarify how you condition the self-attention + Gated PixelCNN block on the previous slice embedding you get out of the above convnet? There are two embeddings passed in if I understand correctly: (1) All previous slices, (2) Tiled meta-position of current slice. It is not clear to me how the conditioning is done for the transformer pixelcnn on this auxiliary embedding. The way you condition matters a lot for good performance, so it would be helpful for people to replicate your results if you provide all details. \n5. I also don't understand the depth upscaling architecture completely. Could you provide a diagram clarifying how the conditioning is done there given that you have access to all pixels' salient bits now and not just meta-positions prior to this slice? \n6. It is really cool that you don't lose out in bits/dim after depth upscaling that much. If you take Grayscale PixelCNN (pointed out in the anonymous comment), the bits/dim isn't as good as PixelCNN though samples are more structured. There is a 0.04 b.p.d. difference in 256x256, but no difference in 128x128. Would be nice to explain this when you add the citation.\n7. The architecture in the Appendix can be improved. It is hard to understand the notations. What are residual channels, attention channels, attention ffn layer, \"parameter attention\", conv channels? \n\nMinor: \nTypo: unpredented --> unprecedented ", "Authors propose a decoder architecture model named Subscale Pixel Network. 
It is meant to generate whole images as sequences of image slices with memory and computation economy by using a Multidimensional Upscaling method.\nThe paper is fairly well written and structured, and it seems technically sound.\nExperiments are convincing.\nSome minor issues:\nFigure 2 is not referenced anywhere in the main text.\nFigure 5 is referenced in the main text after Figure 6.\nEven if intuitively understandable, all parameters in equations should be explicitly described (e.g., h, w, H, W in Eq. 1).", "Thanks for the reference - we will add the citation in the context of depth upscaling. Size upscaling in AR models goes back to at least the PixelRNN paper (van den Oord et al., 2016, see Multi-Scale section). \n\nSome differences: \n- Depth upscaling here is done by taking the most significant bits of each channel separately, as opposed to globally across the three channels as in Grayscale PixelCNN.\n- Multidimensional Upscaling used here combines both size and depth upscaling.\n", "https://arxiv.org/pdf/1612.08185 also proposed both low-resolution and sub-pixel color modelling." ]
[ -1, -1, -1, 9, -1, -1, -1, -1, -1, -1, -1, 10, 7, -1, -1 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 5, 3, -1, -1 ]
[ "Bke6VkQAY7", "iclr_2019_HylzTiC5Km", "iclr_2019_HylzTiC5Km", "iclr_2019_HylzTiC5Km", "HJgrrcCO2X", "r1xv8KGcp7", "rkeURDYKpQ", "S1gjDP-927", "S1gjDP-927", "HkxN569T2X", "S1gjDP-927", "iclr_2019_HylzTiC5Km", "iclr_2019_HylzTiC5Km", "SJgqbZTaYQ", "iclr_2019_HylzTiC5Km" ]
iclr_2019_S1x4ghC9tQ
Temporal Difference Variational Auto-Encoder
To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty on the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning.
accepted-oral-papers
The reviewers agree that this is a novel paper with a convincing evaluation.
train
[ "BJxUrnv_AQ", "SyxSfnP_0Q", "BJgAOovOC7", "rkeaQHUJam", "rJeR1S-ThQ", "BkgnEnawnm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and comments. We clarified our intuitive derivation of the loss in section A. It is indeed difficult to compare the jumpy TD-VAE model to other models, as there is little work that studies such models. We updated the appendix to explain how a model similar to jumpy TD-VAE provides an approximate ELBO to the ‘jumpy’ log likelihood log p(x_{t_1}, x_{t_2}, .. x_{t_n}). As for comparison to published models, we did compare the sequential TD-VAE elbo on the simple mini-pacman dataset to classical state-space models; we also compared the belief state obtained by training a TD-VAE on the oscillator network to a more classical lstm in a recurrent classification setup. Following the line of the thinking, we believe an appropriate way to compare similar models will be through the comparison of the different belief states they learn. We highlighted this in the text.", "Thank you for the review and comments. \nThanks for the suggestion - we added missing experiment details, network specifications and hyperparameters in the appendix. \nYou are correct that q(z_{t-1}|z_t, b_{t-1}, b_t) does not need to depend on b_{t-1}, but it does not hurt to do so; we chose to do so in order to further facilitate the learning of b_{t-1}, but it may not have affected experiments.\nIf the model does not take the jump interval as input, the model has to represent the jump size by way of a multimodal distribution over possible future events. One could imagine that one of the latent variables could be learned to correspond to dt. \n\n", "Thank you for your thoughtful review and comments. \n\nThanks for noticing the typo - we will fix it.\nRegarding the exposure bias - TD-VAE may indeed reduce exposure bias by generating faraway futures in fewer steps of generation. But we have not explicitly investigated that issue in the paper.\nRegarding the distribution of (t_2-t_1), for the noisy harmonic oscillator experiment we use a mixture of two uniform distributions, one with support [1,T], the second with support [1,T’], with T’>T. Since shorter time steps are easy to model, this served as a form of ‘curriculum’ for the jumpy model; this enables us to learn the state representation, which in turns facilitates learning the ‘jumpier’ transitions from [1,T’]. We clarify this in the text. It is indeed likely that weighting [1,T'] more heavily would indeed improve the jumpier prediction.\nMore general strategies could be adopted, for instance choosing jump sizes which make the jump easy to predict (as is suggested in Neitz et al. and Jayaraman et al.), or hard to predict (a form of prioritized replay for model learning), or any other criterion. We reserve the investigation of which scheme leads to the best model to future work.\nAs for code, we will aim to release a simplified version of the code in the future.\n", "This paper proposes the temporal difference variational auto-encoder framework, a sequential general model following the intuition of temporal difference learning in reinforcement learning. The idea is nice and novel, and I vote for acceptance.\n1. The introduction of belief state in the sequential model is smart. How incorporate such technique in such an autoregressive model is not easy.\n2. Fig 1 clearly explained the VAE process.\n3. Four experiments demonstrated the main advantages of the proposed framework, including the effectiveness of proposed belief state construction and ability to jumpy rolling-out, \n\n\nOther Comments and Questions:\n1. 
Typo, p(s_{t_2}|s_{t_1}) in the caption of Fig 1.\n2. Can this framework partially solve the exposure bias problem?\n3. The authors used a uniform distribution for t_2 - t_1, and from the ``NOISY HARMONIC OSCILLATOR`` we can indeed see that a larger interval will result in worse performance. However, the authors also mentioned that other distributions could be investigated, so I am wondering what the performance will become if larger probability mass is put on larger dt.\n4. The code should be released. I think that it is a fundamental framework deserving further development by other researchers.", "The authors propose TD-VAE to solve an important problem in agent learning, simulating the future by doing jumpy-rollouts in abstract states with uncertainty. The authors first formulate the sequential TD-VAE and then generalize it for jumpy rollouts. The proposed method is well evaluated on four tasks, including a complex high-dimensional task.\n\nPros.\n- Advancing a significant problem\n- Principled and quite original modeling based on variational inference\n- Rigorous experiments including complex high dimensional experiments\n- Clear and intuitive explanation (but can be improved further)\n\nCons. \n- Some details on the experiments are missing (due to page limit). It would be great to include these in the Appendix. \n- It is a complex model. For reproducibility, a detailed specification of the hyperparameters and architecture will be helpful.\n\nMinor comments\n- Why does q(z_{t-1}|z_t, b_{t-1}, b_t) depend on both b_{t-1} and b_t, not only b_t?\n- The original model does not take the jump interval as input. Then, it is not clear how the jump interval is determined in p(z’|z)?\n", "There are several ingredients in this paper that I really liked. For example, (1) the notion that an agent should build a deterministic function of the past which implicitly captures the belief (the uncertainty or probability distribution about the state), as opposed, for example, to sampling trajectories to capture uncertainty, (2) modelling the world's dynamics in a learned encoded state-space (as opposed to the sensor space), (3) instead of modeling next-step probabilities p(z(t+1)|z(t)), model 'jumpy transitions' p(z(t+delta)|z(t)) to avoid unrolling at the finest time scale.\n\nNow for the weak points:\n(a) the justification for the training loss was not completely clear to me, although I can see that it has a variational flavor\n(b) there is no discussion of the issue that we can't get a straightforward decomposition of the joint probability over the data sequence according to next-step probabilities via the chain rule of probabilities, so we don't have a clear way to compare the TD-VAE models with jumpy predictions against other more traditional models\n(c) none of the experiments make comparisons against previously published models and quantitative results (admittedly because of (b) this may not be easy).\n\nSo I believe that the authors are onto a great direction of investigation, but the execution of the paper could be improved." ]
[ -1, -1, -1, 8, 9, 7 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "BkgnEnawnm", "rJeR1S-ThQ", "rkeaQHUJam", "iclr_2019_S1x4ghC9tQ", "iclr_2019_S1x4ghC9tQ", "iclr_2019_S1x4ghC9tQ" ]
iclr_2019_S1xq3oR5tQ
A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs
The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.
accepted-oral-papers
The paper advocates neuroscience-based V1 models to adapt CNNs. The results of the simulations are convincing from a neuroscience perspective. The reviewers unequivocally recommend publication.
train
[ "r1eiULeC3Q", "H1gQhLTipX", "rkls0Uasp7", "BJx4_OpjpQ", "B1xQQuaiam", "rkxrOwpjpQ", "HJgMwJe5hm", "Hyg8wO1rhQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "EDIT: On the basis of revisions made to the paper, which significantly augment the results, the authors note: \"the call for papers explicitly mentions applications in neuroscience as within the scope of the conference\" which clarifies my other concern. For both of these reasons, I have changed my prior rating.\n\nThis paper is focused on a model of early visual representation in recognition tasks drawing motivation from neuroscience. Overall the paper is an interesting read and reasonably well written (albeit with some typos). The following addresses the positives and negatives I see associated with this work:\n\nPositives:\n- There are relatively few efforts that focus heavily on more shallow models with an emphasis on representation learning, and for this reason this paper fills an important space\n- The connections to neuroscience are interesting albeit it's unclear the extent to which this is the mandate of the conference\n- The most interesting bit of the paper to me is the following: \"A bottleneck at the output of the retina yielded center-surround retinal RFs\" - it is somewhat a foregone conclusion that most networks immediately converge on orientation selective and color opponent representations. That this model produces isotropic filters is a very interesting point.\n\nNegatives:\n- The work feels a little bit shallow. It would have been nice to see a bit more density in terms of results and ablation studies. This also relates to my second point.\n- Given the focus on early visual processing, there seems to be a missed opportunity in examining the role of normalization mechanisms or the distinction between simple and complex cells. If the focus resides in the realm of neuroscience and early visual representation, there is an important role to these mechanisms. e.g. consider the degree of connectivity running from V1 to LGN vs. LGN to V1.\n\n", "We thank the reviewer for the positive comments about the paper, and try to address his/her concerns below.\n\n[Role of normalization mechanisms.]\nLocal normalization is an ubiquitous source of non-linearity in the visual system (see Geisler and D.G. Albrecht 1992 for an example in the cortex, and Deny et al 2017 for an example in the retina), and in ML they are used to enhance contrast of images (Lyu and Simoncelli 2008, http://www.cns.nyu.edu/pub/lcv/lyu08b.pdf) and image compression algorithms (Balle et al, ICLR 2017 https://openreview.net/forum?id=rJxdQ3jeg). We thus tested the robustness of our main results to a more realistic model of the visual system with local normalization by adding local normalization at every layer of the network. We found that receptive fields still emerge as center-surround in the retina-net and as oriented in our model of V1 when we put a bottleneck. Interestingly, the normalization slightly degraded the performance of the network on the task for all parameter settings we tried. We added this complementary analysis in the main text and appendix of the article. (see section 3.1 and App C)\n\n[Distinction between simple and complex cells.]\nIt is an interesting question to ask whether neurons in our model of the VVS are more similar to simple or complex cells. To test this, we performed a one-step gradient ascent on the neural activity of VVS neurons with respect to the image, starting from several random initial images. If the neurons were acting as simple cells (i.e. are approximately linear in the stimulus), we would expect all optimized stimuli to converge to the same preferred stimulus. 
On the other hand, if the cells were complex (i.e. an OR function between several preferred stimuli), we would expect the emergent preferred stimuli to depend on the exact initialization. Interestingly, we found that most neurons in the first layer of the VVS-net behaved as simple cells, whereas most neurons in the second layer of the VVS-net behaved as complex cells. Note that in biology, both simple and complex cells are found in V1. These results expose the fact that anatomical regions of visual cortex involve multiple nonlinearities and hence may map onto more than one layer of our simple model. Indeed, V1 itself is a multilayered cortical column, with LGN inputs coming into layer 4, and layer 4 projecting to layers 2 and 3. Simple cells are predominantly found in layer 4 and complex cells are predominantly found in layers 2 and 3. These observations bolster the interpretation that biological V1 may correspond to multiple layers in our model. We added these interesting results and observations in the main text and appendix. (see section 3.1 and App B)\n\n[Role of the thalamo-cortical loop.]\nThe recurrence of the thalamo-cortical loop also plays an essential role in the computations of the visual system, and it would be very important to understand the role of this recurrence. However, in this study we chose to focus on explaining the discrepancy between the geometry of RFs in the retina and V1, and on the differences in the non-linearity of retinal processing across species. To model these phenomena, our approach was to find the simplest model that would yield those two phenomena. Intriguingly, our results show that modeling the thalamo-cortical loop is not necessary to yield the emergence of center-surround receptive fields in the retina-net and oriented receptive fields in the V1 layer (first layer of VVS-net). Moreover, note that a number of studies of the visual system using those same simplifying assumptions (simple neurons, no recurrence, Yamins et al. 2014, Cadena et al. 2017) have found good agreement of the predictions of their models with the visual system. Also, almost all classical efficient coding theories going back to Atick and Redlich and Olshausen and Field assume no top-down feedback, so it is important to compare to them using a model without top-down feedback as a first step. \n", "[More density in terms of results and ablation studies.]\nWe have now quantified our main result about the isotropy of retinal filters by measuring isotropy of the filters at the retinal output and in V1, on 10 different instantiations of our network. We show that the retinal filters are significantly more isotropic in the network with bottleneck than in the control network without bottleneck. We also find that the filters in V1 are significantly more oriented than in the retina-net. We added these quantifications in the appendix. (see App A)\n\nAblation study: Following a suggestion of Rev. 1, we investigated in depth whether, in the case of a deep brain network, where the retinal processing is quasi-linear, the increased separability allowed by the retinal pre-processing is due to the linear (whitening) or non-linear aspects of the retinal pre-processing (Fig. 3F). To test this, we replaced the actual retinal processing by its best linear approximation (i.e. this is a functional ablation). We then retrained the brain network on the output of this linearized retina and tested whether separability was as good as with the real slightly non-linear retinal processing. 
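For concreteness, here is a minimal sketch of this functional ablation; `retina` below is a stand-in for our trained retina module, and all names, shapes and data are illustrative rather than our actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained, slightly non-linear retina: a fixed linear map
# followed by a ReLU, so the sketch runs end to end (16x16 inputs, 64-unit bottleneck).
W_true = rng.normal(scale=0.1, size=(64, 16 * 16))

def retina(x_flat):
    """Hypothetical trained retina: maps flattened images to bottleneck activations."""
    return np.maximum(x_flat @ W_true.T, 0.0)

# 1) Probe the retina with a batch of (stand-in) natural images.
X = rng.normal(size=(5000, 16 * 16))
Y = retina(X)

# 2) Best linear approximation: least-squares fit of Y as X @ A.T + b.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])      # append a bias column
coef, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)      # shape (d + 1, 64)
A, b = coef[:-1].T, coef[-1]

def linearized_retina(x_flat):
    """Functional ablation: the retina replaced by its best linear fit."""
    return x_flat @ A.T + b

# 3) The downstream VVS-net is then retrained on linearized_retina(X)
#    instead of retina(X), and class separability is compared between the two.
resid = Y - linearized_retina(X)
print("relative error of the linear fit:", np.linalg.norm(resid) / np.linalg.norm(Y))
```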
We found that separability in the very first layer of the VVS-net was already much stronger than that of a VVS-net trained directly on natural images (without retina). This result demonstrates that linear whitening does indeed play a crucial role in making the representation easily transformable by subsequent layers into a linearly separable representation. We added this analysis in the main text and appendix. (see section 4.2 and App E)\n\nFinally, we did a new analysis suggesting even more strongly that the retinal representation is indeed a trade-off between feature extraction and linear transmission of visual information. For 10 instantiations of a network with a retinal bottleneck containing 4 channels, we plotted the linearity of each of these 4 channels against the linear separability of object categories obtained from these representations. We found, across all networks, a systematic negative correlation between linearity and linear separability across all 4 channels. This result strongly suggests that extracting features and transmitting visual information are indeed two competing goals shaping retinal representations. We added these new results in the appendix. (see section 4.2 and App D)\n\n\nIn summary, we have added 5 new complementary analyses to the article, making it substantially denser in terms of both results and ablation studies.", "[4. Whitened inputs can probably be represented more efficiently in a network trained with L2-regularization and/or SGD]\nWe thank the reviewer for this interesting explanation, which we could directly verify in our model. In the case of a deep brain network, where the retinal processing is quasi-linear, the increased separability allowed by the retinal pre-processing could be due to (1) the linear whitening, (2) the slightly non-linear part of the retinal response, or (3) a combination of both linear and non-linear processing. To distinguish between these hypotheses, in a new experiment we replaced the true retinal processing by its best linear approximation, retrained the brain network on the output of this linearized retina, and tested whether separability was as good as with the true retinal processing. We found that the first layer trained on the output of the linearized retinal representation was indeed much better than the first layer of the control network (trained directly on natural images) at separating classes of objects, suggesting that the linear whitening operation done by the retina is indeed especially transformable into linearly separable representations by a downstream neural network. We added this analysis in the appendix.\n", "We thank the reviewer for their positive appreciation, and for their thoughtful suggestions that we took into account.\n\n[Main concern - Quantifications for Fig 2A, B and C]\nFig 2A and 2B: We quantified our result about the isotropy of retinal filters by measuring orientedness of the filters at the retinal output and in V1 on 10 different instantiations of the network. We show that the retinal filters are significantly more isotropic than RFs in both the control network (without bottleneck) and the V1 filters. \nFig 2C (Hubel and Wiesel hypothesis): We quantified the anisotropy of the weight filter pooling from the retina to form oriented filters in V1, and again we found that these weight matrices are significantly oriented, confirming the hypothesis of Hubel and Wiesel in our model that simple cells in V1 are built from pooling successive center-surround filters from the preceding layer in a row. 
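As an illustration of the kind of index involved, here is a minimal sketch of one way to score the orientedness of a 2D filter; this is illustrative and not necessarily the exact metric used in the paper:

```python
import numpy as np

def orientedness(filt):
    """Anisotropy of a 2D filter from the eigenvalues of its energy-weighted
    second-moment matrix: ~0 for isotropic filters, ->1 for elongated ones."""
    h, w = filt.shape
    ys, xs = np.mgrid[:h, :w].astype(float)
    e = filt ** 2
    e = e / e.sum()                                   # energy as a 2D distribution
    cy, cx = (e * ys).sum(), (e * xs).sum()           # energy centroid
    dy, dx = ys - cy, xs - cx
    M = np.array([[(e * dx * dx).sum(), (e * dx * dy).sum()],
                  [(e * dx * dy).sum(), (e * dy * dy).sum()]])
    lam = np.linalg.eigvalsh(M)                       # ascending eigenvalues
    return (lam[1] - lam[0]) / (lam[1] + lam[0] + 1e-12)

# Sanity check: an isotropic Gaussian blob vs. an elongated bar.
ys, xs = np.mgrid[:15, :15] - 7.0
gauss = np.exp(-(xs ** 2 + ys ** 2) / 8.0)
bar = np.exp(-(xs ** 2 / 32.0 + ys ** 2 / 2.0))
print(orientedness(gauss), orientedness(bar))         # ~0.0 vs. close to 1
```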
These quantifications are now referred to in the main text and detailed in the appendix.\n\n[1. Bottleneck is a sufficient constraint, not a necessary constraint]\nWe agree with the reviewer that we cannot eliminate other hypotheses about the origin of isotropic filters in the biological retina. We have softened the claim everywhere in the manuscript, as suggested. Note that we mention in the discussion our attempt to reproduce the results of Karklin and Simoncelli (2011), who could successfully obtain center-surround RFs with a constraint on firing rate, but with a different objective function (information preservation). However, we cannot totally eliminate the possibility that, with different network parameters, we could also obtain center-surround RFs with a constraint on total firing rate under this object recognition objective.\n\n[2. Cell types in the retina]\nWe understand the confusion of the reviewer and we clarify this point both here and in the manuscript. The retina is organized in layers of different types of neurons (photoreceptors, bipolar cells, ganglion cells, etc), and in each of these layers the neurons can be subdivided into many subtypes: these subtypes we referred to as types in the article, which might have led to the confusion. For instance, in the primate retina, there exist 20 subtypes of ganglion cells, each with different receptive field size, polarity, non-linearities, etc. (Dacey 2004). Each of these subtypes tiles the entire visual field like a convolutional channel in a layer of a CNN, and each cell of a given subtype has a stereotyped receptive field, so this is why there is a strong analogy between channels in our model and biological subtypes. We wanted to test whether the emergence of center-surround RFs in the retina was a consequence of reducing the number of channels (i.e. subtypes), or the number of neurons, and this is why we carried out the experiment described in the section \"Emergence of ON and OFF populations of center-surround cells in the retina\" where we untied the weights of the network. We find that the emergence of center-surround is not specifically dependent on the number of types that we allow (it only depends on the number of cells that we allow), and furthermore we find that the cells naturally arrange in two clusters of ON and OFF cells when we allow them to differentiate, which is an interesting side-observation because the polarity axis is the first axis of ganglion cell subtype classification in the retina.\n\n[3. implications of the nonlinearity being due to the first or second stage]\nThis analysis was directed at retinal experts who might want to test our predictions, and who might wonder which stage of the retinal processing is responsible for the non-linearity of the retinal response as we decrease neural resources allocated to the brain. The two main sources of non-linearity in the retina are thought to be the inner retina rectification (bipolar and amacrine cells, corresponding to the first stage non-linearity in our model) and the ganglion cell rectification (corresponding to the second stage non-linearity in our model). We find that both stages become more non-linear as we decrease brain resources, which makes an interesting prediction for experimentalists. We clarified the motivation for this analysis and the corresponding prediction that it makes in the manuscript. \n", "We thank the reviewer for their positive assessment. 
We agree that some of these observations could be expected, but it is, to our knowledge, the first time that cross-layer and cross-species differences in early visual representations are recapitulated and accounted for in a single unified model of the visual system.\n\nWe thank the reviewer for this interesting reference, which we added as an example of how deep networks can be used to model the human visual system.\n", "I enjoyed reading this paper, which is a great example of solid computational neuroscience work.\n\nThe authors trained CNNs under various biologically-motivated constraints (e.g., varying the number of units in the layers corresponding to the retina output to account for the bottleneck happening at the level of the optic nerve or varying the number of \"cortical\" layers to account for differences across organisms). The paper is clear, the hypotheses are clearly formulated, and the results are sound. The implications of the study are quite interesting, suggesting that the lack of orientation selectivity in the retina would arise because of the bottleneck at the level of the optic nerve. The continuum in terms of degree of linearity/non-linearity observed across organisms at the level of the retina would arise as a byproduct of the complexity/depth of subsequent processing stages. While these results are somewhat expected, this is, to my knowledge, the first time that it is shown empirically in an integrated computational model.\n\nMinor point: The authors should consider citing the work by Eberhardt et al. (2016), which has shown that there exists an optimal depth for CNNs for predicting human category decisions during rapid visual categorization.\n\nS. Eberhardt, J. Cader & T. Serre. How deep is the feature analysis underlying rapid visual categorization? Neural Information Processing Systems, 2016.\n\n", "This paper addresses questions about the representation of visual information in the retina. The authors create a deep neural network model of the visual system in which a single parameter (bandwidth between the “retina” and “visual cortex” parts) is sufficient to qualitatively reproduce retinal receptive fields observed across animals with different brain sizes, which have been hard to reconcile in the past. \n\nThis work is an innovative application of deep neural networks to a long-standing question in visual neuroscience. While I have some questions about the analyses and conclusions, I think that the paper is interesting and of high quality.\n\nMy main concern is that the authors only show single examples, without quantification, for some main results (RF structure). For example, for Fig. 2A and 2B, an orientation selectivity index should be shown for all neurons. A similar population analysis should be devised for Fig 2C, e.g. like Fig 3 in [1]\n\nMinor comments:\n1. Page 4: “These results suggest that the key constraint ... might be the dimensionality bottleneck..”: The analyses only show that the bottleneck is *sufficient* to explain the differences, but “the key constraint” also implies *necessity*. Either soften the claim or provide control experiments showing that alternative hypotheses (constraint on firing rate etc.) cannot explain this result in your model.\n\n2. I don’t understand most of the arguments about “cell types” (e.g. Fig. 2F and elsewhere). In neuroscience, “cell types” usually refers to cells with completely different connectivity constraints, e.g. excitatory vs. inhibitory cells or somatostatin vs. parvalbumin cells. 
But you refer to different CNN channels as different “types”. This seems very different from the neuroscience definition. CNN channels just represent different feature maps, i.e., different receptive field shapes, but not fundamentally different connectivity patterns. Therefore, I also don’t quite understand what you are trying to show with the weight-untying experiments (Fig. 2E/F).\n\n3. It is not clear to me what Fig. 3B and the associated paragraph are trying to show. What are the implications of the nonlinearity being due to the first or second stage? \n\n4. Comment on Fig 3F: The center-surround RFs probably implement a whitening transform (which is linear). Whitened inputs can probably be represented more efficiently in a network trained with L2-regularization and/or SGD. This might explain why the “quasi-linear” retina improves separability later on.\n\n[1] Cossell, Lee, Maria Florencia Iacaruso, Dylan R. Muir, Rachael Houlton, Elie N. Sader, Ho Ko, Sonja B. Hofer, and Thomas D. Mrsic-Flogel. “Functional Organization of Excitatory Synaptic Strength in Primary Visual Cortex.” Nature 518, no. 7539 (February 19, 2015): 399–403. https://doi.org/10.1038/nature14182." ]
[ 8, -1, -1, -1, -1, -1, 8, 8 ]
[ 5, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2019_S1xq3oR5tQ", "r1eiULeC3Q", "H1gQhLTipX", "B1xQQuaiam", "Hyg8wO1rhQ", "HJgMwJe5hm", "iclr_2019_S1xq3oR5tQ", "iclr_2019_S1xq3oR5tQ" ]
iclr_2019_SkVhlh09tX
Pay Less Attention with Lightweight and Dynamic Convolutions
Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.
accepted-oral-papers
Very solid work, recognized by all reviewers as worthy of acceptance. Additional readers also commented and there is interest in the open source implementation that the authors promise to provide.
train
[ "SyexV29PJN", "H1glXkLRhm", "SkgRkkYAAQ", "BkxEwCe6pQ", "BJxVasJp6m", "B1lA_ikT6m", "r1gUWo16Tm", "Bkx6Dmk6Tm", "SygyHzyaTX", "BJlW-fk667", "BklnHDDcT7", "S1xedHZ_pX", "rygh1-1upQ", "Byg-bBXZaX", "H1gJoj8C3X", "Byx0nlaKnX", "Hkgbd38ChQ" ]
[ "public", "official_reviewer", "public", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I found your work very interesting, but there are some recent works that are closely related to your work, which take a sentence as input and generate convolutional kernels that are further applied on the sentence, but with a different granularity. I think those works are definitely worth comparing to.\n\nmissing references:\nLearning Context-Sensitive Convolutional Filters for Text Processing (Shen et al.)\nConvolutional Interaction Network for Natural Language Inference (Gong et al.)", "Overall, this is a really good paper.\nThe authors propose an alternative to content based similarity for NL applications as compared to self-attention models by proposing the parameter and sequence length efficient Lightweight and Dynamic Convolutions.\nThe authors show, over various NL tasks like Translation, LM and Abstractive summarisation, the comparison of self attention models with Lightweight and Dynamic convolution layer.\nThe weight sharing was particularly interesting and can be seen as applying different heads for the same kernel. \n\nThe experimental results give strong evidence for these alternatives proposed by the authors.\nThe lightweight and dynamic convolution layers, both perform similar or better than the self-attention layer in all the tasks.\nThe WMT EnFr result is much better than all the other models, establishing a new state of the art.\n\nQuestion for the authors:\n1. Is the weight sharing within the kernel mostly for reducing computation?\nIf so, did you trying varying H size and measure how much that affects performance? What is surprising is that, in the ablation table the weight sharing increases the BLEU score by 0.1. \n2. Did you run any experiments where the kernel size covers the whole sentence?\n3. Since the number of parameters only change linearly wrt sequence length, did you try running this on datasets that have really long sequences to show the effectiveness of this approach further?\n4. How important was softmax normalization for training?", "please note bytenet can also be used for language model, i.e., using only decoder. So it is very important to compare with it, which is also one type of lightweight cnn", "We are planning to share the code later.", "Thank you for your comments. We improved the description of the adaptive softmax hyperparameters ('band' terminology) in the updated version of the paper. We hope this is clearer now. \n\nWe refer to different subsets of the vocabulary as 'bands'. The most frequent words are denoted as \"head band\", and so on.", "Thank you for your fruitful comments.\nQ1: For DynamicConv, weight sharing reduces both computation and memory footprint, while for LightConv, it only reduces memory footprint. Yes, we did try using large H sizes; however, the performance degrades and the memory footprint increases dramatically which prohibits us from using a large batch size. As a consequence, training becomes much slower. For your information, DynamicConv with H=64 gets BLEU score 26.8 ± 0.1 on newstest2013 compared to 26.9 ± 0.2 with H=16 in Table 3. \n\nQ2: We conducted an additional experiment based on your suggestion. We set the encoder kernel size to 237 and the decoder kernel size to 267 at each layer to cover the whole sequence. The BLEU score drops slightly to 26.7 ± 0.1. 
This is a small difference, and we expect that slightly tuned hyperparameters would close the gap.\n\nQ3: In section 6.4, we show experiments for document summarization (CNN/DailyMail) where the input sequence is capped at 400 words and the output sequence is 57 words on average, with some examples having summaries of up to 478 words. Our results show that the model performs very well in this setting.\n\nQ4: We found it very important, as training diverged for DynamicConv without softmax-normalization (see Note in Table 3). We added a comparison of softmax-normalization to various alternatives to Appendix A of the updated paper.\nFurthermore, we were able to train the model without softmax-normalization with more aggressive gradient clipping, a lower learning rate (reducing it by a factor of 5) and more updates (increasing them 5-fold), but this slowed down training dramatically. 
23.75 BLEU for ByteNet.\n\nAnd yes, we will release the code.", "We are currently investigating a dedicated CUDA kernel and we will make the code available.", "Yes, we will share the code at a later stage!", "I found this paper is very interesting, would you like to share the source code, which is very helpful for fully understanding it", "1 what do you mean by saying “We expect a dedicated CUDA kernel to be much more efficient.”\n\nYou mean the efficiency. advantage in current CUDA is not obvious??\n\nis it possible to expect a new CUDA kernel specifically designed for your model\n\n2 code is not available\n\nCode and pre-trained models available at http://anonymized", "Hi,\n\nI have a question. You claim that your lightweight cnn can has fewer parameters and linear time. I think it is very necessary to compare with a well-know CNN sequence baseline, i.e. bytenet. it is also a pure con sequence model and shows very good performance in language modeling and translation. Have you compare with it?? Better accuracy or higher efficiency??\n\nDo you plan to you share your code? I am quite interested.", "Hi, the Code link is not available!", "The authors present lightweight convolutions and dynamic convolutions, two significant advances over existing depthwise convolution sequence models, and demonstrate very strong results on machine translation, language modeling, and summarization. Their results go even further than those of the Transformer paper in countering the conventional wisdom that recurrence (or another way of directly modeling long-distance dependencies) is crucial for sequence-to-sequence tasks. Some things that I noticed:\n\n- While you do cite \"Depthwise Separable Convolutions for Neural Machine Translation\" from Kaiser et al. (ICLR 2018), there are some missed opportunities to compare more directly to that paper (e.g., by comparing to their super-separable convolutions). Kaiser et al. somewhat slipped under the community's radar after the same group released the Transformer on arXiv a week later, but it is in some ways a more direct inspiration for your work than the Transformer paper itself.\n\n- I'd like to see more analysis of the local self-attention ablation. It's fantastic to see such a well-executed ablation study, especially one that includes this important comparison, but I'd like to understand more about the advantages and drawbacks of local self-attention compared to dynamic convolutions. (For instance, dynamic convolutions are somewhat faster at inference time in your results, but I'm unsure if this is contingent on implementation choices or if it's inherent to the architecture.)\n\n- From a systems and implementation perspective, it would be great to see some algorithm-level comparisons of parallelism and critical path length between dynamic convolutions and self-attention. My gut feeling is that dynamic convolutions significantly more amenable to parallelization on certain kinds of hardware, especially at train time, but that the caching that's possible in self-attention inference might make the approaches more comparable in terms of critical path latency at inference time; this doesn't necessarily line up with your results so far though.\n\n- You mostly focus on inference time, but you're not always as clear about that as you could be; I'd also like to see train time numbers. 
Fairseq is incredibly fast on both sides (perhaps instead of just saying \"highly optimized\" you can point to a paper or blog post?)\n\n- The nomenclature in this space makes me sad (not your fault). Other papers (particularly a series of three papers from Tao Shen at University of Technology Sydney and Tianyi Zhou at UW) have proposed architectures that are similarly intermediate between self-attention and (in their case 1x1) convolution, but have decided to call them variants of self-attention. I could easily imagine a world where one of these groups proposed exactly your approach but called it \"Dynamic Local Self-Attention,\" or even a world where they've already done so but we can't find it among the zillions of self-attention variants proposed in the past year. Not sure if there's anything anyone can do about that, but perhaps it would be helpful to briefly cite/compare to some of the Shen/Zhou work.\n\n- I think you should have tried a language modeling dataset with longer-term dependencies, like WikiText-103. Especially if the results were slightly weaker than Transformer, that would help place dynamic convolutions in the architecture trade-off space.\n\nThat last one is probably my most significant concern, and one that should be fairly easy to address. But it's already a great paper.", "The paper proposes a convolutional alternative to self-attention. To achieve this, the number of parameters of a typical convolution operation is first reduced by using a depth-wise approach (i.e. convolving only within each channel), and then further reduced by tying parameters across layers in a round-robin fashion. A softmax is applied to the filter weights, so that the operation computes weighted sums of its (local) input (LightConv).\n\nBecause the number of parameters is dramatically reduced now, they can be replaced by the output of an input-dependent linear layer (DynamicConv), which gives the resulting operation a \"local attention\" flavour. The weights depend only on the current position, as opposed to the attention weights in self-attention which depend on all positions. This implies that the operation is linear in the number of positions as opposed to quadratic, which is a significant advantage in terms of scaling and computation time.\n\nIn the paper, several NLP benchmarks (machine translation, language modeling) that were previously used to demonstrate the efficacy of self-attention models are tackled with models using LightConv and DynamicConv instead, and they are shown to be competitive across the board (with the number of model parameters kept approximately the same).\n\nThis paper is well-written and easy to follow. The proposed approach is explained and motivated well. The experiments are thorough and the results are convincing. I especially appreciated the ablation experiment for which results are shown in Table 3, which provides some useful insights beyond the main point of the paper. The fact that a linear time approach can match the performance of self-attention based models is a very promising and somewhat surprising result.\n\nIn section 5.3, I did not understand what \"head band, next band, last band\" refers to. I assume this is described in the anonymous paper that is cited, so I suppose this is an artifact of blind review. Still, even with the reference unmasked it might be useful to add some context here.", "The \"head band, next band, last band\" terminology is from https://openreview.net/forum?id=ByxZX20qFQ, which is presumably the cited anonymous paper." ]
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "iclr_2019_SkVhlh09tX", "iclr_2019_SkVhlh09tX", "Bkx6Dmk6Tm", "Byg-bBXZaX", "Byx0nlaKnX", "H1glXkLRhm", "H1gJoj8C3X", "rygh1-1upQ", "S1xedHZ_pX", "BklnHDDcT7", "iclr_2019_SkVhlh09tX", "iclr_2019_SkVhlh09tX", "iclr_2019_SkVhlh09tX", "iclr_2019_SkVhlh09tX", "iclr_2019_SkVhlh09tX", "iclr_2019_SkVhlh09tX", "Byx0nlaKnX" ]
iclr_2019_r1lYRjC9F7
Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales. Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments. Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude (~0.1 ms to ~100 s), a process we call Wave2Midi2Wave. This large advance in the state of the art is enabled by our release of the new MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms. The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music.
accepted-oral-papers
All reviewers agree that the presented audio data augmentation is very interesting, well presented, and clearly advances the state of the art in the field. The authors’ rebuttal clarified the remaining questions raised by the reviewers. All reviewers recommend strong acceptance (oral presentation) at ICLR. I would like to recommend this paper for oral presentation for a number of reasons, including the importance of the problem addressed (data augmentation is the only way forward in cases where we do not have enough training data), the novelty and innovativeness of the model, and the clarity of the paper. The work will be of interest to a wide audience beyond ICLR.
test
[ "SklV7Ix9aX", "rJllkIgcTQ", "H1eFiHgqTQ", "rklS4Hl5am", "BJl9uwaQ67", "B1efz6dgpX", "S1gnFxZjnX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and comments.\n\n* Eq (1) this is really the joint distribution between audio and notes, not the marginal of audio\n\nThank you for catching the mistake. We have updated the equation to include the marginalizing integral through the expectation over notes: P(audio) = E_{notes} [ P(audio|notes) ]\n\n* Table 4: What do precision, recall, and f1 score mean for notes with velocity? How close does the system have to be to the velocity to get it right?\n\nWe use the mir_eval library for calculating those metrics, and a full description is available here: https://craffel.github.io/mir_eval/#module-mir_eval.transcription_velocity\n\nIt implements the evaluation procedure described in Hawthorne et al. (2018).\n\nWe have updated the caption for Table 4 to make this more clear.\n\n* Table 6: NLL presumably stands for Negative Log Likelihood, but this should be made explicitly\n\nThanks, updated the table caption to make this more clear.\n\n* Figure 2: Are the error bars the standard deviation of the mean or the standard error of the mean?\n\nWe are calculating the standard deviation of the means (we did not divide by the square root of the sample size).", "Thank you for your review and comments.\n\n* MIDI itself is a rich language with ability to drive the generation of music using rich sets of customizable sound fonts. Given this, it is not clear that it is necessary to reproduce this function using neural network generation of sounds.\n\nSynthesizing realistic audio from symbolic representations is a complex task. While there are many good sounding piano synthesizers, many of them fall well short of producing audio that would be a convincing substitute for a real piano recording. For example, the SoundFont technology referenced can only play particular samples for particular notes (with some simple effects processing). It is incapable of modeling complex physical interactions between different parts of the piano, such as sympathetic resonance, and is limited by the quality and variety of samples included with a particular font (for example, the ability to play longer notes is often achieved by simply looping over a section of a sample). That said, there are some piano synthesis systems that can do a good job of modeling these types of interactions, though they are not as widely available as SoundFonts and are difficult to create. For a good overview of the difficulties and successes in piano modeling, see the paper we cited by Bank et al. \n\nOur WaveNet model is able to learn to generate realistic-sounding music with no information other than audio recordings of piano performances, information which would be insufficient for the creation of a SoundFont or physics-informed model. The “Transcribed” WaveNet model clearly demonstrates this because we use only the audio from the dataset and we derive training labels by using our transcription model. By training on the audio directly, we implicitly model the complex physical interactions of the instrument, unlike a SoundFont.\n\nIt is also interesting to note that the WaveNet model recreates non-piano subtleties of the recording, including the response of the room, breathing of the player, and shuffling of listeners in their seats. These results are encouraging and indicate that such methods could also capture the sound of more dynamic instruments (such as string and wind instruments) for which convincing synthesis/sampling methods lag behind piano. 
To clarify this point, we have added a paragraph to the Piano Synthesis section of the paper.\n\nWe have also updated the paper to further demonstrate our ability to control the output sound by adding year conditioning. Different competition years within the MAESTRO dataset had different microphone placements (e.g., near the piano or farther back in the room), and by conditioning on year, we can control whether the output sounds like a close mic recording or one with more room noise. We present several audio examples in the online supplement: https://goo.gl/6RzHZM\n\n* The further limitation of the proposed approach seems to be the challenge of decoding raw music audio with chords, multiple overlaid notes or multiple tracks. MIDI as a representation can support multiple tracks, so it is not necessarily the bottleneck.\n\nWe chose to model the music with full polyphony for a couple of reasons. One is that, as described above, there are complex interactions in the physical piano and recording environment that would not be reproducible by rendering notes separately and then layering them into a single output. Another is that the training data is presented as a single MIDI stream and the audio is not easily separated into multiple tracks.\n\n* How much does the data augmentation (audio augmentation) help?\n\nWe have added a table showing the differences between training with and without audio augmentation. In the process of analyzing these results, we realized that while audio augmentation helps significantly when evaluating on the MAPS dataset (likely because the model is more robust to differences in recording environment and piano qualities), it actually incurs a slight penalty when evaluating on the MAESTRO test set. We have updated the paper with a discussion of these differences.", "Thank you for your review and comments.\n\n* Is MAPS actually all produced via sequencer? Having worked with this data I can almost swear that at least a portion of it (in particular, the data used here for test) sounds like live piano performance captured on Disklavier. Possibly I'm mistaken, but this is worth a double check.\n\nAccording to the PDF file that accompanies the MAPS dataset (“MAPS - A piano database for multipitch estimation and automatic transcription of music”): “These high quality files have been carefully hand-written in order to obtain a kind of musical interpretation as a MIDI file.” We have updated the citation to point to this paper specifically to make things clearer. More information about the process is available on the website that contains the source MIDI files for MAPS: http://www.piano-midi.de/technic.htm\n\n* Referring to the triple of models as an auto-encoder makes me slightly uncomfortable given that they are all trained independently, directly from supervised data. \n\nThis is a very reasonable point, because there are no learned feature vectors in the latent representation (they come from labels). We have updated the text to instead refer to the model as a “generative model with a discrete latent code of musical notes”. We have kept the encoder/decoder/prior notation because it still seems appropriate. \n\n* The MAESTRO-T results are less interesting than they might appear at first glance given that the transcriptions are from train. The authors do clearly acknowledge this, pointing out that val and test transcription accuracies were near train accuracy. 
But maybe that same argument could be used to support that the pure MAESTRO results are themselves generalizable, allowing the authors to simplify slightly by removing MAESTRO-T altogether. In short, I'm not sure MAESTRO-T results offer much over MAESTRO results, and could therefore be omitted. \n\nOur goal with the MAESTRO-T dataset was to clearly demonstrate that both the language modeling tasks (Music Transformer) and audio synthesis (WaveNet) can produce compelling results without having access to ground truth labels. We agree that using the train dataset does somewhat diminish this demonstration, but argue that it does more clearly demonstrate the usefulness of the “Wave2Midi2Wave” process than just using ground truth labels. In future work, we plan to expand our use of these models to datasets that do not have ground truth labels. We have added to the conclusion to clarify this point.\n", "Thank you to all reviewers for your careful review and comments on the paper. We will address specific questions in responses to particular reviews, but we also wanted to highlight some general updates we have made since the initial submission of the paper:\n\nOur transcription results have improved (Note w/ offset F1 score on MAPS configuration 2 test went from 64.03 to 66.33) due to two modifications:\n* We added an offset detection head to the model, inspired by Kelz et al. (2018).\n* We trained the transcription model for more steps (670k instead of 178k).\n\nOur synthesis results have improved because we switched to using a larger receptive field for the Piano Synthesis WaveNet model (6 instead of 3 sequential stacks).\n\nIn order to more accurately compare our WaveNet models, we also trained an unconditioned WaveNet model using only the audio from the combined MAESTRO training/validation splits, with no conditioning signal.\n\nWe improved our listening study by:\n* Rerunning it with the improved WaveNet model\n* Switching to 20-second samples instead of 10-second samples\n* Clarifying our question to ask the raters which clip they thought sounded more like a recording of somebody playing a musical piece on a real piano.\n\nThe study results now show that there is not a statistically significant difference in participant ratings between real recordings and samples from the WaveNet Ground/Test and WaveNet Transcribed/Test models.\n\nTo better control the timbre of the synthesis output, we implemented year conditioning, which can produce outputs that mimic the microphone placement of the different competition years in the dataset.\n\nFinally, we decided to name the process of transcription, MIDI manipulation, and then synthesis Wave2Midi2Wave.", "This paper describes a new large-scale dataset of aligned MIDI and audio from real piano performances and presents experiments using several existing state-of-the-art models for transcription, synthesis, and generation. As a result of the new dataset being nearly an order of magnitude larger than existing resources, each component model (with some additional tuning to increase capacity) yields impressive results, outperforming the current state-of-the-art on each component task. \nOverall, while the modeling advances here are small, if any, I think this paper represents a solid case study in collecting valuable supervised data to push a set of tasks forward. The engineering is carefully done, well-motivated, and clearly described. The results are impressive on all three tasks. 
Finally, even if the modeling ideas here do not, the dataset itself will go on to influence and support this sub-field for years to come. \nComments / questions:\n-Is MAPS actually all produced via sequencer? Having worked with this data I can almost swear that at least a portion of it (in particular, the data used here for test) sounds like live piano performance captured on Disklavier. Possibly I'm mistaken, but this is worth a double check.\n-Referring to the triple of models as an auto-encoder makes me slightly uncomfortable given that they are all trained independently, directly from supervised data. \n-The MAESTRO-T results are less interesting than they might appear at first glance given that the transcriptions are from train. The authors do clearly acknowledge this, pointing out that val and test transcription accuracies were near train accuracy. But maybe that same argument could be used to support that the pure MAESTRO results are themselves generalizable, allowing the authors to simplify slightly by removing MAESTRO-T altogether. In short, I'm not sure MAESTRO-T results offer much over MAESTRO results, and could therefore be omitted. \n", "The paper addresses the challenge of using neural networks to generate original and expressive piano music. The available techniques today for audio or music generation are not able to sufficiently handle the many levels at which music needs to be modeled. The result is that while individual music sounds (or notes) can be generated at one level using tools like WaveNet, they don't come together to create a coherent work of music at the higher level. The paper proposes to address this problem by imposing a MIDI representation (piano roll) in the neural modeling of music audio that serves as an intermediate (and interpretable) representation between the analysis (music audio -> MIDI) and synthesis (MIDI -> music audio) stages in the pipeline of piano music generation. In order to develop and validate the proposed learning architecture, the authors have created a large data set of aligned piano music (raw audio along with the MIDI representation). Using this data set for training, validation and test, the paper reports on listening tests that showed slightly less favorable results for the generated music. A few questions and comments are as follows. MIDI itself is a rich language with the ability to drive the generation of music using rich sets of customizable sound fonts. Given this, it is not clear that it is necessary to reproduce this function using neural network generation of sounds. The further limitation of the proposed approach seems to be the challenge of decoding raw music audio with chords, multiple overlaid notes or multiple tracks. MIDI as a representation can support multiple tracks, so it is not necessarily the bottleneck. How much does the data augmentation (audio augmentation) help?", "This paper combines state-of-the-art models for piano transcription, symbolic music synthesis, and waveform generation, all using a shared piano-roll representation. It also introduces a new dataset of 172 hours of aligned MIDI and audio from real performances recorded on Yamaha Disklavier pianos in the context of the piano-e-competition. 
\n\nBy using this shared representation and this dataset, it is able to expand the amount of time over which it can coherently model music from a few seconds to a minute, necessary for truly modeling entire musical pieces.\n\nTraining an existing state-of-the-art transcription model on this data improves performance on a standard benchmark by several percentage points (depending on the specific metric used).\n\nListening test results show that people still prefer the real recordings a plurality of the time, but that the syntheses are selected over them a fair amount. One thing that is clear from the audio examples is that the different systems produce output with different equalization levels, which may lead to some of the listening results. If some sort of automatic mastering were done to the outputs, this might be avoided.\n\nWhile the novelty of the individual algorithms is relatively meager, their combination is very synergistic and makes a significant contribution to the field. Piano music modeling is a long-standing problem that the current paper has made significant progress towards solving.\n\nThe paper is very well written, but there are a few minor issues:\n* Eq (1) this is really the joint distribution between audio and notes, not the marginal of audio\n* Table 4: What do precision, recall, and f1 score mean for notes with velocity? How close does the system have to be to the velocity to get it right?\n* Table 6: NLL presumably stands for Negative Log Likelihood, but this should be made explicit\n* Figure 2: Are the error bars the standard deviation of the mean or the standard error of the mean?\n" ]
[ -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, 5, 2, 4 ]
[ "S1gnFxZjnX", "B1efz6dgpX", "BJl9uwaQ67", "iclr_2019_r1lYRjC9F7", "iclr_2019_r1lYRjC9F7", "iclr_2019_r1lYRjC9F7", "iclr_2019_r1lYRjC9F7" ]
iclr_2019_r1xlvi0qYm
Learning to Remember More with Less Memorization
Memory-augmented neural networks consisting of a neural controller and an external memory have shown potential in long-term sequential learning. Current RAM-like memory models maintain memory access at every timestep; thus, they do not effectively leverage the short-term memory held in the controller. We hypothesize that this scheme of writing is suboptimal in memory utilization and introduces redundant computation. To validate our hypothesis, we derive a theoretical bound on the amount of information stored in a RAM-like system and formulate an optimization problem that maximizes the bound. The proposed solution, dubbed Uniform Writing, is proved to be optimal under the assumption of equal timestep contributions. To relax this assumption, we introduce modifications to the original solution, resulting in a solution termed Cached Uniform Writing. This method aims to balance between maximizing memorization and forgetting via overwriting mechanisms. Through an extensive set of experiments, we empirically demonstrate the advantages of our solutions over other recurrent architectures, achieving state-of-the-art results in various sequential modeling tasks.
accepted-oral-papers
A well-written paper that motivates new memory-writing methods in memory-augmented neural networks through theoretical analysis. Extensive experimental analyses support and demonstrate the advantages of the new solutions over other recurrent architectures. Reviewers suggested extending and clarifying the analysis presented in the paper, for example, for different memory sizes. The paper was revised accordingly. Another important suggestion was considering ACT as a baseline. The authors explained clearly why it wasn't considered a baseline and updated the paper to include references and explanations as well.
train
[ "HyxFDRZuCX", "B1lqiDbOAQ", "Byl6cSbdC7", "SygH6Dzjpm", "BJxv6enuhm", "SJeG8ByF6m", "Bklj1SJKa7", "rJeuIPUjnm" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "Thanks for your responses and paper revisions. I still agree this is a nicely conducted piece of research and will retain my score of 8.\n\nRe. (1) & (2) I see, in most cases when people perform the copy task they train on programatically generated sequences that essentially cannot be overfit to (e.g. because the domain of possible sequences to copy is very very large). In this setting more than 10,000 steps is usually useful, and I would expect the NTM to always converge to low loss-error eventually. However I now understand your setup a little better, thanks for clarifying.\n\nRe (3). Yes but my point is that it was jarring not to see the Bahdanau attention, which uses dot-product as a distance metric, or at least have an ablation where you show that your more general MLP attention performs significantly better. But I don't feel strongly.\n\nRe (5) Well I'm sure you would have included it in the main table if your model had better performance ;-) ", "Dear Reviewers, \n\nThank you for your insightful comments and valuable suggestions. We have revised our manuscript according to your feedback. In this revision, we have added results for copy task with larger memory (Appendix H) and analyses on memory operations (Appendix I). For document classification running details, we have included the statistics for 4/6 datasets (Appendix M). We have been working with the other two datasets and will put the complete result in the next revision. ", "Thank you for your constructive comments. We would like to address your concerns as follows,\n\n1. In Appendix H of the last revision, we included an analysis on memory writing for the Sinusoidal Regression task, which gives some insight into the difference between DNC, UW and CUW's writing strategy. We have conducted other analyses on some synthetic tasks and put the results in Appendix I in this revision. We hope that together with the one in the Sinusoidal Regression task (Appendix J in this revision), these inspections provide readers with better understanding on operations inside memory under regular and irregular writing policies. For real-world tasks, it is very hard to visualize the memory operations with realistic data as they are are very long sequences with unknown properties. Hence, we will leave that for future works.\n\n2. We agree with you on the addition of larger memory experiments. We have included one in Appendix H in this revision. For flatten image recognition tasks, we tried with different memory sizes {15, 30, 60} and reported the best results for DNC. For UW and CUW, as increasing memory size makes these methods approach to DNC's performance, their memory sizes are fixed to 15. ", "This paper looks at ways to improve memory-writing in memory augmented neural networks. Authors proposed two methods to compare against \"regular writing\" method as well as compare against each other, namely \"uniform writing\" and \"cached uniform writing\". Latter one attempts to utilize a small size memory efficiently by introducing memory overwriting in other words \"forgetting\".\n\nAuthors started with a very interesting section (namely section 2.1.1) and presented a theoretical formulation of \"remembering\" capability of RNNs, which is fundamental to this work and I really liked it that they did not jump to the proposed methods right away and instead focused on something very fundamental. Authors presented details of the proposed methods very well, and evaluated them on simple tasks such as \"double task\", \"synthetic reasoning\", etc. 
as well as on more challenging/real tasks such as \"document classification\" or \"image recognition task from MNIST\". I really liked the fact that the paper looked at different tasks instead of going with one. Results are convincing overall, especially for CUW. One thing that would improve the paper is the analysis part.\n\nDue to having 5+ tasks in the results section, I got the feeling that it is hard to follow the analysis presented by the authors within each task as well as across tasks. Also, in some tasks the analysis is quite limited. It would be great for the authors to zoom into the memory write operations in each task (e.g., taking a diff between RW and URW, for example, and seeing how the memory changes and, more importantly, how the \"remember\" capability changes) and provide more stats on these, and do this across tasks in one section rather than in different sections allocated for each task. Also, the analysis in more realistic tasks (e.g., document classification) can be extended as well, rather than only comparing against state-of-the-art methods in terms of the final metric.\n\nWhile reviewing the paper, I couldn't help asking why larger memories were not tried. I can see the motivation of trying to use a smaller augmented memory; however, experimentation with slightly larger augmented memories would be useful for the audience to draw some conclusions. I'm especially curious about the effect of memory size on accuracy in tasks like image recognition or document classification.\n\n", "This paper deals with Memory Augmented Neural Networks (MANNs) and introduces an algorithm which allows full writes to the dense memory to be executed only every L timesteps. The controller produces a hidden output at most timesteps, which is appended to a cache. Every L steps, soft attention is used to combine this cache of N hidden states into a single one, and then this is used as the input hidden state for the controller, with the outputs performing a write in the full memory M, along with clearing the cache.\n\nThe authors first derive \"Uniform Writing\" (UW), which updates the memory at regular intervals instead of every timestep. The derivation is based on the \"contribution\", which is the norm of the gradient of some input timestep to some hidden state (potentially at a different timestep). I am not clear on whether this terminology for the quantity is novel; if this is the case, maybe the authors should state this more clearly. UW says that if all timesteps are equally important, and only D writes can be made in a sequence of length T, then writes should be done every T/(D+1) steps. I have not checked the proof in detail, but it seems reasonable that this would maximise the contribution quantity introduced. I am less clear on whether this is obviously the right thing to do - sometimes this value is referred to in relation to information, but that term does not strictly seem to be being used in the information theory sense (no mention of bits or nats anywhere). Regardless, as the authors point out, in real problems there are obviously timesteps which have less or no useful information, and clearly UW is mostly defined in order to build towards CUW.\n\nCUW expands on UW by adding the cache of different hidden states, and using soft attention over them. This feels like a reasonable step, although I would presume there are times when the L hidden states were collected over timesteps with no information, and so the resulting write is not that useful, and times when all of the L timesteps contain different useful information. 
In these circumstances it seems like the problem of getting the *useful* information into the memory is still present, as the single write done with the averaged hidden state will need to contain lots of information, which might be more ideally written over several timesteps.\n\nThe experiments are well described and overall the paper seems reproducible. The standard toy datasets of copy / reverse / sinusoid are used. The results are interesting - regular DNC with memory size 50 performs surprisingly badly on clean Sinusoid; my guess would be that with hyperparameter tuning this could be improved upon. I'm not sure that using exactly the same hyperparameters for a wide variety of models is appropriate - even with optimizers like Adam and RMSProp, I would want to see at least some sweeping for the best hyperparams, and then graphs like figure 3 should show error bars averaged across multiple runs with the best per-model hyperparameters. However, the DNC with CUW seems to perform well across all synthetic tasks.\n\nThere is no mention of Adaptive Computation Time/ACT (Graves, https://arxiv.org/abs/1603.08983) throughout the paper, which is surprising considering Alex Graves' models form two of the baselines used throughout the paper. ACT aims to execute an RNN a variable number of times, usually to do >1 timestep of processing for a single timestep of input. In the context of this paper, I believe it could be adapted to do either zero or one step of computation per timestep, and that would yield a very comparable network where the LSTM controller always executes, and writes to the memory only happen sometimes. Given that it allows a learned process to decide whether to write, as opposed to having a fixed L which separates full writes, this should have the potential to outperform CUW, as it could learn that at certain times, writes must happen at every step. In my view ACT is attempting to solve essentially the same problem as this paper, so it should either be included as a baseline, or the manuscript should be updated to explain why this is not an appropriate comparison.\n\n\nI think this is an interesting paper, trying to make progress on an important problem. The results look good, but I can only give a borderline score due to missing ACT numbers, and a few other unclear points. The addition of ACT experiments, and error bars on certain results, would change my mind here.\n\n\nNotes:\n\n\"No solution has been proposed to help MANNs handle ultra long sequence\" - (Rae et al 2016) is an attempt to do this, by improving the complexity of reads / writes. This allows bigger memory and longer sequences to be processed.\n\n\"Current MANNs only support dense writing\" - presumably this means dense as in 'every timestep', but this terminology is overloaded - you could consider NTM / DNC as doing dense writing, and then the work of Rae et al 2016 as doing sparse writing.\n\nIn my experience training these kinds of RNNs can have reasonably high variance across seeds - figures 2 & 3 should have error bars, and especially Table 4 as that contains the most important results. Getting 99 percent accuracy when the previous SOTA is only 0.1% lower is only really meaningful if the standard deviation across seeds is very small.\n\nAppendix A: the 'by induction' result - I believe there is an error; it should be:\n\nh_t = \sum_{i=1}^t U_{t-i} W x_i + C\n\nas W is applied to the inputs, before the repeated applications of U? 
I believe the rest of the derivation still holds after the correction.\n\n", "Reviewer 1:\n\nThank you for your constructive comments. We would like to address your concerns as follows:\n\n1. We are aware of the unexpected performance of NTM+RNN with 14 memory slots. It should be noted that the results reported in Table 2 (c) are the average accuracies over multiple runs, in which NTM+RNN converges sometimes but not always under our training setting. To demonstrate that our UW is helpful under various training settings, we have reassessed the models with different learning rates (0.001, 0.0001) and gradient clipping (1, 5, 10). We have reported the mean performance with error bars in the updated manuscript. \n\n2. There are two reasons for stopping after 10,000 training steps. First, the learning curves look stable and show no promise of a big improvement around 10,000 steps. Second and more importantly, in our synthetic tasks, training with more steps means the models have access to more training data and are likely to gradually overfit. This behavior is clearer when the number of memory slots increases, where both regular and uniform writing often solve the synthetic tasks perfectly if they are trained with unlimited data. We want to avoid that setting and focus on measuring the performance on unseen test data given a moderate amount of training samples, as in reality the training data is very limited. \n\n3. Eq. (9) is inspired by Bahdanau attention [1] (the “concat”), in which the alignment model is implemented as a neural network with additional parameters. We think this mechanism will be more flexible than your direct softmax query (the “dot”) as the attention is not restricted to similarity. Also, we want to utilize read values from the memory, which may give useful information for the attention. The “concat” form naturally suits our purpose. \n\n4. Thank you for pointing out the typo in S.2.2.2. We have fixed the typo in this revision.\n\n5. In Table 3, we aim to validate our method against other recurrent baselines in their capacity to memorize efficiently. The Transformer, on the other hand, accesses all timesteps and thus does not need to manage memorization. For completeness, we have now included the results of the Transformer, together with the Dilated CNN, as non-recurrent baselines in Appendix I in this revision. \n\n6. We have conducted the copy task with bigger memory (number of memory slots=50 and sequence length=500). At this moment, after 40,000 batches, DNC+UW's best validation accuracy is 38.1% while DNC's is 17.2%. The final results will be put in the appendix in the next revision. \n\n[1] Bahdanau et al., Neural Machine Translation by Jointly Learning to Align and Translate. ICLR'15", "Thank you for your helpful comments. We would like to address your concerns as follows:\n\n1. To the best of our knowledge, this is the first time the norm \left\Vert \frac{\partial h_{t}}{\partial x_{i}}\right\Vert has been used in measuring the memorization capacity of a recurrent network, which can be regarded as a novelty. We have made this point clearer in this revision.\n\n2. Regarding your concern on the validity of our quantity, we agree that there is no direct link to “information” in the information-theoretic sense. Actually, we approached the problem from a different viewpoint. In recurrent networks, one often makes predictions based on h_{t}, which can be considered as a function of the timestep inputs, i.e., h_{t}=f\left(x_{1},x_{2},...,x_{t}\right). 
One way to measure how much an input x_{i} contributes to h_{t} is to calculate \left\Vert \frac{\partial h_{t}}{\partial x_{i}}\right\Vert . If the norm equals zero, h_{t} is constant w.r.t. x_{i}. That is, h_{t} does not contain any “information” on x_{i}. A bigger norm implies more influence of x_{i} on h_{t}. As we cannot know in advance which (or all) inputs are required for h_{t} to make good predictions, a reasonable policy is to ensure that, on average, all of these norms do not approach zero, which leads to a maximization problem as shown in our paper. Our empirical results have demonstrated the benefit of following this principle, which enhances our belief that this is the right thing to do. \n\n3. We have added hyper-parameter tuning for the Sinusoidal Regression task and updated the results in this revision.\n\n4. Your reasoning on the CUW operation is correct. However, even though writing at every timestep can capture several important events, this behavior will eventually lead to overwriting and loss of information because of the finite memory size. Therefore, we believe a balance between following a generic principle and allowing a flexible learning mechanism is beneficial. CUW is one possible solution, and we need further investigation to find better writing strategies in future work.\n\n5. We are aware of ACT [1] and decided not to include it in our references as the goals of our paper and ACT are totally different. While our paper aims to answer the question “when to write to the memory”, ACT aims to answer the question “how many computational steps to take”. However, we agree that if adapted as the reviewer suggests, ACT supports a simple mechanism of learning to write or not to write and should be cited in related works (updated in this revision).\nNevertheless, the adapted ACT is somewhat equivalent to an LSTM controller with a DNC memory module. When the number of computation steps n is either 0 or 1, the ACT mean-field approximation is equivalent to multiplying the state with a learnable gate, and we think the output gate in the LSTM already supports that. Extending this to the memory level, it is equivalent to multiplying the writing weight with a learnable gate (if the gate equals zero, there is no writing at that timestep). The DNC is equipped with a write gate g_{t}^{w}, which performs the same function (see Eq. (2) in [2]). Hence, we strongly believe that an ACT baseline is unnecessary as the DNC is capable of deciding whether to write at each timestep. In theory, the DNC itself can learn the uniform writing strategy. However, in practice, it is very hard to learn a particular writing scheme without any guidance. This emphasizes the importance of searching for a writing policy that is guided by optimal principles instead of trying to learn everything end-to-end. The fact that DNC+CUW outperforms DNC in various experiments further validates our argument. \n\n6. Our original claim is “no solution has been proposed to help MANNs handle ultra-long sequences given limited memory”. The authors in [3] aim to learn longer sequences by scaling the memory size, which is not conditioned on our limited-memory setting. To make our claim less confusing, we have added another sentence to differentiate between our work and [3].\n\n7. We admit the term “dense writing” is confusing, and thank you for pointing it out. The same confusion may apply to the term “sparse writing”. Therefore, we have replaced the two terms with “regular writing” and “irregular writing”, respectively. \n\n8. 
We agree with you on the addition of error bars on Figs. 2, 3 and Table 4. We have collected and included these statistics for the synthetic tasks in this version of our paper. We have been working on the document classification task and hope that we can include error bars for this task before the revision deadline. \n\n9. Your comment on the order of U and W is correct. We have fixed that in this revision. Thank you for your detailed reading.\n\n[1] Graves et al., Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983 (2016)\n[2] Graves et al., Hybrid computing using a neural network with dynamic external memory. Nature, 2016. \n[3] Rae et al., Scaling memory-augmented neural networks with sparse reads and writes. NIPS'16", "This paper investigates the average contribution of a sequence input to the contents of memory and derives a simple scheme to maximize the information content in memory, which is essentially to write at uniformly spaced intervals. Furthermore, they present an attention-based version, where the network caches all hidden states in an interval and selects the hidden state to store via attention. \n\nThe paper is very well written and has a nice balance of relevant theoretic motivation and experiments. Furthermore, the question that the authors are tackling --- how should we compress information into external memories --- feels important and under-explored. The fact that the resulting scheme is simple is nice, because it's easy for people to try, and it now has some motivation beyond a heuristic decision.\n\nI think this paper will have impact in opening up more comprehensive research into the reduction of redundancy in the external memories of neural networks, and also could be instantly impactful for people using DNCs and NTMs --- especially since we see the incorporation of UW / CUW can help bridge the gap with (or even surpass) LSTMs for the modeling of natural data. As such I think it is a clear accept. \n\n---\n\nComments to the authors:\n\nThe results in Figure 2 (c) I think are misleading. The NTM with an RNN controller can solve this task; the limit of 10,000 steps implies that the model may converge to some 50% value with 14 slots, but I am absolutely certain that the NTM + RNN controller would converge in 10,000 steps with a careful tuning of gradient clipping and learning rate. I think this is basically a false result. Furthermore, I would like to really know what the best final performance of the models is on this task once converged; it's not clear if 10,000 steps was enough.\n\nFor equation (9), was it necessary to construct the attention weights in this way? How much better was it than a direct softmax query: softmax(h_{t-1}^T d_j)? If you are backpropagating through the attention then the network can shape the hidden states to facilitate the relevant attention, as well as contain the information.\n\nIn the second paragraph of S2.2.2 you have \"a_{t, j} is the attention score\" but you should have \"\\alpha_{t, j} is the attention score\".\n\nTable 3: just include the Transformer results in the table!? The reasoning to exclude it is not really coherent.\n\nIt would have been nice (and would raise my score) to see the UW scheme operating with a large(ish) number of memory slots.\n" ]
[ -1, -1, -1, 7, 7, -1, -1, 8 ]
[ -1, -1, -1, 3, 4, -1, -1, 4 ]
[ "SJeG8ByF6m", "iclr_2019_r1xlvi0qYm", "SygH6Dzjpm", "iclr_2019_r1xlvi0qYm", "iclr_2019_r1xlvi0qYm", "rJeuIPUjnm", "BJxv6enuhm", "iclr_2019_r1xlvi0qYm" ]
iclr_2019_rJEjjoR9K7
Learning Robust Representations by Projecting Superficial Statistics Out
Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to the texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method is built on the reverse gradient method that pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to GLCM representation's. We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training.
accepted-oral-papers
The paper presents a new approach for domain generalization whereby the original supervised model is trained with an explicit objective to ignore so-called superficial statistics present in the training set but which may not be present in future test sets. The paper proposes using a differentiable variant of the gray-level co-occurrence matrix to capture the textural information and then experiments with two techniques for learning feature invariance. All reviewers agree the approach is novel, unique, and potentially of high impact to the community. The main issues center around reproducibility as well as the intended scope of problems this approach addresses. The authors have offered to include further discussions in the final version to address these points. Doing so will strengthen the paper and aid the community in building upon this work.
train
[ "S1xWkt7QCX", "BJet3_XXRQ", "S1gEYuQXC7", "SJxqbOQmRX", "rJxbWynhhm", "H1ehlWduhm", "HJee7cfwh7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the strong positive assessment of our work. We’re glad that you appreciated the originality of our approach, the value of our new datasets, and the quality of our exposition. We will continue to improve the draft in the camera-ready version.", "Thanks for a detailed review. We are grateful both for your big-picture feedback and for your extensive granular suggestions to improve the exposition of our paper. We were glad to see that you appreciated our creativity in using GLCM and recognized the modularity of our design. Your question regarding F_L and F_P is insightful and we’re glad that you identified this missing detail in the paper. We compared evaluation with F_L and F_P and discovered that performance was equivocal. This favors the use of F_P, allowing us to use the machinery of the GLCM at training time but discarding it at test time. We promise to add this discussion and supporting experiments to the camera-ready version. Additionally, we will revise the first paragraph of 3.2 per your suggestions and fix the numerous small typos and type-setting corrections that you identified. Thanks again for your generous feedback and attention to detail.", "Thank you very much for these comments. We are glad that you appreciated the paper’s overall aims and recognized the general applicability of the methodology that we propose. We are also grateful for your constructive suggestions:\n\n* To address your concerns about reproducibility we will add an appendix providing extensive detail about all heuristics employed during training. Additionally, we plan to release open source version of all of our code upon publication. \n\n* Regarding Table 2: thanks for pointing this out. We agree that while an argument can be found in the main text, Table 2 is poorly described in the caption and must be better presented in the camera-ready version. In short, domains D and W here overlap significantly. Therefore a model trained on one and evaluated on the other perform well, and we conjectured that discarding the superficial information can actually degrade performance. \n", "We would like to thank all of the reviewers for their constructive reviews. Overall, we are glad to see that all three reviewers champion the paper, appreciating the paper’s overall aim, creativity in revisiting GLCM, proposed experiment set-ups, and the strength of the empirical results. We are also grateful for the reviewers’ constructive suggestions which will help to improve the camera-ready version of the paper. Please find comments We will answer the reviewers’ comments individually. ", "The paper is clear regarding motivation, related work, and mathematical foundations. The introduced cross-local intrinsic dimensionality- (CLID) seems to be naive but practical for GAN assessment. Notably, the experimental results seem to be convincing and illustrative. \n\nThe domain generalization idea from CNN-based discriminative feature extraction and gray level co-occurrence matrix-based high-frequency coding (superficial information), is an elegant strategy to favor domain generalization. Indeed, the linear projection learned from CNN, and GLCM features could be extended to different real-world applications regarding domain generalization and transferring learning. 
So, the paper is clear to follow and provides significant insights into a current topic.\n\nPros: \n- Clear mathematical foundations.\n- The approach can be applied to different up-to-date problems.\n- Though the obtained results are fair, the introduced approach could lead to significant breakthroughs regarding domain generalization techniques.\n\nCons:\n- Some experimental results can be difficult to reproduce. Indeed, the authors claim that the training heuristic must be enhanced.\n- Table 2 results are not convincing.\n", "Summary:\nThe paper proposes an unsupervised approach to identify image features that are not meaningful for image classification tasks. The goal is to address the domain adaptation (DA)/domain generalization (DG) issue. The paper introduces a new learning task where the domain identity is unavailable during training, called unguided domain generalization (UDG). The proposed approach is based on an old method of using the gray-level co-occurrence matrix, updated to allow for differentiable training. This new approach is used in two different ways to reduce the effect of background texture in a classification task. The paper introduces a new dataset, and shows extensive and carefully designed experiments using the new data as well as existing domain generalization datasets.\n\nThis paper revisits an old idea from image processing in a new way, and provides an interesting unsupervised method for identifying so-called superficial features. The proposed block seems to be very modular in design, and can be plugged into other architectures. The main weakness is that it is a bit unclear exactly what is being assumed as \"background texture\" by the authors.\n\n\nOverall comments:\n- Some more clarity on what you mean by superficial statistics would be good, e.g., by drawing samples. Are you assuming the object is centered? Somehow filling the image? Different patch statistics? How about a texture classification task?\n- Please derive why NGLCM reduces to GLCM in the appendix. Also show the effect of dropping the uniqueness constraint.\n- Section 3.2: I assume you are referring to an autoencoder-style architecture here. Please rewrite the first paragraph. The current setup seems to indicate that you are doing supervised training, since you have labels y, but then you talk about a decoder and encoder.\n- Section 3.2: Please expand upon why you use F_L for training but F_P during testing.\n\n\nMinor typos/issues:\n- Last bullet in Section 1: DG not yet defined, only defined in Section 2.\n- page 2, Section 2, para 1: data collection conduct. Please reword.\n- page 2, Section 2, para 2: Sentence: For a machine learning ... There is no object in this sentence. Not sure what you are trying to define.\n- page 2, Section 2, para 2: Is $\mathcal{S}$ and $\mathcal{T}$ not intersecting?\n- page 2, Section 2.1: Heckman (1977), use \citep\n- page 2, Section 2.1: Manski, citep and missing year\n- page 3, Section 2.1: Kumagai, use citet\n- page 3, Section 3.1: We first expand ... --> We first flatten A into a row vector\n- page 4, Section 3.1: b is undefined. I assume you mean d?\n- page 4, Section 3.1: twice: contrain --> constraint\n- page 4, Section 3.2: <X,y> --> {X,y} as used in Section 3.1.\n- page 4, Section 3.2, just below equation: as is introduced in the previous section. New sentence about MLP please. 
And MLP is not defined.\n- page 4, Section 3.2, next paragraph: missing left bracket (\n- page 4, Section 3.2: inferred from its context.\n- page 5, Section 4: popular DG method (DANN)\n- page 7: the rest one into --> the remaining one into\n- page 8: rewrite: when the empirical performance interestingly preserves.\n- page 8, last sentence: GD --> DG\n- A2.2: can bare with. --> can deal with.\n- A2.2: linear algebra and Kailath Variant. Unsure what you are trying to say.\n- A2.2: sensitive to noises --> sensitive to noise.\n", "The paper proposes a novel differentiable neural GLCM network which captures the high-frequency textural information and discards the lower-frequency semantic information so as to address the domain generalization challenge. The authors also propose an approach, “HEX”, to discard the superficial representations. Two synthetic datasets are created to demonstrate the method's advantages in scenarios where the domain-specific information is correlated with the semantic information. The proposal is well structured and written. The quality of the paper is excellent in terms of novelty and originality. The proposed methods are evaluated thoroughly through experiments with different types of datasets and have been shown to achieve good performance. " ]
[ -1, -1, -1, -1, 7, 7, 9 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "HJee7cfwh7", "H1ehlWduhm", "rJxbWynhhm", "iclr_2019_rJEjjoR9K7", "iclr_2019_rJEjjoR9K7", "iclr_2019_rJEjjoR9K7", "iclr_2019_rJEjjoR9K7" ]
iclr_2019_rJVorjCcKQ
Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6x to 20x increases in throughput for verifiable inference, and 4x to 11x for verifiable and private inference.
accepted-oral-papers
The authors propose a new method of securely evaluating neural networks. The reviewers were unanimous in their vote to accept. The paper is very well written, the idea is relatively simple, and so it is likely that this would make a nice presentation.
train
[ "Hkgpl17URQ", "BJl-sAfICQ", "Hyx47Cf80Q", "SylJsazLRm", "rkgtvmXc2m", "r1eyl4tHhQ", "HJg2YxyGoQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In response to the below reviews, we have made the following main changes to our paper:\n\n- As suggested by the second reviewer, we have moved some of the content from the Appendix back to the main body. These include the microbenchmark results, as well as a discussion of the challenges in extending Slalom to DNN training.\n\n- We have included new results for evaluating Slalom on ResNet models. This showcases Slalom's broad applicability to more complex models than the feed-forward architectures we had evaluated so far. We also illustrate Slalom's graceful scaling to very deep networks (e.g., the 152-layer ResNet with a 4.6x increase in private and verifiable throughput over the baseline).\n\n- We have unified all figures to primarily display Slalom's *savings* over the baseline, rather than the raw throughput in images per second.", "We thank the reviewer for the extremely positive review of our paper. As noted in our responses to the other reviewers, we have made some editorial changes to the paper (mainly moving some experiments from the Appendix into the main body), and we have included additional results for ResNet architectures as suggested by the second reviewer.\n\nTo answer your insightful questions:\n\n1. This is a great question, and such a system has recently been suggested in [1] (see below), which we now reference in our writeup. Probably the main differentiator is that in such a system, every single user requires trusted hardware (e.g., a recent Intel CPU), whereas in the cloud-outsourcing scheme which we and others have considered, only the server requires specialized hardware. \nUsing Slalom's techniques in a client-side execution might work, but the downside is that our algorithm assumes that the untrusted host has knowledge of the computed model. \nOne client-side application where Slalom might come in useful is for guaranteeing integrity in Federated Learning. Here, each client evaluates a model on their own data, and the server may need some guarantees that those computations were performed correctly.\n\n2. We have added a note for this, thanks.\n \n3. Our evaluation primarily focuses on throughput. Low latency is also desirable of course, but this might require using a different untrusted processor as GPUs tend to be outperformed by CPUs when operating on single-element batches. In our experiments, we evaluate batches of images (around 16 images) on the GPU, and then verify a single image at a time in the TEE. By replacing the GPU with a high-end CPU, we could thus also achieve low latency using Slalom.\n \n4. VGG16 has the particularity that the feature extraction part (i.e., without the dense layers) makes up roughly 95% of the model's computation, but uses only maybe 5% of the weights (simply because VGG16's first dense layer is huge). So outsourcing only the dense layers would unfortunately leave the client to do most of the work.\nWhat we had in mind here was the use of VGG16's features as a building block for other models (e.g., object detection with SSD). We performed some preliminary experiments with SSD and a VGG16 backend and found that Slalom could also achieve around 10x speed improvements for such object-detection tasks.\n\n5. Indeed, as you correctly note, the pre-computation takes the same time as the baseline computation in the TEE. 
The results in Section 4.3 do not include these pre-computations, as we would of course not obtain any savings by doing so.\nA similar approach is taken by many cryptographic approaches to secure outsourcing (of ML tasks or other computations). The rationale is that the pre-computation is data-independent and can be performed offline (e.g., during periods of low system use) and thus doesn't count towards the cost of the online throughput (or latency, as discussed above). \n\n[1] MLCapsule: Guarded Offline Deployment of Machine Learning as a Service, Hanzlik et al., https://arxiv.org/abs/1808.00590", "We thank the reviewer for the positive review and insightful comments. It is encouraging to hear that our paper was easy to read for someone outside the field of HW security and privacy.\n \nWe have made some changes to our manuscript to better illustrate the limitations and scalability of our approach. We have moved our micro-benchmarks from the Appendix to the main body, as suggested by the second reviewer. These benchmarks show that as the computation performed in a layer gets larger (e.g., more channels in a convolution), the savings incurred by Slalom increase!\nIn principle, Slalom scales linearly with the number of layers added to a model, so long as all the pre-computed values do not exceed SGX's memory limits. To illustrate, we have experimented with the family of ResNet architectures, which range from 18 to 152 layers (and 44MB to 230MB of weights). For five different models (18, 34, 50, 101 and 152 layers), Slalom provides large savings in throughput (4.4x-14.4x), and the savings tend to be larger for larger networks. We have added these results to our manuscript and we believe they illustrate Slalom's applicability and scalability to large models. In particular, the 152-layer ResNet model is among the deepest and most accurate models trained on ImageNet to date.\nLastly, we have also moved a section from the Appendix to the main body wherein we discuss some limitations and challenges in extending Slalom to DNN training. The main issues here are that the weights change during training, which hinders quantization, the pre-computation of Freivalds' checks, and the pre-computation of blinding factors for privacy.\n \nRegarding the usefulness of integrity checks, these are also meant as a security guarantee. The threat model we consider here is that the server might *intentionally* compute incorrect values and send these back to the client. The SafetyNets paper by Ghodsi et al. contains a good discussion of reasons why a client may want integrity guarantees from the server. One example is a \"model-downgrade attack\", where the server runs a cheaper (i.e., smaller) model than advertised, to minimize costs. More generally, it is commonly agreed upon in the cryptographic community that privacy without integrity is an insufficient guarantee (e.g., by tampering with a client's results and observing the side effects, a server might later learn something about the client's data).\n\nFor quantization, we simply assume that inputs are standard RGB images in the range [0, 255]. We then choose the quantization scales for inputs and weights so that none of the intermediate values in the network ever grows beyond p=2^23. 
The inputs of all layers after the first one are simply assumed to lie in the interval [-p/2, p/2].", "We thank the reviewer for the positive review and insightful comments.\n\nWe followed the suggestion to make use of the 10 pages of content (we originally found the ICLR Call for Papers to be somewhat unclear in this regard). We have moved parts of the Appendix into the main body, i.e., the SGX microbenchmarks as well as our discussion of challenges with extending Slalom to DNN training.\n\nWe agree that the notation PR_{s \overset{s}{\gets}\mathbb{S}^{n}}[...] is overly verbose and we have removed the redundant subscript in our updated manuscript. We have also added a definition for a negligible function.\n \nWe followed the great suggestion to apply Slalom to more complex architectures. We have added experiments with ResNet models which make use of residual connections (handling concatenation layers would require similar changes to our framework). Extending Slalom's integrity checks (the left part in Figure 1) is quite trivial. The TEE simply applies Freivalds' algorithm to every linear operator and makes sure that it performs appropriate \"book-keeping\" of which layers' outputs correspond to which other layers' inputs. For privacy (the right part in Figure 1), things can get a bit more complicated as the TEE and GPU have to interact for each linear layer. For a residual layer, the TEE and GPU essentially run Slalom on both \"paths\" of the layer one after the other. The TEE saves intermediate results in its memory and then merges the results. The same would work for concatenation layers. Our results with ResNets are on par with those obtained with VGG16 and MobileNet. We tried different variants (18, 34, 50, 101 and 152 layers), and achieve 6.6-14.4x speedups for integrity and 4.4x-9.0x speedups with additional privacy.", "In this paper, the authors consider solving three ML security related challenges that would primarily arise\nin the cloud based ML model. Namely, they consider the setting where a client wishes to obtain predictions\nfrom an ML model hosted on a server, while being sure that the server is running the model they believe is being run\nand without the server learning anything about their input. Additionally, the server wishes for the user to learn \nnothing about the model other than its output on the user's input. To solve this problem, the authors introduce a\nnew scheme for running ML algorithms in a trusted execution environment. The key idea is to outsource expensive\ncomputation involved with forwarding images through a model to an untrusted GPU in a way that still allows for\nthe TEE to verify the integrity of the GPU's output. Because the authors' method is able to utilize GPU computing,\nthey achieve substantial speed-ups compared to methods that run the full neural network in trusted hardware.\n\nOverall, I found the paper to be very well written and easy to digest, and the basic idea to be simple. The \nauthors strike a nice balance between details left to the appendix and the high level overview explained in\nthe paper. At the same time, the authors' proposed solution seems to achieve reasonably practicable performance\nand provides a simple high-throughput solution to some interesting ML security problems that seems readily\napplicable in the ML-as-a-cloud-service use case. I only have a few comments and feedback.\n\nI would recommend the authors use the full 10 pages available by moving key results from the appendix to the main\ntext. 
At present, much of the experimental evaluation performed is done in the appendix (e.g., Figures 3 through \n5). \n\nThe notation PR_{s \\overset{s}{\\gets}\\mathbb{S}^{n}}[...] is not defined anywhere as far as I can tell\nbefore its first usage in Lemma 2.1. Does this just denote the probability over a uniform random draw of\ns from \\mathbb{S}? If so, I might recommend just dropping the subscript: A, B, and C being deterministic\nmakes the sample space unambiguous. \"negl(\\lambda)\" is also undefined. \n\nIn section three you claim that Slalom could be extended to other architectures like residual networks.\nCan you give some intuition on how straightforward it would be to implement operations like concatenation\n(required for DenseNets)? I would expect these operations could be implemented in the TEE rather than \non the coprocessor and then verified. However, the basic picture on the left of Figure 1 may then change,\nas the output of each layer may need to be verified before concatenation? I think augmenting the right\nof Figure 1 to account for these operations may be straightforward. It would be interesting to see\nthroughput results on these networks, particularly because they are known to substantially outperform\nVGG in terms of classification performance.", "The authors propose a new method of securely evaluating neural networks. The approach builds upon existing Trusted Execution Environments (TEE), a combination of hardware and software that isolates sensitive computations from the untrusted software stack. The downside of TEE is that it is expensive and slow to run. This paper proposes outsourcing the linear evaluation portions of the DNN to an untrusted stack that's co-located with the TEE. To achieve privacy (i.e., the input isn't revealed to the untrusted evaluator), the approach adds a random number r to the input vector x, evaluates f(x+r) on the untrusted stack, then subtracts off f(r) from the output. This limits the approach to be applicable to only linear functions. To achieve integrity (verify the correctness of the output), the paper proposes testing with random input vectors (an application of Freivalds theorem, which bounds the error probability). The techniques for integrity and privacy works only on integer evaluations, hence the network weights and inputs need to be quantized. The paper tries to minimize degradation in accuracy by quantizing as finely as numerically allowable, achieving <0.5% drop in accuracy on two example DNNs. Overall, compared to full evaluation in a TEE, this approach is 10x faster on one DNN, and 40x to 64x faster on another network (depending on how the network is formulated).\n\nDisclaimer: I am a complete outsider to the field of HW security and privacy. The paper is very readable, so I think I understand its overall gist. I found the approach to be novel and the results convincing, though I may be missing important context since I'm not familiar with the subject.\n\nTo me, the biggest missing piece is a discussion of the limitations of the approach. How big of a network can be evaluated this way? Is it sufficient for most common applications? What are the bottlenecks to scaling this approach?\n\nIt's also not clear why integrity checks are required. Is there a chance that the outsourcing could result in incorrect values? (It's not obvious why it would.)\n\nLastly, a question about quantization. 
You try to quantize as finely as possible (to minimize quantization errors) by multiplying by the largest power of 2 possible without causing overflow. Since quantization needs to be applied to both the input and the network weights, does this mean that you must also bound the scale of the input? Or do you assume that the inputs are pre-processed to be within a known scale? Is this possible for intermediate outputs (i.e., after the input has been multiplied through a few layers of the DNN)?\n\nPros:\n- Simple yet effective approach to achieve the goals laid out in the problem statement\n- Clearly written\n- Thorough experiments and benchmarks\n- Strong results\n\nCons:\n- No discussion of limitations\n- Minor questions regarding quantization and size limits\n\nDisclaimer: reviewer is generally knowledgeable but not familiar with the subject area.", "\nGiven the growing interest in building trustworthy and privacy-protecting AI systems, this paper demonstrates a novel approach to achieve these important goals by allowing a trusted, but slow, computation engine to leverage a fast but untrusted computation engine. For the sake of protecting privacy, this is done by establishing an additive secret share such that evaluation on one part of the share is performed offline and the computation on the other part of the share is performed on the untrusted engine. To verify the correctness of the computation on the untrusted server, a randomized algorithm is used to sample the correctness of the results. Using these techniques, the authors demonstrate an order of magnitude speedup compared to running only on the trusted engine and 3-4 orders of magnitude speedup compared to software-based solutions.\n\nOverall this is a strong paper which presents good ideas that have influence in ML and beyond. I appreciate the fact that the authors are planning to make their code publicly available, which makes it more reproducible. Below are a few comments/questions/suggestions. \n\n1.\tThis paper, like other papers, proposes mechanisms to protect the privacy of the data while outsourcing the computation on a prediction task. However, an alternative approach would be to bring the computation to the data, which means performing the prediction on the client side. In what sense is it better to outsource the computation? Note that outsourcing the computation requires both complexity on the server side and additional computation on the client side (encryption & decryption).\n2.\tYou present the limitations of the trust model of SGX only in the appendix, while in the paper you compare to other techniques such as Gazelle which have different trust models and assumptions. It makes sense to, at least, point the reader to these differences. \n3.\tIn section 2.2: “has to be processed with high throughput when available”: is it high throughput that is required or low latency?\n4.\tIn Section 4.3: in one of the VGG experiments you computed only the convolution layers which, as you say, are commonly used to generate features. In this case, however, doesn't it make more sense that the feature generation will take place on the client side while only the upper layers (dense layers) will be outsourced?\n5.\tIn section 4.3, “Private Inference”: do you also include the offline preprocessing time in the reported time? As far as I understand this should take the same amount of time as computing on the TEE.\n" ]
[ -1, -1, -1, -1, 7, 7, 9 ]
[ -1, -1, -1, -1, 3, 2, 4 ]
[ "iclr_2019_rJVorjCcKQ", "HJg2YxyGoQ", "r1eyl4tHhQ", "rkgtvmXc2m", "iclr_2019_rJVorjCcKQ", "iclr_2019_rJVorjCcKQ", "iclr_2019_rJVorjCcKQ" ]
iclr_2019_rJgMlhRctm
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the searching over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval.
accepted-oral-papers
Strong paper in an interesting new direction. More work should be done in this area.
train
[ "r1g6tF8F3X", "BylnXIUC0Q", "HJeHGTF5nX", "Bkx7KxOjpX", "r1xJIbv5A7", "ryxKPxw9Rm", "rJxoZ-_ipQ", "SJgbnnCgRX", "rJl4slOsa7", "SyxhAx_jpm", "rJx2mlOjTQ", "Sklo1V_znQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "To achieve the state-of-the-art on the CLEVR and the variations of this, the authors propose a method to use object-based visual representations and a differentiable quasi-symbolic executor. Since the semantic parser for a question input is not differentiable, they use REINFORCE algorithm and a technique to reduce its variance. \n\nQuality: \nThe issue of invalid evaluation should be addressed. CLEVR dataset has train, validation, and test sets. Since the various hyper-parameters are determined with the validation set, the comparison of state-of-the-art should be done using test set. As the authors mentioned, REINFORCE algorithm may introduce high variance, this notion is critical to report valid results. However, the authors only report on the validation set in Table 2 including the main results, Table 4. For Table 5, they only specify train and test splits. Therefore, I firmly recommend the authors to report on the test set for the fair comparison with the other competitive models, and please describe how to determine the hyperparameters in all experimental settings. \n \nClarity:\nAs mentioned above, please specify the experimental details regarding setting hyperparameters.\nIn Experiments section, the authors used less than 10% of CLEVR training images. How about to use 100% of the training examples? How about to use the same amount of training examples in the competitive models? The report is incomplete to see the differential evident from the efficient usage of training examples.\n\nOriginality and significance:\nThe authors argue that object-based visual representation and symbolic reasoning are the contributions of this work (excluding the recent work, NS-VQA < 1 month). However, bottom-up and top-down attention work [1] shows that attention networks using object-based visual representation significantly improve VQA and image captioning performances. If the object-based visual representation alone is the primary source of improvement, it severely weakens the argument of the neuro-symbolic concept learner. Since, considering the trend of gains, the contribution of the proposing method seems to be incremental, this concern is inevitable. To defend this critic, the additional experiment to see the improvement of the other attentional model (e.g, TbD, MAC) using object-based visual representations, without any other annotations, is needed.\n\nPros:\n- To confirm the effective learning of visual concepts, words, and semantic parsing of sentences, they insightfully exploit the nature of the CLEVR dataset for visual reasoning diagnosis.\n\nCons:\n- Invalid evaluation to report only on the validation set, not test set.\n- The unclear significance of the proposed method combining object-based visual representations and symbolic reasoning\n- In the original CLEVR dataset paper, the authors said \"we stress that accuracy on CLEVR is not an end goal in itself\" and \"..CLEVR should be used in conjunction with other VQA datasets in order to study the reasoning abilities of general VQA systems.\" Based on this suggestion, can this work generalize to real-world settings? This paper lacks to discuss its limitation and future direction toward the general problem settings.\n\nMinor comments:\nIn 4.3, please fix the typos, \"born\" -> \"brown\" and \"convlutional\" -> \"convolutional\".\n\n\n[1] Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2018). Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. 
IEEE Computer Vision and Pattern Recognition (CVPR'18).", "The authors sufficiently clarified the experimental procedures for the fair comparisons I had been concerned about. Although the work seems to be limited to natural images and language (VQS), I appreciate that the authors will include it in the paper as future work.\n\nI have decided to increase my rating by 1.", "\nSummary:\n=========\nThe paper proposes joint learning of visual representations, words, and semantic parsing of sentences given paired images and paired Q/A, with a model called the neuro-symbolic concept learner, using curriculum learning. The paper reads well and is easy to follow. The idea of jointly learning visual concepts and language is an important task. Human reasoning involves learning and recall from multiple modalities. The authors use the CLEVR dataset for evaluation.\n\nStrength:\n========\n- Jointly learning the language parsing and visual representations indirectly from paired Q/A and paired images is interesting. Combining visual learning with visual question answering by decomposing questions into primitive symbolic operations and reasoning in symbolic space seems interesting.\n\n- End-to-end learning of the visual concepts, Q/A decomposition into primitives, and program execution was shown to be competitive with baseline methods.\n\nWeakness:\n=========\n- Although the joint learning and composition is interesting, the visual task is simplistic and it is not obvious how this would generalize to other complex VQA tasks.\n\n- Experiments are not as rigorous as the discussion of the methods suggests. Evaluation on more datasets would have made the comparisons and drawn conclusions stronger. Although CLEVR is suited for learning relational concepts from referential expressions, it is a toy dataset. Applicability of the proposed method on other realistic datasets would have made the paper much stronger.", "4. Specific Questions\n- Choice of hyperparameters.\nWe use the open-sourced implementation of Mask-RCNN [5] to generate object proposals. For all the training processes described in the rest of the paper, we used a learning rate of 1e-3 with a weight decay of 5e-4. We decay the learning rate by a factor of 0.1 after 60% of the designated training epochs. The REINFORCE optimizer uses a discount factor of 0.95. In the main text, the variance of REINFORCE refers to the variance of the gradient estimation, not the variance of the performance (accuracy). We will also add the standard deviation of the model performance in the revision. \n\n- Data-efficiency\nThanks for the very nice suggestion. We have conducted a more systematic study on the data efficiency and will include it in the revision. The results are:\n\nTrained on 10% of the images:\nTbD: 54.2%.\nMAC: 67.3%.\nNS-CL: 98.9%.\n\nTrained on 100% of the images:\nTbD: 99.1%.\nMAC: 98.9%.\nNS-CL: 99.2%.\n\nThese results demonstrate that our model is more data-efficient.\n\nWe have also listed all other planned changes in our general response above. Please don’t hesitate to let us know for any additional comments on the paper or on the planned changes.\n\n[1] Anderson et al. \"Bottom-up and top-down attention for image captioning and visual question answering.\" In CVPR, 2018.\n[2] Baradel et al. \"Object Level Visual Reasoning in Videos.\" In ECCV, 2018.\n[3] Artzi, Yoav, and Zettlemoyer. \"Weakly supervised learning of semantic parsers for mapping instructions to actions.\" TACL 1 (2013): 49-62.\n[4] Oh et al. 
\"Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning.\" In ICML, 2017.\n[5] https://github.com/roytseng-tw/Detectron.pytorch", "Dear reviewer, we have updated our paper with the promised results.\n\n1. Train/Val/Test split:\nWe have included the new results of NS-CL using 100% of the CLEVR training images. We use 95% of the training images for learning, and the remaining 5% for validation, hyper-parameter tuning, and model selection. Validation images are used only in testing. This further pushes the overall accuracy of NS-CL to 99.2% on the validation split. Please refer to Section 4.2 for the new results. We adopt the same strategy in all newly added experiments, including those on the Minecraft dataset and the VQS dataset. For the results using only 10% of the CLEVR training images, we simply used the training set accuracy for model selection. \n\nWe have tried to contact the authors of the CLEVR dataset, and will be pleased to share further information regarding the test split upon receiving any responses.\n\n2. Object-based Representations and Data Efficiency.\nThank for suggesting the related work and the additional baselines. We have added additional experiments that incorporate object-based representations into TbD/MAC (Section 4.2). NS-CL achieves higher data efficiency. We believe that this comes from the full disentanglement of visual concept learning and symbolic reasoning: how to execute program instructions based on the learned concepts is programmed in NS-CL.\n\nCompared with the attention-based baselines, our use of symbolic programs enables better integration with object-based representations, e.g., in modelling relations and quantities. For the detailed implementation of the baselines, please refer to Appendix E.3.\n\nThanks again for your comments. ", "We thank all reviewers for their constructive comments and have updated our paper accordingly. Please check out the new version!\n\nSpecific changes include\n\n1) We have compared with additional baselines that incorporate object-based representation with attention-based methods (MAC/TbD). The results are in Section 4.2 and the implementation details are in Appendix E.3. The symbolic program execution module in NS-CL shows better utilization of object-based representations. \n\n2) We provided a systematic analysis of data efficiency in Section 4.2. NS-CL achieves higher data efficiency by disentangling visual concept learning and program-based symbolic reasoning.\n\n3) We added the results on a new visual reasoning testbed --- the Minecraft dataset. Results can be found in Appendix F.1.\n\n4) We added both quantitative and qualitative results on the VQS dataset, composed of natural images from the COCO dataset and human-annotated question-answering pairs. Please kindly find these results in Section 4.6 and the implementation details in Appendix F.2. NS-CL achieves a comparable results with the baselines and learns visual concepts from the noisy inputs.\n\n5) We have cited and discussed the suggested related work.\n\n6) We have also included more discussions on future work.\n\nPlease don’t hesitate to let us know for any additional comments on the paper.\n", "We thank all reviewers for their comments. In addition to the specific response below, here we summarize our goal and the changes planned to be included in the revision. \n\nWe study concept learning---discovering both visual concepts and language concepts from natural supervision (unannotated images and question-answer pairs). 
With these learned concepts, our model can solve many problems, such as image captioning, retrieval, as well as VQA. But here the ability to solve VQA is really a by-product, not our end goal---learning accurate (Sec. 4.1), interpretable (Sec. 4.2), and transferrable (Sec. 4.5) concepts. \n\nWe agree with the reviewers that it’s important to demonstrate how our model works on real images with more complex visual appearance. As suggested, we plan to include the following changes in the revision by Nov. 26 (the new official revision deadline, extended from Nov. 23):\n- We will include quantitative and qualitative results on new datasets: the VQA dataset of real-world images [1] and the Minecraft dataset used by Yi et al. [2].\n- We will add a systematic study regarding the data efficiency of our model, compared with other VQA baselines in Sec. 4.2.\n- We will compare our model with other baselines (TbD and MAC) built upon the object-based representations.\n- We will include additional discussions on limitation and future work.\n\nPlease don’t hesitate to let us know for any additional comments on the paper or on the planned changes.\n\n[1] Antol, Stanislaw, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. \"Vqa: Visual question answering.\" In ICCV, 2015.\n[2] Yi, Kexin, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B. Tenenbaum. \"Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding.\" In NIPS, 2018.\n", "Sincerely thank you for the detailed explanations and comments for a constructive rebuttal.\n\nRe: 1. Train/test split\nR1-1) I understand the current evaluation issue on CLEVR. Then, could you confirm that your hyperparameters are found using **ONLY** training split since you have used the validation split like the test split to compare state-of-the-art?\nR1-2) Have you asked the authors of CLEVR regarding this issue? And, what's their response? I appreciate if you can cite the authors' reply in Appendix as a pointer to refer in future works.\n\nRe: 2. Object-based representations and baselines\nR2) With the positive results, I would like to consider increasing my rating considering the authors' argument of fair comparison.", "Thank you very much for the constructive comments. \n\n1. Train/test split\nOur evaluation is valid and fair, because all previous papers have also reported results only on the validation set, and we follow the tradition in this paper. They did this because there are no ground-truth labels or evaluation servers provided for the CLEVR test split. Evaluation on the test split is therefore impossible. We agree that it’s important to ensure all evaluation valid, and we’ll include this clarification into the revision.\n\n2. Object-based representations and baselines\nThanks for the suggestion. We’ll cite and discuss the paper that used object-based visual representation. We will also add additional experiments that incorporate object-based representations into TbD/MAC: Instead of the image feature extracted from a ResNet, we change the input visual feature to the reasoning neural architecture to be an object-based representation as in [1]. Please let us know if you have any suggestion regarding the comparison.\n\nWe also want to clarify that the object-based representation alone is not the main contribution of the paper. Instead, our key contribution is the integration of object-based representations and symbolic reasoning. 
Such a combination helps us disentangle visual concept learning and language understanding, and it has three advantages over alternatives, as explored in the paper:\n\n1) Executing symbolic programs on object-based representations naturally facilitates complex reasoning that includes quantities (counting), comparisons, and relations. It also brings combinatorial generalization by design (Sec. 4.4): for example, trained on scenes with <= 6 objects, our model (but not the baselines) can also perform counting on scenes with 10 objects.\n\n2) It fully disentangles visual concept learning and reasoning: once the visual concepts are learned, they can be systematically evaluated (Sec. 4.1) and deployed in any visual-semantic application (such as image caption retrieval, as shown in Sec. 4.5). In contrast, earlier methods like IEP, TbD, and MAC learn visual concepts and reasoning in an entangled manner and cannot be easily adapted to new problem domains (e.g., as shown in Table 6, VQA baselines are only able to infer the result on a partial set of the image-caption data).\n\n3) Symbolic execution over the object space brings full transparency. One can easily trace back an erroneous answer and even detect adversarial (ambiguous or wrong) questions (please refer to Appendix E for some examples).\n\n3. Limitation and future work\nWe’d like to clarify that we are not targeting a specific application such as VQA; instead, we want to build a system that learns accurate (Sec. 4.1), interpretable (Sec. 4.2), and transferrable (Sec. 4.5) concepts from natural supervision: images and question-answer pairs. To achieve this, we propose a novel framework that 1) disentangles the learning of both, but 2) bridges them with a reasoning module and 3) lets them bootstrap the learning of each other.\n\nToward concept learning from realistic images and complex language, the current model design suggests multiple research directions. First, our model relies on object-based representations; constructing 3D object-based representations for realistic scenes (or videos) needs further exploration [1,2]. Second, our model assumes a domain-specific language for a formal description of semantics. The integration of formal semantics into the processing of complex natural language would be meaningful future work [3,4]. We hope our paper can motivate future research in visual concept learning, language learning, and compositionality.\n", "Thank you very much for the constructive comments.\n\n1. Semantic parsing.\nIn short, the semantic parsing module is a neural sequence-to-tree model. Given a natural language question, the module translates it into an executable program with a hierarchy of primitive operations. We present an overview in Sec. 3.1 (last paragraph of Page 4), with more implementation details in Appendix B. We’ll revise the text for better clarity.\n\nThe module begins by encoding the question into a fixed-length embedding vector using a bidirectional GRU. The decoder, taking the sentence embedding as input, recovers the hierarchy of the operations in a top-down manner: it first predicts the root token (the question type: query/count/… in the VQA case); then, conditioned on the root token, it predicts the tokens of the root’s children. The decoding algorithm runs recursively.\n\n2. Counting.\nWe perform counting in a quasi-symbolic manner, based on the object-based scene representation. As an example, consider a simple program: Count(Filter(Red)), which counts the number of red objects in the scene. 
The operation Filter(Red) assigns each object a value p_i, the confidence of classifying that object as red. Counting is performed as $\sum_i p_i$. During inference, we round this value to the nearest integer. More details can be found in Sec. 3.1 (Page 5) and Appendix C. We will also revise the text for better clarity.\n\nCompared with alternatives, our method enjoys combinatorial generalization thanks to the notion of `objects’: for example, trained on scenes with <= 6 objects, our model can also perform counting on scenes with 10 objects.\n\n3. Future direction\nWe thank the reviewer for the suggestions on future directions and will include the following discussions in the revision:\n\nCompositionality. We currently view the scene as a collection of objects with latent representations. Building scene (or video) representations that also reflect the compositional nature of objects (e.g., an object is a combination of multiple primitives) will be an interesting research direction. \n\nInferring relations from words and behavior. Modelling actions (e.g., push and pull) as concepts is another interesting direction. People have studied the symbolic representation of skills [1] and learning word (instruction) meanings from interaction [2].\n\nVideos and words. Our framework can also be extended to the video domain. Video techniques such as detection and tracking are needed to build the object-based representation [3]. Also, the semantic representation of sentences should be extended to include actions/interactions besides static spatial relations. \n\nWe have also listed all other planned changes in our general response above. Please don’t hesitate to let us know for any additional comments on the paper or on the planned changes.\n\n\n[1] Konidaris, George, Leslie Pack Kaelbling, and Tomas Lozano-Perez. \"From skills to symbols: Learning symbolic representations for abstract high-level planning.\" Journal of Artificial Intelligence Research 61 (2018): 215-289.\n[2] Oh, Junhyuk, Satinder Singh, Honglak Lee, and Pushmeet Kohli. \"Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning.\" In ICML, 2017.\n[3] Baradel, Fabien, Natalia Neverova, Christian Wolf, Julien Mille, and Greg Mori. \"Object Level Visual Reasoning in Videos.\" In ECCV, 2018.\n", "Thank you very much for the encouraging and constructive comments. We agree that generalizing to more complex visual domains would be essential for our task. In the revision, we will include the results of NS-CL on new datasets, including the VQA dataset of real-world images [1] and the Minecraft dataset used by Yi et al. [2].\n\nWe have also listed all other planned changes in our general response above. Please don’t hesitate to let us know for any additional comments on the paper or on the planned changes.\n\n[1] Antol, Stanislaw, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. \"Vqa: Visual question answering.\" In ICCV, 2015.\n[2] Yi, Kexin, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B. Tenenbaum. \"Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding.\" In NIPS, 2018.\n", "The paper is well written and flows well. The only thing I would like to see added is an elaboration of \n\"run a semantic parsing module to translate a question into an executable program\". How to do semantic parsing is far from obvious. This topic needs at least a paragraph of its own. 
\n\nThis is not a requirement but an opportunity: can you explain how counting works? I think you have it at the standard level of the magic of DNNs, but some digging into the mechanism would be appreciated. \n\nIn concluding, maybe you can speculate on how far this method can go. Compositionality? Implicit relations inferred from words and behavior? Application to video with words? " ]
[ 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 9 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2019_rJgMlhRctm", "r1xJIbv5A7", "iclr_2019_rJgMlhRctm", "r1g6tF8F3X", "SJgbnnCgRX", "iclr_2019_rJgMlhRctm", "iclr_2019_rJgMlhRctm", "Bkx7KxOjpX", "r1g6tF8F3X", "Sklo1V_znQ", "HJeHGTF5nX", "iclr_2019_rJgMlhRctm" ]
iclr_2019_rJl-b3RcF7
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
accepted-oral-papers
The authors posit and investigate a hypothesis -- the “lottery ticket hypothesis” -- which aims to explain why overparameterized neural networks are easier to train than their sparse counterparts. Under this hypothesis, randomly initialized dense networks are easier to train because they contain a larger number of “winning tickets”. This paper received very favorable reviews. The reviewers and the AC appreciated the detailed and careful experimentation and analysis. However, the reviewers raised two notable points of concern: 1) the lack of experiments conducted on large-scale tasks and models, and 2) the lack of a clear application of the idea beyond what has been proposed previously. Overall, this is a very interesting paper with convincing experimental validation, and as such the AC is happy to accept the work.
train
[ "S1xmvZRayE", "Hkelbn3I14", "r1l-QxArJ4", "ryggsG-VkV", "HJeDy85anQ", "HygUFDOTAX", "Bkg5UpU52m", "ryemP68v2m", "SkloQowaAm", "BygUAeW5R7", "BJeHlbWcCX", "SylwJEW9Cm", "H1gQ-4Z5CQ", "ryg2lfbcRQ", "SygdQnlqRm", "r1xZ0Ag5Cm", "BylPD1gKT7" ]
[ "author", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "\n\nWe have an update with several further experiments that examine the relationship between SNIP and our paper.\n\nWe have simplified our pruning mechanism to prune weights globally (instead of per-layer) with otherwise the same pruning technique. For our three main networks (MNIST, Resnet-18, and VGG-19), we find that globally-pruned winning tickets reach higher accuracy at higher levels of sparsity and learn faster than SNIP-pruned networks.\n\nFor example, VGG19 reaches 92% test accuracy when pruned by at most 97.2% with SNIP vs. at most 99.5% for globally-pruned winning tickets. Resnet-18 achieves 90% accuracy when pruned by at most 27% with SNIP vs. at most 89% for globally-pruned winning tickets.\n\nWe also performed several further experiments exploring the effect of initialization and structure on SNIP-pruned networks. We find that SNIP-pruned networks can be randomly reinitialized as well as randomly rearranged (i.e., randomly choosing the locations of unpruned connections within layers) with limited impact on their accuracy. However, these networks neither are as accurate nor learn as quickly as winning tickets.\n\nThe fact that SNIP-pruned networks can be rearranged suggests that SNIP largely identifies the proportions in which layers can be pruned such that the network is still able to learn, leaving significant opportunity to exploit the additional, initialization-sensitive understanding demonstrated by our results.\n\nWe provide several graphs here (https://drive.google.com/drive/folders/1lpxJFpkF0Afq1rRqkEDnLcPN0kMV8BBC?usp=sharing) to support these claims. We will add these experiments to the final version of our paper. \n", "We appreciate that the authors took the time to address our comment, and we believe that the confusion about the effect of (re-)initialization has been clarified.\nWe look forward to trying SNIP in your experimental setting once your code is released.\n", "(Edited to improve clarity and update replication results.)\n\nThank you for sharing your work; we are very excited to see your results, since they seem to support the lottery ticket hypothesis as posed and add substantial further evidence to our hypothesis via a different pruning technique. We will be sure to refer to the SNIP results in the final version of our paper.\n\nThe main statement of the lottery ticket hypothesis does not exclude the possibility that winning tickets are still trainable when reinitialized. Specifically, while the hypothesis conjectures that, given a dense network and its initialization, there exists a subnetwork that is still trainable with the original initializations, it does not require any particular behavior of this subnetwork under other initializations. Thank you for this comment; we will revise our language to make this clear.\n\nIn our experiments, we do find initialization to have a significant impact on the success of the pruned subnetworks we find (hence the quote you provide from our paper). 
You mention in your rebuttal that “SNIP finds the architecturally important parameters in the network,” perhaps reducing the relative importance of initialization for the winning tickets that you find.\n\nOnce your source code is made available, we would be very interested in analyzing your preliminary comparison between SNIP-pruned networks with the original initialization and SNIP-pruned networks when reinitialized; we have replicated the SNIP algorithm as presented in your paper in our own framework and produced the following results: \n\n* Lenet (MNIST): We confirm that the accuracy of SNIP-pruned networks does not change when they are reinitialized. In addition, we find that, although SNIP outperforms random pruning, SNIP-pruned networks do not match the test accuracy of our winning tickets or our randomly reinitialized winning tickets.\n\n* Resnet-18 (CIFAR10): We did not have time (in the 24 hours between your comment and the end of the comment period) to confirm your random reinitialization experiments on this network. We find that, although SNIP outperforms random pruning, it does not match the test accuracy of the winning tickets and only slightly outperforms the randomly-reinitialized winning tickets.\n\n* VGG19 (CIFAR10): (Updated) We confirm that the accuracy of the SNIP-pruned networks does not change when they are reinitialized. When training with warmup, SNIP produces networks that nearly match the accuracy of our winning tickets at the corresponding level of sparsity. However, our winning tickets learn faster than the SNIP-pruned networks. \n\nWe look forward to discussing SNIP in the final version of our paper as a potential “method of choice for the further exploration of [our] hypotheses.” However, our preliminary, replicated results suggest that there is a gap in accuracy and speed of learning between SNIP-pruned networks and our winning tickets.\n\n(One minor nit: you mention that we only test our method for moderate sparsity levels, but our graphs show that we continue to find winning tickets at extreme sparsity levels (> 90%) similar to those in your paper.)", "Thank you for the interesting work.\n\nConcurrently, we proposed a new pruning method, SNIP, ( https://openreview.net/forum?id=B1VZqjAcYX ), that finds extremely sparse networks in a single shot at random initialization, and the pruned sparse networks are then trained in the standard way.\n\nWe found one of your hypotheses \"When randomly reinitialized, a winning ticket learns more slowly and achieves lower test accuracy\" intriguing. Therefore, we tested to see if this behavior holds on subnetworks obtained by SNIP.\n\nSpecifically, we tested various models (LeNets, AlexNets, VGGs and WRNs) on MNIST and CIFAR-10 datasets for the same extreme sparsity levels (> 90%) used in our paper. As a result, we found that there are no differences in performance between re-initializing and NOT-initializing the subnetworks (after pruning by SNIP and before the start of training): 1) the final accuracies are almost the same (the difference is less than 0.1%) and 2) the training behavior (the training loss and validation accuracy curves) is very similar.\n\nIt seems that our finding, albeit preliminary, is contradictory to the aforementioned hypothesis. 
This discrepancy may be due to the fact that the conclusions in your paper are based on magnitude-based pruning and the method is tested for moderate sparsity levels, etc.\n\nAs stated in your latest version (Section 7), \"we intend to explore more efficient methods for finding winning tickets that will make it possible to study the lottery ticket hypothesis in more resource-intensive settings\" or \"... non-magnitude pruning methods (which could produce smaller winning tickets or find them earlier)\", we believe that SNIP could be a method of choice for the further exploration of your hypotheses.\n\nWe hope to hear your thoughts.", "It was believed that sparse architectures generated by pruning are difficult to train from scratch. The authors show that there exist sparse subnetworks that can be trained from scratch with good generalization performance. To explain the difficulty of training pruned networks from scratch or why training needs the overparameterized networks that make pruning necessary, the authors propose a lottery ticket hypothesis: unpruned, randomly initialized NNs contain subnetworks that can be trained from scratch with similar generalization accuracy. They also present an algorithm to identify the winning tickets.\n\nThe conjecture is interesting and it is still an open question whether a pruned network can reach the same accuracy when trained from scratch. It may help to explain why bigger networks are easier to train due to “having more possible subnetworks from which training can recover a winning ticket”. It also shows the importance of both the pruned architecture and the initialization value. Actually another submission (https://openreview.net/forum?id=rJlnB3C5Ym) made the opposite conclusions.\n\nThe limitations of this paper are severalfold:\n\n- The paper seems a bit preliminary and unfinished. A lot of the notation seems confusing, such as “when pruned to 21%”. The author defines a winning lottery ticket as a sparse subnetwork that can reach the same performance as the original network when trained from scratch with the “original initialization”. It is quite confusing as there is no definition anywhere about the “original initialization”. It would be clearer if the author can use some math notations.\n\n- As identified by the authors themselves, the paper lacks supporting experiments on large-scale datasets and real-world models. Only MNIST/CIFAR-10 and toy networks like LeNet, Conv2/Conv4/Conv6 are used. The authors have done experiments on ResNet; it would be better to move these to the main paper.\n\n- There is no explanation about why the “lottery ticket” can perform well when trained with the “original initialization” but not with random initialization. Is it because the original initialization is not far from the pruned solution? Then this is a kind of overfitting to the obtained solution.\n\n- The other problem is that the implications are not clearly useful without showing any applications. The paper could be stronger if the authors can provide more results to support the applications of this conjecture.\n\n- The authors only explore the sparse networks. Model compression by sparsification has good compression rates, especially for networks with large FC layers. However, the acceleration relies on specific hardware/libraries. 
It would be more complete if the authors could provide experiments on structurally pruned networks, especially for CNNs.\n\n- The x-axis of pruning ratios in Figure 1/4/5 could be uniformly sampled to make the figures easier to read.\n\nQuestions:\n- Do winning tickets always exist?\n- What is the size of winning tickets for a very thin network? Would it also be less than 10%?\n\n\n------update----------\n\nI appreciate the authors’ efforts in providing a detailed response and more experiments. After reading the rebuttal and the revised version, though the paper has been improved, my concerns are not addressed fully enough to safely accept it.\n\nIt can be summarized that there exists a sparse network that can be trained well only when provided with a certain weight initialization. The winning tickets can only be found via iterative pruning of the trained network. This is a chicken-and-egg problem, and I failed to see how it can improve network design. It still feels incomplete to me to just provide a hypothesis with limited sets of experiments. The implications are actually the most valuable/attractive part, such as “Improve our theoretical understanding of neural networks”; however, they are very vague with no clear instructions even after accepting this hypothesis. I would expect analysis of the reasons behind failure and success. I understand that it could be left for another paper, but the observations/experiments alone are not strong enough for confirming the hypothesis.\n\nSpecifically, the experiments are conducted on relatively wide and shallow CNNs. Note that VGG-16/19 and ResNet-18 are designed for ImageNet but not CIFAR-10, and are much wider than normal CIFAR-10 networks, such as ResNet-56. Even though “resnet18 has 16x fewer parameters than conv2 and 75x fewer than VGG19”, this is mainly due to the removal of FC layers in favor of average pooling, so these cannot be claimed to be “much thinner” networks. Increasing the width usually eases optimization, and the pruned sparse network still enjoys this property unless significantly pruned. Thus, I still doubt whether the conclusion can hold for much thinner networks, i.e., “winning tickets near or below 10-20%, depending on the level of overparameterization of the original network.”\n\nThe observation that “winning ticket weights tend to change by a larger amount than weights in the rest of the network” in Figure 19 seems natural, and the conjecture about the reason, “magnitude-pruning biases the winning tickets we find toward those containing weights that change in the direction of higher magnitude”, sounds reasonable. It would be great if the authors could dig into this and compare it further with the distribution of random weight initializations.\n\nThe figures could also be improved and simplified, as the lines are hard to read and compare.\n\n", "Thank you for your response, for addressing my previous concerns with the paper, and for taking the additional time to revise your original submission. Please see my updated review above.", "==== Summary ====\n\nIt is widely known that large neural networks can typically be compressed into smaller networks that perform as well as the original network while directly training small networks can be complicated. 
This paper proposes a conjecture to explain this phenomenon that the authors call “The Lottery Ticket Hypothesis”: large networks that can be trained successfully contain at initialization time small sub-networks — defined by both their connectivity and their initial weights, which the authors call “winning tickets” — that, if trained separately for a similar number of iterations, could reach the same performance as the large network. The paper follows by proposing a method to find these winning tickets via pruning methods, which are typically used for compressing networks, and then proceeds to test this hypothesis on several architectures and tasks. The paper also conjectures that the reason large networks are more straightforward to train is that, when randomly initialized, large networks have more candidate subnetworks, which makes having a winning ticket more likely.\n\n==== Detailed Review ====\n\nI have found the hypothesis that the paper puts forth to be very appealing, as it articulates the essence of many ideas that have been floating around for quite a while. For example, the notion that having a large network makes it more probable for some of the initialized weights to be in the “right” direction for the beginning of the training, as mentioned in [1], which was cited in this submission. Given our lack of understanding of the optimization and generalization properties of neural networks, as well as how these two interact, any insight into this process, like the one this paper suggests, could have a significant impact on both theory and practice. To that effect, I generally found the experiments in support of the hypothesis to be pretty convincing, or at the very least that there is some truth to it. Most importantly, the hypothesis and experiments presented in this paper gave me a new perspective on both the generalization and optimization problem, which as a theoretician gave me new ideas on how to approach analyzing them rigorously — and that is why I strongly vote for the acceptance of this paper.\n\nThough I have very much enjoyed reading this submission, which for the most part is very well written, it does have some issues:\n\n1. Though this is an empirical paper about an observed phenomenon, it should contain a bit more background and discussion on the theoretical implications of its subject. For example, see [2], which is also an empirical work about a theoretical hypothesis, but still includes the right theoretical context that helps the reader judge the meaning of their results. The same should be done here. For instance, there is a growing interest in the link between compression and generalization that is relevant to this work [3,4], and the effect of winning tickets leading to better generalization could be explained via other works which link structure to inductive bias [5,6].\n2. The lottery ticket hypothesis is described in the paper as being both about optimization (faster “convergence”) and about generalization (better “generalization accuracy”). However, there is a slight issue with how these terms are treated in the paper. First, “convergence” is defined as the point at which the test accuracy reaches a minimum and before it begins to rise again, but it does not mean (and most likely not) that it is the point at which the optimization algorithm converged to its minimum — it is better to write that early stopping regularization was used in this case. 
Second, the convergence point is chosen according to the test set, which is bad methodology, because the test set cannot be used for choosing the final model (only the training and validation sets). Third, the training accuracies are not reported in the paper, and without them, it is difficult to judge if a given model fails to generalize or simply fails to converge to 100% accuracy on the training set. As a minor note, “generalization accuracy” as a term is not that common and might be a bit confusing, so it is better to write “test accuracy”.\n\nTo conclude, even though I urge the authors to address the above issues, which could significantly improve the paper's quality and clarity, I think that this article is thought-provoking and highly deserving of being accepted to ICLR.\n\n[1] Bengio et al. Convex neural networks. NIPS 2006.\n[2] Zhang et al. Understanding deep learning requires rethinking generalization. ICLR 2017.\n[3] Arora et al. Stronger generalization bounds for deep nets via a compression approach. ICML 2018.\n[4] Zhou et al. Compressibility and Generalization in Large-Scale Deep Learning. arXiv preprint 2018.\n[5] Cohen et al. Inductive Bias of Deep Convolutional Networks through Pooling Geometry. ICLR 2017.\n[6] Levine et al. Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design. ICLR 2018. \n\n==== Updated Review Following Rebuttal ====\n\nThe authors have addressed all of the concerns that I have mentioned above, and so I have updated my score accordingly. The additional background on related works, as well as the additional experiments in response to the other reviews, will help readers appreciate the observations that are raised by the authors. The new revision is a very strong submission, and I highly recommend accepting it to ICLR. ", "(Score raised from 8 to 9 after rebuttal)\nThe paper examines the hypothesis that randomly initialized (feed-forward) neural networks contain sub-networks that train well in the sense that they converge equally fast or faster and reach the same or better classification accuracy. Interestingly, such sub-networks can be identified by simple, magnitude-based pruning. It is crucial that these sub-networks are initialized with their original initialization values; otherwise, they typically fail to train, implying that it is not purely the structure of the sub-networks that matters. The paper thoroughly investigates the existence of such “winning-tickets” on MNIST and CIFAR-10 on both fully connected and convolutional neural networks. Winning-tickets are found across networks, various optimizers, different pruning levels, and various other hyper-parameters. The experiments also show that iterative pruning (with re-starts) is more effective at finding winning-tickets.\n\nThe paper adds a novel and interesting angle to the question of why neural networks apparently need to be heavily over-parameterized for training. This question is intriguing and of high importance to further the understanding of how neural networks train. Additionally, the findings might have practical relevance, as they might help avoid unnecessary over-parameterization, which, in turn, might save computational resources and energy. The main idea is simple (which is good) and can be tested with relatively simple experiments (also good). 
The experiments conducted in the paper are clean (averaging over multiple runs, controlling for a lot of factors) and should allow for easy reproduction but also for clean comparison against future experiments. The experimental section is well executed, the writing is clear, and related work is taken into account to a sufficient degree. The paper touches upon a very intriguing “feature” of neural networks and, in my opinion, should be relevant to theorists and practitioners across many sub-fields of deep learning research. I therefore vote and argue for accepting the paper for presentation at the conference. The following comments are suggestions to the authors on how to further improve the paper. I do not expect all issues to be addressed in the camera-ready version.\n\n1) The main “weakness” of the paper might be that, while the amount of experiments and controls is impressive, the generality of the lottery ticket hypothesis remains somewhat open. Even when restricting the statement to feed-forward networks only, the networks investigated in the paper are relatively “small” and MNIST and CIFAR-10 bear the risk of finding patterns that do not hold when scaling to larger-scale networks and tasks. I acknowledge and support the author’s decision to have thorough and clean experiments on these small models and tasks, rather than having half-baked results on ImageNet, etc. The downside of this is that the experiments are thus not sufficient to claim (with reasonable certainty) that the lottery ticket hypothesis holds “in general”. The paper would be stronger, if the existence of winning tickets on larger-scale experiments or tasks other than classification were shown - even if these experiments did not have a large number of control experiments/ablation studies.\n\n2) While the paper shows the existence of winning tickets robustly and convincingly on the networks/tasks investigated, the next important question would be how to systematically and reliably “break” the existence of lottery tickets. Can they be attributed to a few fundamental factors? Are they a consequence of batch-wise, gradient-based optimization, or an inherent feature of neural networks, or is it the loss functions commonly used, …? On page 2, second paragraph, the paper states: ”When randomly reinitialized, our winning tickets no longer match the performance of the original network, explaining the difficulty of training pruned networks from scratch”. I don’t fully agree - the paper certainly sheds some light on the issue, but an actual explanation would result in a testable hypothesis. My comment here is intended to be constructive criticism; I think that the paper has enough “juice” and novelty for being accepted - I am merely pointing out that the overall story is not yet conclusive (and I am aware that it might need several more publications to find these answers).\n\n3) Do the winning tickets generalize across hyper-parameters or even tasks? I.e., if a winning ticket is found with one set of hyper-parameters, but then Optimizer/learning-rate/etc. are changed, does the winning-ticket still lead to improved convergence and accuracy? Same question for data-sets: do winning-tickets found on CIFAR-100 also work for CIFAR-10 and vice versa? If winning-tickets turn out to generalize well, in the extreme this could allow “shipping” each network architecture with a few good winning-tickets, thus making it unnecessary to apply expensive iterative pruning every time. 
I would not expect generalization across data-sets, but it would be highly interesting to see if winning tickets generalize in any way (after all I am still surprised by how well adversarial examples generalize and transfer).\n\n4) Some things that would be interesting to try:\n4a) Is there anything special about the pruned/non-pruned weights at the time of initialization? Did they start out with very small values already or are they all “behind” some (dead) downstream neuron? Is there anything that might essentially block gradient signal from updating the pruned neurons? This could perhaps be checked by recording weights’ “trajectories” during training to see if there is a correlation between the “distance weights traveled” and whether or not they end up in the winning ticket.\n4b) Do ARD-style/Bayesian approaches or second-order methods to pruning identify (roughly) the same neurons for pruning?\n\n5) Typo (should be through): “we find winning tickets though a principled search process”\n\n6) For the standard ConvNets I assume you did not use batchnorm. Does batchnorm interfere in any way with the existence of winning tickets? (at least on ResNet they seem to exist with batchnorm as well)\n", "Thanks for the very detailed response, the additional experiments and analysis, and the updated manuscript. I am particularly pleased to see the additional experiments (not that the original manuscript was lacking experimental results) and the analysis in Appendix D. I think that the current paper is \"filled to the brim\" with interesting experiments and results (which are conducted in a very solid fashion) - there are many interesting follow-up questions (quite a few of which have been named by the reviewers) and it is tempting to add even more results, but I agree with the authors that these questions deserve a separate publication.\n\nI also appreciate a more formal statement of the lottery-ticket hypothesis.\n\nThe questions and issues raised in my review have all been addressed in a satisfactory fashion - the paper got even stronger. Looking forward to reading follow-up work on how well winning tickets generalize, whether they appear in non-classification tasks and whether other pruning methods identify the same winning tickets or not. ", "\n> The authors only explore the sparse networks. Model compression by sparsification has good compression rates, especially for networks with large FC layers. However, the acceleration relies on specific hardware/libraries. It would be more complete if the authors could provide experiments on structurally pruned networks, especially for CNNs.\n\nThis is a great observation. We agree that structured pruning techniques produce pruned networks that are more amenable to existing software/hardware acceleration techniques. In the limitations section of the updated version (Section 7), we have explicitly noted structured pruning as an opportunity to connect our empirical observations of winning tickets to concrete practice.\n\n---\n\n> The x-axis of pruning ratios in Figure 1/4/5 could be uniformly sampled to make the figures easier to read.\n\nDone - thank you for the suggestion!\n\n---\n\n> Do winning tickets always exist?\n\nOur experiments indicate that winning tickets do seem to exist for the variety of network architectures considered in this paper (and as explicitly scoped by our stated limitations in Section 7 - we acknowledge that we only consider a limited subset of neural network tasks in this paper). 
However, in the most literal sense, no: winning tickets do not always exist for all datasets and networks. Take, as an example, a minimal dense network for two-way XOR, which has two hidden units. If the parameters of the network are initialized to values that give the correct outputs from the very start, then removing any one parameter makes it impossible to reach the same accuracy as the unpruned network.\n\n---\n\n> What is the size of winning tickets for a very thin network? Would it also be less than 10%?\n\nIn the updated version of the paper (Section 5), we have studied several networks that are much thinner than those described in the original version of the paper: VGG16, VGG19, and resnet18. For VGG16 and VGG19, we continue to find winning tickets that are at or less than 10% of the original size of the network. For resnet18 (which has 16x fewer parameters than conv2 and 75x fewer than VGG19), we find winning tickets that are about 15% of the size of the original network. Our results suggest that, for several exemplary thin networks, we still find winning tickets near or below 10-20%, depending on the level of overparameterization of the original network.", "(Edit: we reworded this comment for clarity, but the content is otherwise the same)\n\nThank you so much for your thoughtful review. Below, you will find our responses to your questions and comments. We have modified the paper to reflect your feedback, and we are very interested in any further feedback you have about the new version of the paper. \n\nWe have summarized the changes in the new version of the paper in a top-level comment called \"Summary of Changes in the New Version.\"\n\nWhere multiple reviewers made similar comments, we have grouped the answers into a \"Common Questions\" comment; you can find this comment as a response to our top-level comment called \"Summary of Changes in the New Version.\"\n\n---\n\n> Actually another submission (https://openreview.net/forum?id=rJlnB3C5Ym) made the opposite conclusions.\n\nUp to a certain level of pruning, a randomly reinitialized network can match the accuracy (and often learning speed) of the original network. We find this to be true throughout our paper, particularly in the conv2/4/6 experiments. However, past this point, winning tickets continue to match the performance of the original network when randomly reinitialized networks cannot. Furthermore, at the levels of pruning for which randomly reinitialized networks do match the performance of the original network, winning tickets reach even higher accuracy and learn faster. As a concrete example, in Section 5 of the updated version of our paper, we include lottery ticket experiments on the same VGG19 network for CIFAR10 as appears in \"Rethinking the Value of Network Pruning.\" We find that, when randomly reinitialized, subnetworks found via iterative pruning remain within 0.5 percentage points of the accuracy of the original network until pruned by about 70%; after this point, accuracy drops off as in random reinitialization experiments throughout our paper. This result supports the findings of \"Rethinking the Value of Network Pruning:\" up to a certain level of pruning, VGG19 continues to reach accuracy close to that of the original network even when randomly reinitialized. However, past the initial two or three pruning iterations, these randomly reinitialized networks do not qualify as winning tickets by our definition. 
In contrast, iterative pruning produces winning tickets when the network is pruned by up to 94.5%.\n\n---\n\n> It would be clearer if the author can use some math notations.\n\nWe agree; thank you for the feedback. In the updated version, we have made our definitions precise through mathematical notation.\n\n---\n\n> As identified by the authors themselves, the paper lacks supporting experiments on large-scale datasets and real-world models. Only MNIST/CIFAR-10 and toy networks like LeNet, Conv2/Conv4/Conv6 are used. The authors have done experiments on ResNet; it would be better to move these to the main paper.\n\nPlease see \"Common Questions.\"\n\n---\n\n> There is no explanation about why the “lottery ticket” can perform well when trained with the “original initialization” but not with random initialization. Is it because the original initialization is not far from the pruned solution? Then this is a kind of overfitting to the obtained solution.\n\nPlease see \"Common Questions.\"\n\n---\n\n> The other problem is that the implications are not clearly useful without showing any applications. The paper could be stronger if the authors can provide more results to support the applications of this conjecture.\n\nWe largely consider the value of this paper to be its identification of an avenue to understand properties of neural networks, independent of the current applicability of this understanding to end objectives (e.g., faster training). We intend for this paper to pose an opportunity for future applications. However, we agree that we do not evaluate them.\n\nIf winning tickets do seem to exist in a wide variety of networks, we believe that the most concrete application is in line with contemporary work on distillation/compression/pruning: if a technique can find winning tickets early on in training, then those winning tickets can be used for the remainder of learning, thereby reducing resource demands and speeding up learning (depending on the profitability of exploiting the sparsity of a winning ticket, as you note next).\n", "\n> 4. Some things that would be interesting to try: 4a) Is there anything special about the pruned/non-pruned weights at the time of initialization? Did they start out with very small values already or are they all “behind” some (dead) downstream neuron? Is there anything that might essentially block gradient signal from updating the pruned neurons? This could perhaps be checked by recording weights’ “trajectories” during training to see if there is a correlation between the “distance weights traveled” and whether or not they end up in the winning ticket.\n\nIn the new Appendix D, we study the pruned and non-pruned weights at the time of initialization. We find that winning ticket initializations tend to come from the extremes of the truncated normal distribution from which the unpruned networks are initialized. We are interested in studying the other questions you mention in future work. We also look at the distance weights travel in the unpruned network, finding that weights that are part of the eventual winning tickets tend to move more than weights that are not part of the winning ticket.\n\n---\n\n> 4b) Do ARD-style/Bayesian approaches or second-order methods to pruning identify (roughly) the same neurons for pruning?\n\nThese are great questions that we are interested in understanding as well. In order to keep our experiments as simple and tractable as possible, we opted to focus on a single, simple, widely-accepted pruning method. 
However, we have updated our limitations section (Section 7) to reflect that we only use a single identification technique and that other techniques may produce winning tickets with different properties (e.g., fewer weights, improved training times, better generalization, or better performance on hardware).\n\n---\n\n> 5. Typo (should be through): “we find winning tickets though a principled search process”\n\nNice catch - it should now be corrected!\n\n---\n\n> For the standard ConvNets I assume you did not use batchnorm. Does batchnorm interfere in any way with the existence of winning tickets? (at least on ResNet they seem to exist with batchnorm as well)\n\nThe new networks (resnet18 and vgg16/19) all use batchnorm. You're correct that lenet and conv2/4/6 do not use batchnorm. As you note, since we still find winning tickets on these larger networks, it does not appear that batchnorm interferes with the existence of winning tickets.", "\nThank you so much for your thoughtful review. Below, you will find our responses to your questions and comments. We have modified the paper to reflect your feedback, and we are very interested in any further feedback you have about the new version of the paper.\n\nWe have summarized the changes in the new version of the paper in a top-level comment called \"Summary of Changes in the New Version.\"\n\nWhere multiple reviewers made similar comments, we have grouped the answers into a \"Common Questions\" comment; you can find this comment as a response to our top-level comment called \"Summary of Changes in the New Version.\"\n\n---\n\n> I acknowledge and support the author’s decision to have thorough and clean experiments on these small models and tasks, rather than having half-baked results on ImageNet, etc. The downside of this is that the experiments are thus not sufficient to claim (with reasonable certainty) that the lottery ticket hypothesis holds “in general”. The paper would be stronger, if the existence of winning tickets on larger-scale experiments or tasks other than classification were shown - even if these experiments did not have a large number of control experiments/ablation studies.\n\nPlease see Common Questions.\n\n---\n\n> 2. While the paper shows the existence of winning tickets robustly and convincingly on the networks/tasks investigated, the next important question would be how to systematically and reliably “break” the existence of lottery tickets. Can they be attributed to a few fundamental factors?\n\nPlease see Common Questions.\n\n---\n\n> Are they a consequence of batch-wise, gradient-based optimization, or an inherent feature of neural networks, or is it the loss functions commonly used, …?\n\nIn Appendices D and E, we show that the existence of winning tickets in lenet and conv2/4/6 is independent of the instantiation of a gradient-based optimization method (at least across Adam, SGD, and Momentum). However, we agree that there are still broader questions about the origin of winning tickets. We hope that the work in this paper makes it possible for us and others to follow with answers to these questions.\n\n---\n\n> On page 2, second paragraph, the paper states: ”When randomly reinitialized, our winning tickets no longer match the performance of the original network, explaining the difficulty of training pruned networks from scratch”. I don’t fully agree - the paper certainly sheds some light on the issue, but an actual explanation would result in a testable hypothesis. 
My comment here is intended to be constructive criticism; I think that the paper has enough “juice” and novelty for being accepted - I am merely pointing out that the overall story is not yet conclusive (and I am aware that it might need several more publications to find these answers).\n\nThis is an excellent observation and we have changed our language accordingly.\n\n---\n\n> 3. Do the winning tickets generalize across hyper-parameters or even tasks? I.e., if a winning ticket is found with one set of hyper-parameters, but then Optimizer/learning-rate/etc. are changed, does the winning-ticket still lead to improved convergence and accuracy? Same question for data-sets: do winning-tickets found on CIFAR-100 also work for CIFAR-10 and vice versa? If winning-tickets turn out to generalize well, in the extreme this could allow “shipping” each network architecture with a few good winning-tickets, thus making it unnecessary to apply expensive iterative pruning every time. I would not expect generalization across data-sets, but it would be highly interesting to see if winning tickets generalize in any way (after all I am still surprised by how well adversarial examples generalize and transfer).\n\nThis is a great question that we are interested in as well. We have conducted some exploratory experiments in each of these directions (changing hyperparameters and changing datasets) in preparation for future research, but the results are too preliminary to merit discussion. We have noted the dataset transfer direction in our list of implications at the end of Section 1, and we think that answering these questions precisely will require a separate publication.\n", "\nThank you so much for your thoughtful review. Below, you will find our responses to your questions and comments. We have modified the paper to reflect your feedback, and we are very interested in any further feedback you have about the new version of the paper.\n\nWe have summarized the changes in the new version of the paper in a top-level comment called \"Summary of Changes in the New Version.\"\n\n---\n\n> 1. Though this is an empirical paper about an observed phenomenon, it should contain a bit more background and discussion on the theoretical implications of its subject. For example, see [2], which is also an empirical work about a theoretical hypothesis, but still includes the right theoretical context that helps the reader judge the meaning of their results. The same should be done here. For instance, there is a growing interest in the link between compression and generalization that is relevant to this work [3,4], and the effect of winning tickets leading to better generalization could be explained via other works which link structure to inductive bias [5,6].\n\nWe have rewritten our discussion section (Section 6) to connect with contemporary understanding of inductive bias, generalization (and its relation to compressibility), and optimization of overparameterized networks. We hope that this section provides appropriate context for interpreting these results; however, we are open to additional suggestions.\n\n---\n\n> 2. The lottery ticket hypothesis is described in the paper as being both about optimization (faster “convergence”) and about generalization (better “generalization accuracy”). However, there is a slight issue with how these terms are treated in the paper. 
First, “convergence” is defined as the point at which the test accuracy reaches a minimum and before it begins to rise again, but it does not mean (and most likely not) that it is the point at which the optimization algorithm converged to its minimum — it is better to write that early stopping regularization was used in this case.\n\nThank you for this very helpful suggestion. We have updated our language throughout the paper to ensure that we are using this terminology properly.\n\n---\n\n> Second, the convergence point is chosen according to the test set, which is bad methodology, because the test set cannot be used for choosing the final model (only the training and validation sets).\n\nWe have updated all of our experiments in the main body of the paper to report the iteration of early-stopping based on validation loss and to report the accuracy at that iteration based on test loss. The conclusions from our results remain the same.\n\n---\n\n> Third, the training accuracies are not reported in the paper, and without them, it is difficult to judge if a given model fails to generalize or simply fails to converge to 100% accuracy on the training set.\n\nWe have updated the paper to include graphs of the training accuracies at early-stopping time for lenet and conv2/4/6. In general, training accuracy at early-stopping time rises with test accuracy. However, at the end of the training process, training accuracy generally reaches 100% for all but the most heavily pruned networks (see the new Appendix B); this is true for both winning tickets and randomly reinitialized networks (although winning tickets generally still reach 100% training accuracy when pruned slightly further (e.g., 3.6% vs. 1.9% for MNIST)). Even so, the accuracy patterns witnessed at early-stopping time remain in place at the end of training: winning tickets see test accuracy improvements and reach higher test accuracy than when randomly reinitialized, indicating that winning tickets indeed generalize better.\n\n---\n\n> As a minor note, “generalization accuracy” as a term is not that common and might be a bit confusing, so it is better to write “test accuracy”.\n\nWe have updated our language to reflect this suggestion.\n\n\n\n", "(Edit: we reworded this comment for clarity, but the content is otherwise the same)\n\nWe would like to thank the reviewers for their thorough feedback. In response to the many valuable suggestions and questions they provided, we have made substantial revisions to the paper. In this comment, we summarize those changes section-by-section.\n\n-----\n\nChanges throughout the paper:\n\n* As suggested by Reviewer 2, we no longer refer to network \"convergence.\" Instead, we describe the same phenomenon as \"the iteration at which early-stopping would occur.\" Rather than discussing faster convergence times, we instead refer to faster learning as indicated by an earlier iteration of early-stopping.\n\n* As suggested by Reviewer 1, we have added mathematical notation throughout the paper where appropriate. We adopt the syntax P_m = k% to describe a winning ticket for which the pruning mask m contains 1's in k% of its indices.\n\n* As suggested by Reviewer 2: for all of our training iterations/test accuracy experiments, we measure early-stopping with the validation set and report accuracy at early-stopping using the test set. 
Our results throughout the paper are the same as in the original submission.\n\n-----\n\n\nSection 1: \n\n* As suggested by Reviewer 1, we have added a formal characterization of the lottery ticket hypothesis in mathematical notation. The meaning of this statement is the same as the informal statement made in the original submission.\n\n-----\n\nSection 2:\n\n* As suggested by Reviewer 2, we have added graphs that show training accuracy at early-stopping time and test accuracy at the end of training (i.e., when training accuracy reaches 100%). Generating this data required re-running our experiments. Therefore, we have updated all reported numbers in this section to reflect the recollected values. Our results remain the same.\n\n* We integrated the P_m notation to streamline the prose. Otherwise, the semantics of this text is exactly the same.\n\n-----\n\nSection 3: We applied the same changes as in Section 2 (described above). Our results remain the same.\n\n-----\n\nSection 4: This section compares results with dropout to results from Section 3. The only change we make is an update to the numbers reported from Section 3 (which are updated as described above). Otherwise, our results are the same.\n\n-----\n\nSection 5: As suggested by Reviewers 1 and 3, we have moved the content for resnet-18 on CIFAR 10 that was in Appendix D in the original submission to this section. Additionally, we provide new experiments for VGG16/19 on CIFAR10.\n\nTo briefly summarize our results, we continue to find winning tickets. However, we show that our results are sensitive to learning rate (as was previously reported for resnet-18 in Appendix D in the original submission). Specifically, at the higher learning rates typically used to train these networks, there is a small accuracy gap between the identified winning ticket and the original network. We show that learning rate warmup eliminates this gap.\n\n-----\n\nSection 6: As suggested by Reviewer 2, we have expanded this section to integrate theoretical context related to generalization, optimization, and inductive bias. Otherwise, our conclusions remain the same.\n\n-----\n\nSection 7: We have added content to our Limitations to reflect the additions that we have promised in our responses to individual reviews. \n\n-----\n\nSection 8: Unchanged.\n\n-----\n\nAppendices:\n\nWe have added content to our Appendix to reflect the additions that we have promised in our responses to individual reviews.\n
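For readers following the P_m notation introduced in the summary above, here is a minimal NumPy sketch of the iterative magnitude-pruning-with-resets procedure that this thread describes. It is a sketch under stated assumptions, not the authors' code: the training run is abstracted behind a placeholder train_fn, the per-round pruning fraction is illustrative, and pruning is done per layer (the global variant discussed earlier in the thread pools magnitudes across layers).

```python
import numpy as np

def find_winning_ticket(init_weights, train_fn, prune_frac, rounds):
    """Iterative magnitude pruning with resets to the original initialization.

    init_weights: list of randomly initialized weight arrays (theta_0).
    train_fn: placeholder for a full training run; takes masked weights plus
              masks and returns trained weights (assumed, not shown here).
    Each round: train, prune prune_frac of the surviving weights by
    magnitude, then implicitly reset survivors to their values in theta_0.
    """
    masks = [np.ones_like(w) for w in init_weights]
    for _ in range(rounds):
        trained = train_fn([w * m for w, m in zip(init_weights, masks)], masks)
        for i, (w, m) in enumerate(zip(trained, masks)):
            alive = np.abs(w[m == 1])              # magnitudes of surviving weights
            k = int(prune_frac * alive.size)
            if k > 0:
                cutoff = np.partition(alive, k)[k]  # layer-wise pruning threshold
                masks[i] = m * (np.abs(w) >= cutoff)
    # The winning ticket is (theta_0, m): the original init plus the mask m,
    # with P_m equal to the fraction of ones remaining in m.
    return [w * m for w, m in zip(init_weights, masks)], masks
```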
Notably, our iterative-pruning method for finding winning tickets becomes sensitive to learning rate, so we have to modify the learning rate schedule from the default values to find winning tickets (e.g., by adding warmup). Unfortunately, running pruning experiments on Imagenet or the like was beyond our means during the rebuttal period. The new experiments, which better evoke real-world architectures, improve our confidence in the generality of the lottery ticket hypothesis. However, we acknowledge this concern.\n\n---\n\n> Reviewer 1: There is no explanation about why the “lottery ticket” can perform well when trained with the “original initialization” but not with random initialization. Is it because the original initialization is not far from the pruned solution? Then this is a kind of overting to the obtained solution.\n\n> Reviewer 3: While the paper shows the existence of winning tickets robustly and convincingly on the networks/tasks investigated, the next important question would be how to systematically and reliably “break” the existence of lottery tickets. Can they be attributed to a few fundamental factors?\n\nWe have not yet been able to definitively answer why a winning ticket can perform well with the original initialization but not random initialization. However, in the updated version, we have added an appendix that provides more detail about the internals of winning tickets from lenet for MNIST (Appendix D). Specifically, we investigate two questions: 1) (as suggested by Reviewer 1) are the initial values of winning tickets close to their trained values? and 2) what is the distribution of weights in winning tickets before initialization?\n\n* Question 1: we actually find the opposite of what Reviewer 1 suggests: in the unpruned network, weights that are part of the eventual winning tickets tend to move more than weights that are not part of the winning ticket.\n\n* Question 2: we find that the winning ticket initializations tend to come from a different distribution than the network as a whole: a bimodal distribution with two peaks toward the extremes of the truncated normal distribution from which the network was originally initialized. We try reinitializing winning tickets from this distribution, but doing so performs no better than random reinitialization. We also try performing magnitude pruning before training based on the hypothesis that low-magnitude weights are unlikely to be part of the eventual winning ticket; this approach also performs no better than random reinitialization. We conclude that these insights based on magnitude at initialization are not sufficient to identify a lottery ticket.\n\nThese results do not definitively answer the questions posed, but they represent the first set of clues on the path to doing so. We intend to continue down this path in our future work.", "I share many of this reviewer's concerns and hope they can be addressed by the authors.\n\nHowever, I found the point about \"original initialization\" to be rather pedantic. The majority of the audience will understand \"original initialization\" to be the values of the weights before any optimization.\n\nWhile it is possible that some light verbiage would be helpful to clarify, I do not think that \"math notations\" will help a bit (and in fact may serve to further confuse).\n\nI am not affiliated with the authors in any way." ]
[ -1, -1, -1, -1, 5, -1, 9, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "Hkelbn3I14", "r1l-QxArJ4", "ryggsG-VkV", "iclr_2019_rJl-b3RcF7", "iclr_2019_rJl-b3RcF7", "ryg2lfbcRQ", "iclr_2019_rJl-b3RcF7", "iclr_2019_rJl-b3RcF7", "SylwJEW9Cm", "HJeDy85anQ", "HJeDy85anQ", "ryemP68v2m", "ryemP68v2m", "Bkg5UpU52m", "iclr_2019_rJl-b3RcF7", "SygdQnlqRm", "HJeDy85anQ" ]
iclr_2019_rJxgknCcK7
FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models
A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.
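For reference, the continuous-time change of variables that the abstract builds on (from Chen et al., 2018, discussed in the reviews below) replaces the log-determinant of finite flows with a trace integrated over time; in the notation below, z(t) follows the dynamics f and p(z(t)) is its density.

```latex
\frac{\partial \log p(z(t))}{\partial t}
  = -\operatorname{tr}\!\left(\frac{\partial f}{\partial z(t)}\right),
\qquad
\log p(z(t_1)) = \log p(z(t_0))
  - \int_{t_0}^{t_1} \operatorname{tr}\!\left(\frac{\partial f}{\partial z(t)}\right) dt .
```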
accepted-oral-papers
This paper proposes the use of recently proposed neural ODEs in a flow-based generative model. As the paper shows, a big advantage of a neural ODE in a generative flow is that an unbiased estimator of the log-determinant of the mapping is straightforward to construct. Another advantage, compared to earlier published flows, is that all variables can be updated in parallel, as the method does not require "chopping up" the variables into blocks. The paper shows significant improvements on several benchmarks, and seems to be a promising avenue for further research. A disadvantage of the method is that the authors were unable to show that the method could produce results that were similar to (or better than) the SOTA on the more challenging benchmark of CIFAR-10. Another downside is its computational cost. Since neural ODEs are relatively new, however, these problems might be resolved with further refinements to the method.
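To illustrate the unbiased estimator the meta-review refers to, here is a minimal PyTorch sketch of Hutchinson's trace estimator, tr(J) = E[eps^T J eps], computed with one vector-Jacobian product per sample rather than the full Jacobian. The toy dynamics and all names are illustrative assumptions, not the paper's code.

```python
import torch

def hutchinson_trace(f_value, z, num_samples=1):
    """Unbiased estimate of tr(df/dz) using tr(J) = E[eps^T J eps].

    f_value: output of the dynamics f(z), built from z with autograd enabled.
    Each sample costs one vector-Jacobian product, i.e. O(D) work, rather
    than the O(D^2) cost of computing the exact Jacobian trace.
    """
    estimate = 0.0
    for _ in range(num_samples):
        eps = torch.randn_like(z)  # satisfies E[eps eps^T] = I
        vjp, = torch.autograd.grad(f_value, z, grad_outputs=eps,
                                   retain_graph=True)  # eps^T (df/dz)
        estimate = estimate + (vjp * eps).sum()
    return estimate / num_samples

# Toy check: f(z) = 2*tanh(z) has Jacobian trace sum(2 * (1 - tanh(z)^2)).
z = torch.randn(4, requires_grad=True)
f_value = 2.0 * torch.tanh(z)
print(hutchinson_trace(f_value, z, num_samples=200))
```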
train
[ "Sylsscjbe4", "ryxeNAsF14", "r1xI56j2R7", "HklkaJG92m", "Hylm9LQd3X", "H1xNi4_oT7", "rJglGJOi6m", "rJlBrCwsp7", "rklhnnPj6X", "ryeYYxV9n7", "Byg3817r5Q" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "public" ]
[ "Thank you for pointing this out. We will update the camera-ready version with the correct results if our paper is accepted. ", "It looks like the authors are not reporting the most up-to-date likelihoods using TANs (as per Table 1 in the official ICML paper http://proceedings.mlr.press/v80/oliva18a.html ). Hence the numbers reported in Table 2 in the paper should be updated.", "I got answers to all the questions I had by reviewing the responses to my concerns and to the other reviewers'. I have no further questions and keep my score the same. Thanks\n", "Summary:\nThis paper discusses an advance in the framework of normalizing flows for generative modeling, named FFJORD. The authors consider normalizing flows in the form of ordinary differential equations, as also discussed in [1]. Their contributions are two-fold: (1) they use an unbiased estimator of the likelihood of the model by approximating the trace of the Jacobian with Hutchinson’s trace estimator, (2) they have implemented the required ODE solvers on GPUs. \n\nThe models are evaluated on a density estimation task on tabular data and two image datasets (MNIST and CIFAR10), as well as on variational inference for auto-encoders, where the datasets MNIST, Omniglot, Freyfaces and Caltech Silhouettes are considered. \n\nThe authors argue that the trace estimator, in combination with reverse-mode automatic differentiation to compute vector-Jacobian products, leads to a computational cost of O(D), instead of O(D^2) for the exact trace of the Jacobian. \nThey compare this to the cost of computing a Jacobian determinant for finite flows, which is O(D^3) in general. They argue that in general all works on finite flows have adjusted their architectures for the flows to avoid the O(D^3) complexity, and that FFJORD has no such restriction.\nHowever, I would like the authors to comment on the following train of thought: autoregressive models, such as MAF, as well as IAF (the inverse of an autoregressive model), do not require O(D^3) to compute Jacobian determinants, as the Jacobian is of triangular form. Note, however, that they are still universal approximators if sufficient flows are applied, as any distribution can be factorized in an autoregressive manner. With this in mind, I find the red cross for MAF under free-form Jacobian slightly misleading. Perhaps I misunderstood something, so please clarify. \n\nAnother topic that I would like the authors to comment on is efficiency and practical use. One of the main points that the authors seem to emphasise is that, contrary to autoregressive models, which require D passes through the model to sample a datapoint of size D, FFJORD is a ‘single-pass’ model, requiring only one pass through the model. They therefore indicate that they can do efficient sampling. However, for FFJORD every forward pass requires a pass through an ODE solver, which, as the authors also state, can be very slow. I could imagine that this is still faster than an autoregressive model, but I doubt this is actually of comparable speed to a forward pass of a finite flow such as Glow or RealNVP. \nOn the other hand, autoregressive models do not require D passes during training, whereas, if I understand correctly, FFJORD relies on two passes through ODE solvers, one for computing the loss, and a second to compute the gradient of the loss with respect to model parameters. So autoregressive models should train considerably faster. The authors do comment on the fact that FFJORD is slower than other models, but they do not give a hint as to how much slower it is. 
This would be of importance for practical use, and for other people to consider using FFJORD in future work. \n\nFor the density estimation task, FFJORD does not have the best performance compared to other baselines, except for MNIST, for which the overall best model was not evaluated (MAF-DDSF). For variational inference FFJORD is stated to outperform all other flows, but the models are only evaluated on the negative evidence lower bound, and not on the negative log-likelihood (NLL). I suspect the NLL to be absent from the paper as it requires more computation, and this takes a long time for FFJORD. Without an evaluation on NLL the improvement over other methods is questionable. Even if the improvement still holds for the NLL, the relative improvement might not weigh heavily enough against increased runtime. FFJORD does require less memory than its competitors.\n\nThe improved runtime by implementing the ODE solvers on GPU versus the runtime on a CPU would be useful, given that this is listed as one of the main contributions.\n\nBesides these questions/comments, I do think the idea of using Hutchinson’s trace estimator is a valid contribution, and the experimental validation of continuous normalizing flows is of interest to the research community. Therefore, in my opinion, the community will benefit from the information in this paper, and it should be accepted. However, I do wish for the authors to address the above questions as it would give a clearer view of the practical use of the proposed model. \n \nSee below for comments and questions:\n\nQuality\nThe paper has a good setup, and is well structured. The scope and limitations section is very much appreciated. \n\nClarity\nThe paper is clearly written overall. The only section I can comment on is the related work section, which is not the best part of the paper. The division into normalizing flows and partitioned transformations is a bit odd. Partitioned transformations surely are also normalizing flows. Furthermore, IAF by Kingma et al. is put in the box of autoregressive models, whereas it is the inverse of an autoregressive model, such that it does not have the D-pass sample problem. For a reader who is not too familiar with the normalizing flows literature, I think this section is a little confusing. Furthermore, there is no related work discussed on continuous time flows, such as (but not limited to) [2].\n\nOriginality\nThe originality of the paper is not stellar, but sufficient for acceptance. \n\nSignificance\nThe community can benefit from the experimental analysis of continuous time flows, and the GPU implementation of the ODE solver. Therefore I think this work is significant. \n\nDetailed questions/comments:\n\n1. In section 4.2, as an additional downside to MAF-DDSF, the authors argue that sampling cannot be performed analytically. Since FFJORD needs to numerically propagate the ODE, I do not think FFJORD can sample analytically either. Is this correct?\n2. The authors argue that they have no restriction on the architecture of the function f, even if they have O(D) estimation of the trace of the jacobian. However, they also say they make use of the bottleneck trick to reduce the variance that arises due to Hutchinson’s estimate of the trace. This seems like a limitation on the architecture to me. Can the authors comment?\n3. In B.1 in the appendix, the street view house numbers dataset is mentioned, but no results appear in the main text, why not?\n4. 
In the results section, it is not clear to me which numbers of the baselines for different datasets are taken from other papers, and which numbers are obtained by the authors of this paper. Please clarify.\n5. In the conclusions, when discussing future work, the authors state that they are interested in reducing the number of function evaluations in the ODE solvers. In various disciplines many people have worked on this problem for a long time. Do the authors think major improvements are soon to be made?\n6. In section 5.2 the dependence of the number of function evaluations (NFE) on the data dimension D is discussed. As a thought experiment they use the fact that going from an isotropic gaussian distribution (in any D) to an isotropic gaussian distribution has a corresponding differential equation of zero. This should convince the reader that NFE is independent of D. However, this seems to me to be such a singular example that I gain no insight from it, and it is not very convincing. Do the authors agree that this particular example does not add much? If not, please explain. \n\n[1] Chen et al. Neural ordinary differential equations. NIPS 2018\n[2] Chen et al. Continuous-time flows for deep generative models.\n\n**** EDIT *****\n\nI have read the response of the authors and appreciate their clarifications and the additional information on the runtimes. See my response below for the concern that remains about the absence of the estimate of the log likelihood for the VAE experiments. Besides this issue, the other comments/answers were satisfactory, and I think this paper is of interest to the research community, so I will stick with my score.\n\n", "This paper further explores the work of Chen et al. (2018) applied to reversible generative modelling. Sections 1 and 2 focus on framing the context of this work. The ODE-solver architecture for continuous normalizing flows learns a density mapping using an instantaneous change-of-variables formula.\nThe contribution of this work seems to be enabling the use of deeper neural networks than in Chen et al. (2018) as part of the ODE solver flow. While the single-layer architecture in Chen et al. (2018) enables efficient exact computation of the Jacobian trace, using a deeper architecture compromises that property. As a result, the authors propose to use the unbiased Hutchinson trace estimator of the Jacobian trace. Furthermore, the authors observe that using a bottleneck architecture reduces the rank of the Jacobian and can therefore help reduce the variance of the estimator. \nThe density estimation task in 2D is nice to see but lacks comparison with Chen et al. (2018), on which this paper improves. Moreover, is the Glow model used here only using additive coupling layers? If so, this might explain the difficulties of this Glow model. \nAlthough the model presented in this paper doesn't obtain state-of-the-art results on the larger problems, the work presented in this paper demonstrates the ability of ODE solvers as continuous normalizing flows to be competitive in the space of prescribed models.\nConcerning discussions and analysis:\n- given the lack of improvement using the bottleneck trick, is there an actual improvement in variance using this trick? Or is this trick merely explaining why a bottleneck architecture is more suited to the Hutchinson trace estimator?\nIn algorithm 1, is \\epsilon only one random vector that keeps being reused at every step of the solver algorithm? 
I would be surprised if the use of a single random vector across different steps did not significantly increase the variance of the estimator.", "Thank you for your kind words about our paper. Your experience is very similar to ours in developing this method. Regarding the bottleneck, yes, we agree that using this type of architecture will in general produce a weaker model. An updated version of our paper clarifies this. In our experiments we did not actually use this because we found that using wide networks gives better performance and a single sample to estimate the expectation from equation 8 worked fine in training our models. ", "Thank you very much for your kind words about our work. We will address your comments in order.\n\nWe have added a comparison to Chen et al. (2018) in our 2D density estimation experiments. Our model performs favorably compared to this baseline.\n\nWe note that the Glow model we used in all of our density estimation experiments used affine transformation layers with both learned scale and translation. We have added a note to the appendix to clarify this.\n\nRegarding the state-of-the-art performance of the model, we would like to stress that our model is mainly comparable to Glow and Real NVP since it is a reversible flow-based generative model with efficient sampling. When compared to models in this class, FFJORD performs the best by a wide margin on all datasets tested except for CIFAR10. The models in the lower half of table 2 are not directly comparable to FFJORD since they are autoregressive and cannot be efficiently sampled from (some cannot even be analytically sampled from). We include those models to demonstrate that FFJORD performs comparably to these models (and outperforms them on some datasets) for density estimation but also has the additional ability to be sampled from and inverted.\n\nRegarding the bottleneck variance reduction, we note that we do observe reduced variance, as can be seen in figure 4. However, we did find that the variance of Hutchinson’s estimator did not pose any problems when training FFJORD models, so we simply used it as is in our experiments. We also did notice that using bottleneck layers tended to reduce performance. We included a discussion of the bottleneck trick to provide the community with some ideas on how to deal with the variance that the estimator adds if it becomes a problem for any future practitioners implementing our method. We have added a sentence to section 3.1.1 to make this more clear.\n\nTo clarify, we use a single epsilon for each integration. This can be seen in the last line of equation 8, where the integral is inside of the expectation, indicating that we first sample epsilon, then integrate (epsilon^T * (df/dz) * epsilon). Resampling epsilon during every step of the numerical solver would mean we’re solving a random ordinary differential equation, which our solvers are not equipped to handle; it would dramatically increase numerical instability.\n\nWe thank you for your review and hope you will appreciate the changes that it has inspired in our paper. \n", "Response:\n\nWe thank the reviewer for their thoughtful comments and questions. We address them in order.\n\nYou ask us to clarify why we have said that autoregressive models do not have a free-form jacobian. As you mention, each step of flow in these models is restricted to have a triangular jacobian. 
While these models can be universal density estimators, their underlying neural network components must be restricted to allow for efficient training. For this reason, we say they do not have a free-form jacobian. \n\nFFJORD requires the use of a numerical ODE solver, and it is true that for each step of the solver, a forward pass must be computed through the gradient function f(z, t) = dz/dt, but we demonstrate in figure 5 that the number of these forward evaluations is not a function of D in practice, so it can be considered a “one-pass” model. \n\nRegarding efficiency of training, FFJORD requires two calls to an ODE solver: one to compute the z_t and one to compute the gradients. This is analogous to the standard backpropagation algorithm, which requires one forward pass to compute the intermediate activations and one backward pass to compute the gradients. \n\nWe have added more clarification regarding the speed of FFJORD compared to competing approaches in the conclusion.\n\nRegarding comparisons to other methods on density estimation, we split table 2 into two sections. The top section consists of reversible generative models (like FFJORD) and the bottom section contains autoregressive models. We believe FFJORD is more fairly comparable to reversible models like Real NVP and Glow (which FFJORD outperforms on all datasets except CIFAR10). We add the results in the bottom half to demonstrate that FFJORD performs comparably to powerful autoregressive models (unlike other reversible generative models) while also being efficient to sample from. \n\nRegarding the background section on partitioned transformations and normalizing flows, we decided to separate out partitioned transformations since they have been successfully used on their own to build large-scale generative models, where other types of normalizing flow have only been used successfully in conjunction with variational autoencoders. \n\n1) We will clarify this in the text, but we would argue that FFJORD does have a known analytical inverse that is written as an integral. It is true we don’t solve this integral analytically--which we will clarify in the text--but in the case of MAF-DDSF the inverse is simply not known.\n\n2) We place no restrictions on the architecture that we use in our experiments. This can be seen in Appendix B. We do not utilize the bottleneck trick in our experiments. We found the additional variance added by Hutchinson’s estimator does not negatively impact training at all. We present the bottleneck trick to give the reader an approach they can use to reduce the variance if it becomes problematic when they implement our method. We have added a sentence to section 3.1.1 to clarify this. \n\n3) We initially used SVHN as a middle-ground dataset which was more difficult than MNIST but less difficult than CIFAR10. We did not include quantitative results on this dataset since it is not a widely reported benchmark. We have removed all references to the dataset. \n\n4) We have added a note in the appendix clarifying this, where we also report standard deviations for table 2.\n\n5) This is a very widely studied problem, but most of the existing research in this area is not easily applicable to our problem. The ODEs we are dealing with are of much higher dimension than those studied heavily in the literature on numerical ODE methods. Moreover, we have unique ODE structures which are not typically explored in the numerical methods literature. 
An example of such a structure is that we are integrating a mini-batch of data through our ODE, which has the dynamics defined by the same function f. It is possible that computation in this setting can be better reused. We hope that the numerical methods community will now become interested in these types of problems.\n\n6) While yes, it is a simple example, we felt that it was sufficient since we also have experimental results that back up this claim. \n", "We thank the reviewer for their time and their kind words about our work. We will address your concerns and questions in the order you wrote them.\n\nYou mention that we do not compare to Chen et al. We chose not to compare directly with this method because CNFs, as presented in their paper, should not be expected to scale to high dimensional data. The analogous comparison in discrete-time flows would be comparing Glow to stacked planar flows. We have added a comparison to Chen et al. on the 2D datasets which illustrates this point. \n\nThe variance of Hutchinson’s estimator is well understood and we do note the asymptotic variance of the estimator in section 3.1.1. While we do not prove this trick reduces variance, we do demonstrate this empirically, which can be seen in section 5.1.\n\nRegarding the “dimensionality” going from D^2 -> D: we believe you mean computation, correct? We believe this is clearly explained in section 3.1, which introduces the estimator that allows the computation to be reduced. \n\nWe have added a note in the introduction to explain Figure 1.\nWe have fixed the typo noticed in section 3.1. \n\nRegarding a comparison to Chen et al., we chose not to include one because while Chen et al. proposed CNFs and the objective that they optimize, they did not really present a generative model that uses this framework. The CNF they presented is comparable to the planar flows first presented in Rezende et al. (2015). These are fairly weak transformations and will not easily scale to the high-dimensional datasets we experimented with. An analogy would be comparing Glow to a stack of planar flows.\n\nYou are correct in your reasoning about why we did not present log-likelihoods for the VAE experiments. While Hutchinson’s trace estimator gives us unbiased estimates of the ELBO, using it to estimate the log-likelihood with importance sampling gives an upper bound due to the stochasticity of the estimator. It would be possible to use the brute-force Jacobian to estimate this, but the computation of doing so proved to be prohibitive. \n", "This paper discusses a technique for continuous normalizing flows in which the transformations are not required to be volume preserving (transformations with unit Jacobian determinant), and the architecture of the neural network does not need to be designed to enforce such a property. Instead, the authors impose no restriction on the architecture of the neural network used to design their reversible mapping.\nThe paper has a good background and literature review, and, as the authors mention, this paper is based on the idea of Chen, Tian Qi, et al. \"Neural Ordinary Differential Equations.\" arXiv preprint arXiv:1806.07366 (2018). Chapter two of this paper is a summary of \"Neural Ordinary Differential Equations,\" and chapter three contains the main contribution of this paper, which can be summarized under two points:\n\n1- The authors borrowed the \"continuous normalizing flow\" of Chen et al. 
and designed an unbiased log-density estimator using Hutchinson's trace estimator, evaluating the trace with complexity O(D) (the data dimension) instead of the O(D^2) used in the Chen et al. paper.\n\n2- They proposed that, by reducing the hidden-layer dimension of the neural network, the variance of the estimator can be reduced. \n\nNovelty and Quality:\nThe main contribution of this paper is summarized above.\nThe paper does not contain any significant theorems or mathematical claims; it is more focused on the design of a linear-time algorithm that estimates the continuous normalizing flow borrowed from the Chen et al. paper. This is a good achievement that can help continuous normalizing flows scale to data with higher dimensions, but in the results and experiments section no comparison has been made to the performance of Chen et al. Also, no guarantees or bounds have been given about the variance reduction of the estimator; it is based more on the authors' intuition.\n\nClarity:\nThe paper is well written and previous relevant methods have been reviewed well. There are a few issues that are listed below:\n1- In section 3, the reason that the dimensionality of the estimator can reduce to D from D^2 could be explained more clearly. \n\n2- Figure 1 is located on the first page of the paper but is never referred to in the main text; it is only mentioned once in the appendix, so it could be moved to the appendix.\n\n3- In section 3.1.1, the “view view” can be changed to “view”.\n\nSignificance and experiments:\nThe experiments are very detailed and extensive, and the authors have compared their algorithm with many other competing algorithms and showed improvement in many of the cases. \nAs mentioned in the Novelty and Quality part of the review, just one comparison is missing, and that is the comparison to the method that the paper is inspired by. It would be interesting to see how much the trace-estimator approach used in this paper would sacrifice in negative log-likelihood or ELBO, especially on real data like MNIST and CIFAR-10. It seems the original paper has not reported performance on those datasets either; is this difficult because the trace calculation in the Chen et al. algorithm has complexity O(D^2)? \n", "I find this paper and its predecessor a game changer and I am happy to see this more detailed analysis on backpropagating ODEs for density estimation. https://arxiv.org/pdf/1806.07366.pdf\nI wrote an implementation of the neural ODE with its gradient in tensorflow, and was saddened to see that while I initially estimated its speed at O(D) (from the naive O(D^3)), it was actually O(D^2) as the trace is indeed expensive. This unfortunately makes it considerably slower than, for example, a Real NVP, which is O(D). I did notice Hutchinson’s trace estimator might improve this neural ODE to O(D) again, but have not gotten around to verifying that, unlike this paper.\n\nHowever, the bottleneck trick seems to undermine the original strength of the approach (allowing for wide networks). I wonder if you have an analysis to show the relation between the number of samples in equation 8 and the error in the likelihood. Specifically, I wonder if it would be possible to train using only a single sample for the expectation in equation 8? Do you have an analysis on the number of samples and how the estimate converges? While introducing considerable noise, it would speed up the training phase, allowing big batches again. 
In the testing phase, evaluating a big batch is typically less of a problem, and more samples could be used there anyway.\n\nOtherwise, I admire this approach and hope to see more work in this direction in the future." ]
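As an illustrative aside on the O(D) trace cost discussed in the reviews above: the trace term can be estimated with a single vector-Jacobian product. The sketch below is our own minimal PyTorch-style rendering, not code from the paper; the function names and the Rademacher probe are assumptions.

```python
import torch

def hutchinson_trace(f, z, eps=None):
    # Unbiased estimate of Tr(df/dz): E[eps^T (df/dz) eps] = Tr(df/dz).
    # One reverse-mode vector-Jacobian product costs O(D), versus O(D^2)
    # for recovering the exact trace entry by entry.
    z = z.detach().requires_grad_(True)
    fz = f(z)
    if eps is None:
        eps = torch.randint_like(z, low=0, high=2) * 2 - 1  # entries in {-1, +1}
    vjp = torch.autograd.grad(fz, z, grad_outputs=eps, create_graph=True)[0]
    return (vjp * eps).sum(dim=-1)  # eps^T (df/dz) eps, one estimate per sample

f = torch.nn.Linear(3, 3)            # stand-in for the dynamics f(z, t)
z = torch.randn(8, 3)
print(hutchinson_trace(f, z).shape)  # torch.Size([8])
```

Consistent with the single-epsilon question raised above, eps would be drawn once per ODE solve and held fixed across all solver steps; resampling inside the solver would change the equation being integrated.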
[ -1, -1, -1, 7, 7, -1, -1, -1, -1, 7, -1 ]
[ -1, -1, -1, 4, 3, -1, -1, -1, -1, 4, -1 ]
[ "ryxeNAsF14", "iclr_2019_rJxgknCcK7", "rklhnnPj6X", "iclr_2019_rJxgknCcK7", "iclr_2019_rJxgknCcK7", "Byg3817r5Q", "Hylm9LQd3X", "HklkaJG92m", "ryeYYxV9n7", "iclr_2019_rJxgknCcK7", "iclr_2019_rJxgknCcK7" ]
iclr_2019_ryGs6iA5Km
How Powerful are Graph Neural Networks?
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
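Since the abstract measures expressive power against the Weisfeiler-Lehman test, a compact sketch of 1-WL color refinement may help fix ideas; the shared signature table below is our illustrative choice, not the paper's implementation.

```python
from itertools import count

def refine(adj, colors, table, fresh):
    # relabel each node by (own color, sorted multiset of neighbor colors)
    sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
    return {v: table.setdefault(sig, next(fresh)) for v, sig in sigs.items()}

def wl_test(adj_a, adj_b, rounds=3):
    ca, cb = {v: 0 for v in adj_a}, {v: 0 for v in adj_b}
    for _ in range(rounds):
        table, fresh = {}, count()  # shared across both graphs each round
        ca, cb = refine(adj_a, ca, table, fresh), refine(adj_b, cb, table, fresh)
        if sorted(ca.values()) != sorted(cb.values()):
            return "non-isomorphic"
    return "possibly isomorphic"

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}  # K_{1,3}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # P_4
print(wl_test(star, path))  # non-isomorphic: degrees already split the color histograms
```

A GNN whose aggregation is injective on (node, neighbor-multiset) pairs can mimic exactly this refinement, which is the sense in which the abstract's "as powerful as the Weisfeiler-Lehman test" claim is made.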
accepted-oral-papers
Graph neural networks are an increasingly popular topic of research in machine learning, and this paper does a good job of studying the representational power of some newly proposed variants. The framing of the problem in terms of the WL test, and the proposal of the GIN architecture is a valuable contribution. Through the reviews and subsequent discussion, it looks like the issues surrounding Theorem 3 have been resolved, and therefore all of the reviewers now agree that this paper should be accepted. There may be some interesting followup work based on studying depth, as pointed out by reviewer 1, but this may not be an issue in GIN and is regardless a topic for future research.
train
[ "rkl2Q1Qi6X", "B1xYlDERRX", "SJeYuLH41V", "HJgMSgUqhQ", "BygALwN0CX", "B1et5yXJ14", "H1gRfJQJy4", "rkxt80KARX", "rJxY7atRCX", "BkgrFw3iRQ", "S1egpyLCAX", "H1xW3wVA0X", "rkeW9FDnnQ", "S1ljyieA0m", "BJgIGNTP27", "BkxHNhu607", "rJx9PavpA7", "BJeRhEhiRX", "ryeaD73iRX", "SJxgqg2N0m", "SyeZ3MU4AX", "BkxOBQ8VA7", "r1xnFfUVA7", "Byg6WVL4Rm", "B1l2k48VCQ", "H1xLhQUEA7", "SklslP4NRX", "HJlp9LgVRX", "Hke3DIJNAQ", "ByxsKEkV07", "SklgY8Ky07", "B1xLcPaKpQ", "H1gkUYX76Q", "BJgd4DhjiQ", "HJgofotjs7", "HkeLiClijQ", "S1xDl1Ovim", "rklL6jDwjm", "SJlxQFl4i7", "Bkg-9GNXi7", "HJltX0els7", "HJgxQah1sQ", "Byx_XQZJjQ", "H1lpE_ACc7", "r1xAhRX05X", "Syl6WEQn9m", "r1g3L4zh5X", "BygugHlo9Q", "H1grV6yjcQ", "r1giX8SqqQ", "HyxT16oFqm", "HJgCBjM5cm", "HJe9n0eFcQ", "SyguON1Ocm", "S1evjSRPc7" ]
[ "public", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "public", "public", "public", "author", "public", "author", "author", "public", "public", "public", "author", "public", "author", "public", "public", "author", "public", "author", "public", "author", "author", "public", "public", "author", "public" ]
[ "I do not think that Equation (4.1) is as powerful as the 1-WL. Consider the two labeled graphs \n\nr -- g\n| |\ng -- r\n\nand \n\nr -- g\n| |\nr -- g\n\nwith node color \"g\" and \"r\". Clearly, the 1-WL can distinguish between these two graphs. Howeover, when using (4.1) with an 1-hot encoding of the labels, both graphs will end up with the same two features. The set of node features will always be the same. ", "Thank you for the detailed response. Regarding the depth of the networks, GIN does not suffer from the curse of depth, i.e. we can use many layers, because we apply architectures similar to JK-Nets (specifically, JK-Concat) in Xu et al. 2018 for readout as described in Section 4.2. We conducted graph classification experiments using 5-layers GNNs (with JK-net) and they work nicely in our experiments. Moreover, as R1 nicely suggested, the influence distribution expansion phenomenon in Xu et al. 2018 indeed would apply to GraphSAGE, GIN etc, though the transition probabilities may not follow canonical random walks when MLP is applied. That being said, Xu et al. 2018 is a great work and we like it. We just wanted to clarify that Theorem 1 was about influence distribution rather than node features, thus, there would be no issue for GIN in terms of invertibility. We hope you are happy with our clarification. \n\nRegarding cross-validation, thanks for letting us know the work. We will mention it in the final version. To clarify, we use the boldface to indicate the best performance in terms of mean accuracy. As we mentioned in the rebuttal, the graph classification benchmark datasets are extremely small compared to other standard deep learning benchmarks for computer vision or NLP, e.g. ImageNet. That’s why standard deviations are high for all the methods (including the previous methods). We do believe we all should move beyond the conventional evaluation on these small datasets, but that is beyond the scope of this paper.\n\nThank you again for your nice suggestions and detailed reviews. We hope our clarification regarding the analysis addresses your concerns. ", "Thank you for the detailed response. Regarding the depth of the networks, GIN does not suffer from the curse of depth, i.e. we can use many layers, because we apply architectures similar to JK-Nets (specifically, JK-Concat) in Xu et al. 2018 for readout as described in Section 4.2. We conducted graph classification experiments using 5-layers GNNs (with JK-net) and they work nicely in our experiments. Moreover, as R1 nicely suggested, the influence distribution expansion phenomenon in Xu et al. 2018 indeed would apply to GraphSAGE, GIN etc, though the transition probabilities may not follow canonical random walks when MLP is applied. That being said, Xu et al. 2018 is a great work and we like it. We just wanted to clarify that Theorem 1 was about influence distribution rather than node features, thus, there would be no issue for GIN in terms of invertibility. We hope you are happy with our clarification. \n\nRegarding cross-validation, thanks for letting us know the work. We will mention it in the final version. To clarify, we use the boldface to indicate the best performance in terms of mean accuracy. As we mentioned in the rebuttal, the graph classification benchmark datasets are extremely small compared to other standard deep learning benchmarks for computer vision or NLP, e.g. ImageNet. That’s why standard deviations are high for all the methods (including the previous methods). 
We do believe we all should move beyond the conventional evaluation on these small datasets, but that is beyond the scope of this paper.\n\nThank you again for your nice suggestions and detailed reviews. We hope our clarification regarding the analysis addresses your concerns. ", "The authors study the expressive power of neighborhood aggregation mechanisms used in Graph Neural Networks and relate them to the 1-dimensional Weisfeiler-Lehman heuristic (1-WL) for graph isomorphism testing. The authors show that GCNs with injections acting on the neighborhood features can distinguish the same graphs that can be distinguished by 1-WL. Moreover, they propose a simple GNN layer, namely GIN, that satisfies this property. Moreover, less powerful GNN layers are studied, such as GCN or GraphSage. Their advantages and disadvantages are discussed and it is shown which graph structures they can distinguish. Finally, the paper shows that the GIN layer beats SOTA GNN layers on well-known benchmark datasets from the graph kernel literature.\n\nStudying the expressive power of neighborhood aggregation mechanisms is an important contribution to the further development of GCNs. The paper is well-written and easy to follow. The experimental results are well explained and the evaluation is convincing.\n\nHowever, I have some concerns regarding the main result in Theorem 3. A consequence of the theorem is that it makes no difference (w.r.t. expressive power) whether one distinguishes the features of the node itself from those of its neighbors. This is remarkable and counterintuitive, but not discussed in the article. However, it is discussed in the proof of Theorem 3 (Appendix), which suggests that the number of iterations must be increased for some graphs in order to obtain the same expressive power. Unfortunately, at this point, the proof is a bit vague. I would like to see a discussion of these differences in the article. This should be clarified in a revised version. \n----\nEdit:\nThe counter example posted in a comment ( https://openreview.net/forum?id=ryGs6iA5Km&noteId=rkl2Q1Qi6X&noteId=rkl2Q1Qi6X ) actually shows that my concerns regarding Theorem 3 and its proof were perfectly justified. I agree that the two graphs provide a counterexample to the main result of the paper. Therefore, I have adjusted my rating. I will increase my rating again when the problem can be resolved. However, this appears to be non-trivial.\n----\nMoreover, the novelty of the results compared to the related work, e.g., mentioned in the comments, should be pointed out.\n\n\nSome further questions and remarks:\n\n(Q1) Did you use a validation set for evaluation? If not, what kind of stopping criterion did you use?\n\n(Q2) You use the universal approximation theorem to prove Theorem 3. Could you please say something about the needed width of the networks?\n\n(R1) Could you please provide standard deviations for all experiments. I suspect that the accuracies on these small datasets fluctuate quite a bit.\n\n(R2) In the comments it was already mentioned that some important related work, e.g., [1], [2], is not mentioned. 
You should address how your work is different from theirs.\n\n\nMinor remarks:\n\n- The colors in Figure 1 are difficult to distinguish\n\n\n\n[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4703190\n[2] https://people.csail.mit.edu/taolei/papers/icml17.pdf\n\n-------------------\nUpdate:\nMost of the weak points were appropriately addressed by the authors and I have increased my rating accordingly.", "Thank you for the response. We address your question regarding experimental setup. First, past work in graph classification reports the best cross-validation accuracy, as we did in our experiments [3]. The graph classification dataset sizes are often small, and therefore using a (single) validation dataset to select hyper-parameters is very unstable (for instance, MUTAG only has 180 data points, so each validation set only contains 18 data points. Compare this to standard deep learning benchmark sets like MNIST that has 70000 data points)**. Therefore, in our paper, we reported cross-validation accuracy for fair comparison to the previous methods. Moreover, our GNN variants and the WL kernel all follow the same experimental setups, so the comparison among them is definitely meaningful; consequently, our conclusion regarding the expressive power is also meaningful. We are planning for future work to evaluate our method on larger datasets, e.g. those mentioned in the post by one of our readers, Mr. Christopher Morris, in https://openreview.net/forum?id=ryGs6iA5Km&noteId=B1xLcPaKpQ.\n\nWe have thoroughly addressed all the concerns of R2. If Reviewer2 still has other questions or concerns regarding our work, we are happy to answer them. \n\n**[5] uses a test set, but its experiments focus on the larger datasets.", "Thank you for the clarification. We would like to first clarify that the letters (phi, f) in Corollary 6 and Theorem 3 do not have direct correspondence; but we can easily rearrange Eqn (4.1) to obtain the corresponding (phi, f) in the form of Theorem 3. Intuitively, what Theorem 3 asks for is to injectively represent a pair of a node and its neighbors, so the injective function, g(c, X), corresponds to (phi, f) in Theorem 3. \n\nFurthermore, our motivation for designing Eqn (4.1), i.e. GIN-0 and GIN-eps, rather than simply applying concatenation, is better empirical performance. In our preliminary experiments, we found such concatenation was harder to train compared to our simple GINs (both GIN-0 and GIN-eps) and achieved lower test accuracy than GINs. The simplicity of GINs brings better performance in practice. We leave the extensive investigation and comparison to our future work.", "Regarding experimental setup, we emphasize again that the graph classification datasets are extremely small compared to standard benchmarks in computer vision and NLP, e.g. ImageNet. Therefore, using a (single) validation dataset to select hyper-parameters is very unstable (for instance, MUTAG only has 180 data points, so each validation set only contains 18 data points). Therefore, following some of the previous deep learning work, we reported the ordinary cross-validation accuracy (the same hyper-parameters, such as number of epochs and minibatch size, were used for all folds). That being said, we understand the existing benchmarks and evaluation for graph classification are limited, and we should all move on to large datasets, as an anonymous reader pointed out in https://openreview.net/forum?id=ryGs6iA5Km&noteId=ryGs6iA5Km&noteId=H1gkUYX76Q. 
In the final version, we will also state our experimental setup more clearly. Thank you for your nice suggestion.\n", "Even though the same approach was used in a previous paper, it is not convincing. Typically the results vary greatly between the epochs. Picking the one with the best validation accuracy leads to unrealistic results. Also, the comparison to the results of the WL kernel is not meaningful since it was obtained with an SVM, where there are fewer hyperparameters. Therefore, you cannot pick the best value from such a large set of values. It is questionable to speak of \"generalization\" in the discussion of your results.\n\nI would like to propose to state the method you used more clearly in the paper and to check the experimental setup used to obtain the results you have copied from other papers.\n\nSince the main contribution of the paper is theoretical, I will keep my rating, although I think that the experimental setup is a clear weak point.", "What I meant was, in g(c, X) you have two functions phi and f, which is the form required by Theorem 3. The problem of the counter-example comes in when you used a single function instead of 2 functions, which ignores the difference between the node at the center and all its neighbors.\n Introducing an epsilon is a technical solution to this problem (in my opinion). I think you actually don't need this because the original form of g(c, X) is enough, and using a single function rather than 2 does not save you much.\n\nNote: I think of phi and f as MLPs, \",\" as concat, and \"{}\" as some aggregation operator, like sum.", "Thanks for your interest. GIN is different from the paper you mentioned. Critically, GIN uses an MLP while Dai et al. uses a perceptron. \nThere are many GNN variants and we leave the analysis of some of them for future work. Note that the graph Laplacian normalization can decrease the representational power of GNNs, but it can also induce useful inductive bias for the applications of interest, e.g., semi-supervised learning. Therefore, we cannot draw a decisive conclusion about the normalization only from the perspective of representational power. It is our future work to investigate generalization, inductive bias and optimization of different GNN variants.", "Neural network-based graph embedding for cross-platform binary code similarity detection\nhttps://arxiv.org/pdf/1708.06525.pdf", "Thank you for the encouraging review! We respond to your further comments below. \n\n1) We probably do not fully understand your comment regarding Eqn (4.1) and g(c,X). Especially, could you please clarify your meaning of \"simplify g(c, X)\"? In our GIN in Eqn (4.1), we compose phi and f in Corollary 6.\n\n2) We will further edit related work according to your suggestions. Interaction Networks is a great work and we like it. ", "This paper presents an interesting take on Weisfeiler-Lehman-type GNNs, where it shows that a WL-GNN's classification power is related to its ability to represent multisets. The authors show a few exemplar networks where the mean and the max aggregators are unable to distinguish different multisets, thus losing classification power. The paper also proposes averaging the node representation with its neighbors (foregoing the “concatenate” function) and using sum pooling rather than mean pooling as aggregator. All these observations are wrapped up in a GNN, called GIN. 
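A tiny numeric illustration of the multiset observation above (toy values of ours, not taken from the paper):

```python
a, b = [1, 1, 2, 2], [1, 2]              # two different multisets of node features
print(sum(a) / len(a), sum(b) / len(b))  # mean agrees: 1.5 1.5
print(max(a), max(b))                    # max agrees: 2 2
print(sum(a), sum(b))                    # sum differs: 6 3
```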
The experiments in Table 1 are inconclusive, unfortunately, as the average accuracies of the different methods are often close and there are no confidence intervals and statistical tests to help guide the reader to understand the significance of the results.\n\nMy chief concern is equating the Weisfeiler-Lehman test (WL-test) with Weisfeiler-Lehman-type GNNs (WL-GNNs). The WL-test relies on countable set inputs and injective hash functions. Here, the paper is oversimplifying the WL-GNN problem. After the first layer, a WL-GNN is operating on uncountable sets. On uncountable sets, saying that a function is injective does not tell us much about it; we need a measure of how closely packed we find the points in the function’s image (a measure in measure theory, a density in probability). On countable sets, saying a function is injective tells us much about the function. Moreover, the WL-test hash function does not even need to operate over sets with total or even partial orders. As a neural network, the WL-GNN “hash” ($f$ in the paper) must operate over a totally ordered set (\\mathbb{R}^n, n > 0). Porting the WL-test argument of “convergence to unique isomorphic fingerprints” to a WL-GNN requires a measure-theoretic analysis of the output of the WL-GNN layers, and careful analysis of whether the total order of the set creates attractors when the layers are applied recursively. \n\nTo illustrate the above *attractor* point, let’s consider the construct of Theorem 1 of (Xu et al., 2018), where the WL-GNN “hash” ($f$) is (roughly) described as the transition probability matrix of a random walk on the input graph. Under well-known conditions, the successive application of this operator (\"hash\" or transition probability matrix P in this case) can go towards an attractor (the steady state). Here, we need a measure-theoretic analysis of the “hash” even if it is bijective: random walk mixing. The random walk transition operator can be invertible (bijective), but we still say the random walker will mix, i.e., the walker forgets where it started, even if the transition operation can be perfectly undone by inversion (P^{-1}). In a WL-GNN that only uses the last layer for classification, this would manifest itself as poor performance in a WL-GNN with a large number of layers, and vanishing gradients. Of course, since (Xu et al., 2018) argued to revert back to the framework of (Duvenaud et al., 2015) of using the embeddings of all layers, one can argue that this mixing problem is just a problem of “wasted computation”.\n\nThe matrix analysis of the last paragraph also points to another potential problem with the sum aggregator. GIN needs to be shallow. With ReLU activations the reason is simple: for an adjacency matrix $A$, the value of $A^j$ grows very quickly with $j$ (diverges). With sigmoid activations, GIN would experience vanishing gradients in graphs with high variance in node degrees.\n\nThe paper should be careful with oversimplifications. Simplifications are useful for insight but can be dangerous if not prefaced by clear warnings and a good understanding of their limitations. I am not asking for a measure-theoretic analysis revision of the paper (it could be left to a follow-up paper). 
I am asking for a *relatively long* discussion of the limitations of the analysis.\n\nSuggestions to strengthen the paper:\n•\tPlease address the above concerns.\n•\tTable 1 should have confidence intervals (a statistical analysis of significance would be a welcome bonus).\n•\tPlease mention the classes of graphs where the WL-test cannot distinguish two non-isomorphic graphs. See (Douglas, 2011), (Cai et al., 1992) and (Evdokimov and Ponomarenko, 1999) for the examples. It is important for the WL-GNN literature to keep track of the more fundamental limitations of the method.\n•\t(Hamilton et al., 2017) also uses the LSTM aggregator, besides the max aggregator and mean aggregator, which outperforms both max and mean in some tasks. Does the LSTM aggregator also outperform the sum aggregator in the tasks of Table 1? It is important for the community to know if unusual aggregators (such as the asymmetric LSTM) have some yet-to-be-discovered class-distinguishing power.\n\n\n--------- Update -------\n\nThe counter-example in \nhttps://openreview.net/forum?id=ryGs6iA5Km&noteId=rkl2Q1Qi6X\nis indeed a problem for Theorem 3 if \\{h_v^{(k-1)}, h_u^{(k-1)} : u \\in \\mathcal{N}_v\\} is not a typo for a set of tuples \\{(h_v^{(k-1)}, h_u^{(k-1)}) : u \\in \\mathcal{N}_v\\}. Unfortunately, in their proof, the submission states \"difficulty in proving this form of aggregation mainly lies in the fact that it does not immediately distinguish the root or central node from its neighbors\", which means \\{h_v^{(k-1)}, h_u^{(k-1)} : u \\in \\mathcal{N}_v\\} is actually \\{h_v^{(k-1)}\\} \\cup \\{ h_u^{(k-1)} : u \\in \\mathcal{N}_v\\}, which is not as powerful as WL. Concatenating is more powerful than summing in the node's own embedding, but the latter results in a simpler model and could be easier to learn in practice. And I am still concerned about the countable x uncountable domain/image issue I raised in my review.\n\nStill, the reviewers seem to be doing all the discussion among themselves, with no input from the authors. I am now following Reviewer 2.\n\n----\n\nReverting my score to my original score. The authors have addressed most of my concerns, thank you. The restricted theorems and propositions better describe the contribution.\n\nI would like to note that while the proof of (Xu et al., 2018) is limited, that does not mean it is not applicable to GIN or GraphSAGE or similar models. The paper uses 5 GNN layers, which in my experience is the maximum I could ever use with GNNs without seeing a degradation in performance. I don't think this should be a topic for this paper, though.\n\n\nXu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K., & Jegelka, S. (2018). Representation Learning on Graphs with Jumping Knowledge Networks. In ICML.\n\nCai, J. Y., Fürer, M., & Immerman, N. (1992). An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4), 389-410.\n\nDouglas, B. L. (2011). The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprint arXiv:1101.5211.\n\nEvdokimov, S., & Ponomarenko, I. (1999). Isomorphism of coloured graphs with slowly increasing multiplicity of Jordan blocks. Combinatorica, 19(3), 321-333.\n\n", "For 10-fold cross-validation, it is important to highlight that it tends to underestimate the confidence interval range (see (Bengio and Grandvalet, 2004)). It is important to let readers know that there is more uncertainty in the results, which was not quantified. \n\nI also find the use of boldface confusing. 
Summing and subtracting the confidence intervals, a lot more models overlap. \n\nBengio, Yoshua, and Yves Grandvalet. \"No unbiased estimator of the variance of k-fold cross-validation.\" Journal of Machine Learning Research 5, no. Sep (2004): 1089-1105.", "This paper presents a very interesting investigation of the expressive capabilities of graph neural networks, in particular focusing on the discriminative power of such GNN models, i.e. the ability to tell that two inputs are different when they are actually different. The analysis is based on the study of injective representation functions on multisets. This perspective in particular allows the authors to distinguish different aggregation methods, sum, mean and max, as well as to distinguish one-layer linear transformations from multi-layer MLPs. Based on the analysis the authors proposed a variant of the GNN called Graph Isomorphism Networks (GINs) that use MLPs instead of linear transformations on each layer, and sum instead of mean or max as the aggregation method, which has the most discriminative power following the analysis. Experiments were done on node classification benchmarks to support the claims.\n\nOverall I quite liked this paper. The study of the expressive capabilities of GNNs is a very important problem. Given the popularity of this class of models recently, theoretical analysis for these models is largely missing. Previous attempts at studying the capability of GNNs focus on the function approximation perspective (e.g. Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction by Herzig et al., which is worth discussing). This paper presents a very different angle focusing on discriminative capabilities. Being able to tell two inputs apart when they are different is obviously just one aspect of representation power, but this paper showed that studying this aspect can already give us some interesting insights.\n\nI do feel however that the authors should make it clear that discriminative power is not the only thing we care about, and in most applications we are not doing graph isomorphism tests. The ability to tell, for example, how far apart two inputs are when they are not the same is also very (and maybe more) important, which such isomorphism / injective map based analysis does not capture at all. In fact the assumption that each feature vector can be mapped to a unique label in {a, b, c, ...} (Section 3 first paragraph) is overly simplistic and only makes sense for analyzing injective maps. If we want to reason anything about the continuity of the features and representations, this assumption does not apply, and the real set is not countable so such a mapping cannot exist.\n\nEquation 4.1 describes the GIN update, which is proposed as “the most powerful GNN”. However, such an architecture is not really new; for example, the Interaction Networks (Battaglia et al. 2016) already use sum aggregation and MLPs as the building blocks. Also, it is said that in the first iteration a simple sum is enough to implement an injective map; this is true for sum, but replacing that with mean and max can lose information very early on. Another MLP on the input features, at least for mean or max aggregation, is therefore necessary for the first iteration. This isn’t made very clear in the paper.\n\nThe training set results presented in section 6.1 are not very clear. The plots show only one run for each model variant, which run was it? 
As the purpose is to show that some variants fit well, and some others overfit, these runs should be chosen to optimize training set performance, rather than generalization. Also, the restriction that all models are given the same (small) number of hidden units per node should be made clear. I imagine if the number of hidden units is allowed to be much bigger, mean and max aggregators should also catch up.\n\nAs mentioned earlier, I quite liked the paper despite some restrictions and things to clarify. I would vote for accepting this paper for publication at ICLR.\n\n--------\n\nConsidering the counter-example given above, I'm lowering my scores a bit. The proof of theorem 3 is less than clear. The proof for the first half of theorem 3 (a) is quite obvious, but the proof for the second half is a bit hand-wavy.\n\nIn the worst case, the second half of theorem 3 (a) will be invalid. The most general GNN will then have to use an update function in the form of the first half of 3(a), and all the other analysis still holds. The experiments will need to be rerun.\n\n--------\n\nUpdate: the new revision resolved the counter-example issue and I'm mostly happy with it, so my rating was adjusted again.", "I thank the authors for the revision of the paper and the response. I have readjusted my rating.\n\nThe solution to the question raised by the counter example in the new equation (4.1) is a technical one. I would rather prefer not to simplify the function g(c, X), which uses two functions phi and f in this form, as it really doesn't buy us much.\n\nW.r.t. related work, the statement \"Not surprisingly, some building blocks of GIN, e.g. sum aggregation\nand MLP encoding, also appeared in other models\" (section 6) is not fair and misleading. It is not the case that \"some building blocks\" also appear in other models; rather, some other models, like interaction networks, already contain \"all\" the essential building blocks (sum, MLP, etc.) presented in this paper. This doesn't undermine the theoretical contribution of this paper, but the authors should be fair to previous work.\n\n\n\n", "Thanks for your detailed reply. The mentioned weak points 1, 2 and 4 were appropriately addressed by the authors and I have increased my rating accordingly.\n\nRegarding point 3.\n>> We selected an epoch with the highest cross-validation accuracy (averaged over 10 folds) following what previous deep learning papers do, e.g., [3][4].\n\nI think there is no common approach to this and the experimental setup in previous papers differs. Many papers use nested cross-validation, others use cross-validation with a fixed validation set, e.g., [5]. Also, in [4] a validation set seems to be used.\nIf I understand your method correctly, you report the best accuracy value obtained for any combination of hyperparameters -- instead of applying the classifier with the hyperparameters that work best for a validation set to the test set. In my opinion the approach is problematic. In particular, comparing to accuracy results obtained with a different experimental setup is not meaningful.\n\n[5] Hierarchical Graph Representation Learning with Differentiable Pooling\nRex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, Jure Leskovec \nNeurIPS 2018\n", "According to the current paper, can one say that all the graph Laplacian normalizations in previous GCNs are not essential? Or redundant in some sense? 
\nWhat's really essential in a graph neural network: is it equation (4.1) for GIN, or equation (10) for structure2vec in Dai et al.? \nAnd a potentially really different representation power will probably come from a different message passing update, as in eq. (14) & (15) in Dai et al.? \n", "GIN is essentially the same as the graph neural network in \nequation (10) of this paper: \nDai et al. ICML 2016. Discriminative Embeddings of Latent Variable Models for Structured Data\nhttps://arxiv.org/pdf/1603.05629.pdf\n\nA discussion of this related work, and a comparison to structure2vec on their datasets, would help improve the paper. \n\nAlso, how about the other message-passing version of the graph neural network developed in Dai et al. (eq. (14) & (15))? Will it be more powerful? ", "We thoroughly addressed the counter-example and the related concern in https://openreview.net/forum?id=ryGs6iA5Km&noteId=SyeZ3MU4AX\nFurthermore, we revised our paper.", "We begin by acknowledging that Eqn (4.1) and Theorem 3a-Eqn.2) in our initial submission (which do not distinguish the center nodes from their neighbors) were indeed insufficient to be as powerful as the WL test. The example provided by the anonymous reader makes a great point about the corner case. That said, we agree that in order to realize the most powerful GNN, its aggregation scheme needs to distinguish the center node from its neighbors.\n\nThe good news is that we can resolve this corner case by making a very simple modification to our GIN aggregation scheme in Eq. (4.1) of the initial submission, so that the modified GIN can provably distinguish the root/center node from its neighbors during the aggregation. This implies that our modified GIN handles the counter-example raised by the anonymous reader, and, more importantly, we can prove that the modified GIN is as powerful as the WL test under the common assumption that the input node features are from a countable universe. In the following, we will explain these points in more detail.\n\nFirst, we present a simple update to our current GIN aggregation scheme, and show that it now handles the counter-example provided by the anonymous reader. Our simple modification to the original GIN aggregation in Eq. (4.1) of the initial submission is:\n\nh_v^{(k)} = MLP( (1 + \\epsilon) h_v^{(k-1)} + \\sum_{u \\in \\mathcal{N}(v)} h_u^{(k-1)} ), (**), Eq. (4.1) of the revised paper,\n\nwhere \\epsilon is a fixed or learnable scalar parameter. We will show that there exist infinitely many \\epsilon where the modified GIN (as defined above) is as powerful as WL. Note that setting \\epsilon = 0 reduces to our original GIN aggregation in Eq. (4.1) of the initial submission. Thus, the above equation (Eq. (**)) smoothly “extrapolates” the original GIN architecture, and with the epsilon term, the modified GIN can now distinguish the center node from its neighbors. Before moving to the formal proof, let us first illustrate how the modified GIN handles the counter-example by the anonymous reader:\n\nR - R        R - G\n|   |  v.s.  |   |\nG - G        G - R\n\nAssume we use the one-hot encodings for the input node features, i.e., R = [1, 0] and G = [0, 1]. After 1 iteration of aggregation defined by Eq. (**), our modified GIN obtains the following node representations (before applying MLP in (**)); thus, it successfully distinguishes the two graphs with small non-zero eps=\\epsilon.\n\n[2+eps, 1] -- [2+eps, 1]        [1+eps, 2] -- [2, 1+eps]\n|                      |  vs.   |                      |\n|                      |        |                      |\n[1, 2+eps] -- [1, 2+eps]        [2, 1+eps] -- [1+eps, 2]\n\nThe key here is that with non-zero (small) eps, [2+eps, 1] and [2, 1+eps] are now different. In other words, adding the \\epsilon term in Eq. (**) enables the modified GIN to “identify” the center nodes and distinguish them from neighboring nodes. \n\nWith the intuition above, we now give a formal proof for the modified GIN architecture. We start with Lemma 5 (universal multiset functions) in our revised paper, and extend it to Corollary 6 in the revised paper that can distinguish the center node from the neighboring nodes. Crucially, the function h(c, X) in Corollary 6 is now the injective mapping over the *pair* of a center node c and its neighbor multiset X. This implies that h(c, X) in Corollary 6 can distinguish center nodes from their neighboring nodes.\n\nCorollary 6\nAssume \\mathcal{X} is countable. There exists a function f: \\mathcal{X} → R^n so that for infinitely many choices of \\epsilon, including all irrational numbers, h(c, X) \\equiv (1 + \\epsilon) f(c) + \\sum_{x \\in X} f(x) is unique for each pair (c, X), where c \\in \\mathcal{X}, and X \\subset \\mathcal{X} is a finite multiset. \n\n---Proof sketch (details are provided in the Appendix of the revised paper, see Proof of Corollary 6)\nThe proof builds on Lemma 5, which constructs the function f that maps each finite multiset uniquely to a rational scalar with an N-digit-expansion representation. With the same choice of f from Lemma 5, the irrationality of \\epsilon enables us to distinguish the center node representation c from any combination of multiset representations, which is always rational. That is, h(c,X) is unique for each unique pair (c,X).\n----\n\nUsing h(c, X) for the aggregation, we can straightforwardly derive our modified GIN aggregation in Eq. (**) (similarly to the MLP-sharing-across-layers trick described after Lemma 5). We included a detailed derivation in Section 4.1 of the revised paper.", "We sincerely appreciate all the reviews; they give positive and high-quality comments on our paper with a lot of constructive feedback. We also thank the many anonymous commenters for their interest and helpful discussion. In the revised paper, we did our best to address the concerns and suggestions to strengthen our paper. We sincerely hope reviewers revisit the rating in light of our revision and response. The following summarizes our revisions. Please see our rebuttal for the detailed discussion. \n\nMajor revisions:\n1. An anonymous reader and Reviewer2 made a clever observation that our original GIN aggregation in Eq. (4.1) and Theorem 3a-Eqn.2) of the initial submission cannot distinguish certain corner case graphs that the WL test can distinguish. We fixed this issue by 1) making a slight modification to GIN’s aggregation in Eq. (4.1), 2) adding Corollary 6 to show Eqn. (4.1) in the revised paper is as powerful as WL, and 3) removing Theorem 3a-Eqn.2). The modified GIN aggregation smoothly extrapolates the original one, avoids the corner case, and can be shown to be as powerful as the WL test. We conducted extensive experiments on the modified GIN to further validate our model. (see below https://openreview.net/forum?id=ryGs6iA5Km&noteId=ryGs6iA5Km&noteId=SyeZ3MU4AX for our detailed response.)\n\n2. 
2. Based on the helpful comments of Reviewer1 on the countability of node features, we have now made our setting much clearer: We clarified the common assumption that input node features are from a countable set, and we further added Lemma 4 in the revised paper to prove that the hidden node features are also always from a countable set under this assumption. With the countability assumption, it is meaningful to discuss injectiveness in Theorem 3, and our countability assumption used in Lemma 5 (universal multiset functions) always holds. We also provided a detailed discussion on the correspondence between the WL test and WL-GNN under the countability assumption, validating our theory equating the two.\n\n\nMinor revisions:\n1. R3 makes a great point that beyond distinguishing different graphs, it is equally important for GNNs to capture their structural similarity. We have already mentioned this point after Theorem 3. We now made this clearer and added a more detailed discussion in Section 4.\n2. In response to R3 and R2, we added Section 6 for a detailed discussion of related work.\n3. Following the suggestions by R1 and R2, we added standard deviations in the experiments.\n4. Based on the great insight by an anonymous reader, we added a discussion on the expressive power of Sum-Linear when the bias term is included.", "We also conducted extensive experiments on the modified GIN architecture with Eq. (**), where we learn epsilon by gradient descent. We included the additional results in Section 7 of our revised paper. In terms of training accuracy, which is the main focus of our paper, we observed from our new Figure 4 (in the revised paper) that the modified GIN (we call it GIN-eps in our paper) gives the same results as our original GIN (GIN-0) does, showing no improvement in training accuracy. This is because the original GIN already fits the training data very well, achieving nearly 100% training accuracy on almost all of our datasets. Consequently, the explicit learning of epsilon in the modified GIN (GIN-eps) does not help much. Interestingly, in terms of test accuracy, we observed from Table 1 (in the revised paper) that for GIN-eps (modified GIN) there is a slight drop in test accuracy (0.5% on average) compared to GIN-0 (original GIN). Since GIN-0 and GIN-eps showed almost no difference in training accuracy, both have sufficient discriminative power on this data, and the slight drop in test accuracy should be explained by generalization rather than expressiveness. We leave the investigation of the effectiveness of GIN-0 for future work. We want to emphasize that the pooling scheme (sum vs. average vs. max) and mapping scheme (MLP vs. linear) do affect the performance w.r.t. training accuracy, and consequently also affect the test accuracy. Thus, our main findings distinguishing the sum-MLP architecture from other aggregation schemes for maximally expressive GNNs are still valid. \n\nAs a final remark, as R1 nicely commented, instead of Eq. (**), a node and its neighbors can be concatenated, rather than summed, to achieve the same power as the WL test. Interestingly, as R1 cleverly predicted, in our preliminary experiments, we found such concatenation was harder to train compared to our simple GINs (both GIN-0 and GIN-eps) and achieved lower test accuracy. We leave the extensive investigation and comparison to future work.\n\nWe sincerely appreciate the reviewer and commenter for the great suggestions and insights, which enabled us to further strengthen our paper. We hope our new version resolves the reviewers’ main concerns.",
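To make Eq. (**) concrete, here is a minimal NumPy sketch of one aggregation round (before the MLP) on the two four-cycle graphs from the counter-example above. This is an illustrative reconstruction, not the authors' implementation, and all variable names are hypothetical:

```python
import numpy as np

def aggregate(features, adjacency, eps):
    # One round of h_v <- (1 + eps) * h_v + sum_{u in N(v)} h_u  (Eq. (**), pre-MLP).
    return (1.0 + eps) * features + adjacency @ features

# 4-cycle 0-1-2-3-0; no self-loops.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

R, G = [1.0, 0.0], [0.0, 1.0]          # one-hot node features
graph_a = np.array([R, R, G, G])       # has an R-R edge and a G-G edge
graph_b = np.array([R, G, R, G])       # colors alternate around the cycle

for eps in (0.0, 0.1):
    ha, hb = aggregate(graph_a, cycle, eps), aggregate(graph_b, cycle, eps)
    same = sorted(map(tuple, ha.round(6))) == sorted(map(tuple, hb.round(6)))
    print(f"eps={eps}: graphs indistinguishable as multisets: {same}")
# eps=0.0 -> True (both graphs yield the multiset {[2,1], [2,1], [1,2], [1,2]});
# eps=0.1 -> False, matching the [2+eps, 1] vs. [2, 1+eps] values shown above.
```

With eps = 0 the two multisets of node representations coincide, reproducing the counter-example; any small non-zero eps separates them.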
"We thank the reviewer for the positive review and constructive feedback! We are glad that the reviewer likes our paper.\n\nFirst, we completely agree that the ability of GNNs to capture the structural similarity of graphs is very important besides their discriminative power, and we believe this is one of the most important benefits of using GNNs over the WL kernel. We have now made this point clearer in Section 4. Furthermore, we emphasized that we do consider node features to lie in R^d so that they can capture the similarity. The subtlety is that (as R1 nicely pointed out) we need a common assumption that node features at each layer are from a countable set in R^d (not from the entire R^d). This is satisfied if the input node features are from a countable set, because countability propagates across all layers of a GNN. We leave uncountable node input features for future work and add a more detailed discussion in Section 4 of the revised paper. \n\nIn the following, we respond to R3’s other helpful comments and suggestions:\n\n1. RE: Architecture is similar to, e.g., Interaction Networks\nThank you for the pointers. Some of our GIN’s building blocks, e.g., sum and MLP, indeed appeared in other architectures. We emphasize that while previous works tend to be somewhat ad hoc in designing GNN architectures, our main emphasis is on deriving our GIN architecture from the theoretical motivation. In Section 6 of the revised version, we mention related GNN architectures and discuss the differences. \n\n2. RE: Using MLP for mean or max in the initial step is more fair?\nWe think there might be a slight misunderstanding here: as we discussed with concrete examples in Section 5.2, mean or max pooling is inherently incapable of capturing the multiset information regardless of the use of an MLP. In particular, in our experiments, we use one-hot encodings as input node features, so the use of an MLP on top of them does not increase the discriminative power of mean/max pooling.\n\n3. RE: Training set results optimized for test performance?\nThe results were not actually optimized for test performance. Instead, for all the GNNs, we used exactly the same configurations across all datasets: 5 GNN layers (including the input layer), hidden units of size 64, minibatch of size 128, and 0.5 dropout ratio. For the WL subtree kernel, we set the number of iterations to 4, which is comparable to the 5 GNN layers. We clarified this in Figure 6 of the revised paper.", "Thank you for the detailed reviews. In the general post, we have addressed your chief concern regarding our original Eqn (4.1) and part of Theorem 3a). We sincerely hope R2 can revisit the rating in light of our revision and response.\n\nAnswers to R2’s other questions:\n1. RE: Standard deviations\nWe added the standard deviations in Table 1. Note that on many datasets, standard deviations are fairly high for the previous methods as well as our methods due to the small training datasets. Our GINs achieved statistically significant improvement on the two REDDIT datasets where the number of graphs is fairly large.
We leave the empirical evaluation on larger datasets to future work, but we believe that more expressive GNN models like our GINs can benefit more from larger training data by better capturing important discriminative structural features.\n\n2. RE: Discussion on related work\nFollowing the suggestion, in Section 6 of the revised paper, we discuss the difference of our work to e.g., [1][2]. In short, the important difference is that [1][2] both focus on the specific GNN architectures, while we provide a general framework for analyzing and characterizing the expressive power of a broad class of GNNs in the literature.\n\n3. RE: Experimental setup and stopping criteria\nWe selected an epoch with the highest cross-validation accuracy (averaged over 10 folds) following what previous deep learning papers do, e.g., [3][4]. This is for fair comparison as most previous papers on graph classification only report cross-validation accuracy.\n\n4. RE: Network width\nOur proofs focus on existential analysis, i.e., there exists a way we can represent multisets with unique representations. Thus, the network width necessary for the functions provided in our proofs may only serve as an upper bound. For practical purposes, in our experiments, we found 32 or 64 hidden units are usually sufficient to perfectly fit the training set.\n\n[3] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), pp. 2014–2023, 2016.\n[4] Sergey Ivanov and Evgeny Burnaev. Anonymous walk embeddings. In International Conference on Machine Learning (ICML), pp. 2191–2200, 2018.", "Thank you for the detailed reviews and constructive feedback! We are glad that the reviewer finds our paper interesting. We apologize for the somewhat delayed response; it took us time to run additional experiments and add more careful analysis so that we can present an improved and more polished paper to everyone. We appreciate your understanding.\n\nIn the following, we first address the main concern on equating the WL test and the WL-GNNs by showing its validity under a mild practical assumption. Then, we clarify the misunderstanding regarding the random walk mixing behavior of the WL-GNNs, showing that our GIN architecture does not suffer from such behavior. Finally, we discuss confidence intervals of our experimental results and also address other concerns of the reviewer.\n\n1. RE: Validity of equating the WL test operating on countable sets to the WL-GNN operating on uncountable sets.\nThe reviewer makes a great observation that countability of node features is essential and necessary for our theory, and we acknowledge that our current Theorem 3 and Lemma 5 are built on the common assumption that input node features are from a countable universe. We have now made this clear in our paper. We also filled in a technical gap/detail to address R1’s concern that after the first iteration, we are in an uncountable universe: this actually does not happen. We can show that for a fixed aggregation function, hidden node features also form a countable universe, because the countability of input node features recursively propagates into deeper layers. We also added a rigorous proof for this (Lemma 4 in our revised paper). As the reviewer nicely suggests, for the uncountable setting, it would be useful to have measure-theoretic analysis, which we leave for future work. 
Often input node features in graph classification applications (e.g., chemistry, bioinformatics, social) come from a countable (in fact, finite) universe, so our assumption is realistic. In the revised version, we clearly stated our assumptions at the beginning of Section 3 and have added further discussion on the relation between the WL test and WL-GNN after Theorem 3.\n\n2. RE: Random walk mixing behavior of the GIN architecture.\nWe think there might be a slight misunderstanding here: (1) Theorem 1 of (Xu et al., 2018) relates the random walk to the influence distribution in Definition 3.1 of (Xu et al., 2018), rather than the precise node representation, and (2) the analysis of Theorem 1 is specific to the GCN architecture (Kipf & Welling, 2017), where 1-layer perceptrons with mean pooling are used for neighbor aggregation. The GIN architecture does not suffer from the problem of random walk mixing because (1) Theorem 1 in (Xu et al., 2018) shows the influence distribution converges to a random walk limit distribution, however, it does not yet tell whether the node representations converge to the random walk limit distribution. Thus, “the walker forgetting where it started” may not happen. (2) The GIN architecture uses MLPs rather than the 1-layer perceptron in (Kipf & Welling, 2017). The analysis in (Xu et al., 2018) specifically applies to models using 1-layer perceptrons, and therefore, it is not clear whether this analysis still holds for GIN. \nFurthermore, the reviewer is concerned with a possibly exploding value due to the sum aggregation, but this can be avoided because we have different learnable neural networks at each layer that can scale down the summed output (also, in practice, we did not observe such explosion).\n\n3. RE: Confidence interval in experiments\nFollowing the suggestion, we added the standard deviations in Table 1. Because of space limit, we only added standard deviation in Table1, and confidence interval can be obtained via the standard deviation. The confidence interval of 95% is mean 0.754*std, and confidence interval of 90% is mean 0.611*std. Note that on many datasets, standard deviations are fairly high for the previous methods as well as our methods due to the small training datasets. Our GINs achieved statistically significant improvement on the two REDDIT datasets where the number of graphs are fairly large. We leave the empirical evaluation on larger datasets to future work, but we believe that more expressive GNN models like our GINs can benefit more from larger training data by better capturing important discriminative structural features.\n\n4. Other comments:\nWe also thank the reviewer for many other comments to strengthen our paper. In the revised paper, we clarified that WL-test cannot distinguish e.g., regular graphs. We discussed in Section 5.5 that the expressive power of other poolings such as LSTM and attention pooling can be analyzed under our framework, but we leave the empirical investigation to future work.", "I understood \\{h_v^{(k-1)}, h_u^{(k-1)} : u \\in \\mathcal{N}_v\\} as a typo for a set of tuples \\{(h_v^{(k-1)}, h_u^{(k-1)}) : u \\in \\mathcal{N}_v\\}. Which would have been fine.\n\nBut you are right that looking at the proof in the appendix, it states \"difficulty in proving this form of aggregation mainly lies in the fact that it does not immediately distinguish the root or central node from its neighbors\" ... which is not how WL is supposed to work. 
Thanks!\n\nOn top of these issues, WL requires a countable space while their approach operates over uncountable spaces (which remains my main concern). Even reverting to aggregation will not fix this mismatch.", "We are now working hard on a thorough response and revision to fully address the concerns of Reviewer2 and the anonymous reader. Thanks for your patience.", "The counterexample appears to be related to a flaw in Theorem 3, see this comment: https://openreview.net/forum?id=ryGs6iA5Km&noteId=ByxsKEkV07\n\nIn my opinion, a statement from the authors (and a revision) is absolutely necessary.", "Theorem 3 states (as a sideline) that it makes no difference whether we consider a) (label(v), {label(u) : uv in E}) or b) just the set {label(v)} \cup {label(u) : uv in E}. The set notation used for b) in the paper is a bit unclear, but this appears to be the intended meaning (from the proof and the approach used in section 4.1). For this set, Equation (4.1) yields an injection as claimed. Therefore the error actually affects Theorem 3, the main result of the paper. Clearly, WL is not perfect (otherwise it would solve the graph isomorphism problem), but that does not make the flaw any less serious. In my opinion, a revision by the authors is absolutely necessary.", "Thanks for pointing out the datasets. But I believe those datasets contain many small graphs. A dataset of many large graphs is still missing.", "There are already larger real-world datasets available, see e.g., [1].\n\n[1] http://moleculenet.ai/datasets-1", "A comment on the datasets. I think the current datasets are very limited for evaluating different graph learning algorithms. A new paper showed that using very simple degree statistics can already perform on par with state-of-the-art graph neural networks and graph kernels. An ImageNet-like dataset is strongly needed for evaluating different algorithms fairly.\n\nReference:\nA simple yet effective baseline for non-attribute graph classification. https://arxiv.org/abs/1811.03508 ", "As we have pointed out in the experiment section, although stronger discriminative power does not directly imply better generalization, it is reasonable to expect that models that can accurately capture graph structures of interest also perform well on the test set. In particular, with many existing GNNs, the discriminative power may not be enough to capture graph substructures that are important for classifying graphs. Therefore, we believe strong discriminative power is generally advantageous for graph classification. In our experiments, we empirically demonstrated that our powerful GIN has better generalization as well as a better fit to the training datasets compared to other GNN variants. GINs performed the best in general, and achieved state-of-the-art test accuracy. We leave further theoretical investigation of generalization to future work.", "I understand that GIN provably has more discriminative power than other variants of GNNs. But the ability to differentiate non-isomorphic graphs does not necessarily imply better graph classification accuracy, right? Would it be possible that strong discriminative power will backfire for graph classification? After all, we don't need to solve graph isomorphism here.", "Thanks for your interest. Answers to your inquiries: \n\n1. Note that being powerful entails “being able to” map nodes with different subtrees to different representations. If a model is not capable of achieving this, then it’s intrinsically less powerful in distinguishing different graphs.
In addition, to combat noise, we can simply regularize the mapping function to be locally smooth (e.g., by using Virtual Adversarial Training [1]). Nonetheless, in many graph classification applications, including those in our experiments, the node features have specific meanings (e.g., atoms of certain types) and are not noisy. \n\n2. Note that our paper focuses on the expressive power of GNNs, and there are two main reasons why it is not very interesting for us to conduct node classification experiments to validate our claim.\nFirst, as we have emphasized in Sections 5 and 5.3, in many node classification applications, node features are rich and diverse (e.g., bag-of-words representations of papers in a citation network), so GNN models like GCN and GraphSAGE are often already able to fit the training data well. Second, many node classification tasks assume limited training labels (the semi-supervised learning scenario); thus, the inductive bias of GNNs also plays a key role in empirical performance. For example, as we discussed in Section 5.3, the statistical and distributional information of neighborhood features may provide a strong signal for many node classification tasks. \n\nOur GINs may potentially perform well on node classification tasks. However, due to our explanations above, the performance on node classification tasks is less directly explained by our theory of representational power, so we leave the experiments for future work. We believe our experiments on graph classification are sufficient for validating our theoretical claim on the expressive power of GNNs. \n\n3. We set the numbers of hidden units and output units of the MLP to be the same, so the parameter count of Sum-MLP is roughly twice that of Sum-Linear. However, note that with more hidden units, the performance of models with 1-layer perceptrons usually decreases. \n\n[1] https://arxiv.org/abs/1704.03976", "We thank everyone for the interest and many inquiries about our work. \n\nTo Anonymous 3: Thanks for bringing up this related work. Graph representation learning is an increasingly popular research topic with a surge of many wonderful works. We will make sure to add all the relevant references in our updated version. To emphasize the difference from the related work: [5] shows their proposed architecture lies in the RKHS of graph kernels, but says nothing about which graphs can actually be discriminated by the network. In contrast, we address the question of which graphs can be distinguished, and provide a framework for addressing this representational question in a general way, settling the representational power of a broad class of GNNs.", "Hi! I'm writing to ask some questions.\n\n1. In Section 3, you said that \"Intuitively, the most powerful GNN maps two nodes to the same location only if they have identical subtrees structures with identical features on the corresponding nodes\". However, in my opinion, a powerful model should map nodes with different labels into different locations instead of features, since there may be some noise in features. \n\n2. In the paper, you said that GIN is the most powerful model. But you only reported experimental results on graph classification. Have you validated the proposed model on node classification tasks? Based on my understanding, isn't it also important to consider the performance on node classification when judging the power of a GNN model?\n\n3. Instead of the Mean/Max aggregators in GCN and GraphSAGE, an MLP is used as the aggregator in each layer. 
Have you compared the parameter complexity with other baselines?\n\nThank you!", "Thank you so much for providing possible ideas for future directions! The materials you referenced look very helpful, and I will take a look at graph minor theory and spectral graph theory.", "I think you also missed other important related work [5], which shows that the features computed by GNNs lie in the same Hilbert space as WL.\n\n\n[5] https://people.csail.mit.edu/taolei/papers/icml17.pdf", "Thank you for your interest in our work! \n\nGreat that you found the framework presented in our paper intuitive/natural for understanding graph representations. We think the spectral perspectives [1] [2] also provide a very valuable and important angle. It would be interesting to understand how to connect and relate the different perspectives. Regarding future directions, besides what we have mentioned in our conclusion, we do not have further comments at this moment. Combining and applying techniques from many other communities indeed sounds very interesting and promising. Ideas from graph minor theory [3] and spectral graph theory [4] [5] may be interesting and are not fully explored in the current message passing frameworks, although we do not have detailed suggestions at the moment.\n\n[1] Bruna, J., Zaremba, W., Szlam, A., and LeCun, Y. Spectral networks and locally connected networks on graphs. International Conference on Learning Representations (ICLR), 2014.\n[2] Bronstein, M., Bruna, J., Szlam, A., LeCun, Y., and Vandergheynst, P. Geometric deep learning: going beyond Euclidean data. IEEE Sig. Proc. Magazine, 2017.\n[3] https://www.birs.ca/workshops/2008/08w5079/report08w5079.pdf\n[4] http://www.cs.yale.edu/homes/spielman/561/\n[5] http://courses.csail.mit.edu/6.S978/", "Thanks for the thoughtful and provocative work! The paper answered some questions I have been thinking about. The graph convolution that many people talk about was motivated by the Fourier transform of the graph Laplacian and an analogy with computer vision, yet I thought it’s not quite the same as vision. I was curious what the more natural explanations are. The view of “capturing graph structures with powerful aggregators” sounds much more natural to me and also natural to graph problems. Very provocative!\n\nI wonder what possible good future directions look like for graphs? Many great works in recent years apply theoretical computer science techniques to machine learning, e.g., Prof. Sanjeev Arora's group at Princeton and Prof. Aleksander Madry's group at MIT. Do you see similar directions for graphs?", "We thank both Anonymous 1 and Anonymous 2 for your interest in our work! \n\nTo Anonymous 1: Thanks for bringing up this early work! We will comment on the differences below. We would like to refer to Anonymous 2’s comment first, which made a very good point. \n\nTo Anonymous 2: Thank you for the insightful comment! Indeed, [1] analyzes a specific model with recurrent contraction maps, but our analysis framework applies to general GNNs with message passing/neighbor aggregation. Regarding the connections and differences between contraction, recurrent maps, and more general aggregators, the talk/paper by Yujia Li et al. [3][4] provide some very good explanations and insights! Highly recommended!\n\nMore detailed explanations of the differences: \n\n1) As Anonymous 2 pointed out, the 2009 paper [1] analyzes a specific architecture designed in [2] that uses contraction maps and the same aggregator in all layers.
Although [1] proves [2] can capture rooted subtree structures, it has been observed, e.g., in [3][4], that it does not perform ideally in practice, thus leading to the surge of a large number of modern GNN architectures like Gated GNN, GCN, GraphSAGE, etc. Our architecture GIN is shown to perform well in practice. To Anonymous 2: in our preliminary experiments, we also tried sharing the same aggregator across all layers of a GNN, but the training accuracy was fairly low (usually < 80%), possibly due to optimization or capacity issues.\n\n2) While [1] focuses on the specific GNN in [2], we provide a general framework for characterizing the expressive power of many different GNN variants proposed so far in the literature. Our results are applicable not only to [2], GIN, etc., but also to almost all modern GNN architectures like GCN and GraphSAGE.\n\n3) We made an explicit comparison of different GNN variants both theoretically and empirically so that we can have a better understanding of their theoretical properties. Specifically, we characterized what graph substructures different aggregation schemes can capture, and discussed how that might affect empirical performance. We also made it clear that injectiveness of the aggregation function is the key to achieving high expressive power in GNNs. \n\nTherefore, we believe our work plays an important role in rethinking and structuring the 10-year literature of GNNs from the viewpoint of expressive power, despite some similarity to [1] in terms of capturing rooted subtree structures. We will also discuss [1] and [2] in our updated version.\n\n\n[1] Scarselli, Franco, et al. \"Computational capabilities of graph neural networks.\" IEEE Transactions on Neural Networks 20.1 (2009): 81-102.\n[2] Scarselli, Franco, et al. \"The graph neural network model.\" IEEE Transactions on Neural Networks 20.1 (2009): 61-80.\n[3] https://www.cs.toronto.edu/~yujiali/files/talks/iclr16_ggnn_talk.pdf\n[4] Li, Yujia, et al. \"Gated graph sequence neural networks.\" arXiv preprint arXiv:1511.05493 (2015).", "I am also curious if there is any connection here. From my understanding, one difference is that Scarselli et al. (2009) focus on a specific type of GNN (with a recurrent contraction aggregator), so the analysis probably doesn't apply to modern GNN architectures like GCN. On the other hand, this paper provides a general framework that gives insight into a number of GNN architectures.", "There is an article from 2009 [1] which has a similar theoretical contribution. Could you please comment on the differences?\n\n[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4703190", "You are right; we can simply pick a sufficiently large N that is bigger than the size of any graph of interest. Also, all graphs of our interest are of bounded size, and we explicitly stated in our Lemma 4 that we dealt with finite multisets*; thus, your second question does not make sense to us.\n\n*https://en.m.wikipedia.org/wiki/Finite_set", "Could you clarify how you can always find an N that works without an upper bound? My understanding is that N should be at least as large as the largest degree you would encounter in the set of all training + testing graphs, for the function to be injective on all of these graphs. Please correct me if I am wrong.\n\nIf the set of training + testing graphs is bounded in size, sure, I can pick a large constant for N and that should work.
But it's possible the distribution of graphs includes graphs of unbounded size (e.g., the number of nodes drawn from a geometric distribution). What N should I pick then? \n\nIn practice, of course, all graphs have bounded size and it doesn't matter. But I want to understand the precise theoretical statement to be made here.", "Thank you for your interest! The finite node degrees |X| can be arbitrarily large, and we can always find an N that works (we do not have to put an upper bound on N). Note that Lemma 4 only shows the existence of injective functions, and in practice, we need our neural networks to learn these functions from data. ", "The proof of Lemma 4 assumes the graphs have a constant degree bound (|X| < N). Is the statement true even in general (i.e., finite |X|, but not bounded by a constant)? E.g., in the inductive setting, test graphs could have high degrees. ", "That’s a good observation. Indeed, there is great stuff in nature possibly found by accident, e.g., rare herbs in Chinese medicine. Here, our goal is to study and develop theory to understand the underlying principles, so that we can appreciate the great stuff, and so that in the future, with the insight of our theory, we can build even better graph deep learning models!\n", "Thank you for your interest and positive comments on our work! Let us try to answer your questions. There are many GNN formulations, so it is always interesting to understand the power of different variants!\n\n1) Thanks for this insightful comment! With a sufficiently large dimensionality of output units, ReLU with bias might indeed be able to distinguish different multisets (a larger output dimensionality is generally needed as we have more multisets to distinguish). In our experiments, we actually had the bias term, and we empirically observed that under-fitting still sometimes occurred for models with 1-layer perceptrons (with bias) (see Figure 4). We think it could be due to the limited number of output units or optimization.\n\nWe would like to emphasize that with MLPs, we can enjoy universal approximation of multiset functions. This allows Sum-MLP (GIN) to go beyond just distinguishing different multisets and to learn suitable representations that are useful for applications of interest. In fact, Sum-MLP outperformed Sum-1-layer on 7 out of 9 datasets (comparable on the other 2) in terms of test accuracy!\n\nWe will further discuss these points and practical implications in our updated version.\n\n2) There can certainly be other GNN architectures with the same discriminative power as GIN (as long as they satisfy the conditions in our Theorem 3). Your proposed formulation with COMBINE could potentially also work, although we do not fully understand your description. It would be great future work to investigate other powerful GNN models with potentially better generalization and optimization.\n\n3) (2.2) is indeed not exactly the same as the original GCN. Our emphasis here was that MEAN aggregation is used in GCN. We used the formulation (2.2) to share the same framework with GraphSAGE (MAX aggregation) to save space. We will include the exact formulation of GCN in the updated version. Also, we mentioned after (2.2) that GCN does not have a COMBINE step and aggregates a node along with its neighbors.\n",
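Returning to the N-versus-degree exchange above: the Lemma-5-style construction maps a label z to f(z) = N^-(z+1), so a sum over a multiset behaves like an N-digit expansion. Here is a small illustrative sketch (assumed details, not the paper's exact construction) showing injectivity when every multiplicity is below N, and a collision when it is not:

```python
from collections import Counter
from fractions import Fraction

def encode(multiset, N):
    # Sum of f(z) = N^-(z+1) over the multiset; exact rationals avoid float ties.
    return sum(Fraction(1, N) ** (z + 1) * m for z, m in Counter(multiset).items())

print(encode([0, 0, 1], 10), encode([0, 1, 1], 10))  # 21/100 vs 12/100: distinct
# With N = 2, a multiplicity of 2 is not < N, so injectivity can fail:
print(encode([0, 0], 2), encode([0, 1, 1], 2))       # both evaluate to 1: collision
```

This is exactly why the exchange above centers on whether one fixed N can cover graphs of unbounded degree.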
"Thank you for the discussion!\n\nI'd like to clarify my point 2) further (it is an observation, not criticism):\n\nAssume we have a COMBINE operation as described above: \sigma ( W_1*x + W_2*y + b ).\nIf we now stack n layers (NO weight sharing over time) and assume W_1 = 0 for the first n-1 of them, we arrive exactly at the formulation where we have an MLP with n-1 layers, followed by a normal GNN layer.\n\nThe point I wanted to make: There are architectures in the current literature that already achieve injectivity (maybe by \"accident\") through this construction. Maybe it can be said: As long as there is an individual W for the self-connection, the condition can be fulfilled through stacking.\n\nExamples are:\nDefferrard et al.: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, 2016 (individual parameter for the k=0 neighbourhood)\nGilmer et al.: Neural Message Passing for Quantum Chemistry, 2017 (depending on the implementation, I guess)\n\n", "Thank you for this very interesting work, which gives a lot of insight into graph neural networks and structures the large amount of related work out there.\n\nI have some remarks/opinions regarding the use of non-linearities in this work.\n\n1) Regarding section 5.1 and lemma 5: I do not think that more than 1 layer is necessary. The ReLU non-linearity only shows its full potential when used together with a bias. In much of the literature, the bias term is (unfortunately) omitted in the paper but still used in the implementation. ReLU without bias separates based on a hyperplane which always goes through the origin, which is why the example in the proof of Lemma 5 works. All values lie in one piece-wise linear subspace of the function's range. When using a bias, the non-linear point can be shifted to separate both examples in a non-linear fashion, and the example that proves Lemma 5 does not work anymore. I am not sure, though, if there is another example that works if a bias is present. I suspect, though, that one layer with a \"working non-linearity\", e.g., ReLU with bias, should be enough.\n\nTherefore, I guess the insight here is: We need a (working) non-linear mapping before doing the feature aggregation (assuming no one-hot encoding); otherwise, we lose injectivity and therefore discriminative power. In many current GNN models (including GCNs), this is not the case.\n\n2) Further, I suspect that depending on how the COMBINE operation is defined, the discriminative power of WL can also be obtained by stacking 2 layers in the following way: \nAssume COMBINE to be \sigma ( W_1*x + W_2*y + b ), with x being the result of neighbourhood aggregation and y being the node's current feature. Further, in the first layer, let the features from the neighbourhood aggregation be discarded (W_1 = 0), resulting in a node-wise fully connected layer with a nonlinearity (or \"1x1-convolution\", or however it might be called). \nThen, the second layer receives features which went through a non-linear function before aggregation. Since the network could learn W_1 = 0, those two layers should have the same discriminative power as WL.\n\n3) I think the formulation of GCN in Equation 2.2 is not correct. The original GCN aggregates first and applies the non-linearity afterwards.\nIt should be noted that since GCN does not have individual W's for the root node and the neighbourhood (W_1 and W_2 in the equation above), the mentioned construction from 2) does not work here.",
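A minimal NumPy sketch of the COMBINE form sigma(W_1 x + W_2 y + b) from the comment above (hypothetical names and shapes; an illustration, not code from any of the cited papers):

```python
import numpy as np

def combine(x, y, W1, W2, b):
    # sigma(W1 @ x + W2 @ y + b): x is the aggregated neighbourhood feature,
    # y is the node's own previous-layer feature; sigma is ReLU here.
    return np.maximum(W1 @ x + W2 @ y + b, 0.0)

rng = np.random.default_rng(0)
d = 4
W1, W2, b = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
x, y = rng.normal(size=d), rng.normal(size=d)

print(combine(x, y, W1, W2, b))                # full COMBINE step
print(combine(x, y, np.zeros((d, d)), W2, b))  # W1 = 0: node-wise layer only
```

Setting W1 = 0 discards the neighbourhood term, which is the stacking construction in point 2): the first layers act as a node-wise MLP, and a later layer with W1 != 0 performs the actual aggregation.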
"Thanks for your questions!\n\nAs we mentioned in Section 4 right after Theorem 3, GIN generalizes the WL graph isomorphism test by learning to embed the subtrees into a continuous space. This enables GIN not only to discriminate different structures, but also to learn to map similar graph structures to similar embeddings and capture dependencies between graph structures. Such learned embeddings are particularly helpful for generalization when the co-occurrence of subtrees is sparse across different graphs or there are noisy edges (Yanardag & Vishwanathan, 2015).\n\nRegarding the dataset, we have not tried reddit-12K at this moment.", "Since GIN is developed to achieve expressive power as strong as the WL graph isomorphism test, why does it still have much better results on reddit-binary and reddit-5K than the WL subtree kernel? Did you also try larger datasets such as reddit-12K?\n\n" ]
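For readers following the WL-test comparisons throughout this thread, here is a minimal sketch of one Weisfeiler-Lehman relabeling iteration (a hypothetical illustration, not code from the paper): each node's new label compresses the pair (own label, sorted multiset of neighbour labels).

```python
def wl_iteration(labels, adjacency):
    # labels: {node: label}; adjacency: {node: list of neighbours}.
    signatures = {
        v: (labels[v], tuple(sorted(labels[u] for u in adjacency[v])))
        for v in adjacency
    }
    # Compress each distinct signature to a fresh integer label.
    fresh = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
    return {v: fresh[signatures[v]] for v in adjacency}

# Four-cycle with colors R-R-G-G, as in the counter-example discussed above.
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(wl_iteration({0: "R", 1: "R", 2: "G", 3: "G"}, adjacency))
# -> {0: 1, 1: 1, 2: 0, 3: 0}: R-nodes and G-nodes receive distinct fresh labels.
```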
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2019_ryGs6iA5Km", "S1ljyieA0m", "S1ljyieA0m", "iclr_2019_ryGs6iA5Km", "rJx9PavpA7", "rJxY7atRCX", "rkxt80KARX", "BygALwN0CX", "H1xW3wVA0X", "BJeRhEhiRX", "BkgrFw3iRQ", "BkxHNhu607", "iclr_2019_ryGs6iA5Km", "H1xLhQUEA7", "iclr_2019_ryGs6iA5Km", "Byg6WVL4Rm", "B1l2k48VCQ", "ryeaD73iRX", "iclr_2019_ryGs6iA5Km", "Hke3DIJNAQ", "iclr_2019_ryGs6iA5Km", "iclr_2019_ryGs6iA5Km", "iclr_2019_ryGs6iA5Km", "BJgIGNTP27", "HJgMSgUqhQ", "rkeW9FDnnQ", "ByxsKEkV07", "ByxsKEkV07", "rkl2Q1Qi6X", "rkeW9FDnnQ", "B1xLcPaKpQ", "H1gkUYX76Q", "iclr_2019_ryGs6iA5Km", "HJgofotjs7", "iclr_2019_ryGs6iA5Km", "rklL6jDwjm", "Bkg-9GNXi7", "iclr_2019_ryGs6iA5Km", "HJltX0els7", "Byx_XQZJjQ", "HJgxQah1sQ", "iclr_2019_ryGs6iA5Km", "H1lpE_ACc7", "r1xAhRX05X", "iclr_2019_ryGs6iA5Km", "r1g3L4zh5X", "BygugHlo9Q", "H1grV6yjcQ", "iclr_2019_ryGs6iA5Km", "HJgCBjM5cm", "HJe9n0eFcQ", "HyxT16oFqm", "iclr_2019_ryGs6iA5Km", "S1evjSRPc7", "iclr_2019_ryGs6iA5Km" ]
iclr_2019_B1G5ViAqFm
Convolutional Neural Networks on Non-uniform Geometrical Signals Using Euclidean Spectral Transformation
Convolutional Neural Networks (CNN) have been successful in processing data signals that are uniformly sampled in the spatial domain (e.g., images). However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss. Moreover, signals can exist in different topological structures such as, for example, points, lines, surfaces and volumes. It has been challenging to analyze signals with mixed topologies (for example, a point cloud with a surface mesh). To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms (NUFT) to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error. The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum. Our representation has four distinct advantages: (1) the process causes no spatial sampling error during initial sampling, (2) the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, (3) it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad hoc fashion, and (4) the representation allows weighted meshes where each element has a different weight (i.e., texture) indicating local properties. We achieve good results on par with the state of the art for the 3D shape retrieval task, and a new state of the art for the point cloud to surface reconstruction task.
accepted-poster-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion. - The paper tackles an interesting and challenging problem with a novel approach. - The method gives improved performance for the surface reconstruction task. 2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision. The paper - lacks clarity in some areas - doesn't sufficiently explain the trade-offs between performing all computations in the spectral domain vs. the spatial domain. 3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately. Reviewers had a divergent set of concerns. After the rebuttal, the remaining concerns were: - the significance of the performance improvements. The AC believes that the quantitative and qualitative results in Table 3 and Figures 5 and 6 show significant improvements with respect to two recent methods. - a feeling that the proposed method could have been more efficient if more computations were done in the spectral domain. This is a fair point but should be considered a suggestion for improvement and future work rather than grounds for rejection in the AC's view. 4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another. The reviewers did not reach a consensus. The final decision is aligned with the more positive reviewer, AR1, because AR1 was more confident in his/her review and because of the additional reasons stated in the previous section.
test
[ "HyxCDlbK0Q", "r1gpWlWFC7", "H1xYOJWKAQ", "HJlzBRwC2m", "BJxMFSeonm", "SJlRpk09hQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank you for your review and feedback, and we hope to be able to address your concerns below.\n\nOur paper addresses the issue of handling irregular domains, with possibly mixed topologies, in the context of deep learning, and proposes an optimal spectral sampling scheme for constructing a volumetric representation in structured grids. Our experiments are chosen to reflect these aspects, i.e., the MNIST experiment compares our representation to conventional schemes (binary pixel, SDF) to show that our representation conserves more shape information and is more robust to limited resolution. The shape retrieval experiment shows the applicability of our method to standard classification tasks, and the surface reconstruction task highlights the ability of our framework to handle mixed topologies. The fact that limits our choice to performing CNNs in the physical domain is that even though the convolution step can be performed in the spectral domain, nonlinearities cannot. Hence, in order to perform spectral convolution there needs to be repeated forward and inverse transformations, rendering the process less efficient.\n", "Thanks for your helpful feedback and suggestions! We will try to address your various comments below:\n\n- Size of the grid structure\nThe size, or resolution, of the given representation is predetermined prior to running the CNN. Higher resolution leads to improved performance (see Fig. 3, 4) but also a higher computational and memory budget. A 128^3 resolution can comfortably fit in modern GPUs (such as the NVIDIA 1080 Ti). Also, as far as the inverse FFT is concerned, it is an extremely efficient algorithm (O(n log n)), and reasonable resolutions (such as 128^3) are manageable.\n\n- Spectra truncation\nWe have compared the results of varied resolution in Fig. 3, 4, where higher resolution generally results in higher performance. Also, Fig. 3 compares the effect of resolution on different shape representations, where our NUFT representation is less sensitive to resolution. Unfortunately, to our knowledge, graph-based methods for the given tasks are not available in the literature for comparison.\n\n- Functional maps\nIdeas of using functional maps towards deep learning have been explored (SyncSpecCNN, Yi et al. 2017). However, that work targets a fundamentally different scenario. Functional maps in this context only concern different topologies of the surface manifold (i.e., across genuses), whereas the different topologies in our case are the different degrees of simplex units (points, lines, surfaces, volumes), irrespective of shape genus.", "We appreciate your thorough review and helpful suggestions. We will try to address your questions and suggestions below:\n- Sec. 1 Representational Error\nOur NUFT transformation of simplex meshes analytically and exactly computes all spectral coefficients under a certain cut-off frequency, hence it is an ideal low-pass filter, resulting in no aliasing error due to the strict containment of frequencies below the cut-off frequency. We acknowledge that the term “information content” could be potentially misleading, and replace it with “shape information” instead. We provide empirical evidence of the effects of such shape information on machine learning tasks (MNIST experiment) as well as purely for shape reconstruction purposes (Appendix B & Figure 10).\n\n-Sec. 2 Additional references\nWe appreciate the additional references provided, and added them to the literature overview section (see Sec. 2).\n\n
-Sec. 3 Representing the spectra\nThe spectrum is represented as the spectral coefficients that are limited under a certain cut-off frequency, hence the signal is band-limited. Exactly computing only spectral coefficients under a cut-off frequency followed by an inverse transform to the physical domain is equivalent to filtering the original signal (defined on a simplex mesh) with an ideal low-pass filter. We show the effects of truncating the spectrum on deep learning tasks in experiments (MNIST and shape retrieval, by varying input resolution).\n\n-Sec. 4 Error bars\nAdding error bars to figures is unfortunately difficult to accomplish as the deep learning algorithms are deterministic in the current form. Models in the vision literature are usually much more deterministic and reproducible than those of other fields such as reinforcement learning. Moreover, the baseline models we compared to do not have error bars in the original papers, and it is non-trivial to reproduce their results since most of them did not release source code.\n\n-Sec. 4 Downsampling\nIn the paper, we described the sampling process for the various representations: “The polygonized digits are converted into (n × n) binary pixel images and into distance functions by uniformly sampling the polygon at the (n × n) sample locations. For NUFT, we first compute the lowest (n × n) Fourier modes for the polygonal shape function and then use an inverse Fourier transform to acquire the physical domain image.” By downsampling, we mean simply using a smaller value of n in the conversion. The disparity between NUFT and SDF is not too huge, as compared to the disparity between NUFT and Binary Pixels.\n\n-Sec. 4 Fig. 4\nDetailed values for the best run in Fig. 4 are provided in Table 2 for comparison with baseline models. Fig. 4 provides a comparison between input NUFT representations (volume vs. surface) and shows that the difference between the representations is small. We use the official partition of the data for the experiment for fair comparison with baselines.\n\n-Arch and training details\nThe DLA backbone architecture is a state-of-the-art CNN architecture that performs well in this task. As far as fairness is concerned, in the MNIST experiment we controlled all aspects of the experiment (architecture, training schedule, etc.) except for the input representation to study the effects of shape representation; hence the same architecture is used for all cases.\n\n-Sec. 3/4 Reconstructed back into a dense sampling domain\nThis is indeed the case. The fact that limits our choice to performing CNNs in the physical domain is that even though the convolution step can be performed in the spectral domain, nonlinearities cannot. Hence, in order to perform spectral convolution there needs to be repeated forward and inverse transformations, rendering the process less efficient.\n\n-Sec. 4 Baseline algorithms\nThe baseline PSR algorithm is not learning based, and training on single categories has its limitations. But for targeted application scenarios (indoor structure inference, etc.) this is reasonable. We will incorporate multi-category training as part of our future work.\n\n-Sec. 4 Comments on Table 3\nThis is partially a result of the evaluation metric. The Accuracy and Complete metrics favor different aspects. With noise in the input, the resulting mesh tends to be more “thick”, hence provides better completeness, while sacrificing the tightness of the prediction. The Chamfer distance, which is the mean of the two, ends up very similar with and without noise. We could expect the results to favor the noiseless case when the noise-to-signal ratio is too large.",
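To make the spectra-representation answer above concrete, here is a hedged 1D analogue (a toy illustration with assumed endpoints a, b and cut-off K; not the paper's NUFT code): the exact Fourier coefficients of an interval indicator are computed analytically, truncated at |k| <= K (an ideal low-pass), and only then evaluated on a uniform grid.

```python
import numpy as np

a, b, K, n = 0.23, 0.61, 16, 128  # interval, cut-off frequency, grid size

def coeff(k):
    # Exact Fourier coefficient of the indicator of [a, b) on [0, 1).
    if k == 0:
        return b - a
    return (np.exp(-2j * np.pi * k * a) - np.exp(-2j * np.pi * k * b)) / (2j * np.pi * k)

x = np.arange(n) / n
band_limited = sum(coeff(k) * np.exp(2j * np.pi * k * x) for k in range(-K, K + 1)).real
binary = ((x >= a) & (x < b)).astype(float)  # naive point sampling, for contrast
print(np.round(band_limited[:6], 3), binary[:6])
```

No spatial sampling happens before the truncation, so all error comes from the ideal low-pass itself (visible as ringing near the jumps), mirroring the exact-spectrum-then-inverse-transform ordering of the NUFT pipeline.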
"Overall Thoughts:\n\nI think this paper addresses an interesting topic and that the community as a whole is interested in the application of learning algorithms to non-Euclidean domains. It is nice to see the application of Fourier sampling to geometric primitives in a sensible manner, and I am positive about that part of the paper. However, in its current form, I have quite a few questions about the approach and the empirical studies - I would need to hear more information from the authors on the points below.\n\nSpecific Comments/Questions:\n\nSec1: The authors make a number of assertions about the representational errors that occur in other approaches - I feel that these claims should be supported by specific references.\n\nContributions: It is not clear to me that the experiments show that the method “preserves maximal information content” - in my understanding, information content has a formal definition and I don’t see where this is presented in the results?\n\nSec2: Before CNNs there was a substantial analysis of Fourier methods applied to shape models, e.g., elliptical Fourier series for shape contours (and signed distance representations) by Prisacariu et al.\n\nSec2: It is also worth noting that there is substantial literature on non-uniform Fourier methods, including the non-uniform Fourier transform and a number of accelerations (e.g., NUFFT), as well as consideration of the implications of band-limiting the sampled spectra.\n\nSec3: The mathematical derivation all makes sense to me and makes use of results for piecewise uniform signals. Please could the authors provide more details on how the spectra are represented? These discontinuous signals (esp. delta functions) will have infinite bandwidth in the spectral domain, so how they are stored would seem very important to me. Are the signals band-limited at some point? If so, how does this affect the approximation, and should filtering/windowing be used? Otherwise, is there not a difficult storage problem? The final paragraph suggests all the analytic signals might be stored, but this has a big impact on how efficient the algorithm is and really is far too important a point to just have a single sentence - please can the authors expand on how this is actually implemented and what the computational considerations are (and the resulting impacts on performance)?\n\nSec4: Please can the authors add error bars (at the least) to all the tables/plots in the results. It is entirely unreasonable to make any statements about how significant the results may be without even the most basic of analysis. Ideally we should see histograms of the results for the retrieval and shape reconstruction results. \n\nSec4: How is the downsampling performed in the MNIST experiment? It would seem very important to take care with this for the purpose of comparison. A significant disparity between the signed distance function and the NUFT would seem slightly surprising to me? Again, without error bars we really cannot say very much about the results on the right of Fig3(b).\n\nSec4: Could the Fig4 results not be provided as histograms? We also need many more details about how the results were obtained and procedures to ensure that the results are meaningful and robust (e.g.
repeated tests and partitions of the data, etc.)\n\nArch and Training Details: How are we to know that these choices provide fair comparisons to previous approaches?\n\nSec3/4: All the results seem to require the input to be reconstructed back into a dense sampling domain (via inverse FT) - is this the case? Would it not be more efficient to perform the convolutions in the spectral domain where the signal is sparse?\n\nSec4: It seems pretty unfair to train and test on a single category of shapes in the shape test since the data is known not to be very diverse? Particularly when, unless I’ve misunderstood, the baselines on the shape recovery test do not involve learning and so (while helpful to have) they are not really fair baselines compared to other learning approaches? Also, please can the authors provide much more information about the extra processing applied (e.g. the part starting with the “extra mesh thickness”) since there seem to be some extra steps that have nothing to do with the rest of the method and may impact the results significantly. \n\nSec4: It is interesting that Table 3 (again, difficult to say without error bars) indicates that there are times when the method performs better with the addition of noise - this seems counter-intuitive - please could the authors comment on this?\n\nOther Points:\n\nI’m afraid that there are quite a few grammatical errors in the text (too many to list here) so I would recommend another round of proof-reading.", "Convolutional Neural Networks on Non-uniform Geometrical Signals using Euclidean Spectral Transformation\n\nThe paper tackles the problem of learning across mixed graph topologies, which is today a real challenge. It is highly original due to its unified general framework for handling differing graph topologies. The method is highly generalizable to other learning techniques since it proposes a transformation of varied topologies into a cartesian grid-like embedding, via the new non-uniform Fourier transform. The method is evaluated on MNIST, shape retrieval, and point to surface reconstruction. The paper is dense and uses non-trivial mathematical formulations, but reads well and remains easy to follow. The experiments support the method well and greatly add clarity to understanding the proposed methodology. Overall, recommendation towards acceptance. \n\nPositive\n+ Develops a method to analyze signals on mixed topologies with a new non-uniform Fourier transform.\n+ The proposed approach has advantages in -reducing sampling error, -a unified framework for mixed topology, -reducing heuristics in designing the CNN architecture, -local weights in mesh structures.\n+ Improved performance in the surface reconstruction task.\n\nSpecific Comments\n- In the inverse Fourier transformation, a voxel-like grid structure is used, however - how to control the size of this volume? If the size is large, or explodes, the complexity of the CNN architecture would explode as well - How is this size issue tackled?\n- In this same inverse Fourier transformation, the whole infinite space would obviously be hard to sample - Spectral information would be lost - How bad is this and how does this impact results? How would this compare to direct graph-based methods, for instance, in a fixed graph structure?\n- Ovsjanikov’s functional maps, SIGGRAPH 2012, have been proposed to find maps between differing graph spectra, partly solving the problem of handling graphs of multiple topologies. One way would be to find spectral correspondences between embeddings.
How helpful would this be to find similarities between embeddings in this newly proposed unified framework?\n\nTypos - geometris", "This paper introduces a method for handling input data that is defined on irregular mesh-type domains. If I understand it correctly, the core technique is to perform Fourier analysis to transform the input data to the frequency domain, which is then transformed back to a regular domain before applying a standard neural network. The claimed result is that this is better than standard linear interpolations. The key technical contribution is to define the FT on points, edges, and meshes (this reviewer appreciates these efforts). Explicit formulas are given. However, the paper does not perform convolutions on the input irregular domain directly, which is quite disappointing. The experimental results are preliminary. It is expected to perform evaluation on more applications such as semantic segmentation. \n\n\nThe major issue of the paper is that the goal was not stated clearly. Does it target a neural network that is defined on irregular domains or simply a technique for handling irregular domains? Given the Fourier transform, it is possible to define convolutions directly as multiplications in the Fourier domain... the paper could be more interesting if it followed this line.\n\nOverall, it is hard to champion the paper based on the current technical approach and the experimental results. \n" ]
[ -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "SJlRpk09hQ", "BJxMFSeonm", "HJlzBRwC2m", "iclr_2019_B1G5ViAqFm", "iclr_2019_B1G5ViAqFm", "iclr_2019_B1G5ViAqFm" ]
iclr_2019_B1G9doA9F7
Augmented Cyclic Adversarial Learning for Low Resource Domain Adaptation
Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied. However, it is often the case that data are abundant in some domains but scarce in others. Domain adaptation deals with the challenge of adapting a model trained on a data-rich source domain to perform well in a data-poor target domain. In general, this requires learning plausible mappings between domains. CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint. However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data. In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task-specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction. We explore digit classification in a low-resource setting in supervised, semi-supervised, and unsupervised situations, as well as the high-resource unsupervised setting. In the low-resource supervised setting, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require a high-resource unlabeled target domain. Moreover, using only a few unsupervised target data, our approach can still outperform many high-resource unsupervised models. Our model also outperforms prior methods on USPS to MNIST and synthetic digits to SVHN for high-resource unsupervised adaptation. In speech domains, we similarly adopt a speech recognition model from each domain as the task-specific model. Our approach improves the absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices.
accepted-poster-papers
The authors propose a method for low-resource domain adaptation where the number of examples available in the target domain is limited. The proposed method modifies the basic approach in a CycleGAN by augmenting it with a “content” (task-specific) loss in place of the standard reconstruction error. The authors also demonstrate experimentally that it is important to enforce the loss in both directions (target → source and source → target). Experiments are conducted in both supervised and unsupervised settings. The main concern expressed by the reviewers relates to the novelty of the approach, since it is a relatively straightforward extension of CycleGAN/CyCADA, but in the view of a majority of reviewers the work serves as a useful contribution: a practical method for developing systems in low-resource conditions where it is feasible to label a few new instances. Although the reviewers were not unanimous in their recommendations, on balance, in the view of the AC, the work is a useful contribution with clear and detailed experiments in the revised version.
train
[ "rkgVnNv52Q", "HJxLgpUNJE", "HJxHpfhH2m", "SJgRrjSf14", "Byez806lJE", "Syl3KFjyJN", "BylYeKiykE", "HyeIY_i1y4", "rkxI_hF3RX", "HklKo3NW6Q", "B1xgX-Y0am", "BJxNk5FwCQ", "H1gOeZtCpQ", "ByxZHgYCam", "SylfAxKCpQ" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The authors propose an extension of cycle-consistent adversarial adaptation methods in order to tackle domain adaptation in settings where a limited amount of supervised target data is available (though they also validate their model in the standard unsupervised setting as well). The method appears to be a natural generalization/extension of CycleGAN/CyCADA. It uses the ideas of the semantic consistency loss and training on adapted data from CyCADA, but \"fills out\" the model by applying these techniques in both directions (whereas CyCADA only applied them in the source-to-target direction).\n\nThe writing in this paper is a little awkward at times (many omitted articles such as \"the\" or \"a'), but, with a few exceptions, it is generally easy to understand what the authors are saying. They provide experiments in a variety of settings in order to validate their model, including both visual domain adaptation and speech domain adaptation. The experiments show that their model is effective both in low-resource supervised adaptation settings as well as high-resource unsupervised adaptation settings. An ablation study, provided in Section 4.1, helps to understand how well the various instantiations of the authors' model perform, indicating that enforcing consistency in both methods is crucial to achieving performance beyond the simple baselines.\n\nIt's a little hard to understand how this method stands in comparison to existing work. Table 3 helps to show that the model can scale up to the high-resource setting, but it would also be nice to see the reverse: comparisons against existing work run in the limited data setting, to better understand how much limited data negatively impacts the performance of models that weren't designed with this setting in mind.\n\nI would've also liked to see more comparisons against the simple baseline of a classifier trained exclusively on the available supervised target data, or with the source and target data together—in my experience, these baselines can prove to be surprisingly strong, and would give a better sense of how effective this paper's contributions are. This corresponds to rows 2 and 3 of Table 1, and inspection of the numbers in that table shows that the baseline performance is quite strong even relative to the proposed method, so it would be nice to see these numbers in Table 2 as well, since that table is intended to demonstrate the model's effectiveness across a variety of different domain shifts.\n\nWhile it's nice that the model is experimentally validated on the speech domain, the experiment itself is not explained well. The speech experiments are hard to understand—it's unclear what the various training sets are, such as \"Adapted Male\" or \"All Data,\" making it hard to understand exactly what numbers should be compared. Why is there no CycleGAN result for \"Female + Adapted Male,\" or \"All Data + Adapted Male,\" for example? The paper would greatly benefit from a more careful explanation and analysis of this experimental setting.\n\nUltimately, I think the idea is a nice generalization of previous work, and the experiments seem to indicate that the model is effective, but the limited scope of the experiments prevent me from being entirely convinced. The inclusion of additional baselines and a great deal of clarification on the speech experiments would improve the quality of this paper enormously.\n\n---\n\nUpdate: After looking over the additional revisions and experiments, I'm bumping this to a weak accept. 
I agree with reviewer 3 that novelty is not the greatest, but there is a useful contribution here, and the demonstration of its effectiveness in low-resource settings is valuable, since in a practical setting it is usually feasible to manually label a few examples.\n\nI'm still not convinced by the TIMIT experiments, now that I better understand them, since the F+M baseline is quite strong and very simple to run. It simply doesn't seem worthwhile to introduce all of this extra machinery for such a marginal improvement, but the experiment does serve the job of at least demonstrating an improvement over existing methods.", "To provide more baselines, we added more comparisons with the CyCADA model on low and high-resource unsupervised domain adaptation in the updated Figure 2 and Table 2 in the following links\n\nFigure 2: https://bit.ly/2E812qf\nTable 2: https://bit.ly/2QwEogQ\n", "This paper introduces a domain adaptation approach based on the idea of Cyclic GAN. Two different algorithms are proposed. The first one incorporates a semantic consistency loss based on domain-specific classifiers acting on full cycles of the generators. The second one also makes use of domain-specific classifiers, but acting either directly on the training samples or on the data mapped from one domain to the other.\n\nStrengths:\n- The different terms in the proposed loss functions are well justified.\n- The results on low-resource supervised domain adaptation indicate that the method works better than that of Motiian et al. 2017.\n\nWeaknesses:\n- Novelty is limited: The two algorithms are essentially small modifications of the semantic consistency term used in Hoffman et al. 2018. They involve making use of both the source and target classifiers, instead of only the source one, and, for the relaxed version, making use of complete cycles instead of just one mapping from one domain to the other. While the modifications are justified, I find this a bit weak for ICLR.\n\n- It is not clear to me why it is worth presenting the relaxed cycle-consistency objective, since it always yields worse results than the augmented one. In fact, at first, I thought both objectives would be combined in a single loss, and was thus surprised not to see Eq. 5 appear in Algorithm 1. It only became clear when reading the experiments that the authors were treating the two objectives as two different algorithms. Note that, in addition to not performing as well as the augmented version, it is also unclear how the relaxed one could work in the unsupervised scenario.\n\n- Experiments:\n* In 4.1, the authors mention that 10 samples per class are available in the target domain. Are they labeled or unlabeled? If labeled, are additional unlabeled samples also used?\n* In Table 1, and in Table 3, is there a method that corresponds to CyCADA? I feel that this comparison would be useful considering the similarity. That said, I also understand that CyCADA uses both a reconstruction term (as in Eq. 4) and a semantic consistency one, whereas here only a semantic reconstruction term is used. I therefore suggest that the authors also compare with a baseline that replaces their objective with the semantic consistency one of CyCADA, i.e., CyCADA without the reconstruction term.\n* In 4.2, it is again not entirely clear if the authors use only the few labeled samples, or if this is complemented with additional unlabeled samples. In any event, does this reproduce the setting used by Motiian et al. 
2017?\n* As the argument is that the proposed loss is better than the reconstruction one and that of Hoffman et al. 2018 for low-resource supervised adaptation, it would be worth demonstrating this empirically in Table 2.\n\nSummary:\nThe proposed objective functions are well motivated, but I feel that novelty is too limited and the current set of experiments not sufficient to warrant publication at ICLR.\n\nAfter Response:\nAfter the authors' response/discussion, while I appreciate the additional results provided by the authors, I still feel that the contribution is a bit weak for ICLR.\n", "In section 3 of CyCADA's paper, under equation 3, \"...We pretrain a source task model f_S, fixing the weights, we use this model as a noisy labeler ...\". So for eq. 4 (i.e. semantic loss) in their work, only G_t_s and G_s_t are optimized. \n\n", "Thanks for the new results.\n\nAbove, you stated that:\n\"CyCADA’s model does not get tuned in the consistency loss, whereas in our methods all models are tuned.\"\n\nCan you expand on this? It is not clear to me what you mean.", "We would like to emphasize again that our focus in this paper is on low-resource domain adaptation, and our main contribution and novelty is the introduction of two cycles for low-resource domain adaptation. From our experiments, it is clear that the introduction of an additional cycle is necessary to get robust performance in low-resource settings, irrespective of whether learning is supervised or unsupervised. While this may seem like a subtle difference, the benefit of the additional cycle is clear, as shown in both our ablation study (Table 1) and the comparison between CyCADA and our method (Fig 2). Our intuition for this benefit is that conversion in both directions makes more training data available to the model, which results in more robust models. This improvement is both consistent and significant as compared to CycleGAN and other methods in various experiments under different settings, which shows the significance of the change.\n\nWe agree that differences between our approach and previous approaches may be subtle; however, we argue that the key contribution of our work is both well motivated by the need to make efficient use of low-resource data and empirically supported by the various improvements we demonstrate.\n\nThe updated Figure 2 and Table 2 are here:\nFigure 2: https://bit.ly/2E812qf\nTable 2: https://bit.ly/2QwEogQ\n\n", "We have run additional experiments, and the updated Fig. 2 and Table 2 are provided at the following links. \nFigure 2: https://bit.ly/2E812qf\nTable 2: https://bit.ly/2QwEogQ\n\n\n\n", "We would like to emphasize again that our focus in this paper is on low-resource domain adaptation, and our main contribution and novelty is the introduction of two cycles for low-resource domain adaptation. From our experiments, it is clear that the introduction of an additional cycle is necessary to get robust performance in low-resource settings, irrespective of whether learning is supervised or unsupervised. The two-cycle structure is not present in CyCADA. While this may seem like a subtle difference, the benefit of the additional cycle is clear, as shown in both our ablation study (Table 1) and the comparison between CyCADA and our method (Fig 2). Our intuition for this benefit is that conversion in both directions makes more training data available to the model, which results in more robust models. 
This improvement is both consistent and significant as compared to CycleGAN and other methods in various experiments under different settings, which shows the significance of the change.\n\nWe agree that the last two loss terms in our eqs. 6 and 7 are similar to eq. 4 in CyCADA, because the motivation of these terms is similar. However, there are still some differences:\nIn the CyCADA paper, they would like the classifier to perform consistently across domains, whereas ours tries to make sure that the generator between domains preserves task-specific information, and this consistency is preserved through two task losses. Ours uses the true label when available, whereas CyCADA uses the model output.\nCyCADA’s model does not get tuned in the consistency loss, whereas in our methods all models are tuned.\n\nAdditionally, CyCADA has another discriminator at the feature level, which helps features transfer between domains, whereas we only have one in the data space. Our design is simpler and more robust in this case, since it is non-trivial to design a good discriminator that works well when one departs from static data like images--for example, when modeling sequential data such as text or audio.\n", "I read the authors' response and have a couple more comments:\n\n- The thing that bothered me regarding novelty, and that the authors did not comment on in their response, is that CyCADA also uses a semantic consistency loss. This is the loss in Eq. 4 of the CyCADA paper, which looks very similar to the last two terms in Eqs. 6 and 7 of this submission. I understand that there are differences, but I find them a bit thin as ICLR contributions.\n\n- I appreciate the comparison to CyCADA in Fig. 2 and Table 2. The results in Fig. 2, however, only represent a subset of the pairs considered in Fig. 3. I would suggest including all the pairs. Considering that the authors were able to compute CyCADA results for Fig. 2, I imagine that it would also be possible for them to fill in the missing CyCADA values in Table 2.", "\nI am putting \"weak accept\" because I think the paper addresses an important problem (domain adaptation) and has an interesting approach. As the other reviewers pointed out, it's maybe not *super* novel. But it's still interesting, and pretty readable for the most part. \n\nI do question the statistical significance of the TIMIT experiments: TIMIT has a very tiny test set to start with, and by focusing on the female portion only you are further reducing the amount.\n\nSmall point: I don't think GANs are technically nonparametric, as the neural nets do have parameters.\n\nI am a little skeptical that this method would be as generally applicable or useful as the authors seem to think. The reason is that, since the cycle constraint no longer exists, there is nothing to stop the network from just figuring out the class label of the input (say) image, and treating all the rest of the information in that image as noise the same way a regular non-cyclic GAN would treat it. Of course, one wouldn't expect a convolutional network to behave like this, but in theory it could happen in general cases. This is just speculation though. Personally I would have tended to accept the paper, but I'm not going to argue with the other reviewers, who are probably more familiar with GAN literature than me.\n\n--\nI am changing from \"marginally above acceptance threshold\" to \"clear accept\" after reading the response and thinking about the paper a bit more. 
I acknowledge that the difference from previously published methods is not that large, but I still think it has value, as it's getting quite close to being a practical method for generating fake training data for speech recognition.\n", "Comment on Weakness, and similarity to CyCADA model:\nTo differentiate between our model and CyCADA, below are the details of the two models and how they enforce semantic consistency and style adaptation.\nCyCADA: \nSemantic (content) consistency is enforced by two losses: the reconstruction loss (CycleGAN) and, additionally, a reconstruction at the feature level.\nStyle adaptation is enforced using adversarial learning in pixel (observation) and feature (hidden) space. Therefore, it needs to learn an additional model for representing data in feature space. \n\nAugmented-Cyc:\nSemantic consistency is shown to be achieved by only using an auxiliary task loss for each cycle. \nStyle adaptation is achieved by using adversarial learning in pixel (observation) space only.\nWe use cycles in both directions to achieve robust performance in the low-resource (either supervised or unsupervised) setting.\n\nTherefore, CyCADA requires additional adversarial learning in feature space, while our model achieves this with adaptation in observation space only. Moreover, to compare the performance of the two models on variable-size target domains, we added more experiments for low-resource unsupervised adaptation (see Figure 2). It is evident that the CyCADA model fails to provide suitable adaptation, while our model outperforms it by a large margin, when the target-domain data is small.\nNote: Both our ablation (see Table 1) and additional experiments (see Figure 2) suggest the benefit of using two cycles in low-resource situations, whether supervised or unsupervised. Therefore, we think this is an important aspect of robust domain adaptation under resource constraints.\n\nComment on relaxed cycle consistency:\nThe main purpose of presenting relaxed-consistency results in the ablation study is to demonstrate the effectiveness of using the auxiliary task loss in either or both cycles, rather than the L1 reconstruction loss. We have only evaluated relaxed-consistency in the low-resource supervised setting, and it is not evaluated for unsupervised adaptation. In the unsupervised setting, we are using the source classifier M_{S} as a pseudo-labeler of target samples. \n\nNote: In this setting, if we turn off training the task model M_{T} using source data, this is similar to using the relaxed version in unsupervised adaptation. \n\n\nComments on Experiments:\n\nFor all low-resource target-domain experiments, only the denoted number of samples is used, irrespective of whether they are labeled or not. For example, in the supervised case, 10 labeled samples per class means that only 10 labeled samples per class are used in the target domain and no other target-domain data is used. Similarly, in the unsupervised case, 5 samples per class means that only 5 unsupervised samples from the target domain are used.\n\n- Section 4.1: we only used 10 labeled samples per class. In this experiment, NO unlabeled data is used. \n\n- Table 1: this table is intended for the ablation of our model. \n\n- Table 3: we have added CyCADA results in this table for comparison. To directly compare our model with CyCADA, we added new experiments on variable-size target domains, which are presented in Figure 2. \n\n- Section 4.2: Table 2 is replaced with Figure 3 for low-resource supervised adaptation. 
In this experiment, no unlabeled data is used, and it is a direct comparison between our model and FADA (Motiian et al. 2017).\n\nIn Figure 2, we have shown the benefit of the proposed auxiliary task-specific loss over the reconstruction loss (CyCADA) in low-resource unsupervised domain adaptation.", "Comment on Experiments:\n\nTable 1: we also added the results for the CyCADA model with no reconstruction loss in Figure 2, referred to as \"CyCADA (Relaxed)\", to provide more baselines. ", "Comment on “low-resource supervised adaptation, Table 2”:\nTo provide more baseline results on low-resource supervised adaptation, we ran additional experiments and replaced Table 2 with bar plots in Figure 3. Baselines include a classifier trained on the low-resource target data alone, and one that also includes source data, with no adaptation. As shown in Figure 3, the Augmented-Cyc algorithm outperforms the FADA model and the two baselines.\n\nComment on “comparing with existing works on low-resource unsupervised adaptation”:\nWe added experiments on low-resource unsupervised adaptation to compare with CyCADA, and the results are shown in Figure 2. This experiment investigates the effectiveness and robustness of using two cycles with semantic consistency enforced by the auxiliary task loss, compared to CyCADA, where semantic consistency is enforced by the reconstruction loss. As shown in Figure 2, the CyCADA model fails to learn a good adaptation when the target domain contains few unsupervised samples. Additionally, the CyCADA model shows high instability in low-resource situations. Our model achieves more robust and better performance. We attribute this to the proper use of the source classifier to enforce consistency and to the robustness that we get by using two cycles (also shown in the ablation study in Table 1).\n\nComment on speech domain experiments:\nWe have edited the speech experiment section for more clarification. To mention some changes, “Adapted Male” is changed to “Male -> Female” to preserve consistency in notation, and “All Data” refers to “Male+Female” with no adaptation. CycleGAN results are added for \"Female + Adapted Male\" and \"All Data + Adapted Male\".", "We appreciate all reviewers for providing insightful comments on technical aspects of the proposed method.\nBelow, we summarize the revision briefly. Detailed responses to each reviewer/comment follow, based on each reviewer's feedback.\n- An additional experiment is performed to compare the performance of our model in the low-resource unsupervised domain with the CyCADA model (see Figure 2)\n- Table 2 (low-resource supervised experiment) in the previous version is now replaced with Figure 3 (bar plot), where additional baselines are added, to emphasize the significance of domain adaptation in our model in comparison to state-of-the-art models.\n- Table 3 (Speech experiments): The updated acronyms in this table are now consistent with those in the visual adaptation section. We further added additional CycleGAN results to provide more baselines for comparison between models. \n- The new title for the work is “Augmented Cyclic Adversarial Learning for Low Resource Domain Adaptation”\n- The acronym for the proposed model is changed from “Augmented-Cyc” to “ACAL” and “Relaxed-Cyc” to “RCAL”\n- In all experiments, tables and figures, the numbers of samples are per class, whether supervised or unsupervised\n- All changes are highlighted in the paper\n- We have used the official CyCADA open-source code to reproduce its results and get the performance on low-resource unsupervised adaptation in Figure 2. 
\nCyCADA code: https://github.com/jhoffman/cycada_release\n\n\nClarification on Novelty:\nIn this paper, we address the problem of domain adaptation in low-resource situations: supervised, semi-supervised, and unsupervised. We emphasize the necessity of using two cycles in tackling this problem. As evident from our experiments (see results in Table 1 and Figure 2), current one-cycle-based models (such as CyCADA) and the conventional two-cycle method (CycleGAN) fail to achieve stable and good adaptation in low-resource situations.\n", "- Comment on “statistical significance on TIMIT experiments”:\nWe have chosen the TIMIT dataset because of its inherently low-resource domains for the different genders. As shown in Table 3, when using only male speech for training the network, testing on female speakers results in a large gap (11% in phoneme recognition error) compared to the baseline. However, by using only “Male -> Female” data in training of the proposed model, this gap can be reduced by ~10% for 124 voices in the validation set and 64 voices in the test set for the female domain.\n\n- Comment on “Whether GANs are parametric or non-parametric”:\nHere we refer to the classical parametric models for modeling a data distribution. In this sense, the generator in a GAN implicitly models the true distribution. Therefore, we categorize GANs as non-parametric density estimation models, since they do not assume any form of distribution.\n\n- Comment on the general applicability of the proposed domain adaptation model:\nSince for any sample, whether target or source, there are two classifiers in the cycle to preserve the class label information during transformation across domains, we believe that this implicit enforcement of content preservation will hold in broader applications. If the model is able to figure out which parts are important for a certain class and ignore other parts, that is a desired behavior, since only those parts are important for the task at hand. \n" ]
[ 6, -1, 5, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2019_B1G9doA9F7", "H1gOeZtCpQ", "iclr_2019_B1G9doA9F7", "Byez806lJE", "HyeIY_i1y4", "ByxZHgYCam", "rkxI_hF3RX", "rkxI_hF3RX", "BJxNk5FwCQ", "iclr_2019_B1G9doA9F7", "HJxHpfhH2m", "B1xgX-Y0am", "rkgVnNv52Q", "iclr_2019_B1G9doA9F7", "HklKo3NW6Q" ]
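The record above repeatedly contrasts cycle-consistency enforced by L1 reconstruction (CycleGAN/CyCADA) with cycle-consistency enforced through task-specific classifiers. A minimal PyTorch sketch of that idea, for one direction of the cycle, is below; the names `G_st`, `G_ts`, `M_s`, `M_t` are hypothetical stand-ins for the generators and task models, and the adversarial terms of the full objective are omitted.

```python
import torch.nn.functional as F

def augmented_cycle_loss(x_s, y_s, G_st, G_ts, M_s, M_t):
    """Sketch of task-based cycle consistency (source -> target -> source).

    Instead of penalizing |G_ts(G_st(x_s)) - x_s| directly, the cycle is
    closed through task models: both the adapted sample and its cycled
    reconstruction must still be classified with the source label y_s.
    """
    x_st = G_st(x_s)    # map source sample into the target domain
    x_sts = G_ts(x_st)  # map back to the source domain, completing the cycle
    task_adapted = F.cross_entropy(M_t(x_st), y_s)  # task loss on adapted data
    task_cycled = F.cross_entropy(M_s(x_sts), y_s)  # task loss after the cycle
    return task_adapted + task_cycled
```

In the low-resource discussion above, the symmetric loss (target -> source -> target) would be added as well, which is the "two cycles" point the authors emphasize.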
iclr_2019_B1GAUs0cKQ
Variance Networks: When Expectation Does Not Meet Your Expectations
Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layers. Each weight of a variance layer follows a zero-mean distribution and is only parameterized by its variance. It means that each object is represented by a zero-mean distribution in the space of the activations. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such zero-mean parameterization leads to a much better training objective than more flexible conventional parameterizations where the mean is being learned.
accepted-poster-papers
The authors describe a very counterintuitive type of layer: one with zero-mean Gaussian weights. They show that various Bayesian deep learning algorithms tend to converge to layers of this variety. This work represents a step forward in our understanding of Bayesian deep learning methods and may potentially shed light on how to improve those methods.
val
[ "SyeR8Podp7", "ByloevouTQ", "Hkx_V8sdam", "SkxrImCK2Q", "r1e0vhrKhX", "HygV2aqO3Q" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and your questions!\n\n> (1) My main concern is verification. Most of the comparisons are between variance layer (zero-mean) and conventional binary dropout, while the main argument of the paper is that it’s better to set Gaussian posterior’s mean to zero. So in all the experiments the paper should compare zero-mean variance layer against variational dropout (neuron-wise Eq. 14) and sparse variational dropout (additive Eq. 14), where the mean isn’t zero.\n\nUsually a fully-factorized Gaussian posterior achieves the same performance as the binary dropout posterior (e.g. shown in [1,2]), which we also observed in our experiments.\n\n> (2) The paper applies variance layers to some specific layers. Are there any guidelines to select which layers should be variance layers?\n\nNeural networks are usually not very stable to high amounts of noise in the first layers. Also, we have observed that it is hard to train a variance network with the last layer (right before softmax) being variance layer. Therefore a simple rule of thumb is to set the first layers to be conventional deterministic layers, then add several variance layers, and then add the last deterministic layer to obtain the logits.\n\n> (2) What’s the prior distribution used in the experiment of Table 1?\n\nWe have used the log-uniform prior in this experiment. The result for the ARD prior is the same.\n\n\n[1] Gal, Yarin, and Zoubin Ghahramani. \"Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.\" ICML 2016.\n[2] Louizos, Christos, and Max Welling. \"Multiplicative normalizing flows for variational bayesian neural networks.\" ICML 2017", "Thank you for your review and your questions!\n\n> Q1: if every transformation is antisymmetric non-linear, then it seems that the expected distribution of $t$ in (2) is zero. Is this true or not? In another word, class information has to be read out from the encoding of instances in Fig 1. It seems antisymmetric operators cannot do so, as it will only get symmetric distributions from symmetric distributions.\n\nIf we have antisymmetric non-linearities, the expected value of each neuron of each layer is indeed zero. This would fail at the regression task, as the expected output of the network would be always zero. However, in multiclass classification, we use softmax to obtain predictions, so the posterior predictive distribution (the expected softmax) is non-trivial and allows to obtain reasonable predictions.\n\n> Q2: it is not straightforward to see why KL term needs to go zero. In my understanding, the posterior aims to fit two objectives: maximizing data likelihood and minimizing KL term. When the signal from the data is strong (e.g. large amount of data), the first objective becomes more important. Then q does not really try to make KL zero, and alpha has no reason to go infinity. Can you explain more?\n\nUnfortunately, in VI for Bayesian neural networks, the number of parameters is much larger than the amount of data, and the data-term in the ELBO gets overwhelmed by the KL-term. Most papers on VI in BNNs use some kind of tricks to avoid that: some downscale the KL-term (e.g. [1,3]), others restrict the variance of the approximate posterior (e.g. [1,2,4]) or underfit the ELBO in other ways. We do not use such tricks in this paper. This is one reason for alpha to go to infinity. Usually, it is not possible to set the KL to zero and retain good predictive performance with conventional priors. 
However, for the log-uniform and the ARD priors, the argmin of KL(q(w)||p(w)) is a broad family of distributions, the zero-mean fully-factorized Gaussians. As we show, such a family is enough to achieve good predictive performance, so the overall objective is better: the KL is set to zero and the data-term is similar to the data-term of models with the full FFG posterior.\n\n> Q3: Is the claimed benefit from the optimization procedure or the special structure of the variance layer? Is it possible to test the hypothesis by 1) initializing a q distribution with learnable mean by the solution of the variance neural network and then 2) optimizing q? Then the optimization procedure should continue to increase the ELBO. Then compare the learned q against the variance neural network. If the learned q is better than the variance network -- it means the network structure is better for optimization, but the structure itself might not be so special. If the learned q is worse than the variance network, then the structure is interesting.\n\nWe did try to do it. The ELBO does not increase, and the network does not change: it is equivalent to fine-tuning the variances of the variance network. The variance network is a stable local optimum: if the data-term is already good enough, the KL term would prevent the means from increasing (when the mean mu is orders of magnitude smaller than the standard deviation sigma, the KL-term behaves like log|mu+eps| for a very small eps), and the data-term would not favor increasing mu in any way.\n\n[1] Kingma, Diederik P., Tim Salimans, and Max Welling. \"Variational dropout and the local reparameterization trick.\" NIPS 2015.\n[2] Louizos, Christos, and Max Welling. \"Multiplicative normalizing flows for variational Bayesian neural networks.\" ICML 2017\n[3] Ullrich, Karen, Edward Meeds, and Max Welling. \"Soft weight-sharing for neural network compression.\" ICLR 2017\n[4] Blundell, Charles, et al. \"Weight uncertainty in neural networks.\" ICML 2015", "Thank you for your review and your questions!\n\n> I think the claim for the benefits of the variance layer is not well supported. Variance layers require test-time averaging to achieve competitive accuracy, while the additive case in Eq. (14) using mean propagation achieves similar performance (e.g., the results in Table 1).\n\nMost techniques for training stochastic neural networks, like dropout, variational inference or MCMC, require test-time averaging for good uncertainty estimation. If the inference time is crucial, one may use distillation techniques to mimic the predictive distribution of the variance network with a fast deterministic DNN. If one is only interested in the accuracy, the variance networks are probably not the best way to go.\n\n> The results in Sec 6 lack comparison to other Bayesian methods (e.g., the additive case in Eq. (14)).\n\nUsually a fully-factorized Gaussian posterior achieves the same performance as the binary dropout posterior (e.g. shown in [1, 2]), which we also observed in our experiments. \n\n> Which prior is chosen to produce the results in Table 1? KL(q||p)=0 for the zero-mean case corresponds to the fact that the variational posterior equals the prior, which implies the ARD prior if I did not misunderstand. In this case, the ground truth posterior p(w|D) for different methods is different and the corresponding ELBOs for them are incomparable.\n\nWe have used the log-uniform prior in Table 1; however, the results for the ARD prior are the same. 
The result of this experiment can be discussed even without the Bayesian interpretation. Here we have 5 models with exactly the same objective function. Two of the models (weight-wise and additive) are equivalent and contain the other models (neuron-wise, layer-wise, zero-mean) as special cases. We would expect the richer models to achieve a better value of the training objective. Surprisingly, in practice, we observe exactly the opposite.\n\n> The setting in Table 2 is also unclear. As ``Variance’’ stands for variational dropout, what does ``Dropout’’ mean? The original Bernoulli dropout?\n\nYes, we compare to plain binary (Bernoulli) dropout. ''Variance'' stands for a variance network that is trained using variational dropout (we explicitly switch to the zero-mean parameterization during test time to obtain a variance network).\n\n> Besides, I’m wondering why the variance layer (i.e., the zero-mean case in Eq. (14)) is not directly implemented in this case.\n\nIt is hard to train variance layers from scratch, whereas the training of variational dropout in the layer-wise multiplicative parameterization is stable (see Appendix B). During test time, we explicitly use the zero-mean parameterization to ensure that we obtain a true variance network.\n\n[1] Gal, Yarin, and Zoubin Ghahramani. \"Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.\" ICML 2016.\n[2] Louizos, Christos, and Max Welling. \"Multiplicative normalizing flows for variational Bayesian neural networks.\" ICML 2017", "This paper investigates the effects of the mean of the variational posterior and proposes the variance layer, which only uses variance to store information.\n\nOverall, this paper analyzes an important but not well explored topic of variational dropout methods (the mean propagation at test time) and discusses the effect of weight variance in building a variational posterior for Bayesian neural networks. These findings are interesting and I appreciate the analysis. \n\nHowever, I think the claim for the benefits of the variance layer is not well supported. Variance layers require test-time averaging to achieve competitive accuracy, while the additive case in Eq. (14) using mean propagation achieves similar performance (e.g., the results in Table 1). The results in Sec 6 lack comparison to other Bayesian methods (e.g., the additive case in Eq. (14)). \n\nBesides, there exist several problems which need to be addressed.\n\nSec 5.\nSec 5 is a little hard to follow. Which prior is chosen to produce the results in Table 1? KL(q||p)=0 for the zero-mean case corresponds to the fact that the variational posterior equals the prior, which implies the ARD prior if I did not misunderstand. In this case, the ground truth posterior p(w|D) for different methods is different and the corresponding ELBOs for them are incomparable.\n\nSec 6. \nThe setting in Table 2 is also unclear. As ``Variance’’ stands for variational dropout, what does ``Dropout’’ mean? The original Bernoulli dropout? Besides, I’m wondering why the variance layer (i.e., the zero-mean case in Eq. (14)) is not directly implemented in this case.\n\n", "This paper studies variance neural networks, which approximate the posterior of Bayesian neural networks with zero-mean Gaussian distributions. The inference results are surprisingly good though there is no information in the mean of the posterior. It further shows that several variational dropout methods are closely related to the proposed method. 
The experiment indicates that the ELBO can actually be better optimized with this restricted form of variational distribution. \n\nThe paper is clearly written and easy to follow. The technique in the paper is solid.\n\nHowever, the authors might need to clarify a few questions below. \n\n\nQ1: if every transformation is antisymmetric non-linear, then it seems that the expected distribution of $t$ in (2) is zero. Is this true or not? In other words, class information has to be read out from the encoding of instances in Fig 1. It seems antisymmetric operators cannot do so, as they will only get symmetric distributions from symmetric distributions. \n\nQ2: it is not straightforward to see why the KL term needs to go to zero. In my understanding, the posterior aims to fit two objectives: maximizing the data likelihood and minimizing the KL term. When the signal from the data is strong (e.g. a large amount of data), the first objective becomes more important. Then q does not really try to make the KL zero, and alpha has no reason to go to infinity. Can you explain more? \n\nQ3: Is the claimed benefit from the optimization procedure or the special structure of the variance layer? Is it possible to test the hypothesis by 1) initializing a q distribution with learnable mean by the solution of the variance neural network and then 2) optimizing q? Then the optimization procedure should continue to increase the ELBO. Then compare the learned q against the variance neural network. If the learned q is better than the variance network -- it means the network structure is better for optimization, but the structure itself might not be so special. If the learned q is worse than the variance network, then the structure is interesting. \n\n\nA few detailed comments:\n\n1. logU is used without definition. \n2. If the paper had a few sentences explaining the \"Gaussian dropout approximate posterior\", section 4 would be smoother to read. ", "This paper introduced a new stochastic layer termed the variance layer for Bayesian deep learning, where the posterior on the weights is a zero-mean symmetric distribution (e.g., Gaussian, Bernoulli, Uniform). The paper showed that under 3 different prior distributions, the Gaussian Dropout layer can converge to a variance layer. Experiments verified that it can achieve similar accuracies to conventional binary dropout in image classification and reinforcement learning tasks, is more robust to adversarial attacks, and can be used to sparsify deep models.\n\nPros:\n(1)\tProposed a new type of stochastic layer (variance layer)\n(2)\tCompetitive performance on a variety of tasks: image classification, robustness to adversarial attacks, reinforcement learning, model compression\n(3)\tTheoretically grounded algorithm\n\nCons:\n(1)\tMy main concern is verification. Most of the comparisons are between the variance layer (zero-mean) and conventional binary dropout, while the main argument of the paper is that it's better to set the Gaussian posterior's mean to zero. So in all the experiments the paper should compare the zero-mean variance layer against variational dropout (neuron-wise Eq. 14) and sparse variational dropout (additive Eq. 14), where the mean isn't zero.\n(2)\tThe paper applies variance layers to some specific layers. Are there any guidelines to select which layers should be variance layers?\n\nSome minor issues:\n(1)\tPage 4, equations of the Gaussian/Bernoulli/Uniform variance layer: they should be w_ij=…, instead of q(w_ij)=…\n(2)\tWhat's the prior distribution used in the experiment of Table 1?\n\n" ]
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "HygV2aqO3Q", "r1e0vhrKhX", "SkxrImCK2Q", "iclr_2019_B1GAUs0cKQ", "iclr_2019_B1GAUs0cKQ", "iclr_2019_B1GAUs0cKQ" ]
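The abstract above states that in a variance layer each weight follows a zero-mean distribution parameterized only by its variance. A minimal PyTorch sketch of a Gaussian variance layer is below; it is an illustrative reconstruction (class name and initialization value are assumptions, not the authors' code). With w_ij ~ N(0, sigma_ij^2), each pre-activation is itself a zero-mean Gaussian, so it can be sampled directly from its analytic variance, in the spirit of the local reparameterization trick.

```python
import torch

class VarianceLinear(torch.nn.Module):
    """Fully-connected layer whose weights are N(0, sigma^2):
    only the log-variance is learned, the mean is fixed at zero."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.log_sigma2 = torch.nn.Parameter(torch.full((n_in, n_out), -4.0))

    def forward(self, x):
        # Var[t_j] = sum_i x_i^2 * sigma_ij^2, and the mean is identically zero,
        # so the pre-activation can be sampled without sampling the weights.
        var = (x ** 2) @ self.log_sigma2.exp()
        return var.clamp_min(1e-16).sqrt() * torch.randn_like(var)
```

As the record above discusses, predictions from such a layer require test-time averaging over several stochastic forward passes, since a single pass carries information only in the noise scale.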
iclr_2019_B1GMDsR5tm
Initialized Equilibrium Propagation for Backprop-Free Training
Deep neural networks are almost universally trained with reverse-mode automatic differentiation (a.k.a. backpropagation). Biological networks, on the other hand, appear to lack any mechanism for sending gradients back to their input neurons, and thus cannot be learning in this way. In response to this, Scellier & Bengio (2017) proposed Equilibrium Propagation - a method for gradient-based training of neural networks which uses only local learning rules and, crucially, does not rely on neurons having a mechanism for back-propagating an error gradient. Equilibrium propagation, however, has a major practical limitation: inference involves doing an iterative optimization of neural activations to find a fixed-point, and the number of steps required to closely approximate this fixed point scales poorly with the depth of the network. In response to this problem, we propose Initialized Equilibrium Propagation, which trains a feedforward network to initialize the iterative inference procedure for Equilibrium propagation. This feed-forward network learns to approximate the state of the fixed-point using a local learning rule. After training, we can simply use this initializing network for inference, resulting in a learned feedforward network. Our experiments show that this network appears to work as well or better than the original version of Equilibrium propagation. This shows how we might go about training deep networks without using backpropagation.
accepted-poster-papers
The paper investigates a novel initialisation method to improve Equilibrium Propagation. The results are convincing, although the reviewers retain some small concerns here and there. One issue with the paper is the biological plausibility of the approach. Nonetheless, publication is recommended.
train
[ "rJxVRXrulV", "ryeI_q3v3Q", "Hyl4OzIIJN", "Syx_vW0HkE", "rkeWkpvBk4", "r1gcB3JH14", "H1l2Nbf50Q", "B1gzOC6Wa7", "HkxVPjhFAQ", "rkenvhdtA7", "rkxaD5dY07", "rye_pK_YAm", "H1xeru_K07", "r1eOy7xKhQ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Have bumped score to 7, in anticipation of the final improvements from this thread being included in the camera ready.", "This paper presents an improvement on the local/derivative-free learning algorithm equilibrium propagation. Specifically, it trains a feedforward network to initialize the iterative optimization process in equilibrium prop, leading to greater stability and computational efficiency, and providing a network that can later be used for fast feedforward predictions on test data. Non-local gradient terms are dropped when training the feedforward network, so that the entire system still doesn't require backprop. There is a neat theoretical result showing that, in the neighborhood of the optimum, the dropped non-local gradient terms will be correlated with the retained gradient terms.\n\nMy biggest concern with this paper is the lack of significant literature review, and that it is not placed in the context of previous work. There are only 12 references, 5 of which come from a single lab, and almost all of which are to extremely recent papers. Before acceptance, I would ask the authors to perform a literature search, update their paper to include citations to and discussion of previous work, and better motivate the novelty of their paper relative to previous work. Luckily, this is a concern that is addressable during the rebuttal process! If the authors perform a literature search, and update their paper appropriately, I will raise my score as high as 7.\n\nHere are a few related topic areas which are currently not discussed in the paper. *I am including these as a starting point only! It is your job to do a careful literature search. I am completely sure there are obvious connections I'm missing, but these should provide some entry points into the citation web.*\n- The \"method of auxiliary coordinates\" introduces soft (often quadratic) couplings between post- and pre- activations in adjacent layers which, like your distributed quadratic penalty, eliminate backprop across the couplings. I believe researchers have also done similar things with augmented Lagrangian methods. A similar layer-local quadratic penalty also appears in ladder networks.\n- Positive/negative phase (clamped / unclamped phase) training is ubiquitous in energy based models. Note though that it isn't used in classical Hopfield networks. You might want to include references to other work in energy based models for both this and other reasons. e.g., there may be some similarities between this approach and continuous-valued Boltzmann machines?\n- In addition to feedback alignment, there are other approaches to training deep neural networks without standard backprop. examples include: synthetic gradients, meta-learned local update rules, direct feedback alignment, deep Boltzmann machines, ...\n- There is extensive literature on biologically plausible learning rules -- it is a field of study in its own right. As the paper is motivated in terms of biological plausibility, it would be good to include more general context on the different approaches taken to biological plausibility.\n\nMore detailed comments follow:\n\nThank you for including the glossary of symbols!\n\n\"Continuous Hopfield Network\" use lowercase for this (unless introducing acronym)\n\n\"is the set non-input\" -> \"is the set of non-input\"\n\n\"$\\alpha = ...$ ... $\\alpha_j \\subset ...$\" I could not make sense of the set notation here.\n\nwould recommend using something other than rho for nonlinearity. 
rho is rarely used as a function, so the prior of many readers will be to interpret this as a scalar. phi( ) or f( ) or h( ) are often used as NN nonlinearities.\n\ninline equation after \"clamping factor\" -- believe this should just be C, rather than \\partial C / \\partial s.\nMove definition of \\mathcal O up to where the symbol is first used.\n\ntext before eq. 7 -- why train to approximate s- rather than s+? It seems like s+ would lead to higher accuracy when this is eventually used for inference.\n\neq. 10 -- doesn't the regularization term also decrease the expressivity of the Hopfield network? e.g. it can no longer engage in \"explaining away\" or enforce top-down consistency, both of which are powerful positive attributes of iterative estimation procedures.\n\nnotation nit: it's confusing to use a dot to indicate matrix multiplication. It is commonly used in ML to indicate an inner product between two vectors of the same shape/orientation. Typically matrix multiplication is implied whenever an operator isn't specified (eg x w_1 is matrix multiplication).\n\neq. 12 -- is f' supposed to be h'? And wasn't the nonlinearity earlier introduced as rho? Should settle on one symbol for the nonlinearity.\n\nThis result is very cool. It only holds in the neighborhood of the optimum though. At initialization, I believe the expected correlation is zero by symmetry arguments (eg, d L_2 / d s_2 is equally likely to have either sign). Should include an explicit discussion of when this relationship is expected to hold.\n\n\"proportional to\" -> \"correlated with\" (it's not proportional to)\n\nsec. 3 -- describe nonlinearity as \"hard sigmoid\"\n\nbeta is drawn from uniform distribution including negative numbers? beta was earlier defined to be positive only.\n\nFigure 2 -- how does the final achieved test error change with the number of negative-phase steps? ie, is the final classification test error better even for init eq prop in the bottom row than it is in the top?\n\nThe idea of initializing an iterative settling process with a forward pass goes back much farther than this. A couple contexts being deep Boltzmann machines, and the use of variational inference to initialize Monte Carlo chains\n\nsect 4.3 -- \"the the\" -> \"to the\"", "We will use your suggestion to update the final draft, and in general, we'll do a proof-read over the paper to make sure we're not making overly-confidant claims. Thank you for your constructive feedback throughout this review process. ", "Thank you for your additional responses, and the added argument motivating gradient alignment.\n\nIn terms of Figure 1, I would recommend modifying\n\"Perhaps surprisingly, the locally trained model converges faster. This is likely because\nlearning to optimize local targets is a simpler problem.\"\nand noting that no hyper-parameter tuning has been performed, and that the relative performance of local vs. global learning thus remains an open question, with the current comparison at best suggestive.\n\nOtherwise -- I am out of comments. Interesting paper. :)", "- \"In the new Figure 1a, \" ... \"I have a suspicion that the better performance of the local network may be due only to hyperparameters being better tuned for local rather than global training.\"\n\nNo hyperparameter search was done for this experiment. 
The only hyperparameter that we tuned was the scale of the initial weights / target-network weights, and that was tuned to ensure approximately equal activation magnitudes for each layer (so that activations neither die out or saturate with depth). The network had 8 layers (including input) of 200 units, training was done with SGD with learning rate of 0.01 and momentum of 0.9, but the results look the same with other optimizers we tried (Adagrad, Adamax). When we used no adaptation in the step size (ie momentumless SGD), the local started off slower (presumably because the gradients were simply smaller), but then overtook the global optimiser.\n\n- \"I'm pretty sure this is not an accurate statement about the method of auxiliary coordinates?\"\n\nYou are correct, I'd misread the Z-step of MAC to be sample-local but layer-global, when in fact it is both sample-local and (sort of) layer local (still involves backprop across one layer). We will update the draft for the final. Thank you for pointing that out. \n\n- \"Can you say more about why you would expect the gradients to be aligned at initialization?\"\n\nIt comes down to the fact that for a random matrix w, <x, x w w^T> will tend to be positive. At random initialization, a component of d_L1/d_w1 and a component of d_L2/d_w1 are related by a (w w^T) term, so tend to be positively aligned. We have posted an (anonymous) explanation here, and will add it to the appendix: https://srv-file1.gofile.io/download/VfLgmv/2ceb9fae950fd7ab1b532534f4cd2435/Why-Alignment-at-Init.pdf \n\nIn any case, the initial local-distant alignment is not that important, because even if it were random, the local gradient will still be aligned with the *global* gradient (which is the local+distant gradient) initially. So from the start, phi is being pushed towards phi*, which in turn leads to an increased local-distant alignment.\n\n- \"Another thread of references that comes to mind, for biologically plausible local learning rules\"...\n\nThank you for the paper. We read this one, as well as the more recent \"Learning Unsupervised Learning Rules\" and a few other \"Learning to Learn\" papers. We don't (yet) see a close connection to the method in this work - both still require backpropagation (or a less efficient non-gradient-based search method) to train the parameters of the optimizer. However we agree that this deserves mention as another line of work that may lead to biologically plausible local learning rules. \n", "Thank you for the updates! The paper is much improved. I have raised my score. I still have some specific concerns, below:\n\nIn the new Figure 1a, could you talk about how you search over optimization and initialization hyper-parameters for the local and global loss cases? I have a suspicion that the better performance of the local network may be due only to hyperparameters being better tuned for local rather than global training.\n\nre \"they differ from our method in that their layer activations z_k are calculated using backpropagation-based optimization.\"\nI'm pretty sure this is not an accurate statement about the method of auxiliary coordinates? At least, it is my understanding that, due to the quadratic coupling between z_k in adjacent layers, the gradient with respect to z_k only depends on the z_{k-1 ... k+1}, and so there is no backpropagation through multiple layers in the network. 
Similarly for the gradient with respect to the weights in a given layer.\n\nMore minor questions/comments:\n\nre: \"Even at random initialization the gradient alignment S(d_L1/d_phi1, d_L2/d_phi1) is in general NOT zero-mean. For a randomly initialized network, d_L2/d_phi1 and d_L1/d_phi1 are both zero-mean random variables (for the reason you mention - that d_L2/d_s2 is equally likely to have either sign), but they are not independent - they are both functions of phi1, and this dependency induces alignment. We’ve changed Figure to demonstrate that the initially weak alignment becomes stronger as training progresses (and phi approaches phi*), and added a derivation of the alignment result in Appendix B.\"\n\nCan you say more about why you would expect the gradients to be aligned at initialization? Is it just that both gradients are expected to have a non-zero projection in the phi1 direction (because the distance between two random vectors will tend to shrink if either vector is moved towards the origin)?\n\nAnother thread of references that comes to mind, for biologically plausible local learning rules in machine learning, is the use of meta-learning to learn those rules. A seed paper is:\nYoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal,\nDépartement d’informatique et de recherche opérationnelle, 1990.\n", "Thank you for the reply and for having a sharp eye for notational errors.\n\nAbout Eq. (3) ... both work, and behave very similarly. They both compute something proportional to the correct gradient in the limit of small beta. The difference is that in the $E^{\\beta}(s+, x, y)$ version parameters of the final layer directly optimise C on top the contrastive term. The version in our paper (containing $E(s+, x)$) is consistent with the Hebbian update rule in Equation 4, so for consistency we'd like to leave it as is (also, that is the version implemented in our code).\n\nAbout $g(\\delta w)$...We're writing out the 1st order expansion about 0 in the form \"f(x) \\approx f(0) + x f'(0)\", and noting that in the limit of small x, this approximation becomes exact, which we believe should be correct. We have changed our derivative notation to make clear that we are taking the \"derivative of g, evaluated at 0\" and not \"the derivative of g(0)\". \n\nWe applied your other corrections in our new draft. ", "Summary:\nThis paper aims at improving the speed of the iterative inference procedure (during training and deployment) in energy-based models trained with Equilibrium Propagation (EP), with the requirement of avoiding backpropagation. To achieve this, the authors propose to train a feedforward network to predict a fixed point of the \"equilibrating network\". Gradients are approximated by local gradients only. The method is compared to standard EP on MNIST.\n\nThe overall idea of the paper to speed up the slow iterative inference (during training and deployment) seems very reasonable. However, the paper seems to be still work in progress and could be improved on the theoretical side, the presentation, and especially the experimental evaluation. \nThe paper is rather weak on the theoretical side. The main theoretical result is perhaps the analysis of the gradient alignment. However, I cannot follow their analysis and suspect that it is false. More detailed comments follow. Regarding the presentation, I found many typos which I don't consider in my evaluation. However, there are both minor and major issues with several equations. 
Details follow below. Another major concern is the lack of experimental evaluation. There is only a single plot that shows the learning curves of EP and the proposed Initialized EP with 2 different numbers of negative-phase steps and for 2 different architectures. The authors should put a lot more effort into the evaluation. For example, evaluate the influence of the hyperparameter in Eq. (10) (Is lambda > 0 detrimental to the capacity of the equilibrating network?), etc.\n\nLastly, as of my current understanding, the whole motivation for the EP framework is biological plausibility. In my opinion, this paper lacks a discussion of that motivation with respect to the proposed approach.\n\nTo summarize, there are too many major problems that cannot be addressed only in the rebuttal phase. \n\n\nDetails:\n- Sec. 1.1. Equilibrium Propagation --> Sec. 2 (It is not part of the introduction) \n- In 1.1., \"Equilibrium Propagation is a method for training a Continuous Hopfield Network for classification\". EP is a method for training various energy-based models, not just hopfield networks. \n- Eq. (1): I find the notation very confusing. Specifically, I can't make sense of:\n a) \"$\\alpha = \\{\\alpha_j: j \\in S\\}$ denotes the network architecture\". What does it mean for alpha to denote an architecture? Please be more specific. \n b) In the definition of $\\alpha_j$, you are constructing a set of neurons $i \\in S \\cup I$, but then you are re-defining i in the same set, using the forall operator. \n c) Even if the two above is corrected, I can't follow. Please simplify the notation (the energy function is not that complicated).\n- Eq. (1): Why is it $i \\in S$ everywhere, rather than all neurons, including input neurons (as in [Scellier and Bengio 2017])? \n- The text between Eq. (2) and Eq. (3) introduces the classification targets by adding the gradients of another energy function $C(s_O, y)$ to the previously described energy function from Eq. (1). First $C(s_O, y)$ is nowhere defined. Second, The energy is a scalar, while the gradient is a vector, so there must be a mistake. I suppose it should be just $C(s_O, y)$ rather than its gradients?\n- Eq. (6): $f_{\\phi_{j}}$ is defined as a function of multiple $f_{\\phi_{i}}$ ? \n- Eq. (9): Again the index i is used twice. \n- Sec. 2.1: Can you elaborate on why the equilibrating network can create targets that are not achievable by the feedforward network? Is it a problem of your particular choice of model architecture? Isn't the \"regularization\" then detrimental to the (capacity of the) equilibrating network? \n- In Sec. 2.2 on page 5, you claim that given random parameter intitialization, the gradients should almost always be aligned. For random weight matrices, where the weights are drawn with zero mean, I cannot see how this is true. To compute gradients of layer $l$, backpropagation (in an MLP) computes the matrix-vector multiplication between transposed weight matrix and the gradients of layer l+1 (I am ignoring the activation function here). The resulting gradient should have zero mean.\n- Eq. (11): Is it the L1 Norm or L2?\n- Eq. (12): In the preceding text, you made claims about the gradient alignment for random parameter initialization. In Eq. (12) you analyze the gradients close to the optimum?\n- Eq. (12): What is f, it has never been defined. I suppose it should be the h from above? \n- Eq. (12): I don't understand how you arrived at these gradient equations, even the first one. 
Shouldn't it be the standard backpropagation in an MLP or am I missing something? Using the chain rule $\\frac{\\partial L_1}{\\partial w_1} = \\frac{\\partial L_1}{\\partial s_1} \\frac{\\partial s_1}{\\partial w_1}$, I arrive at a different result. How can there be the derivative of f (or h) twice?\n- Sec. 3: Is beta really sampled from a zero-centred uniform distribution? On page 2, beta is introduced as a small positive number. Would a negative beta not cause the model to settle to a fixed point where maximally wrong targets are predicted?\n\n\n[Scellier and Bengio 2017] Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation", "Dear Authors,\nthe new version was improved greatly. Many mistakes have been corrected and most of the raised issues have been addressed. \nThe derivation in App. B is very helpful; it wasn't clear before that you were doing a Taylor approximation and hence get twice the derivative. The analysis of the regularization hyper-parameter is very useful as well. \nI will adjust my rating. \n\nI do still have a few issues:\nIn Eq. (3), shouldn't the first term be $E^{\\beta}(s+, x, y)$ including the loss for the target, rather than $E(s+, x)$? Then it would also be clear why beta can be negative.\nYou still use f instead of rho in a few places, e.g. after Eq. (12) and in App. B.\nIn App. B you also use both L and l. \nIn App. B, better write $g(\\delta w)$ rather than g(0) in the line where you have the $\\lim_{\\delta w \\rightarrow 0}$.", "Thank you for your flexible review system which incentivized us to do a proper exploration of related work! In response to your concern, we have significantly expanded the related work section. We now mention what we think is an important link to amortized variational inference. We have also simplified notation in response to your other comments. Below we respond to some of your more detailed comments:\n\n---\n- \"would recommend using something other than rho for nonlinearity. rho is rarely used as a function, so the prior of many readers will be to interpret this as a scalar. phi( ) or f( ) or h( ) are often used as NN nonlinearities.\"\n\nIn Equilibrium Propagation and other related papers rho is used as the nonlinearity. So we would like to keep rho here in order to be consistent with those papers.\n----\n- \"why train to approximate s- rather than s+? It seems like s+ would lead to higher accuracy when this is eventually used for inference.\"\n\nGood observation - we added a footnote to address it: We could also minimize the distance with s+, but found experimentally that this actually works slightly worse than s−. We believe that this is because equilibrium propagation depends on the s− being very close to a true minimum of the energy function, and so initializing the negative phase to sf≈s− will lead to better gradient computations than when we initialize the negative phase to sf≈s+.\n\n----\n\n- \"eq. 10 -- doesn't the regularization term also decrease the expressivity of the Hopfield network? e.g. it can no longer engage in \"explaining away\" or enforce top-down consistency, both of which are powerful positive attributes of iterative estimation procedures.\"\n\nIt does potentially reduce the expressivity, for the reasons you describe. We now expand a bit about that in Section 2.3. However, our primary concern here is to train a feedforward network without backpropagation, so we already accept that we’re producing a model with the expressivity of a feedforward network. 
That said, because of concerns about this, we add an experiment in Appendix C where we run training to completion for various swept values of lambda. We do observe that too-high a lambda causes eq.prop to become unstable and fail, but we do not observe what we might expect if constrained-expressivity were a problem (i.e. the error for s^f decreasing and the error for s- increasing with increased lambda). \n\n----\n\n- \"This result is very cool. It only holds in the neighborhood of the optimum though. At initialization, I believe the expected correlation is zero by symmetry arguments (eg, d L_2 / d s_2 is equally likely to have either sign). Should include an explicit discussion of when this relationship is expected to hold.\"\n\nThank you! We also found it unintuitive at first. Even at random initialization the gradient alignment S(d_L1/d_phi1, d_L2/d_phi1) is in general NOT zero-mean. For a randomly initialized network, d_L2/d_phi1 and d_L1/d_phi1 are both zero-mean random variables (for the reason you mention - that d_L2/d_s2 is equally likely to have either sign), but they are not independent - they are both functions of phi1, and this dependency induces alignment. We’ve changed the figure to demonstrate that the initially weak alignment becomes stronger as training progresses (and phi approaches phi*), and added a derivation of the alignment result in Appendix B.\n\n----\n- \"proportional to\" -> \"correlated with\" (it's not proportional to)\n\nOur statement is “when the term is proportional to an identity matrix, we see that dL1/dw1 and G1 are perfectly aligned”. This is true: we’re just describing the case in which ideal alignment happens, not saying that in a normal situation it’s proportional. \n\n----\n\n- “Figure 2 -- how does the final achieved test error change with the number of negative-phase steps? ie, is the final classification test error better even for init eq prop in the bottom row than it is in the top?”\n\nWe modified Figure 2 to show the scores. The answer is no: once there are enough negative steps for training to be stable, the forward pass doesn’t become more accurate with additional steps. \n", "Thank you for your helpful review. We agree with all the points you made, and have addressed them in the paper. Below we address some of your questions individually:\n----\n\n- “My main concern is with the mathematical argument in section 2.2. s* is not the same as s- , and in general, it is not clear at all that there should be a phi* such that s*=s-. Also, the derivation in eqn 12 assumes that w is very close to w*, which is not clear at all” … I don't think that this is a deal-breaker, but I think that this section needs to be more prudent in the way that it concludes from these observations (the math and the experiments).\n\nThis is definitely a valid concern. We’ve added a paragraph to the end of this section (now 2.4) addressing your point. We also mention that in future work, this problem could be dealt with by figuring out a smart way to anneal the lambda-term (introduced in now-section 2.3: Including the forward states in the energy function) in a way that does not harm training of the equilibrating network. We note that in the limit of lambda -> infinity, s-=sf, and so the targets provided by the equilibrating network are achievable by the forward network. However we observe experimentally (see added experiment in Appendix C) that setting lambda too high can cause training instabilities. \n\n-----\n\n- “One question I have is about biological plausibility. 
The whole point of EqProp was to produce a biologically plausible variation on backprop. How plausible is it to have two sets of weights for the feedforward and recurrent parts? That is where a trick such as proposed in Bengio et al 2016 might be useful, so that the same set of weights could be used for both.”\n\nWe felt that the most natural thing to do in this work was not to tie the parameters of the feedforward and inference network. This is more in line with the amortized variational inference view - where the variational encoder network typically does not share parameters with the generative model. However, we agree that in terms of biological plausibility, it makes more sense that the feedforward network would share parameters with the equilibrating network, as in [Bengio et al 2016]. This, however, was not the aspect of biologically plausible deep learning that we aim to attack in this paper. The big problem, as we see it, is that the brain does not use backpropagation, yet seems to train fast inference networks. Once that is solved, we can attack the other aspects of biological plausibility.\n\nWe have also added some emphasis in the Discussion section on the other motivation for this work - the design of future hardware for deep learning. This work could be a useful starting point for the design of an efficient analog circuit for training feedforward networks. \n\n----\n- “It might be good to mention Bengio et al 2016 in the introduction since it is the closest paper (trying to solve the same problem of using a feedforward net to approximate the true recurrent computation), rather than pushing that to the end.”\n\nWe agree that this paper is very relevant, but couldn’t find a way to work it into the introduction without having to explain the idea prematurely. We agree that it should be more emphasized here, so we’ve moved our mention of it to the beginning of the related work section. \n\n", "Thank you for doing such a detailed review of our work. We have addressed all of your points in our paper. We have used your feedback to simplify notation, add two experiments, and better support the theoretical analysis in the text. We hope that our changes allay your concerns, and we appreciate the time and effort you put into reviewing and improving our paper. \n\n----\n\n- “Eq. (1): Why is it $i \\in S$ everywhere, rather than all neurons, including input neurons (as in [Scellier and Bengio 2017])? “\n\nThe phrasing of eq. 1 in [Scellier and Bengio 2017] is slightly imprecise, because in reality no nonlinearity is applied to the input. We instead separate out the input terms from the other terms in the network. This also allows us to just use “s” to describe the state of the network, and not need the additional vector “u” to describe “the state of the network AND the inputs”. \n\n----\n\n- “Sec. 2.1: Can you elaborate on why the equilibrating network can create targets that are not achievable by the feedforward network? Is it a problem of your particular choice of model architecture? Isn't the \"regularization\" then detrimental to the (capacity of the) equilibrating network? “\n- “ For example, evaluate the influence of the hyperparameter in Eq. (10) (Is lambda > 0 detrimental to the capacity of the equilibrating network?)”\n\nWe rewrote this section (now numbered 2.3) to clarify why the function of the equilibrating network is more “flexible” than that of the feedforward network. 
You are correct that the “regularization” is then detrimental to the capacity of the equilibrating network, but that is not important if our objective is just to use the equilibrating network to train the feedforward network. \n\nWe added a new experiment in Appendix C where we sweep lambda to demonstrate its effect.\n\n---\n\n“In Sec. 2.2 on page 5, you claim that given random parameter initialization, the gradients should almost always be aligned. For random weight matrices, where the weights are drawn with zero mean, I cannot see how this is true. To compute gradients of layer $l$, backpropagation (in an MLP) computes the matrix-vector multiplication between the transposed weight matrix and the gradients of layer l+1 (I am ignoring the activation function here). The resulting gradient should have zero mean.”\n\nThe alignment effect is real, just somewhat unintuitive. It may help to think of it this way: For a random network with parameters phi, with targets assigned by another random network with parameters phi*, the gradients d_L1/d_phi1 and d_L2/d_phi1 are indeed zero-mean random variables. But they are not independent. Both of these gradients depend on phi1. Our analysis shows that when we are close to the ideal parameters (ie when phi is close to phi*), this dependency tends to produce a positive correlation between the two gradients. \n\nTo increase the reader’s confidence in this point, we have added a full derivation in Appendix B, and Figures 1 and 3, which demonstrate this alignment effect empirically.\n\n----\n\n- “Eq. (12): I don't understand how you arrived at these gradient equations, even the first one. <….> How can there be the derivative of f (or h) twice.”\n\nWe added a full derivation in Appendix B. The squared derivative arises from us doing a first-order Taylor expansion to calculate the derivative in the limit of small Delta_w. We did discover that we’d had an accidental minus sign in our equation, which we have corrected. We have also tested the equations numerically (see anonymous test script at https://pastebin.com/RRgcCnrb ).\n\n-----\n\n- “Sec. 3: Is beta really sampled from a zero-centred uniform distribution? On page 2, beta is introduced as a small positive number. Would a negative beta not cause the model to settle to a fixed point where maximally wrong targets are predicted?”\n\nYes, but the learning rate is also multiplied by beta, so when it is negative it tries to raise the energy of the “maximally wrong” targets. I have added a footnote explaining why this is done (it is a trick inherited from [Scellier & Bengio, 2017]). \n\n----\n\n- “Lastly, as of my current understanding, the whole motivation for the EP framework is biological plausibility. In my opinion, this paper lacks a discussion of that motivation with respect to the proposed approach.”\n\nWe are still many steps away from full biological plausibility. But one of the main gaps between deep learning models and our understanding of the brain is that the brain clearly does not use backprop. We think that addressing this issue alone (how to train a feedforward network without backprop) is sufficient scope for this paper. 
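The alignment claim discussed above is easy to check numerically. The following is a minimal self-contained sketch (our own illustration, independent of the linked pastebin script; the two-layer tanh teacher/student setup, the perturbation scale, and all variable names are assumptions) of measuring the cosine similarity between the local gradient dL1/dphi1 and the distant gradient dL2/dphi1 near phi ≈ phi*:

```python
# A minimal numerical check (our illustration, not the authors' script) of the
# alignment claim: for a two-layer tanh "student" with parameters close to the
# "teacher" that provides layerwise targets, the local layer-1 gradient and the
# distant layer-2 gradient, both taken w.r.t. phi1, should have clearly
# positive cosine similarity.
import torch

torch.manual_seed(0)
d = 50
phi1_star = torch.randn(d, d) / d ** 0.5            # teacher parameters
phi2_star = torch.randn(d, d) / d ** 0.5
phi1 = (phi1_star + 0.01 * torch.randn(d, d)).requires_grad_()
phi2 = (phi2_star + 0.01 * torch.randn(d, d)).requires_grad_()

x = torch.randn(256, d)
s1_target = torch.tanh(x @ phi1_star)               # layerwise teacher targets
s2_target = torch.tanh(s1_target @ phi2_star)

s1 = torch.tanh(x @ phi1)
s2 = torch.tanh(s1 @ phi2)
L1 = ((s1 - s1_target) ** 2).mean()                 # local loss at layer 1
L2 = ((s2 - s2_target) ** 2).mean()                 # distant loss at layer 2

g_local = torch.autograd.grad(L1, phi1, retain_graph=True)[0]
g_distant = torch.autograd.grad(L2, phi1)[0]
cosine = torch.nn.functional.cosine_similarity(
    g_local.flatten(), g_distant.flatten(), dim=0)
print(f"cosine(dL1/dphi1, dL2/dphi1) = {cosine.item():.3f}")  # positive near phi*
```

Shrinking the perturbation scale (0.01 above) strengthens the measured alignment, consistent with the claim that the effect is exact only in the neighborhood of phi*.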
\n\nMoreover, there is a second motivation: Equilibrium Propagation could lead to efficient neural network implementations in analog hardware. However, it still requires a settling process, which is not ideal for fast inference. We address exactly that concern. We have added a paragraph to the Discussion making that point. \n", "Dear Reviewers,\n\nThank you for providing such in-depth reviews. All of your suggestions have been used to improve our paper. We have made the following changes in response to your comments:\n- Added a derivation of the alignment effect result (code verifying the result is available here: https://pastebin.com/RRgcCnrb )\n- Changed Figure 1 and added Figure 3. These figures demonstrate on a toy problem and MNIST, respectively, that local and distant gradients tend to align during training.\n- Expanded the related work section, drawing links to variational inference and other alternative methods for neural network training.\n- Simplified some notation, and clarified numerous points suggested by reviewers. \n\nPlease also see our separate responses to the comments and questions in each separate review. We hope our improvements address your concerns. \n\n", "This is a nice improvement on Equilibrium Propagation (EqProp) based on training a separate network to initialize (and speed up at test time) the recurrent network trained by EqProp. The feedforward network takes as layerwise targets the activities of each layer when running the recurrent net to convergence (s-). The surprising result (on MNIST) is that the feedforward approximation does as well as the recurrent net that trains it. This allows faster run-time, which is practically very useful.\n\nMy main concern is with the mathematical argument in section 2.2. s* is not the same as s- , and in general, it is not clear at all that there should be a phi* such that s*=s-. Also, the derivation in eqn 12 assumes that w is very close to w*, which is not clear at all. So this derivation is more suggestive, and the empirical results are the ones which could be convincing. My only concern there is that the only experiments performed are on MNIST, which is known to be easily dealt with using the kind of feedforward architectures studied here. Things could break down if much more non-linearity (which is what the fixed point recurrence provides) is necessary (equivalently this would correspond to networks for which much more depth is necessary, given some budget of number of parameters). I don't think that this is a deal-breaker, but I think that this section needs to be more prudent in the way that it concludes from these observations (the math and the experiments).\n\nOne question I have is about biological plausibility. The whole point of EqProp was to produce a biologically plausible variation on backprop. How plausible is it to have two sets of weights for the feedforward and recurrent parts? That is where a trick such as proposed in Bengio et al 2016 might be useful, so that the same set of weights could be used for both.\n\nIt might be good to mention Bengio et al 2016 in the introduction since it is the closest paper (trying to solve the same problem of using a feedforward net to approximate the true recurrent computation), rather than pushing that to the end.\n\nIn sec. 1.1, I would replace 'training a Continuous Hopfield Network for classification' by 'energy-based models, with a recurrent net's updates corresponding to gradient descent in the energy'. The EqProp algorithm is not just for the Hopfield energy but is general. Then before eq 1, mention that this is the variant of Hopfield energy studied in the EqProp paper.\n\nI found a couple of typos (scenerio, of the of the).\n\n\n" ]
[ -1, 7, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 8 ]
[ -1, 5, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 5 ]
[ "Hyl4OzIIJN", "iclr_2019_B1GMDsR5tm", "Syx_vW0HkE", "rkeWkpvBk4", "r1gcB3JH14", "rkenvhdtA7", "HkxVPjhFAQ", "iclr_2019_B1GMDsR5tm", "rye_pK_YAm", "ryeI_q3v3Q", "r1eOy7xKhQ", "B1gzOC6Wa7", "iclr_2019_B1GMDsR5tm", "iclr_2019_B1GMDsR5tm" ]
iclr_2019_B1MXz20cYQ
Explaining Image Classifiers by Counterfactual Generation
When an image classifier makes a prediction, which parts of the image are relevant and why? We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision? Producing an answer requires marginalizing over images that could have been seen but weren't. We can sample plausible image in-fills by conditioning a generative model on the rest of the image. We then optimize to find the image regions that most change the classifier's decision after in-fill. Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image. Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods.
accepted-poster-papers
Important problem (explainable AI); sensible approach, one of the first to propose a method for the counter-factual question (if this part of the input were different, what would the network have predicted). Initially there were some concerns by the reviewers but after the author response and reviewer discussion, all three recommend acceptance (not all of them updated their final scores in the system).
train
[ "rJxUUoj8g4", "S1edEowWgN", "rJxRJWdayV", "H1x2bX-p1V", "B1eKk_WRaX", "HkguvdW0a7", "HJlz9u-RaQ", "H1g195-CT7", "BklNCIGhh7", "SklSwVYo37", "SkephArKjX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "As we mentioned above, Fan et al. is orthogonal to our work. We highly recommend you to reread our manuscript to understand the scope of our work.", "Fan et al. is used in saliency prediction and seems to achieve good accuracy as reported in other papers:\nhttps://openreview.net/forum?id=BJxbYoC9FQ\n\n", "Good question. I think it's not just because it's not adversarially generated, since other heuristics infilling are also not trained to do so. \n\nI think in high dimensional datasets like images, the infilling has so much freedom to generate out-of-distribution inputs. Image inpainting algorithm explicitly train to restict the infilling to natural images, so it makes the infilling harder to find adversarial perturbations. [1] also has similar insights of using generative models that protects it from the adversarial attack. I also think it's also the reason for fewer artifacts of SSR than SDR since finding the evidence for 1 class have way less freedom than finding evidence for other 999 classes. \n\nThank you for reading the rebuttal. We will include this discussion in the version later.\n\n[1] Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models\nhttps://openreview.net/forum?id=BkJ3ibb0-", "\nThe rebuttal addresses some of the issues...\n* Figure 11 now clearly shows that the proposed algorithm is not merely a combination of two existing approaches.\n* Authors mentioned and discussed the limitation of this approach in Section 5.\n\nAfter reading the revised paper, I have additional comments.\nIt is known that a classification network can be fooled by a small amount of (adversarial) noise. It is also true that an image inpainting algorithm inevitably synthesizes some artifacts. Then, why do artifacts rendered by the inpainting algorithm not severely corrupt the task that tries to find good regions for classification? Is it just because artifacts are not adversarially generated? It is not necessary but it would be great if the paper discusses this aspect as well.", "We thank each of the reviewers for their thoughtful comments, which have helped us to improve the paper in the latest revision. We made the following changes: \n- We include a new ablation study to help the reader better understand the importance of each technical contribution. The specific goal is to understand whether BBMP, the method most closely related to ours, could be improved with CA-GAN infilling, and find that BBMP+CA-GAN substantially underperforms relative FIDO+CA-GAN. This points toward that the FIDO framework is necessary to leverage expressive generative models in the interpretation of classifiers. We discuss this experiment in further detail below in the response to AnonReviewer2.\n- For our quantitative evaluations (weakly supervised localization and Dabowski & Gal 2017’s “saliency metric”) we evaluate three additional baseline models. These are Gradient-based class saliency (Simonyan et al 2013), DeconvNet (Springenberg et al 2014), and GradCAM (Selvajaru et al 2016).\n- We expand our discussion to better describe how the FIDO framework depends on the capacity of the generative model.\n- We confirm our original findings with increased statistical confidence by evaluating over the entire validation set (50k images). We note there is a discrepency of WSL performances with what Dabowski and Gal 2017 reported. We try to resolve it by communicating with the authors but unfortunatelly they are unable to provide neither the evaluation code nor the original model they use. 
However for completeness we still compare with this model.\n- Ground truth labels are now displayed beside the images for the qualitative comparison of saliency maps.\n- In the supplement we show how the batch size of mask samples M affects the saliency computed by FIDO-CA. Performance degrades with small batch size (< 4).\n- We include additional qualitative examples in the supplementary.\n\nWe summarize our key contributions:\n- We propose a novel framework, called FIDO, for explaining classifier decisions that efficiently searches for explanations that respect the distribution of input data via a generative model. \n- We show that incorporating strong generative models reduces artifacts substantially and provides more relevant pixels of explanation. This addresses the common shortcoming of existing methods that use out-of-distribution (o.o.d.) data, which leads to increased artifacts, as shown in our experiment.\n- We quantitatively show that generative models perform better than heuristic infills on two widely-used evaluation methods. We also extensively compare with the recent literature.\n- We also show qualitatively that SDR (used by Fong & Vedaldi, 2017) is prone to a much higher degree of artifacts than SSR. \n\nThe individual concerns of each reviewer will be addressed in the comments below. Please let us know if you have additional comments, and if there are particular revisions that would increase your assessment of our paper.", "We thank you very much for your effort in assessing our work, and for pointing us to the workshop paper on weakly supervised localization by Fan et al. 2017. We suspect that any lack of clarity about our method---its motivation, novelty, and improvement relative to baselines---is due to a misunderstanding about the scope of our paper and its key contribution. We hope to clarify this here and explain why Fan et al. 2017 is not a suitable baseline.\n\nOur goal is to explain the prediction produced by a differentiable classifier (that has been previously trained and whose weights are frozen) on a new test input x’. We formulate this as a search for features of x’ that change the classifier prediction significantly when they are marginalized out in a probabilistic framework. By contrast, BBMP searches over continuous masks in [0,1] for a point estimate and infills with heuristics (rather than marginalizing). This makes the method susceptible to artifacts in the computed saliency since it produces an explanation that relies on out-of-distribution (o.o.d.) inputs, where the classifier behavior isn’t well specified. Our key technical differences with BBMP (see blue text in figure 5) are firstly using a Bernoulli distribution over masks, and secondly using an expressive generative model for efficient marginalization. These are novel to our knowledge, and neither of these differences is workable alone using existing algorithms; we add a new ablation study in the revision to emphasize this. \n\nMeanwhile, Fan et al. seek to solve weakly supervised localization (WSL) of objects in images using adversarial training. The goal of WSL is to locate the object, not to explain a pre-trained classifier; Fan et al 2017 include a classifier in their model, but this classifier’s weights are trained by their algorithm. We do not train the classifier, since we are trying to explain its predictions. Also, Fan et al use background infilling rather than a strong generative model. 
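To make the two stated differences concrete, here is a minimal sketch of an SSR-style objective (our paraphrase of the description above, not the authors' released code): a Bernoulli distribution over binary masks — relaxed here with the concrete distribution so that its logits are trainable by SGD, which is one plausible parameterization — combined with a generative in-fill of the dropped region. `classifier` (assumed to return logits) and `infill_model` are placeholders for given components.

```python
# A hedged sketch of an SSR-style FIDO objective (an illustration, not the
# authors' code): a distribution over binary masks, parameterized by logits,
# is optimized so that the retained pixels preserve the classifier's
# prediction, while dropped pixels are replaced by a generative in-fill
# rather than a heuristic such as blur or noise.
import torch

def fido_ssr_loss(mask_logits, x, target_class, classifier, infill_model,
                  sparsity_coef=1e-3, temperature=0.1):
    # Relaxed Bernoulli (concrete) sample, so gradients reach mask_logits.
    mask_dist = torch.distributions.RelaxedBernoulli(
        torch.tensor(temperature), logits=mask_logits)
    mask = mask_dist.rsample()                      # ~1 keeps a pixel, ~0 drops it
    x_infill = infill_model(x, mask)                # plausible completion of dropped region
    x_composite = mask * x + (1.0 - mask) * x_infill
    log_probs = classifier(x_composite).log_softmax(dim=-1)
    # SSR: keep the class score high using as few retained pixels as possible.
    return -log_probs[..., target_class].mean() + sparsity_coef * mask.mean()
```

A single relaxed sample per step gives a high-variance estimate of the expectation over masks; averaging over a small batch of mask samples, as in the batch-size ablation mentioned above, would reduce that variance.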
It is possible that a generative infilling model could be trained jointly with a classifier for improved WSL relative to Fan et al, but that is orthogonal to the goal and scope of our work.\n\nDespite common usage within saliency map papers, WSL is not a fully satisfactory evaluation for saliency map algorithms. Firstly (related to the above point) saliency map algorithms attempt to explain a known classifier rather than predict object localizations. For example, if the classifier ignores the object and classifies based on contextual information, then the correct saliency map should score poorly on WSL because it will also ignore the object. Nevertheless for completeness we evaluated FIDO on this task. There are some other shortcomings of WSL as a saliency metric that we can discuss if you are curious. All of this is motivation for the “saliency metric” proposed by Dabkowski and Gal 2017, which we also evaluate. In the revision we also compare against a larger class of baseline models (Grad, DeconvNet, GradCAM) along both metrics, which we hope addresses your concern about how our model compares with other methods from the literature.", "We thank you for your thorough assessment of our work. You have concisely summarized the key contribution, and we agree with your explanation of how including a generative model of the input space allows FIDO to ask a more meaningful counterfactual question than existing approaches.\n\nIn response to your specific comments: \n1. We totally agree that the ideal evaluation here would sample from the true conditional infilling distribution, so the fact that we instead sample from CA-GAN is a limitation and might lead to preferable performance for FIDO-CA. However, we still observe a win when using the other generative models (Local and VAE) over heuristics (Mean, Blur and Random). This suggests that generative infilling can still identify pixels that are more relevant to the classifier. We include this limitation in our revision.\n2. Extending BBMP for use with a generative in-filler is not natural since it optimizes over continuous masks in [0, 1] rather than parameters of discrete masks in {0, 1}, so the mask does not partition features into observed/unobserved. But we implemented an attempt at this in the revision and describe the result below.\n3. We believe the reader will also benefit from this ablation study. In section 4.6 of the revision we investigate whether BBMP could be improved by using the CA-GAN to do infill. We threshold the BBMP masks then in-fill with CA-GAN. We find that this approach---called BBMP-CA---remains susceptible to artifacts in the resulting saliency maps and is brittle w.r.t. its threshold value. BBMP-CA performs worse on the quantitative metrics than FIDO-CA, and about on par with FIDO-Blur and FIDO-Random, which do not use expressive generative models. Therefore we believe that one must model a discrete distribution over masks (not a point estimate like BBMP) in order to leverage the expressivity of an in-filling generative model.\n\t\nIn response to your minor comments:\n1. It is true that \\phi has an indirect dependence on \\hat x. But since \\hat x also depends on x and z as a random variable drawn from the generative models, we think \\phi(x, z) is still valid in this case. We make note of the stochasticity of \\phi in the revision.\n2. 
We include all the true labels in the revision.", "We thank you very much for your thorough review, for acknowledging that our proposed use of generative models is a sensible approach, and for recognizing its significance for the field. We believe that this conceptual contribution will help to progress interpretability beyond current limitations, when most of the methods are still based on out-of-distribution inputs. \n\nWe respectfully disagree with your assessment of the paper as lacking technical originality, as our method is not combining two off-the-shelf methods. Integrating a powerful generative model into current saliency algorithms efficiently is non-trivial (and to our knowledge has not been done, whereas your review might suggest such an algorithm already exists). The difficulty of combining existing generative models with the existing saliency algorithm BBMP is evidenced by a new ablation study in the revision (section 4.6). It shows that a naive combination of BBMP and CA performs much worse than our method FIDO. By parameterizing a distribution over dropped-out input features, FIDO provides a principled and efficient way to integrate over sensible counterfactual inputs to determine the input features relevant to a rendered prediction.\n\nYou have raised an important point about the performance of this framework being upper bounded by the capacity of the in-filling generative model. We believe the reader will benefit from a discussion of this limitation, which we include in the revision (section 5). It is true that the ability to explain the classifier c(y|x) that learns p(y|x) is now somehow tied to the ability of the generative model g(x) to fit p(x). However, we strongly believe optimization-based saliency strategies that ignore or over-simplify p(x) (as existing methods do) are fundamentally misspecifying the counterfactual “what if” question that will yield an explanation. Moreover this upper bound on performance will increase in the future as generative models improve.", "This paper introduces a new saliency map extractor to visualize which input features are relevant for the deep neural network to recognize objects in the image. The proposed saliency map extractor searches over a big space of potentially relevant image features and in-fills the irrelevant image regions using generative models.\n\nThe algorithmic machinery in the paper is poorly justified, as it is presented as a series of steps without providing much intuition about why these steps are useful (especially compared to previous works). Also, I would like to know how this paper compares to Fan et al. \"Adversarial localization network\" (NIPS workshop, 2017), which has not been cited and which proposes similar ideas.\n\nAlso, the results are not convincing. Only one previous work (among many) has been compared with the proposed algorithm, and the qualitative examples are not enlightening in showing the advantages of the introduced saliency map extractor. 
What are the new insights into the functioning of deep networks that were gained from the proposed saliency map extractor?\n\nIn summary, it is unclear to me if there is any novelty in the approach (missing references, lack of motivation of the algorithm) and if the results show any improvement over previous works (only one previous work has been compared and the qualitative examples do not show anything particularly interesting).", "The paper is aimed at answering the following question: \"for model M, given an instance input and a predicted label, what parts of the input are most relevant for making M choose the predicted label?\". \nThis is by far not the first paper aimed at answering this question, but it makes important innovations to the best of my knowledge. The most important one is proposing a stronger approach to the counterfactual question \"had this part of the input been different, what would have been the output?\". Because the input can be different in many ways, an important question is in what specific way it would have been different. \n\nSpecifically in the domain of images, most models assume a blurring or simple local in-painting approach: \"if this patch were just a blurry average, what would have been the output?\". However, as the current paper correctly points out, blurring or other simple in-painting methods lead to an image which is outside the manifold of natural images and outside the domain of the training set. This can lead to biased or inaccurate results. \n\nThe paper therefore proposes two innovations on top of existing methods, most closely building on work by Fong & Vedaldi (2017): \n(1) Optimizing an inference network for discovering image regions which are most informative\n(2) Using a GAN to in-paint the proposed regions, leading to a much more natural image and a more meaningful counterfactual question.\n\nThe presentation is crisp, especially the pseudo-code in Figure 5. In addition, the paper includes several well-executed experiments assessing the contributions of different design choices on different metrics and making careful comparisons with several recent methods addressing the same problem. \n\nSpecific comments:\n\n1. In sec. 4.5, the comparison is not entirely fair because FIDO was already trained with CA-GAN, and therefore might be better adapted for it.\n2. Related to the point above: could one train BBMP with a CA-GAN in-painting model?\n3. I would have liked to see an ablation experiment where either one of the two innovations presented in this paper is missing.\n\n\nMinor:\n1. In eq. (2), wouldn't it be more accurate to denote it as \\phi(x,z,\\hat{x}) ? \n2. I would like to know the true labels for all the examples presented in the paper.", "Summary: This paper aims to find important regions for classifying an image. The main algorithm, FIDO, is trained to find a saliency map based on SSR or SDR objective functions. The main novelty of this work is that it uses generative models to in-fill regions masked out by SSR or SDR. As such, compared to existing algorithms, FIDO can synthesize more realistic samples to evaluate.\n\nI like the motivation of this paper since existing algorithms have clear limitations, i.e., using out-of-distribution samples. This issue can be addressed by using a generative network as described in this paper.\n\nHowever, I think this approach yields another limitation: the performance of the algorithm is bounded by the generative network. 
For example, let’s assume that a head region is important for classifying birds. Also assume that the proposed algorithm somehow predicts a mask for the head region during training. If the generative network synthesizes a realistic bird from the mask, then the proposed algorithm will learn that the head region is a supporting region of SSR. In the other case, however, the rendered bird is often not realistic and classified incorrectly. Then, the algorithm will seek other regions. As a result, the proposed method interprets a classifier network conditioned on the generative network parameters. The authors did not adequately discuss these issues in the paper.\n\nAlthough the approach has its own limitations, I still believe that the overall direction of the paper is reasonable. This is because I agree that using a generative network to in-fill images to address the motivation of this paper is the best option we have at this current moment. In addition, the authors report a satisfactory amount of experimental results to support their claim.\n\nQuality: The paper is well written and easy to follow.\n\nClarity: The explanations of the approach and experiments are clear. Since the method is simple, it also seems that it is easy to reproduce their results.\n\nOriginality: The authors apply off-the-shelf algorithms to improve the performance on a known problem. Therefore, I think there is no technical originality except that the authors found a reasonable combination of existing algorithms and a problem.\n\nSignificance: The paper has a good motivation and deals with an important problem. Experimental results show improvements. Overall, the paper has some amount of impact in this field.\n\nPros and Cons are discussed above. As a summary,\nPros: \n+ Good motivation.\n+ Experiments show qualitative and quantitative improvements.\n\nCons: \n- Lack of technical novelty and justification of the approach.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "S1edEowWgN", "HkguvdW0a7", "H1x2bX-p1V", "B1eKk_WRaX", "iclr_2019_B1MXz20cYQ", "BklNCIGhh7", "SklSwVYo37", "SkephArKjX", "iclr_2019_B1MXz20cYQ", "iclr_2019_B1MXz20cYQ", "iclr_2019_B1MXz20cYQ" ]
iclr_2019_B1VZqjAcYX
SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY
Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.
accepted-poster-papers
This paper proposes a criterion (SNIP) to prune neural networks before training. The pro is that SNIP can find the architecturally important parameters in the network without full training. The con is that SNIP is only evaluated on small datasets (MNIST, CIFAR, Tiny-ImageNet) and it's uncertain if the same heuristic works on large-scale datasets. Small datasets can always achieve high pruning ratios, so evaluation on ImageNet is quite important for pruning work. The reviewers have a consensus on acceptance. The authors are recommended to compare with previous work [1][2] to make the paper more convincing. [1] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural networks. NIPS, 2015. [2] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. NIPS, 2016.
train
[ "HkgD860RJN", "BJlFsBZA14", "HkeMgJ6LyN", "r1ecBMxwTm", "rygEepnLJV", "Skx0ADeVkE", "rye-Cr95Am", "Bygrtyec3X", "HylRy09FCX", "SygWrMoHRX", "HkeeUt2G0X", "rylC4nnfCQ", "rJgNZnhzCX", "S1gwW9hGRX", "HkxRgu-z0X", "BylnY5EPTX", "SJlJlW6g6m", "Hkx-JZ3vh7", "SJxr7w6B3Q", "H1e1b9pz3Q", "Bye48Fieh7", "BketXC-xnX", "BJe3n82hs7", "SylwYaFoom", "HJe3mWK5qX", "HkeTWJWqcQ" ]
[ "author", "public", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "author", "official_reviewer", "author", "public", "author", "public", "author", "public", "author", "public" ]
[ "\nWe believe that the comparison is misleading since [1] and SNIP focus on different (orthogonal) aspects of network pruning, and we elaborate this below.\n- SNIP focuses on finding a subnetwork at single-shot with a mini-batch of data, and shows that the subnetwork can be trained in the standard way. There are no hyperparameters involved in finding the subnetwork.\n- [1] states that there is a way to find a subnetwork -- based on costly iterative pruning and retraining (see the 2nd last paragraph of Section 2) -- and once this process finishes, the subnetwork can be trained as fast as training the original dense network.\n- In other words, SNIP is efficient in finding the subnetwork, and [1] is efficient in training the subnetwork.\n- Essentially, these two works are orthogonal exploring different aspects of network pruning, and therefore, it is misleading to compare these two approaches only on the aspect of training the subnetwork (i.e. provided figures).\n\nIt is surely interesting to see how subnetworks obtained by different approaches compare to each other, and we hope to examine further into this once the code is released.", "\n\nWe are the authors of the lottery ticket paper [1]. We have replicated the SNIP algorithm as presented in your paper in our own framework, and we performed several experiments that examine the relationship between SNIP and our paper. Our findings are:\n\n* Our winning tickets reach higher accuracy at higher levels of sparsity and learn faster than SNIP-pruned networks. (See https://openreview.net/forum?id=rJl-b3RcF7&noteId=S1xmvZRayE for details on the performance gap.)\n\n* SNIP-pruned networks can be randomly reinitialized as well as randomly rearranged (i.e., randomly choose the locations of unpruned connections within layers) with limited impact on their accuracy. However, these networks are neither as accurate nor learn as quickly as our winning tickets.\n\nThe fact that SNIP-pruned networks can be rearranged suggests that SNIP largely identifies the proportions in which layers can be pruned such that the network is still able to learn, leaving significant opportunity to exploit the additional, initialization-sensitive understanding demonstrated by our results.\n\nWe provide several graphs here (https://drive.google.com/drive/folders/1lpxJFpkF0Afq1rRqkEDnLcPN0kMV8BBC?usp=sharing) to support these claims.\n\nWe are eager to hear your thoughts about our experiments!", "Thank you for the extra experiments and clarifications, this makes the paper even more intriguing (score up).", "Post rebuttal update/comment:\n\nI thank the authors for the revision and have updated the score (twice!)\n\nOne genuinely perplexing result to me is that the method behaves better than random pruning, yet after selecting the salient neurons the weights can be reinitialized, as per the rebuttal:\n\n> # Initialization procedure\n- It is correct that the weights used to train the pruned model are possibly different from the ones used to compute the connection sensitivity. Given (variance scaled) initial weights, SNIP finds the architecturally important parameters in the network, then the pruned network is established and trained in the standard way.\n\nFirst, there is work which states quite the opposite (e.g. https://arxiv.org/abs/1803.03635). 
Please relate to it.\n\nFundamentally, if you decouple weight pruning from initialization it also means that:\n- the first layer will be pruned of connections to constant pixels (which is seen in the visualizations); this remains meaningful even after a reinitialization\n- the second and higher layers will be pruned somewhat randomly - even if the connections pruned were meaningful with the original weights, after the reinitialization the functions computed by the neurons in lower layers will be different, and have no relation to the pruned weights. Thus the pruning will be essentially random (though possibly from a very specific random distribution). In other words - the neurons in a fully connected layer can be freely swapped; each neuron in the next layer operates on all of them anyway (we are thinking here about the uninitialized neurons, with each of them having a distribution over weights and not a particular set of sampled weights; this is valid because we will reinitialize the neurons). Because of that, I wouldn't call any particular weight/connection architecturally important and find it strange that such weights are found.\n\nI find this behavior really perplexing, but I trust that your experiments are correct. However, please, if you have the time, verify it.\n\nOriginal review:\n\nThe paper presents an intriguing result in which a salient, small subset of weights can be selected even in untrained networks given sensible initialization defaults are used. This result is surprising - the usual network pruning procedure assumes that a network is pretrained, and only then important connections are removed.\n\nThe contributions of the paper are two-fold:\n1) it reintroduces a multiplicative sensitivity measure similar to the Breiman garotte\n2) and shows which other design choices are needed to make it work on untrained networks, which is surprising.\n\nWhile the main idea of the paper is clear and easy to intuitively understand, the details are not. My main concern is that the paper differentiates between weights and connections (both terms are introduced on page iv to differentiate from earlier work). However, it is not clear what the authors are referring to:\n- a conv layer has many repeated applications of the same weight. Am I correct to assume that a conv layer has many more connections than weights? Furthermore, are the dramatic sparsities demonstrated over connections counted in this manner? This is important - on MNIST each digit has a constant zero border, all connections to the border are not needed and can be trivially removed (one can crop the images to remove them for similar results). Thus we can trivially remove connections without removing weights.\n- in paragraph 5.5 different weight initialization schemes are used for the purpose of saliency estimation, but the paragraph then says \"Note that for training VS-X initialization is used in all the cases.\" Does it mean that first a set of random weights is sampled, then the sensitivities are computed, then a salient set of connections is established and the weights are REINITIALIZED from a distribution possibly different than the one used to compute the sensitivity? The fact that it works is very surprising and again suggests that the method identifies constant background pixels rather than important weights.\n- on the other hand, if there is a one-to-one correspondence between connections and weights, then the differentiation from Karnin (1990) at the bottom of p. 
iv is misleading.\n\nI would also be cautious about extrapolating results from MNIST to other vision datasets. MNIST has dark backgrounds. Let f(w,c) = 0*w*c. Trivially, df/dw = df/dc = 0. Thus the proposed sensitivity measure picks non-background pixels, which is also demonstrated in figure 2. However, this is a property of the dataset (which encodes background with 0) and not of the method! This should be further investigated - a quick check is to invert MNIST (make the images black-on-white, not white-on-black) and see if the method still works. Fashion MNIST behaves in a similar way. Thus the only non-trivial experiments are the ones on CIFAR10 (Table 2), but the majority of the analysis is conducted on white-on-black MNIST and Fashion-MNIST.\n\nFinally, no experiment shows the benefit of introducing the variables \"c\", rather than using the gradient with respect to the weights. Let f be the function computed by the network. Then:\n- df/d(cw) is the gradient passed to the weights if the \"c\" variables were not introduced\n- df/dw = df/d(cw) d(cw)/dw = df/d(cw) * c = df/d(cw), since c is fixed at 1\n- df/dc = df/d(cw) d(cw)/dc = df/d(cw) * w\n\nThus the proposed change seems to favor a combination of weight magnitude and the regular df/dw magnitude. I'd like to see how using the regular df/dw criterion would fare in single-shot pruning. In particular, I expect using the plain gradient to lead to similar selections to those in Figure 2, because for constant pixels 0 = df/d(cw) = df/dc = df/dw.\n\nSuggested corrections:\nIn related work (sec. 2) it is pointed out that Hessian-based methods are impractical due to the size of the Hessian. In fact OBD uses a diagonal approximation to the Hessian, which is computed with complexity similar to the gradient, although it is typically not supported by deep learning toolkits. Please correct.\n\nThe description of weight initialization schemes should also be corrected (sec. 4.2). The sentence \"Note that initializing neural networks is a random process, typically done using normal distribution with zero mean and a fixed variance.\" is wrong and artificially inflates the paper's contribution. Variance normalizing schemes have been known since the nineties (see Efficient Backprop) and are the default in many toolkits, e.g. Pytorch uses the Kaiming rule which sets the standard deviation according to the fan-in: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/linear.py#L56.\n\nPlease enumerate the datasets (MNIST, Fashion-MNIST, CIFAR10) in the abstract, rather than saying \"vision datasets\", because MNIST in particular is not representative of vision datasets due to the constant zero padding, as explained before.\n\nMissing references:\n- Efficient Backprop http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf discusses variance scaling initialization, and approximations to the Hessian. 
Since both are mentioned in the text, this should be cited as well.\n- the Breiman non-negative garotte (https://www.jstor.org/stable/1269730) is a similar well-known technique in statistics\n\n\nFinally, I liked the paper and wanted to give it a higher score, but reduced it because of the occurrence of many broad claims made in the paper, such as: 1) the method works on MNIST => the abstract claims it generally works on vision datasets 2) the paper states \"typically used is fixed variance init\", but the popular toolkits (pytorch, keras) actually use the variance scaling one by default 3) the badly explained distinction between connection and weight and the relation that it implies to prior work. I will revise the score if these claims are corrected.", "This work introduces SNIP, a simple way to prune neural network weights before training according to a specific criterion. 
SNIP identifies prunable weights by the normalised gradient of the loss w.r.t. an implicit multiplicative factor “c” on the weights, denoted as the “sensitivity”. Essentially, this criterion takes two factors into account when determining the relevance of each weight: the scale of the gradient and the scale of the actual weight. The authors then rank the weights according to their sensitivity and remove the ones that are not in the top-k. They then proceed to train the surviving weights as normal on the task at hand. In experiments they show that this method can offer competitive results while being much simpler to implement than other methods in the literature.\n\nThis paper is well written and explains the main idea in a clear and effective manner. The method seems to offer a viable tradeoff between simplicity of implementation and effective sparse models. The experiments done are also extensive, as they cover a broad range of tasks: MNIST / CIFAR 10 classification with various architectures, ablation studies on the effects of different initialisations, visualisations of the pruning patterns and exploration of regularisation effects on a task involving fitting random labels.\n\nHowever, this work also has an, I believe, important omission w.r.t. prior work. The idea of using that particular gradient as a guide to selecting which parameters to prune is actually not new and has been previously proposed in [1]. The authors of [1] considered unit pruning but the modification for weight pruning is trivial. It is worth pointing out that [1] is also discussed in one of the other citations of this work, namely [2]. For this reason, I believe that the main contribution of this paper is more on the thorough experimental evaluation of an existing idea rather than the proposed sensitivity metric.\n\n\nAs for other general comments:\n\n- The authors argue that SNIP can offer training time speedups by only optimising the remaining parameters. In this spirit, the authors might also want to discuss other works that seem relevant to this task, e.g. [3, 4]. They also allow for pruned and sparse networks during training (thus speeding it up), without needing to conform to a specific sparsity pattern. \n\n- SNIP seems to be a good candidate for applying to randomly initialised networks; nevertheless, a lot of times we are also interested in pruning pre-trained networks. Given that SNIP is relying on the magnitude of the gradient to determine relevance, how well does it handle this particular case (given that the magnitude of the gradients is close to zero at convergence)?\n\n- Why is the normalisation of the magnitude of the gradients necessary? The normalisation doesn’t change the relative ordering so we could simply just rank according to |g_j(w; D)|.\n\n- While the experiment in section 5.6 is interesting, the result is still dependent on the a-priori chosen cut-off point “k”. For this reason it might be worthwhile to plot the behaviour of the network as a function of “k”. 
"Following the reviewer's suggestion, we have made the explanation of [1] clearer and edited the manuscript as follows:\n(before) \"$\\alpha$ refers to neurons\"\n(after) \"$\\alpha$ refers to the connectivity of neurons\"\n\nWe thank the reviewer for the constructive feedback, and we believe it indeed helped us improve the quality of the paper.", "Thanks for addressing all of my comments.\n\n- It is interesting to see that the error of the network is higher when pruning a pre-trained model. This seems to suggest that SNIP might be less effective on pre-trained networks. Of course, this conclusion could also be specific to the toy MNIST task, so further investigation is necessary to verify whether this is indeed the case.\n\n- It seems that the explanation for [1] in the updated manuscript is not entirely accurate; it is stated that [1] uses - dl / d\\alpha, with \\alpha being a neuron. From what I understand, [1] instead prunes according to -dl / dc, where c is a 0-1 multiplicative term for a neuron (similar to what you introduced for weights), i.e. output_neuron_j = f(sum_i w_ij * c_i * input_neuron_i).", "Thank you for the interest in our work and positive feedback. We find the comments highly insightful and address the key points below.\n\n# Optimizing c\n- We have attempted to optimize c and w together in an alternating optimization paradigm. Specifically, at each iteration, we fix one variable and optimize the other, and vice versa. We were able to achieve sparse networks with accuracies comparable to the reference network in some cases; however, in general the optimization was quite unstable. We believe that this is a promising direction to pursue, yet further investigation will be required.\n\n# VS-H singularity and its dependency on task or architecture\n- We have further tested several variance scaling methods (including VS-X and VS-H) with different hyperparameters (e.g. distribution type and fan mode) and observed that all variance scaling initialization methods are robust across the various architectures and models used in our work. It would be interesting to see how it behaves on tasks other than image classification, and we are keen on exploring this further as future work.\n\n# Comparison to different pruning methods\n- (SNIP vs. random pruning) We have tested random pruning for all models used in the paper at the same extreme sparsity levels. We also checked a few relaxed sparsity levels (e.g. 70%). As expected, none of the randomly pruned sparse models is able to learn properly (the loss does not decrease). All of them record accuracies around 10%, which corresponds to random guessing for the 10-way classification task. This implies that the randomly pruned sparse network does not have enough capacity to learn to perform the task. One potential reason would be that random pruning does not ensure the basic connectivity in the network, which can hinder the flow of activations in the forward pass as well as the gradients in the backward pass. 
In the worst case, all connections between two layers can be pruned away, resulting in a completely disconnected network.\n- (SNIP vs. magnitude based pruning) We have also tested pruning based on the magnitude of the initial weights and of weights updated for a few iterations. We ensured to use the same variance scaling initialization as SNIP. As a result, magnitude-based pruning achieves accuracies lower than the results with SNIP (e.g. 17.7% (Magnitude) vs. 14.99% (SNIP) on Alexnet-s).\n\n# Comparison to distillation\n- The objective of knowledge distillation is to transfer knowledge from the teacher network to the student network. Typically, this is achieved by enforcing the student network to produce the same outputs as the teacher network (e.g. matching output activations or Jacobians). Hence, in order to perform knowledge distillation, a practitioner needs to pre-train the teacher network and, importantly, design the student network (smaller than the teacher) in advance. Therefore, knowledge distillation can be complementary to SNIP; SNIP can be used to find the student network, which is then trained with the objective of knowledge distillation.\n\n# Pruning a large architecture for architecture search\n- In fact, we have conducted experiments on a bulky architecture (by densely connecting residual blocks in ResNets) and applied SNIP to prune connections. As a preliminary result, we found that the obtained architecture turned out to be somewhat different from the original ResNets, and yet improved performance (1-2% increases in several variants of ResNets). We believe that this is an interesting direction to pursue, and we are planning to investigate more.\n\n# (Additional) Tiny-Imagenet results\n- Additionally, we have conducted more experiments on the Tiny-Imagenet classification task. Tiny-Imagenet is much larger and more complex than CIFAR-10; however, we observed that SNIP was still able to prune a large number of parameters while achieving accuracy comparable to the reference network. Please check the results in Table 4, Appendix C.\n\nWe hope our response addresses the reviewer's comments adequately. Otherwise, please leave us any further comments - we will do our best to update further.", "Thank you for the interest and positive feedback. We address the reviewer's comments below.\n\n# (Fashion-)MNIST\n- First of all, both datasets are normalized before being passed into the network, meaning that the dark region is actually not zero valued and the gradients from the dark region are not zero either.\n- Moreover, we ran the same experiment, but with inverted data as suggested by the reviewer (i.e. bright and dark regions are swapped). As a result, this led to the same results as in Figure 2. Please check the results in Figure 5, Appendix A.\n- Furthermore, we also ran the same experiment, but with $dL/dw$ as suggested by the reviewer. As a result, this led to results different from Figure 2. In addition, using $dL/dw$ does not produce the same results when the dataset is inverted, as opposed to SNIP. Please check the results in Figure 6, Appendix A.\n- Therefore, it is incorrect to conclude that 1) \"the proposed sensitivity measure picks non-background pixels\", or 2) \"the experiments on (Fashion-)MNIST are trivial and a property of the dataset, not of the method\".\n\n# Connections (c) and weights (w)\n- We recognize that the initial description of connections (c) was not clear enough. 
Connections (c) are auxiliary indicator variables introduced to represent the connectivity of the parameters or weights (w). There is always a one-to-one correspondence between connections (c) and weights (w), even for conv layers. Hence, the sizes of c and w are both m (i.e. sizeof(c) = sizeof(w) = m), as noted in Equation 3. Therefore, conv layers do not have more connections than weights, and the sparsity is measured correctly based on the total number of parameters (m).\n- We have updated the first line in the first paragraph of Section 4.1. as follows:\n(before) \"we introduce auxiliary indicator variables c representing the presence of each connection.\"\n(after) \"we introduce auxiliary indicator variables c representing the connectivity of parameters w.\"\n\n# Initialization procedure\n- It is correct that the weights used to train the pruned model are possibly different from the ones used to compute the connection sensitivity. Given (variance scaled) initial weights, SNIP finds the architecturally important parameters in the network; the pruned network is then established and trained in the standard way.\n\n# Differentiation from [Karnin 1990]\n- The fundamental idea behind [Karnin 1990] is to identify weights that least degrade the performance when removed. Specifically, the saliency criterion in [Karnin 1990] is defined as $-dL/dw$ (note the sign), which prunes weights that least increase the loss when removed. This means that this criterion, in fact, depends on the loss value before pruning, which requires the network to be pre-trained. Furthermore, to ensure minimal loss in performance, an iterative pruning scheme is employed in [Karnin 1990], leading to expensive prune-retrain cycles.\n- In contrast, the saliency criterion in SNIP ($|dL/dc|$) is designed to measure the “sensitivity”, defined as how much influence an element has on the loss function, regardless of whether that influence is positive or negative. This criterion alleviates the dependency on the value of the loss, thereby eliminating the need for pre-training. This is a fundamental conceptual difference of our approach. Consequently, the network can be pruned in a single shot prior to training. This is in stark contrast to previous works, including [Karnin 1990], where the saliency is measured using the entire dataset within an iterative optimization procedure.\n- These conceptual and significant differences in the saliency criterion between SNIP and [Karnin 1990] result in fundamentally different pruning algorithms.\n- We recognize the importance of this discussion and have added it at the end of Section 4.1.\n", "\n\n# Description of Hessian-based methods in Section 2\n- We agree that the complexity of computing the diagonal approximation of the Hessian can be similar to that of the gradient. \n- We have updated the second line of the second paragraph (Modern advances) in Section 2 as follows:\n(before) \"While Hessian based approaches suffer from the burden of the Hessian computation for large models,\"\n(after) \"While Hessian based approaches employ the diagonal approximation due to its computational overhead,\"\n\n# Description of weight initialization in Section 4.2.\n- We find it correct that the idea of using variance scaled weight initialization is suggested in [Efficient Backprop; Section 4.6] and is also commonly employed in modern networks.\n- Therefore, we have updated the third paragraph in Section 4.2 as follows:\n(before) \"..., typically done using normal distribution with zero mean and a fixed variance. 
However, even if the initial weights have a fixed variance, the signal passing through each layer no longer guarantees to have the same variance.\"\n(after) \"..., typically done using normal distribution. However, if the initial weights have a fixed variance, the signal passing through each layer no longer guarantees to have the same variance, as noted by [Efficient Backprop].\"\n\n# (Additional) Tiny-Imagenet results\n- Additionally, we have conducted more experiments on the Tiny-Imagenet classification task. Tiny-Imagenet is much larger and more complex than CIFAR-10; however, we observed that SNIP was still able to prune a large number of parameters while achieving accuracy comparable to the reference network. Please check the results in Table 4, Appendix C.\n\n# Dataset enumeration in the abstract (including results on Tiny-Imagenet)\n- We would like to ensure that we do not claim that our experimental finding on (Fashion-)MNIST will generalize to other \"vision datasets\". Therefore, we have updated the abstract to be more explicit as follows:\n(before) \"... on image classification tasks …\"\n(after) \"... on the MNIST, CIFAR-10, and Tiny-Imagenet image classification tasks ...\"\n\n# [Nonnegative Garotte by Breiman]\n- We recognize the relevance and have cited it in the beginning of Section 4.1.\n\nWe hope our response addresses the reviewer's comments adequately. Otherwise, please leave us any further comments - we will do our best to update further.",
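Since variance-scaled initialization comes up repeatedly in this thread, a hedged sketch of the idea may help (fan-in-based He-style scaling is shown purely for illustration; the paper's VS-X/VS-H variants differ in distribution type and fan mode):

```python
import math
import torch

def vs_init_(w: torch.Tensor) -> torch.Tensor:
    # Variance-scaling init: the stddev depends on each layer's fan-in, so the
    # signal variance is roughly preserved from layer to layer, unlike one
    # fixed stddev shared by every layer.
    fan_in = w[0].numel()  # inputs per output unit (works for fc and conv weights)
    with torch.no_grad():
        w.normal_(mean=0.0, std=math.sqrt(2.0 / fan_in))
    return w
```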
"Thank you for the positive and constructive feedback. We recognize the importance of the suggested point on the difference between [1] and SNIP. We have added this discussion at the end of Section 4.1.\n\nWe address the remaining comments below.\n\n# Training time speedup\n- We thank the reviewer for the nice pointer. We find that [3] reports the expected FLOPs ([4]: a rough observation), which is essentially attributed to the sparsity level. Notice, however, that the maximum speedup by [3, 4] is achievable only at the end of sparsification, because they reach their maximum sparsity at the end of the process (which is the case for most pruning algorithms). In contrast, SNIP starts with the maximum sparsity from the beginning. This means that the speedup that can be achieved by [3, 4] will be upper-bounded by SNIP. We have updated the paper with this at the end of Section 5.2.\n\n# SNIP on pretrained networks\n- We have tested SNIP on pretrained networks and observed that SNIP also achieves comparable accuracies on pretrained networks (e.g. errors on LeNets: 3.1% (pretrain) vs. 2.4% (no-pretrain) on LeNet-300-100; 1.2% (pretrain) vs. 1.1% (no-pretrain) on LeNet-5-Caffe). We believe that this is most likely due to the fact that the gradients are hardly ever exactly zero in practice.\n\n# Normalization\n- We add the normalization as a means of handling the moving average over multiple mini-batches, which may arise in the case of a large model or dataset, as noted in Section 4.2.\n\n# Fitting random labels\n- We have conducted the experiment with varying sparsity levels ($\\kappa = 10, 30, 50, 70, 90, 99$) and updated the paper with this result in Figure 7, Appendix B. We have also cited [5] in Section 5.6.\n\n# (Additional) Tiny-Imagenet results\n- Additionally, we have conducted more experiments on the Tiny-Imagenet classification task. Tiny-Imagenet is much larger and more complex than CIFAR-10; however, we observed that SNIP was still able to prune a large number of parameters while achieving accuracy comparable to the reference network. Please check the results in Table 4, Appendix C.\n\nWe hope our response addresses the reviewer's remaining comments adequately. Otherwise, please leave us any further comments - we will do our best to update further.\n", "Dear authors,\n\nThank you very much for the very interesting work. Although it omits citing our previous publications on this topic, which convey a similar message \"neural networks shall and can have a sparse connectivity before training at no loss in accuracy\", your paper is a nice read and proposes a nice method. Thus, I would kindly ask you to discuss in your paper the relation between SNIP and our two previous publications on this topic. If you are not familiar with our work, to help with this discussion below are the main findings of our papers:\n\n1) Mocanu et al.: “A topological insight into restricted Boltzmann machines”, Machine Learning, 2016 ( https://link.springer.com/article/10.1007%2Fs10994-016-5570-z ), where we show that we can create sparsely connected Restricted Boltzmann Machines before training using some data statistics. These sparse RBMs can achieve similar performance to their fully connected counterparts, or to ones that use prune-retrain cycle procedures.\n\n2) Mocanu et al.: “Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science”, Nature Communications, 2018 ( https://www.nature.com/articles/s41467-018-04316-3 ). The originally submitted version of this paper has been posted on arXiv since July 2017 (https://arxiv.org/pdf/1707.04780v1.pdf). In short, it says that the number of parameters in the fully connected layers of deep neural networks can be quadratically reduced before training using our proposed method, i.e. Sparse Evolutionary Training (SET). SET starts from an Erdős–Rényi random graph connectivity and further uses an evolutionary process during training to adapt the sparse connectivity to the data. In this way, SET is able to achieve a few orders of magnitude faster training time and a quadratically lower memory footprint for sparse deep nets in all stages (e.g. design, training, inference). We tried SET in combination with three neural network models, i.e. RBMs, MLPs, CNNs, on 15 datasets, and our results show that the sparse models trained with SET usually achieve better (or at least the same) accuracy as their fully connected counterparts.\n\nIn any case, the code of our proposed method is available online:\nhttps://github.com/dcmocanu/sparse-evolutionary-artificial-neural-networks\n\nTo summarize, in my opinion, one important difference between our works is that you still have a pruning step before training, while we start directly with a sparsely connected network.\n\nI am looking forward to hearing your opinion.\n\nBest wishes,\nDecebal ", "Thank you for the detailed response on the differences between the two methods. Judging from your response, the main difference between the metric of [1] and SNIP is that SNIP considers the absolute value of the same gradient. This similarity seems important, and for this reason I believe that it is worthwhile to include this particular discussion in the main submission, especially when introducing the metric in section 4.1. Furthermore, I believe that the details about the way each respective criterion is employed in practice (e.g. single-shot vs prune-retrain cycles, moving average of the metric, etc.) 
are orthogonal to this discussion, as these concern specific choices rather than the core metric idea.\n\nI will wait for the authors to address all of the other points before I update my score. \n", "Thank you for the positive and constructive feedback. We appreciate that the reviewer finds SNIP clearly explained, viable and thoroughly evaluated.\n\nIn this reply, we clarify the reviewer's conjecture about the similarity between SNIP and the early work [1] (Skeletonization). Meanwhile, responses to the other comments will be provided in a succeeding reply.\n\n# Summary\n- It is incorrect to conclude that the idea behind SNIP is the same as the one presented in [1]. The differences are as follows.\n\n# SNIP vs. Skeletonization [1]\n- The fundamental idea behind [1] (also [2], OBD and OBS) is to identify elements (e.g. neurons, weights) that least degrade the performance when removed. Specifically, the saliency criterion in [1] is defined as $-dL/d\\alpha$ (note the sign), which prunes elements that least increase the loss when removed. This means that this criterion, in fact, depends on the loss value before pruning, hence it requires the network to be pre-trained. Furthermore, to ensure minimal loss in performance, an iterative pruning scheme is employed in [1], leading to expensive prune-retrain cycles.\n\n- In contrast, the saliency criterion in SNIP ($|dL/dc|$) is designed to measure the \"sensitivity\", defined as how much influence an element has on the loss function regardless of whether it is positive or negative. This criterion alleviates the dependency on the value of the loss, thereby eliminating the need for pre-training. This is a fundamental conceptual difference of our approach. Consequently, the network can be pruned in a single shot prior to training. Moreover, we would like to point out that this aspect of SNIP allows us to interpret the retained connections (Section 5.4). Note that such an experiment is not feasible (if not impossible) in previous works, including [1].\n\n- Furthermore, in [1], a robust auxiliary loss function ($L_1$) and an exponentially decaying moving average (within the learning process) are required to suppress noise in the saliency score, which is not the case in SNIP.\n\n- These conceptual and significant differences in the saliency criterion between SNIP and [1] result in fundamentally different pruning algorithms.\n\n# Citation of [1]\n- We would like to point out that we did not omit [1] and have cited [1] already in our submission (Sections 1 and 2).",
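Condensing the contrast drawn in this exchange into symbols (with $c$ the 0-1 connectivity indicators evaluated at $c = \mathbf{1}$; this is a paraphrase of the two criteria as described above, not a formula quoted from either paper):

```latex
s_j^{\text{Skeletonization}} \;=\; -\left.\frac{\partial L}{\partial c_j}\right|_{c=\mathbf{1}}
\qquad \text{vs.} \qquad
s_j^{\text{SNIP}} \;=\; \left|\,\left.\frac{\partial L}{\partial c_j}\right|_{c=\mathbf{1}}\right|
```

The signed criterion rewards removals that would decrease the current loss, and therefore depends on how well the network is already trained, whereas the magnitude-only criterion measures influence on the loss irrespective of sign, which is what permits pruning at initialization.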
"Summary\nThe paper focuses on pruning neural networks. They propose to identify the nodes to be pruned even before training the whole network (conventionally, this is done as a separate step after the network has been trained, and involves a number of iterations of retraining the pruned network). This initial step that identifies the connections to be pruned works off a mini-batch of data.\n\nThe authors introduce a criterion to be used for identifying important parts of the network (connection sensitivity) that does not depend on the magnitude of the weights for neurons: they start by introducing a set of binary weights (one per weight) that indicate whether the connection is on or off and can be removed. Reformulating the optimization problem and relaxing the constraints on the binary weights, they approximate the sensitivity of the loss with respect to these indicator variables via the gradient. Then the normalized magnitude of these gradients is used to choose the connections to keep (keeping the top-k connections).\n\nClarity:\nWell written, easy to follow.\n\nDetailed comments\nOverall, very interesting: a seemingly very simple idea that seems to work well. \nTable 2 does look impressive, and it seems that the method also reduces overfitting; the experiment with random labels on MNIST seems to demonstrate that the method indeed preserves only connections relevant to the real labels, simplifying the architecture to a point where it can't just memorize the data.\n\nSeveral questions/critiques:\n- When you relax the binary constraints, it becomes an approximation to an optimization problem; is there any indication of how far off you are from solving it this way? \n- For the initialization method of the weights, you seem to state that VS-H is the one to use. I wonder if this is actually task- and architecture-dependent. If yes, then the proposed method still has a hyperparameter: how to initialize the weights.\n- How does it compare with just randomly dropping the connections, or dropping them based on the magnitude of the initial weights? It seems that the meat comes from the fact that you are able to use the label and good initial values; I wonder if just doing a couple of iterations of forward-backprop and then dropping the weights based on their magnitude can give you comparable results. \n- How does it compare to distillation? It does not involve many cycles of retraining and can speed up inference time too.\n- Can it replace architecture search? Initialize a large architecture, use the method to prune the connections, and there you go. Did you try that instead of using already pre-tuned architectures like AlexNet?\n\n", "Thank you for the interest in our work.\n\n# SNIP vs. modified Fisher pruning\n- The main idea behind Fisher or, in general, Hessian based pruning methods (e.g. OBS, OBD) is to remove parameters that least affect the loss at a local minimum based on second-order information. However, if we modify Fisher pruning as mentioned above, it does not satisfy the local minimum assumption or use second-order information. Therefore it is not clear whether this modified criterion would lead to effective pruning.\n- Furthermore, the resulting pruning criterion would be $|dL/dw|$, which is different from the SNIP criterion ($|dL/dc|$) and does not measure the connection sensitivity, as discussed at the end of Section 4.1.\n\n# SNIP for channel pruning\n- We believe that extending SNIP to channel pruning is surely feasible (e.g. by measuring connection sensitivities over channels). This can further save computational complexity and is an interesting direction to pursue for future work.", "A very interesting paper!\n\nI wanted to better understand the connection between SNIP and Fisher pruning (as described in abs/1801.05787).\n\nSpecifically, Fisher pruning would repeatedly: (1) train the network for some time and (2) remove the least important parameter. Let's say we modify Fisher pruning as follows:\n\n* reduce the amount of training to just a single batch;\n* remove all the parameters we want to remove at once (rather than one by one);\n* when determining parameter importance, use absolute value instead of squared gradient;\n* use variance-scaled initial weights.\n\nHow close to the SNIP algorithm would this get us?\n\nAlso, another question: do you think SNIP will work if it's adapted to prune entire feature maps, i.e., channels (as discussed in the paper I quoted)? 
The rationale is that the CNN FLOPs cost is not affected much unless an entire channel is removed.\n\nThanks!", "Thank you for the question, and the answer is 1.\nThe connection sensitivity is computed for all parameters globally and only the top-k parameters are retained. Please refer to Equations (6) and (7) (also Lines 3 and 5 in Algorithm 1), where $m$ denotes the total number of parameters in the network.\n", "I'm unclear on the specifics of your pruning procedure. When you select weights to prune, do you:\n\n1) Compute the sensitivities of all parameters globally (without considering which layer the parameters come from) and remove the k% of smallest-sensitivity parameters?\n\n2) Compute the sensitivities of all parameters, normalize by layer, and then remove the k% of smallest-sensitivity parameters globally?\n\n3) Remove the k% of smallest-sensitivity parameters in each layer?\n\n4) Something else?\n\nThank you so much for the help!", "Thank you for the interest in our work. We address your comments below.\n\n# Black background in (Fashion-)MNIST\n- Both datasets are normalized before being passed into the network, meaning that the black region is not zero valued. Thus, gradients are not zero and do exist regardless of the intensity or region of the image.\n- Furthermore, we conducted the same experiment, but with reversed data (i.e. bright and dark regions are swapped), and this led to the same results as in Fig. 2: SNIP retains the same connections as it does with non-reversed data. This clearly indicates that there is no direct correlation between the image intensity (or brightness) and the connection sensitivity.\n\n# Visualization and interpretation of retained connections in the first layer (c_{l=1}) on CIFAR\n- The same experiment is not feasible with CIFAR, because the first layer in all tested networks is not fully connected. Hence, there is no one-to-one correspondence between the input and the connectivity parameters c.\n- We can surely visualize the convolutional parameters c; however, this would only reveal the level of sparsity rather than verify the validity of the retained connections, which is not the purpose of this experiment.\n\n# Quality of random pruning against SNIP\n- In terms of interpretability, random pruning results in c that is completely random, on both (Fashion-)MNIST and CIFAR. Even though SNIP may not have the “reconstruction effect” on more complex data, it is likely to prune connections that are less important for performing the task - in the case of image classification, this could be the background region. We believe that this is still far more meaningful than the completely random patterns obtained by random pruning.\n- In terms of performance, random pruning fails to perform the task for all networks and datasets (see our response to Thomas Pfeil's comment below), whereas SNIP achieves extremely sparse networks that are able to perform the task while maintaining the accuracy (see Table 2 for results on CIFAR).\n", "Thanks for the interesting paper. On both MNIST and Fashion-MNIST, the object of interest is centered and the entire background of the image is black. Given you are using gradient information to select which connections should be removed, it seems obvious that the patterns you show in section 5.4 would occur.\n\nDid you try this experiment on CIFAR? If so, would you be willing to share what you observed? It feels like this method could be duped by a dataset where the object of interest is not necessarily the brightest part of the image. 
More generally, it feels like this technique would regress to the quality of random pruning on more complex datasets where the initial connection gradient is not informative.", "Thank you for the interest in our work.\n\nWe have tested random pruning for all models used in the paper at the same extreme sparsity levels. We also checked a few relaxed sparsity levels (e.g. 70%).\n\nAs expected, none of the randomly pruned sparse models is able to learn properly (the loss does not decrease). All of them record accuracies around 10%, which corresponds to random guessing for the 10-way classification task.\n\nThis implies that the randomly pruned sparse network does not have enough capacity to learn to perform the task. One potential reason would be that random pruning does not ensure the basic connectivity in the network, which can hinder the flow of activations in the forward pass as well as the gradients in the backward pass. In the worst case, all connections between two layers can be pruned away, resulting in a completely disconnected network.", "Thank you for this interesting article. How does your method compare to random pruning using the same pruning rates?" ]
[ -1, -1, -1, 8, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BJlFsBZA14", "rygEepnLJV", "rygEepnLJV", "iclr_2019_B1VZqjAcYX", "Skx0ADeVkE", "rJgNZnhzCX", "HylRy09FCX", "iclr_2019_B1VZqjAcYX", "SygWrMoHRX", "S1gwW9hGRX", "Hkx-JZ3vh7", "r1ecBMxwTm", "r1ecBMxwTm", "BylnY5EPTX", "iclr_2019_B1VZqjAcYX", "SJlJlW6g6m", "Bygrtyec3X", "iclr_2019_B1VZqjAcYX", "H1e1b9pz3Q", "iclr_2019_B1VZqjAcYX", "BketXC-xnX", "iclr_2019_B1VZqjAcYX", "SylwYaFoom", "iclr_2019_B1VZqjAcYX", "HkeTWJWqcQ", "iclr_2019_B1VZqjAcYX" ]
iclr_2019_B1e0X3C9tQ
Diagnosing and Enhancing VAE Models
Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples. In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true. We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture. The code for our model is available at \url{https://github.com/daib13/TwoStageVAE}.
accepted-poster-papers
The reviewers acknowledge the value of the careful analysis of Gaussian encoder/decoder VAE presented in the paper. The proposed algorithm shows impressive FID scores that are comparable to those obtained by state of the art GANs. The paper will be a valuable addition to the ICLR program.
val
[ "Bkg4rgm93Q", "rygm8MFA0X", "BJxjvk7nCm", "H1xarJXnCm", "S1exXym2Cm", "Bye1byCKCX", "HJxVsagYnm", "Sye7L4k1AQ", "ryg--PcTTQ", "SyxF01pF6X", "Bkletw5FT7", "ByeNgv9KTQ", "H1xC7LAvaQ", "S1low-VDpX", "BJxGPNr4TX", "B1xK2VH4T7", "ByebcVS4TX", "BJlRhQrETQ", "BJ-97B4pQ", "B1eNhzHVTX", "Bye1QOek6m" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "public", "public", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
This paper proposes a two-stage VAE method to generate high-quality samples and avoid blurriness. This is accomplished by utilizing a VAE structure on the observations and the latent variables separately. The paper exploits a collection of interesting properties of VAEs and points out problems in the VAE generative process. I have several concerns about the paper:\n\n1.\tIt is necessary to explain why the second-stage VAE can have its latent variable more closely resemble N(u|0,I). Even if the latent variable closely resembles N(u|0,I), how does this make sure the generated images are realistic? I admit that the VAE model can reconstruct realistic data based on its inferred latent variable; however, when given a random sample from N(u|0,I), the generated images are not good, which is especially true when the dimension of the latent space is high. I still can't understand why a second-stage VAE can relieve this problem.\n2.\tThe adversarial autoencoder was also proposed to solve the latent space problem; by comparison, what is the advantage of this paper?\n3.\tWhy do you set the model up as two separate stages? Will it enhance the performance if we train these two stages together?\n4.\tThe proofs for Theorems 2 and 3 are under the assumption that the manifold dimension of the observations is r, while in reality it is difficult to obtain this r. Are these theorems applicable if we choose a value for the dimension of the latent space that is smaller than the real manifold dimension of the observations? How will it affect the performance of the proposed method?\n5.\tThe values of r and k in each experiment should be specified.\n\n", "After our original submission, we have continued investigating a wider variety of generative models and evaluation metrics for broader research purposes. We summarize a few updates here that are relevant to our submission:\n\n* As a highly-relevant benchmark, we have obtained additional FID scores for all of the GAN models trained using suggested hyperparameter settings (from the original authors), as opposed to the scores we originally reported from (Lucic et al., 2018) that were based on a large-scale, dataset-dependent hyperparameter search. When averaged across all four datasets (i.e., MNIST, Fashion, CIFAR10, CelebA), all GAN models trained with suggested settings had a mean FID score above 45. In contrast, with hyperparameters optimized across 100 different settings independently for each dataset as in (Lucic et al., 2018), the mean GAN FID scores are all within the range 31-45. As a point of reference, our proposed 2-Stage VAE model with no tuning whatsoever (the same default settings across all datasets) has a mean FID below 40, which is significantly better than all of the GANs operating with analogous fixed/suggested settings, and well within the range of the heavily-optimized GANs. And all other existing VAE baselines we have tried (including additional ones computed since our original submission) are considerably above this range.\n\n* In our original submission we also included results from a model labeled 2-Stage VAE*, where we coarsely optimized the hyperparameter kappa (the dimension of the latent representation). However, upon further reflection we have decided that it is probably better to remove this variant for two reasons. 
First, although the optimized GAN models involved searching over values from 7 hyperparameter categories (see the supplementary file from the latest NeurIPS 2018 version of (Lucic et al., 2018)), varying kappa was apparently not considered. Therefore it is somewhat of an apples-and-oranges comparison between our 2-Stage VAE* and the optimized GANs. Second, we have recently noticed that PyTorch and TensorFlow implementations of FID scores are sometimes a bit different (this appears to be the result of different underlying Inception models upon which the FID score is based). This discrepancy is inconsequential for our 2-Stage VAE model and associated baselines, but for 2-Stage VAE* the mean improvement differs by 4 depending on the FID implementation (this could be in part because optimizing over FID scores may exacerbate implementation differences). Regardless, this issue highlights the importance of using a consistent FID implementation across all models (a seemingly under-appreciated issue in the literature).\n\n* Although normalizing flows have been frequently reported to improve log-likelihood values in VAE models, this type of encoder enhancement has not as of yet been shown to improve FID scores (at least in the literature we are aware of). Of course log-likelihood values are not a good indicator of generated sample quality as measured by FID (Theis et al., ICLR 2016), so improving one need not correlate with improving the other. Even so, per the suggestion of AnonReviewer1, we have conducted experiments using VAE models with normalizing flows (Rezende and Mohamed, ICML 2015) as an additional baseline. Thus far, we have not found any instances where the addition of flows improves the FID score within the standardized/neutral testing framework from (Lucic et al., 2018), and sometimes the flows can actually make the FID worse. Still, there are numerous different flow-based models, and further investigation is warranted to examine whether or not some versions could indeed help in certain scenarios.\n\n* Finally, we have also performed evaluations using the new Kernel Inception Distance (KID) quantitative metric of sample quality. This metric was proposed in (Binkowski et al., ICLR 2018) and serves as an alternative to FID. Note that we cannot evaluate all of the GAN baselines using the KID score; only the authors of (Lucic et al., 2018) could easily do this, given the huge number of trained models involved that are not publicly available, and the need to retrain selected models multiple times to produce new average scores at optimal hyperparameter settings. However, we can at least compare our trained 2-Stage VAE model to other VAE baselines. In this regard we have found that the same improvement patterns reported in our original submission with respect to FID are preserved when we apply KID instead, providing further confidence in our approach.", "Thank you for reading our earlier response carefully and showing continued interest in understanding the details. Just to clarify though, we are not arguing that joint training is unhelpful in other types of hierarchical generative models (such as in the references the reviewer mentioned, where we agree it can be advantageous). Rather, our analysis merely suggests that within the narrow context of our particular 2-stage VAE structure, joint training is unlikely to be beneficial. But the underlying reason for this is not actually a mystery. 
Although admittedly counterintuitive at first, the inadequacy of joint training is exactly what is predicted by the theory (the same core analyses that inspired our non-obvious approach to begin with). Furthermore, this prediction can be empirically tested, which we have done in multiple ways. For example, we have tried fusing the respective encoders and decoders from the first and second stages to train what amounts to a slightly more complex single VAE model. We have also tried merging the two stages including the associated penalty terms. In both cases, joint training does not help at all, with performance no better than the first-stage VAE (which contains the vast majority of parameters).", "To help provide a clearer explanation of this phenomenon, we revisit the two criteria required for producing good samples from a generative model built upon an autoencoder structure (like a VAE). Per the analysis from reference (Makhzani et al., 2016) and elsewhere, these criteria are: (i) small reconstruction error when passing through the encoder-decoder networks, and (ii) an aggregate posterior q(z) that is close to some known distribution like N(0,I) that is easy to sample from. As mentioned in a previous response, the latter criterion ensures that we have access to a tractable distribution from which we can easily generate random input samples that, when passed through the learned decoder, will be converted to output samples resembling the training data.\n\nThe two stages of our proposed VAE model can be motivated in one-to-one correspondence with these two criteria. In brief, the first VAE stage addresses criterion (i) by pushing both the encoder and decoder variances towards zero such that accurate reconstruction is possible. However, the detailed analysis from Sections 2 and 3 of our submission suggests that as these variances go towards zero to achieve this goal, the reconstruction cost dominates the overall VAE objective because the ambient space is higher-dimensional than the latent space where the KL penalty resides. The consequence is that, although criterion (i) is satisfied, the aggregate posterior q(z) need not be close to N(0,I) (this is predicted by theory and explicitly confirmed by experiments, e.g., see Figure 1, rightmost plot). This then implies that if we take samples from N(0,I) and pass them through the learned decoder, the result will not closely resemble samples from the training data.\n\nOf course if we had a way to directly sample from q(z), we would not need to use N(0,I), since by design of any autoencoder-structured generative model, samples from q(z) passed through the decoder will represent the training data (assuming the reconstruction criterion has been satisfied as mentioned above). Therefore, the second VAE stage of our proposal can be viewed as addressing criterion (ii) by learning a tractable approximation of q(z) that we can actually sample from instead of N(0,I). This estimate of q(z) is formed from a special, independent VAE structure explicitly designed such that the ambient and latent spaces have the same dimension, allowing us to apply Theorem 1, which guarantees that a good approximation can be found when reconstruction and KL terms are in some sense properly balanced. Therefore, we now have access to a tractable process for producing samples from q(z), even though q(z) need not be close to N(0,I). Per the notation of our submission on page 6, bullet point 3, sampling u from N(0,I) and then z from p(z|u) is a close approximation to sampling z from q(z). This z can then be passed to the first-stage decoder to produce the desired data x.\n",
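This three-step sampling path is short enough to write down directly (a minimal sketch, assuming dec1 and dec2 are the trained first- and second-stage decoder mean networks; the small Gaussian decoder noise is omitted since both variances are near zero by this point):

```python
import torch

@torch.no_grad()
def sample_two_stage(dec1, dec2, n, latent_dim):
    u = torch.randn(n, latent_dim)  # step 1: u ~ N(0, I)
    z = dec2(u)                     # step 2: z ~ p(z|u), approximating q(z)
    x = dec1(z)                     # step 3: x ~ p(x|z), the generated samples
    return x
```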
"Returning to the original question, how might joint training of the first and second VAE stages interfere with this process? The problem lies in the dominant influence of the reconstruction term from the first VAE stage. As the decoder variance goes to zero (as needed for perfect reconstruction), this term can be pushed towards minus infinity at an increasingly fast rate. If trained jointly, the extra degrees of freedom from the second-stage VAE parameters will be distracted from their original intended purpose of modeling q(z). Instead they will largely be used to push the dominant reconstruction term even further towards minus infinity (with increasing marginal gains), at the expense of working to address criterion (ii), which has only a modest effect on the overall cost.\n\nAnother way to think about this is to consider the following illustrative scenario. Suppose we have a 2-stage VAE model that produces a reconstruction error that is infinitesimally close to zero, but provides a poor estimate of q(z). Because the reconstruction term becomes increasingly dominant when close to zero, per the analysis from Section 3, during joint training *all* parameters, including those from the second stage, will focus on pushing the reconstruction error even closer to zero, rather than improving the estimate of q(z). But from the practical standpoint of generating realistic samples this is unhelpful, because it is far better to improve the estimate of q(z) than to make the reconstruction error infinitesimally closer to zero. This is why separate training is so critical: it isolates the second stage and forces it to address criterion (ii), rather than needlessly focusing on infinitesimal changes to the reconstruction term from criterion (i) that make no perceptual difference to generated samples. And indeed, when we do train jointly, although the reconstruction errors are quite small as expected, the more pivotal FID scores measuring sample quality are bad precisely because q(z) has been neglected.\n\nRegardless, we realize that there are many subtleties involved here, and hope that the above comments provide helpful clarification and background.", "Thanks for the detailed reply. The answer to question 3 still bothers me. The authors state that joint training of the two stages has no benefit for the model. This does not make sense to me, and the stated reason cannot convince me. There are many popular hierarchical generative models, i.e. DBM [1], DBN [2], GBN [3], which show enhanced performance with joint training. I think the authors should find out the reason for the failed joint training.\n\n[1] Salakhutdinov R, Larochelle H. Efficient learning of deep Boltzmann machines[C]//Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010: 693-700.\n[2] Hinton G E. Deep belief networks[J]. Scholarpedia, 2009, 4(5): 5947.\n[3] Zhou M, Cong Y, Chen B. Gamma Belief Networks[J]. arXiv preprint arXiv:1512.03081, 2015.\n", "Overview:\nI thank the authors for their interesting and detailed work in this paper. I believe it has the potential to provide strong value to the community interested in using VAEs with an explicit and simple parameterization of the approximate posterior and likelihood as Gaussian. Gaussianity can be appropriate in many cases where no sequential or discrete structure needs to be induced in the model. I find the mathematical arguments interesting and enlightening. 
However, the authors somewhat mischaracterize the scope of applicability of VAE models in contemporary machine learning, and don't show familiarity with the broad literature around VAEs outside of this case (that is, where a Gaussian model of the output would be manifestly inappropriate). Since the core of the paper is valuable and salvageable from a clarity standpoint, my comments below are geared towards what changes the authors may make to move this paper into the \"pass\" category.\n\nPros: \n- The mathematical insights are well reasoned and interesting. Based on the insight from the analysis in the supplementary materials, the authors propose a two-stage VAE which separates learning a parsimonious representation of the low-dimensional data (lower than the ambient dimension of the input space) from training a second VAE to learn the unknown approximate posterior. The two-stage training procedure is both theoretically motivated and appears to enhance the output quality of VAEs w.r.t. FID score, making them rival GAN architectures on this metric.\n\nCons:\n- The title and general tone of the paper are too broad: the analysis covers only VAE models with Gaussian approximate posteriors and likelihoods. This is hardly the norm for most applications, contrary to the claims of the authors. VAEs are commonly used for discrete random variables, for example. Many cases where VAEs are applied cannot use a Gaussian assumption for the likelihood, which is the key requirement for the proofs in the supplement to be valid (then, the true posterior is also Gaussian, and the KL divergence between that and the approximate posterior can be driven to zero during optimization--clearly a Gaussian approximate posterior will never have zero KL divergence with a non-Gaussian true posterior).\n- None of the proofs consider the approximation error incurred by only having access to an empirical sample of the ground-truth population. (The ground-truth distribution must be defined with respect to the population rather than just the dataset in hand; otherwise we lose all generalizability from a model.) Moreover, the proofs hold asymptotically. Generalization bounds and error from finite-time approximations are very pertinent issues, and these are ignored by the presented analyses. Such concerns have motivated many of the recent developments in approximate posterior distributions. Overall, the paper contains little evidence of familiarity with the recent advances in approximate Bayesian inference that have occurred over the past two years.\n- A central claim of the paper is that the two-stage VAE obviates the need for highly adaptive approximate posteriors. However, no comparison against those models is done in the paper. How does a two-stage VAE compare against one with, e.g., a normalizing flow approximate posterior? I acknowledge that the purpose of the paper was to argue for the Gaussianity assumption as less stringent than previously believed, but all of the mathematical arguments take place in an imagined world with infinite time and unbounded access to the population distribution. This is not really the domain of interest in modern computational statistics / machine learning, where issues of generalization and computational efficiency are paramount.\n- While the mathematical insights are well developed, the specifics of the algorithm used to implement the two-stage VAE are a little opaque. Ancestral sampling now takes place using latent samples from a second VAE. 
An algorithm box is badly needed for reproducibility.\n\nRecommendations / Typos\n\nI noted a few typos and omissions that need correction.\n\n- Generally, the mathematical proofs in section 7 of the supplement are clear. At the top of page 11, though, the paragraph correctly begins by stating that the composition of invertible functions is invertible, but fails to establish that G is also invertible. Clearly it is so by construction, but the explicit reasons should be stated (as a prior sentence promises), and so I assume this is an accidental omission.\n- The title of Section 8.1 has a typo: clearly it is the negative log of p_{theta_t} (x) which approaches its infimum rather than p_{theta_t} (x) approaching negative infinity.\n- Equation (4): the true posterior has an x as its argument instead of the latent z.\n- Missing parenthesis under Case 2 and wrong indentation. This analysis also seems to be cut off. Is the case r > d relevant here?\n\n* EDIT: I have read the authors' detailed response. It has clarified a few key issues, and convinced me of the value to the community of publication in its present (slightly edited according to the reviewers' feedback) form. I would like to see this published and discussed at ICLR and have revised my score accordingly. *", "Thanks for the reply. I think this is a great exposition of the differences, and the paper will be strengthened by making some of these points in the revision.", "Thanks for the continued engaging dialogue, and we can try to further clarify what we believe to be critical differentiating factors. First, you mentioned that the link between our method and (Tomczak and Welling, 2018) is that we both consider issues caused by the mismatch between the aggregate posterior q(z) and the prior p(z). But whether explicitly stated or not, essentially all methods based on an autoencoder structure share this exact same link on some level, so this is not any particular indication of close kinship in and of itself. And if this mismatch is ignored, then samples drawn from p(z) and passed through the decoder are highly unlikely to follow the true ground-truth distribution (see for example (Makhzani et al., 2016) mentioned in our submission).\n\nBeyond this though, the means by which we deal with this central, shared issue are fundamentally different. In our case, we exploit provable conditions whereby an independent second-stage VAE can effectively learn and sample from the unknown q(z) produced by a first-stage VAE, and additionally, we provide direct empirical evidence supporting this theory (e.g., see Figure 1, righthand plot). Hence it no longer matters that p(z) and q(z) are not the same, since we can just sample from the latter using the second-stage VAE. Even though this approach may seem counter-intuitive at first glance, an accurate model can in fact be learned (provably so in certain situations), and our state-of-the-art results for a VAE model relative to GANs (the very first such demonstration in the literature) provide further strong corroborating evidence.\n\nIn contrast, (Tomczak and Welling, 2018) choose to parameterize p(z) in such a way that the additional flexibility can provide simpler pathways for pushing p(z) and q(z) closer together. This is certainly an interesting idea, but it is significantly different from ours. 
But of course we agree that the ultimate purpose is the same: to have access to a known distribution with which to initiate passing samples through the decoder, a common goal shared by all autoencoder-structured models, including ours and many others like (Makhzani et al., 2016), where an adversarial loss is used to push p(z) and q(z) together. What ultimately distinguishes these methods is, to a large degree, the specific way in which this goal is addressed. We have no reservations about including additional discussion of (Tomczak and Welling, 2018), and these broader points, in a revised version of our paper. ", "Thanks for your reply. I appreciate that there are certainly differences between the two, including in their original motivations, and I am certainly not trying to imply your work is just a rehashing of theirs. I should point out that I am in no way associated with that paper, so I have no ulterior motive to try and promote it or similar.\n\nHowever, I think the link between the two is a lot stronger than something to do with hierarchical priors, and so I disagree with your suggestion above. The link is that both consider issues caused by the mismatch between the aggregate posterior q(z) and the prior p(z). In your work, you learn a second network to generate samples from q(z) and thus in turn p(x|z)q(z). In their formulation, they instead replace p(z) with q(z), therefore generating samples from exactly the same model as yours, at least in theory. In practice, they have to make approximations because q(z) is not directly available. Consequently, the two approaches are intimately linked to one another, the key methodological differences, in my opinion, being that in your case you only approximate q(z) after training and you use a different method to approximate q(z). There is a bit of a trade-off here: your method for approximating q(z) is almost certainly better, but this better approximation prevents you from using it during training, which is likely to lead to a worse model being learned.\n\nConsequently, I think the link is a lot stronger than you are suggesting above, and thus this is an essential piece of related work to be considered.", "Thank you for the reference to (Tomczak and Welling, 2018), which proposes a nice two-stage hierarchical prior to replace the parameter-free standardized Gaussian N(0,I) that is commonly used with VAE models. Note that multiple stages of latent variables have been aggregated in the context of VAE-like models going back to (Rezende et al., 2014). However, beyond the common use of two sets/stages of latent variables, our approach bears relatively little similarity to (Tomczak and Welling, 2018) or other multi-stage alternatives. For example, the underlying theoretical design principles/analysis, aggregate energy function parameterizations, and training strategies are not at all the same. Likewise, the empirical validation is completely different and incomparable as well; (Tomczak and Welling, 2018) focuses on demonstrating improved log-likelihood scores, while we concentrate exclusively on improving the quality of generated samples as explicitly quantified by FID scores. And we stress that these two evaluation criteria can be almost completely unrelated to one another in many circumstances (see for example, Theis et al., \"A Note on the Evaluation of Generative Models,\" ICLR 2016). 
And as a final point of differentiation, (Tomczak and Welling, 2018) tests only on small black-and-white images and includes no comparisons against GANs, while we include testing with larger color images like CelebA and directly compare against GANs in a neutral setting. Regardless, (Tomczak and Welling, 2018) still represents a compelling contribution, and space permitting, we can try to provide broader context in a revision.", "Thanks for your interest in our work. Regarding the situation when $\\gamma \\to 0$, the VAE will not actually default to a regular AE. Note that we can multiply both reconstruction and regularization terms (eqs. (8) and (9)) by $\\gamma$ and then examine the limit as $\\gamma$ becomes small; however, this does not allow us to discount all of the regularization factors, even though they may be converging to zero as well. The convergence details turn out to be critical here.\n\nTo see this, consider the following simplified regularized regression problem, which reflects the core underlying issue. Assume that we would like to solve\n\n$\\min_w \\; (1/\\gamma)\\|y - Aw\\|^2 + \\|w\\|^2$,\n\nwhere $1/\\gamma$ is a trade-off parameter, $y$ is a known observation vector, $A$ is an overcomplete matrix (full rank, with more columns than rows), and $w$ represents unknown coefficients we would like to compute. If $\\gamma \\to 0$, then any optimal solution must be in the feasible region where $y = Aw$, meaning zero reconstruction error. Therefore, when $\\gamma \\to 0$, solving this problem becomes formally equivalent to solving the constrained problem\n\n$\\min_w \\; \\|w\\|^2$ subject to $y = Aw$.\n\nOf course we could equally well consider multiplying both sides of the original objective by $\\gamma$, producing\n\n$\\min_w \\; \\|y - Aw\\|^2 + \\gamma\\|w\\|^2$.\n\nThis shouldn't change the optimal $w$, since we have just multiplied by a constant independent of $w$. But if $\\gamma \\to 0$, then technically speaking, the regularization factor $\\gamma\\|w\\|^2$ becomes arbitrarily small; however, this does not mean that we can simply ignore it, because there are an infinite number of solutions whereby the data factor $\\|y - Aw\\|^2$ equals zero, i.e., a fixed, minimizing constant. The direct implication is that\n\n$\\lim_{\\gamma \\to 0} \\arg\\min_w \\|y - Aw\\|^2 + \\gamma\\|w\\|^2 \\;\\neq\\; \\arg\\min_w \\|y - Aw\\|^2$,\n\nwhere the righthand side is just the objective obtained when $\\gamma = 0$, and it has an infinite number of minimizers, unlike the lefthand side. In general, the regularization factor $\\|w\\|^2$ will always have an influence in choosing which solution, out of the infinite number satisfying $y = Aw$, is optimal, and the minimizing argument will again provably be the same as from the constrained problem above. This notion is well-established in the regularized regression literature, and generalizes to generic problems composed of data-fitting and regularization terms where the former in isolation has multiple equivalent minima. Returning to the VAE, if extra unneeded latent dimensions are present, then there will be an infinite number of latent representations capable of producing perfect reconstructions. The lingering KL regularization terms then determine which is optimal, per our analysis in Section 3 of the paper.
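This limiting behavior is easy to check numerically (a quick illustrative sketch of the regression argument above; the matrix sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))     # overcomplete: more columns than rows
y = rng.standard_normal(3)

def ridge(gamma):
    # Closed form of argmin_w ||y - A w||^2 + gamma ||w||^2.
    return np.linalg.solve(A.T @ A + gamma * np.eye(5), A.T @ y)

w_min_norm = np.linalg.pinv(A) @ y  # argmin ||w||^2 subject to y = A w
for gamma in [1e-1, 1e-4, 1e-8]:
    print(gamma, np.linalg.norm(ridge(gamma) - w_min_norm))
# The gap shrinks as gamma -> 0: the vanishing regularizer still selects the
# minimum-norm solution from the infinite set achieving zero reconstruction error.
```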
If the noise level is within such a modest expansion, then the behavior is more-or-less the same as if a low-dimensional manifold were present. Of course if added noise or other deviations from the manifold are too large, then obviously using additional dimensions to model the data may be required.\n\nFinally, with regard to your other question, we have also considered training a second-stage VAE on top of a regular autoencoder. This structure is discussed in footnote 5 on page 7. ", "It is an interesting and refreshing paper. I have a question regarding the analysis on Eq. (9). When \gamma->0, the coefficient (1/\gamma) of the reconstruction term of Eq. (9) will approach infinity, which results in a loss function that is similar to that of a plain AE. To see that, we can multiply Eqs. (8) and (9) by \gamma; then the coefficient of the reconstruction term becomes 1, while that of the regularization term approaches 0. Note that \gamma\log(\gamma)->0 when \gamma->0. So I don't see why \hat{r} will be pushed to be as small as possible. Intuitively, if we add some small (e.g., stddev=0.01) isotropic Gaussian noise to x, we wouldn't expect the resulting model to be significantly different, while the analysis seems to suggest that \hat{r} will suddenly increase from r to \kappa (assuming \kappa<d), since the manifold of the noisy x is d-dimensional. Moreover, it would be interesting to see if adding a second-stage VAE on top of a plain AE can lead to a similar performance gain.", "The two-stage process you introduce seems very closely related to using a Vamp prior (https://arxiv.org/abs/1705.07120), wherein one effectively tries to replace the original prior with the aggregate posterior q(z) (though this is not achieved exactly for computational reasons). Obviously, there are some differences, but this seems like a natural baseline that should probably be compared to, and at the very least a paper that should be cited and discussed.", "- Reviewer Comment: No comparisons against VAE models with more flexible approximate posteriors such as those produced via normalizing flows\n\nOur Response: We agree that more flexible, explicitly non-Gaussian approximate posteriors have recently been proposed, such as the many flavors that utilize normalizing flows. But such models have not as of yet been objectively shown to improve sampling quality (see comments above) despite the tremendous community-wide incentive to publish such a demonstration. Moreover, the added flexibility often comes with a significant cost (e.g., increased training difficulty, more expensive inference). Furthermore, if we consider broader VAE modifications beyond just the encoder, then even within this wider domain, the only VAE-related enhancement we are aware of that objectively/quantitatively produces improved samples is the WAE model from ICLR this year (Tolstikhin et al., 2018), which is already explicitly addressed in Section 5 of our submission. Consequently, unless there is some very recent reference we may have missed, our experiments represent the state-of-the-art for non-adversarial VAE/autoencoder-based structures in terms of the objective evaluation of generated samples, and the first to close the gap with GANs (this is also consistent with the comments from AnonReviewer2).\n\n\n- Reviewer Comment: Some details about the proposed 2-stage process are unclear\n\nAlthough there was unfortunately no space for a separate algorithm box in our submission, the three bullet points on page 6 describe the specific process we used. 
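Schematically, that process reduces to the following ancestral-sampling chain (a sketch with toy stand-ins for the two trained decoders; the function names are ours, not the paper's):

import numpy as np

rng = np.random.default_rng(0)
kappa = 64

# Toy stand-ins: in practice these are the trained second-stage decoder
# p(z|u) and first-stage decoder p(x|z), each returning a mean and a std.
def second_stage_decoder(u):
    return np.tanh(u), 0.05

def first_stage_decoder(z):
    return np.tanh(z), 0.05

u = rng.standard_normal(kappa)                        # step 1: u ~ N(0, I)
mu_z, sig_z = second_stage_decoder(u)
z = mu_z + sig_z * rng.standard_normal(kappa)         # step 2: z ~ p(z|u)
mu_x, sig_x = first_stage_decoder(z)
x = mu_x + sig_x * rng.standard_normal(mu_x.shape)    # step 3: x ~ p(x|z)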
Note that the ancestral sampling required is very straightforward as described in bullet point 3 on page 6. This is exactly what we followed for generating new samples via our method, but we are happy to provide further clarification if the reviewer has a specific suggestion.\n\n\n- Reviewer Comment: Recommendations/Typos\n\nOur Response: We sincerely appreciate the effort in finding typos and checking the proofs. We have corrected each of the cases the reviewer uncovered. This will certainly be of benefit to future readers. Additionally, r can never be greater than d, because r is the manifold dimension within the ambient space of dimension d.\n", "Thanks for providing detailed comments regarding our manuscript, including constructive ideas on how to improve the presentation and clarify the context. We address each main point in turn.\n\n\n- Reviewer Comment: Limitation of Gaussian assumptions for likelihoods and approximate posteriors\n\nOur Response: In the introduction, we state that the most commonly adopted distributional assumption is that the encoder and decoder are Gaussian. This claim was based on an informal survey of numerous recent papers involving VAE models applied to continuous data (e.g., images, etc.). However, we completely agree that VAEs can also be successfully applied to discrete data types like language models, where these Gaussian assumptions can be more problematic. Although all of our theoretical developments are clearly framed in the context of continuous data on a manifold, we are happy to revise the introduction to better explain this issue up front. And of course the whole point of our paper is rigorously showing that even with seemingly restrictive Gaussian assumptions, highly non-Gaussian continuous distributions can nonetheless be accurately modeled.\n\nAlso, just to clarify one lingering point: although the decoder p(x|z) is defined to be Gaussian, it does not follow that the associated posterior p(z|x) will necessarily be Gaussian as well. In fact this will usually not be the case when using deep models and parameters in general position. However, the VAE can still push the KL divergence between p(z|x) and q(z|x) to zero even when the latter is constrained to be Gaussian as long as there exists at least some specially matched encoder-decoder parameterizations capable of pushing them together everywhere except on a space of measure zero. This was left as an open problem under general conditions in the most highly-cited VAE tutorial (Doersch, 2016), and is what we demonstrate in Section 2.", "- Reviewer Comment: Approximation error arising from finite samples not addressed; missing references to advances in approximate inference\n\nOur Response: In an ideal world we would obviously like to have optimal finite sample approximations that closely reflect practical testing scenarios. But such a bar is impossibly high at this point. Overall, we believe the value of theoretical inquiry into asymptotic regimes (i.e., population data rather than finite samples) cannot be dismissed out of hand, especially when simplifying assumptions of some sort are absolutely essential in making any reasonable progress. Even so, the true test of any theoretical contribution is the degree to which it leads to useful, empirically-testable predictions about behavior in real-world settings. In the present context, our theory makes the seemingly counter-intuitive prediction that a simple two-stage VAE could circumvent existing problems and produce realistic samples. 
We then tested this idea via the neutral DNN architecture and comprehensive experimental design from reference (Lucic et al., 2018) and it immediately worked. It is also critical to emphasize that these experiments were designed by others to evaluate top-performing GAN models with respect to generated sample quality; they were not developed to favor our approach in any way via some carefully tuned architecture or setting. Therefore, regardless of whether or not our theory involves asymptotic assumptions, it made testable, non-obvious predictions that were confirmed in a real-world practical environment, providing the very first VAE-based architecture that is quantitatively competitive with GANs in generating novel samples (at least with continuous data like images). We strongly believe that this is the hallmark of a significant contribution.\n\nThe reviewer also mentions that we may be unfamiliar with certain recent advances in approximate Bayesian inference, but no references were provided. Which papers in particular is the reviewer referring to? We are quite open to hearing about relevant work that we may have missed; however, presently we are unaware of any overlooked references that might serve to discount the message of our paper. Note that there is an extensive recent literature developing more sophisticated VAE inference networks using normalizing flows and related techniques. However, to the best of our knowledge, none of these works contain quantitative evaluations of generated sample quality (our focus here), and many (possibly most) do not even contain visualizations of images generated by the model. Please see reference (van den Berg et al., \"Sylvester Normalizing Flows for Variational Inference,\" UAI 2018) for the latest representative example we have found. Of course our point here is not to disparage insightful papers of this type that provide significant advances in approximate inference. Rather we are merely arguing that they seem to be somewhat out of the scope of our present submission, especially given the limited space for broader discussions. But we can try to squeeze in more references and background perspective of this nature if the reviewer feels it could be helpful.", "Thanks for providing feedback regarding our submission and indicating specific points of uncertainty. We provide detailed answers to each question as follows:\n\n\n1.\tReviewer Comment: Why do the second-stage VAE latent variables more closely resemble N(0,I), and how does this ensure that the generated samples are realistic, especially if the dimension of the latent space is high?\n\nOur Response: These issues are addressed in Section 4 of our paper, building on foundational properties of VAE models and our theory from Sections 2 and 3, but we can provide some additional background details here. First, it can be helpful to check reference (Makhzani et al., 2016), which defines the aggregate posterior q(z) = \int q(z|x)p_gt(x)dx, where q(z|x) serves as the encoder and p_gt(x) is the ground-truth data density. The basic idea behind generative models framed upon an autoencoder structure (VAE or otherwise) is that two criteria are required for producing good samples: (i) small reconstruction error when passing through the encoder-decoder networks, and (ii) an aggregate posterior q(z) that is close to some known distribution like N(0,I) that is easy to sample from. 
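Criterion (ii) can be inspected empirically by pooling posterior samples across the dataset and comparing their moments against N(0,I); a toy Monte Carlo sketch of this view (toy data and encoder, all names ours):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))             # toy stand-in for the dataset

def encoder(x):                                # toy q(z|x): mean and std
    return np.tanh(x[:2]), 0.3

# Aggregate posterior q(z) = E_{x ~ p_gt}[ q(z|x) ], sampled ancestrally.
Z = np.stack([m + s * rng.standard_normal(2)
              for m, s in (encoder(x) for x in X)])
print(Z.mean(axis=0))                          # compare against zero mean
print(np.cov(Z, rowvar=False))                 # compare against identity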
Without the latter criterion, we have no tractable way of generating random inputs to the learned decoder that will produce realistic samples resembling the training data distribution.\n\nIn the context of our paper and VAE models, we argue that the first-stage VAE provides small reconstruction errors using a minimal number of latent dimensions (if parameterized properly with a trainable decoder variance), but not necessarily an aggregate posterior q(z) that is close to N(0,I). This is because the basic VAE cost function is heavily biased towards finding low-dimensional manifolds upon which the data resides at the expense of learning the correct distribution within this manifold, which also prevents the aggregate posterior from nearing N(0,I). However, although the VAE may partially fail in this regard, it nonetheless provides a useful mapping to a lower-dimensional space in such a way that we can apply Theorem 1 from our work. In this lower dimensional space we treat q(z) \neq N(0,I) as a revised ground-truth data distribution p_gt(z), and train a new VAE with latent variables u. Based on Theorem 1, in this restricted setting there will exist at least some parameterizations of the new encoder q(u|z) and decoder p(z|u) such that perfect reconstructions are possible, p_gt(z) is fully recovered, and KL[ q(u|z) || p(u|z) ] -> 0. If this all occurs, then we have the new second-stage aggregate posterior\n\nq(u) = \int q(u|z)p_gt(z)dz = \int p(u|z)p_gt(z)dz = \int p_gt(z|u)p(u)dz = p(u) = N(0,I)\n\nas desired. For practical deployment, we then only need to sample u from N(0,I), then z from p(z|u), and finally x from p(x|z). Note also that if the latent dimension of z is higher than actually needed, the first-stage VAE decoder is effectively capable of blocking/pruning the extra dimensions as discussed in Section 3. This will not guarantee high quality samples, but it is adequate for preparing the data from the aggregate posterior q(z) to satisfy Theorem 1, which can then be leveraged by the second-stage VAE as mentioned above and in our paper.", "2.\tReviewer Comment: The adversarial autoencoder is also proposed to solve the latent space problem; by comparison, what is the advantage of this paper?\n\nOur Response: The adversarial autoencoder (Makhzani et al., 2016) requires adversarial training, meaning that like all GAN-related models, a complex min-max problem must be optimized in search of a saddle point. A well-recognized advantage of VAEs is that the training involves pure minimization of a fixed variational energy function, which is generally more stable and resistant to mode collapse. We should also point out that unlike VAEs, the adversarial autoencoder has no mechanism for pruning superfluous dimensions in the latent space. Regardless of these key differences, we are aware of no published work where the adversarial autoencoder has been shown to produce competitive results generating novel samples like other GAN-related models (rather it has been tested on auxiliary tasks like semi-supervised learning, which is not in our scope). Indeed the exhaustive recent testing from (Lucic et al., 2018), upon which we based our experiments, does not even include the adversarial autoencoder as a benchmark.\n\n\n\n3.\tReviewer Comment: Why train the model as two separate stages? 
Will it enhance the performance if we train these two stages together?\n\nOur Response: We have addressed this question on the bottom of page 7, which states the following: \"It should also be emphasized that concatenating the two stages and jointly training does not improve the performance. If trained jointly the few extra second-stage parameters are simply hijacked by the dominant objective from the first stage and forced to work on an incrementally better fit of the manifold. As expected then, on empirical tests (not shown) we have found that this does not improve upon standard VAE baselines.\" Our theoretical results and algorithm development from Sections 2-4 also directly support this conclusion. Regardless, we are happy to clarify further if needed.\n\n\n\n4.\tReviewer Comment: Do the technical proofs require knowledge of the ground-truth manifold dimensions r? And how is the proposed algorithm affected when r is unknown?\n\nOur Response: None of our proofs require that the ground-truth r is known explicitly in advance. All that is required is that we set kappa >= r (please see proof statements for Theorems 1-3). In other words, we only need to set the latent dimension kappa to be bigger than the ground-truth manifold dimension r. The VAE then has a natural mechanism in place for discarding superfluous dimensions. Of course obviously in practice if we set kappa to be far too large, then the training could potentially become a bit more difficult, since in addition to learning the correct ground-truth manifold, we are also burdening the model to detect a much larger number of unnecessary dimensions. But the VAE is arguably more robust to kappa than most methods, and the basic point still holds: we need not set kappa = r, we just need to choose kappa to be a reasonable value that is at least as big as r. In contrast, if we set kappa < r, then the theory starts to break down and practical performance will begin to degrade as expected.\n\n\n\n5.\tReviewer Comment: The value of r and kappa in each experiment should be specified.\n\nOur Response: The true latent manifold dimension r is unknown in all of our experiments since we are using real-world data. However, for the dimension of the VAE latent code, we chose kappa = 64 for all experiments, except for the 2-Stage VAE* model results, where we used 32 for MNIST and Fashion-MNIST, 192 for CIFAR-10, and 256 for CelebA. Note that these values were not carefully tuned and need not be exact per the arguments responding to reviewer comment 4 above. We just tried a single smaller value for the simpler data (MNIST and FashionMNIST), and a couple larger values for the more complex ones (CIFAR-10 and CelebA).\n", "We appreciate the detailed and positive comments, which truly reflect many of the essential contributions of our work. Likewise to the best of our knowledge, the FID scores we report are indeed the first to close the gap between GANs and non-adversarial AE-based methods as the reviewer points out. Regarding the small comments concluding the review, we answer as follows:\n\n\n- Reviewer Comment: Is the code / checkpoints going to be available anytime soon?\n\nOur Response: It was our original intention to simply post the code on Github after decisions were issued and papers were de-anonymized. 
However, if there is a need to make the code available earlier while preserving anonymity, we could presumably pursue that as well (but not sure if this is considered acceptable under ICLR guidelines).\n\n\n- Reviewer Comment: Reference to an alternative method for estimating the aggregate posterior, and another paper addressing causes of blurry VAE representations.\n\nOur Response: Thanks for the nice references. These papers actually look very interesting; we can cite them and provide context in the revision.\n\n\n- Reviewer Comment: Line after Eq. 3: I think it should be \int p_gt(x) \log p_\theta(x) dx ?\n\nOur Response: It is true that L(\theta, \phi) >= - \int p_gt(x) \log p_\theta(x) dx. However, we further have that - \int p_gt(x) \log p_\theta(x) dx >= -\int p_gt(x) \log p_gt(x) dx, which is the expression we include below Eq. (3) in the paper. The equality holds iff KL[q_\phi(z|x) || p_\theta(z|x)] = 0 and p_\theta(x) = p_gt(x) almost everywhere.\n\n\n- Reviewer Comment: Line after Eq 40. Why exactly is D(u^*) finite?\n\nOur Response: Because \varphi(u) is a diffeomorphism, it has a differentiable inverse and \Lambda(u) = (d\varphi^{-1}(u)/du)^\top (d\varphi^{-1}(u)/du) is always finite. Furthermore, D(u^*) is the maximum of \Lambda(u) in a closed set centered at u^*, so it is finite. We will update the proof to include these extra details.\n\n\n- Reviewer Comment: Minor typos/corrections\n\nOur Response: Thanks for catching each of these and also checking the proofs carefully. We have fixed each typo/suggestion in a revised version. ", "The paper provides a number of novel, interesting theoretical results on \"vanilla\" Gaussian Variational Auto-Encoders (VAEs) (sections 1, 2, and 3), which are then used to build a new algorithm called \"2 stage VAEs\" (Section 4). The resulting algorithm is as stable as VAEs to train (it is free of any sort of adversarial training, it comes with a little overhead in terms of extra parameters), while achieving a quality of samples which is *very impressive* for Auto-Encoder (AE) based generative modeling techniques (Section 5). In particular, the method achieves an FID score of 24 on the CelebA dataset, which is on par with the best GAN-based models as reported in [1], thus substantially reducing the gap between the generative quality of the GAN-based and AE-based models reported in the literature. \n\nMain theoretical contributions:\n\n1. In some cases the variational bound of Gaussian VAEs can get tight (Theorem 1).\nIn the context of vanilla Gaussian VAEs (Gaussian prior, encoders, and decoders) the authors show that if (a) the intrinsic data dimensionality r is equal to the data space dimensionality d and (b) the latent space dimensionality k is not smaller than r, then there is a sequence of encoder-decoder pairs achieving the global minimum of the VAE objective and simultaneously (a) zeroing the variational gap and (b) precisely matching the true data distribution. In other words, in this setting the variational bound and the Gaussian model do not prevent the true data distribution from being recovered.\n\n2. In other cases Gaussian VAEs may not recover the actual distribution, but they will recover the real manifold (Theorems 2, 3, 4 and discussions on page 5).\nIn the case when r < d, that is, when the data distribution is supported on a low-dimensional smooth manifold in the input space, things are quite different. 
The authors show that there are still sequences of encoder-decoder pairs which achieve the global minimum of the VAE objective. However, this time only *some* of these sequences converge to the model which is in a way indistinguishable from the true data distribution (and thus again Gaussian VAEs do not fundamentally prevent the true distribution from being recovered). Nevertheless, all sequences mentioned above recover the true data manifold in that (a) the optimal encoder learns to use an r-dimensional linear subspace in the latent space to encode the inputs in a lossless and noise-free way, while filling the remaining k - r dimensions with white Gaussian noise, and (b) the decoder learns to ignore the k - r noisy dimensions and use the r \"informative\" dimensions to produce the outputs perfectly landing on the true data manifold. \n\nMain algorithmic contributions:\n(0) A simple 2-stage algorithm, where first a vanilla Gaussian VAE is trained on the input dataset and second a separate vanilla Gaussian VAE is trained to match the aggregate posterior obtained after the first stage. The authors support this algorithm with a reasonable theoretical argument based on theoretical insights listed above (see end of page 6 - beginning of page 7). The algorithm achieves state-of-the-art FID scores across several data sets among AE based models existing in the literature. \n\nReview summary: \nI would like to say that this paper was a breath of fresh air to me. I really liked how the authors make a strong point that *it is not the Gaussian assumptions that harm the performance of VAEs* in contrast to what is usually believed in the field nowadays. Also, I think *the reported FID scores alone may be considered as a significant enough contribution*, because to my knowledge this is the first paper significantly closing the gap between generative quality of GAN-based models and non-adversarial AE-based methods. \n\n***************\n*** Couple of comments and typos:\n***************\n(0) Are the code / checkpoints going to be available anytime soon?\n(1) I would mention [2], which in a way used a very similar approach, where the aggregate posterior of the implicit generative model was modeled with a separate implicit generative model. Of course, the two approaches are very different ([2] used adversarial training to match the aggregate posterior); however, I believe the paper is worth mentioning.\n(2) In light of the discussion on page 6 as well as some of the conclusions regarding commonly reported blurriness of the VAE models, results of Section 4.1 of [3] look quite relevant.\n(3) It would be nice to specify the dimensionality of the Sz matrix in definition 1.\n(4) Line after Eq. 3: I think it should be $\int p_gt(x) \log p_\theta(x) dx$ ?\n(5) Eq 4: p_\theta(x|x)\n(6) Page 4: \"... mass to most all measurable...\".\n(7) Eq 34. Is it sqrt(\gamma_t) or just \gamma_t?\n(8) Line after Eq 40. Why exactly is D(u^*) finite?\n\nI only checked proofs of Theorems 1 and 2 in detail and those looked correct. \n\n[1] Lucic et al., 2018.\n[2] Zhao et al., Adversarially regularized autoencoders, 2017, http://proceedings.mlr.press/v80/zhao18b.html\n[3] Bousquet et al., From optimal transport to generative modeling: the VEGAN cookbook. 2017, https://arxiv.org/abs/1705.07642" ]
[ 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9 ]
[ 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_B1e0X3C9tQ", "iclr_2019_B1e0X3C9tQ", "Bye1byCKCX", "Bye1byCKCX", "Bye1byCKCX", "BJ-97B4pQ", "iclr_2019_B1e0X3C9tQ", "ryg--PcTTQ", "SyxF01pF6X", "Bkletw5FT7", "S1low-VDpX", "H1xC7LAvaQ", "iclr_2019_B1e0X3C9tQ", "iclr_2019_B1e0X3C9tQ", "HJxVsagYnm", "HJxVsagYnm", "HJxVsagYnm", "Bkg4rgm93Q", "Bkg4rgm93Q", "Bye1QOek6m", "iclr_2019_B1e0X3C9tQ" ]
iclr_2019_B1exrnCcF7
Disjoint Mapping Network for Cross-modal Matching of Voices and Faces
We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Different from the existing methods, DIMNet does not explicitly learn the joint relationship between the modalities. Instead, DIMNet learns a shared representation for different modalities by mapping them individually to their common covariates. These shared representations can then be used to find the correspondences between the modalities. We show empirically that DIMNet is able to achieve better performance than the current state-of-the-art methods, with the additional benefits of being conceptually simpler and less data-intensive.
accepted-poster-papers
All reviewers agree that the proposed method is interesting and well presented. The authors' rebuttal addressed all outstanding issues. Two reviewers recommend clear accept and the third recommends borderline accept. I agree with this recommendation and believe that the paper will be of interest to the audience attending ICLR. I recommend accepting this work for a poster presentation at ICLR.
val
[ "r1gPRr2jT7", "r1lCSghsam", "ByxLt0iipQ", "r1e5d52hhQ", "Hkxxm_Qsh7", "HkxvRZesnQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We sincerely appreciate the review for the recognition of our novelty and many valuable suggestions.\n\nOur main contribution mainly lies in proposing a cross modal matching framework called DIMNet, which learns a shared representation for different modalities by mapping them individually to their common covariates. Our basic intuition is that if the learned embeddings of voices and faces can be correctly classified by a unified (linear) classifier, the embeddings of the same class should be in a common decision region and close to each other.\nCompared to the existing work [3,4], the supervision could be any combination of covariates, which enables us to isolate and analyze the effect of the individual covariate to the learned embeddings. Moreover, DIMNet makes better use of the multiple covariates in the course of training. \n\nIn order to perform fair comparisons, we exactly follow the experimental setup in pioneering work [3,4], and achieve significant improvements compared to these strong baselines [3,4].\n\nQ1. In my opinion, perhaps the only exception ... in order to make the article self-contained.\nA1. We thank the reviewer for this suggestion. We do mention the two scenarios in the paper, but the reviewer is right, we do not explicitly introduce them. We now do so in the updated paper.\n\nIn summary, the audio data we used in Section 3.4 is the same as those in other experiment sections, while the visual data is extracted from the video frames in VoxCeleb dataset at 25/6 fps. For fair comparison, we follow the train/val/test split strategy from [4] and evaluate our DIMNet models under Seen-Heard (closed-set) and Unseen-Unheard (open-set)scenarios. More details can be found in the updated paper.\n\nAction taken: Provided more details about the datasets, and experimental settings in Section 3.4 and appendix A.\n\n\nQ2. Given that the authors claimed to have run 5 repetitions ... strengthen the results.\nA2. We thank the reviewer for this suggestion. We have now computed the standard deviations of the results and added them to each table.\n\nAction taken: Added standard deviations of the results to each table.\n\nQ3. However, I believe that the success of the experimental results, ..., validation and test sets are disjoint.\nA3. Our definition of covariate, as stated in the paper, are the ID-sensitive factors that can simultaneously affect voice and face, e.g. nationality, gender, identity, etc. We do not require the value these factors take to be the same between the training and test set. Thus, from the perspective of our model, we only require that faces and voices in the test set co-vary with ID; we do not require that ID to be present in training. What we are learning is the nature of the covariation with the variable in general, not merely the covariation with the specific values the variable takes in the training set.\n\nTo give another example, if we were to consider age as a covariate (which we have not in the current set of experiments, since we do not desire age-sensitive matching), we would expect to learn how both voice and face embeddings vary with age. This then could be used to match voice and face embeddings in the test set even if the corresponding age were not observed in training.\n\nAction taken: Added the above discussions about covariates to introduction section.\n\nQ4. In my opinion, this calls into question the hypothesis ..., thanks to not requiring (face image, audio recording) pairs as input.\nA4. 
More efficient usage of the data is indeed one of the advantages of our DIMNet framework, as we state in both the introduction and the discussion. And this is achieved, by design, by exploiting (and explicitly modelling) the dependence between the modalities and covariates in a generalizable manner. The outcomes we observe in our experiments are entirely to be expected, from our hypothesis, and we believe that the rather detailed set of experiments (and the analyses in our appendix) show that the results are not merely fortuitous. As indicated by our experiments, DIMNet-I achieves 83.45% accuracy on the 1:2 matching task since ID is undoubtedly the most informative covariate. Even using less informative covariates, DIMNet-G still achieves 72% matching accuracy.\n \nQ5. Typos\nA5. We thank the reviewer for pointing out the typos. All the typos are fixed in the updated paper.\n\"... image.mGiven ...\" -> \"... image. Given ...\"\n|Fv||Ff| -> ||Fv||_2||Ff||_2\n\"Here we are give a probe input ...\" -> Here we are given a probe input …”\n\n[3] Nagrani, Arsha, et al. \"Seeing voices and hearing faces: Cross-modal biometric matching.\" IEEE CVPR 2018.\n[4] Nagrani, Arsha, et al. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[5] Chung, Joon Son, et al. \"Out of time: automated lip sync in the wild.\" ACCV, 2016.", "We thank the reviewer for the very positive and encouraging review. \n\nQ1. My feeling is that paired positive examples are easier to obtain (e.g., from unlabeled video) than inputs labeled with these covariates, although paired negative examples require labeling and so may be as difficult to obtain.\n\nA1. We agree with the reviewer. Compared to covariates, the pairwise label is usually easier to obtain. However, some challenges still exist for collecting the examples from video, making it a non-trivial problem. For example, the cases of reaction shots, flashbacks and dubbing in videos may result in noisy labels. Previous work [6] investigated the use of the paired data in a self-supervised learning manner, where SyncNet [7] is adopted to obtain the speaking faces.\n\nFor our paper, we focus on proposing a DIMNet framework to learn embeddings for cross-modal matching with the given cross-modal data and their labeled covariates. How to collect data is perhaps beyond the scope of this paper but could be an interesting direction for our future work.\n\nQ2. Typos\nA2. We thank the reviewer for pointing out the typos. All the typos are fixed in the updated paper.\nCitations: we have carefully checked the citations and accordingly fixed them one by one.\nFigures: The waveforms have been replaced by log Mel-spectrograms.\n“state or art” -> “state-of-the-art”\n“mGiven” -> “Given”\n\"Nagrani et al. Nagrani et al. (2018b)\" -> “Nagrani et al. (2018b)”; typo in Table 2 is fixed\n“G,N” -> \"G, N\"\n\n[6] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[7] Chung, Joon Son, and Andrew Zisserman. \"Out of time: automated lip sync in the wild.\" Asian Conference on Computer Vision. Springer, Cham, 2017.", "We thank the reviewer for the recognition of the novelty and the detailed experimental evaluation of our contribution.\n\nQ1. Fixing the output dimension to d (for both voice and image-based CNN outputs) could lead to unstable results. 
Indeed, the comparison of voice and face-based covariate estimates is not entirely fair because the intrinsic dimensionality can vary for each domain. Alternatives such as canonical correlation analysis could be coupled to properly join both domains.\nA1. In order to compare embeddings from two modalities (domains), the dimensionality of the embeddings needs to be the same. We agree with the reviewer that the intrinsic dimensionality of data in different modalities (domains) could vary. However, it does not contradict the fact that these data can be well represented by identical-dimensioned embeddings through CNNs, and most importantly, the performance (in the following table) is very stable within a wide range of embedding dimensions, showing that the accuracy is not sensitive to the embedding dimension. The idea of using identical-dimensioned embeddings is also adopted by [1] and [2].\n\nThe accuracies of DIMNet-I with different embedding dimensions on 1:2 matching experiments\n-------------------------------------------------------------------------------\nDimension 32 64 128 256 512\n-------------------------------------------------------------------------------\nDIMNet-I 82.20 83.45 83.87 83.43 83.16\n-------------------------------------------------------------------------------\n\nAction taken: Added this experiment in appendix A with analysis.\n\nCanonical correlation analysis (CCA) is a good idea to investigate the correlation of data between different domains, and it could indeed be used to match different-dimensioned embeddings derived from the two modalities, and was indeed one of our ideas en route to the development of DIMNet. The reasons we do not use it are the following: (a) The final projection in CCA is a linear transform that is easily subsumed within the network (in fact a linear projection may be viewed as a fully-connected layer with linear activations). (b) More importantly, the underlying idea of CCA is very different from DIMNet. Specifically, CCA requires one-to-one correspondence between the two modalities it considers, an assumption DIMNet explicitly tries to avoid. Specifically, in the case of static face images vs. voice samples, it is unclear that such correspondence is derivable. Given that we have multiple face images and multiple voice recordings for any person, all captured at different times, which pairs of voice recordings and face images would we group together? Any correspondence imposed would be artificial. On the other hand, DIMNet builds correspondences between voices (or faces) and their covariates, and does not expect direct correspondence between the two modalities -- in fact this is one of the key features of our model which differentiates it from prior work. This comparison can be noted more intuitively from Fig. 1 in our paper.\n\nQ2. Table 4 - column ID results are not convincing (maybe are not clear for me).\nA2. The ID column in Table 4 shows the mean average precision (mAP) of the retrieved ID, when one modality (e.g. face) is posed as the query and retrieval of corresponding recordings of the other modality (e.g. voice) must be performed. The evaluation dataset consists of 21,799 voices and 58,420 faces, both from 182 identities. Compared to gender (2 classes) and nationality (unbalanced 28 classes), it is a challenging problem to rank the gallery voices (faces) based on the probe face (voice) given these many identities (182 classes). 
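This chance level can also be sanity-checked with a quick simulation that scores a randomly ranked gallery (a sketch; the per-identity gallery composition below is hypothetical):

import numpy as np

rng = np.random.default_rng(0)
n_ids = 182
gallery_ids = rng.integers(0, n_ids, size=21799)    # hypothetical gallery labels

def chance_map(n_probes=200):
    aps = []
    for _ in range(n_probes):
        target = rng.integers(0, n_ids)             # identity of the probe
        ranking = rng.permutation(gallery_ids)      # random retrieval order
        hits = np.flatnonzero(ranking == target)    # ranks of relevant items
        if hits.size == 0:
            continue
        precisions = np.arange(1, hits.size + 1) / (hits + 1)
        aps.append(precisions.mean())
    return float(np.mean(aps))

print(chance_map())                                 # roughly 1/182, i.e. ~0.0055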
Chance-level performance (i.e., random guess) is about 0.55% for voice->face and 0.58% for face->voice, while we achieved 1.07%~4.25% for voice->face and 1.03%~4.17% for face->voice. This means that the DIMNet models do learn useful associations between voices and faces.\n\nAction taken: Added one row of chance level results to Table 4 with analysis.\n\n[1] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[2] Kim, Changil, et al. \"On Learning Associations of Faces and Voices.\" arXiv preprint arXiv:1805.05553 (2018).\n", "The authors aim to reveal relevant dependencies between voice and image data (under a cross-modal matching framework) through common covariates (gender, ID, nationality). Each covariate is learned using a CNN from each provided domain (speech recordings and face images); then a classifier is determined from a shared representation, which includes the CNN outputs from voice-based and image-based covariate estimations. The idea is interesting, and the paper's ideas are easy to follow.\n\nPros:\n- New insights to support cross-modality matching from covariates.\n- Competitive results against state-of-the-art.\n- Convincing experiments.\n\nCons:\n- Fixing the output dimension to d (for both voice and image-based CNN outputs) could lead to unstable results. Indeed, the comparison of voice and face-based covariate estimates is not entirely fair because the intrinsic dimensionality can vary for each domain. Alternatives such as canonical correlation analysis could be coupled to properly join both domains.\n- Table 4 - column ID results are not convincing (maybe they are just not clear to me).", "# Summary\n\nThe article proposes a deep learning-based approach aimed at matching face images to voice recordings belonging to the same person. \n\nTo this end, the authors use independently parametrized neural networks to map face images and audio recordings -- represented as spectrograms -- to embeddings of fixed and equal dimensionality. Key to the proposed approach, unlike related prior work, these modules are not directly trained on some particular form of the cross-modal matching task. Instead, the resulting embeddings are fed to a modality-agnostic, multiclass logistic regression classifier that aims to predict simple covariates such as gender, nationality or identity. The whole system is trained jointly to maximise the performance of these classifiers. Given that (face image, voice recording) pairs belonging to the same person must share equal values for these covariates, the neural networks embedding face images and audio recordings are thus indirectly encouraged to map face images and voice recordings belonging to the same person to similar embeddings.\n\nThe article concludes with an exhaustive set of experiments using the VGGFace and VoxCeleb datasets that demonstrates improvements over prior work on the same set of tasks.\n\n# Originality and significance\n\nThe article follows up on recent work [1, 2], building on their original application, experimental setup and model architecture. The key innovation of the article, compared to the aforementioned papers, lies in the idea of learning face/voice embeddings to maximise their ability to predict covariates, rather than by explicitly trying to optimise an objective related to cross-modal matching. 
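In essence, the training signal reduces to routing both modalities through one shared covariate classifier; a minimal sketch of this idea (toy one-layer networks in place of the paper's CNNs; all names are mine):

import numpy as np

rng = np.random.default_rng(0)
d = 64                                          # shared embedding dimension

W_voice = 0.01 * rng.standard_normal((d, 512))  # toy voice "network"
W_face = 0.01 * rng.standard_normal((d, 1024))  # toy face "network"
W_cls = 0.01 * rng.standard_normal((2, d))      # covariate head, e.g. gender

def embed(x, W):
    return np.maximum(W @ x, 0.0)               # one layer with ReLU

def covariate_logits(x, W_modality):
    # The SAME classifier head reads embeddings of either modality, so both
    # encoders must map into a common, covariate-discriminative space.
    return W_cls @ embed(x, W_modality)

voice, face = rng.standard_normal(512), rng.standard_normal(1024)
print(covariate_logits(voice, W_voice), covariate_logits(face, W_face))

Training both branches with a cross-entropy loss on the shared head is what indirectly pulls same-covariate voice and face embeddings together.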
While the fact that these covariates are strongly associated to face images and audio recordings had already been discussed in [1, 2], the idea of actually using them to drive the learning process is novel in this particular task.\n\nWhile the article does not present substantial, general-purpose methodological innovations in machine learning, I believe it constitutes a solid application of existing techniques. Empirically, the proposed covariate-driven architecture is demonstrated to lead to better performance in the (VGGFace, VoxCeleb) dataset in a comprehensive set of experiments. As a result, I believe the article might be of interest to practitioners interested in solving related cross-modal matching tasks.\n\n# Clarity\n\nThe descriptions of the approach, related work and the different experiments carried out are written clearly and precisely. Overall, the paper is rather easy to read and is presented using a logical, easy-to-follow structure.\n\nIn my opinion, perhaps the only exception to that claim lies in Section 3.4. If possible, I believe the Seen-Heard and Unseen-Unheard scenarios should be introduced in order to make the article self-contained. \n\n# Quality\n\nThe experimental section is rather exhaustive. Despite essentially consisting of a single dataset, it builds on [1, 2] and presents a solid study that rigorously accounts for many factors, such as potential confounding due to gender and/or nationality driving prediction performance in the test set. \n\nMultiple variations of the cross-modal matching task are studied. While, in absolute terms, no approach seems to have satisfactory performance yet, the experimental results seem to indicate that the proposed approach outperforms prior work.\n\nGiven that the authors claimed to have run 5 repetitions of the experiment, I believe reporting some form of uncertainty estimates around the reported performance values would strengthen the results.\n\nHowever, I believe that the success of the experimental results, more precisely, of the variants trained to predict the \"covariate\" identity, call into question the very premise of the article. Unlike gender or nationality, I believe that identity is not a \"covariate\" per se. In fact, as argued in Section 3.1, the prediction task for this covariate is not well-defined, as the set of identities in the training, validation and test sets are disjoint. In my opinion, this calls into question the hypothesis that what drives the improved performance is the fact that these models are trained to predict the covariates. Rather, I wonder if the advantages are instead a \"fortunate\" byproduct of the more efficient usage of the data during the training process, thanks to not requiring (face image, audio recording) pairs as input.\n\n# Typos\n\nSection 2.4\n1) \"... image.mGiven ...\"\n2) Cosine similarity written using absolute value |f| rather than L2-norm ||f||_{2}\n3) \"Here we are give a probe input ...\"\n\n# References\n\n[1] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[2] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Seeing voices and hearing faces: Cross-modal biometric matching.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.", "This paper aims at matching people's voices to the images of their faces. It describes a method to train shared embeddings of voices and face images. 
The speech and image features go through separate neural networks until a shared embedding layer. Then a classification network is built on top of the embeddings from both networks. The classification network predicts various combinations of covariates of faces and voices: gender, nationality, and identity. The input to the classification network is then used as a shared representation for performing retrieval and matching tasks.\n\nCompared with similar work from Nagrani et al (2018) who generate paired inputs of voices and faces and train a network to classify if the pair is matched or not, the proposed method doesn't require paired inputs. It does, however, require inputs that are labeled with the same covariates across modalities. My feeling is that paired positive examples are easier to obtain (e.g., from unlabeled video) than inputs labeled with these covariates, although paired negative examples require labeling and so may be as difficult to obtain.\n\nSeveral different evaluations are performed, comparing networks that were trained to predict all subsets of identity, gender, and nationality. These include identifying a matching face in a set of faces (1, 2, or N faces) for a given voice, or vice versa. Results show that the network that predicts identity+gender tends to work best under a variety of careful examinations of various stratifications of the data. These stratifications also show that while gender is useful overall, it is not when the gender of imposters is the same as that of the target individual. The results also show that even when evaluating the voices and faces not shown in the training data, the model can achieve 83.2% AUC on unseen/unheard individuals, which outperforms the state-of-the-art method from Nagrani et al (2018).\n\nAn interesting avenue of future work would be using the prediction of these covariates to initialize a network and then refine it using some sort of ranking loss like the triplet loss, contrastive loss, etc.\n\n\nWriting:\n* Overall, citations are all given in textual form Nagrani et al (2018) (in LaTeX this is \citet{} or \cite{}), when many times parenthetical citations (Nagrani et al, 2018) (in LaTeX this is \citep{}) would be more appropriate.\n* The image of the voice waveform in Figures 1 and 2 should be replaced by log Mel-spectrograms in order to illustrate the network's input.\n* \"state or art\" instead of \"state-of-the-art\" on page 3. \n* In subsection 2.4: \"mGiven\" is written instead of \"Given\". \n* On Page 6 Section 3.1 \"1:2 matching\" paragraph. \"Nagrani et al.\" is written twice.\n* Page 6 mentions that there is a row labelled \"SVHF-Net\" in table 2, but there is no such row in this table. \n* Page 7 line 1, “G,N” should be \"G, N\".\n" ]
[ -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "Hkxxm_Qsh7", "HkxvRZesnQ", "r1e5d52hhQ", "iclr_2019_B1exrnCcF7", "iclr_2019_B1exrnCcF7", "iclr_2019_B1exrnCcF7" ]
iclr_2019_B1ffQnRcKX
Automatically Composing Representation Transformations as a Means for Generalization
A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution. This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems. We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon. As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems. We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not.
accepted-poster-papers
pros:
- the paper is well-written and presents a nice framing of the composition problem
- good comparison to prior work
- very important research direction

cons:
- from an architectural standpoint the paper is somewhat incremental over Routing Networks [Rosenbaum et al]
- as Reviewers 2 and 3 point out, the experiments are a bit weak, relying on heuristics such as a window over 3 symbols in the multi-lingual arithmetic case, and a pre-determined set of operations (scaling, translation, rotation, identity) in the MNIST case.

As the authors state, there are three core ideas in this paper (my paraphrase): (1) training on a set of compositional problems (with the right architecture/training procedure) can encourage the model to learn modules which can be composed to solve new problems, enabling better generalization. (2) treating the problem of selecting functions for composition as a sequential decision-making problem in an MDP. (3) jointly learning the parameters of the functions and the (meta-level) composition policy. As discussed during the review period, these three ideas are already present in the Routing Networks (RN) architecture of Rosenbaum et al. However, CRL offers insights and improvements over RN algorithmically in several ways: (1) CRL uses a curriculum learning strategy. This seems to be key in achieving good results and makes a lot of sense for naturally compositional problems. (2) The focus in RN was on using the architecture to solve multi-task problems in object recognition. The solutions learned in image domains, while "compositional", are less clearly interpretable. In this paper (CRL) the focus is more squarely on interpretable compositional tasks like arithmetic and explores extrapolation. (3) The RN architecture does support recursion (and there are some experiments in this mode) but it was not the main focus. In this paper (CRL) recursion is given a clear, prominent role. I appreciate the authors' engagement in the discussion period. My feeling is that the paper offers nice improvements, a useful framing of the problem, a clear recursive formulation, and a more central focus on naturally compositional problems. I am recommending the paper for acceptance but suggest that the authors remove or revise their contributions (3) and (4) on pg. 2 in light of the discussion on routing nets.

Routing Networks, Adaptive Selection of Non-Linear Functions for Multi-task Learning, ICLR 2018
train
[ "r1gbHlap3X", "BJl1SmLj3m", "SygCi7pr14", "BJeiNE7TaQ", "H1ei6m7TaX", "Byg3F7Qap7", "B1eifeX9n7", "r1x-kZyT2X", "Hkx25xk6hQ", "SJg_qTBs3X", "HJlzO5a1nm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "public", "public" ]
[ "Summary: This paper is about trying to learn a function from typed input-output data so that it can generalize to test data with an input-output type that it hasn't seen during training. It should be able to use \"analogy\" (if we want to translate from French to Spanish but don't know how to do so directly, we should translate from French to English and English to Spanish). It should also be able to generalize better by learning useful \"subfunctions\" that can be composed together by an RL agent. We set up the solution as having a finite number of subfunctions, including \"HALT\" which signifies the end of computation. At each timestep an RL agent chooses a subfunction to apply to the current representation until \"HALT\" is chosen. The main idea is we parameterize these subfunctions and the RL agent as neural networks which are learned based on input -output data. RL agent is also penalized for using many subfunctions. The algorithm is called compositional recursive learner (CRL). Both analogy and meaningful subfunctions should arise purely because of this design.\n\nMultilingual arithmetic experiment. I found this experiment interesting although it would be helpful to specify that it is about mod-10 arithmetic. I was very confused for some time since the arithmetic expressions didn't seem to be evaluated correctly. It also seems that it is actually the curriculum learning that helps the most (vanilla CRL doesn't seem to perform very well) although authors do note that such curriculum learning doesn't help the RNN baseline. It also seems that CRL with curriculum doesn't outperform the RNN baseline that much on test data with the same length as training data. The difference is larger when tested on longer sequences. However here, the CRL learning curve seems to be very noisy, presumably due to the RL element. The qualitative analysis illustrates well how the subfunctions specialize to particular tasks (e.g. translation or evaluating a three symbol expression) and how the RL agent successively picks these subfunctions in order to solve the full task.\n\nImage transformations experiment. This experiment feels a bit more artificial although the data is more complicated than in the previous experiment. Also, in some of the examples in Figure 2, the algorithms seems to perform translation (action 2) twice in a row while it seems like this could be achieved by only one translation. How does this perform experimentally in comparison to an RNN (or other baseline)?\n\nI found this paper to be well-written. Perhaps it could be stronger if the \"image transformations\" experiment quantitatively compared to a baseline. I'm not an expert in this area and don't know in detail how this relates to existing work (e.g. by Rosenbaum et al; 2018).\n\nEdit: change score to 7 in light of revisions and new experiment.", "This is a good review paper. I am not sure how much it adds to the open question of how to learn representation with high structure. \n\nI would like to see more detail on what is communicated between the controller and the evaluator. Is it a single function selected or a probability distribution that is sent? How does the controller know how many function the evaluator has created? Or visa versa. \n\nThere is a penalty for the complexity of the program, is there a penalty for the number of functions generated? \n\nHaving just read Hudson and Manning's paper using a separate controller and action/answer generator they make strong use of attention. It is not clear if you use attention? 
Perhaps attention enters in that you can operate on a portion of X. What role does attention play in your work?", "Below is a quantitative evaluation of how CRL compares with a CNN baseline.\n\nThe dataset contains MNIST digits that have been scaled (S), rotated (R), and translated (T). There are two types of scaling: large and small. There are two types of rotation: left and right. There are four types of translation: left, right, up, and down. The set of depth-2 compositions (20 total) we considered are scale->translate (2*4 possible), rotate->translate (2*4 possible), scale->rotate (2*2 possible). “scale->translate” means that the image was first scaled, then translated. The set of depth-3 compositions we considered are scale->rotate->translate (2*2*4 possible). \n\nThe training set is 16 out of the 20 depth-2 compositions, the first hold-out set is the remaining 4 out of the 20 depth-2 compositions, and the second hold-out set is the set of depth-3 compositions. The first hold-out set tests extrapolation to a disjoint set of transformation combinations of the same depth as training; the second hold-out set tests extrapolation to a set of transformation combinations of longer depth than in training.\n\nThe CNN baseline was pre-trained to classify canonical MNIST digits, and it continued training on transformed MNIST digits.\nCRL used the same pre-trained MNIST classifier as a decoder (whose weights are frozen), and learned a set of Spatial Transformer Networks (STN) constrained to rotate, scale, or translate.\nWe noticed instability in training the STNs to model drastic translations (where the digit was translated more than 15% of the width of the images). A potential reason for this is that because the weights of CRL’s decoder (pre-trained MNIST classifier) are frozen, the classifier acts as a more complex loss function for the upstream STNs. We addressed this challenge by defining a curriculum for the translated data, where initially the digit was translated by a small amount, and at the end of the curriculum, the digit was translated to the far edge of the image. We applied this curriculum to both CRL and the baseline.\n\nThe results are as follows (over 5 random seeds; median [10% quantile, 90% quantile]):\n\n                             CNN                  CRL\nTraining set accuracy:       0.98 [0.98, 0.98]    0.89 [0.87, 0.90]\nHold-out set (same depth):   0.22 [0.19, 0.23]    0.67 [0.59, 0.71]\nHold-out set (longer depth): 0.26 [0.26, 0.27]    0.69 [0.60, 0.71]\n\nWe notice that CRL performs a bit worse on the training set because it is constrained to go through the bottleneck of only using Spatial Transformer Networks, whereas the CNN is free to fit the training set without such constraints. In the hold-out sets, it is clear that the CNN overfits to the training set and is unable to classify MNIST digits that have been transformed by a set of transformation combinations it has not seen before. 
CRL, on the other hand, generalizes significantly better because it re-uses the primitive spatial transformations it had learned during training to re-represent the image as a canonical MNIST digit.", "Based on OP’s suggestions, we have included a paragraph in Section 3.4 (“Discussion of Design Choices”) that compares CRL with Routing Networks.\n\nTo avoid misrepresenting Routing Networks, we have revised the wording of the experiment of Appendix D.2 to compare with a mixture-of-experts-inspired baseline, rather than Routing Networks, because as OP points out, 1) RN does not necessarily have a separate controller per time step and 2) RN does not necessarily use a different set of functions per computation step. The purpose of this experiment is to show the benefits of reusing modules across computation steps and to show the benefit of allowing a flexible computation horizon.\n", "We thank Reviewer 2 for their constructive review, which helped us improve the paper in the following aspects. We would be happy to incorporate any other suggestions Reviewer 2 may have for the paper.\n\n1. We have revised Section 3.1 and the introductory paragraph of Section 3 to be more precise about the domain-specific assumptions CRL makes about the problem distribution. In particular, we included a discussion about restricting the representational vocabulary and the functional form of the modules as a way to incorporate domain-specific knowledge of the problem distribution as an inductive bias. \n\n2. We agree with Reviewer 2 that the “recursive”/”translational” terminology should be clearer. Therefore, we have revised the “Problems” and “The goal” paragraphs in Section 2 to remove the discussion on translational problems and only focus on recursive problems, where the input and output representations are drawn from the same vocabulary.\n\n3. Further, we agree with and appreciate Reviewer 2’s analysis that our paper is only a first step towards the full general problem of discovering subproblem decomposition. Accordingly, we have revised the end of Section 6 (Discussion) to acknowledge this. We also revised “The challenge” paragraph in Section 2 to be more precise that we are not solving the general subproblem decomposition problem, but rather solving the problem of learning to compose partial solutions to subproblems when the general form of the subproblem decomposition of a task distribution is known.\n", "We thank Reviewer 3 for their constructive review, which helped us improve the paper in various aspects. We would be happy to incorporate any other suggestions Reviewer 3 may have for the paper. We would like to make the following clarifications:\n\n1. We have clarified in Section 4.1 that arithmetic problems are modulo-10 (e.g., 7 + 5 evaluates to 2).\n\n2. With regards to how CRL compares to the RNN on test data with the same length as the training data, Figure 2b shows that there is a substantial difference between CRL (red curve) and RNN (purple curve). Only with 10x more data does the RNN (yellow curve) reach comparable performance with CRL.\n\n3. Reviewer 3 noted that in the right half of Figure 4, the top-two examples showed that CRL performs translation twice, when in fact this can be achieved by only one translation. This is true. For simplicity, we had fixed the number of transformations to two transformations. 
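For clarity, the execution loop that produces such compositions is roughly the following (a schematic sketch with toy modules and a random stand-in for the learned policy; this is not our actual code):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical module set; each module maps a representation to a representation.
modules = [lambda x: x + 0.1,            # e.g. a small "translate"
           lambda x: 1.1 * x,            # e.g. a small "scale"
           None]                         # HALT

def controller(x):
    # Stand-in for the learned policy pi(a | x); here just uniform random.
    return int(rng.integers(0, len(modules)))

def run_crl(x, max_steps=10):
    trajectory = []
    for _ in range(max_steps):
        a = controller(x)
        trajectory.append(a)
        if modules[a] is None:           # HALT ends the episode
            break
        x = modules[a](x)                # apply the chosen module recursively
    return x, trajectory

print(run_crl(np.array([1.0])))

In the actual learner the controller is trained with policy gradient and the modules with backpropagation, but the control flow is the same, which is why several small steps can substitute for one large one.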
That CRL finds alternate ways of achieving the same end representation (using two translations instead of one) illustrates a core feature of the CRL framework: that it is possible to solve a problem (e.g. a large translation) by composing together partial solutions (two small translations).\n\n4. We will have the baseline experiments Reviewer 3 requested in time for the final version, and will endeavor to add these to the paper during the discussion period.\n", "==== Summary ====\n\nThis paper proposes a model for learning problems that exhibit compositional and recursive structure, called the Compositional Recursive Learner (CRL). The paper approaches the subject by first defining a problem as a transformation of an input representation x from a source domain t_x to a target domain t_y. If t_x = t_y then it is called a recursive problem, and otherwise a translational problem. A composite problem is the composition of such transformations. The key observation of the paper is that many real-world problems can be solved iteratively by either recursively transforming an instance of a problem into a simpler instance, or by translating it to a similar problem which we already know how to solve (e.g., translating a sentence from English to French through Spanish). The CRL model is essentially composed of two parts, a set of differentiable functions and a controller (policy) for selecting functions. At each step i, the controller observes the last intermediate computation x_i and the target domain t_y, and then selects a function and the subset of x_i to operate on. For each instance, the resulting compositional function is trained via back-propagation, and the controller is trained via policy gradient. Finally, the paper presents experiments on two synthetic datasets: translating an arithmetic expression written in one language to its outcome written in another language, and classifying MNIST digits that were distorted by an unknown random sequence of affine transformations. CRL is compared to an RNN on the arithmetic task and shown to be able to generalize both to longer sequences and to unseen language pairs when trained on few examples, while the RNN can achieve similar performance only using many more examples. On MNIST, it is qualitatively shown that CRL can usually (but not always) find the sequence of transformations to restore the digit to its canonical form.\n\n==== Detailed Review ====\n\nI generally like this article, as it contains a neat solution to a common problem that builds on and extends prior work. Specifically, the proposed CRL model is a natural evolution of previous attempts at solving problems via compositionality, e.g. the Neural Programmer-Interpreter [1] that learns a policy for composing predefined commands, and Neural Module Networks [2] that learn the parameters of shared differentiable modules connected via a deterministically defined structure (found via a simple parse tree). The paper contains a careful review of the related works and highlights the similarities and differences from prior approaches. Though the experiments are mostly synthetic, the underlying method seems to be readily applicable to many real-world problems.\n\nHowever, the true contributions of the paper are somewhat muddied by presenting CRL as more general than what is actually supported by the experiments.
More specifically, the paper presents CRL as a general method for learning compositional problems by decomposing them into simpler sub-problems that are automatically discovered, but in practice, a far more limited version of CRL is used in the experiments, and the suggested translational capabilities of CRL, which are important for abstract sub-problem discovery, are not properly validated:\n\n1. In both experiments, the building-block functions are hand-crafted to fit the prior knowledge of the compositionality of the problem. For the arithmetic task, the functions are limited to operating at each step on just a single window encompassing 3 symbols (e.g., <number> <op> <number>, <op> <number> <op>) and return a distribution over the possible symbols, which heavily forces the functions to represent simple evaluators for simple expressions of the form <number> <op> <number>. For the distorted MNIST task, the functions are limited to neural networks which choose the parameters of predetermined transformations (scaling, translation, or rotation) of the input. In both cases, CRL did not *find* sub-problems for reducing the complexity of the original instance but just had to *fine-tune* loosely predefined sub-problems. Incorporating expert knowledge into the model like this is actually an elegant and useful trick for solving real problems, and it should be emphasized far more clearly in the article. The story of “discovering subproblems” should be left for the discussion / future research section, because though it might be a small step towards that goal, it is not quite there yet.\n2. The experiments very neatly show how recursive transformations offer a nice framework for simplifying an instance of a problem. However, the translation capabilities of the model are barely tested by the presented experiments, and it can be argued that all transformations used by the model are recursive in both experiments. First, only the arithmetic task has a translation aspect to it, i.e., the task is to read an expression in one language and then output the answer in a different language. Second, this problem is only weakly related to translation because it is possible to translate the symbols independently, word by word, as opposed to written language that has complex dependencies between words. Third, the authors report that in practice proper translation was only used in the very last operation, for translating the computed value of the input expression to the requested language, and not as a method to translate one instance that we cannot solve into another that we can. Finally, all functions operate on, and return, symbols from the full vocabulary rather than ones limited to a specific language, and so by the paper’s own definition, these are all recursive problems and not translational ones.\n\nIn conclusion, I believe this paper should be accepted even with the above issues, mostly because the core method is novel, clearly explained, and appears to be very useful in practice. Nevertheless, I strongly suggest that the authors revise their article to focus on the core qualities of their method that can be backed by their current experiments, and correctly frame the discussion of possible future capabilities as such.\n\n[1] Reed et al. Neural Programmer-Interpreters. ICLR 2016.\n[2] Andreas et al. Neural Module Networks.
CVPR 2016.", "Question 1\n\nAlthough we have acknowledged the similarities in \"Response to Relation of the Compositional Recursive Learner to Routing Networks\", we respectfully disagree with OP that “CRL is in effect a Routing Network.” To make such a statement would be to mischaracterize the difference between the generative nature of CRL and the routing-based nature of RN and to ignore the respective problem domains that CRL and RN tackle.\n\nWe focus on the extrapolation problem (Sec 2 and 3), for which learning on multiple tasks is a means to this end, whereas Rosenbaum et al. focus on task interference, for which multi-task learning is the end itself (see the abstract of Rosenbaum et al.). Because our focus is on subproblem decomposition, CRL restricts the representation space such that harder problems can be expressed in the same vocabulary as easier problems. RN does not focus on subproblem decomposition, so it is not clear whether its modules learn any interpretable atomic functionality or whether its representations capture semantic boundaries between the subproblems that comprise a larger problem. Therefore, RN does not have the inductive bias for extrapolation problems that require the learner to re-represent the new problem in terms of problems the learner has seen during training.\n\nThe key methodological difference between CRL and RN lies in the generative nature of CRL and the routing-based nature of RN. RN and other work such as PathNet (Fernando et al. 2017) route input-dependent paths through a large fixed architecture. In contrast, the extrapolation problem necessitates that CRL be generative, meaning that it incrementally builds module on top of module without a fixed computational horizon. This is necessary for the problem domain we consider, in which we want to train on and extrapolate to different problems that require various computation depths. Therefore, the variable-length computation horizon, the restrictions on the representational vocabulary, and the emergent semantic functionality of its submodules as solutions to subproblems within a larger problem (see Figure 3) are crucial design considerations for the capability of CRL that RN does not incorporate in its approach.\n\nQuestion 2\n\nGiven the crucial difference between the generative nature of CRL and the routing-based nature of RN, the variable computation horizon is a crucial feature of CRL, not a minor difference, as we discussed above and in the Related Work. Because of the variable computation horizon, it is not possible to have a separate controller at each timestep/depth, because the number of time steps of computation is unknown; therefore this is also not a minor difference. \n\nWe agree with OP that the particular RL algorithm (PPO vs MARL-WPL) is not particularly relevant to the central focus of our paper, which is extrapolation in compositionally structured problems, and we indeed did not claim so. Nevertheless, our work represents an algorithmic improvement that does make the single-controller architecture more effective (above 90% extrapolation accuracy for multilingual arithmetic) than Rosenbaum et al.’s architecture (Figure 4 and Figure 5 of Rosenbaum et al.
show < 50% accuracy, whereas their best method achieves around 60%).\n\nCRL’s focus on capturing interpretable atomic functionality in its modules and on using representations that capture semantic boundaries between subproblems that comprise a larger problem are important ingredients for CRL’s analogical reasoning: literally re-representing a problem in terms of problems it has already seen. This is another key difference between RN and CRL, because the architectural design of RN does not have the inductive bias (restrictions on the modules and representations) that encourages it to re-represent problems literally in terms of previously-seen problems.\n\nQuestion 3: Novelty\n\nThe novelty of our work (with respect to RN) lies in the generative nature of CRL, because we reframe the extrapolation problem as a problem of learning algorithmic procedures over transformations between representations, as discussed in the abstract, intro, and discussion. CRL generates function compositions, in contrast to how RN routes through function paths. As shown in the experiments section, the transformations CRL learns have interpretable, atomic functionality and the representations capture semantic boundaries between the subproblems that comprise a larger problem. These features of the CRL architecture crucially differentiate it from other routing-based architectures, including RN and PathNet.", "We are grateful to the Anonymous Commenter (OP) for their detailed and insightful comment.\n\nIt is true, as OP points out, that there is a close connection to Routing Networks (RN), an important and interesting paper that seeks to mitigate task interference in multi-task learning by routing through the modules of a convolutional neural network. Like RN, a feature of our work is that the learner creates and executes a different computation graph for different inputs, where this computation graph consists of a series of functions applied according to a controller. Therefore, it is possible to see CRL as taking a step beyond the single-controller (referred to as “single-agent” in Rosenbaum et al.) version of RN by incorporating several algorithmic improvements that make the single-controller version not only effective for solving the task (c.f. Figure 4 and Figure 5 of Rosenbaum et al.) but also effective for extrapolation, a problem domain that Rosenbaum et al. do not consider. \n\nWe will follow OP’s recommendation and make the comparison with RN more salient in the experiments and related work sections. However, we would like to emphasize that the problem that RN tackles (mitigating task interference in multi-task learning) is not the central focus of the paper. That CRL and RN started from significantly different motivations and problem domains but converged to a similar architecture design serves as encouraging evidence in support of an old idea, that exploiting modularity and encapsulation helps more efficiently capture the modalities of a task distribution, and we are excited that both we and Rosenbaum et al. are actively pushing this front.\n\nWe thank OP for pointing out that it is indeed true that 1) RN does not necessarily have a separate controller per time step and 2) RN does not necessarily use a different set of functions per computation step; we will follow OP’s recommendation and clarify this in the next version of the paper to avoid potential misunderstanding. One source of our misunderstanding is that the exposition of RN in section 3 of Rosenbaum et al. (e.g.
“If the number of function blocks differs from layer to layer in the original network, then the router may accommodate this by, for example, maintaining a separate decision function for each depth” (page 4, Rosenbaum et al.) and “The approximator representation can consist of either one MLP that is passed the depth (represented in 1-hot), or a vector of d MLPs, one for each decision/depth” (page 5, Rosenbaum et al.)) seems to heavily suggest the two assumptions we made on page 15 of our manuscript, so we thought that the single-controller or shared-function cases were included in Rosenbaum et al. mostly for the sake of comparison. The reason that our submission discussed points (1) and (2) was not intended to misrepresent RN. Rather, it was because we interpreted Figure 4, Figure 5, Table 3, and Table 4 of Rosenbaum et al. as claiming routing-all-fc (one-agent-per-task, separate controller per depth, different functions per layer) as the flag bearer of their results. To make the comparison that most fairly represents RN’s claims, we had conducted our comparison based on the best version of RN reported in Rosenbaum et al. (routing-all-fc), which uses a separate controller per depth and a different set of functions per depth (according to Tables 3 and 4 in Rosenbaum et al.).", "Now that the review period is officially over, I was hoping to get a response to the issues raised above. I ask the authors to address the following questions in particular:\n1. Do the authors agree with the assessment that the CRL is in effect a Routing Network? (I might point out that the authors even hint at that in the arxiv version of this paper)\n2. Do the authors agree that the only two minor differences (apart from the training schedule) are (1) that the CRL has infinite-horizon recurrence, while RNs only have limited-horizon recurrence, and (2) the RL algorithm chosen? (this implies a mischaracterization of RNs on the authors’ part) \n3. In light of the previous two points, why do the authors claim that their architecture is novel? (this critique does not extend to the other parts of their paper) ", "I have read the paper \"Automatically Composing Representation Transformations as a Means for Generalization\" with great pleasure. I particularly enjoyed how the paper tries to link compositionality to analogical reasoning. I think an architecture for compositional reasoning that can solve even complex tasks elegantly is of great value. \nI do though have some concerns about the relationship between the \"Compositional Recursive Learner\" (CRL) and \"Routing Networks\" (RN). Specifically, it seems to me that the CRL is an example of a single-agent recursive routing network, as described in (Rosenbaum et al, ICLR 2018). In particular, the design of a compositional computation and learning framework that combines trainable function blocks with a reinforcement learning meta-learner (as described in sections 3.2 and 3.3) is highly similar (section 3.2) or nearly identical (section 3.3) to the formulation in the routing networks paper.\nThe main difference is that while (Rosenbaum et al) focused on a limited-horizon recurrence (see pages 1, 3, 4, 7, and particularly 14 in the appendix), CRL uses an infinite-horizon recurrence.\nSurprisingly, this relationship is not discussed in the paper in any detail. Routing Networks are more closely examined in the appendix only. Additionally, there are two stated assumptions (on p.
15) on routing networks that I do not think are true: (1) Routing Networks necessarily have a separate controller per computation step and (2) Routing Networks necessarily use a different set of functions per computation step. The idea of an RN with a single controller applied across computation steps is discussed on page 5 of (Rosenbaum et al). The idea of re-using function blocks across computation steps is discussed on pages 1, 3, 4, 7 and 14.\n\nGiven the obviously close relationship between these two works, I feel that the connection should be emphasized more and the comparison made more central to the paper. And indeed, the results shown for routing networks are somewhat hard to believe (at least for smaller problems, as routing networks are not expected to scale to inputs of the same size). Is the routing networks implementation compared to actually also recurrent? Does the routing network receive the same curriculum learning strategy during training?\n\nThe link to Rosenbaum et al. in ICLR 2018: https://openreview.net/forum?id=ry8dvM-R-
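
For readers who want to make the architecture debated in this thread concrete, below is a minimal, hypothetical sketch of a CRL-style control loop: one shared controller repeatedly picks a module (or a HALT action) to apply to the current representation, giving the variable-length computation horizon discussed above. Modules train by backprop through the composed function; the controller trains by REINFORCE. Every name, size, and the reward definition here are illustrative assumptions, not the authors' or Rosenbaum et al.'s implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

DIM, N_MODULES, MAX_STEPS = 16, 3, 5

# shared, reusable modules and a single controller applied at every step
modules = nn.ModuleList([nn.Sequential(nn.Linear(DIM, DIM), nn.Tanh())
                         for _ in range(N_MODULES)])
controller = nn.Linear(DIM, N_MODULES + 1)   # extra logit = HALT action
decoder = nn.Linear(DIM, 10)                 # task head (frozen pre-trained classifier in CRL)

def run_episode(x):
    log_probs = []
    for _ in range(MAX_STEPS):               # variable-length horizon
        dist = Categorical(logits=controller(x))
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        idx = int(a.item())
        if idx == N_MODULES:                 # controller chose HALT
            break
        x = modules[idx](x)                  # compose one more module
    return x, torch.stack(log_probs)

x = torch.randn(DIM)
out, log_probs = run_episode(x)
task_loss = nn.functional.cross_entropy(decoder(out).unsqueeze(0),
                                        torch.tensor([0]))
reward = -task_loss.detach()                 # controller reward = negative loss
loss = task_loss - reward * log_probs.sum()  # backprop term + REINFORCE term
loss.backward()
```

Because the module choice is discrete, the task loss carries no gradient into the controller; that is exactly why the separate REINFORCE term is needed alongside ordinary backpropagation through the chosen modules.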
[ 7, 9, -1, -1, -1, -1, 7, -1, -1, -1, -1 ]
[ 2, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1 ]
[ "iclr_2019_B1ffQnRcKX", "iclr_2019_B1ffQnRcKX", "Byg3F7Qap7", "Hkx25xk6hQ", "B1eifeX9n7", "r1gbHlap3X", "iclr_2019_B1ffQnRcKX", "SJg_qTBs3X", "HJlzO5a1nm", "HJlzO5a1nm", "iclr_2019_B1ffQnRcKX" ]
iclr_2019_B1fpDsAqt7
Visual Reasoning by Progressive Module Networks
Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a functional program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate improved performances in all tasks by learning progressively. By evaluating the reasoning process using human judges, we show that our model is more interpretable than an attention-based baseline.
accepted-poster-papers
Important problem (modular & interpretable approaches for VQA and visual reasoning); well-written manuscript, sensible approach. Paper was reviewed by three experts. Initially there were some concerns but after the author response and reviewer discussion, all three unanimously recommend acceptance.
train
[ "SkeIsnWJlN", "HygakjZyl4", "SJgyyPTv3m", "SkeHNjEqRQ", "BkgO1UGq07", "Hygs-DpF0X", "S1enb1hOCm", "BylkHTN_RX", "rygVoa6DRQ", "H1xozmAE0m", "SyeoH17BTQ", "SJlY63MSpX", "Sklfd2fSpX", "H1gUmqkh3Q", "r1eaptK5hm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have added the GT captions experiment in the 'plug-and-play architecture' paragraph in Section 4.1.\n\nThank you again for your great suggestion!", "Thanks for the response! It is interesting that the GT captions can help improve the VQA performance, please incorporate the results and update the manuscripts accordingly.\n\nAgain, I think this is a good paper and will not change my rating.", "Summary:\nThe authors propose a network for VQA incorporating hand-crafted modules and their hierarchy, each of which is a network for a high-level vision task. Some modules may share the same sub-modules at a different level in the module hierarchy. Each module is individually (not end-to-end) trained with a dataset containing a dedicated annotation for their high-level tasks. The proposed model shows comparable scores to the existing models.\n\nPresentation and clarity:\nThe paper is well written and easy to follow and contains reasonable experiments for understanding the proposed method.\n\nOriginality and significance:\nI mainly do not agree that this work generalizes NMN. Instead, I believe that this work is a special case of NMN where the modules and their hierarchy are manually defined based on the authors' intuition. Meanwhile, the proposed network architecture is static, and thus the main idea of having multiple modules in a network is not novel as other approaches using static network architectures such as [A] also facilitate multiple modules for different sub-procedures (e.g., RNN for questions and CNN for image) and sometimes share modules in multiple stages too. The main difference between this and previous works is that the modules in this work deal with high-level tasks chosen by the authors. I am not convinced that designing the modules with high-level tasks is a better choice over designing modules that are less task-specific. Rather, I see more drawbacks as the proposed method requires multiple datasets with diverse task-specific annotation. Also, the modules and their connectivity are less scalable and extendable as they are not learned.\n\nConsidering all the model and dataset complexities, the improvements over black-box models are mostly marginal. The main benefits we get from all these complexities are the interpretability. However, for many modules, the interpretability comes from indirect signals that are often not clear how to interpret for the question answering. On the other hand, the manually designed sub-tasks may cause error propagation in the network as these modules are not directly optimized for the final objective.\n\nSome questions and comments:\nI do not understand why it is necessary to have the image captioning module as it does not directly relate to the question answering. Moreover, the caption itself is generated without conditioning on the question.\n\n[A] Yang, Zichao, et al. \"Stacked attention networks for image question answering.\" CVPR 2016.\n\n\n== After discussion phase\nBased on the rebuttal and additional experiments that clarified and resolved my questions, I change my initial rating.", "What you suggested above is also a good way of looking at it.\nWe tested it out:\n\n || Mobj used | Mobj not used . | Total\n----------------------------------------------------------------------------------------------------------\nMobj correct || A. 15005 (95%) | C. 777 (5%) | 15782 (100%) \nMobj incorrect || B. 7764 (58%) | D. 5513 (42%) | 13277 (100%)\n\nWhen Mobj's output is equal to the ground-truth output (the first row), it is almost always used. 
When its output is not correct, it is less likely to be used. Note that B may seem high because the questions studied (questions starting with 'what') likely need some information from objects, and also Mobj is doing a 1600-way classification, so it is not always easy for Mobj to be correct (e.g. if the ground-truth answer is 'monitor' and Mobj outputs 'tv', it would be considered incorrect).\n\n\nWe thank the reviewer for all the interesting suggestions. It seems that the concerns have been addressed, and we hope this will be reflected in the reviewer's final rating.", "I deeply appreciate all your efforts on the extra experiments. Those tables also show meaningful statistics. However, I think you can simply measure what portion of questions are answered using M_obj (probably with the soft weight thresholded) when M_obj outputs correct/incorrect answers. I think this can simply show if the model actually learns to ignore modules when they produce incorrect outputs.\n\nAgain, I thank the authors for these additional experiments and I also want to point out that the authors resolved my questions and concerns.", "We agree that inspecting module outputs is beneficial. It is hard to directly identify intermediate erroneous outputs as we do not have ground-truth labels, so we conducted a new experiment that indirectly measures how erroneous outputs affect the model performance. \n\nBefore that, we would like to clarify that PMN chooses which modules to use, not which outputs to use (see line 5 in Algorithm 1). In the previous response, we stated “PMN combines information from the lower modules through importance scores. For a given set of questions, if a module produces erroneous outputs, PMN can learn to ignore such outputs and rely on other modules or its own residual.”. This might have caused confusion, and we apologize. We meant to say: if PMN learns that certain modules’ outputs are not useful for solving a particular type of question, it can learn to ignore those modules (with low importance scores) for that type of question.\n\nAs there are more than 200K questions in the val set, we focus on questions starting with ‘what’ and divide them into 10 types. This amounts to 74K questions. For this experiment, we choose one submodule Mobj (the object classifier) of Mvqa and analyze 1) the types of questions for which this module is/is not used, and 2) how Mobj’s output affects the final performance.\n\nNote: we say Mobj is ‘used’ if its importance score is higher than that of any other module.\n\n1) We show the number of questions for which Mobj is used/not used depending on the question type:\n\n Q types (e.g. what+‘time’) || #Qs Mobj used | #Qs Mobj not used \n---------------------------------------------------------------------------------------\ncolor || 102 | 23628 \nkind || 8264 | 1760\ntime || 15 | 1731\nsport || 1435 | 5 \nanimal || 1093 | 46 \nis the number || 326 | 1727\nis on, in the || 3903 | 14\nbrand || 896 | 43\nis, are || 24025 | 931\nroom || 931 | 0\n\n\nThis shows PMN correctly learns to use/not use Mobj for certain types of questions (e.g. kind, sport, animal use Mobj while color, time, number do not).\n\n2) To see the effect of erroneous outputs, we select 23K questions out of the 74K questions whose ground-truth answer is in the vocabulary of Mobj and for which Mobj is used.
That is, it is more likely that Mobj’s output is useful to infer the final answer.\nLet gt_ans be the ground-truth answer and obj_ans be the output label of Mobj.\n\n || final answer correct | final answer incorrect | Total\n------------------------------------------------------------------------------------------------------------------------\n#Qs obj_ans == gt_ans || A. 12315 (82%) | B. 2690 (18%) | 15005 (100%)\n#Qs obj_ans != gt_ans || C. 3233 (42%) | D. 4531 (58%) | 7764 (100%)\n\nThe numbers in the above table correspond to the number of questions. For example, A is the number of questions where Mobj’s output is equal to the ground-truth answer, leading to the correct final answer. \n\nFor a large number of questions (A: 54% of the 23K questions), PMN behaves as expected, with correct Mobj outputs leading to correct Mvqa outputs. As A > B and C < D, we can see that if Mobj produces the correct answer, it is more likely that the final answer of Mvqa is correct, and if Mobj produces an erroneous output, it is more likely that the final answer is incorrect. We would like to stress that this is a weakness not only of our model but of any other deep learning model. If some part of a model’s computation path is erroneous, it is more likely that the model performance suffers.\n\n\nB and C show cases where even though Mobj’s output was correct/incorrect, the final answer is incorrect/correct. One reason could be other contributing modules. Since PMN’s selection process is ‘soft’, using a softmax of importance scores, other modules could confuse Mvqa. This could be moderated in future work by employing a hard selection process with Gumbel softmax or reinforcement learning. It is also interesting to see that A (82%) is much greater than B (18%) while the difference between C (42%) and D (58%) is not as large. This suggests that other contributing modules might be helpful in situations where Mobj’s output is incorrect.\n\nIt is not easy to directly quantify the effect of erroneous outputs without ground-truth labels. We hope you find these experiments helpful, and if you have another experiment in mind to show this more clearly, we would be happy to do it.\n", "Thank you for the response. We address your concerns below.\n\n\n1. Performance \n\nWe believe the experiments section (Section 4.1), in particular Tables 1 (relationship detection), 2 (counting), and 3 (visual question answering), shows how tasks benefit from utilizing submodules and learning progressively. Also, PMN for the VQA task achieves 64.68% validation accuracy, which is an increase of more than 2.5% over the baseline (Table 3). This is without exploiting the additional questions from Visual Genome (which most other state-of-the-art models use, e.g. see Teney et al. 2018), and we do not employ additional data augmentation (Jiang et al. 2018). Since PMN is a general framework, we also do not use advanced VQA-specific techniques such as bilinear attention (Kim et al.
2018) or ensembling different architectures (Jiang et al. 2018).\n\n\n\n\n2. Erroneous outputs\n\nWe agree that PMN might not be perfect in avoiding erroneous outputs, just as children make mistakes in reasoning that lead to incorrect conclusions. The human evaluation (Section 4.2) measures the quality of the intermediate reasoning process, as PMN outputs with incorrect explanations are penalized by human workers. As shown in Table 5, PMN gets more partial marks than the baseline even when its final output is incorrect. This shows that the intermediate outputs are good. Moreover, as stated in the previous response, PMN can be naturally used as a plug-and-play model (see the \"plug-and-play architecture\" paragraph added in the experiments (Section 4.1)). Therefore, PMN has a very promising way of improving itself by utilizing better, less erroneous submodules, unlike other models. The query-and-answer level communication within PMN also makes it possible to incorporate human feedback.\n\n\n\n- Teney et al. Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge, 2018\n- Kim et al. Bilinear attention networks, 2018\n- Jiang et al. Pythia v0.1: the winning entry to the VQA challenge 2018, 2018 \n", "I appreciate the response from the authors.\n\nI accept the authors' arguments that PMNs are different from NMNs and static models, although I believe NMNs are proposed more generally and can still be designed with the progression property. And the explicit architecture design with the progression, and the experiments on it, should still be counted as the authors' contribution. I also agree that the interpretability of the model is improved compared to previous methods.\n\nHowever, there are several things I still do not agree with. To argue that it is beneficial to build a module for a task on top of other modules for lower-level tasks, the higher-level modules should show significantly improved performance compared to the other approaches. Otherwise, it can be thought that having an end-to-end black-box model should be enough without the progression, even though the concept of progression seems to be similar to the human learning process. To confirm this, I believe the improvements for intermediate tasks should also be measured, since the learning process is \"progressive\".\n\nI agree that it is possible that the model has the capability to learn to avoid utilizing erroneous intermediate outputs. However, I am not sure if the model can correctly identify the erroneous outputs through an unsupervised attention model. It is especially doubtful as the erroneous outputs are usually produced on hard examples. So the argument about the model's capability to avoid utilizing erroneous outputs via the attention process should be experimentally verified.\n\nOverall, I agree that I underestimated some of the paper's contributions and thus want to raise my score. But, at the same time, I still see some weak points in the arguments that may be resolved by more experiments.", "\n1. We clarified some notational ambiguities pointed out by Reviewer 3.\n\n2. We added an experiment demonstrating the plug-and-play nature of PMN as suggested by Reviewer 2.\n\n3. As we do not claim PMN is a generalization of Neural Module Networks, we edited the paper to remove the misunderstanding our wording may have caused. \n\nWe thank all reviewers for their valuable feedback.", "Thanks for the feedback. We hope to convince you that PMN is a framework for learning continuously from previous knowledge, and not just a solution to VQA.\n\n\n1. PMN vs.
NMN\n- We agree that PMN is not a generalization of NMN. However, we argue that it is not a special case of NMN either. We highlight two significant differences:\n\na) Progression: PMN is a framework that learns to do (visual) reasoning by starting with simpler tasks (object labels) and building up to more complex tasks (VQA). This is an important step towards building intelligent agents that continuously learn new tasks by using the tasks they are already good at. There is no sense of progression in NMN, and everything is learned from scratch. From the experiments related to the low data regime (Table 6), we see that PMN can make efficient use of available data to learn to communicate with experts and solve the task.\n\nb) Task Modules: Communication in PMN is at the *query-answer level*. Since module outputs are answers to other (human-designed) tasks, the process is easier to interpret (more human-readable). On the other hand, NMN’s modules, as showcased in their paper, contained one or two conv. or linear layers and solved sub-functions such as attention or classification.\n\nWe edited the paper to remove the misunderstanding our wording may have caused. \n\n\n2. PMN vs. Static models\n- In addition to the above differences from NMN, PMN has three more significant differences from static models ([A]).\n\na) Dynamic choice of modules: PMN’s state and importance function choose which modules to consider. This can go even further, and using a threshold, we may not execute some modules at all (during inference). Static models always go through the same steps.\n\nb) Information propagates in a tree-like fashion: A high-level module asks for some information from a lower module, which further produces queries for its own lower modules (see Fig. 1). For example, VQA calls counting, which calls relationship detection.\n\nc) Direct querying of lower modules: PMN produces explicit queries for lower tasks using the query transmitter Q (see Fig. 2, step 3). Based on the current state, it can choose to ask for information about a specific query that may be helpful to answer the question.\n\nThe inter-module communication and the computation graph are all learned.\n\n\n3. Modules with high-level tasks, Multiple datasets\n- Task-specific models are the default practice in machine learning. However, to have an intelligent agent that can learn a host of tasks over time, it is beneficial to have the tasks build on top of each other (See 1. (a)). This is similar to the human learning process where kids first learn object names and attributes, followed by increasingly harder tasks such as counting. Datasets in the community are typically focused on one specific task, and thus we are forced to use multiple datasets and annotations to progressively learn visual reasoning abilities. \n\n\n4. Minor improvements over black-box models, Interpretability\n- We encourage the reviewer to view our paper more holistically. Our main aim is to mimic challenging real scenarios in which we want to train agents to learn many tasks, increasing in their complexity, rather than squeezing numbers out of one particular dataset. The VQA dataset has a strong bias that is exploited by black-box models [B]. This is one of the reasons why PMN performs much better than the other models in the low data regime (Table 6) - the gap gets smaller with more data as black-box models learn to exploit dataset bias. The paper showcases how to learn tasks by progression and modularity. We hope this is interesting to readers beyond just the numbers.
The fact that the performance also improves is a nice bonus.\n \nWith respect to interpretability, the query-answer communication within PMN is more human-readable than in other models (See 1. (b)). For example, as shown in Fig. 2, it produces queries for the relationship module (bird, ‘on top of’) and the relationship module returns the box corresponding to 'bench'. Other examples such as Fig. 3, App. C&D, and the human evaluation concretely support the fact that the generated outputs are much more interpretable than standard attention maps. \n\n\n5. Error propagation\n- PMN combines information from the lower modules through importance scores. For a given set of questions, if a module produces erroneous outputs, PMN can learn to ignore such outputs and rely on other modules or its own residual.\n\n\n6. Captioning for VQA\n- In captioning, one describes the most salient aspects of the picture. For example, a caption “a married couple walking on the beach” provides answers to several questions ('are they married?', 'where are they?', etc). If the actual question relates to these, then the VQA module can simply leverage the information. In response to R2, we evaluated how well VQA can leverage ground-truth captions and see a large 2.0% improvement.\n\n\n[B] Agrawal et al. Overcoming Priors for VQA. 2018\n", "We thank the reviewer for the comments and feedback. We will also include the suggested experiment that shows the plug-and-play nature of PMN.\n\n1. Residual modules\n- Residual modules are small neural networks (e.g., an MLP for Mvqa, Sec. 3.4, (4)) that a task module may use when other lower-level modules are incapable of providing a solution to a given query. For example, consider the question “is this person going to be happy?” on an image of a person opening a present. Lower-level modules of Mvqa may not be sufficient to solve the question. Therefore, Mvqa would make use of its residual module, which would essentially learn to “pick up” all queries that lower-level modules cannot answer. \n\n2. Effect of fine-tuning\n- While it might be beneficial to fine-tune the modules for a specific parent task, we want each module to be an expert for its own task, as this facilitates a plug-and-play architecture. Fine-tuning may push the modules towards blindly improving the parent module's performance but (i) may badly affect the interpretability of inputs and outputs; and (ii) may also reduce the lower module's performance on its own task. Most importantly, it would not scale with the number of tasks, as for each task the agent would need to keep several fine-tuned modules of the lower tasks in memory.\n\n3. Feeding in the ground truth\n- Thanks for this great suggestion. We performed an experiment where we evaluate the benefits that the VQA model may achieve by using ground-truth captions instead of captions generated by the caption module. Our preliminary experiments show a gain of about 2.0%, which is a relatively high gain for VQA. \nThis points to important properties of PMN, allowing human-in-the-loop continual learning, where a human teacher can pinpoint flaws in the reasoning process and potentially help the model to fix them.\n", "We thank the reviewer for the comments and feedback. We will certainly clarify these points in the final paper.\n\n1. Title of the paper\n- We agree that the main highest-level task that we show is VQA, even though our method is more general.
Our title aimed to convey that we showcase PMN on a host of increasingly complex visual reasoning tasks such as relationship detection, counting, and captioning, as well as VQA. Our focus is on VQA as it happens to be one of the most complex visual reasoning tasks that can leverage each of the (relatively) simpler tasks.\n\n2. Description of variables\n- Thanks for the feedback. Epsilon denotes the environment; some of the definitions are given in Section 3, but we agree that the text can be somewhat challenging to interpret as there are many variables. We edited the text to introduce variables more gently and to explain the arrow sign. \n\n3. Query for the relationship module\n- The relationship module is fed an N-dimensional (corresponding to N image regions) one-hot vector as input during training. When it is called by other task modules (such as counting), an N-dimensional probability vector is computed using a softmax over image regions (see A.4, point 3), not using the importance scores. This acts as a soft version of the one-hot sampled vector so that we can backpropagate gradients.\n\n4. CIDEr score of captioning \n- That may be true to some extent. However, we think that explicit label information might still be useful, since the visual features (environment) are from Faster RCNN and contain diverse information such as edges, background, color, and size.\n\n5 and 6. Comparison with SOTA models for counting and relationship detection\n- To the best of our knowledge, Zhang et al. (2018) is the SOTA method on counting in the context of visual question answering. Our counting module leverages that but achieves higher performance on the number questions - 54.39% with ensembling and 52.12% without, vs. 51.62% of Zhang et al. (2018). Note that 51.62% of Zhang et al. (2018) is from a single highly regularized model that provides small gains from ensembling. This shows that additional modules help. Kim et al. (2018), which is concurrent to our work, shows similar performance. For the relationship detection task, other works such as Lu et al. (2016) unfortunately have a different setup, which makes direct comparison difficult.\n\n7. Table 4, accuracies are from Zhang et al. 2018\n- Yes, the numbers are from their paper. One possible explanation for this could be their use of high regularization for a single model instead of ensembling. Thus, the performance improvement from training on the train set (evaluating on validation) to training on train+val (evaluating on test-dev) is smaller.\n\n(Zhang et al. 2018) Learning to Count Objects in Natural Images for Visual Question Answering\n(Kim et al. 2018) Bilinear Attention Networks\n(Lu et al. 2016) Visual Relationship Detection with Language Priors\n", "[Summary]\nThis paper presents a multi-task learning approach for VQA that represents a solver for each task as a neural module that calls existing modules in a program-like manner. The authors manually design the task hierarchy and propose a progressive module network that recursively calls the lower modules and gathers the information by soft attention. The final prediction uses all the states and the question to infer the final answer. The authors verify the effectiveness of the proposed method on the performance of different tasks and modules. Experiments on VQA show the proposed model benefits from utilizing different modules. The authors also qualitatively show the model's reasoning process and run a human study judging answer quality.\n\n[Strength]\n1.
The proposed method is novel and explores using existing modules as black boxes for visual question answering. This is different from most existing work. \n\n2. By examining different modules, the proposed method is more interpretable compared to canonical methods. \n\n3. The experimental results are good, especially for the counting problem. \n\n[Weakness] \n1. The title of the paper is \"visual reasoning by progressive module networks.\" The title may be a little overstated since the major task is focused on visual question answering (VQA). \n\n2. Notation is not clear in this paper. For example, on page 3, Query transmitter and receiver, \"the output o_k = M_k(q_k) received from M_k is modified using receiver function as v_k = R_{k->n}(s^t, o_k). \" There are multiple new variables in this paragraph; without the dimension and meaning of each specified, it's really hard to understand. On page 4, State update function, what is the meaning of the variable \"Epsilon\" in the equation? From the supplementary, it seems Epsilon means the environment? \n\n3. On the object counting task, the query transmitter needs to produce a query for the relationship module. The authors mentioned that this is softly calculated by a softmax on the importance score. Since q_rel requires a one-hot vector as input, how is q_rel sampled given the importance score, and how is the gradient backpropagated in this case? \n\n4. The CIDEr score of image captioning is 109 compared to the baseline's 108. The explanation is that the COCO dataset has a fixed set of 80 object categories and does not benefit from training on the diverse data. Since the input visual feature is the same, the only difference is that the proposed model has an additional label embedding as input. My assumption is that the visual feature already contains the label information for image captioning. \n\n5. On the relationship detection task, is there a way to compare with the SOTA method on some specific data split? This would lead to much more convincing results. \n\n6. Similar to the above question, on the object counting task, is there a way to compare with previous counting methods? \n\n7. In Table 4, the accuracy on number questions for Zhang et al. is 49.39, which is higher than other methods, while on test-dev, the accuracy is 51.62, which is lower than others. Is the number right? ", "The paper proposes to learn task-level modules progressively to perform the task of VQA. Such task-level modules include object/attribute prediction, image captioning, relationship detection, object counting, and finally a VQA model. Using modules for reasoning allows one to visualize the reasoning process more easily and understand the model better. The results are mainly shown on the VQA 2.0 set, with a good amount of analysis.\n\n- I think overall this is a good paper, with clear organization, a detailed description of the approach, solid analysis of the approach, and cool visualization. I especially appreciate that the analysis is done taking into consideration the extra computation cost of the large model and the extra data used for visual relationship detection. I do not have major comments about the paper itself, although I did not check the technical details super carefully.\n\n- One thing I am confused about is the residual model, which seems quite important for the pipeline, but I cannot find details describing it or much analysis of this component. \n\n- I am in general curious to see whether fine-tuning the modules themselves can further improve performance.
It may be hard to do it entirely end-to-end, but maybe it is fine to fine-tune just a few top layers (like what Jiang et al. did)? \n\n- One great benefit of having a module-based model is the ability to feed in the *ground truth* output for some of the modules. For example, what benefit can we get if we have perfect object detection? How far can we get if we have perfect relationships? This can help us not only better understand the models, but also the dataset (VQA) and the task in general. " ]
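
As a side note for implementers, the "soft selection via softmax of importance scores" that recurs throughout the PMN discussion above can be sketched in a few lines. The bilinear scorer, the dimensions, and all names below are illustrative guesses, not the paper's actual parameterization.

```python
import torch
import torch.nn as nn

DIM, N_SUB = 64, 4
scorer = nn.Bilinear(DIM, DIM, 1)   # importance function scoring (state, answer) pairs

def combine(state, sub_outputs):
    # sub_outputs: list of N_SUB sub-module answers, each of shape (DIM,)
    outs = torch.stack(sub_outputs)                      # (N_SUB, DIM)
    scores = scorer(state.expand(N_SUB, DIM), outs)      # (N_SUB, 1) importance scores
    weights = torch.softmax(scores.squeeze(-1), dim=0)   # soft selection over modules
    # A hard-but-still-differentiable variant would replace the softmax with
    # torch.nn.functional.gumbel_softmax(scores.squeeze(-1), hard=True).
    return weights @ outs, weights                       # mixed answer and weights

state = torch.randn(DIM)
answers = [torch.randn(DIM) for _ in range(N_SUB)]
mixed, w = combine(state, answers)
```

Because the mixture is soft, every sub-module's answer contributes a little to the final state, which matches the authors' remark above that other contributing modules can either confuse or rescue the final answer when one module is wrong.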
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "HygakjZyl4", "SJlY63MSpX", "iclr_2019_B1fpDsAqt7", "BkgO1UGq07", "Hygs-DpF0X", "S1enb1hOCm", "BylkHTN_RX", "rygVoa6DRQ", "SyeoH17BTQ", "iclr_2019_B1fpDsAqt7", "SJgyyPTv3m", "r1eaptK5hm", "H1gUmqkh3Q", "iclr_2019_B1fpDsAqt7", "iclr_2019_B1fpDsAqt7" ]
iclr_2019_B1g30j0qF7
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes
There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs). This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP. In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels. We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible. Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical. As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case. We confirm experimentally that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation.
accepted-poster-papers
There has been a recent focus on proving the convergence of Bayesian fully connected networks to GPs. This work takes these ideas one step further by proving the equivalence in the convolutional case. All reviewers and the AC are in agreement that this is interesting and impactful work. The nature of the topic is such that experimental evaluations and theoretical proofs are difficult to carry out in a convincing manner; however, the authors have done a good job of it, especially after carefully taking into account the reviewers’ comments.
val
[ "SklzQ5QfgN", "Skluw57MeN", "SygaK6RU14", "SJgkvoCrJE", "BkxQiXRYhX", "BJgxlnn4yV", "BJejjc3VkV", "SkgZdQ0t37", "rJx7Qcjq07", "HJgaDeh9R7", "S1lesx2c0X", "Hkes7gncAm", "r1eI1gn5Cm", "ryeVtyn907", "Bkg_Xk290X", "BygZgkn5RQ", "BJl_qRsqCQ", "BygYYYicRX", "H1xWncjq0Q", "ryl9t5scRQ", "BkgW5ui5Cm", "rke_hxhHhX" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "------------------------------------------------------------------------------------\n>>> - Demonstrate through some sample figures that GP-CNN with pooling achieves invariance while GP-CNN with out pooling fail to capture it.\n\nThank you for the suggestion, we are working on and are planning to include covariance visualizations on toy data in the next revision.\n\n------------------------------------------------------------------------------------\n>>> - Is the best result on CIFAR-10 achieved using the proposed method ? See Deep convolutional Gaussian processes by Kenneth Blomqvist, Samuel Kaski, Markus Heinonen\n\nWe cite them in related work and point out that their model (as are other deep GPs) is not a GP but a more complex and expressive probabilistic model. To the best of our knowledge, our result is SOTA on CIFAR10 for GPs without trainable kernels.\n\n------------------------------------------------------------------------------------\n>>> - Include the results with CNN-GP both with pooling and without pooling in Table 1 and Table 2. \n\nAs mentioned above, we do not have these results since running a CNN-GP with pooling is prohibitively expensive on such large datasets, especially for a large-scale grid search as was done for Table 2 (see A.7.5)\n\n------------------------------------------------------------------------------------\n>>> - Provide the results of best SGD trained CNN against CNN-GP, both with pooling, as in Figure 3.c. Is the same trend observed in this case also ?\n\nPlease see our comments above - we believe evaluating CNN-GP with pooling on complete datasets for such a large grid to lie beyond the scope of this work.\n\n------------------------------------------------------------------------------------\n>>> - Experimental comparison and results on other Image datasets, specifically MNIST. Does the same observations hold on MNIST too ? \n\nWe have only run our large-scale grid searches on CIFAR10 since this is the dataset that benefits from the convolutional architecture the most (among considered) and allows to confidently distinguish the performance of different models (see e.g. Table 1). We expect the general trends to generalize to other image datasets.\n\n------------------------------------------------------------------------------------\n>>> [...] ( Axis labels are missing for some figures in Figure 3, \n\nSince all plots share common axes and ranges, we only displayed the title of the x-axis (“#Channels”) at the bottom once, and the title of the y-axis (“Validation accuracy”) in the center to avoid clutter. We can fix it in the next revision.\n\n------------------------------------------------------------------------------------\n>>> and provide legends wherever possible). \n\nPlease note that it is not practical to have a complete legend in Figures 3.a and 3.c due to each point representing one of many different hyper-paramater settings (if you refer to other Figures, please let us know which ones).\n\n------------------------------------------------------------------------------------\n>>> The is also an ambiguity in what CNN-GP refers to, with pooling to without pooling.\n- The term CNN-GP is overloaded in many places in the experimental section. I guess in Table 1, its CNN-GP without pooling, while in Table 2, its CNN-GP with pooling. Kindly make the distinction clear in the nomenclature itself, by calling one of them by a different name. 
It's also not clear, when they mention an SGD-trained CNN, whether it is with pooling or without pooling.\n\nThroughout the work, pooling is only used if explicitly mentioned (e.g. “CNN-GP w/ pooling”, “CNN w/ pooling”, etc.). Otherwise CNN(-GP) is without pooling. We will make this more explicit in the next revision.\n\n------------------------------------------------------------------------------------\n>>> - What is the difference between the top and bottom pair of figures in Figure 3 (b)? Why is the GP performance different in the top and bottom cases?\n\nAs per the text labels to the left of the table, the top are LCNs (locally-connected networks, CNNs without weight sharing), while the bottom are regular CNNs. If pooling is present, LCNs and CNNs result in different respective GPs, hence different performance (see the discussion in section 5.1). We will make the text labels more noticeable in the next revision.\n\n------------------------------------------------------------------------------------\n>>> - What do 10, 100, 1000 correspond to in Figure 3? Please explain it in the caption.\n\nThe numbers are the depth, as indicated by the label in the top-left, and the caption mentions it as well. We will make this more explicit in the next revision.", "Thank you for your very comprehensive and encouraging review! Please find our replies to your specific comments below.\n\n------------------------------------------------------------------------------------\n>>> A discussion on reasons for the best CNN to give better performance than the GP-CNN (especially with pooling), and an experimental comparison with finite-width Bayesian CNNs would have made the paper more concrete. \n\nPlease note that we have only observed this difference in performance for the CNN-GP without pooling, which we explain in the discussion (sections 5.1 and 5.3). We don't make any claims regarding comparisons between the CNN-GP with pooling and the respective CNN. We emphasize that CNN-GP throughout the paper refers to a no-pooling architecture, and in the rare cases where we evaluate it with pooling we say so explicitly (e.g. “MC-CNN-GP with pooling”, “Global average pooling”). We will make this more clear in the next revision.\n\n------------------------------------------------------------------------------------\n>>> [...]- (Page 5) on convergence of K^l : From Equations (3) and (4), it can be seen that K^l converges to C(K^{l-1}), with C(K^{l-1}) defined slightly differently from the paper, in that the expectation over z is taken w.r.t. z ~ N(0, A(K)) instead of z ~ N(0, K). Is this equivalent to the expressions (7) and (8) described in the paper for a non-linear function \\phi?\n\nYou are correct, it is exactly equivalent (\\circ represents composition). We will make this step more clear in the next revision.\n\n------------------------------------------------------------------------------------\n>>> - Experimental comparison with Bayesian CNN, demonstrating the effect of increasing the number of channels.\n\nThank you for the suggestion, we agree such experiments would be highly relevant. However, the computational requirements of training Bayesian CNNs prohibit us from performing experiments in the many-channel setting, which is on the other hand tractable with SGD.
Additionally, the connection between SGD and Bayesian inference is an area of active and sometimes contradictory research in the ML community currently, and we believe our experimental results comparing the NN-GP to SGD-trained networks will therefore be of significant interest.\n\n------------------------------------------------------------------------------------\n>>> - (Page 7) GP-CNN with pooling: the paper proposes subsampling one particular pixel to improve computational efficiency. Have any experiments been performed to evaluate the performance of this approach? How accurate is this approach?\n\nPlease note that this approach (section 3.2.2) is only related to pooling (section 3.2.1) in that both approaches are particular cases of projection (section 3.2). Subsampling the center pixel is instead more similar to vectorization (section 3.1) in terms of both performance and compute, and is compared to other methods in Figure 1 (blue curve).\n\n------------------------------------------------------------------------------------\n>>> - Discussion on the positive semi-definiteness of the recursive GP-CNN kernel\n\nThank you for the suggestion, we will include an explicit derivation of this property in the next revision.\n\n------------------------------------------------------------------------------------\n>>> - More explanations on why the best SGD-trained CNN gives a better performance than GP-CNN, especially with pooling. Could the Monte-Carlo approximation of the GP-CNN kernel computation impact this performance? I suppose the hyper-parameters of the GP-CNN kernel are not learnt from the data; could this result in a lower accuracy?\n\nPlease see our comment above - we did not evaluate the CNN-GP with pooling on the whole CIFAR10 dataset, since it was prohibitively expensive. The explanation for the difference in performance between the best CNN and best CNN-GP (both without pooling) is given in sections 5.1 and 5.3. Whenever we evaluate a CNN-GP with pooling, it is stated explicitly.\n\n------------------------------------------------------------------------------------\n>>> - Discussion on learning the hyper-parameters of the GP-CNN kernel and its impact on the performance of the model. \n\nThank you for the suggestion. This work was primarily focused on comparing CNNs and their respective CNN-GPs, hence we only considered CNN-GP parameters that follow directly from the respective CNN architecture, and are learned only via a grid search (non-linearity, depth, weight and bias variance, pooling). It would indeed be very interesting in future work to do gradient descent on the GP/NN likelihood w.r.t. weight and bias variance, as well as parameterizing the nonlinearity in a differentiable way, and comparing these models.\n\nRelatedly, we would like to also draw your attention to the discussion in Appendix A.2, where we link the hyperparameters of the CNN-GP kernel to previous work in deep information propagation.", "The paper establishes a connection between infinite-channel Bayesian convolutional neural networks and Gaussian processes. The authors prove that taking the number of channels in a Bayesian CNN to infinity leads to a GP with a specific kernel (GP-CNN) and provide a Monte Carlo approach to evaluate the kernels when they are intractable. They show that without pooling the kernel fails to maintain the equivariance property that is achievable with a CNN without pooling. GP-CNN with pooling maintains the invariance property.
They make an extensive experimental comparison with CNNs, demonstrating that as the number of channels becomes large, CNNs achieve performance close to a GP-CNN. A discussion on reasons for the best CNN giving better performance than the GP-CNN (especially with pooling), and an experimental comparison with finite-width Bayesian CNNs would have made the paper more concrete. The paper has both strong theoretical and experimental contributions, and is also very relevant to the ICLR conference.\n\nQuality\n\nThe paper provides a theoretical connection between Bayesian CNNs with infinitely many channels and Gaussian processes with a recursive kernel (GP-CNN). The derivations and arguments seem correct. The experiments are conducted comparing the performance of SGD-trained CNNs with GP-CNN and other models, mainly on the CIFAR-10 data set. \nHowever, some discussion and clarity on the following points will be useful to improve the paper.\n\n- (Page 5) on convergence of K^l: From Equations (3) and (4), it can be seen that K^l converges to C(K^{l-1}), with C(K^{l-1}) defined slightly differently from the paper, in that the expectation over z is taken w.r.t. z ~ N(0; A(K)) instead of z ~ N(0; K). Is this equivalent to the expressions (7) and (8) described in the paper for a non-linear function \phi?\n- Experimental comparison with Bayesian CNN, demonstrating the effect of increasing the number of channels.\n- (Page 7) GP-CNN with pooling: the paper proposes subsampling one particular pixel to improve computational efficiency. Have any experiments been performed to evaluate the performance of this approach? How accurate is this approach?\n- Discussion on the positive semi-definiteness of the recursive GP-CNN kernel\n- More explanations on why the best SGD-trained CNN gives a better performance than GP-CNN, especially with pooling. Could the Monte-Carlo approximation of the GP-CNN kernel computation impact this performance? I suppose the hyper-parameters of the GP-CNN kernel are not learnt from the data; could this result in a lower accuracy?\n- Discussion on learning the hyper-parameters of the GP-CNN kernel and its impact on the performance of the model. \n- Demonstrate through some sample figures that GP-CNN with pooling achieves invariance while GP-CNN without pooling fails to capture it.\n- Is the best result on CIFAR-10 achieved using the proposed method? See Deep convolutional Gaussian processes by Kenneth Blomqvist, Samuel Kaski, and Markus Heinonen\n- Include the results with CNN-GP both with pooling and without pooling in Table 1 and Table 2. \n- Provide the results of the best SGD-trained CNN against CNN-GP, both with pooling, as in Figure 3.c. Is the same trend observed in this case also?\n- Experimental comparison and results on other image datasets, specifically MNIST. Do the same observations hold on MNIST too? \n\nClarity\n\nThe paper is relatively well written and clearly provides the main ideas leading to the results. However, the notation could have been more succinct, and figures could have been more legible (Axis labels are missing for some figures in Figure 3, and provide legends wherever possible). There is also an ambiguity in what CNN-GP refers to, with pooling or without pooling.\n- The term CNN-GP is overloaded in many places in the experimental section. I guess in Table 1, it's CNN-GP without pooling, while in Table 2, it's CNN-GP with pooling. Kindly make the distinction clear in the nomenclature itself, by calling one of them by a different name.
It's also not clear, when they mention an SGD-trained CNN, whether it is with pooling or without pooling. \n- What is the difference between the top and bottom pair of figures in Figure 3 (b)? Why is the GP performance different in the top and bottom cases?\n- What does 10, 100, 1000 correspond to in Figure 3? Please explain it in the caption.\n\nOriginality\n\nPrevious works of Lee et al. and Matthews et al. (2018) had shown the equivalence between Deep Neural Networks and GPs. This paper has extended it to the deep convolutional neural network setting, but is interesting in its own way. They have come up with an equivalent kernel corresponding to infinitely wide Bayesian convolutional neural networks and provided a Monte Carlo approach to compute it. Along with the theoretical contribution, they have also provided an extensive experimental comparison. \n\nSignificance\n\nThe paper has made significant contributions connecting Bayesian convolutional neural networks with Gaussian processes, in deriving the equivalent kernel for GPs, and in demonstrating the performance of the proposed approach on image datasets.", "Overall Score: 7/10.\nConfidence Score: 3/10. (This paper includes so many ideas that I have not been able to verify that they are right due to my limited knowledge, but I think that they are correct).\n\nSummary of the main ideas: This paper establishes a theoretical correspondence between BCNNs with many channels and GPs and proposes a Monte Carlo method to estimate the GP corresponding to a NN architecture. It is a very strong and complete paper since it gives both theoretical and experimental content. I think that it is a really good result that should be read by anyone interested in Neural Network and GP equivalences, and that Machine Learning in general needs these kinds of papers that establish such complicated equivalences.\n\nRelated to: The work by Lee et al. and Matthews et al. (2018) regarding the equivalence between Deep Neural Networks and GPs, and the Convolutional Neural Network framework.\n\nStrengths:\nTheoretical, experimental, and methodological content (even a Monte Carlo approach) make it a very complete paper.\nHaving been able to establish complicated and necessary equivalences.\n\nWeaknesses:\nVery difficult for newcomers or non-expert technical readers.\n\nDoes this submission add value to the ICLR community?
: Yes, it adds, and a lot.\n\nQuality:\nIs this submission technically sound?: Yes it is, it is a necessary step in GP-NN equivalence research.\nAre claims well supported by theoretical analysis or experimental results?: Yes, quite sure.\nIs this a complete piece of work or work in progress?: Complete piece of work.\nAre the authors careful and honest about evaluating both the strengths and weaknesses of their work?: Yes, they are.\n\nClarity:\nIs the submission clearly written?: Yes, but I suggest giving formal introductions to some concepts in the introduction and including a figure with the ideas given or the equivalences.\nIs it well organized?: Yes, although sometimes sections feel a little bit put one after another. More cohesion would be added if they were introduced before.\nDoes it adequately inform the reader?: Yes.\n\nOriginality:\nAre the tasks or methods new?: The Monte Carlo approach is new, the other methods are not, but the task of the equivalence is new.\nIs the work a novel combination of well-known techniques?: It is kind of a combination, but the proposed ideas are new; it is very theoretical.\nIs it clear how this work differs from previous contributions?: Yes, the authors take care to explain it clearly.\nIs related work adequately cited?: Yes, this is a huge positive point of the paper.\n\nSignificance:\nAre the results important?: From my point of view, yes they are.\nAre others likely to use the ideas or build on them?: I think so, because the topic is hot right now.\nDoes the submission address a difficult task in a better way than previous work?: It is a new task.\nDoes it advance the state of the art in a demonstrable way?: Yes, clearly.\nDoes it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?: Yes, the theoretical approach is sound.\n\n\nArguments for acceptance: It is a paper that provides theory, methodology and experiments regarding a very difficult and challenging task, that adds value to the community, and that makes progress in the area of the equivalence between NNs and GPs.\n\nArguments against acceptance: I do not have any.\n\nTypos:\n\n-> Define the channel concept in the introduction.\n-> Put the best results of the experiments in bold.\n-> Why not put \"deep\" in the title?\n-> In the introduction, formally introduce a CNN (briefly).\n-> Define the many-channel limit.\n-> Put a figure with the equivalences and with the contents of the paper explaining a bit.\n\n\n\nAfter rebuttal:\n=============\n\nThe authors have addressed many topics that not only I but also rev 3 raised, and hence I score this paper with a 7 and recommend it for publication.", "Dear Reviewer,\n\nThank you again for your very thorough and insightful review. We believe we have effectively implemented most of your very helpful suggestions, by expanding the discussion in the text and improving the clarity of figures and exposition. Both reviewers 1 and 3 have raised their scores on the strength of our rebuttal and paper improvements. We are wondering if you also feel that we have significantly improved our paper, and if so whether you would be willing to increase your score as a result. \n\nThank you for your consideration!", "Dear Reviewer,\n\nThank you again for your thoughtful reading and review. We believe you lowered your score due to the justified technical concerns raised by Reviewer 3. However, we have now updated the paper to address those specific issues. Reviewer 3 is satisfied with our response, and has raised their score by 4 points. Would you now be willing to restore your original score, since we have addressed all the open technical concerns, and both other reviewers are now voting for acceptance?\n\nThank you very much for your consideration!", "Thank you for promptly reviewing our revision!\n\n------------------------------------------------------------------------------------\n>>> - (p.20, A.5.1) To ensure the random variables are well-defined, please state explicitly which sigma algebra is F (I am assuming the product Borel sigma-algebra + the relevant definitions of the random variables).
This is important for the reader to understand what convergence in distribution on this particular space does and does not imply. \n\nWill do.\n\n------------------------------------------------------------------------------------\n>>> Some readers might also appreciate it if you used the mentioned \"infinite width, finite fan-out, networks\" (Matthews et al.) construction (or similar) which would ensure that the collection of random variables {z_i^l}_{i \\in N*} is well-defined for any network width and l, which currently does not seem to be the case according to Eqs. (28-29). If the full countably infinite vectors of random variables are not defined for all networks in the sequence, it is not possible to prove their convergence in distribution to the relevant GPs.\n\nThank you, we agree that currently the construction process in A.5.3 is not explicit enough to define the countably-infinite collection {z_i^{l, \\infty}}_{i \\in N*} (as you point out below in more detail), and we will make it so in the next revision.\n\n------------------------------------------------------------------------------------\n>>> - (p.21, A.5.3) Thank you for clarifying the definition of elements of the sequential limit. If possible, I would further recommend first fixing the probability space and then defining the random variables (the argument just before Theorem A.2 seems somewhat circular as R.V.s should first be defined on some space, and not put on a probability space post-hoc; perhaps some product space with the product sigma-algebra would work here?!). \n\nThank you for the suggestion. One can define {z_i^{l, \\infty}}_{i \\in N*} in the place-holder (A.5.1.iii) before defining the neural networks. This avoids reconstructing the probability space / apparent circularity. We will make sure to be more explicit about it in the next revision.\n\n------------------------------------------------------------------------------------\n>>> Furthermore, if I understand correctly, there are now L sequences of neural networks (one sequence for networks with 0, ..., L-1 \"infinite layers\"), rather than a single sequence, and the \"infinite layers\" are squashed into a single \"infinite layer\" which is represented by z_i^\\infty? In other words, all the infinite layers are replaced by iid samples from a particular GP and only the finite layers have the standard neural network structure? If I am mistaken (or not), perhaps a further explanatory footnote would help the reader.\n\nYou are correct, and we will elaborate on this more in the next revision. This is an inconvenience of the sequential limit approach, since the outputs of any hidden layers only converge in distribution and not necessarily almost surely (pointwise). Thus we have to re-define/construct them. We believe this inconvenience to be present in all prior / concurrent work using the sequential limit. It might be possible to circumvent this issue with the help of Skorokhod’s Representation Theorem.\n\n------------------------------------------------------------------------------------\n>>> - (p.21, A.5.3 & p.23, A.5.4) Thank you for improving the discussion of joint convergence. Please clarify that proving convergence for any finite m is sufficient for proving convergence in distribution of the countably infinite vector {z_i}_{i \\in N*} for the **product Borel sigma-algebra** (e.g. using an argument like the one on p.19 of Billingsley (1999)).\n\nWill do, thank you for pointing this out.
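For concreteness, the argument we have in mind is the standard one (a sketch only, with t indexing network width as elsewhere in this thread): on the sequence space R^N equipped with the product topology, the Borel sigma-algebra coincides with the product sigma-algebra, and weak convergence is characterized by the finite-dimensional distributions (Billingsley, 1999, p.19):

\[ \big(z_i^{(t)}\big)_{i \in \mathbb{N}} \xrightarrow{d} \big(z_i\big)_{i \in \mathbb{N}} \quad \Longleftrightarrow \quad \big(z_{i_1}^{(t)}, \dots, z_{i_m}^{(t)}\big) \xrightarrow{d} \big(z_{i_1}, \dots, z_{i_m}\big) \ \text{ for every finite } \{i_1, \dots, i_m\} \subset \mathbb{N}. \]

Hence establishing convergence of every finite-dimensional marginal suffices for convergence in distribution of the countably infinite vector {z_i}_{i \in N*}.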
\n\n------------------------------------------------------------------------------------\n>>> - (p.21) \"Uniformly square-integrable\": to me, this phrase suggests that the collection of squares of the functions has to be uniformly integrable but the definition in Eq. (27) only states one of the conditions in the definition of uniform integrability. Please clarify that \"uniform square-integrability\" here is not related to the standard notion of \"uniform integrability\" in the literature.\n\nThanks. Will do. ", "****Reply to authors' rebuttal****\n\nDear Authors,\n\nI greatly appreciate the effort you have put into the rebuttal. The changes you have made have addressed most of my concerns and I believe that the few outstanding ones can be fixed without significantly affecting the main message of the paper. I will thus be recommending acceptance of the paper.\n\nBest wishes,\nRev 3\n\n\nSeveral remarks on the updated version:\n\n- (p.20, A.5.1) To ensure the random variables are well-defined, please state explicitly which sigma algebra is F (I am assuming the product Borel sigma-algebra + the relevant definitions of the random variables). This is important for the reader to understand what convergence in distribution on this particular space does and does not imply. Some readers might also appreciate it if you used the mentioned \"infinite width, finite fan-out, networks\" (Matthews et al.) construction (or similar) which would ensure that the collection of random variables {z_i^l}_{i \\in N*} is well-defined for any network width and l, which currently does not seem to be the case according to Eqs. (28-29). If the full countably infinite vectors of random variables are not defined for all networks in the sequence, it is not possible to prove their convergence in distribution to the relevant GPs.\n\n- (p.21, A.5.3) Thank you for clarifying the definition of elements of the sequential limit. If possible, I would further recommend first fixing the probability space and then defining the random variables (the argument just before Theorem A.2 seems somewhat circular as R.V.s should first be defined on some space, and not put on a probability space post-hoc; perhaps some product space with the product sigma-algebra would work here?!). Furthermore, if I understand correctly, there are now L sequences of neural networks (one sequence for networks with 0, ..., L-1 \"infinite layers\"), rather than a single sequence, and the \"infinite layers\" are squashed into a single \"infinite layer\" which is represented by z_i^\\infty? In other words, all the infinite layers are replaced by iid samples from a particular GP and only the finite layers have the standard neural network structure? If I am mistaken (or not), perhaps a further explanatory footnote would help the reader.\n\n- (p.21, A.5.3 & p.23, A.5.4) Thank you for improving the discussion of joint convergence. Please clarify that proving convergence for any finite m is sufficient for proving convergence in distribution of the countably infinite vector {z_i}_{i \\in N*} for the **product Borel sigma-algebra** (e.g. using an argument like the one on p.19 of Billingsley (1999)).\n\n- (p.21) \"Uniformly square-integrable\": to me, this phrase suggests that the collection of squares of the functions has to be uniformly integrable but the definition in Eq. (27) only states one of the conditions in the definition of uniform integrability.
Please clarify that \"uniform square-integrability\" here is not related to the standard notion of \"uniform integrability\" in the literature.\n\n\n\n\n\n****Summary****\n\nThis paper extends recent results on convergence of Bayesian fully connected networks (FCNs) to Gaussian processes (GPs), to the equivalent relationship between convolutional neural networks (CNNs) and GPs. This is currently an area of high interest, with Xiao et al. (2018) examining the same relationship from a mean-field perspective, and two other concurrent papers making contributions:\n\nhttps://arxiv.org/abs/1808.05587\nhttps://arxiv.org/abs/1810.10798\n\nThus the scope of the paper fits well within the aims of the conference.\n\nI really appreciate that the authors did not shy away from studying the effect of pooling layers, and find the connection to locally connected networks they describe intriguing and insightful. On the experimental side, the investigation of the relative importance of compositionality, equivariance and invariance on performance of CNNs is very interesting.\n\nThese experiments and investigations are however based on a theoretical foundation which suffers from several issues. The main problems are an incorrect proof of convergence of the joint distribution of filters, and an improper use of convergence in probability in cases where random variables do not share a common underlying probability space. Unfortunately, either of these by itself invalidates the main theoretical claims which is why I am recommending rejection of the paper.\n\nHowever, I believe that the argument in (A.4.3) can potentially be rectified, and, as I detail below, is of greater interest to the community relative to the ones in (A.4.1) and (A.4.2). If this is accomplished and the proofs in (A.4.1) and (A.4.2) are either also fixed or left out (A.4.3 is sufficient to justify the claims in the main body), I am willing to significantly improve my rating of this paper and potentially recommend acceptance. For this reason, a \"detailed comments\" section is appended at the end of the standard review where the technical issues are described in much greater detail.\n\n\n****General comments****\n\n**Bayesian vs. infinite neural networks**\n\nThe main theoretical claims concerning the relationship between Bayesian CNNs and GPs are within Section 2. Therein on top of page 4, the authors say \"In Appendix A.4 we give several **alternative** derivations of the correspondence\" (emphasis mine), and then progress to outline the skeleton of the argument (A.4.2) in Sections 2.2.1-2.2.3. Section 2.2.3 is concluded by statement of the main theoretical result of this paper, Eq. (10), which comes from (A.4.3) and can only be linked to the rest of Section 2 through the claim of equivalence between the \"alternative derivations\" (A.4.1), (A.4.2) and (A.4.3). The problem is that the equivalence claim does not hold, as explained below:\n\nThe most important distinction here is between what I will call a \"sequential\" and a \"simultaneous\" limit. In the \"sequential\" case (A.4.1 & A.4.2, Sections 2.2.1-2.2.3), layers are taken to infinity one by one, whereas in the \"simultaneous\" case (A.4.3, used to obtain the result concluding Section 2.2.3) all layers are **finite** for **all** members of the sequence, growing in width simultaneously.\n\nThe \"simultaneous\" limit (A.4.3) is in my view more interesting as it tells us that **finite** BNNs do indeed converge to GPs in distribution, i.e. 
that for each expectation of a continuous bounded function of the outputs of the limiting GP, there exists a BNN with a **finite** number of neurons in **each** layer for which the expectation of the same function is arbitrarily close. From a practical perspective, \"simultaneous\" limit tells us that inference algorithms for BNNs (which can be inaccurate and/or computationally expensive) can sometimes be replaced by exact or approximate inference algorithms for the limiting GP (cf. Section 5 in (Matthews et al., 2018, extended version)).\n\nThe \"sequential\" limit (A.4.1 & A.4.2) on the other hand does not establish existence of finite BNNs arbitrarily close to a particular GP, or justify use of the GP limit as approximation for finite BNNs as above. This is because the width of individual layers goes to infinity in a sequence from first to last. This means that most of the networks that constitute the sequence converging to the GP will have **one or more infinitely wide layers** and thus do not correspond to the finite BNNs we usually work with. In other words, \"sequential\" limit can only ever establish that there exists a network with **all but the final hidden layer infinite** that is arbitrarily close to the limiting GP. The only case where \"sequential\" and \"simultaneous\" limits agree is thus in the single hidden layer case first studied by Neal (1996). I will call the networks with one or more infinite layers \"infinite networks\", inspired by the work of Williams (1997) and others. Notice that infinite networks cannot be described by Eqs. (1) and (2) as the weights would be zero with probability one and thus output of the network would only depend on biases. It is not immediately obvious how to formally replace Eqs. (1) and (2) in the case of infinite networks which is one of the technical issues with the approaches in (A.4.1) and (A.4.2) (see the detailed comments section for further discussion).\n\nOthers may of course disagree and find \"sequential\" limits more interesting, but if the authors wish to keep the description of (A.4.2) in the main paper (Sections 2.2.1-2.2.3), it would be highly beneficial if readers were given the opportunity to understand the differences between the two types of limits so that they can form their own judgement. The authors should then also make clearer that the approach described in Sections 2.2.1-2.2.3 cannot be used to obtain the final result, Eq. (10). I would rather recommend reworking Sections 2.2.1-2.2.3 based on the \"simultaneous\" limit argument in (A.4.3) which unlike the current one can justify the result in Eq. (10) stated at the end.\n\n\n**Other comments**\n\n- (p.2, top) You say your results are \"strengthening and extending the result of Matthews et al. (2018)\" which is somewhat confusing. Matthews et al. prove a result for FCNs whereas this paper focuses on CNNs. Extension of (A.4.3) to FCNs may well be possible but is not included in this paper. Results in (A.4.1) and (A.4.2) are for the \"sequential\" whereas Matthews et al. study the \"simultaneous\" limit. Further differences:\n\t- Matthews et al. prove convergence for any countable rather than only finite input sets.\n\t- In Matthews et al.'s work, Gaussianity is obtained through use of a particular version of CLT, whereas this work exploits Gaussianity of the prior over weights and biases. 
Going forward, an extension to more general priors/initialisations (like uniform or any sub-Gaussian) is likely to be easier using the CLT approach.\n\t- Matthews et al.'s assumption on the activation functions is independent of the input set (p.7, Definition 1), whereas this work uses an assumption that is explicitly dependent on input (Eq. (37)) which might be potentially difficult to check.\n\n- (p.15, A.2 end) Should also mention Titsias (2009), \"Variational Learning of Inducing Variables in Sparse Gaussian Processes\", as a classical reference for approximate GP inference.\n\n\n****Questions****\n\n- (Section 4) Can you please provide more details on the MC approximation? Specifically, is only the last kernel approximated, or rather all of them, sequentially resampling from the Gaussian with empirical covariance in each layer? In case you tried, is there any qualitative or quantitative difference between the two approaches?\n\n- (Section 4 and Appendix A) Daniely et al. (2016) assume that the inputs to the neural network are l^2 normalised. You mention that the inputs have been normalised in the experiments (A.6). Is this assumption used in any of your proofs? Have you observed that l^2 normalisation improves empirical performance?\n\n- (p.8, Figure 6) How was \"the best CNN with the same parameters\" selected? If training error is zero for all, was it selected by validation accuracy? I was assuming that what is plotted is an estimate of the **expected** generalisation error, whereas the above selection procedure would be estimating the supremum of the support of the generalisation error estimator, which does not seem like a fair comparison. Can you please clarify?\n\n- (p.8 and A.6) Why were only neural networks with zero training loss allowed as benchmarks? How did the ones with non-zero training error fare in comparison? Can you please expand on footnote 3?\n\n- (p.8, last sentence) \"an observation specific to CNNs and FCNs or LCNs\": Matthews et al. (2018, extended version) observed in Section 5.2 that BNNs and their corresponding GP limits do not always perform the same even in the FCN case (cf. their Figure 8). Their paper unfortunately does not compare to equivalent FCNs trained by SGD. Have you experimented with or have an intuition for whether the cases where SGD-trained models prevail coincide with the cases where BNNs+MCMC posterior inference outperform their GP limit?\n\n- (p.15, Table 3) The description says you were using erf activation (instead of the more standard ReLU): why? Have you observed any significant differences? Further, how big a proportion of the values in the image is black due to the numerical issues mentioned in A.6.4?\n\n- (p.18, just after Eq. 39) Use of PSD_{|X|d} in (A.4.3) suggests this proof assumes \"same\" padding is used?! Does the proof generalise to any padding/changing dimensions of filters inside the network?\n\n- (A.6) Can you comment on the pros & cons of \"label regression\" for classification and how it compares with approximate inference when softmax is put on top of a GP (perhaps illustrating by a simple experiment on a toy dataset)?\n\n\n[end of standard review]\n\n\n\n\n[detailed comments]\n\n****Technical concerns****\n\nNotation-wise, I would strongly encourage incorporating the dependence on network width into your notation, at the very least throughout the appendix.
It would greatly reduce the amount of mental book-keeping the reader currently has to do, and significantly increase clarity at several places.\n\nOne of my main concerns is that the random variables and their underlying probability space are never formally set up. This is problematic because convergence in probability is only defined for random variables sharing the same underlying space. At the moment, networks with different widths are not set up to share a probability space. The practical implication for the approaches relying on convergence in probability of the empirical covariance matrices K is that the convergence in probability is not well-defined exactly because the empirical covariance matrices are not set up on the same underlying probability space. A possible way to address this issue is to use an approach akin to what Matthews et al. (2018, extended version) call \"infinite width, finite fan-out, networks\" on page 20. This puts the networks on the same underlying space, and because the empirical covariance matrices are measurable functions of thus defined random variables, they will also share the same underlying probability space.\n\nAlso regarding convergence in probability, please state explicitly with respect to which metric the convergence is considered when first mentioned (A.4.3 is explicitly using l^\\infty; A.4.2 perhaps l^2 or l^\\infty?), and make any necessary changes (e.g. show continuity of the mapping C in A.4.2).\n\nAt several places within the paper, you state that the law of large numbers (LLN) or the central limit theorem (CLT) can be applied. Apart from other concerns detailed later, these come with conditions on finiteness of certain expectations (usually the first one or two moments of the relevant random variables). Please provide proofs that these expectations are indeed finite and make any assumptions that you need explicit in the main text.\n\nAnother major concern is that none of (A.4.1), (A.4.2) and (A.4.3) successfully proves joint convergence of the filters at the top layer as claimed in the main text (e.g. Eq. (10)), and instead only focuses on marginal convergence of each filter, which is not sufficient (cf. the comment on joint vs. pairwise Gaussianity below). This is perhaps sufficient if a single filter is the output of the network, but insufficient otherwise, especially when proving convergence with additional layers added on top of the last convolutional layer (as in Section 3) whenever the number of filters is taken to infinity.\n\nIt would be nice, but not necessary for acceptance of the paper, to extend the proofs to uncountable index sets. I think you could use the same argument as described towards the end of Section 2.2 in (Matthews et al., 2018, extended version) and references therein.\n\n\n**Other comments**\n\n- I would strongly encourage distinguishing more clearly between probability distributions and density functions. For example, I would infer that lower case p refers to the probability distribution from Eq. (6); however, in Eqs. (8) and (9) the same notation is used for density functions (whilst integrating against the Lebesgue measure). This is quite confusing in this context as the two objects are not the same (see next two comments). I would suggest using capital P when referring to a distribution, and lower case p when referring to its density.\n\n- (p.4, Eq. 6) If p is a density, it cannot be equal to a delta distribution.
If it is a probability distribution then I am similarly confused - convergence in probability is a statement about behaviour of random variables, not probability distributions; in that case possibly Eq. (6) is trying to say that the empirical distribution of K^l (which is a random variable) conditioned on K^{l-1} converges weakly to the delta distribution on the RHS in probability? Please clarify.\n\n- (p.5, Eq. 10) I would recommend stating explicitly the mode of convergence. If p is the density then even assuming A.4.3 can be fixed to prove weak convergence of the **joint** distribution of filters is not enough to justify Eq. (10) - convergence in distribution does not imply pointwise convergence of the density function. If p is the distribution, then I would possibly use the more standard notation '\\otimes' instead of '\\prod'.\n\n- (p.17, end of A.4.2) You say \"Note that addition of various layers on top (as discussed in Section 3) does not change the proof in a qualitative way\". Can you please provide the formal details? At the very least, joint convergence of filters will have to be established if fully connected layers are added on top. This is the main reason why joint convergence of filters in the top layer is important.\n\n\n****Specific comments & issues for individual proofs****\n\n**Approaches suited to infinite networks (\"sequential limit\")**\n\nAs mentioned in the beginning, it is not entirely clear how to formalise infinite networks in a way analogous to Eqs. (1) and (2) in your paper. This is important because you are ultimately proving statements about random variables, like convergence in probability, and this is not possible if those random variables are not formally defined. This section only comments on technical issues with the approaches described in (A.4.1) and (A.4.2). From now on, I assume that the authors were able to formally define all the mentioned random variables in a way that fits with (A.4.1) and (A.4.2).\n\n\n(i) Hazan and Jaakkola type approach (A.4.1)\n\nThis approach essentially iteratively applies a version of the recursion first studied by Hazan and Jaakkola (2015), \"Steps Toward Deep Kernel Methods from Infinite Neural Networks\".\n\n- (p.16, A.4.1) Please provide a reference for the claim that \"pairwise independent Gaussian implies joint independent Gaussian\". This seems to assume that the variables are jointly Gaussian which is, as far as I can see, not established here.\n\t- see second part of the linked answer for a nice example of three random variables with pairwise standard normal marginals, but joint not the multivariate standard normal:\n\n\thttps://stats.stackexchange.com/questions/180708/x-i-x-j-independent-when-i%E2%89%A0j-but-x-1-x-2-x-3-dependent/180727#180727 \n\n- (p.16, A.4.1) The application of the multivariate CLT is slightly more complicated than the text suggests. Except for the necessity of proving finiteness of the relevant moments, the multivariate CLT does not apply out-of-the-box to infinite-dimensional random variables like {z_j^{l+1}}_{1 \\leq j \\leq \\infty} as claimed. Hence joint convergence is not proved, which will be problematic for the reasons explained earlier.\n\n\n(ii) Lee et al. type approach (A.4.2)\n\nThis type of approach follows the technique used by Lee et al. (2018), \"Deep Neural Networks as Gaussian Processes\".\n\nApplication of the weak law of large numbers (wLLN): As mentioned before, convergence in probability is only possible between random variables on the same underlying space.
This is usually not a problem when the wLLN is applied as the random variables converge to a constant random variable. Because every constant random variable generates the trivial sigma-algebra, it is measurable for any underlying probability space and thus convergence in probability is well-defined. The situation here is more complicated because the target is constant only conditionally on the previous layer, i.e. is not constant. As a side note, even the conditioning is only well-defined if all random variables live on the same space (conditioning on a random variable is technically conditioning on the sub-sigma-algebra it generates on the shared space).\n\nAssuming the problem with all K^{l, t} (t denotes the dependence on network width), for all l \\in {1, ... L} and t \\in {1, 2, 3, ...}, being on the same underlying probability space is solved, the next point is the application of the wLLN itself. You claim \"we can apply the law of large numbers and conclude that [Eq. (6)]\" (p.4), which is not entirely correct here. Focusing on the application when the sizes of all the previous layers are held fixed, the two conditions that have to be checked here are: (i) the conditional expectation of the iid summands in Eq. (3) is finite; (ii) the sequence of iid variables is fixed. Please provide an explicit proof of (i). Regarding (ii), I am specifically concerned with the fact that with changing t (and thus network widths), the sequence of random variables changes (because the previous K^{l-1,t} matrix changes), which means that a completely different size of the current layer may be necessary to get sufficiently close to the target (which has itself changed with t). In other words, instead of having a fixed infinite sequence of iid random variables, you currently have a sequence of growing finite sets of random variables which are iid only within the finite sets, but not between members of the sequence (different t). The direct implication is that this type of proof is not applicable to the \"simultaneous limit\" case as claimed in the main text (Section 2.2 says all proofs are equivalent and lead to Eq. (10) which explicitly takes the simultaneous limit), since the application would require some form of uniform convergence in probability akin to (A.4.3). I think that the approach taken in (A.4.3) is a correct way to address this issue and would thus recommend focusing on (A.4.3) and leaving (A.4.2) out. The appendix seems to acknowledge that (A.4.2) does not work for the \"simultaneous limit\" - please adapt the main text accordingly.\n\nA note on convergence in probability: In Eq. (3), the focus is on convergence in probability of individual entries of the K matrices. This in general does not imply convergence of all entries jointly. However, the type of convergence studied here is convergence to a constant random variable, which is fortunate because simultaneous convergence of all entries in probability can be obtained for free in this case (thanks to having a **finite** number of entries of K). I think it might be potentially beneficial for the reader if this was explicitly stated as a footnote with an appropriate reference included.\n\nA note on marginal vs joint probability: As you say above Eq. (23), you are only proving convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\leq j \\leq \\infty} jointly.
Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\n\n\n**Approaches for BNNs (\"simultaneous limit\")**\n\n(iii) The proof in (A.4.3)\n\nMy biggest concern about this approach is that it only establishes convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\leq j \\leq \\infty} jointly. Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\n\nOther comments:\n\n- (p.17) You say \"Using Theorem A.1 and the arguments in the above section, it is not difficult to see that a sufficient condition is that the empirical covariance converges in probability to the analytic covariance\".\n\t- Can you please provide more detail, as it is unclear what exactly you have in mind?\n\t- I will be assuming from now on that you show that a particular combination of the Portmanteau theorem and convergence of K^L in probability (to get pointwise convergence of the characteristic function) is sufficient.\n\n- (p.18) Condition on activation function: The class \\Omega(R) is dependent on the considered input set X through the constant R. This seems slightly cumbersome as it would be desirable to know whether a particular activation function can be used without any reference to the data. It would be nice (but not necessary) if you could derive a condition on \\phi which would not rely on the constant R but allows ReLU.\n\n- (p.19, Eq. 48) I see where Eq. (48) is coming from, i.e. from Eq. (44) and the assumption of the \\bar{\\varepsilon} ball around A(K_\\infty^l) being in PSD(R), but it would be nicer if you could be a bit more verbose here and also write out the bound explicitly (caveat: I did not check if the definition of \\bar{\\varepsilon} matches up but assume a potential modification would not affect the proof in a significant way).\n\n- (p.19) The second part of the proof is a little confusing, especially after Eq. (49) - please be more verbose here. For example, just after Eq. (49), it is said that because the two random variables have the same distribution, property (3) of \\Omega(R)'s definition can be applied. However, the two random variables are not identical and importantly are not constructed on the same underlying probability space. Property (3) is a statement about the set of random variables {T_n (Sigma)}_{Sigma \\in PSD_2(R)} and not about the different 2x2 submatrices of K^{l+1}, but it needs to be applied to the latter. When this is clarified, the next point that could be made clearer is in the following sentence, where changing t will affect the 2x2 submatrices of K^{l+1,t} as well as the bound through U(t) and V(t); it is not immediately obvious that the proof goes through as claimed, so please be a bit more verbose.\n\n\n****Typos and other minor remarks****\n\n- (p.2, top) \"hidden layers go to infinity uniformly\": The use of the word uniformly is non-standard in this context. Please clarify.\n\n- (p.3, Eq. 2) Using x for both inputs and post-activations is slightly confusing.\n\n- (p.4, Eq. 5) Should v_\\beta multiply \\sigma_\\beta^2 ?\n\n- (p. 4) The summands in Equation (3) are iid -> \"conditionally iid\" (please also specify the conditioning variables/sigma-algebra).\n\n- (p.4, Eq. 4) Eq.
(4) is slightly confusing given you mention that K is a 4D object on the previous page.\n\t- I only understood that K is \"flattened\" into an |X|d x |X|d matrix when I reached (A.4.3) - this should be stated in the main text as otherwise the above confusion arises.\n\n- (p.5, 3 and 3.1) The introduction of \"curly\" K is slightly confusing. Please provide more detail when introducing the notation, e.g. state in what space the object lives.\n\n- (p.5, before Eq. (11)) Is R^{n^(l+1)} the right space for vec(z^L) ? It seems that the meaning of z changes here as compared to the definition in Eq. (2). If z is still defined as in Eq. (2), how exactly is the vec operator defined here? Please clarify.\n\n- (p.16, A.4.2) \"law of large number\" -> \"weak law of large numbers\"\n\n- (p.17) T_n is technically not a function from PSD_2 only but also from some underlying probability space into a measurable space (i.e. can be viewed as a random variable from the product space of PSD_2 and some other measurable space).\n\n- (p.18, Eq. 38) Missing dot at the end. Also the K matrix either should or shouldn't have the superscript \"l\" (now mixed); it does have the superscript in Eq. (39) so probably \"should\".\n\n- (p.18, Eq. 39) Slightly confusing notation. Please clarify that both K and A(K) should have diagonal within the given range.\n\n- (p.18) \"squared integrable\" -> \"square integrable\" or \"square-integrable\"\n\n- (p.18) Last display before Eq. (43): second inequality can be replaced by equality?!\n\n- (p.19, Eq. 47) The absolute value should be sup norm.\n\n- (p.19, Eq. 49) LHS is a scalar, RHS a 2x2 matrix (typo).\n\n- (p.19, last sentence of the proof) It does not seem the inequalities need to be strict.", "------------------------------------------------------------------------------------\n>>> Regarding results, effort has clearly gone into keeping the comparisons as fair as possible, but with these large datasets it is difficult to disentangle the many factors that might affect performance (as acknowledged on p9). It is a weakness of the paper that there is no toy example. An example demonstrating a situation which can only be solved with hierarchical features (e.g. features that are larger than the receptive field of a single layer) would be particularly interesting, as in this case I think the GP-CNN would fail, even with the average pooling, whereas the finite Bayesian-CNN would succeed (with a sufficiently accurate inference method). \n\nThank you for your suggestion. To the best of our knowledge, while (as you have referenced earlier) arguments have been made in the literature against GPs due to the lack of hierarchical representation learning present in CNNs (Matthews et al., 2018, section 7; MacKay, 2003, section 45.7; Neal, 1996, chapter 5), the practical impact of these assertions in a supervised regression setting has not been carefully investigated empirically or theoretically. Moreover, it is unclear if these beliefs hold if we use a sufficiently powerful class of kernels, and we explicitly construct such a class in our work. Further, we believe it is important to decouple hierarchy and finite representations in this discussion. NN-GPs do have a hierarchical kernel, and CNN-GPs have a spatially-local hierarchical kernel with a receptive field (3x3 per layer in our work) smaller than the input images, and they do end up benefiting from hierarchy significantly (see Figure 1 (former 3); further, the best CNN-GP models in Table 2 (former 1) are at least 8 layers deep).
Finally, we highlight the similarity in performance between the best finite SGD-trained fully- and locally-connected networks in our work (Tables 1, 2), Lee et al. (2018) (Tables 1, 2), as well as the similarities between small Bayesian NNs and NN-GPs in Matthews et al. (2018) (section 5.3). Considering all of the above, we believe the construction of meaningful datasets that will decisively disentangle the performance of finite-feature models from GPs in the context of regression to be a non-trivial research problem, and to lie beyond the scope of this work.\n\n------------------------------------------------------------------------------------\n>>> It would improve readability to stress the 1D notation in the main text rather than in a footnote. \n\nDone, see beginning of section 2.1.\n\n------------------------------------------------------------------------------------\n>>> On first reading I missed this detail and was confused as I was trying to interpret everything as a 2D convolution. On reflection I think the notation used in the paper is good, but I think the generalization to 2D should be elevated to something more than the footnote. Perhaps a paragraph explaining how the 2D case works would be appropriate, especially as all the experiments are in 2D cases. \n\nDone, see beginning of section 2.1 referencing section A.3 with an added paragraph “ND convolutions” at the end.\n\n------------------------------------------------------------------------------------\n>>> 1,2,4 I think ‘easily’ is a bit of an overstatement. In this work the kernel is itself defined via a recursive convolutional operation, which doesn’t seem to me much more interpretable than the parametric convolution. At least the filters can be examined in the parametric case, which isn’t the case here. I do agree with the sentiment that a function prior is better than an implicit weight prior, however.\n\nThank you for this comment. Indeed, at the moment the kernel definition does not seem easily interpretable; in-depth investigation of its consequences is the subject of future work. We nonetheless think having compact expressions for the computation performed by a NN, both for the prior and posterior, can open up a novel route towards theoretical understanding. Note that examining filters in parametric space, to the best of our knowledge, can only be done after training and not analytically, the prior therefore remaining difficult to analyze. We have removed the word ‘easy’ from the text and added a footnote referencing filter visualization.", "------------------------------------------------------------------------------------\n>>> However, I believe that the argument in (A.4.3) can potentially be rectified, and, as I detail below, is of greater interest to the community relative to the ones in (A.4.1) and (A.4.2). If this is accomplished and the proofs in (A.4.1) and (A.4.2) are either also fixed or left out (A.4.3 is sufficient to justify the claims in the main body), I am willing to significantly improve my rating of this paper and potentially recommend acceptance. For this reason, a \"detailed comments\" section is appended at the end of the standard review where the technical issues are described in much greater detail.\n\nThank you, we believe we have addressed all of your concerns in the new revision (section A.5). We have left section A.4.2 out as you advised.\n\n------------------------------------------------------------------------------------\n>>> ****General comments****\n**Bayesian vs.
infinite neural networks**\n[...] Others may of course disagree and find \"sequential\" limits more interesting, but if the authors wish to keep the description of (A.4.2) in the main paper (Sections 2.2.1-2.2.3), it would be highly beneficial if readers were given the opportunity to understand the differences between the two types of limits so that they can form their own judgement. The authors should then also make clearer that the approach described in Sections 2.2.1-2.2.3 cannot be used to obtain the final result, Eq. (10). I would rather recommend reworking Sections 2.2.1-2.2.3 based on the \"simultaneous\" limit argument in (A.4.3) which unlike the current one can justify the result in Eq. (10) stated at the end.\n\nThank you, we have both revamped the presentation in section 2.2, and added a discussion about different limit approaches in section A.5.\n\n------------------------------------------------------------------------------------\n>>> **Other comments**\n- (p.2, top) You say your results are \"strengthening and extending the result of Matthews et al. (2018)\" which is somewhat confusing. Matthews et al. prove a result for FCNs whereas this paper focuses on CNNs. Extension of (A.4.3) to FCNs may well be possible but is not included in this paper. \n\nWe added a clarification on how to apply our results to LCNs and FCNs, see section A.5.\n\n------------------------------------------------------------------------------------\n>>> Results in (A.4.1) and (A.4.2) are for the \"sequential\" whereas Matthews et al. study the \"simultaneous\" limit. \n\nThe emphasis in our work (and in section 2.2 in particular) is now on the simultaneous limit as well.\n\n------------------------------------------------------------------------------------\n>>> Further differences:\n\t- Matthews et al. prove convergence for any countable rather than only finite input sets.\n\nThank you for pointing this out, our proof indeed generalizes to the same setting as we now mention in section A.5.1.\n\n------------------------------------------------------------------------------------\n>>>\t- In Matthews et al.'s work, Gaussianity is obtained through use of a particular version of CLT, whereas this work exploits Gaussianity of the prior over weights and biases. Going forward, an extension to more general priors/initialisations (like uniform or any sub-Gaussian) is likely to be easier using the CLT approach.\n\nWe have partially relaxed our assumptions on the priors, see section A.5.1 in the new revision. However, please also note that Matthews et al explicitly assume Gaussian priors in their work to the best of our knowledge.\n\n------------------------------------------------------------------------------------\n>>> - Matthews et al.'s assumption on the activation functions is independent of the input set (p.7, Definition 1), whereas this work uses an assumption that is explicitly dependent on input (Eq. (37)) which might be potentially difficult to check.\n\nWe no longer have this dependency in the new revision.\n\n------------------------------------------------------------------------------------\n>>> - (p.15, A.2 end) Should also mention Titsias (2009), \"Variational Learning of Inducing Variables in Sparse Gaussian Processes\", as a classical reference for approximate GP inference.\n\nThank you, done.\n\n------------------------------------------------------------------------------------\n>>> ****Questions****\n- (Section 4) Can you please provide more details on the MC approximation? 
Specifically, is only the last kernel approximated, or rather all of them, sequentially resampling from the Gaussian with empirical covariance in each layer? In case you tried, is there any qualitative or quantitative difference between the two approaches?\n\nWe have only tried to approximate the last kernel, i.e. sampling random networks and averaging their top-level activations (a minimal illustrative sketch of this estimator is included at the end of this discussion).", "Thank you for your _extremely_ detailed and insightful review. Your suggestions have allowed us to significantly improve the quality of our submission and we are very grateful for your hard work. Please find below a summary of our changes, as well as responses to your specific comments.\n\n------------------------------------------------------------------------------------\n****Summary****\nWe believe the simultaneous limit proof in section A.4.3 (now A.5.4) to be largely correct; however, as you rightly pointed out, it was lacking in terms of explicit treatment of various aspects and suffered from typos / notational inconsistencies. We believe the current revision addresses all the relevant issues.\nWe have made sure to have a more explicit, consistent, and rigorous notation throughout the paper. We especially encourage you to review the new section 2.1 “Shapes and indexing” where we describe our notation in detail.\nIn response to your valid concerns we have omitted section A.4.2 and have rewritten section 2.2 in the main text to reference results from A.4.3 (now A.5.4). Section A.4.1 (now A.5.3) was revamped to rigorously define a sequential limit NN-GP and show that it results in the same covariance as the simultaneous limit (A.4.3, now A.5.4).\n\n------------------------------------------------------------------------------------\n>>> These experiments and investigations are however based on a theoretical foundation which suffers from several issues. The main problems are an incorrect proof of convergence of the joint distribution of filters, and an improper use of convergence in probability in cases where random variables do not share a common underlying probability space. Unfortunately, either of these by itself invalidates the main theoretical claims which is why I am recommending rejection of the paper.\n\nWe now formally define an underlying probability space (see A.5.1). Note that the random variables {K^l} have constant dimensionality (|X|d x |X|d, see Equation 4) that does not change with widths. The same convention was implied in the previous revision; however, we acknowledge that the notation was not explicit enough and may have been a source of confusion, especially in conjunction with the derivations in A.4.2. Further, we derive the joint convergence (wherever applicable), which can be obtained by coupling the convergence of the covariance in probability to deterministic quantities and an argument using the characteristic function. Please see Theorems A.2 and A.5.
We did not try other (or no) normalization approaches, and normalized inputs mainly as a common preprocessing practice in machine learning.\n\n------------------------------------------------------------------------------------\n>>> - (p.8, Figure 6) How was \"the best CNN with the same parameters\" selected? If training error is zero for all, was it selected by validation accuracy?\n\nYes, we state this in experimental details (A.7.5), and now also in that caption (now Figure 3, c).\n\n------------------------------------------------------------------------------------\n>>> I was assuming that what is plotted is an estimate of the **expected** generalisation error, whereas the above selection procedure would be estimating the supremum of the support of the generalisation error estimator, which does not seem like a fair comparison. Can you please clarify?\n\nIf we understand you correctly (please let us know if not), your concern is with us reporting validation and not test accuracy. This is indeed not a fair comparison, and is slightly biased in favor of NNs over GPs. We have replaced it with test accuracy (now Figure 3, c), which is extremely similar.\n\n------------------------------------------------------------------------------------\n>>> - (p.8 and A.6) Why were only neural networks with zero training loss allowed as benchmarks? \n\nPlease note that for practical benchmarking purposes we have presented Table 2 (former 1), where non-zero accuracy (not loss - exactly zero loss was not achieved by our trained NNs) results are presented in parentheses and were emphasized in the caption. Otherwise, we wanted to put the two classes of models in as similar conditions as was practically possible; since the GP without regularization perfectly fits the training set, we filtered for this condition in the networks with SGD training. \n\nRelatedly, note that the NN-GP correspondence could be obtained by the Sample-then-optimize procedure of [1], where one trains only the read-out weights to convergence (infinite steps) using gradient descent training. For realizable (over-parameterized) problems, the trained networks will obtain zero loss. Therefore, trained networks that would correspond to NN-GP necessarily should have zero loss (or close to zero loss if only finite training steps were taken). \n\nIn our NN experiments with SGD, we relaxed this requirement but still required models to produce 100% accurate train set predictions, and believe that controlling for perfect accuracy allowed us to draw arguably more interesting conclusions. E.g., one of the results of this paper is an observation that SGD-trained CNNs can significantly outperform equivalent CNN-GPs. Without controlling for train accuracy, the difference may come from CNNs benefitting from underfitting. However, the fact that SGD-trained CNNs significantly outperform CNN-GPs even when conditioning on zero training error indicates an interesting and more specific mechanism of breakdown of the NN-GP correspondence in SGD training.\n\n[1] Alexander G. de G. Matthews, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. Sample-then-optimize posterior sampling for Bayesian linear models. In NIPS Workshop on Advances in Approximate Bayesian Inference, 2017.\n\n------------------------------------------------------------------------------------\n>>> How did the ones with non-zero training error fare in comparison?\n\nAs can be seen in Table 2 (former 1) and noted in the caption, underfitting tends to improve generalization for CNNs. &#10;
Further, we have produced the analogous plots without the 100% accuracy requirement (NNs can underfit):\nhttps://www.dropbox.com/s/vxuhzyfj9we9pj2/underfit.pdf?dl=0\nAs we can see, on full CIFAR10 (top) the majority of models now perform better in the NN case, suggesting that properly tuned underfitting can be a contributing factor to good generalization. However, on the smaller task (bottom), while the trend is altered, the plots are qualitatively similar, potentially due to underfitting on a small dataset being unlikely and hence not playing a significant role.\n\n------------------------------------------------------------------------------------\n>>> Can you please expand on footnote 3?\nPlease see our comments above; we have also added a sentence to the footnote emphasizing that underfitting can lead to better generalization.", "------------------------------------------------------------------------------------\n>>> - (p.8, last sentence) \"an observation specific to CNNs and FCNs or LCNs\": Matthews et al. (2018, extended version) observed in Section 5.2 that BNNs and their corresponding GP limits do not always perform the same even in the FCN case (cf. their Figure 8). Their paper unfortunately does not compare to equivalent FCNs trained by SGD. Have you experimented with, or have an intuition for, whether the cases where SGD trained models prevail coincide with the cases where BNNs+MCMC posterior inference outperform their GP limit?\n\nWe have not explored BNNs+MCMC experiments in this work. As mentioned in the Discussion (section 5.3), we attribute the observation (SGD-trained finite CNNs outperforming their GPs) to the loss of pixel-pixel covariances. This happens in infinite Bayesian (contrary to finite SGD-trained) models, and we do not have strong intuitions at the moment on whether to attribute this to the Bayesian treatment or to infinite width (or both). However, as we have mentioned in the conclusion, we enthusiastically agree that this is a very interesting question to answer in future work!\n\n------------------------------------------------------------------------------------\n>>> - (p.15, Table 3) The description says you were using erf activation (instead of the more standard ReLU): why? Have you observed any significant differences? \n\nWe did not have a particular reason, and have produced some preliminary results for ReLU below:\nhttps://www.dropbox.com/s/d3lmb84o9b06syt/infoprop_relu.pdf?dl=0,\nwhere we see a qualitatively similar trend that is in agreement with Lee et al. 2018 (Figure 4.b, Figure 9, bottom row; the rightmost phase diagram in our plot is likewise borrowed from their paper).\n\n------------------------------------------------------------------------------------\n>>> Further, how big a proportion of the values in the image is black due to the numerical issues mentioned in A.6.4?\n\nIn total, roughly 14% of trials failed: 2,792 out of 20,000 (2,500 per plot in the table).\n% of failures per plot:\n\n----------------------------------------\nDepth  |  1  |  10  |  100  |  1000  |\n----------------------------------------\nCNN-GP |  0  |  0   |  9    |  44    |\n----------------------------------------\nFCN-GP |  0  |  0   |  13   |  45    |\n----------------------------------------\n\nPlease note that the line between a numerical failure and poor performance is blurry and depends on the specific experimental setup (see A.7.4). &#10;
Indeed, not all numerical issues result in failures; some simply produce poor / random results.\n\n------------------------------------------------------------------------------------\n>>> - (p.18, just after Eq. 39) Use of PSD_{|X|d} in (A.4.3) suggests this proof assumes \"same\" padding is used?! Does the proof generalise to any padding/changing dimensions of filters inside the network?\n\nWe now state that we use circular padding, and the spatial shape is indeed considered to remain fixed for simplicity (see section 2.1). While we do not consider changing padding / dimensions inside the network, we believe the proof generalizes to such cases easily (by introducing a different A^l operator for each layer, which will still be affine and Lipschitz-continuous). \n\n------------------------------------------------------------------------------------\n>>> - (A.6) Can you comment on the pros & cons of \"label regression\" for classification and how does it compare with approximate inference when softmax is put on top of a GP (perhaps illustrating by a simple experiment on a toy dataset)?\n\nIn order to establish and understand the correspondence to GPs, we focused on cases where exact inference on the GP side was possible (a benefit of label regression) while working on a realistic, well-known dataset for CNNs. \n\nApparent downsides of label regression are: (a) the independent prior on different output classes, which discards our prior knowledge about them being mutually exclusive, and (b) complications in interpreting GP predictions and their uncertainty on categorical outputs. However, the practical impact of softmax on the best achieved accuracy in classification tasks is, to the best of our knowledge, not clear, due to how well our MSE-trained NNs perform in this work (Table 2 (former 1); we believe the FCN results to be close to SOTA using cross-entropy loss, and the CNN results to be decent yet unfortunately hard to compare to SOTA due to architecture limitations), and due to FCN- and CNN-GPs performing similarly to the best considered FCNs and LCN. Therefore, while we certainly believe there to be a difference between label regression and proper classification, we do not think a simple toy task can fully illustrate it. \n\nWe still think it is interesting future work to implement and investigate the effects of softmax output using cross entropy loss.", "------------------------------------------------------------------------------------\n>>>[detailed comments]\n****Technical concerns****\nNotation-wise, I would strongly encourage incorporating the dependence on network width into your notation, at the very least throughout the appendix. It would greatly reduce the amount of mental book-keeping the reader currently has to do, and significantly increase clarity at several places.\n\nDone, we now use “_t” subscript to show dependence on n^1(t), ..., n^L(t) in the appendix.\n\n------------------------------------------------------------------------------------\n>>> One of my main concerns is that the random variables and their underlying probability space are never formally set up. This is problematic because convergence in probability is only defined for random variables sharing the same underlying space. At the moment, networks with different widths are not set up to share a probability space. &#10;
The practical implication for the approaches relying on convergence in probability of the empirical covariance matrices K is that the convergence in probability is not well-defined exactly because the empirical covariance matrices are not set-up on the same underlying probability space. A possible way to address this issue is to use an approach akin to what Matthews et al. (2018, extended version) call \"infinite width, finite fan-out, networks\" on page 20. This puts the networks on the same underlying space and because the empirical covariance matrices are measurable functions of thus defined random variables, they will also share the same underlying probability space.\n\nDone, we now define the probability space in section A.5.1. Networks of different widths now do share the underlying probability space, and hence {K^l} covariances as well.\n\n------------------------------------------------------------------------------------\n>>> Also regarding convergence in probability, please state explicitly with respect to which metric is the convergence considered when first mentioned (A.4.3 is explicitly using l^\\infty; A.4.2 perhaps l^2 or l^\\infty?), and make any necessary changes (e.g. show continuity of the mapping C in A.4.2).\n\nConvergence is w.r.t. l^\\infty and we now state it explicitly in section A.5.6. However, note that due to finite dimensionality all norms are equivalent. While we no longer have section A.4.2, continuity of map C follows from Lemma A.6.2 in the new revision.\n\n------------------------------------------------------------------------------------\n>>> At several places within the paper, you state that the law of large numbers (LLN) or the central limit theorem (CLT) can be applied. Apart from other concerns detailed later, these come with conditions on finiteness of certain expectations (usually the first one or two moments of the relevant random variables). Please provide proofs that these expectations are indeed finite and make any assumptions that you need explicit in the main text.\n\nWe now prove finiteness of the necessary moments (see Theorem A.2).\n------------------------------------------------------------------------------------\n>>> Another major concern is that none of (A.4.1), (A.4.2) and (A.4.3) successfully proves joint convergence of the filters at the top layer as claimed in the main text (e.g. Eq. (10)), and instead only focuses on marginal convergence of each filter which is not sufficient (cf. the comment on joint vs. pairwise Gaussianity below). This is perhaps sufficient if a single filter is the output of the network, but insufficient otherwise, especially when proving convergence with additional layers added on top of the last convolutional layer (as in Section 3) whenever the number filters is taken to infinity.\n\nDone, we now explicitly prove joint convergence wherever applicable.", "------------------------------------------------------------------------------------\n>>> It would be nice, but not necessary for acceptance of the paper, to extend the proofs to uncountable index sets. I think you could use the same argument as described towards the end of Section 2.2 in (Matthews et al., 2018, extended version) and references therein.\n\nThank you, indeed our proof extends to the case of countably many inputs with the metric referenced in Matthews et al. 
2018, and we now mention it in section A.5.1.\n\n------------------------------------------------------------------------------------\n>>> **Other comments**\n- I would strongly encourage distinguishing more clearly between probability distributions and density functions. For example, I would infer that lower case p refers to the probability distribution from Eq. (6); however, in Eqs. (8) and (9) the same notation is used for density functions (whilst integrating against the Lebesgue measure). This is quite confusing in this context as the two objects are not the same (see next two comments). I would suggest using capital P when referring to distribution, and lower case p when referring to its density.\n\nDone, we believe the new revision should not have any confusing notation.\n\n------------------------------------------------------------------------------------\n>>> - (p.4, Eq. 6) If p is a density, it cannot be equal to a delta distribution. If it is a probability distribution then I am similarly confused - convergence in probability is a statement about behaviour of random variables, not probability distributions; in that case possibly Eq. (6) is trying to say that the empirical distribution of K^l (which is a random variable) conditioned on K^{l-1} converges weakly to the delta distribution on the RHS in probability? Please clarify.\n\nThank you, we no longer use delta-function notation in the main text and are clear about modes of convergence.\n\n------------------------------------------------------------------------------------\n>>> - (p.5, Eq. 10) I would recommend stating explicitly the mode of convergence. If p is the density then even assuming A.4.3 can be fixed to prove weak convergence of the **joint** distribution of filters is not enough not justify Eq. (10) - convergence in distribution does not imply pointwise convergence of the density function. If p is the distribution, then I would possibly use the more standard notation '\\otimes' instead of '\\prod'.\n\nThank you for pointing this out, we now always state modes explicitly and do not imply convergence of probability densities.\n\n------------------------------------------------------------------------------------\n>>> - (p.17, end of A.4.2) You say \"Note that addition of various layers on top (as discussed in Section 3) does not change the proof in a qualitative way\". Can you please provide the formal details? At the very least, joint convergence of filters will have to be established if fully connected layers are added on top. This is the main reason why joint convergence of filters in the top layer is important.\n\nDone, see Theorem A.6.\n\n------------------------------------------------------------------------------------\n>>> ****Specific comments & issues for individual proofs****\n**Approaches suited infinite networks (\"sequential limit\")**\nAs mentioned in the beginning, it is not entirely clear how to formalise infinite networks in a way analogous to Eqs. (1) and (2) in your paper. This is important because you are ultimately proving statements about random variables, like convergence in probability, and this is not possible if those random variables are not formally defined. This section only comments on technical issues with the approaches described in (A.4.1) and (A.4.2). From now on, I assume that the authors' were able to formally define all the mentioned random variables in a way that fits with (A.4.1) and (A.4.2).\n\nDone. 
Specifically, we provide a definition in A.5.3 (former A.4.1, “Sequential limit”; note that we don’t make any convergence in probability statements here, only in distribution). A.4.2 is left out.\n\n------------------------------------------------------------------------------------\n>>> (i) Hazan and Jaakkola type approach (A.4.1)\n[...]\n- (p.16, A.4.1) The application of the multivariate CLT is slightly more complicated than the text suggests. Except for the necessity of proving finiteness of the relevant moments, the multivariate CLT does not out-of-the-box apply to infinite dimensional random variables like {z_j^{l+1}}_{1 \\leq j \\leq \\infty} as claimed. Hence joint convergence is not proved, which will be problematic for the reasons explained earlier.\n\nWe have significantly revamped this section (now A.5.3, “Sequential limit”), including proving joint convergence and finiteness of the moments.", "------------------------------------------------------------------------------------\n>>> (ii) Lee et al. type approach (A.4.2)\n[...]\n\nPer your suggestion we have removed section A.4.2 in this revision.\n\n------------------------------------------------------------------------------------\n>>> A note on convergence in probability: In Eq. (3), the focus is on convergence in probability of individual entries of the K matrices. This in general does not imply convergence of all entries jointly. However, the type of convergence studied here is convergence to a constant random variable, which is fortunate because simultaneous convergence of all entries in probability can be obtained for free in this case (thanks to having a **finite** number of entries of K). I think it might be potentially beneficial for the reader if this was explicitly stated as a footnote with an appropriate reference included.\n\nWe have added a footnote 4 clarifying this step (though we believe the limit being a constant is not necessary, as long as their number is finite).\n\n------------------------------------------------------------------------------------\n>>> A note on marginal vs joint probability: As you say above Eq. (23), you are only proving convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\leq j \\leq \\infty} jointly. Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\n\nWe now prove joint convergence in the new revision.\n\n------------------------------------------------------------------------------------\n>>> **Approaches for BNNs (\"simultaneous limit\")**\n(iii) The proof in (A.4.3)\nMy biggest concern about this approach is that it only establishes convergence of a single filter marginally, instead of the full sequence {z_j^L}_{1 \\leq j \\leq \\infty} jointly. &#10;
Convergence of the marginals does not imply convergence of the joint, which will be problematic for the reasons explained earlier.\n\nDone, see Theorem A.6.\n\n------------------------------------------------------------------------------------\n>>> Other comments:\n- (p.17) You say \"Using Theorem A.1 and the arguments in the above section, it is not difficult to see that a sufficient condition is that the empirical covariance converges in probability to the analytic covariance\".\n\t- Can you please provide more detail as it is unclear what exactly do you have in mind?\n\nDone, see Theorem A.6.\n\n------------------------------------------------------------------------------------\n>>> - (p.18) Condition on activation function: The class \\Omega(R) is dependent on the considered input set X through the constant R. This seems slightly cumbersome as it would be desirable to know whether a particular activation function can be used without any reference to the data. It would be nice (but not necessary) if you can derive a condition on \\phi which would not rely on the constant R but allows ReLU.\n\nDone, there’s no more dataset dependency.\n\n------------------------------------------------------------------------------------\n>>>- (p.19, Eq. 48) I see where Eq. (48) is coming from, i.e. from Eq. (44) and the assumption of \\bar{\\varepsilon} ball around A(K_\\infty^l) being in PSD(R), but it would be nicer if you could be a bit more verbose here and also write out the bound explicitly (caveat: I did not check if the definition of \\bar{\\varepsilon} matches up but assume a potential modification would not affect the proof in a significant way).\n\nDone (now Equations 70-72).\n\n------------------------------------------------------------------------------------\n>>> - (p.19) The second part of the proof is a little confusing, especially after Eq. (49) - please be more verbose here. For example, just after Eq. (49), it is said that because the two random variables have the same distribution, property (3) of \\Omega(R)'s definition can be applied. However the two random variables are not identical and importantly are not constructed on the same underlying probability space. Property (3) is a statement about the the set of random variables {T_n (Sigma)}_{Sigma \\in PSD_2(R)} and not about the different 2x2 submatrices of K^{l+1}, but it needs to be applied to the latter. \n\nDone (Equations 76-77 + see new modified revision of property 3 (now Equation 48)).\n\n------------------------------------------------------------------------------------\n>>> When this is clarified, the next point that could be made clearer is in the following sentence where changing t will affect the 2x2 submatrices of K^{l+1,t} as well as the bound through U(t) and V(t); it is not immediately obvious that the proof goes through as claimed so please be a bit more verbose.\n\nDone, we have substantially expanded that part of the proof (starting from Equation 75).", "------------------------------------------------------------------------------------\n>>> ****Typos and other minor remarks****\n- (p.2, top) \"hidden layers go to infinity uniformly\": The use of word uniformly is non-standard in this context. Please clarify.\n\nDone. The “uniform” qualifier was used by analogy of uniform function convergence.\n\n------------------------------------------------------------------------------------\n>>>- (p.3, Eq. 
2) Using x for both inputs and post-activations is slightly confusing.\n\nChanged post-activations (called activations in the text) to “y”.\n\n------------------------------------------------------------------------------------\n>>> - (p.4, Eq. 5) Should v_\\beta multiply \\sigma_\\beta^2 ?\n\nIt should not, thank you, fixed.\n\n------------------------------------------------------------------------------------\n>>> - (p. 4) The summands in Equation (3) are iid -> \"conditionally iid\" (please also specify the conditioning variables/sigma-algebra).\n\nDone, thank you.\n\n------------------------------------------------------------------------------------\n>>> - (p.4, Eq. 4) Eq. (4) is slightly confusing given you mention that K is a 4D object on the previous page.\n\t- I only understood K is \"flattened\" into a |X|d x |X|d matrix when I reached (A.4.3) - this should be stated in the main text as otherwise the above confusion arises.\n\nThank you, fixed and clarified (section 2.1, “Shapes and indexing”).\n\n------------------------------------------------------------------------------------\n>>> - (p.5, 3 and 3.1) The introduction of \"curly\" K is slightly confusing. Please provide more detail when introducing the notation, e.g. state in what space the object lives.\n\nDone (see also the new section 2.1, “Shapes and indexing”). \n\n------------------------------------------------------------------------------------\n>>> - (p.5, before Eq. (11)) Is R^{n^(l+1)} the right space for vec(z^L)? It seems that the meaning of z changes here as compared to the definition in Eq. (2). If z is still defined as in Eq. (2), how exactly is the vec operator defined here? Please clarify.\n\nNote that it’s n^{L+1} times d, yet you are correct that it should’ve been the dimension of z^L(x), not z^L. We have fixed the error, substantially improved the clarity of this section, and clarified the notation in section 2.1, “Shapes and indexing”.\n\n------------------------------------------------------------------------------------\n>>> - (p.16, A.4.2) \"law of large number\" -> \"weak law of large numbers\"\n\nDone.\n\n------------------------------------------------------------------------------------\n>>> - (p.17) T_n is technically not a function from PSD_2 only but also from some underlying probability space into a measurable space (i.e. it can be viewed as a random variable from the product space of PSD_2 and some other measurable space).\n\nWe no longer use the T_n notation in the new revision.\n\n------------------------------------------------------------------------------------\n>>> - (p.18, Eq. 38) Missing dot at the end. Also, the K matrix either should or shouldn't have the superscript \"l\" (now mixed); it does have the superscript in Eq. (39), so probably \"should\".\n\nDone.\n\n------------------------------------------------------------------------------------\n>>> - (p.18, Eq. 39) Slightly confusing notation. Please clarify that both K and A(K) should have diagonal within the given range.\n\nDone (no such confusing notation in the new revision).\n\n------------------------------------------------------------------------------------\n>>> - (p.18) \"squared integrable\" -> \"square integrable\" or \"square-integrable\"\n\nDone.\n\n------------------------------------------------------------------------------------\n>>> - (p.18) Last display before Eq. &#10;
(43): second inequality can be replaced by equality?!\n\nThank you, done.\n\n------------------------------------------------------------------------------------\n>>> - (p.19, Eq. 47) The absolute value should be sup norm.\n\nWe believe the expression is correct.\n\n------------------------------------------------------------------------------------\n>>> - (p.19, Eq. 49) LHS is a scalar, RHS a 2x2 matrix (typo).\n\nBoth are scalars (\\Tau_\\infty is defined as a scalar).\n\n------------------------------------------------------------------------------------\n>>> - (p.19, last sentence of the proof) It does not seem the inequalities need to be strict.\n\nThank you, fixed for n^{l+1}.", "------------------------------------------------------------------------------------\n>>> 1,2,-1 This seems too vague to me, as at least to some extent, Matthew 2018 did indeed consider using NN-GPs to gain insight about equivalent NN models (e.g. section 5.3)\n\nTo our best knowledge, we are the first to learn the role of architecture on the functions represented with the networks using NN-GP correspondence. Specifically, in our Table 1 (former 2), we disentangle the role of network topology and equivariance in CNNs. Previous works (both Lee et al 2018 and Matthews et al 2018) focused on establishing the correspondence and understanding the properties of the corresponding GPs. It would be helpful if you could clarify specific insights laid out in Matthews et al 2018 (section 5.3). As far as we can tell the section discusses how the Bayesian NNs would match NN-GPs. \n\n------------------------------------------------------------------------------------\n>>> 1.1,:,: I find it very surprising that there are no references to Cho and Saul 2009 in this section (one does appear in 2.2.2, however). \n\nWe have updated the related work section to address this point.\n\n------------------------------------------------------------------------------------\n>>> 1.1,3,-2:-1 ‘Our work differs from all of these in that our GP corresponds exactly to a fully Bayesian CNN in the many channel limit’ I do not think this is completely true, as the deep convolution GP does correspond to an infinite limit of a Bayesian CNN, just not the same limit as the one taken in this paper. Similarly a DGP following the Danianou and Lawrence 2013 is an infinite limit of a NN, but one with bottlenecks between layers. It is important that readers appreciate that infinite limits can be taken in different ways, and the resulting models may be very different.\n\nThank you for the comment, we have modified the text in this section to emphasize this distinction more strongly. However, this seems mostly to be a question of semantics in using the term “infinite limit”, i.e. whether one means to include or exclude bottleneck layers. We wish to point out, though, that the limit we take is nevertheless interesting and likely relevant to networks in which all layers widths are similarly large, which is arguably a rather large class of models used in the wild.\n\n------------------------------------------------------------------------------------\n>>> This certain limit taken in this work has desirable computational properties, but arguably undesirable modelling implications.\n\nGood point, and we now emphasize this in the text as well.\n\n------------------------------------------------------------------------------------\n>>> 1.1,-1,-2 It should be made more clear here that the SGD trained models are non-Bayesian. 
\n\nDone.\n\n------------------------------------------------------------------------------------\n>>> Figure 3 The MC-CNN-GP appears to have performance that is nearly independent of the depth, even including 1 layer. Could this be explained?\n\nWe believe there are two factors at play. Firstly, to the best of our knowledge, dependence of model (be it NN-GPs or SGD-trained NNs) performance on depth is poorly understood and is difficult to decouple (if at all possible) from the particular dataset and architectural decisions like pooling or residual connections. Therefore, we do not necessarily find the lack of a clear and interpretable dependence surprising. Secondly, performance of MC-GP is subject to approximation noise and bias (we only used 16 filters, see Appendix A.7.3) as well as poor conditioning (see dark bands in Figures 2 and 7 in the new revision). Therefore we conjecture that there could be a similar underlying depth dependence to the one observed in other curves on the plot in Figure 1 (former 3), yet it is mild enough (just like other curves don’t have steep slopes as well) to be overrun with the MC approximation imperfections.\n\n------------------------------------------------------------------------------------\n>>> 2.2,2,: The z^l variables are zero mean Gaussian with a fixed covariance, not delta functions, as I understand it. They are independent of each other due to the deterministic K^l, certainly, but they are not themselves deterministic. Could this be clarified? \n\nYou are correct and we have edited the text to clarify this fact.", "Thank you for the very thorough and insightful review. We are glad you found our research useful. We have adjusted the text to address your suggestions. Please see below our responses to specific comments:\n\n------------------------------------------------------------------------------------\n>>> Firstly, and rather mundanely: the figures. Fig 1 is not easy to read due to the density of plotting, and as there is no key it isn’t possible to tell what it shows.\n>>> Figure 6 is also missing a key\n\nThank you for the suggestion. We have added (partial) keys to figures, increased figures axes / ticks / title fonts, and increased the size of Figures 1, 5, 6 (now Figure 3, a, b, c) to make them more legible. Please note that displaying a full key is not practical in Figures 1 and 6 (now Figure 3, a, c) since each line / point respectively corresponds to one of many distinct setting of hyper-parameters like weight and bias variances, non-linearity, and depth. Instead, these plots serve to give a general picture across many different configurations, various styles serving merely to make different entries more visually separate.\n\n------------------------------------------------------------------------------------\n>>> Figure 2 is rather is called a ‘graphical model’ but the variables (weights and biases) are not shown. It should be specified that this is the graphical model of the infinite limit, in which case the K variables should not be random. Also, the caption on this figure refers to variables that aren’t in the figure, and is grammatically incorrect (perhaps something like ‘the limit of an infinitely wide convolutional’ is missing?).\n\nPlease note that the graphical model does represent a finite CNN with random covariance matrices K^l after marginalizing out the weights and biases, and we believe it to be accurate. 
Otherwise, we agree that in the infinite channel/width limit the model also remains correct, and covariance matrices K^l indeed become deterministic. However, in the revised version we have replaced the justification in section 2.2 in terms of marginalizing over {K^l} with a more rigorous approach, and this figure no longer appears in the text.\n\n------------------------------------------------------------------------------------\n>>> Figure 3 has a caption which seems to be inconsistent with the coloring (for example green is center pixel in the text, but blue in the key). \n\nThank you for noticing this. We have fixed it in the updated version (now Figure 1).\n\n------------------------------------------------------------------------------------\n>>> In Figure 5, what does the tick symbol denote?\n\nWe added keys to the figure to clarify the symbols. All plots share x and y axes where each denote number of channels and accuracy. Note that the x-axis is in log scale. Crosses are displayed at #channel values for which NN experiments were run.", "------------------------------------------------------------------------------------\n>>> Finally, the value some of Table 1 is questionable as so many entries are missing. For example, the Fashion-MNIST column has only two values, which seems to me of little use. \n\nSince running huge parameter sweeps for NNs (see Appendix A.7.5) is expensive, we have focused the full suite of experiments on CIFAR10, as the dataset benefiting from convolutional structure the most (among considered), thus allowing to gauge qualitatively the difference between different models (see e.g. Table 1 (former 2)). However, we still ran our (much smaller in the number of hyper-parameters) GP experiments on MNIST and Fashion-MNIST (a very recent dataset, hence no results from other work to report), to position our work among current and future SOTA results.\n\n------------------------------------------------------------------------------------\n>>> There is an important distinction between finite width Bayesian-CNNs and the infinite limit, and this distinction is indeed made in the paper but not clearly enough in my view. I would anticipate that some readers might come away after a cursory reading thinking that Bayesian-CNNs are fundamentally worse than their parametric counterparts, but this is emphatically not the message of the paper. It seems that the infinite limit that is the cause of two problems. The first problem (or perhaps benefit) is that the infinite limit gives Gaussian inner layers, just as in the fully connected case. The second problem (and I’d say this is definitely a problem this time) is that the infinite limit loses the covariance between the pixels, at least with a fully connected final layer. I would recall [Matthews 2018, long version] section 7, which discusses that point that taking the infinite limit in the fully connected is actually potentially undesirable. To quote Matthews 2018, “MacKay (2002, p. 547) famously reflected on what is lost when taking the Gaussian process limit of a single hidden layer network, remarking that Gaussian processes will not learn hidden features”. Some discussion of this would enhance the presented paper, in my view. \n\nThank you for your comment and your references. 
We have added a clear disclaimer at the end of introduction that we make no claims about finite width Bayesian networks and added a footnote 8 to expand the discussion section.\n\n------------------------------------------------------------------------------------\n>>> The discussion of eq (7) could be made more clear. Eq (7) is only defined on K, and not in composition with A. It is important that the alpha dependency is preserved by the A operation, and while I suppose this is obvious I would welcome a bit more detail. It would help to demonstrate the application of the results of [Cho and Saul 2009] to the convolution case explicitly (i.e. for C o A), in my view. \n\nThank you for your comment, we have clarified the application of the A operation by a) referencing the specific derivation of Equation (7) in Xiao et al, 2018 (Lemma A.1), b) defining A’s domain and codomain in section 2.2.1, and c) adding a section 2.1. “Shapes and indexing” to make our matrix/vector notation more precise.\n", "Thank you for your detailed and encouraging review! We are glad you found our research interesting. Please find below our replies to your specific comments:\n\n------------------------------------------------------------------------------------\n>>> -> Put in bold best results of the experiments.\n\nThank you for the suggestion. Tables 1 and 2 are updated. \n\n------------------------------------------------------------------------------------\n>>> -> Why not put \"deep\" in the title?\n\nGood suggestion, we have updated our title.\n\n------------------------------------------------------------------------------------\n>>> -> Define the channel concept in introduction.\n>>> -> In the introduction, introduce formally a CNN. (brief)\n\nWe have formally defined convolutional operation with convolution filters in Section 2.1 (preliminaries).\n\n------------------------------------------------------------------------------------\n>>> -> Define the many channel limit.\n\nThe revised introduction describes the many channel limit more concretely (point 1 under the contributions), also see the end of the new section 2.1 “Shapes and indexing”.\n\n------------------------------------------------------------------------------------\n-> Put a figure with the equivalences and with the contents of the paper explaining a bit.\n\nThank you for the suggestion. We have added Figure 4 to better explain the notation and different concepts used in the paper.", "This paper extends the recent results concerning GP equivalence of infinitely wide FC nets to the convolutional case. This paper is generally of a high quality (notwithstanding the lack of keys on figures) and provides insights to an important class of model. I recommend that this paper be accepted, but I think it could be improved in a few ways. \n\nFirstly, and rather mundanely: the figures. Fig 1 is not easy to read due to the density of plotting, and as there is no key it isn’t possible to tell what it shows. Figure 2 is rather is called a ‘graphical model’ but the variables (weights and biases) are not shown. It should be specified that this is the graphical model of the infinite limit, in which case the K variables should not be random. Also, the caption on this figure refers to variables that aren’t in the figure, and is grammatically incorrect (perhaps something like ‘the limit of an infinitely wide convolutional’ is missing?). 
Figure 3 has a caption which seems to be inconsistent with the coloring (for example green is center pixel in the text, but blue in the key). Figure 6 is also missing a key. In Figure 5, what does the tick symbol denote? Finally, the value some of Table 1 is questionable as so many entries are missing. For example, the Fashion-MNIST column has only two values, which seems to me of little use. [I would have given the paper a rating of 7 were it not for these issues]\n\nRegarding the presentation of the content, I found this paper generally easy to follow and the arguments sound. Here are few points:\n\nThere is an important distinction between finite width Bayesian-CNNs and the infinite limit, and this distinction is indeed made in the paper but not clearly enough in my view. I would anticipate that some readers might come away after a cursory reading thinking that Bayesian-CNNs are fundamentally worse than their parametric counterparts, but this is emphatically not the message of the paper. It seems that the infinite limit that is the cause of two problems. The first problem (or perhaps benefit) is that the infinite limit gives Gaussian inner layers, just as in the fully connected case. The second problem (and I’d say this is definitely a problem this time) is that the infinite limit loses the covariance between the pixels, at least with a fully connected final layer. I would recall [Matthews 2018, long version] section 7, which discusses that point that taking the infinite limit in the fully connected is actually potentially undesirable. To quote Matthews 2018, “MacKay (2002, p. 547) famously reflected on what is lost when taking the Gaussian process limit of a single hidden layer network, remarking that Gaussian processes will not learn hidden features”. Some discussion of this would enhance the presented paper, in my view. \n\nThe discussion of eq (7) could be made more clear. Eq (7) is only defined on K, and not in composition with A. It is important that the alpha dependency is preserved by the A operation, and while I suppose this is obvious I would welcome a bit more detail. It would help to demonstrate the application of the results of [Cho and Saul 2009] to the convolution case explicitly (i.e. for C o A), in my view. \n\nRegarding results, effort has clearly gone to keep the comparisons as fair as possible, but with these large datasets it is difficult to disentangle the many factors that might effect performance (as acknowledged on p9). It is a weakness of the paper that there is no toy example. An example demonstrating a situation which can only be solved with hierarchical features (e.g. features that are larger than the receptive field of a single layer) would be particularly interesting, as in this case I think the GP-CNN would fail, even with the average pooling, whereas the finite Bayesian-CNN would succeed (with a sufficiently accurate inference method). \n\nIt would improve readability to stress the 1D notation in the main text rather than in a footnote. On first reading I missed this detail and was confused as I was trying to interpret everything as a 2D convolution. On reflection I think notation is used in the paper is good, but I think the generalization to 2D should be elevated to something more than the footnote. Perhaps a paragraph explaining how the 2D case works would be appropriate, especially as all the experiments are in 2D cases. \n\nSome further smaller points on specific [section, paragraph, line]s\n\n1,2,4 I think ‘easily’ is a bit of an overstatement. 
In this work the kernel is itself defined via a recursive convolutional operation, which doesn’t seem to me much more interpretable than the parametric convolution. At least the filters can be examined in the parametric case, which isn’t the case here. I do agree with the sentiment that a function prior is better than an implicit weight prior, however.\n\n1,2,-1 This seems too vague to me, as at least to some extent, Matthews 2018 did indeed consider using NN-GPs to gain insight about equivalent NN models (e.g. section 5.3).\n\n1.1,:,: I find it very surprising that there are no references to Cho and Saul 2009 in this section (one does appear in 2.2.2, however). \n\n1.1,3,-2:-1 ‘Our work differs from all of these in that our GP corresponds exactly to a fully Bayesian CNN in the many channel limit’ I do not think this is completely true, as the deep convolution GP does correspond to an infinite limit of a Bayesian CNN, just not the same limit as the one taken in this paper. Similarly, a DGP following Damianou and Lawrence 2013 is an infinite limit of a NN, but one with bottlenecks between layers. It is important that readers appreciate that infinite limits can be taken in different ways, and the resulting models may be very different. This certain limit taken in this work has desirable computational properties, but arguably undesirable modelling implications.\n\n1.1,-1,-2 It should be made more clear here that the SGD trained models are non-Bayesian. \n\nFigure 3 The MC-CNN-GP appears to have performance that is nearly independent of the depth, even including 1 layer. Could this be explained?\n\n2.2,2,: The z^l variables are zero mean Gaussian with a fixed covariance, not delta functions, as I understand it. They are independent of each other due to the deterministic K^l, certainly, but they are not themselves deterministic. Could this be clarified? \n" ]
[ -1, -1, 7, -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, 2, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "SygaK6RU14", "SygaK6RU14", "iclr_2019_B1g30j0qF7", "rke_hxhHhX", "iclr_2019_B1g30j0qF7", "BkxQiXRYhX", "SkgZdQ0t37", "iclr_2019_B1g30j0qF7", "rke_hxhHhX", "SkgZdQ0t37", "SkgZdQ0t37", "SkgZdQ0t37", "SkgZdQ0t37", "SkgZdQ0t37", "SkgZdQ0t37", "SkgZdQ0t37", "SkgZdQ0t37", "rke_hxhHhX", "rke_hxhHhX", "rke_hxhHhX", "BkxQiXRYhX", "iclr_2019_B1g30j0qF7" ]
iclr_2019_B1gTShAct7
Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference
Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller.
accepted-poster-papers
Pros:\n- novel method for continual learning\n- clear, well written\n- good results\n- no need for identified tasks\n- detailed rebuttal, new results in revision\n\nCons:\n- experiments could be on more realistic/challenging domains\n\nThe reviewers agree that the paper should be accepted.
train
[ "rye41gla2Q", "HkxQpF3zk4", "B1xXbsgkJ4", "SkeOiAJ9RQ", "rJl0zU-cCQ", "SJlIlAJqCm", "Bkxpap1cAm", "H1l7x3R_h7", "B1eZ4nh_nm", "ByxHXBo8h7", "Hyxl43qlnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "The transfer/interference perspective of lifelong learning is well motivated, and combining the meta-learning literature with the continual learning literature (applying Reptile twice), even if it seems obvious, wasn't explored before. In addition, this paper shows that a lot of gain can be obtained if one uses a more randomized and representative memory (reservoir sampling). However, I'm not entirely convinced that the technical contributions and the analysis provided to support the claims in the paper are good enough for me to accept it in its current form. Please find below my concerns; I'm more than happy to change my mind if the answers are convincing.\n\nMain concerns:\n\n1) The trade-off between transfer and interference, which is one of the main contributions of this paper, has recently been pointed out by [1,2]. GEM [1] talks about it in terms of forward transfer and RWalk [2] in terms of \"intransigence\". Please clarify how \"transfer\" is different from these. A clear distinction will strengthen the contribution; otherwise, it seems like the paper talks about the same concepts with different terminologies, which will increase confusion in the literature. \n\n2) Provide intuitions about equations (1) and (2). Also, why is this assumption correct in the case of \"incremental learning\", where the loss surface itself is changing for new tasks?\n\n3) The paper mentions that the performance for the current task isn't an issue, which to me isn't that obvious: if the evaluation setting is \"single-head\" [2], then the performance on the current task becomes an issue as we move forward over tasks because of the rigidity of the network to learn new tasks. Please clarify.\n\n4) In eq (4), the second sample (j) is also from the same dataset for which the loss is being minimized. Intuitively it makes sense not to optimize the loss L(xj, yj) in order to enforce transfer. Please clarify.\n\n5) Since the claim is to improve the \"transfer-interference\" trade-off, how can we verify this just using accuracy? Any metric to quantify these? What about the forgetting and forward transfer measures as discussed in [1,2]? Without these, it's hard to say what exactly the algorithm is buying.\n\n6) Why isn't there any result showing MER without reservoir sampling? Also, please comment on the computational efficiency of the method (which is crucial for online learning), as it seems to be very slow. \n\n7) The supervised learning experiments are only shown on MNIST. Maybe at least show results with a conv-net/ResNet (CIFAR etc.).\n\n8) It is not clear from where the gains are coming. Do the ablation where, instead of using two loops of Reptile, you use one loop.\n\nMinor:\n=======\n1) In the abstract, please clarify what you mean by \"future gradient\". Is it the gradient over an \"unseen\" task, or an \"unseen\" data point of the same task? It's clear after reading the manuscript, but it takes a while to reach that stage.\n2) Please clarify the difference between stationary and non-stationary distributions, or at least cite a paper with the proper definition.\n3) Please define the problem precisely. A mathematical problem definition is missing, which makes it hard to follow the paper. &#10;
Clarify the evaluation setting (multi/single head, etc. [2]).\n4) No citation is provided for \"reservoir sampling\", which is an important ingredient of this entire algorithm.\n5) Please mention the specific appendix sections when referring to the appendix.\n6) Provide citations for \"meta-learning\" in section 1.\n\n[1] GEM: Gradient episodic memory for continual learning, NIPS 2017.\n[2] RWalk: Riemannian walk for incremental learning: Understanding forgetting and intransigence, ECCV 2018.", "Thank you for your thorough reply. I'm satisfied with the updated draft; it's much cleaner and easier to follow. Most of my comments have been addressed and incorporated in the updated draft. I am upgrading my rating.\n\n", "I'm satisfied with the extra information provided by the authors and I'm keeping my score. The improvements suggested by the other reviewers will substantially help the manuscript and should be implemented, but I believe this paper should be accepted.", "Thank you for your detailed review and comments about our work. \n\nYou bring up an interesting question related to the effect of varying buffer sizes. Based on our experiments, we found that the train-test generalization gap has a complicated relationship with buffer size. The network tends to learn the data that is in the memory buffer at the end of training to approximately perfect accuracy. Intuitively, the network will tend to overfit even more on the buffer data as the buffer becomes smaller. The test set accuracy tends to be higher when the buffer is larger, and generalization becomes better as overfitting on the items in the buffer is less of an issue. That being said, the training set accuracy does not necessarily follow the pattern of the accuracy on the items in the buffer. As the model may have been trained for as little as one step on some examples, many training steps ago, models that tend to generalize poorly to the test set also generalize poorly to some parts of the training set that are not included in the buffer. \n\nIn order to address your comments about our ablation studies, we have revamped Table 6 of Appendix L to include more experiments to help make our findings clearer. We included, based on your suggestion, experiments demonstrating that adaptive optimizers like Adam and RMSProp do not account for the gap between ER and MER. Particularly for smaller buffer sizes, these approaches seem to overfit more on the buffer and actually hurt generalization in comparison to simple SGD. We also added detail on the performance of the different variants of MER proposed in algorithms 1, 5, and 6. Additionally, we have included new experiments about the impact of the buffer strategy, including those showing how reservoir sampling can also improve GEM, although it still slightly underperforms ER. We have also conducted experiments using a DQN with reservoir sampling, finding that it consistently underperforms a DQN with typical recency-based sampling in the RL settings we explore. In the final draft, we will include updated charts with the results of DQN with reservoir sampling and DQN-MER with recency-based sampling added. \n\nThank you for your comment about the ambiguity in our experiments. In addition to retained accuracy, we have now also included learned accuracy (LA), which represents the average accuracy for each task directly after learning that task. As you can see in our updated experiments, MER consistently achieves the best performance for this metric as well as retained accuracy. &#10;
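In symbols, writing a_{i,j} for the test accuracy on task j after training sequentially on tasks 1, ..., i out of T, one natural formalization of these metrics (our shorthand here; the exact normalizations in the revised appendix may differ slightly) is\n\nLA = \\frac{1}{T} \\sum_{j=1}^{T} a_{j,j},   RA = \\frac{1}{T} \\sum_{j=1}^{T} a_{T,j},   BTI = \\frac{1}{T} \\sum_{j=1}^{T} (a_{T,j} - a_{j,j}),\n\nso that RA = LA + BTI: high retained accuracy requires both learning each task well when it arrives and not losing that performance afterwards.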
While it is true that attempting to approximate the multi-task setting could potentially result in interference from other tasks, our proposed regularization is seeking to minimize this interference and maximize transfer across tasks which should mitigate the potential for dissimilar tasks to have a negative effect on learning. \n\nWe have found that MER, for example in algorithm 6, is not particularly sensitive to the gamma hyperparameter. Overall, for a fixed gamma*alpha which functions as an effective learning rate, we see fairly consistent performance when varying gamma and alpha. In the final draft we will include a chart demonstrating this in the appendix. \n\nWe will provide detailed charts in the final draft including performance results for a DQN with reservoir sampling and a single task DQN. Regarding your comments about Flappy Bird, we find that a DQN with MER achieves approximately the asymptotic performance for the single task DQN by the end of training for most tasks. On the other hand, DQN with reservoir sampling achieves worse performance than the standard DQN, so it is clear that, in this particular setting where a later task is subsumed in previous tasks, keeping easy experiences alone does not account for the benefit of DQN-MER. ", "Thank you for your great suggestions about the RL experiments. We have made substantial revisions to the RL experiment sections in the main text and appendix. Additionally, we will still add more ablation experiments that we have performed to our charts for the final draft. \n\nTo clarify, the y axis in Catcher refers to the number of fruits caught during the full game span. We have tried to make this more clear within our reinforcement learning experiment details in Appendix M.1. \n\nIn the final draft we will provide charts that details the single-task performances after 25k steps, 150k steps and asymptotic performance. For example, we report some of our results for Flappy Bird averaged across runs below to give you an idea of the comparative performance of DQN-MER. 
\n\nSingle Task DQN on Flappy Bird\n=======================\n\n25k Step Single Task Results:\nDQN Task 0 at 25k steps = -1.13\nDQN Task 1 at 25k steps = -0.47\nDQN Task 2 at 25k steps = -2.66\nDQN Task 3 at 25k steps = -3.95\nDQN Task 4 at 25k steps = -4.14\nDQN Task 5 at 25k steps = -4.95\n\n150k Step Single Task Results:\nDQN Task 0 at 150k steps = 23.73\nDQN Task 1 at 150k steps = 19.34\nDQN Task 2 at 150k steps = 13.65\nDQN Task 3 at 150k steps = 6.91\nDQN Task 4 at 150k steps = 8.02\nDQN Task 5 at 150k steps = -0.92\n\n1M Step Single Task Results:\nDQN Task 0 at 1M steps = 28.08\nDQN Task 1 at 1M steps = 25.56\nDQN Task 2 at 1M steps = 17.72\nDQN Task 3 at 1M steps = 17.72\nDQN Task 4 at 1M steps = 14.49\nDQN Task 5 at 1M steps = 10.00\n\nContinual Learning with DQN-MER on Flappy Bird\n=======================\n\nContinual Learning Results After 25k Steps On The Task:\nDQN-MER Task 0 after training on Task 0 (at 25k steps) = 1.32\nDQN-MER Task 1 after training on Task 1 (at 50k steps) = 11.92\nDQN-MER Task 2 after training on Task 2 (at 75k steps) = 19.42\nDQN-MER Task 3 after training on Task 3 (at 100k steps) = 21.98\nDQN-MER Task 4 after training on Task 4 (at 125k steps) = 15.30\nDQN-MER Task 5 after training on Task 5 (at 150k steps) = 8.46\n\nContinual Learning Results After Training On All 6 Tasks:\nDQN-MER Task 0 at 150k steps = 36.63\nDQN-MER Task 1 at 150k steps = 26.72\nDQN-MER Task 2 at 150k steps = 19.83\nDQN-MER Task 3 at 150k steps = 14.63\nDQN-MER Task 4 at 150k steps = 11.06\nDQN-MER Task 5 at 150k steps = 8.46\n\nClearly, DQN-MER performs better at training the first task and experiences positive forward transfer for the remaining tasks over what is possible by just training for 25k steps on a single task. In most cases, DQN-MER achieves performance similar to the DQN that takes 1 million steps and achieves asymptotic performance. On the first three tasks, DQN-MER performs better, and it performs a bit worse for the later tasks where it has less time to train. There does not seem to be a price paid by DQN-MER in these experiments for not forgetting the easier tasks. Actually, we find that on the final tasks, DQN-MER achieves significant transfer from easier tasks and better performance than the single-task DQN does even after training on those tasks alone for 150k steps. We have also conducted experiments using a DQN with reservoir sampling, finding that it consistently underperforms a DQN with typical recency-based sampling in the RL settings we explore. In the final draft, we will include updated charts with the results of DQN with reservoir sampling and DQN-MER with recency-based sampling added. We really appreciate you suggesting these kinds of experiments, and we look forward to improving our charts in the final draft to provide much more context for understanding our RL results. \n
Following (Lopez-Paz & Ranzato, NIPS 2017), in addition to retained accuracy, we now also report backward transfer / interference (BTI) and forward transfer / interference (FTI). Unfortunately, forward transfer only makes sense for single-headed settings with correlated tasks, which only applies to our MNIST-Rotations experiments. We include these results in Table 5 of Appendix K. As such, we report the accuracy on a task directly after learning that task (LA) for all of our experiments to express plasticity to incoming tasks. We can see in all cases that the high retained accuracy achieved by MER is the byproduct of the best balance between learned accuracy (LA) and backward transfer / interference (BTI). \n\nMain Concerns #6 and #7) In order to address your question about getting rid of reservoir sampling, we have added, to our ablation experiments in Table 6 of Appendix L, experiments that instead use the buffer strategy from (Lopez-Paz & Ranzato, NIPS 2017). Our experiments demonstrate that reservoir sampling results in the best performance for all methods. ER and GEM perform similarly regardless of the buffer management policy. We have preliminary results for MER without reservoir sampling as well, which we will include in the final draft. Regardless of buffer strategy, MER results in considerable improvements on top of both ER and GEM, especially for small buffer sizes. Thank you for mentioning the computational efficiency of MER. In Figure 2, we highlight the performance characteristics on Omniglot, for which we use CNN models in a supervised learning setting. We highlight that MER clearly achieves the best tradeoff between learning performance and computation time, as methods like GEM have a difficult time scaling to this kind of architecture. We have worked to make it clearer in the text that we use CNNs here in addition to in our RL DQN experiments. \n\nMain Concern #8) Thank you for your comment. We have proposed three variants of MER in this work, which we detail in algorithms 1, 5, and 6 in the updated draft. What you are asking for, with one straightforward Reptile loop, is detailed in algorithm 5, while algorithms 1 and 6 provide different mechanisms for adding more weight to the current example. We provide results for all variants of these models, and not just algorithm 1, in Table 6 of Appendix L, and provide more detail about the connection between the different approaches in Appendices H and I. We summarize these results in the second paragraph related to Question 6 in Section 6 of the main text. Algorithm 5 results in significant gains over ER and GEM in all cases. Additionally, algorithms 1 and 6 result in further gains on top of that by increasing the prioritization of the current example. \n\nMinor Comment #1) Thank you for pointing out the possible confusion here. We have added Footnote 1 to the abstract in order to help clarify this confusion at the outset of the paper. In this work, we focus on algorithms that are agnostic to task boundaries, so we really mean both gradients with respect to unseen examples of the current task and gradients with respect to unseen examples of unseen tasks. \n\nMinor Comments #2 and #3) Thank you for the comment. This is a good point. We have added Appendix A to make our definition of the problem and the nonstationary setting more rigorous. \n\nMinor Comment #4) Thank you for bringing this issue to our attention. We now provide a comprehensive overview of reservoir sampling in Appendix F and algorithm 3. 
\n\nMinor Comments #5 and #6) Thank you for these suggestions. We have addressed them in the revised submission. ", "Thank you for your detailed review and questions. We will address each comment individually: \n\nMain Concern #1) Thank you for pointing out that the terminology used in our submitted version may be confusing. As you pointed out, it is important to make clear that many of the main ideas we used in our paper (the concepts of transfer and interference in forward and backward directions, the link between transfer and weight sharing, and the idea of involving gradient alignment in a formulation for continual learning) have been explored before. The main contribution of the transfer-interference tradeoff we propose in this work is a novel perspective on the goal of gradient alignment for the continual learning problem. We have added additional details in the abstract, Section 1, Section 2, and Appendix B in an attempt to make the comparative novelty of our approach clearer. The transfer-interference tradeoff view of continual learning can be very useful, as its temporally symmetric treatment of the tradeoff in relation to weight sharing leads to a natural meta-learning perspective of continual learning. We have attempted to make this clearer in Figure 1 and Section 2 Footnote 3. Moreover, we have added Appendix C to make the connection with weight sharing more explicit. \n\nHowever, our operational measures of transfer and interference are in fact the same as forward and backward transfer considered in (Lopez-Paz & Ranzato, NIPS 2017). Following the terminology of (Lopez-Paz & Ranzato, NIPS 2017), we simply use the term \"transfer\" to refer to our temporally symmetric view of the problem that does not make a distinction between the forward and backward direction. We use \"interference\", as is common in the literature, to refer to the case where transfer is negative. Intransigence and forgetting, as well as the stability-plasticity dilemma, are also closely related to our work. Intransigence and forgetting measure phenomena very similar to the metrics learned accuracy (LA) and backward transfer and interference (BTI) that we have added to our experiments. We should clarify that we do not consider the way we measure performance to be novel or noteworthy. We have tried to emphasize this by adding additional performance measures such as backward transfer (BTI) and forward transfer (FTI) as used in (Lopez-Paz & Ranzato, NIPS 2017) to our experiments. \n\nMain Concern #2) We have tried to make it clear at the beginning of Section 2 that these operational statements only hold at an instant in time with a set of parameters theta. Because we are considering both data points to be evaluated by the same set of parameters, these equations hold despite the fact that the data points may be drawn from different tasks. This is in fact very similar to the instantaneous notion of transfer considered for continual learning in (Lopez-Paz & Ranzato, NIPS 2017), with the main distinction being that we consider transfer on the example level and not the task level. Obviously, you are right that gradients with respect to the parameters at different points in time may be out of date, which would mean these equations wouldn't hold. However, it is important to note that we do not implement this case even in the continual learning setting, as replayed memories are always considered with the current parameters theta along with the current example. 
It is true that generalizing from this learning about transfer and interference into the future will itself be a non-stationary learning problem. This is because, as the parameters change, the notion of good updates for transfer and interference with past examples changes as well. That being said, we are also stabilizing learning for this non-stationary process with experience replay. \n\nMain Concern #3) Thank you for your comment. We would first like to clarify that our experiments on Omniglot would be considered \"multi-head\" (Chaudhry et al., ECCV 2018). We have updated the text to make this clearer. To directly address your concern, we have also added to our supervised learning experiments a new metric, learned accuracy (LA), representing performance on a task right after learning that task, and we have made the task switches clearer for our RL experiments. Empirically speaking, we find that MER results in the best LA in all cases. Despite using a single head, MER is apparently able to efficiently navigate the transfer-interference tradeoff of weight sharing to achieve good LA while at the same time achieving good backward transfer and interference (BTI) performance. \n ", "The authors frame continual learning as a meta-learning problem that balances catastrophic forgetting against the capacity to learn new tasks. They propose an algorithm (MER) that combines a meta-learner (Reptile) with experience replay for continual learning. MER is evaluated on variants of MNIST (Permuted, Rotations, Many) and Omniglot against GEM and EWC. It is further tested in two reinforcement learning environments, Catcher and FlappyBird. In all cases, MER exhibits significant gains in terms of average retained accuracy.\n\nPros\n\nThe paper is well structured and generally well written. The argument is both easy to follow and persuasive. In particular, the proposed framework for trading off catastrophic forgetting against positive transfer is enlightening and should be of interest to the community. \n\nWhile the idea of aligning gradients across tasks has been proposed before (Lopez-Paz & Ranzato, 2017), the authors make a non-trivial connection to Reptile that allows them to achieve the same goal in a surprisingly simple algorithm. That the algorithm does not require tasks to be identified makes it widely applicable, and the reported results are convincing. \n\nThe authors have taken considerable care to tease out various effects, such as how MER responds to the degree of non-stationarity in the data, as well as the buffer size. I’m particularly impressed that MER can achieve such high retention rates using only a buffer size of 200. Given that multiple batches are sampled from the buffer for every input from the current task, I’m surprised MER doesn’t suffer from overfitting. How does the train-test accuracy gap change as the buffer size varies?\n\nThe paper is further strengthened by empirically verifying that MER does indeed lead to gradient alignment across tasks, and by an ablation study delineating the contribution from the ER strategy and the contribution from including Reptile. Notably, just using ER outperforms previous methods, and for a sufficiently large buffer size, ER is almost equivalent to MER. This is not surprising given that, in practice, the difference between MER and ER is an additional decay rate (\\gamma) applied to gradients from previous batches. \n\nCons\n\nI would welcome a more thorough ablation study to measure the difference between ER and MER. 
In particular, how sensitive is MER to changes in \\gamma? And could ER + an adaptive optimizer (e.g. Adam) emulate the effect of \\gamma and perform on par with MER? Similarly, given that DQN already uses ER, it would be valuable to report how a DQN with reservoir sampling performs.\n\nI am not entirely convinced, though, that MER maximizes for forward transfer. It turns continual learning into multi-task learning, and if the new task is sufficiently different from previous tasks, MER’s ability to learn the current task would be impaired. The paper only reports average retained accuracy, so the empirical support for the claim is ambiguous.\n\nThe FlappyBird experiment could be improved. As tasks are defined by making the gap between pipes smaller, a good policy for task t is a good policy for task t-1 as well, so the trade-off between backward and forward transfer that motivates MER does not arise. Further, since the baseline DQN never finds a good policy, it is essentially a pseudo-random baseline. I suspect the only reason DQN+MER learns to play the game is because it keeps \"easy\" experiences with a lot of signal in the buffer for a longer period of time. That both the baseline and MER+DQN seem to unlearn from tasks 5 and 6 suggests further calibration might be needed.", "The paper considers a number of streaming learning settings with various forms of dataset shift/drift of interest for continual learning research, and proposes a novel regularization-based objective enabled by a replay memory managed using the well-known reservoir sampling algorithm.\n\nPros:\nThe new objective is not too surprising, but figuring out how to effectively implement this objective in a streaming setting is the strong point of this paper. \n\nTask labels are not used, yet performance seems superior to competing methods, many of which use task labels.\n\nResults are good on popular benchmarks, and I find the baselines convincing in the supervised case.\n\nCons:\nDespite somewhat frequent usage, I would like to respectfully point out that Permuted MNIST experiments are not very indicative for a majority of desiderata of interest in continual learning, and i.m.h.o. should be used only as a prototyping tool. To pick one issue, such results can be misleading since the benchmark allows for “trivial” solutions which effectively freeze the upper part of the network and only change the first (few) layer(s) which “undo” the permutation. This is an artificial type of dataset shift, and is not realistic for the type of continual learning issues which appear even in single-task deep reinforcement learning, where policies or value functions represented by the model need to change substantially across learning.\n\nI was pleased to see the RL experiments, which I find more convincing because the dataset drifts/shifts are more interesting. Also, such applications of continual learning solutions are attempting to solve a ‘real problem’, or at least something which researchers in that field struggle with. That said, I do have a few suggestions. At first glance, it’s not clear whether anything is learned in the last 3 versions of Catcher, or what the y axis actually means. What counts as good performance for each game is very specific to your actual settings, so I have no reference to compare the scores with. The sequence of games is progressively harder, so it makes sense that scores are lower, but it’s not clear whether your approach impedes learning of new tasks, i.e. 
what is the price to pay for not forgetting?\n\nThis is particularly important for the points you’re trying to make because a large number of competing approaches either saturate the available capacity and memory with the first few tasks, or they faithfully model the recent ones. Any improvement there is worth a lot of attention, given proper comparisons. Even if this approach does not strike the ‘optimal’ balance, it is still worth knowing how much training would be required to reach full single-task performance on each game variant, and what kind of forgetting that induces. \n", "We would like to sincerely thank you for your comments about our work and for your questions. These will help us further improve our empirical discourse. We will definitely make sure that we address all of your questions in the revised version of our paper once the open review tool allows for revisions.\n\nFirst Question: Thank you for bringing this important detail up. The answer requires some contextualization. In the case of Catcher, there is no predefined (hard-coded) maximum score in the library we used. Under some soft assumptions but with realistic settings, such as the defaults used in the experimental section (default pellet speed, default player speed, etc.), the score grows approximately linearly with the number of frames for a perfect player. It can be approximated by 0.12 x n_frames (empirically, we found it possible to achieve a score of 1 in 10 frames, 12 in 100 frames, 120 in 1k frames, 597 in 5k frames, and so on). In the case of FlappyBird, based on reported videos on popular channels, the hard limit of the original game was set to 999 points. However, for the emulator used in this experiment there is no trace of such a hard limit. Maybe a more interesting question is human performance: the fact that it was a very popular game raised the public question of the overall difficulty of the game for humans (see https://www.theguardian.com/news/2014/mar/03/flappy-bird-what-does-the-data-say). As stated in the article referenced above, and even in the Wikipedia article, human performance is on average much lower than this hard limit: in the analysis above, it is observed that it typically takes more than 350 attempts (full episodes) to achieve a couple of games with score 12. It makes sense to us, then, that a 'Platinum level' is achieved with a score of 40. We are compiling more information to reliably compute the distribution of scores among human players, and we will update the appendix of the revision with this information.\n\nSecond and Fifth Questions: The question of asymptotic scores for the RL experiments is an interesting one. We are running experiments now and think this is a good suggestion that will help provide additional context for the results. As a sneak peek for soft reference, our preliminary experiments with another model (A3C) resulted in 296 as the asymptotic score for Catcher. We have found that learning may proceed quite slowly after the initial period, so we would like to run our models for a very long period to ensure we have truly found the asymptotic performance.\n\nThird Question: Thank you for this question, as it also improves our discourse to highlight this point, showcasing the significant extent of transfer across tasks that MER achieves during continual lifelong training. We originally provided this information through our figures in the main text, but will make sure to update the format of the figures and provide details in the text to make this much clearer. 
After 25k steps of training from scratch, a DQN achieves an average score across runs of 143.02 on Catcher and -2.83 on Flappy Bird. In contrast, MER achieves an average score across runs of 187.93 on Catcher and 1.32 on Flappy Bird.\n\nFourth Question: Thank you for suggesting this ablation experiment. It fits nicely in the context of our ablation analysis section. This will help highlight the added value of incorporating meta-learning.", "Thanks for the paper! I'm particularly impressed by the RL experiments, which I find a bit difficult to fully interpret without more information. For example:\n- What are the maximum scores achievable in these games/versions?\n- What score does DQN get asymptotically on each version separately, and how much data is required?\n- How much can be learned in 25K frames from scratch in each game?\n- How does DQN perform with reservoir sampling without MER? Any ablation experiments and data would be useful.\n- What is the asymptotic effect of MER on a single task? Does it get to the same level of performance as DQN with enough data? Is this the case for all tasks considered?\n" ]
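The reviews and rebuttal in this record lean heavily on reservoir sampling as the buffer-management strategy behind ER and MER. For readers unfamiliar with it, a minimal sketch of the classic algorithm (Vitter's algorithm R) follows; the class name and interface here are illustrative and are not taken from the paper's Appendix F / algorithm 3.

```python
import random

class ReservoirBuffer:
    """Reservoir sampling: keep a uniform random sample of the stream seen
    so far in a fixed-size buffer, with no knowledge of task boundaries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0  # total examples observed in the stream so far

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep the new example with probability capacity / n_seen,
            # overwriting a uniformly chosen slot; this leaves every example
            # seen so far equally likely to be stored.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

Because old items are displaced with probability shrinking as 1/n_seen, early tasks stay represented for the whole run, which is what lets even a 200-item buffer retain accuracy across long task sequences.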
[ 6, -1, -1, -1, -1, -1, -1, 8, 7, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 5, -1, -1 ]
[ "iclr_2019_B1gTShAct7", "SJlIlAJqCm", "rJl0zU-cCQ", "H1l7x3R_h7", "B1eZ4nh_nm", "Bkxpap1cAm", "rye41gla2Q", "iclr_2019_B1gTShAct7", "iclr_2019_B1gTShAct7", "Hyxl43qlnQ", "iclr_2019_B1gTShAct7" ]
iclr_2019_B1gstsCqt7
Sparse Dictionary Learning by Dynamical Neural Networks
A dynamical neural network consists of a set of interconnected neurons that interact over time continuously. It can exhibit computational properties in the sense that the dynamical system’s evolution and/or limit points in the associated state space can correspond to numerical solutions to certain mathematical optimization or learning problems. Such a computational system is particularly attractive in that it can be mapped to a massively parallel computer architecture for power and throughput efficiency, especially if each neuron can rely solely on local information (i.e., local memory). Deriving gradients from the dynamical network’s various states while conforming to this last constraint, however, is challenging. We show that by combining ideas of top-down feedback and contrastive learning, a dynamical network for solving the l1-minimizing dictionary learning problem can be constructed, and the true gradients for learning are provably computable by individual neurons. Using spiking neurons to construct our dynamical network, we present a learning process, its rigorous mathematical analysis, and numerical results on several dictionary learning problems.
accepted-poster-papers
While there has been a lot of previous work on training dictionaries for sparse coding, this work tackles the problem of doing so in a purely local way. While previous work suggests that the exact computation of the gradient addressed in the paper is not necessarily critical, as noted by reviewers, all reviewers agree that the work still makes important contributions through both its theoretical analyses and presented experiments. Authors are encouraged to work on improving clarity further and delineating their contribution more precisely with respect to previous results.
train
[ "S1xN4BLApX", "Hyl2hXI067", "rJgVjEUATX", "rklBey5vTQ", "SJe04G-g67", "ryxykzEyam" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Figure 2 serves to illustrate our theoretical results and shows how the algorithm is run in practice. We revised the caption of Figure 2, providing a more detailed and clear description.\n\nWe indeed cited and discussed the early \"similarity matching\" work (Hu et al. 2014) in our original submission. In our updated paper, we further included a later, more developed work (Pehlevan et al. 2018) in the reference. This line of work focuses on a novel learning objective function, while we study the sparse coding objective function that has been widely studied not only in neuroscience but also in signal processing and machine learning.", "We thank all the reviewers for the detailed and constructive feedback. We have uploaded a revision of our submission to address the concerns as explained in the responses below.", "1. Our intention to separate B and D is for their different physical meanings: the former corresponds to particular connection weights and the latter is an argument in an optimization problem. The reviewer's feedback to merge B and D certainly seems useful. We will further revise our paper with more succinct mathematical notation but need a little more time to ensure the presentation remains coherent.\n\n2. We do not intend to claim to be the first to use feedback connections to train neural networks. This idea has a long history in contrastive learning (as we pointed out in the introduction) and can be even traced back to the recirculation algorithm for training autoencoders (Hinton \\& McClelland, 1988) that we discussed in Appendix C.2. We have updated Section 1.2 to incorporate the growing body of work in this direction suggested by the reviewer, and help clarify our contributions.\n\n3. The algorithm we established can be mapped to a massively parallel architecture and, in principle, lead to performance gains in two ways: the run time can be shorter due to a higher level of parallelism, and the energy cost of each operation can be lower because the memory is located closer to each computation unit. To further quantify the efficiency advantage of the algorithm, one must design a new parallel computer architecture such as (Davies et al., 2018), since the efficiency gain is coupled with the ability to build hardware differently. Otherwise, there is no easy comparison to make against a conventional numerical optimization algorithm implemented on a general-purpose CPU. The goal of this paper is to effect dictionary learning on this specialized computational model, which is already a non-trivial task, and we hope this work can motivate future study of hardware designs to realize the potential performance gains.\n\n4. We have updated Section 1.1 and Appendix C.2 to avoid potential confusions pointed out by the reviewer. In summary, the prior work that we cited all have significant gaps between the learning rules and solving the dictionary learning objective functions. The arguments made by each work are detailed and discussed in Appendix C.2.\n\n5. We believe our ability to compute the correct gradient is an important contribution, notwithstanding the reviewer's highly relevant comments. First, it is known to the deep learning and more generally the machine learning community that gradients need not be exact. For example, in the review article Optimization Methods for Large-Scale Machine Learning by Bottou/Curtis/Nocedal in SIAM Review, 2018, Equation 4.7a shows that as long as the approximate gradients on the average are uniformly smaller than 90 degrees, convergence can occur. 
In practice, however, it is hard to establish such a property in a non-trivial way. Such an acute angle property is satisfied by using an *exact* gradient on a batch of data points. In this case, the approximate gradients are unbiased estimates of the full-batch gradient and thus make a zero-degree angle with it on average. Among the works that we cited in our article that use approximations to the gradient, none claim that the approximate gradients employed satisfy the acute angle property. This brings us to our second point. The work by Lillicrap et al. that the reviewer cited shows a number of excellent results. Note, however, that this work does not explicitly prove that the approximate gradients used satisfy the acute angle property. It shows that a random (but fixed) projection of the error will lead to the weights being adjusted so that the random projection becomes a suitable approximate gradient. Moreover, while successful learning is observed in quite a few examples, the authors establish a convergence proof in the supplemental material (Note 11) only for a linear network without nonlinear activations. As discussed in Note 16, the theoretical results are limited so far. We thus circumvent many theoretical difficulties by being able to compute exact gradients in the first place.", "The authors study sparse coding models in which unit activations minimize a cost that combines: 1) the error between a linear generative model and the input data; and 2) the L1 norm of unit activations themselves. They seek models in which both the inference procedure -- generating unit activations in response to each input data example -- and the learning procedure -- updating network connections so that the inferences minimize the cost function -- are local. By \"local\" they mean that the update to each unit's activation, and the updates to the connection weights, rely only on information about the inputs and outputs from that unit / connection. In a biological neural network, these are the variables represented by pre- and post-synaptic action potentials and voltages, and in hardware implementations, operations on these variables can be performed without substantially coordinating between different parts of the chip, providing strong motivation for the locality constraint(s). \n\nThe authors achieve a local algorithm that approximately optimizes the sparse coding objective function by using feedback: they send the sparse coding units' activities \"back\" to the input layer through feedback connections. In the case where the feedback connection matrix is the transpose of the sparse coding dictionary matrix (D), the elementwise errors in the linear generative model (e.g., the non-local part of the sparse coding learning rule obtained by gradient descent) are represented by the difference between the inputs and this feedback to the input layer: that difference can be computed locally at the input units and then sent back to the coding layer to implement the updates. The feedback connections B are updated in another local process that keeps them symmetric with the feedforward weights: B = D = F^T throughout the learning process. 
\n\nThe authors provide several theorems showing that this setup approximately solves the sparse coding problem (again, using local information), and show via simulation that their setup exhibits an evolution of the loss function during training similar to that of SGD on the sparse coding cost function.\n\nI think that the paper presents a neat idea -- feedback connections are too often ignored in computational models of the nervous system, and correspondingly in machine learning. At the same time, I have some concerns about the novelty and the presentation. Those are described below:\n\n1. The paper is unnecessarily hard to read, at least in part due to a lack of notational consistency. As just one example, with B=D, why use two different symbols for this matrix? This just makes it so that your reader needs to keep track mentally of which variable is actually which other variable, and that quickly becomes confusing. I strongly recommend choosing the simplest and most consistent notation that you can throughout the paper.\n\n2. Other recent studies also showed that feedback connections can lead to local updates successfully training neural networks: three such papers are cited below. The first two papers do supervised learning, while the third does unsupervised learning. It would be helpful for the authors to explain the key points of novelty of their paper: the application of these feedback connection ideas to sparse coding. Otherwise, readers may mistakenly get the impression that this work is the first to use feedback connections in training neural networks.\n\nGuerguiev, J., Lillicrap, T.P. and Richards, B.A., 2017. Towards deep learning with segregated dendrites. ELife, 6, p.e22901.\n\nSacramento, J., Costa, R.P., Bengio, Y. and Senn, W., 2018. Dendritic cortical microcircuits approximate the backpropagation algorithm. arXiv preprint arXiv:1810.11393.\n\nFederer, C. and Zylberberg, J., 2018. A self-organizing short-term dynamical memory network. Neural Networks.\n\n3. Given that the performance gains of the locality (vs something like SparseNet) are given such emphasis in the paper, those should be shown in the numerical experiments. This could be quantified by runtime, or some other measure.\n\n4. The discussion of prior work is a little misleading -- although I'm sure this is unintentional. For example, at the top of p. 3, it is mentioned that the previous local sparse coding models do not have rigorous learning objectives. But then the appendix describes the learning objectives, and the approximations, made in the prior work. I think that the introduction should have a more transparent discussion of what was, and was not, in the prior papers, and how the current work advances the field.\n\n5. The paper -- and especially appendix C2 -- places strong emphasis on the importance of finding local implementations of true gradient descent, as opposed to the approximations made by prior authors. I'm not sure that's such a big deal, given that Lillicrap et al. showed nicely in the paper cited below that any learning rule that is within 90 degrees of true gradient descent will still minimize the cost function: even if an algorithm doesn't move down the steepest path, it can still have updates that always move \"downhill\", and hence minimize the loss function. Consequently, I think that some justification is needed showing that the current model, being closer to true gradient descent, really outperforms the previous ones. \n\nLillicrap, T.P., Cownden, D., Tweed, D.B. and Akerman, C.J., 2016. 
Random synaptic feedback weights support error backpropagation for deep learning. Nature communications, 7, p.13276.", "The seminal work of Olshausen and Field on sparse coding is widely accepted as one of the main sources of inspiration for dictionary learning. This contribution makes the connection from dictionary learning back to a neuronal approach. Building on the Local Competitive Algorithm (LCA) of Rozell et al. and the theoretical analysis of Tang et al., this submission revisits dictionary learning under two constraints: that the gradient is learned locally, and that the neural assemblies maintain consistent weights in the network. These constraints are relevant for a better understanding of the underlying principles in neuroscience and for application development on neuromorphic chipsets.\n\nThe proposed theorems extend the previous work on sparse coding with spiking neurons and address the update of the dictionary using only information available from local neurons. The submission also considers possible implementations on parallel architectures. The numerical experiments are conducted on three datasets and show the influence of weight initialization and the convergence on each dataset. An example of image denoising is provided in the appendix. ", "This paper proposes a dynamical neural network for sparse coding where all the interaction terms are learned. In previous approaches (Rozell et al.), some weights were tied to the others. Here the network consists of feedforward, lateral, and feedback weights, all of which have their own learning rule. The authors show that the learned weights converge to the desired solution for solving the sparse coding objective. This seems like a nice piece of work, an original approach that solves a problem that was never really fully resolved in previous work, and it brings things one step closer to both neurobiological plausibility and hardware implementation.\n\nOther comments:\n\nWhat exactly is being shown in Figure 2 is still not clear to me.\n\nIt would be nice to see some other evaluations, for example the sparsity vs. MSE tradeoff (this is reflected in the objective function in part, but it would be nice to see the tradeoff). \n\nThere is recent work from Mitya Chklovskii's group on \"similarity matching\" that also addresses the problem of developing a fully local learning rule. The authors should incorporate a discussion of this in their final paper.\n" ]
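The 90-degree argument traded back and forth above (the Bottou/Curtis/Nocedal condition, feedback alignment) is easy to state concretely: an approximate update direction still locally decreases the loss whenever its angle to the true gradient is acute. A tiny numpy check of that condition, for illustration only:

```python
import numpy as np

def is_descent_direction(g_true, g_hat):
    """Stepping along -g_hat locally decreases the loss iff the angle
    between g_hat and the true gradient g_true is acute (cosine > 0).
    This is the property the rebuttal notes holds for exact mini-batch
    gradients (zero angle with the full-batch gradient on average) but
    is unproven for most approximate local learning rules."""
    cos = g_true @ g_hat / (np.linalg.norm(g_true) * np.linalg.norm(g_hat) + 1e-12)
    return cos > 0.0
```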
[ -1, -1, -1, 6, 9, 8 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "ryxykzEyam", "iclr_2019_B1gstsCqt7", "rklBey5vTQ", "iclr_2019_B1gstsCqt7", "iclr_2019_B1gstsCqt7", "iclr_2019_B1gstsCqt7" ]
iclr_2019_B1lKS2AqtX
Eidetic 3D LSTM: A Model for Video Prediction and Beyond
Spatiotemporal predictive learning, though long considered to be a promising self-supervised feature learning method, seldom shows its effectiveness beyond future video prediction. The reason is that it is difficult to learn good representations for both short-term frame dependency and long-term high-level relations. We present a new model, Eidetic 3D LSTM (E3D-LSTM), that integrates 3D convolutions into RNNs. The encapsulated 3D-Conv makes local perceptrons of RNNs motion-aware and enables the memory cell to store better short-term features. For long-term relations, we make the present memory state interact with its historical records via a gate-controlled self-attention module. We describe this memory transition mechanism as eidetic, as it is able to effectively recall the stored memories across multiple time stamps even after long periods of disturbance. We first evaluate the E3D-LSTM network on widely-used future video prediction datasets and achieve the state-of-the-art performance. Then we show that the E3D-LSTM network also performs well on early activity recognition, inferring what is happening or what will happen after observing only limited frames of video. This task aligns well with video prediction in modeling action intentions and tendency.
accepted-poster-papers
Strengths: Strong results on future frame video prediction using a 3D convolutional network. Use of future video prediction to jointly learn auxiliary tasks shown to increase performance. Good ablation study. Weaknesses: Comparisons with older action recognition methods. Some concerns about novelty; the main contribution is the E3D-LSTM architecture, which R1 characterized as an LSTM with an extra gate and attention mechanism. Contention: Authors point to novelty in 3D convolutions inside the RNN. Consensus: All reviewers give a final score of 7; well-done experiments helped address concerns around novelty. Easy to recommend acceptance given the agreement.
val
[ "HJeJHrJc27", "BJgyWXsuh7", "rJl_P_YA0m", "S1gVVTLVhX", "SyxImDCiA7", "ryeBsAat0Q", "ByVd51AFRm", "HyxkqaatRm", "r1e-K3aF0Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "AFTER REBUTTAL:\n\nThis is an overall good work, and I do think proves its point. The results on the TaxiBJ dataset (not TatxtBJ, please correct the name in the paper) are compelling, and the concerns regarding some of the text explainations have been corrected.\n\n-----\n\nThe proposed model uses a 3D-CNN with a new kind of 3D-conv. recurrent layer named E3D-LSTM, an extension of 3D-RCNN layers where the recall mechanism is extended by using an attentional mechanism, allowing it to update the recurrent state not only based on the previous state, but on a mixture of previous states from all previous time steps.\n\nPros:\nThe new approach displays outstanding results for future video prediction. Firstly, it obtains better results in short term predictions thanks to the 3D-Convolutional topology. Secondly, the recall mechanism is shown to be more stable over time: The prediction accuracy is sustained over longer preiods of time (longer prediction sequences) with a much smaller degradation. Regarding early action recognition, the use of future video prediction as a jointly learned auxiliary task is shown to significantly increase the prediction accuracy. The ablation study is compelling.\n\nCons:\nThe model does not compare against other methods regarding early action recognition. Since this is a novel field of study in computer vision, and not too much work exists on the subject, it is understandable. Also, it is not the main focus of the work.\n\nIn the introduction, the authors state that they account for uncertainty by better modelling the temporal sequence. Please, remove or rephrase this part. Uncertainty in video prediction is not due to the lack of modelling ability, but due to the inherent uncertainty of the task. In real world scenarios (eg. the KTH dataset used here) there is a continuous space of possible futures. In the case of variational models, this is captured as a distribution from which to sample. Adversarial models collapse this space into a single future in order to create more realistic-looking predictions. I don't believe your approach should necessarily model that space (after all, the novelty is on better modelling the sequence itself, not the possible futures, and the model can be easily extended to do so, either through GANs or VAEs), but it is important to not mislead the reader.\n\nIt would have been interesting to analyse the work on more complex settings, such as UCF101. While KTH is already a real-world dataset, its variability is very limited: A small set of backgrounds and actions, performed by a small group of individuals.\n\n", "The paper proposes a spatiotemporal modeling of videos based on two currently available spatiotemporal modeling paradigms: RNNs and 3D convolutions. The main idea of this paper is to get the best world of both in a unified way. The method first encodes a sequence of frames using 3D-conv to capture short-term motion patterns, passes it to a specific type of LSTM (E3D-LSTM) which accepts spatiotemporal feature maps as input. E3D-LSTM captures long-term dependencies using an attention mechanism. Finally, there are 3D-conv based decoders which receive the output of E3D-LSTM and generate future frames. The message of the paper, I believe, is that 3D-conv and RNNs can be integrated to perform short and long predictions. They show in the experiments how the model can remember far past for reasoning and prediction.\nThe nice point of the method is that it is heavily investigated through experiments. 
It's evaluated on two datasets, with ablation studies on both. Moreover, the paper is well-written and clear. Technically, the paper seems correct.\nHowever, my only big concern is about the limited novelty of the method. E3D-LSTM is the core of the novelty, which is basically an LSTM with an extra gate and an attention mechanism. \n\nOther comments:\n- As the method is in essence a spatiotemporal learning model, why is the method not evaluated on full-length videos of the something-something dataset for the classical action classification task, in order to compare it with the full architecture of I3D or S3D?\n\n- While the paper discusses self-supervised learning, I would suggest showing its benefit on the online action recognition task: one model without the frame-prediction loss and one with. \n\n- The something-something dataset has 174 classes; how was the process of selecting 41 classes out of it?", "Q3:\nAs R1 said, there are works integrating 2D convolution and RNNs, like \"VideoLSTM convolves, attends and flows for action recognition\". Still, novelty is not convincing.\n\nQ4: A typical video classification model which can see full-length videos may make decisions mainly depending on the scene information.\nI understand this paper aims to predict the future. However, \"Zhou et al., Temporal Relational Reasoning in Videos\" shows that for recognizing actions in the something-something dataset, scene clues are not enough and modeling temporal dependencies is important. So a classical classification problem on this dataset makes sense. \n\nThough novelty is still not fully convincing, the paper can shed insights into the topic.", "# 1. Summary\nThis paper presents a model for future video prediction, which integrates 3D convolutions into RNNs. The internal operations of the RNN are modified by adding historical records controlled via a gate-controlled self-attention module. The authors show that the model is also effective for other tasks such as early activity recognition.\n\nStrengths:\n* Nice extensive experimentation on video prediction and early activity recognition tasks and comparison with recent papers\n* Each choice in the model definition is motivated, although some clarity is still missing (see below)\n\nWeaknesses:\n* Novelty: the proposed model is a small extension of a previous work (Wang et al., 2017) \n\n\n# 2. Clarity and Motivation\nIn general, the paper is clear and the general motivation makes sense; however, some points need to be improved with further discussion and motivation:\n\nA) Page 2 “Unlike the conventional memory transition function, it learns the size of temporal interactions. For longer sequences, this allows attending to distant states containing salient information”: This is not obvious. Can the authors add more details and motivate these two sentences? How are long-term relations learned given Eq. 1? \nB) Page 5 “These two terms are respectively designed for short-term and long-term video modeling”: How do you make sure that Recall(.) does not focus on the short-term modeling instead? It is not clear why this should model long-term relations.\nC) Page 5 and Eq 1: the motivation for why layer norm is required when defining C_t^k is not clear\nD) What if the Recall is instead modeled as attention? The idea is to consider only C_{1:t-1}^k (not considering R_t) and have an attentional model that learns what to recall based only on C. Also, why does Recall need to depend on R_t?\nE) Page 5 “to minimize the l1 + l2 loss over every pixel in the frame”: this sentence is not clear. 
How does it relate to Eq. 2?\n\n\n# 3. Novelty\nNovelty is the major concern of this paper. Although the newly introduced concepts and ideas are interesting, the work seems to be an extension of ST-LSTM and PredRNN where Eq 1 is slightly modified by introducing Recall. \nIn addition, the existing relation between the proposed model and ST-LSTM is not clearly stated. Page 2, first paragraph: here the authors should state that the model is an extension of ST-LSTM and highlight the differences and advantages of the new model.\n\n\n# 4. Significance of the work\nThis paper deals with an interesting and challenging topic (video prediction), and it also shows some results on the early activity recognition task. These are definitely nice problems that are far from being solved. From the application perspective this work is significant; however, from the methodological perspective it lacks a bit of significance because of the novelty issues highlighted above.\n\n\n# 5. Experimentation\nThe experiments are robust, with nice comparisons with recent methods and an ablation study motivating the different components of the model (Table 1 and 2). Some suggested improvements:\n\nA) Page 7 “Seq 1 and Seq 2 are completely irrelevant, and ahead of them, another sub-sequence called prior context is given as the input, which is exactly the same as Seq 2”: The COPY task is a bit unclear and needs to be better explained. Why are Seq. 1 and 2 irrelevant? I would suggest rephrasing this part.\nB) Sec. 4.2, “Dataset and setup”: which architecture has been used here?\nC) Sec. 4.3, “Hyper-parameters and Baselines“: the something-something dataset is more realistic than the other two “toy” datasets. Why did the authors choose to train 2-layer 3D-CNN encoders, instead of using existing pretrained 3D CNNs? I would suspect that the results could improve quite a bit.\n\n\n# 6. Others\n* The term “self-supervised auxiliary learning” is introduced in the abstract, but at this point its meaning is not clear. I’d suggest either removing it or explaining its meaning.\n* Figure 1(a): inconsistent notation with 2b. Also add the citation (Wang et al., 2017) since it is the same model as in that paper\n\n-------\n# Post-discussion\nI increased my rating: even if novelty is not high, the results support the incremental ideas proposed by the authors.\n", "Q7 (novelty)\n1) It may be the first work using 3D convolutions in RNNs; however, there is already previous work using 2D convolutions in RNNs: \"Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting\" (NIPS 2015). \n\nQ8 E) Please add this info to the paper.\n\nQ9 C) It would have been interesting to see an experiment with one of these pre-trained models, because the 2-layer network used might not be able to learn good features for the task. \n\nOverall, novelty is still not fully convincing; however, the results support the incremental ideas proposed by the authors.", "\nQ3: Concern about the novelty: my only big concern is about the limited novelty of the method. E3D-LSTM is the core of the novelty, which is basically an LSTM with an extra gate and an attention mechanism.\n\nThe concern about limited novelty is mainly due to the seeming similarity to the prior work [Wang et al., 2017]. Below we clarify the differences from the prior work:\n\n1. Our paper is one of the first works, if not the first, to systematically explore 3D convolutions **inside** the RNN. 
More importantly, it is the first to show that a carefully designed method achieves state-of-the-art results on several public benchmarks. The improvements are otherwise not shown for any known combination of 3D convolutions and RNNs. \n\n2. Our technical differences from the existing work include: \na. We study where to apply the 3D information. For example, combine 2D or 3D inputs (see Figure 1), inflate the LSTM cell to 3D (see Figure 2b), or separate the 3D convolutions in the input and LSTM cell (see Table 4). \nb. We propose how to effectively embed the 3D convolution inside the LSTM (i.e. we introduce a new recall gate in Equation 1 for the 3D-memory transition inside the LSTM).\n\nAmong the recent advances in deep learning, many great models appear to be similar to prior work (e.g. ResNet and Highway Network, ConvLSTM and LSTM, C3D/I3D CNN and 2D CNN). However, this is not true, as the devil is in the important details. Similarly, we build upon prior work, make only necessary, yet important, model designs, and validate their necessity with ablation studies to demonstrate their merits. Our designs are driven by a clear motivation and innovative thinking, and validated by extensive experiments (as agreed by all reviewers). \nWe hope this can resolve the concern on novelty.\n\nQ4: As the method is in essence a spatiotemporal learning model, why is the method not evaluated on full-length videos of the something-something dataset?\n\nThe main reason is that predicting on the full-length video may not align well with our topic. A typical video classification model which can see full-length videos may make decisions mainly depending on the scene information. As shown in Fig. 5, suppose the task is to predict the category “Poking a stack of [Something], so the stack collapses”. The problem would be very simple as long as the model sees the last frame, which shows the outcome of the action. \n\nIn contrast, the early activity recognition task leaves the model no other choice but to depend on an inference of the action intentions when making decisions. It aligns well with the video prediction task, in which the sequential tendency and causality are important.\n\nWe note that it would be more accurate to describe our model as a spatiotemporal predictive learning model, rather than a broad “spatiotemporal learning model”. We have revised that in the paper.\n\nQ5: Show the benefit on the online action recognition task. \n\nAs suggested, we have added online early activity recognition by making the classifier depend only on a concatenation of the recurrent outputs for the last 5 timestamps. As such, the historical recurrent states are only kept for 5 timestamps and then discarded. In particular, we apply a sliding window of limited length to the inputs of the Recall gate, using $C_{t-5:t-1}$ instead of $C_{1:t-1}$ in Eq. 1. Experimental results are shown in Table 7. Despite the slightly decreased accuracy, applying the sliding window on the Recall gate improves the scalability of E3D-LSTM.\n\nQ6: How was the process of selecting 41 classes out of the something-something dataset?\n\nIn the original paper [Goyal et al. 2017] of the Something-Something dataset, the 41 classes (in Table 7) are listed as a standard and official dataset setting. This split contains 56k video clips for training and 7.5k for validation, and is large enough while being computationally convenient for comparing a variety of baseline methods. 
We have clarified this point in the paper (Page 9).", "\nQ7 Novelty: the proposed model is a small extension of a previous work [Wang et al., 2017]\n>> Please see the answer to Q3 above (the novelty of the paper).\n\nQ8 Clarity and Motivation:\n\nA) Page 2...How are long-term relations learned given Eq. 1? \n>> Unlike standard LSTMs, we are motivated by modeling the long-term relations across frames. The long-term relations are learned by the RECALL function in Eq. 1, whose inputs are the historical memory states C_{t-\\tau:t-1}^k (in particular, we use C_{1:t-1}^k for most experiments in this work). The RECALL function queries useful information from C_{t-\\tau:t-1}^k using R_t. We have clarified this point in the paper (Page 4).\n\nB) Page 5...Not clear why Recall(.) should model long-term relations.\n>> The RECALL function enables adaptive learning of short-term and long-term modeling. More specifically, in Eq. 1, C_{t-1}^k is added to C_t^k via a short-cut connection controlled by the forget gate. Intuitively, it conveys short-term information, thus allowing the RECALL function to focus on long-term relations. Empirically, the COPY task verifies that our model can make use of information from the distant memory states when future predictions are severely dependent on the distant past.\n\nC) Eq 1: why layer norm is required when defining C_t^k is not clear.\n>> We use the layer normalization technique to mitigate the covariate shift and stabilize the training process, as it has been commonly used in RNNs. We have made this clear in the paper. \n\nD) What if the Recall is instead modeled as attention?\n>> Basing the RECALL function solely on the memory states C would make the relations between C_{t-1}^k and itself (or the relations between very short-term memory states) dominate the result of RECALL(.). Thus, we encode X_t and H_{t-1}^k into R_t, and use it as the query of the attentive RECALL function. \n\nE) Page 5 “to minimize the l1 + l2 loss over every pixel in the frame” is not clear.\n>> We use different objective functions for different tasks:\n1. Video prediction: L1 + L2 loss.\n2. Early activity recognition: Eq. 3 in the revised paper.\n\nQ9 Experiments:\n\nA) Page 7...Why are Seq. 1 and Seq. 2 irrelevant?\n>> We have rephrased this part on Page 6. Basically, the COPY task is to evaluate whether our model can recall useful information from the distant memory states. A well-performing predictive model should make precise predictions regarding Seq 2, as it has seen all frames of this sequence before. But this task is difficult for previous LSTM networks: because Seq 1 is totally irrelevant, making predictions of Seq 1 will erase their memory of Seq 2.\n\nB) Sec. 4.2, “Dataset and setup”: which architecture has been used here?\n>> We have made it clear that the architecture for KTH is exactly the same as that for Moving MNIST.\n\nC) Sec. 4.3...the something-something dataset is more realistic than the other two “toy” datasets. Why did the authors choose to train 2-layer 3D-CNN encoders, instead of using existing pretrained 3D CNNs?\n>> In this paper, our goal is to explore a generic method that can infer the action tendency and intentions from sequential video frames. 
We show that in a fair setting (the same training set and similar #learnable parameters), the improvements of our work come from a better model to capture and predict low-level video data trends, along with a better understanding of high-level actions.\nAlthough using the 3D-CNN model pre-trained on video datasets may improve the results, it also makes fair comparisons among all methods very tricky. First, suppose a model improves the results; it is less clear whether it is because the model learns a better representation on the pretrained data, or it is actually better in modeling the target dataset. Second, due to the domain difference, it is hard to select which pretrained models (e.g. Sports1M or Kinetics) to use on which dataset, and the pretrained model works on one dataset (e.g. something-something) may not work well on another dataset(e.g. KTH). These issues can result in a lengthy and unclear experimental section.\n", "\nQ1: In the introduction, the authors state that they account for uncertainty by better modeling the temporal sequence... \n\nWe have rephrased this expression for clarity in the revised paper (Page 2). Below is a copy: Future prediction errors of an imperfect model can be categorized by two factors: a) the “systematic errors” caused by a lack of modeling ability to the deterministic variations; b) the stochastic, inherent uncertainty of the future. We aim to minimize the first factor in this work.\n\nQ2: Analyze the work in more complex settings.\n\nWe have experimented with the Something-Something dataset for video prediction, but the generated frames are not satisfying even when integrated with adversarial training and variational methods. The results are not surprising as the number of training samples is too limited to capture the diverse scenes of real-world videos (due to the illumination, occlusion, camera motion, to name a few). This makes future prediction considerably difficult for all existing methods, including ours. Exploring very complex datasets will be an interesting future research direction for this task. \n\nHowever, as R3 suggested, we further evaluate our method on a real-world dataset for traffic flow prediction, i.e., TaxtBJ. In this dataset, traffic flows (in consecutive heat maps) are collected from the chaotic real-world environment. Predicting urban traffic conditions is a complex setting, as the heat maps are very noisy and we do not have any corresponding, underlying, additional information. Implementation details and empirical results can be found in Appendix B. We train the networks to predict 4 frames (the next 2 hours) from 4 observations and report MSE at every timestamp. As shown, our method achieves the state-of-the-art result in Table 8 and generates the most accurate predictions in Fig. 6.\n", "We thank reviewers for the valuable comments. Based on the reviews, we make the following changes (we mark these changes in blue in the revised paper):\n\n1. As suggested by R1, we enable our method to perform the online recognition tasks and compare our online model with and without the frame-prediction loss in Table 7.\n\n2. As suggested by R3, we add an additional real-world dataset on traffic flow prediction and evaluate our method under this complex setting. The results are presented in Appendix B.\n\n3. We rephrase/clarify all of the points raised by the reviewers.\n\nWe will address all questions in the individual replies.\n" ]
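The attentive RECALL mechanism described in the Q8-D response above (R_t as a query over stacked memory states) can be sketched as follows. This is a minimal, hypothetical NumPy illustration of dot-product attention over past cell states, not the paper's exact Eq. 1: the shapes, the flattening of the 3D states into vectors, and the omission of layer normalization are simplifying assumptions made for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def recall(r_t, past_cells):
    # r_t: (d,) query encoded from X_t and H_{t-1} (per the Q8-D response);
    # past_cells: (tau, d) flattened memory states C_{t-tau:t-1}.
    scores = past_cells @ r_t        # relevance of each past memory state
    weights = softmax(scores)        # attention over the tau past steps
    return weights @ past_cells      # weighted recall of long-term memory
```

The sliding-window variant from the Q5 response then amounts to passing only the last 5 rows of past_cells instead of the full history.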
[ 7, 7, -1, 7, -1, -1, -1, -1, -1 ]
[ 5, 4, -1, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2019_B1lKS2AqtX", "iclr_2019_B1lKS2AqtX", "ryeBsAat0Q", "iclr_2019_B1lKS2AqtX", "ByVd51AFRm", "BJgyWXsuh7", "S1gVVTLVhX", "HJeJHrJc27", "iclr_2019_B1lKS2AqtX" ]
iclr_2019_B1lnzn0ctQ
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding algorithm), have been an empirical success for sparse signal recovery. The weights of these neural networks are currently determined by data-driven “black-box” training. In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning. This significantly simplifies the training. Specifically, the data-free optimization problem is based on coherence minimization. We show our ALISTA retains the optimal linear convergence proved in (Chen et al., 2018) and has a performance comparable to LISTA. Furthermore, we extend ALISTA to convolutional linear operators, again determined in a data-free manner. We also propose a feed-forward framework that combines the data-free optimization and ALISTA networks from end to end, one that can be jointly trained to gain robustness to small perturbations in the encoding model.
accepted-poster-papers
This is a well-executed paper that makes clear contributions to the understanding of unrolled iterative optimization and soft thresholding for sparse signal recovery with neural networks.
train
[ "HJen7j87JN", "HklQ7zppRQ", "B1xca8C4nX", "rkxXkC3DTQ", "r1xH4AhDpX", "HyltGkJFiQ", "ryxQcC8jp7", "rJey90hw6m", "r1lP83nDpX", "SyxX7z5uhQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "[Opening is okay]\n\nPoints 1 and 2: There is no word \"tree\" or \"graph\", no \"beta\" or \"$\\beta$\", in our paper. We are confused and think they may refer to another paper. Could you kindly clarify?\n\n3: This is great suggestion. The matrix W is the solution of a convex quadratic program subject to linear constraints and, thereby, a linear system. Solving this system costs a negligible amount compared to training the remaining parameters. For example, when W is 250-by-500, computing W takes a few seconds but the remaining of ALISTA takes 1.5 hours. As you suggested, we will add this explanation and the complexity of computing W to the camera-ready version.", "Just some minor comments on the responses from the authors:\n[Removed comments that were incorrectly included in this response]\n* The complexity of the algorithm should also include that of the optimization that finds the matrix W, equation (16) in Stage 1.", "The paper describes ALISTA, a version of LISTA that uses the dictionary only for one of its roles (synthesis) in ISTA and learns a matrix to play the other role (analysis), as seen in equations (3) and (6). The number of matrices to learn is reduced by tying the different layers of LISTA together.\n\nThe motivation for this paper is a little confusing. ISTA, FISTA, etc. are algorithms for sparse recovery that do not require training. LISTA modified ISTA to allow for training of the \"dictionary matrix\" used in each iteration of ISTA, assuming that it is unknown, and offering a deep-learning-based alternative to dictionary learning. ALISTA shows that the dictionary does not need to change, and fewer parameters are used than in LISTA, but it still requires learning matrices of the same dimensionality as LISTA (i.e., the reduction is in the constant, not the order). If the argument that fewer parameters are needed is impactful, then the paper should discuss the computational complexity (and computing times) for training ALISTA vs. the competing approaches.\n\nThere are approaches to sparse modeling that assume separate analysis and synthesis dictionaries (e.g., Rubinstein and Elad, \"Dictionary Learning for Analysis-Synthesis Thresholding\"). A discussion of these would be relevant in this paper.\n\n* The intuition and feasibility of identifying \"good\" matrices (Defs. 1 and 2) should be detailed. For example, how do we know that an arbitrary starting W belongs in the set (12) so that (14) applies? \n* Can you comment on the difference between the maximum entry \"norm\" used in Def. 1 and the Frobenius norm used in (17)?\n* Definition 3: No dependence on theta(k) appears in (13), thus it is not clear how \"as long as theta(k) is large enough\" is obtained. \n* How is gamma learned (Section 2.3)?\n* The notation in Section 3 is a bit confusing - lowercase letters b, d, x refer to matrices instead of vectors. In (20), Dconv,m(.) is undefined; later Wconv is undefined.\n* For the convolutional formulation of Section 3, it is not clear why some transposes from (6) disappear in (21).\n* In Section 3.1, \"an efficient approximated way\" is an incomplete sentence - perhaps you mean \"an efficient approximation\"?. Before (25), Dconv should be Dcir? 
The dependence on d should be more explicitly stated.\n* Page 8 typo \"Figure 1 (a) (a)\".\n* Figure 2(a): the legend is better used as the label for the y axis.\n* I do not think Figure 2(b) verifies Theorem 1; rather, it verifies that your learning scheme gives parameter values that allow for Theorem 1 to apply (which is true by design).\n* Figure 3: isn't it easier to use metrics from support detection (false alarm/missed detection proportions given by the ALISTA output)?", "Answers to individual comments:\n\n- Q1 (Intuition and feasibility of identifying \"good\" matrices; Definition 1): \nDefinition 1 describes a property of good matrices: small coherence with respect to D. This is inspired by Donoho & Elad, 2003; Elad, 2007; Lu et al, 2018. Our Theorem 1 validates this point: a small mutual coherence leads to a large c and faster convergence. Feasibility is proved in (Chen et al., 2018). We have added these clarifications in our update.\n \n- Q1 (Clarification of Definition 2): \nBecause W and D are both “fat” matrices, the product W’D, and such products of their submatrices consisting of two or more their corresponding columns, generally cannot be very close to the identity matrix. For a given D, Definition 2 let sigma_min represent the minimal “distance” and define the set of corresponding W matrices. A larger sigma_min implies slower convergence in Theorem 2. We have added numerical validations of (11) to the appendix in the update. (The original definition (12) is (11) in the updated version.)\n\n- Q2 (Difference between the maximum entry \"norm\" and the Frobenius norm): \nWe use a Frobenius norm in (16) instead of a sup-norm in Def. 1 (8) for computational efficiency. Directly minimizing the sup norm leads to a large-scale linear program. The sizes of the matrices W and D that we used in our numerical experiments are 250 by 500. We implemented an LP solver for the sup-norm minimization (8) based on Gurobi, which requires more than 8GB of memory and may be intractable on a typical PC. However, solving (16) in MATLAB needs only around 10MB of memory and a few seconds. Besides the Frobenius norm, we also tried to minimize the L_{1,1} norm but found no advantages. (The original formula (17) is (16) in the update.)\n\n- Q3 (Definition 3): \nBy (6), x^k depends on thresholding parameters theta^0, theta^1, ..., theta^{k-1}. When these theta parameters are large enough, x^k can be sufficiently sparse. Theorem 1 implies we can ensure “support(x^k) belongs to S” for all k by properly choosing the theta^k sequence.\n\n- Q4 (How is gamma learned): \nThe step sizes gamma^k and thresholds theta^k (for all k) are updated to minimize the empirical recovery loss in (5), using the standard training method based on backpropagation and the Adam method. For ALISTA, the big Theta in (5), which is the set of parameters subject to learning, consists of only gammas and thetas. The matrix W is pre-computed by analytic optimization and, therefore, is fixed during training.\n\n- Q5 (The notation in Section 3): \nThe lowercase letters are always vectors. The matrices D_{conv,m} are defined so that (18), which is precise but complicated, is equivalent to (19), which is simple and compact. The full definition of D_{conv,m} is given in Appendix C.2. The matrices W_{conv,m} are defined for a similar purpose before (21). We have added these clarifications in the updated version. 
(The original formula (20) is (19) in the current version.)\n\n- Q6 (Transpose in convolution): \nTransposing a circulant matrix is equivalent to applying the convolution with rotated filters (Equation (6) and Footnote 2 in Chalasani et al., 2013). We have made clarifications in the update. \n\n- Q7 & Q8 & Q9 (Typos and figure suggestions): \nThanks for finding the typos and making suggestions for figures. We have fixed the typos and will carefully proofread our paper. \n\n- Q10 (“I do not think Figure 2(b) verifies Theorem 1”): \nWe agree that we incorrectly used the words \"verify\" and \"validation.\" Rather, the numerical observations in Figure 2(b) justify our choices of parameters in Theorem 1. We have made this correction.\n\n- Q11 (Figure 3): \nWe agree that the number and proportion of false alarms are a more straightforward performance metric. However, they are sensitive to the threshold. We found that, although using a smaller threshold leads to more false alarms, the final recovery quality is better and those false alarms have small magnitudes and are easy to remove by thresholding during post-processing. That's why we chose to show their magnitudes, implying that we get easy-to-remove false alarms. We have added this reasoning to the final version.", "Thanks for your careful review and the comments! We have revised our paper and we believe our responses and revisions address your concerns. We would be very grateful if you would look over our paper again, and reconsider your opinion.\n\nLet us first provide a general response, followed by responses to your specific comments.\n\nThe goal of work is to significantly speed up sparse recovery. The basis of this line of work is ISTA (iterative soft-thresholding algorithm), a classic iterative method for recovering a sparse vector x from it linear measurements Dx, which are further contaminated by additive noise. Like most iterative methods, ISTA repeats the same operation (matrix-vector multiplications by D and D’ and a soft thresholding) at each iteration. Therefore, it can be written as a simple for-loop. However, depending on the problem condition, it can take hundreds of iterations or tens of thousands of iterations. Gregor & LeCun, 2010, instead of using the original matrices D and D’ and soft-thresholding scalars in ISTA, select a series of new matrices and scalars by training using a set of synthetic sparse signals and their linear measurements. The resulting method, called LISTA (or learned ISTA), has a small fixed number of iterations, roughly 20, and is not only much faster but recovers more accurate sparse vectors than ISTA even if ISTA runs order-of-magnitude more iterations. On the other hand, training LISTA takes a long time, typically ten hours or longer, much like training a neural network with lots of parameters. Also, one must train new matrices and scalars for each encoding matrix D. These shortcomings are addressed by a line of work that follows LISTA. \n\nThis paper introduces ALISTA, which significantly simplifies LISTA by using only one free matrix (besides the encoding matrix D) for all iterations, and pre-computing that matrix by analytic optimization, as opposed to data-driven training. Therefore, when it comes to training ALISTA, there remain only a series of scalars for thresholding and step sizes to be learned from synthetic data. 
Despite this huge simplification, the performance of ALISTA is no worse than LISTA and other work along the line, supported by our theoretical results and numerical verification. \n\nYour question on computational complexity is great. Let us compute how much saving in flops ALISTA has over LISTA or its variants. Assume there are K layers (i.e., iterations) in total, and the encoding matrix has N rows and M columns with N < M, possibly N << M. In its typical implementation, vanilla LISTA learns O(KM^2+K+MN) parameters. That is one matrix and one scalar per layer and another matrix shared between all layers. LISTA in Chen et al., 2018 (also (6) in this paper) learns O(KNM + K) parameters as they learn only one N-by-M matrix and one thresholding parameter per layer. Tied LISTA ((15) in this paper) learns only O(NM + K) parameters by using only one matrix for all the K layers plus a step size and a thresholding parameter per layer. ALISTA ((16) in this paper) learns only O(K) parameters because it determines the only matrix by analytic optimization and fixes it during training. All these methods achieve similar recover quality. We have added this comparison to the revised paper.\n\nThe model in the paper that you has mentioned, “Dictionary Learning for Analysis-Synthesis Thresholding”, is related to our paper as a special LISTA model with only one layer. We have cited this and related papers (listed below) in Section 1 of our updated version and discussed their contributions. \n\nYang et al., 2016. “Analysis-Synthesis Dictionary Learning for Universality-Particularity Representation Based Classification.”", "The papers studies neural network-based sparse signal recovery, and derives many new theoretical insights into the classical LISTA model. The authors proposed Analytic LISTA (ALISTA), where the weight matrix in LISTA is pre-computed with a data-free coherence minimization, followed by a separate data-driven learning step for merely (a very small number of) step-size and threshold parameters. Their theory is extensible to convolutional cases. The two-stage decomposed pipeline was shown to keep the optimal linear convergence proved in (Chen et al., 2018). Experiments observe that ALISTA has almost no performance loss compared to the much heavier parameterized LISTA, in contrast to the common wisdom that (brutal-force) “end-to-end” always outperforms stage-wise training. Their contributions thus manifest in both novel theory results, and the practical impacts of simplifying/accelerating LISTA training. Besides, they also proposed an interesting new strategy called Robust ALISTA to overcome the small perturbations on the encoding basis, which also benefits from this decomposed problems structure. \n\nThe proofs and conclusions are mathematically correct to my best knowledge. I personally worked on similar sparse unfolding problems before so this work looks particularly novel and interesting to me. My intuition then was that, it should not be really necessary to use heavily parameterized networks to approximate a simple linear sparse coding form (LISTA idea). Similar accelerations could have been achieved with line search for something similar to steepest descent (also computational expensive, but need learn step-sizes only, and agnostic to input distribution). Correspondingly, there should exist a more elegant network solution with very light learnable weights. This work perfectly coincides with the intuition, providing very solid guidance on how a LISTA model could be built right. 
Given in recent three years, many application works rely on unfold-truncating techniques (compressive sensing, reconstruction, super resolution, image restoration, clustering…), I envision this paper to generate important impacts for practitioners pursuing those ideas. \n\nAdditionally, I like Theorem 3 in Section 3.1, on the provable efficient approximation of general convolution using circular convolution. It could be useful for many other problems such as filter response matching. \n\nI therefore hold a very positive attitude towards this paper and support for its acceptance. Some questions I would like the authors to clarify & improve in revision:\n\n1.\tEqn (7) assumes noise-free case. The author stated “The zero-noise assumption is for simplicity of the proofs.” Could the authors elaborate which part of current theory/proof will fail in noisy case? If so, can it be overcome (even by less “simpler” way)? How about convolutional case, the same? Could the authors at least provide some empirical results for ALISTA’s performance under noise?\n\n2.\tSection 5.3. It is unclear to me why Robust ALISTA has to work better than the data augmented ALISTA. Is it potentially because that in the data augmentation baseline, the training data volume is much amplified, and one ALISTA model might become underfitting? It would be interesting to create a larger-capacity ALISTA model (e.g., by increasing unfolded layer numbers), train it on the augmented data, and see if it can compare more favorably against Robust ALISTA?\n\n3.\tThe writeup is overall very good, mature, and easy to follow. But still, typos occur from time to time, showing a bit rush. For example, Section 5.1, “the x-axes denotes is the indices of layers” should remove “is”. Please make sure more proofreading will be done.\n\n", "As you kindly suggested, we added two experiments to train the data-augmented version of ALISTA with 20 and 24 layers, to compare with the robust ALISTA model (the concatenation of a feed-forward encoder network that learns to solve the coherence minimization and a ALISTA network with step size and thresholds parameters). \n\nFor training ALISTA with data-augmentation, in each step, we first generate a batch of perturbed dictionaries \\tilde{D}s around an original dictionary D. Then these perturbed dictionaries are used to generate observations, by multiplying sparse vector samples from the same distribution. The data-augmented version of ALISTA is then trained with those dictionary-perturbed samples. It still follows the standard ALISTA to use a fixed weight matrix W that is analytically pre-solved from the original dictionary D.\n\nThe robust ALISTA model instead uses the encoder network to adaptively produce weight matrices to be used in ALISTA. Apart from the encoder network, the robust ALISTA needs to learn a set of step size and thresholds parameters just like the baseline ALISTA. We fix using a 16-layer ALISTA network and a 4-layer encoder in the robust ALISTA model.\n\nIn this experiment, we compare both models’ robustness to dictionary perturbations, by plotting recovery normalized MSEs (in dB) in testing, w.r.t. the standard deviation of perturbation noise, and also w.r.t. the layers used for data-augmented ALISTA. 
We set the maximal standard deviation of generated perturbations to 0.02 and followed the same settings described in Appendix E in the paper: \n\nSigma (standard deviation) | 0.0001 | 0.001 | 0.01 | 0.015 | 0.02 | 0.025\nAugmented ALISTA T=16 | -26.58 | -25.87 | -15.49 | -11.71 | -8.84 | -6.74\nAugmented ALISTA T=20 | -24.43 | -24.46 | -15.39 | -11.77 | -8.94 | -6.82\nAugmented ALISTA T=24 | -24.12 | -24.00 | -15.45 | -11.68 | -8.81 | -6.70\nRobust ALISTA T=16 | -62.47 | -62.41 | -62.02 | -61.50 | -60.67 | -45.00\n\n\n- Observation: as we may see in the above results, more layers didn’t bring obvious empirical benefits to the recoverability of ALISTA. We could even observe that ALISTA of 24 layers had slightly worse NMSE that ALISTA of 16 and 20 layers.\n\n- Analysis: we agree with your insight that the limited parameter volume of augmented ALISTA might limited its capacity and robustness to recover from dictionary-perturbed measurements, compared to robust ALISTA which has another encoder network that adaptively and efficiently encodes the perturbed dictionary \\tilde{D} into new (dynamic) weight matrix \\tilde{W}. ALISTA only has two scalars to be learned in each layer (one scalar as step size and the other as threshold), therefore adding more layers do not enlarge the parameter volume significantly. \n\n- Remark: from the comparison, we could conclude that it takes more than adjusting step sizes and thresholds to gain robustness to dictionary perturbations in LISTA/ALISTA. Therefore, robust ALISTA makes the meaningful progress in creating an efficient encoder network, that can dynamically address the dictionary variations \\tilde{D} by always adjusting \\tilde{W}. Without incurring much higher complexity, robust ALISTA witness remarkable improvements over ALISTA, making it a worthy effort in advancing LISTA-type network research into the practical domain. ", "Thank you for your careful reading and comments!\n\n- Q1: In our proofs, we take b as b=Ax*. If we add noise to the measurements, almost all the inequalities in the proof need to be modified. We will end up getting “convergence” to a neighbor of x* with a size depending on the noise level. Such modifications also apply to the analysis for convolutional dictionaries. Numerically, figures 1(b), 1(c) and 1(d) depict the results of ALISTA under SNRs = 40dB, 30dB and 20dB, respectively. \n\n- Q2: We basically agree with your comment on why data augmented TiLISTA and ALISTA are not performing as well as robust ALISTA. We are conducting the experiments that you have suggested and will update the results in comments once they become available, and also add them to the paper’s next update.\n\n- Q3: Thanks for kindly pointing out our writing issues. We will carefully fix typos and use more proofreading.", "Thank you for your careful reading and kindly identifying the typos in our paper! We will fix these typos and meticulously proofread our article.\n", "The paper raises many important questions about unrolled iterative optimization algorithms, and answers many questions for the case of iterative soft thresholding algorithm (ISTA, and learned variant LISTA). The authors demonstrate that a major simplification is available for the learned network: instead of learning a matrix for each layer, or even a single (potentially large) matrix, one may obtain the matrix analytically and learn only a series of scalars. These simplifications are not only practically useful but allow for theoretical analysis in the context of optimization theory. 
On top of this seminal contribution, the results are extended to the convolutional-LISTA setting. Finally, yet another fascinating result is presented, namely that the analytic weights can be determined from a Gaussian-perturbed version of the dictionary. Experimental validation of all results is presented.\n\nMy only constructive criticism of this paper are a few grammatical typos, but specifically the 2nd to last sentence before Sec 2.1 states the wrong thing \"In this way, the LISTA model could be further significantly simplified, without little performance loss\"\n...\nit should be \"with little\".\n" ]
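The ISTA loop described in the general response above can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed placeholder values for the step size and threshold, not the authors' implementation; as the response explains, LISTA/ALISTA replace the fixed matrices and scalars below with (partly) learned ones.

```python
import numpy as np

def soft_threshold(v, theta):
    # Elementwise soft thresholding: shrink magnitudes toward zero by theta.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(D, b, step, theta, num_iters=1000):
    # Recover a sparse x from measurements b ~ D x.
    # D: (N, M) encoding matrix. step and theta are fixed scalars here,
    # whereas ALISTA learns a per-iteration schedule of such scalars and
    # uses a single analytically pre-computed matrix in place of D.T below.
    x = np.zeros(D.shape[1])
    for _ in range(num_iters):  # the "simple for-loop" from the response
        x = soft_threshold(x - step * (D.T @ (D @ x - b)), theta)
    return x
```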
[ -1, -1, 7, -1, -1, 9, -1, -1, -1, 10 ]
[ -1, -1, 4, -1, -1, 5, -1, -1, -1, 5 ]
[ "HklQ7zppRQ", "rkxXkC3DTQ", "iclr_2019_B1lnzn0ctQ", "B1xca8C4nX", "B1xca8C4nX", "iclr_2019_B1lnzn0ctQ", "HyltGkJFiQ", "HyltGkJFiQ", "SyxX7z5uhQ", "iclr_2019_B1lnzn0ctQ" ]
iclr_2019_B1lz-3Rct7
Three Mechanisms of Weight Decay Regularization
Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of L2 regularization. Literal weight decay has been shown to outperform L2 regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks.
accepted-poster-papers
Reviewers are in consensus and recommended acceptance after engaging with the authors. Please take the reviewers' comments into consideration to improve your submission for the camera-ready.
train
[ "SyxswZfEkV", "B1eBMzYupm", "B1g5xlXqnm", "rygRc8IopQ", "SJgG4O8op7", "HklYSxUoTX", "rygtlWdRnm", "Skldcfu0hm", "rJx5uduA2m", "rJx3XFFv2Q", "B1eS7cTdhm", "rJlhD5wRnQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "The authors have taken my comment into account in the new revision of the paper and adequately addressed issues pointed out by other reviewers. So, I keep my rating unchanged.", "Q1: Agreed\n\nQ2: You are right about weight decay on gamma only affecting the complexity of the model due to the last layer which can be merged with the softmax layer weights (as also pointed out by van Laarhoven). May be mention this below Eq. 5 (while citing van Laarhoven) to remind the reader of this fact.\n\nQ3:\nOn page 6 (left of Figure 4), I recommend changing the sentence \n\"In all cases, we observe that whether weight decay was applied to the top (fully connected) layer did not appear to matter;\"\nto something like \n\"In all cases, we observe that whether weight decay was applied to the top (fully connected) layer did not have a significant impact;\"\n\nQ4: OK\n\nQ5: Thank you for clarifying. I can see the technical mistake made in the 1st submission involving expectation over the input-output Jacobian for ReLU networks. However the current Theorem 1 on deep linear network makes the claim weak and the authors have used earlier work on deep linear networks as a justification.\n\nQ6,7,8,9: OK\n\nComments:\n\nThere were a few technical mistakes in the original submission that were overlooked by the reviewers and the authors have themselves identified and corrected them. However, these corrections have made the results for the second order methods weaker (section 4.2) since they apply to deep linear networks, which is a bit disappointing. But I still think this paper deserves to be read because 1. even though based on intuitions from deep linear networks, experiments are shown for deep non-linear networks confirming the insights drawn from them; 2. other sections have complementary analysis of weight decay for additional cases.\n\n(I have increased my original score by 1)", "I have read the author's response, and I would like to stick to my rating. From the authors' response on the convergence issue, the result from [1] does not directly apply since the activation function that the authors use in this paper is relu (not linear). Having said that, authors didn't find any issues empirically.\n\nQ7: Yes, I agree that the result depends on the gradient structure of the relu activations. But my point was that, it is still a calculation that one has to carry out, and the insight we gain from the calculation seem computational: that one can regularize jacobian norm easily. True, but is that necessary? Or in other words, can we use techniques (not-so) recent implicit regularization literature to analyze KFAC? I still think that the work is good, these are just my questions.\n====\n\nThe paper investigates how weight decay (according to the authors, this is done by scaling weights at each iteration) can be used as a regularizer while using standard first order methods and KFAC. As far as I can see, the experimental conclusion seem pretty consistent with other papers that the authors themselves cite (for eg: Neelakantan et al. (2015); Martens & Grosse, 2015. \n\nIn page 2, the authors mention the three different mechanisms by which weight decay has a regularizing effect. First, what is the definition of \"effective learning rate\"? If the authors mean that regularization just changes the learning rate in some case, that is true. In fact, it is only true while using l2-norm. I looked through the paper, and I couldn't find one. Similarly, I find point #1. 
to be confusing: why does reducing the scale of the weights increase the effective learning rate? (This confusion carries over to/remains in section 4.1.). The sentence starting (in point #1.) with \"As evidence,\", what is the evidence for? Is it for the previous statement that weight decay helps as a regularizer? Looking at Figure 1., Table 1., I can see that weight decay is actually helpful even with BN+D. In fact, the improvement provided by weight decay is uniform across the board. \n\nThe conclusion of mechanism 1 is that for layers with BN, weight decay is implicitly using higher learning rate and not by limiting the capacity as pointed out by van Laarhoven (2017). The two paragraphs below (12) are contradictory or I'm missing something: first paragraph says that \"This is contrary to our intuition that weight decay results in a simple function.\" but immediately below, \"We show empirically that weight decay only improves generalization by controlling the norm, and therefore the effective learning rate.\" Can the authors please explain what the \"effective learning rate\" argument is?\n\nProposition 1 and theorem 1 are extensions from Martens & Gross, 2015, I didn't fully check the calculations. I glanced through them, and they mostly use algebraic manipulations. The main empirical takeaway as the authors mention is that: weight decay in both KFAC-F and KFAC-G serves as a complexity regularizer which sounds trivial (assuming Martens & Grosse, 2015) since in both of these cases, BN is not used and the fact that weight decay is regularization using the local norm. \n\nIf I understand correctly, KFAC is an approximate second order method with the approximation chosen to be such that it is invariant under affine transformations. Are there any convergence guarantees at all for either of these approaches? Newton's method, even for strongly convex loss functions, requires self-concordance to ensure convergence, so I'm a bit skeptic when using approximate (stochastic) Jacobian norm. \n\nSome of the plots have loss values, some have accuracy etc., which is also confusing while reading. I strongly suggest that Figure 1 be shown differently, especially the x-axis! Essentially weight decay improves the accuracy about 2-4% but it is hard to interpret that from the figure.\n", "Thank you for pointing out this work.\n\nSection 2.2 of this paper is indeed related to our mechanism 1. However, the argument of effective learning rate was first identified by van Laarhoven 2017 and we did properly discuss the relationship with van Laarhoven 2017 (also see the response to AnonReview1). In the upcoming version, we will cite the paper you mentioned.", "Thank you for your new comments. We will update the paper according to your suggestions (Q2 and Q3).", "The first mechanism, increasing the effective learning rate, is also identified in this work https://arxiv.org/abs/1709.04546 (Sec. 2.2 and 3.2). The authors may want to discuss how they are related.", "Thank you for the insightful comments. According to your suggestions, we revised the statements of the paper (including 4.1) to make them clearer.\n\nQ1: what is the definition of \"effective learning rate\"\nFor \"effective learning rate\", you can understand it as the \"learning rate\" for normalized networks (see equation 9).\n\nQ2: regularization just changes the learning rate (Mechanism 1)\nNote: it's true for weight decay in general (not L2-norm). 
We also tested weight decay in the case of Adam (see Figure 2) where weight decay and L2 regularization are not identical.\n\nQ3: why reducing the scale of the weights increase the effective learning rate\nAs explained in equation 9, the effective learning is inversely proportional to the weight norm.\n\nQ4: The sentence starting (in point #1.) with \"As evidence,\", what is the evidence for?\nSee Figure 2 and Figure 4. Most of the generalization effect of weight decay is due to applying it to layers with BN.\n\nQ5: The improvement provided by weight decay is uniform across the board. \nWeight decay does improve the performance consistently, but the mechanisms behind are different (depending on the optimization algorithm and network architecture). Figure 1 and Table 1 are mostly to emphasize the difference between L2 regularization and weight decay so as to motivate three mechanisms.\n\nQ6: Argument of Mechanism 1 (or effective learning rate)\nIn mechanism 1, we basically argue that the scaling of weights for BN layers doesn't influence the underlying function (see equation 8), so it doesn't meaningfully constrain the function to be simple (you can always scale down the weights but the function represented by the network is still the same, also see the first paragraph in 4.1). However, the scaling of the weights does influence the updates (see equation 9) by controlling the effective learning rate. The regularization effect of weight decay is achieved by scaling the weights, and therefore the effective learning rate.\n\nQ7: Proposition 1 and Theorem 1 are extensions from Martens & Gross, 2015\nWe have removed Proposition 1 in the latest version. Theorem 1 (Lemma 2 in the latest version) is not just an extension from the K-FAC paper (martens & Grosse, 2015). Actually, it has little to do with the K-FAC paper. We don't think it's trivial for the following reasons:\n\n- Theorem 1 (Lemma 2 in the latest version) is new and it heavily relies on the Lemma 1 (gradient structure) which has nothing to do with the original K-FAC (Martens & Grosse, 2015) paper. \n- Theorem 1 (Lemma 2 in the latest version) is an important part to connect Gauss-Newton norm to approximate Jacobian norm. The result of approximate Jacobian norm is non-trivial and we didn't see any similar theoretical result before. In practice, it's quite expensive to directly regularize Jacobian norm due to the extra computation overhead. In this work, we provide a simple yet cheap way to approximately regularize Jacobian norm and we believe it's useful and novel.\n\nQ8: K-FAC (convergence?)\nK-FAC is currently the most popular approximate natural gradient method in training deep neural networks. It works very well (due to the use of curvature information) in practice and we didn't see any convergence issue. Recently, Bernacchia, 2018 [1] provided convergence guarantee for natural gradient in the case of deep linear networks (where the loss is non-convex). Beyond that, they also gave some theoretical justifications for the performance of K-FAC.\n\n[Reference]\n[1] Exact natural gradient in deep linear networks and application to the nonlinear case", "We thank the reviewer for the positive feedback. \n\nWe have revised the conclusion section to discuss the observed results and potential new directions for future work.", "Thank you for the useful feedback. 
We have updated the paper (especially 4.2) taking into account several of your comments.\n\nQ1: Mechanism 1 is more of a discussion on existing work rather than novel contribution\nWe agree that the argument of \"effective learning rate\" itself is not novel and has been observed by van Laarhoven 2017. \nHowever, we don't think the mechanism 1 is just a discussion of existing work. Particularly, van Laarhoven 2017 didn’t show any experiments that weight decay improves generalization performance. In Figure 2 of van Laarhoven 2017, they only showed that small learning rate is preferred when weight decay is applied. The important point we made is that weight decay actually improves the generalization performance even with well-tuned learning rate parameter and the gain of applying weight decay cannot be achieved by tunning the learning rate directly (we shouldn't ignore the interaction between the learning rate and weight decay).\n\nFurthermore, van Laarhoven 2017 was just talking about L2 regularization which is not equivalent to weight decay in adaptive gradient methods. We don't think the author realized the subtle difference between L2 regularization and weight decay. In the combination of L2 regularization and adaptive gradient methods, the argument of effective learning rate might not hold exactly since L2 regularization can affect both the scale and direction of the weights. In our paper, we extend the argument of \"effective learning rate\" to first-order optimization algorithms (including SGD and Adam) by identifying the subtle difference between L2 regularization and weight decay.\n\nQ2: The effect of weight decay on the gamma parameter of batch-norm.\nAs discussed in van Laarhoven 2017, only the gamma of the last BN layer affects the complexity of the network. The role of it is quite similar to the scale of the last fully connected layer since you can always merge the gamma parameter into the last fc layer. In practice, the gain of regularizing the gamma parameter of the last BN layer is quite small which is consistent with our observation that regularizing the last fc layer gives marginal improvement. That's why we fixed the gamma parameter throughout the paper.\n\nQ3: In Figure 2 and 4, there is a noticeable difference between training without weight decay, and training with weight decay only on the last layer. \nIn Figure 2, the gap is pretty small (<1%). \nIn Figure 4, regularizing the last layer does help a little bit (~1%) while the improvement of regularizing conv layers is much larger (~3%). \nAccording to your suggestion, we revised our statements in 4.1 to make the arguments softer.\n\nQ4: In the line right above remark 1, what does “assumption” refer to?\nIt does refer to spherical Gaussian input distribution. We have improved the writing for this part, it should be much clearer now.\n\nQ5: Regarding the equivalence of L2 norm of theta under Gauss-Newton metric and the Frobenius norm of input-output Jacobian, why does f_theta need to be a linear function without any non-linearity?\nThat’s because we want the input-output Jacobian to be independent of the input x (which is not true for non-linear networks). Under this assumption, we can take J_x out of the expectation (see revised Theorem 1).\n\nNote: if the (all) input x has entries ±1 (so that xx^T is an identity matrix), then the assumption of f_theta being linear is not necessary. 
In that case, it is easy to show that the Gauss-Newton norm is proportional to the expectation of squared Jacobian norm over input distribution.\n\nQ6: In remark 1, what does it mean by “Furthermore, if G is approximated by KFAC”?\nThis original claim is a little misleading, we have rewritten this part. Basically, when G is approximated by K-FAC (it's intractable to use exact G in practice), the K-FAC Gauss-Newton norm is still proportional to the squared Jacobian norm, but the constant becomes (L+1), not (L+1)**2.\n\nQ7: In the 1st line of the last paragraph of page 6, what are the general conditions under which the connection between Gauss-Newton norm and Jacobian norm does not hold true?\nIf the network is not linear, then the connection will not hold exactly. We need the assumption of the network being linear so that the input-output Jacobian J_x is independent of the input x.\n\nQ8: In Figure 5, how are the different points in the plots achieved? By varying hyper-parameters?\nSorry, we didn't explain Figure 5 clearly in the submitted version. Different points are achieved by varying optimizers and architectures (we mentioned that on page 7 of the updated version). Specifically, we trained feed-forward networks with a variety optimizers on both MNIST and CIFAR-10. For MNIST, we used simple fully-connected networks with different depth and width. For CIFAR-10, we adopted the VGG family (From VGG11 to VGG19). \n\nQ9: Missing citations\nThank you for pointing out missing citations. We added multiple citations in the latest version.\n", "This paper discusses the effect of weight decay on the training of deep network models with and without batch normalization and when using first/second order optimization methods. \n\nFirst, it is discussed how weight decay affects the learning dynamics in networks with batch normalization when trained with SGD. The dominant generalization benefit due to weight decay comes from increasing the effective learning rate of parameters on which batch normalization is applied. The authors therefore hypothesize that a larger learning rate has a regularization effect.\n\nSecond, the role of weight decay is discussed when training with second order methods without batch normalization. Under the approximation of not differentiating the curvature matrix used in second order method, it is shown that using weight decay is equivalent to adding to the loss an L2 regularization in the metric space of the curvature matrix considered. It is then shown that if the curvature matrix is the Gauss-Newton matrix, this L2 regularization (and hence the weight decay) is equivalent to the Frobenius norm of the input-output Jacobian when the input has a spherical Gaussian distribution. Similar arguments are made about KFAC with Gauss-Newton norm. The generalization benefit due to weight decay in this case is claimed based on the recent paper by Novak et al 2018 which empirically shows a strong correlation between input-output Jacobian norm and generalization error.\n\n\nFinally, the role of weight decay is discussed for second order methods when using batch normalization. In this case it is discussed for Gauss-Newton KFAC that the benefit mostly comes from the application of weight decay on the softmax layer and the effect of weight decay on other weights cancel out due to batch normalization. A comparison between Gauss-Newton KFAC and Fischer KFAC is also made. 
Thus the generalization benefit is presumably attributed to the second order properties of KFAC and a smaller norm of softmax layer weights.\n\nComments:\nThe paper is technically correct and proofs look good.\n\nI have mixed comments about this paper. I find the analysis in section 4.2 and 4.3 which discuss about the role of weight decay for second order methods (with and without batch-norm) to be novel and insightful (described above). \n\nBut on the other hand, I feel section 4.1 is more of a discussion on existing work rather than novel contribution. Most of what is said, both analytically and experimentally, is a repetition of van Laarhoven 2017, except for a few details. It would have been interesting to carefully study the effect of weight decay on the gamma parameter of batch-norm which controls the complexity of the network along with the softmax layer weights as it was left for future work in van Laarhoven 2017. But instead the authors brush it under the carpet by saying they did not find the gamma and beta parameters to have significant impact on performance, and fixed them during training. I also find the claim of section 4.1 to be a bit mis-leading because it is claimed that weight decay applied with SGD and batch normalization only has benefits due to batch-norm dynamics, and not due to complexity control even though in Fig 2 and 4, there is a noticeable difference between training without weight decay, and training with weight decay only on last layer. Furthermore, when hypothesizing the regularization effect of large learning rate in section 4.1, a large body of literature that has studied this effect has not been cited. Examples are [1], [2], [3]. \n\nI have other concerns which mainly stem from lack of clarity in writing:\n\n1. In the line right above remark 1, it is not clear what “assumption” refer to. I am guessing the distribution of the input being spherical Gaussian?\n2. In remark 1, regarding the claim about the equivalence of L2 norm of theta under Gauss-Newton metric and the Frobenius norm of input-output Jacobian, why does f_theta need to be a linear function without any non-linearity? I think the linearity part is only needed for the KFAC result.\n3. In remark 1, what does it mean by “Furthermore, if G is approximated by KFAC”? For linear f_theta, given lemma 1 and theorem 1, the claimed equivalence always holds true, no?\n4. In the 1st line of last paragraph of page 6, what are the general conditions under which the connection between Gauss-Newton norm and Jacobian norm does not hold true?\n5. In figure 5, how are the different points in the plots achieved? By varying hyper-parameters?\n\nA minor suggestion: in theorem 1 (and lemma 1), instead of assuming network has no bias, it can be said that the L2 regularization term does not have bias terms. This is more reasonable because bias terms have no effect on complexity and so it is reasonable to not apply weight decay on bias.\n\nOverall I think the paper is good *if* section 4.1 is sorted out and writing (especially in section 4.2) is improved. For these reasons, I am currently giving a score of 6, but I will increase it if my concerns are addressed.\n\n[1] a bayesian perspective on generalization and stochastic gradient descent\n[2] Train longer, generalize better: closing the generalization gap in large batch training of neural networks\n[3] Three Factors Influencing Minima in SGD", "This paper identifies and investigates three mechanisms of weight decay regularization. 
The authors consider weight decay for DNN architectures with/without BN and different types of optimization algorithms (SGD, Adam, and two versions of KFAC). The paper unravels insights on weight decay regularization effects, which cannot be explained only by traditional L2 regularization approach. This understanding is of high importance for the further development of regulations techniques for deep learning.\n\nStrengths:\n+ The authors draw connections between identified mechanisms and effects observed in prior work.\n+ The authors provide both clear theoretical analysis and adequate experimental evidence supporting identified regularization mechanisms.\n+ The paper is organized and written clearly.\n\nI cannot point out any flaws in the paper. The only recommendation I would give is to discuss in more detail possible implications of the observed results for new methods of regularization in deep learning and potential directions for future work. It would emphasize the significance of the obtained results.", "We have updated the paper and improved the writing a lot. In particular, we rewrote the section 4.1 and 4.2 as requested by AnonReview1 and AnonReview2." ]
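The “effective learning rate” referenced in the Q3 response above follows from the scale invariance that batch normalization induces; the LaTeX sketch below gives the standard derivation of this argument (as in van Laarhoven, 2017). It is a sketch of the general argument only, and the exact form of the paper's Eq. 9 may differ.

```latex
% BN makes the loss invariant to rescaling a layer's weights:
\[
  L(\alpha w) = L(w) \;\; \forall \alpha > 0
  \quad\Longrightarrow\quad
  \nabla L(\alpha w) = \tfrac{1}{\alpha}\,\nabla L(w).
\]
% Hence an SGD step of size \eta on w corresponds, for the normalized
% direction \hat{w} = w / \lVert w \rVert_2, to a step with
\[
  \eta_{\mathrm{eff}} \propto \frac{\eta}{\lVert w \rVert_2^2},
\]
% so weight decay, by shrinking \lVert w \rVert_2, raises the effective
% learning rate rather than constraining the represented function.
```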
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 4, -1 ]
[ "Skldcfu0hm", "rJx5uduA2m", "iclr_2019_B1lz-3Rct7", "HklYSxUoTX", "B1eBMzYupm", "iclr_2019_B1lz-3Rct7", "B1g5xlXqnm", "B1eS7cTdhm", "rJx3XFFv2Q", "iclr_2019_B1lz-3Rct7", "iclr_2019_B1lz-3Rct7", "iclr_2019_B1lz-3Rct7" ]
iclr_2019_B1xJAsA5F7
Learning Multimodal Graph-to-Graph Translation for Molecule Optimization
We view molecule optimization as a graph-to-graph translation problem. The goal is to learn to map from one molecular graph to another with better properties based on an available corpus of paired molecules. Since molecules can be optimized in different ways, there are multiple viable translations for each input graph. A key challenge is therefore to model diverse translation outputs. Our primary contributions include a junction tree encoder-decoder for learning diverse graph translations along with a novel adversarial training method for aligning distributions of molecules. Diverse output distributions in our model are explicitly realized by low-dimensional latent vectors that modulate the translation process. We evaluate our model on multiple molecule optimization tasks and show that our model outperforms previous state-of-the-art baselines by a significant margin.
accepted-poster-papers
The revisions made by the authors convinced the reviewers to all recommend accepting this paper. Therefore, I am recommending acceptance as well. I believe the revisions were important to make since I concur with several points in the initial reviews about additional baselines. It is all too easy to add confusion to the literature by not including enough experiments.
test
[ "HylNHt4h0X", "r1gHailcAQ", "SyeFxjBc37", "B1lysCy5CX", "r1xXOkQO07", "Syl_q2s_67", "SyghQTiupQ", "H1xuZ76rTm", "Skl19Q6Hpm", "SkgWrRs_Tm", "Hkl9bvO52Q", "HkgKBKlq2m" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your insightful comments again! They are very helpful!", "Thank you for updating the paper. I've updated the score as well.", "Update:\nThe score has been updated to reflect the authors' great efforts in improving the manuscript. This reviewer would suggest to accept the paper now.\n\n\nOld Review Below:\n\nThe paper describes a graph-to-graph translation model for molecule optimization inspired from matched molecular pair analysis, which is an established approach for optimizing the properties of molecules. The model extends a chemistry-specific variational autoencoder architecture, and is assessed on a set of three benchmark tasks.\n\n\nWhile the idea of manuscript is interesting and promising for bioinformatics, there are several outstanding problems, which have to be addressed before it can be considered to be an acceptable submission. This referee is willing to adjust their rating if the raised points are addressed. Overall, the paper might also be more suited at a domain-specific bioinformatics conference.\n\n\nMost importantly, the paper makes several claims that are currently not backed up by experiments and/or data. \n\nFirst, the authors claim that MMPs “only covers the most simple and common transformation patterns”. This is not correct, since these MMP patterns can be as complex as desired. Also, it is claimed that the presented model is able to “learn far more complex transformations than hard-coded rules”. The authors will need to provide compelling evidence to back up these claims. At least, a comparison with a traditional MMPA method needs to be performed, and added as a baseline. Also, it has to be kept in mind that the reason MMPA was introduced was to provide an easily interpretable method, which performs only local transformations at one part of the molecule. “Far more complex transformations” may thus not be desirable in the context of MMPA. Can the authors comment on that?\n\nSecond, the authors state that they “sidestep” the problem of non-generalizing property predictors in reinforcement learning, by “unifying graph generation and property estimation in one model”. How does the authors’ model not suffer from the same problem? Can they provide evidence that their model is better in property estimation than other models?\n\n\nIn the first benchmark (logP) the GCPN baseline is shown, but in the second benchmark table, the GCPN baseline is missing. Why? The GCPN baseline will need to be added there. Can the authors also comment on how they ensure the comparison to the GPCN and VSeq2Seq is fair? Also, can the authors comment on why they think the penalized logP task is a good benchmark?\n\nAlso, the authors write that Jin et al ICML 2018 (JTVAE) is a state of the model. However, also Liu et al NIPS 2018 (CGVAE) state that their model is state of the art. Unfortunately, both JTVAE and CGVAE were never compared against the strongest literature method so far, by Popova et al, which was evaluated on a much more challenging set of tasks than JT-VAE and CGVAE. The authors cite this paper but do not compare against it, which should to be rectified. This referee understands it is more compelling to invent new models, but currently, the literature of generative models for molecules is in a state of anarchy due to lack of solid comparison studies, which is not doing the community a great service.\n\n\nFurthermore, the training details are not described in enough detail. \nHow exactly are the pairs selected? Where do the properties for the molecules come from? 
Were they calculated using the logP, QED and DRD2 models? How many molecules are used in total in each of these tasks?\n", "Thank you very much for your insightful comments. We have removed claims about practical drug discovery, as well as claims that are not well supported by our current manuscript. For instance, we have modified the related work section (see point 3) and removed statements in the ablation study paragraph in the appendix (see point 5). We also updated statements in the experiment section since we have added MMPA and GCPN baselines.\n\n1a) How did the authors optimize the hyperparameters of the mmpdb algorithm?\nThe current mmpdb program is very expensive to run. It takes about 4-5 hours to perform MMPA on 1000 molecules due to large number of extracted rules. Therefore, we performed limited amount of hyperparameter tuning on the validation set to find good hyperparameters. Moreover, some hyperparameters (e.g., the size of environment fingerprints) are hard-coded in the source code, and we couldn’t investigate how these hyperparameters will affect the model performance.\n\n1b) Why the authors need to translate each molecule 50 times? MMPA is deterministic, so one should just need to translate once and then pick the top 50 translated/transformed molecules with the highest expected improvement ...\nWe did exactly what you describe here. Each test set molecule is translated “once”, but in this “one-time” translation, multiple matching transformation rules are applied to this compound. And we simply picked the top 50 transformed molecules within the similarity constraint. We defined “one” translation in MMPA as applying “one” transformation rule.\n\n2) Popova’s work, GCPN or any other comparable RL framework can be applied in straightforward way to lead optimization as well: One would just plugin a reward function of f(mol) = min( sim(startmol, mol),threshold ) + Property(mol) [...] Wouldn’t this be even more flexible & general compared to the method presented here?\nWe agree that RL framework could be extended to our conditional translation scenario. However, adding similarity into the reward itself is not enough, unless you also feed the “startmol” into the RL model so that it knows what the starting molecule looks like. Otherwise the RL model will get confused since the reward function will keep changing as the starting molecule changes during training. Therefore a successful extension of this algorithm would be a contribution in its own right.\n\n3) Re II 2: This reviewer remains unconvinced. This paragraph needs to be fixed in the manuscript because also implicit estimation is estimation.\nWe suppose that you are referring to our response Part II 3 (not II 2, which is about MMPA instead of implicit property estimation). We agree that current manuscript does not provide enough evidence regarding this point. Therefore we have changed the paragraph in related work section. We removed statements involving “suboptimal property estimator”.\n\n4) The authors have scored all 250k/350k molecules using logP/QED/the DRD SVM, which are exactly the “suboptimal property predictors” that BayesOpt/RL would use for scoring, and then created pairs from them? Doesn’t this imply the same suboptimal estimation is now baked into the translation model, but implicitly?\nWe agree that the suboptimal property estimator can implicitly affect our model, given the way we created the training data. Therefore, we have removed these claims (see point 3). 
However, our graph-to-graph translation model can be trained on molecular pairs constructed based on their measured properties without any property estimation models. We couldn’t do this experiment as such datasets are not publicly available, but they often exist in pharma companies. In contrast, prior models require property predictor to be an integral part of the model.\n\n5) The authors state that “In a real-world drug discovery setting, there is usually a budget on how many drug candidates can be tested in the laboratory […] This is beneficial as it requires fewer experiments in the real scenario.”, but then require 250k/350k samples to train the model. Isn’t this a contradiction?\nWe have removed these sentences as they can be misleading and they are irrelevant to the ablation comparison. Please note that the goal of this ablation study is to investigate the importance of the adversarial learning component.\nRegarding your “contradiction” concern, we used 250k/350k samples as they were readily available. The question of data efficiency applies to all neural models, including RL models for drug discovery and neural models for property prediction. To investigate this, we trained VJTNN on the logP task (delta=0.4) using only 3k molecular pairs, as compared to 120k pairs extracted from the full dataset. The test set result is 1.26 +/- 1.53 (full dataset performance was 3.3 +/- 1.8). Indeed, learning graph translation is challenging under low-resource scenario, and we leave this issue for future work.", "First, thanks a lot for the authors efforts, this is much appreciated! \nNevertheless, this reviewer thinks the paper is still overselling the results, and hides limitations, which is unfortunate and unnecessary, since the modeling idea is actually promising.\n\n\nComments:\n\nIn terms of modeling, there is indeed a distinction between mapping from molecules to better molecules over other generative models, e.g. variational autoencoders or graph-convolutional policy networks.\n\nHowever, in practice, there is no distinction, since *in effect* both models perform the optimization of molecular properties with respect to the molecules. In fact, the same scoring functions that are used in this paper here could be used by a VAE+Bayesian optimization or an RL model as the reward, and are applied in practice to hit/lead optimization as well as library generation. The former application is even more frequent in practice than the latter.\n\nComments to the authors comments:\n\nRe: I 1) \nThanks for running the mmpdb baseline! A few questions on that:\n\na) How did the authors optimize the hyperparameters of the mmpdb algorithm?\nb) This reviewer does not fully understand why the authors need to translate each molecule 50 times? MMPA is determistic, so one should just need to translate once and then pick the top 50 translated/transformed molecules with the highest expected improvement that are within the similarity constraint. Can the authors comment on that in more detail? \n\n\nRe I 2: Thank you for running the GCPN baseline!\nPlease note that Popova’s work, GCPN or any other comparable RL framework can be applied in straightforward way to lead optimization as well: One would just plugin a reward function of f(mol) = min( sim(startmol, mol),threshold ) + Property(mol), and wouldn’t actually have to worry about pretraining, Wouldn’t this be even more flexible & general compared to the method presented here?\n\nRe II 2: This reviewer remains unconvinced. 
This paragraph needs to be fixed in the manuscript because also implicit estimation is estimation.\n\nRe II 6:\nSo, if this reviewer understands correctly, the authors have scored all 250k/350k molecules using logP/QED/the DRD SVM, which are exactly the “suboptimal property predictors” that BayesOpt/RL would use for scoring, and then created pairs from them? Doesn’t this imply the same suboptimal estimation is now baked into the translation model, but implicitly?\n\n\nAlso in the (commendable) ablation study in the appendix, the authors state that “In a real-world drug discovery setting, there is usually a budget on how many drug candidates can be tested in the laboratory, as biological experiments are time-consuming in general. […] This is beneficial as it requires fewer experiments in the real scenario.”, but then require 250k/350k samples to train the model. Isn’t this a contradiction?\n\n\nOverall:\nSo to be crystal-clear: The authors will need to remove any claims to practical drug discovery, and position their paper more realistically, then this reviewer will recommend acceptance. But in the current form, there are still too many unsupported and misleading claims.\n", "Thank you very much for your insightful comments. Our response to the issues you mentioned is the following:\n\n1) Please provide an explanation of why using a larger value for delta gives worse performance than a smaller value.\nA larger delta implies a tighter similarity constraint. For instance, setting delta to 0.6 means the generated compounds Y have to be very similar to the input molecule X (sim(X,Y) > 0.6). When delta decreases to 0.4, the generated structures are allowed to deviate more from the starting point X (sim(X,Y) > 0.4). Therefore, one would naturally expect the model to perform better (find higher scoring molecules) when delta is smaller since the structures can be chosen from a larger set. \n\n2) Diversity could be influenced by the cardinality of the sample. Please discuss why diversity is (not) biased versus larger sets.\nWe agree that the diversity depends on the sample size. Therefore, all the models are evaluated with the same sample size (K=50) for fair comparison. That is, for each molecule in the test set, we randomly sample 50 times from each model to compute the resulting diversity score.\n\n3) Tree and graph encoding: asynchronous update implies that T should be a multiple of the diameter of the input graph to guarantee a proper propagation of information across the graph.\nWe agree that a number of iterations (T) is required for proper propagation of information across the input graph. However, T does not need to be larger than the diameter since we adopted an attention mechanism in the decoder. It can dynamically read the information across the input graph in different decoding steps. In fact, a large T (e.g., the diameter) may potentially lead to overfitting.\n\n4) Clarification of tree decoding step (Section 3.2)\nFirst, the tree decoding process stops when it choose to backtrack at the root node. Second, we agree that this probability should depend on the number of nodes having been generated. This is implicitly captured by the neural message passing procedure. As noted in Eq. (4), the model makes this decision (expanding a new node or not) based on all the incoming messages at the current node. The messages carry information about the current (partial) tree structure, including potentially the number of nodes generated so far though not explicitly. 
\n\n5) Explanation of graph decoding step (Section 3.3)\nWe added Figure 2 to illustrate why the graph decoding step is not deterministic and how one junction tree can be decoded into different molecular graphs. Regarding the likelihood of ground truth subgraphs, we applied teacher forcing, i.e., we feed the graph decoder with ground truth junction trees as input. Section 3.3 has been updated correspondingly.", "Thank you very much for your insightful comments. We want to provide more explanations on the probabilistic modeling of different involved components.\n\n1) Explicit probabilistic modeling of junction tree encoder-decoder (Section 3).\nPrior work (Jin et al. 2018) found that it is beneficial to adopt a coarse-to-fine approach to generate molecular graphs: first generate the backbone structure (i.e., junction tree T) and then assemble the sub-graphs in the tree into a complete molecular graph Y. Thus\n p(Y | X) = \\sum_T p(Y | T, X) p(T | X)\nwhere p(Y | T, X) is the graph decoder and p(T | X) is the tree decoder. As the junction tree T of any graph is constructed through a deterministic tree decomposition algorithm, T does not function as a latent variable during training but is rather an intermediate object that can be predicted via supervised learning. Therefore, \n p(Y | X) \\approx p(Y | T_y, X) * p(T_y | X)\nwhere T_y is the junction tree underlying the target graph Y.\n\nThe tree decoder generates a tree in an autoregressive manner, based on a specific sequentialization of the tree structure. A tree T is laid out as a sequence of edges {(i_1, j_1), …, (i_m, j_m)} visited in the depth-first traversal over the tree. The probability of generating T is thus\n p(T | X) = \\prod_t p( (i_t, j_t) | (i_1, j_1), …, (i_t-1, j_t-1), X )\nwhere j_t always equals i_{t+1}. Probability of (i_t, j_t) depends on two factors: 1) whether j_t is a new node; 2) If j_t is a new node, what is its label; These two factors are modeled by the topological predictor (Eq. 4-6) and the label predictor (Eq. 8-9). The message passing procedure (Eq. 3) embeds the current partial tree realized by {(i_1, j_1), …, (i_t-1, j_t-1)} into a continuous representation. Beyond the above architecture, in this paper we introduced an attention mechanism to capture how the decoded tree unravels step-by-step in an input graph X dependent manner. \n\nThe graph decoder models the conditional probability p(Y | T_y, X). This is a structured prediction task since Y is a graph. The variables in this structured prediction problem are node assembling decisions between neighboring nodes in the tree. For efficiency reasons, the assembling decisions are solved locally, starting from the root and its direct neighbors. In other words, p(Y | T_y, X) is a product of probabilities of choosing the right graph attachments with each node’s neighbors, resulting in Eq. (10) (after taking log).\n\n2) Probabilistic modeling of multi-modal translation model (Section 4) \nIn this paper, we aim to learn diverse multi-modal mappings between two molecular domains, as there are many different ways to improve a given molecule. This diversity is introduced via latent variables z:\n p(Y | X) = \\int_z p(Y | X, z) p(z) dz\nwhere prior p(z) models diverse strategies of improvement, independent of X, and is taken to be a standard Gaussian distribution. The overall model resembles a conditional variational autoencoder, learnable through reparameterization (Section 4.1). 
The approximate posterior Q(z | Y) only depends on the target Y so as to force z to capture resulting type of molecule, inferable from Y alone. \n\nThe proposed adversarial training technique (Section 4.2) is an additional regularization trying to discourage the model from generating undesirable outputs (e.g. molecules outside of the defined target domain). As a side note, p(Y | X, z) can be expanded as \n p(Y | X, z) = p(Y | T_y, X, z) p(T_y | X, z)\nwhere latent variable z is concatenated with the encoded representation of X (Eq. 11).", "Thank you very much for your insightful comments. Regarding your other comments and questions, our response is the following:\n\n1) “The authors claim that MMPs “only covers the most simple and common transformation patterns”. This is not correct, since these MMP patterns can be as complex as desired.”\nWe agree that MMP patterns can be as complex as desired. However, allowing the patterns to be arbitrarily complex will result in a huge number of transformation rules. For instance, we have extracted 12 million rules in total on the logP and QED tasks when no constraints are imposed. Therefore, we have updated this claim in the paper with the following statement: “MMPA's main drawback is that large numbers of rules have to be realized (e.g. millions) to cover all the complex transformation patterns.”\n\n2) “the reason MMPA was introduced was to provide an easily interpretable method, which performs only local transformations at one part of the molecule. ‘Far more complex transformations’ may thus not be desirable in the context of MMPA.”\nYes, we agree that there is always a trade-off between simple and understandable rules vs performance, and that the same trade-off is present in other machine learning applications (e.g., shallow decision trees vs neural networks). Our focus in this paper is on demonstrating the performance gains we can obtain by reformulating the task as a translation problem. Deriving interpretable explanations for the predictions is clearly an important future direction, but is orthogonal to our current effort.\n\n3) “The authors state that they “sidestep” the problem of non-generalizing property predictors in reinforcement learning … How does the authors’ model not suffer from the same problem? Can they provide evidence that their model is better in property estimation than other models?”\nWe want to clarify that our model does not explicitly estimate the properties. As a result, we can only provide indirect evidence showing that our model can nevertheless outperform other models in mapping precursor molecules into the target set of molecules with better properties. \n\n4) “Can the authors also comment on how they ensure the comparison to the GCPN and VSeq2Seq is fair?”\nWhen comparing to VSeq2Seq, we ensure that all models have about the same number of parameters (3.8~3.9 million), trained on the same dataset with the same optimizer and the same number of epochs. Both models are evaluated with K=50 translation attempts for each test compound.\nRegarding GCPN, their exact setup is not provided. As described in their paper [4], GCPN was trained in an environment whose initial state is one of the test set molecule of the logP task. They kept all the molecules generated during training and reported the molecule with the best logP improvement. 
We think this may bring more advantage to GCPN in our comparison, as our models do not have access to the test set.\n\n5) “Can the authors comment on why they think the penalized logP task is a good benchmark?”\nWe evaluated on this task because some prior work (e.g. JT-VAE, GCPN) has been tested on this benchmark, and their results are readily available for comparison. Indeed, this benchmark itself is not comprehensive enough. We therefore tested on two more tasks (QED and DRD2) aiming to provide a more thorough evaluation.\n\n6) “How exactly are the pairs selected? Where do the properties for the molecules come from? Were they calculated using the logP, QED and DRD2 models? How many molecules are used …?”\nThose details have been discussed in the Appendix B. We updated the relevant paragraphs to make it more clear. To summarize, logP and QED scores are calculated with RDKit built-in functions. For DRD2 activity prediction, we directly used the pre-trained model in Olivecrona et al. [3].\nOn the QED and DRD2 tasks, a molecular pair (X,Y) is selected if the Tanimoto similarity sim(X,Y) >= 0.4 and both X and Y fall into the source and target property range. On the logP task, we select molecular pairs when similarity sim(X,Y) >= delta and property improvement is greater than 0.5 (if delta=0.6) and 2.5 (if delta=0.4). In total 250K molecules are used for constructing the training pairs in the logP and QED tasks, and 350K molecules in the DRD2 task.\n\nReferences\n[1] A. Dalke, J. Hert, C. Kramer. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets. J. Chem. Inf. Model., 2018, 58 (5), pp 902–910.\n[2] M. Popova, O. Isayev, and A. Tropsha. Deep reinforcement learning for de novo drug design. Science advances, 4(7):eaap7885, 2018.\n[3] M. Olivecrona, T. Blaschke, O. Engkvist, and H. Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):48, 2017.\n[4] J. You, B. Liu, R. Ying, V. Pande, and J. Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. arXiv preprint arXiv:1806.02473, 2018", "Thank you very much for your insightful comments. We’d like to clarify first that our model is a conditional graph-to-graph translation model which maps a given precursor compound to another with more desirable properties. Our translation approach is therefore NOT equivalent to a generative model over molecular structures (i.e., for chemical library design). This conditional translation model is useful and important for hit/lead compound optimization.\n\nIn response to your suggestions, we added two additional experiments:\n1) MMPA baseline: We utilized the open source tool “mmpdb” [1] to perform MMPA. For each task, we constructed a database of transformation rules extracted from the ZINC and Olivecrona et al. [3]’s dataset. Same as our methods, each test set molecule is translated 50 times using the matching rules found in the database. When there are more than 50 matching rules, we choose those having higher average property improvement in the database. This statistic is calculated during the database construction. More details can be found in the Appendix B.\n\nThe results are shown in Tables 1 and 2 in the updated paper. On the QED and DRD2 tasks, our model outperforms MMPA with significant margin in terms of translation success rate (56.9% vs 20.8% on QED and 81.0% vs 35.6% on DRD2). 
On the logP task, our model also outperforms MMPA in terms of average property improvement (3.37 vs 2.00 when delta=0.4 and 1.53 vs 1.41 when delta=0.6).\n\n2) GCPN baseline: We used You et al [4]’s open source implementation to train GCPN on the QED and DRD2 tasks. As stated in their paper [4], GCPN was trained in an environment whose initial state is one of the test set molecules. They kept all the molecules generated during training and reported the molecule with the best property improvement. For consistency, we adopted the same strategy in training and evaluation of GCPN (i.e., training on the test set of QED and DRD2). The performance is reported in Table 2. Our model greatly outperforms GCPN (56.9% vs 9.4% on QED and 81.0% vs 4.4% on DRD2).\n\nRegarding Popova et al.’s method [2], we have carefully read the paper and studied its open-sourced code. The model described in [2] is not directly applicable to our setting as it targets chemical library design while our focus is on lead optimization starting from a given precursor compound. Their model architecture would have to be modified so as to take a precursor compound as an input to be optimized / translated. In fact, Popova et al. list this task as a future work.\n\nDue to limited length, our response to your other questions is posted in another post.\n\nReferences\n[1] A. Dalke, J. Hert, C. Kramer. mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets. J. Chem. Inf. Model., 2018, 58 (5), pp 902–910.\n[2] M. Popova, O. Isayev, and A. Tropsha. Deep reinforcement learning for de novo drug design. Science advances, 4(7):eaap7885, 2018.\n[3] M. Olivecrona, T. Blaschke, O. Engkvist, and H. Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):48, 2017.\n[4] J. You, B. Liu, R. Ying, V. Pande, and J. Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. arXiv preprint arXiv:1806.02473, 2018", "Thank you very much for your insightful comments. \n\n1) Why VSeq2Seq is better than JT-VAE and GCPN?\nThe main reason is that VSeq2Seq is trained with direct translation pairs through supervised learning, while JT-VAE and GCPN have to learn to discover these pairs in a weakly supervised manner. For instance, GCPN iteratively modifies a given molecule to maximize the predicted property score, where the translation pairs are discovered through reinforcement learning. JT-VAE optimizes a molecule by first mapping it into its latent representation and then performing gradient ascent in the latent space. In this case, translation pairs are discovered through the gradient signal given by the property predictor, which is trained on molecules with labeled properties. As the models are evaluated by translation quality, training the model directly with translation pairs is advantageous. \n\n2) Suppose we keep translating the molecule X1 -> X2 -> X3 ... using the learned translation model, would the model still get improvement after X2? When would it get maxed out?\nOn the logP task, the model may still get improvements after X2, but we suspect this process will get maxed out after several steps because in general it is harder to optimize a molecule with high property scores. The QED and DRD2 tasks are different from logP task, as the target domain now becomes a closed set defined by the property range. 
As long as X2 belongs to the target domain (e.g., QED >= 0.9, DRD2 >= 0.5), this process will get maxed out since the model is trained only to improve molecules outside of the target domain.\n\n3) If we train with ‘path’ translation (i.e., train with improvement path with variable length), instead of just the pair translation, would that be helpful? \nIn general, it is harder to collect ‘path’ translation data than translation pairs due to data sparsity. For instance, to find a translation path X1 -> X2 -> X3, we need (X1,X2) and (X2,X3) to be valid translation pairs (i.e., both pairs satisfying property improvement and similarity constraints). Nonetheless, we believe that training the model with path translation will be helpful for global optimization -- finding molecules with the best property scores in the entire molecular space.", "This paper proposed an extension of JT-VAE [1] into the graph to graph translation scenario. To help make the translation model predicting diverse and valid outcomes, the author added the latent variable to capture the multi-modality, and an adversarial regularization in the latent space. Experiment on molecule translation tasks show significant improvement over existing methods.\n\nThe paper is well written. The author explains the GNN, JT-VAE and GAN in a very organized way. The idea of modeling the molecule optimization as translation problem is interesting, and sounds more promising (and could be easier) than finding promising molecule from scratch. \n\nTechnically I think it is reasonable to use latent variable model to handle the multi-modality. Using GAN to align the distribution is also a well adapted method recently. Thus overall the method is not too surprising to me, but the paper executes it nicely. Given the significant empirical improvement, I think this paper has made a valid contribution to the area.\n\nRegarding the results in Table 1, I’m curious why the VSeq2Seq is better than JT-VAE and GCPN (given the latter two are the current state-of-the-art)? \n\nAnother thing I’m curious about is the ‘stacking’ of this translation model. Suppose we keep translating the molecule X1 -> X2 -> X3 ... using the learned translation model, would the model still gets improvement after X2? When would it get maxed out?\nOr if we train with ‘path’ translation (i.e., train with improvement path with variable length), instead of just the pair translation, would that be helpful? I’m not asking for more experiments, but some discussion might be useful.\n\n[1] Jin et.al, Junction tree variational autoencoder for molecular graph generation, ICML 2018\n", "As a reviewer I am expert in learning in structured data domains. \nThe paper proposes a quite complex system, involving many different choices and components, for obtaining chemical compounds with improved properties starting from a given corpora. \nOverall presentation is good, although some details/explanations/motivations are missing. I guess this was due to the need to keep the description of a quite complex system in the given space limit. Such details/explanations/motivations could, however, have been inserted in the appendix. As an example, let consider the description of the decoding of the junction tree. In that section, it is not explained when the decoding process stops. My understanding is that this is when, being in the root node, the choice is to go back to the parent (that does not exist). 
In the same section, it is not explicitly discussed that the probability to select between adding a node or going back to the parent should have a different distribution according to \"how many\" nodes have been generated before, i.e. we do not want to have a high probability to \"go back\" at the beginning of the decoding, while I guess it is desirable that such probability increases proportionally with the number of generated nodes. This leads to an issue that I personally think is important: the paper does lack an explicit probabilistic modelling of the different involved components, which may help for a better understanding of all the assumptions made in the construction of the proposed system. \nThe complexity of the proposed system is actually an issue since the author(s) do not attempt (except for the presence or absence of the adversarial scaffold regularization and the number of trials in appendix) an analysis of the influence of the different components (and corresponding hyper-parameters). \nReference to previous relevant work seems to be complete.\nI think the paper is relevant for ICLR (although there is no explicit analysis of the obtained hidden representations) and of interest for a good portion of attendees.\n\nMinor issues:\n- Tree and Graph Encoding: asynchronous update implies that T should be a multiple of the diameter of the input graph to guarantee a proper propagation of information across the graph. A discussion about that would be needed.\n- eq.(6): \\mathbb{u}^d is not defined.\n- Section 3.3:\n - first paragraph is not clear. An example and/or figure is needed to understand the argument, which is related to the presence of cycles.\n - the definition of f(G_i) involves \\mathbb{x}_u. I guess they should be \\mathbb{x}_u^G.\n - not clear how the log-likelihood of ground truth subgraphs is computed given that the predicted junction tree, especially at the beginning of training, may be way different from the correct one. Moreover, what is the assumed bias of this choice?\n- Table I: please provide an explanation of why using a larger value for \\delta provides worse performance than a smaller value. From an optimisation point of view it should provide performance at least as good. This is a clear indication that the used procedure is suboptimal.\n- diversity could be influenced by the cardinality of the sample. Is this false? Please discuss why diversity is (not) biased versus larger sets." ]
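For readability, here is the junction-tree factorization that the authors spell out in plain text in their replies above, typeset in LaTeX. This is only a restatement of those equations, with T_Y denoting the junction tree underlying the target graph Y and z the latent code of the multi-modal translation model:

```latex
% Equations as stated in the authors' replies, typeset for readability.
\begin{align}
  p(Y \mid X) &\approx p\big(Y \mid T_Y, X\big)\, p\big(T_Y \mid X\big), \\
  p(T \mid X) &= \prod_{t} p\big((i_t, j_t) \,\big|\, (i_1, j_1), \dots, (i_{t-1}, j_{t-1}), X\big), \\
  p(Y \mid X) &= \int_z p(Y \mid X, z)\, p(z)\, dz, \qquad p(z) = \mathcal{N}(0, I).
\end{align}
```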
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "r1gHailcAQ", "B1lysCy5CX", "iclr_2019_B1xJAsA5F7", "r1xXOkQO07", "H1xuZ76rTm", "HkgKBKlq2m", "HkgKBKlq2m", "SyeFxjBc37", "SyeFxjBc37", "Hkl9bvO52Q", "iclr_2019_B1xJAsA5F7", "iclr_2019_B1xJAsA5F7" ]
iclr_2019_B1xVTjCqKQ
A Data-Driven and Distributed Approach to Sparse Signal Representation and Recovery
In this paper, we focus on two challenges which offset the promise of sparse signal representation, sensing, and recovery. First, real-world signals can seldom be described as perfectly sparse vectors in a known basis, and traditionally used random measurement schemes are seldom optimal for sensing them. Second, existing signal recovery algorithms are usually not fast enough to make them applicable to real-time problems. In this paper, we address these two challenges by presenting a novel framework based on deep learning. For the first challenge, we cast the problem of finding informative measurements by using a maximum likelihood (ML) formulation and show how we can build a data-driven dimensionality reduction protocol for sensing signals using convolutional architectures. For the second challenge, we discuss and analyze a novel parallelization scheme and show it significantly speeds up the signal recovery process. We demonstrate the significant improvement our method obtains over competing methods through a series of experiments.
accepted-poster-papers
This paper studies deep convolutional architectures to perform compressive sensing of natural images, demonstrating improved empirical performance with an efficient pipeline. Reviewers reached a consensus that this is an interesting contribution that advances data-driven methods for compressed sensing, despite some doubts about the experimental setup and the scope of the theoretical insights. We thus recommend acceptance as a poster.
test
[ "SygHVHWiyV", "Byl6y31r37", "BkgimTEqRm", "HklY22V9RQ", "B1gQwnV9R7", "Bylk_jNcC7", "S1lGNoV90m", "HkgjfHOahX", "SJgsRkfb3m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I think the authors have addressed all my comments and I recommend acceptance. ", "Quality & Clarity:\nThis is a nice paper with clear explanations and justifications. The experiments seem a little shakey.\n\nOriginality & Significance:\nI'm personally not familiar enough to say the theoretical work is original, but it is presented as so. However it seems significant. The numerical results do not seem extremely significant, but to be fair I'm not familiar with state of the art nearest neighbor results ie Fig 3.\n\nPros:\nI like that you don't take much for granted. E.g. you justify using convolutional net in 2.1, and answered multiple of my questions before I could type them (e.g. why didn't you include nonlinearities between convolutions, why bother with cascaded convolutions, and what you mean by near-optimal).\n\nCons:\nThe visual comparisons in Figure 4 are difficult to see. DLAMP appears to be over-smoothing but in general it's hard to compare to low-ish resolution noisy-looking textures. I strongly recommend using a test image with a clear texture to illustrate your point (eg the famous natural test image that has on the side a tablecloth with zig-zag lines)\n\nThe horizontal error bars are obfuscated by the lines between markers in Fig 3a.\n\nI don't understand Fig 3a. You are varying M, which is on the Y-axis, and observing epsilon, on the X-axis?\n\nQuestions:\nCan you state what is novel about the discussion in the \"Theoretical Insights\" subsection of 2.1? I guess this is described in your abstract as \"we cast the problem ... by using a maximum likelihood protocol...\" but your contribution could be made more explicit. For example \"We show that by jointly optimizing phi and lambda (sensing and recovery), we are maximizing the lower bound of mutual information between reconstructions (X) and samples (Y)\" (that is my understanding of the section)\n\nWhy don't you use the same M for all methods in the Figure 3 experiments? ie why did you use a different M for numax/random versus deepSSRR/DCN?\n\nWhy do you choose 20-layers for the denoiser? Seems deep...\n\nThe last part of the last sentence of the 2nd paragraph of section 3.1 should be a complete sentence \"though, with more number of parameters\". Does that mean that the DCN has more parameters than the DeepSSRR?\n\nI am willing to change score based on the response\n\n******************\nUpdate after author response:\nThanks for the clear response and Figure 3, and nice paper. My score is updated.\nPS: I still think that the (tiny) error bars are obfuscated because the line connecting them is the same thickness and color.", "\nComment: Previous works have been proposed to ...\n\nResponse: As discussed in \"Compressed Sensing MRI\" (available at https://ieeexplore.ieee.org/abstract/document/4472246), in CS MRI, the measurement matrix is basically a subsampled Fourier matrix. On the other hand, in algorithms that reconstruct signals block-by-block (similar to Kulkarni et al. 2016 and Shi et al. 2017), the measurement matrix has a block-diagonal structure. Now since the entries of a subsampled Fourier matrix cover a range of frequencies, it cannot have a block-diagonal structure. This means that methods that reconstruct signals block-by-block cannot use a subsampled Fourier matrix as their measurement matrix. Hence, these methods are not suitable for the MRI application.\n\nAlso, please note that the reconstruction we obtain from algorithms that reconstruct signals block-by-block have clear blocky artifacts. 
Because of the blocky artifacts, these algorithms typically include an additional denoiser to suppress the artifacts. In order to have a fair comparison between our approach and these previous works, we should also train a separate denoiser at the end of our architecture to improve its final reconstruction. However, this was beyond the scope of our work. Fortunately, our method achieves the state-of-the-art results in terms of compressive image recovery quality even without an extra denoiser.\n\nComment: It is not clear where the maximum deviation from isometry in Algorithm 1 is discussed since the MSE is used as a loss function.\n\nResponse: Please note that Algorithm 1 is related to the Section 2.2 titled \"Applications of Low-Dimensional Embedding\". In this section, we have discussed how we can use \"the encoder\" part of our approach to build near-isometric embeddings. Therefore, the loss function that we have used in this section (which only uses the encoder for sensing) is the maximum deviation from isometry as you can see in Algorithm 1 (one line above the first 'end for'). However, when we use both encoder and decoder (for both sensing and recovery), we use MSE as the loss function. \n \nComment: Authors provided theoretical insights for the proposed algorithm ...\n\nResponse: Note that our theoretical insights are just that; in particular, we do not intend them to be interpreted as rigorous theoretical guarantees. Indeed establishing such guarantees is an exciting avenue for future work. With this in mind, we have argued why learning both Phi and Lambda (sensing and recovery) simultaneously, which is exactly what our approach does, is useful for compressive sensing (CS). We have characterized good measurements as the ones that give us back the original signals with the highest probability. We have shown that this problem is equivalent to maximizing the mutual information between the measurements and the original signals. Then we have argued that, since in practice our training data is limited and we do not know the exact distribution of data, we are not able to maximize this mutual information. Instead, we can assume a parametric distribution on the reconstruction error and show that jointly learning Phi and Lambda (i.e., sensing and recovery) gives us a lower-bound on the mutual information we wanted to maximize. Although we do not have rigorous theoretical guarantees for our approach, we have demonstrated that in practice it works very well. \n\nComment: One of the contributions in this paper is the speed, so the results on the speed should be put in the main paper.\n\nResponse: We have added a new section in the Appendix to discuss the computational benefits of our approach. As shown in Table 3 of our updated manuscript, our method is significantly faster than both DAMP and LDAMP methods.", "\nComment: The visual comparisons in Figure 4 ...\n\nResponse: We have added a new visual comparison (Figure 6) in our updated manuscript that presents the reconstruction of the 512x512 Mandrill test image with sampling ratio= 0.25. In this case, LDAMP slightly outperforms our algorithm. However, in order to compare the reconstruction of textures, we have explicitly compared the reconstruction of Mandrill's nose and cheeks. Figure 6(b) shows that, in this case, our algorithm outperforms LDAMP by 0.9dB and has a better visual quality and fewer artifacts (e.g. 
less over-smoothing).\n\nComment: The horizontal error bars ...\n\nResponse: Please note that we have used horizontal error bars only for the random embedding, which does not have any marker. If you zoom in on the plot, these horizontal error bars are well concentrated around their mean values. \n\nComment: I don't understand Fig 3a ...\n\nResponse: First of all, there is an important difference between NuMax and the other algorithms in Figure 3a. In Algorithm 1 of the NuMax paper (http://home.engineering.iastate.edu/~chinmay/files/papers/numax_tsp.pdf), the parameter \\epsilon (which is called \\delta in that paper) is an input to the algorithm. Given a value for \\epsilon, NuMax determines the appropriate dimension of the embedding (i.e., M). However, for other approaches (random/DeepSSRR/DCN) we do not give an \\epsilon to the algorithm. Instead, we pick an embedding size (i.e., M), construct an embedding of that size, and then measure the \\epsilon. In other words, for NuMax, \\epsilon is the input and M is the output while for other methods, M is the input and \\epsilon is the output. In spite of this difference, the visualization in Figure 3a lets us compare different methods and understand which one gives us a better isometry constant.", "\nQuestion: Can you state what is novel ...\n\nResponse: The \"Theoretical Insights\" section basically describes why learning phi and lambda (sensing and recovery) simultaneously is useful for the CS problem. We characterize \"good\" measurements as the ones that give us back the original data with the highest probability. This problem is equivalent to maximizing the mutual information between the original data and the measurements. Since we do not know the true underlying distribution of data, we cannot maximize this mutual information. Instead, we assume a parametric distribution on the reconstruction error and show that jointly learning the sensing and recovery gives us a lower-bound on the mutual information we wish to maximize. In other words, instead of maximizing the true mutual information, we maximize a lower-bound of it.\n\nQuestion: Why don't you use the same M ...\n\nResponse: In response to one of the previous comments, we described an important difference between NuMax and other approaches (in terms of the input/output of the algorithms). Because of this difference, we were not able to use the exact same M for all approaches. In addition, since calculating \\epsilon is significantly cheaper for random embedding compared to deep learning-based approaches, we have used a fewer number of 'M's for the curves of learning-based approaches (i.e., DCN/DeepSSRR) compared to the curve of random embedding.\n\nQuestion: Why do you choose 20-layers for the denoiser? Seems deep...:\n\nResponse: The LDAMP paper uses DnCNN, which is a 20-layer convolutional network. The reason for having 20 layers can be understood from the Table 1 of the DnCNN paper (https://arxiv.org/pdf/1608.03981.pdf), which tabulates the effective patch size for different denoisers. Considering the fact that DnCNN uses convolutional layers with 3x3 filters, the authors have chosen 20 layers in order to have a receptive field (which is correlated to the effective patch size) similar to other denoisers. For a more detailed argument, please refer to Section III.A of DnCNN paper (https://arxiv.org/pdf/1608.03981.pdf). \n\nQuestion: The last part of the last sentence of the 2nd paragraph ...\n\nResponse: Yes, it means that the DCN has more parameters compared to DeepSSRR. 
As we have mentioned in the 1st paragraph of Section 3.1, the DCN has 8 convolutional layers, while DeepSSRR has 5 to 7 convolutional layers, depending on the size of embedding.", "Re your specific comments:\n\n1- It is indeed possible to compare learning-based approaches to compressive sensing (like our work in this manuscript) vs. model-based approaches (like AMP, DAMP). We refer the reviewer to Figure 5(c) in our paper and also Figure 3 of the paper \"A Learning Approach to Compressed Sensing,\" http://cs231n.stanford.edu/reports/2017/pdfs/8.pdf. Figure 5 of our paper compares the performance of our learning-based approach vs. the LASSO L1 solver; Figure 3 of the aforementioned paper compares the performance of other learning-based approaches such as CNN and VAE with AMP. Both figures show that i) when the undersampling ratio (i.e. m/n) is small, learning-based approaches (like our work) can outperform model-based approaches (such as AMP or DAMP); ii) when the undersampling ratio is large enough, model-based approaches start to outperform learning-based approaches.\n\nIntuitively, when the undersampling ratio is large enough, model-based approaches can extract sufficient information from measurements to reconstruct signals accurately enough and even better than learning-based approaches.\nMoreover, model-based algorithms like AMP/DAMP have the knowledge of the measurement matrix and this is another factor helping them to be better than learning-based approaches in high undersampling ratio regime. \n\nRegarding different SNRs, we refer the reviewer to Table 3 of [arXiv:1701.03891] which we have also cited in our submission. In that table, the authors compare the robustness of recovery based on CNNs and DAMP. As they have shown, CNNs are more robust to noise. In general, learning-based approaches can utilize data to more effectively suppress measurement noise. \n\nFinally, we note that the LDAMP approach we have cited in our paper is very similar to DAMP except that, instead of using a BM3D denoiser, LDAMP uses a CNN denoiser. The rest of the architecture is not learned and hence, is similar to AMP/DAMP. Therefore, one can expect that LDAMP's behaviour would be similar to DAMP except for the fact that it has a better denoiser.\n\n2- We have added a reference to the SRA paper in our revised paper plus added a short discussion of the differences with our approach. Like our approach, the SRA architecture is also an autoencoder. In SRA, the encoder can be considered to be a fully connected layer while in our work the encoder has a convolutional structure and is basically a circulant matrix. For large problems, learning a fully connected layer (as in the SRA encoder) is significantly more challenging than learning one/several convolutional layers (as in our encoder). In SRA, the decoder is a T-step projected subgradient. In our work, the decoder consists of several convolutional layers plus a rearrangement layer. The optimization in SRA is solely over the measurement matrix and T (which is the number of layers in the decoder) scalar values that could be considered as learning rates at every layer of the decoder. However, in our work, the optimization is over the convolution weights and biases that we have across the different layers of our encoder and decoder. The authors of SRA have shown results mainly on synthetic datasets whereas we have presented results on real images. ", "3- We refer the reviewer to Section 2.2 of our submission (\"Applications of Low-Dimensional Embedding\"). 
In this section and in Algorithm 1, we discuss how we can learn near-isometric embeddings using our approach. One of the main applications of near-isometric embeddings is designing compressive sensing (CS) measurement matrices. In CS language, learning a near-isometric embedding is equivalent to learning a measurement matrix that satisfies the so-called restricted isometry property (RIP). RIP is a *sufficient* condition for compressive sensing. This means that the matrices we learn with Algorithm 1 can be used along with L1 minimization for CS. \n\nFor a comparison of our approach with previous work, we refer the reviewer to Figure 3(a) in our submission and also Figure 8 in the NuMax paper we cite (available at http://home.engineering.iastate.edu/~chinmay/files/papers/numax_tsp.pdf). Figure 8 of the NuMax paper compares the CS recovery performance of NuMax vs. random Gaussian projections and shows that NuMax outperforms random projections in terms of MSE for different measurement ranges and SNRs. This success is mainly explained by Figure 3 of the NuMax paper, which shows that the matrices built by the NuMax algorithm have a better isometry constant than random matrices. With this in mind, we now refer the reviewer to Figure 3(a) in our manuscript, where we have shown that the isometry constant of our method is even better than NuMax. This means that, if our approach is used with L1 reconstruction, then the result will be better than using either random matrices or NuMax matrices. Therefore, the answer to the reviewer's question is \"yes\". We can basically use matrices learned with our approach along with L1 reconstruction, and the result will beat both random projections and NuMax embeddings. \n\n4- We used a right to left ordering in Figure 1, because we wanted to include the vector-matrix multiplications denoted as 'parallel convolutions' in this figure.", "\nThis paper proposes a (CNNs) architecture for encoding and decoding images for compressed sensing. \nIn standard compressed sensing (CS), encoding usually is linear and corresponds to multiplying by a fat matrix that is iid gaussian. The decoding is performed with a recovery algorithm that tries to explain the linear measurements but also promotes sparsity. Standard decoding algorithms include Lasso (i.e. l1 regularization and a MSE constraint) \nor iterative algorithms that promote sparsity by construction. \n\nThis paper instead proposes a joint framework to learn a measurement matrix Phi and a decoder which is another CNN in a data-driven way. The proposed architecture is novel and interesting. \n\nI particularly liked the theoretical motivation of the used MSE loss by maximizing mutual information. \n\nThe use of parallel convolutions is also neat and can significantly accelerate inference, which can be useful for some applications. \n\nThe empirical performance is very good and matches or outperforms previous state of the art reconstruction algorithms D-AMP and Learned D-Amp. \n\nOn comparisons with prior/concurrent work: The paper is essentially a CNN autoencoder architecture but specifically designed for compressed sensing problems. \nThere is vast literature on CNN autoencoders including (Jiang 2017 and Shi 2017) paper cited by the authors. I think it is fine to not compare against those since they divide the images into small blocks and hence have are a fundamentally different approach. 
This is fine even if block-reconstruction methods outperform this paper, in my opinion: new ideas should be allowed to be published even if they do not beat SOTA, as long as they have clearly novel ideas. It is important however to discuss these differences as the authors have done on page 2. \n\nSpecific comments: \n\n1. It would be interesting to see a comparison to D-Amp and LDAmp for different numbers of measurements or for different SNRs (i.e. when y = Phi x + noise). I suspect each method will be better for a different regime?\n\n2. The paper: `The Sparse Recovery Autoencoder' (SRA) by Wu et al. https://arxiv.org/abs/1806.10175\nis related in that it learns both the sensing matrix and a decoder and is also focused on compressed sensing, but for non-image data. The authors should discuss the differences in architecture and training. \n\n3. Building on the SRA paper, it is possible that the learned Phi matrix is used but then reconstruction is done with l1-minimization. How does that perform for the matrices learned with DeepSSRR?\n\n4. Why is Figure 1 going from right to left?\n\n\n\n", "Authors cast the problem of finding informative measurements by using a maximum likelihood formulation and show how a data-driven dimensionality reduction protocol is built for sensing signals using convolutional architectures. A novel parallelization scheme is discussed and analyzed for speeding up the signal recovery process.\n \nPrevious works have been proposed to jointly learn the signal sensing and reconstruction algorithm using convolutional networks. Authors do not consider them as the baseline methods due to the fact that the blocky reconstruction approach is unrealistic in settings such as MRI. However, there is no empirical result to support this conclusion. In addition, the comparisons to these methods can further convince the readers about the advantage of the proposed method.\n \nIt is not clear where the maximum deviation from isometry in Algorithm 1 is discussed since the MSE is used as a loss function.\n \nAuthors provided theoretical insights for the proposed algorithm. It indicates that the lower-bound of the mutual information is maximized and minimizing the mean squared error is a special case, but it is unclear why this can provide a theoretical guarantee for the proposed method. More details are good for the connections between the theory and the proposed algorithm.\n \nOne of the contributions in this paper is the speed, so the results on the speed should be put in the main paper." ]
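The joint sensing-and-recovery training discussed in the thread above (a linear convolutional encoder standing in for the measurement operator Phi, a convolutional decoder for the recovery map Lambda, both trained end-to-end under an MSE loss) can be sketched as follows. This is a toy PyTorch illustration under assumed layer sizes and strides, not the authors' DeepSSRR implementation:

```python
# Minimal sketch: jointly learned convolutional sensing (Phi) and recovery
# (Lambda), trained with MSE. Kernel sizes, strides, and widths are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSensingAutoencoder(nn.Module):
    def __init__(self, stride=4):
        super().__init__()
        # A strided convolution with no bias or nonlinearity keeps the
        # measurement operator linear, mirroring compressive sensing.
        self.encoder = nn.Conv2d(1, 1, kernel_size=8, stride=stride,
                                 padding=2, bias=False)
        # A small convolutional decoder stands in for the recovery map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(1, 32, kernel_size=8, stride=stride, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)      # compressive measurements
        return self.decoder(y)   # reconstruction

model = ConvSensingAutoencoder()
x = torch.randn(8, 1, 32, 32)    # toy batch of 32x32 images
loss = F.mse_loss(model(x), x)   # joint (Phi, Lambda) training objective
loss.backward()
```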
[ -1, 7, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "Bylk_jNcC7", "iclr_2019_B1xVTjCqKQ", "SJgsRkfb3m", "Byl6y31r37", "Byl6y31r37", "HkgjfHOahX", "HkgjfHOahX", "iclr_2019_B1xVTjCqKQ", "iclr_2019_B1xVTjCqKQ" ]
iclr_2019_B1xWcj0qYm
On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data
Empirical risk minimization (ERM), with a proper loss function and regularization, is the common practice of supervised classification. In this paper, we study training an arbitrary (from linear to deep) binary classifier from only unlabeled (U) data by ERM. We prove that it is impossible to estimate the risk of an arbitrary binary classifier in an unbiased manner given a single set of U data, but it becomes possible given two sets of U data with different class priors. These two facts answer a fundamental question---what the minimal supervision is for training any binary classifier from only U data. Following these findings, we propose an ERM-based learning method from two sets of U data, and then prove it is consistent. Experiments demonstrate that the proposed method can train deep models and outperform state-of-the-art methods for learning from two sets of U data.
accepted-poster-papers
This paper studies the task of learning a binary classifier from only unlabeled data. The authors first provide a negative result, i.e., they show it is impossible to obtain an unbiased risk estimator from a single set of unlabeled data. They then provide an empirical risk minimization method which works when given two sets of unlabeled data, as well as the class priors. The four submitted reviews were unanimous in their vote to accept. The results are impactful, and might make for an interesting oral presentation.
train
[ "ryxRqB96R7", "HJgRrkQw07", "BJe2DbrUR7", "r1enCpTGCQ", "S1gL7qwf07", "rkxn-AIbC7", "SyeJk0I-A7", "rJlyh6UW07", "H1leKaUW0Q", "rJgzB6LbAX", "r1gZR2ahaQ", "rkx7Fs-ham", "BJgWwjZ3T7", "rJeZEjbhTX", "rJepWiW3pm", "rkeeoqWha7", "ryxIBivipQ", "BkgahCMcpQ", "H1xTSAEqnm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors have responded to my questions, and I have no other comment to make.", "Thank you for your many insightful clarifications and expanding your experiments. I look forward to seeing more work in the future!", "We would like to thank all reviewers for their helpful comments! We have now updated our submission accordingly. \nThe key modifications of the revised version include:\n\n1. extend the number of max epochs for training from 200 to 500 in the benchmark experiments with deep neural networks (please see Fig. 2),\na table of final test risk is also added (please see Table 4 in Appendix C.2);\n2. update Table 2 by adding noises of different directions to the training class priors.", "And please find our responses below.\n\nQ: For small PN and small PN prior-shift, the choice of 10% seems arbitrary\nA: Yes, using 10% data for training is a bit arbitrary, but it follows the tradition in semi-supervised learning where it is common to give 10% labeled data. Some recent semi-supervised papers give slightly less than 10% labeled data, for example, 4k labeled data for CIFAR-10 in “temporal ensembling” from ICLR 2017, “mean teachers” from NIPS 2017, “smooth neighbors on teacher graphs” from CVPR 2018, and “compact latent space clustering” from ICML 2018. Note that section 5.1 is more a proof of concept and illustration of properties, and hence this arbitrary choice should be a safe choice.\n\nQ: At what percentage do the supervised methods start displaying a clear advantage?\nA: This is a great question but hard to answer. The proposed UU learning is model-independent. However, this doesn’t mean the best model for PN learning is the best model for UU learning due to the memorization in deep networks (“a closer look at memorization in deep networks” from ICML 2017). At what percentage PN learning is clearly better than UU learning mainly depends on 4 factors: first of all, the dataset; second, the values of theta and theta’, which naturally measure how far UU learning is away from PN learning; third, the model capacity in terms of memorizing signals and noises with different speeds---it is conjectured that skip connections themselves have certain regularization effects against label noises; finally, the optimization algorithm, especially the learning rate as a function of the epoch number.\n\nQ: The curves in Fig. 2 suggest that the models should have been trained for longer time\nA: Thanks for the suggestion! We are working on the experiments for extending the number of max epochs for training from 200 to 500; we will see if standard supervised learning with limited data can be significantly better than now.\n\nQ: Try a more realistic setting that the priors are perturbed in a different direction\nA: We have launched new experiments by scaling the training class priors differently. Experiments on MNIST are finished but experiments on CIFAR-10 are quite slow; we will update the submission later. The experimental results on MNIST show that the proposed method still performs reasonably well.", "Thanks for pointing out many typos; we will fix them accordingly. Please find our detailed responses below.\n\nQ: Why is the simplification easier to implement in deep learning frameworks?\nA: Sorry for not explaining it. The simplified risk estimator is standard cost-sensitive learning, and thus we can reuse existing codes for cost-sensitive learning, such as importance reweighting by plugging alpha and alpha’ into the codes. 
However, the original risk estimator needs to be implemented since it is new and cannot be reduced to existing objective functions.\n\nQ: What about computation time in experiments?\nA: Please note that the proposed method just offers a new objective function. After specifying a model, this objective function can be minimized by any optimization algorithm. Specifically, we applied standard SGD for MNIST and Fashion-MNIST and Adam for SVHN and CIFAR-10. Here the proposed method would not add any more computational burden, so that the computation time simply depends on how many epochs we would like to train the model.\n\nQ: Examples of typical problems for classification from two sets of U data with known class priors\nA: Two sets of U data with different class priors may be collected from different places or time points. For example, considering morbidity rates, they can be potential patient data collected from urban and rural areas; considering food preferences, they can be potential customer data collected from the Northern and Southern China; likewise, considering approval rates, they can be unlabeled voter data collected in two years.\n\nNote that in the seminal paper on learning from label proportions “N. Quadrianto, A. J. Smola, T. S. Caetano, and Q. V. Le. Estimating labels from label proportions. JMLR, 2009”, there are many potential applications in areas like e-commerce, politics, spam filtering and improper content detection. The two problem settings are different yet closely related, and thus those can also be our potential applications.\n\nQ: Why naming l- and l+ the corrected loss functions?\nA: We apologize for the confusion. The notation (i.e., l+ and l-) indicates the U set with larger/smaller class prior is regarded as the corrupted P/N dataset. The name is also from learning with noisy labels. Since our training data are corrupted data, using the original loss l means regarding the corrupted data as clean data and will cause learning to be biased and inconsistent. In order to “correct” this effect, the loss has to be corrected so that the corrected loss is perfectly compatible with the corrupted data.", "Great question! A similar question has been addressed in a previous reply entitled “Relationship to learning from label proportions (LLP) and natural extensions to k classes with k U sets”. The main message can be summarized as follows. LLP makes use of the mean operator technique for linear-odd losses, and it can naturally handle k classes with k U sets but it cannot learn nonlinear classifiers. The proposed method makes use of the risk rewrite technique from learning with noisy labels, and it can naturally learn nonlinear classifiers but it cannot handle k classes with k U sets.\n\nWe think the technical difficulty is how to connect the k-class learning problem to learning with noisy labels. For binary classification, this connection is obvious: we regard the U set with larger class prior as the corrupted positive class and the U set with smaller class prior as the corrupted negative class. For multi-class classification with multiple U sets where all class priors are given, we can construct combinatorial many mappings from a U set to a corrupted class, and we lack a measure of the quality of these mappings. This should be the first step for extending this paper beyond binary classification.", "Two sets of U data with different class priors may be collected from different places or time points. 
For example, considering morbidity rates, they can be potential patient data collected from urban and rural areas; considering food preferences, they can be potential customer data collected from northern and southern China; likewise, considering approval rates, they can be unlabeled voter data collected in two different years.\n\nNote that in the seminal paper on learning from label proportions “N. Quadrianto, A. J. Smola, T. S. Caetano, and Q. V. Le. Estimating labels from label proportions. JMLR, 2009”, there are many potential applications in areas like e-commerce, politics, spam filtering and improper content detection. The two problem settings are different yet closely related, and thus those can also be potential applications of our method.", "Note that given only U data for training, the most straightforward idea is to use clustering, in particular discriminative clustering, which is also known as “unsupervised classification”. This solution is usually suboptimal. A minor reason is that clustering methods are not always compatible with state-of-the-art deep models, but there are two major reasons.\n\nFirst, successful translation of clustering results into classification results relies exclusively on an assumption, namely that one cluster exactly corresponds to one class. If we have one cluster formed by a few geometrically close classes, or one class formed by several geometrically separated clusters (as in our experiments), this assumption would be violated and we would fail to translate clusters into meaningful classes. It may happen that clustering results are perfect while classification results are poor.\n\nSecond, clustering must introduce additional geometric or information-theoretic assumptions (for example, by following the large-margin principle and the information-maximization principle). The learning objectives of clustering methods are built upon these additional assumptions. It is very difficult to measure the distance or similarity according to the geometry for complex data in high-dimensional spaces. On the other hand, we employ ERM and rely on the same assumptions as supervised deep learning: the smoothness assumption for supervised learning and the composition-of-factors assumption for deep learning (Section 5.11.2, the DL book). Therefore, we prefer ERM to clustering methods.\n\nBTW, the argument was that using clustering is inferior to ERM, rather than that using clustering is inferior to an arbitrary binary classifier. The difference between learning objectives is more critical than the difference between models to be learned.", "Classification-calibrated loss functions are defined in “P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 2006.” Briefly speaking, using such a surrogate loss guarantees that, under mild assumptions on the model, the learned model will converge to the Bayes optimal classifier as the amount of training data goes to infinity. Almost all popular losses are classification calibrated---actually, a monotonic and differentiable loss should be classification calibrated if its gradient at zero is negative.", "We are trying to expand Table 2 by adding noise in different directions to the training class priors (experiments on MNIST are finished but experiments on CIFAR-10 are quite slow; we will update the submission later). 
Please find our detailed responses below.", "The authors propose an unbiased estimator that allows for training models with weak supervision on two unlabeled datasets with known class priors. The theoretical properties of the estimator are discussed and an empirical evaluation shows promising performance.\n\nThe paper provides a thorough overview of the related work.\nThe experiments compare to the relevant baselines.\n\nMinor remarks:\n\nThe writing could be improved in multiple places; the main thing that makes some sections of the paper hard to follow is that the concepts often get mentioned and discussed before they are formally defined/introduced. Concepts that are introduced via citations should also be explained, even if not in depth.\n\nFigure 2: the curves suggest that the models should have been left to train for a longer time - some of the small PN and small PN prior-shift risks are still decreasing\n\nFigure 2: the scaling seems inconsistent - the leftmost subplot in each row doesn’t start at (0,0) in the lower left corner, unlike the other subplots in each row - and it should probably be the same throughout - no need to be showing the negative space.\n\nFigure 2: maybe it would be good to plot the different lines in different styles (not just colors) - for BW print and colorblind readers\n\nFor small PN and small PN prior-shift, the choice of 10% seems arbitrary. At what percentage do the supervised methods start displaying a clear advantage - for the experiments in the paper?\n\nWhen looking into the robustness wrt noise in the training class priors, both are multiplied by the same epsilon coefficient. In a more realistic setting the priors might be perturbed independently, potentially even in a different direction. It would be nice to have a more general experiment here, measuring the robustness of the proposed approach in such a way.\n\n5.2 typo: benchmarksand ; datasetsfor", "Thanks for pointing out that crowdsourcing is also related to this paper! We conjecture that data generation processes in crowdsourcing depend heavily on non-expert labelers, while data generation/corruption processes in LNL are more theoretical/statistical. In LNL, when p(y|x) is corrupted, we may first draw x_i from p(x) and then manipulate y_i according to the corrupted p(y|x). However, when p(x|y) is corrupted, we have to first sample y_i and then draw the corrupted x_i directly from the corrupted p(x|y), since here p(x) is different in the clean and corrupted joint densities. In this sense, the problem settings seem not very related. We will carefully check this issue later since we are not very familiar with crowdsourcing. Could you please recommend a few crowdsourcing and multi-source learning papers for our reference?\n\nNote that the CCN noise model (where p(y|x) is corrupted) involves no covariate shift, while the MCD noise model (where p(x|y) is corrupted) inevitably involves covariate shift. The empirical studies cited in this paper from the computer vision community, if not CCN-based, basically assume that noisy labels are from p(y_noisy|x,y_clean) and thus again involve no covariate shift. This paper is the first to experimentally show that unbiased risk estimators originally designed for the no-covariate-shift case don’t work in the covariate-shift case. As a consequence, this paper is related to, but still fairly different from, the majority of LNL papers. 
To the best of our knowledge, this paper is the fourth paper going along this specific direction (after three papers from COLT 2013, TAAI 2013 and ICML 2015).\n", "Thanks for the suggestion, but extending this paper to binary classification from k U sets, or to k classes with k U sets, is beyond the scope of the current paper. It seems that the problem setting of LLP is more general than ours, since LLP can make use of k U sets to learn a binary classifier. Nevertheless, the goal of learning or the model to be learned should be taken into account too: the goal of LLP is to learn a linear model whereas our goal is to learn either a linear or a deep model. Existing LLP methods based on ERM, i.e., “N. Quadrianto, A. J. Smola, T. S. Caetano, and Q. V. Le. Estimating labels from label proportions. JMLR, 2009”, cannot learn any nonlinear model.\n\nHere we distinguish three cases of extending A to B, where A is some component in some existing method. The first case is that A can be naturally extended to B but the authors didn’t know/realize it. The second case is that extending A to B has no problem theoretically but the performance can be quite poor practically. The third case is that A cannot be extended to B theoretically.\n\nFor example, extending the logistic loss in LLP to any linear-odd loss (i.e., l(z) - l(-z) = -z; see [1]) is the first case. Extending the linear model in many LNL methods to deep models is the second case---the mathematical derivations suggest they are model-independent, but the performances turn out to be quite poor because deep models are much better at memorizing noisy labels than linear models. Finally, extending the linear model in LLP to deep models is the third case---this can be explained by the proof of Theorem 3 in [1], where the key observation is y*g(x)=g(y*x) if g is linear in its parameters; as a result, only the expectation of y*x needs to be estimated, which is known as the mean operator and is the technique of LLP. Therefore, LLP cannot benefit from deep learning unless it can get rid of the mean operator.\n\nOn the other hand, the proposed method shares the technique for designing unbiased risk estimators in LNL (i.e., risk rewrite). For learning a binary classifier, we proved that one U set is not enough but two U sets are enough. However, given (more than) three U sets for training, how to meaningfully incorporate all U sets is an open question. For now we cannot say whether it belongs to the second or third case, but we are sure it is not the first case.\n\n[1] G. Patrini, F. Nielsen, R. Nock, M. Carioni. Loss factorization, weakly supervised learning and label noise robustness. ICML, 2016.", "This has been explored in Theorem 6, where the estimation error bound of the proposed method is linear in alpha and alpha’. By the definitions of alpha and alpha’, we can see that alpha and alpha’ are both non-negative and no more than 4/(theta-theta’) under the assumption that theta>theta’. Thus, the larger theta-theta’ is, the better the proposed method performs. This theoretical result is consistent with our empirical results shown in Figures 2 and 3.", "Q: Meaning of “classification calibrated”\nA: Classification-calibrated loss functions are defined in “P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. 
Journal of the American Statistical Association, 2006.” Briefly speaking, using such a surrogate loss guarantees that, under mild assumptions on the model, the learned model will converge to the Bayes optimal classifier as the amount of training data goes to infinity. Almost all popular losses are classification calibrated---actually, a monotonic and differentiable loss should be classification calibrated if its gradient at zero is negative.\n\nQ: Saying that three U sets are needed, where this includes the test set, seems a bit non-standard\nA: Sorry for the confusion. To be clear, *three class priors* are needed, i.e. two for the training distributions and one for the test distribution. However, only *two U sets*, corresponding to the two training distributions, are needed, and we don’t need any training data from the test distribution.\n\nQ: The labels l_+ and l_- in Definition 3 seem to imply the two U sets are positive vs. negative; but this is not the case, correct?\nA: Yes, the two U sets are completely unlabeled, that is, neither positive nor negative. We followed the notation from the learning-with-noisy-labels literature, which caused this confusion. In this context, the U set with the larger class prior is regarded as *corrupted positive* data and the U set with the smaller class prior is regarded as *corrupted negative* data. We will clearly explain this notation in the revised version.", "We are still working on the experiments for extending the maximum number of training epochs from 200 to 500 and investigating whether standard supervised learning with limited data can be significantly better than it is now. After that, we will improve the clarity of the paper following your comments. Please find our detailed responses below.", "This paper proposes a methodology for training any binary classifier from only unlabeled data. The authors prove that it is impossible to provide an unbiased estimator given only a single set of unlabeled data; however, they provide an empirical risk minimization method for only two sets of unlabeled data where all the class priors are given. Some experiments and comparisons with the state of the art are provided, together with a study on the robustness of the method.\n\npros:\n\n- The paper is clear, and it provides an interesting proven statement as well as a methodology that can be applied directly. Because they show that only two sets with different (and known) priors are sufficient to have an unbiased estimator, the paper has a clear contribution.\n- The impact of the method is a clear asset, because learning from unlabeled data is applicable to a large number of tasks and has been attracting attention in recent years.\n- The large literature on the subject has been well covered in the introduction.\n- The emphasis placed on integrating the method with state-of-the-art classifiers, such as the deep learning framework, is also a very positive point.\n- The effort made in the experiments, by testing the performance as well as the robustness of the method with noisy training class priors, is very interesting. \n\nremarks:\n\n- part 4.1 : the simplification is interesting. However, the authors say that this simplification is easier to implement in many deep learning frameworks. 
Why is that?\n- part 4.2 : the consistency part is too condensed and not clear enough.\n- experiments : what about computation time?\n- More generally, I wonder if the authors can find examples of typical problems for classification from unlabeled data with known class priors and with at least two sets?\n\nminor comments:\n- part 1: 'but also IN weakly-supervised learning'\n- part 2. related work : post- precessing --> post-processing\n- part 2. related work : it is proven THAT the minimal number of U sets...\n- part 2. related work : In fact, these two are fairly different --> not clear, did you mean 'Actually, ..' ?\n- part 4.1 : definition 3. Why name the corrected loss functions l- and l+? Both of them integrate l(z) and l(-z), so it can be confusing.\n- part 5.1 Analysis of moving ... closer: ... is exactly THE same as before.\n- part 5.2 : Missing spaces : 'from the webpage of authors.Note ...' and 'USPS datasetsfor the experiment ...' ", "Summary: \nThe authors introduce the task of learning from unlabeled data clearly and concisely with sufficient reference to background material. They propose a learning approach, called UU, from two unlabeled datasets with known class priors and prove consistency and convergence rates. Their experiments give insight into the problem, revealing how the two datasets must be sufficiently separated and how UU learning outperforms state-of-the-art approaches. The writing is clear and the idea is an original refinement of earlier work, justified by its outperforming state-of-the-art approaches. However, the paper needs more experimentation. \n\nFurther details:\nWhile the introduction and set-up are long, they position the paper well by making it approachable to someone not directly in the subject area and delineating how the approach differs from existing theory. The paper flows smoothly and the arguments build sequentially. A few issues are left unaddressed:\n- How does the natural extension of UU learning extend beyond the binary setting? \n- As the authors state, in the wild the class priors may not be known. Their experiment is not completely satisfying because it scales both priors the same. It would be more interesting to experimentally consider them with two different unknown error rates. If this were theoretically addressed (even under the symmetrical single epsilon) this paper would be much better. \n- In Table 2, using an epsilon greater than 1 seems to always decrease the error, with a seemingly greater impact when theta and theta' are close. This trend should be explained. In general, the real-world application was the weakest section. Expounding upon it more, running more revealing experiments (potentially on an actual problem in addition to benchmarks), and providing theoretical motivation would greatly improve the paper. \n- In the introduction it is emphasized how this compares to supervised learning, but the explanation of how this compares to unsupervised clustering is much more terse. Another sentence or two explaining why using the resulting cluster identifications for binary labeling is inferior to the \"arbitrary binary classifier\" would help. It's clear in the authors' application because one would like to use all data available, including the class priors, for classification. \n\nMinor issues: \n-At the bottom of page 3 the authors state, \" In fact, these two are fairly different, and the differences are reviewed and discussed in Menon et al. (2015) and van Rooyen & Williamson (2018). 
\" It would be clearer to immediately state the key difference instead of waiting until the end of the paragraph. \n- In the first sentence of Section 3.1 \"imagining\" is mistyped as \"imaging.\"\n- What does \"classifier-calibrated\" mean in Section 3.1? \n- In Section 3.1, \"That is why by choosing a model G, g∗ = arg ming∈G R(g) is changed as the target to which\" was a bit unclear at first. The phrase \"is changed as the target to which\" was confusing because of the phrasing. Upon second read, the meaning was clear. \n- In the introduction it was stated \"impossibility is a proof by contradiction, and the possibility is a proof by construction.\" It would be better to (re)state this with each theorem. I was immediately curious about the proof technique after reading the theorem but no elaboration was provided (other than see the appendix). The footnote with the latter theorem is helpful as it alludes to the kind of construction used without being overly detailed.\n- In section 5.2, in the next to last sentence of the first paragraph there are some issues with missing spaces. \n- Some more experiment details, e.g. hyperparameter tuning, could be explained in the appendix for reproducibility. ", "This paper studies the weak supervision setting of learning a general binary classifier from two unlabeled (U) datasets with known class balances. The authors establish that this is possible by constructing an unbiased estimator, analyze its convergence theoretically, and then run experiments using modern image classification models.\n\nPros:\n- This work demonstrates, theoretically and empirically, a simple way to train generic models using only the known class balances of several sets of unlabeled data (having the same conditional distributions p(x|y))---a very interesting configuration of weak supervision, an increasingly popular and important area\n\n- The treatment is thorough, proceeding from establishing the minimum number of U datasets, constructing the estimator, analyzing convergence, and implementing thorough experiments\n\nCons:\n- This is a crowded area (as covered in their related work section). As they cite, (Quadrianto et al., 2009) proposed this setting and considered linear models for k-wise classification. Moreover, the two U datasets with known class balances can equivalently be viewed as two weak / noisy label sources with known accuracies. Thus this work connects to many areas- both in noisy learning, as they cite heavily, but also in methods (in e.g. crowdsourcing and multi-source weak supervision) where several sources label unlabeled datasets with unknown accuracies (which are often estimated in an unsupervised fashion).\n\n- The overall clarity of the paper's writing could be improved. For example, the introduction and related work sections take up a large portion of the paper, but are very dense and heavy with jargon that is not internally defined upfront; for example \"risk rewrite\" is introduced in paragraph 2 with no internal definition and then used subsequently throughout the paper (this defn would be simple enough to give: in the context of this paper, \"risk rewrite\" means a linear combination of the class-conditional losses; or more generally, the expected loss w.r.t. distribution over classes...). Also intuition could be briefly given about the theorem proof strategies.\n\n- The difference between the two class distributions over the U datasets seems like an important quantity (akin, in e.g. 
weak supervision / crowd-source modeling papers, to the quantity of how bounded away from random noise the labelers are). This is treated empirically, but it would be stronger to have this show up in the theory somewhere.\n\n- Other prior work here has handled k classes with k U sets; this could have been extended to cover that setting too, since it seems natural\n\nOverall take: This learning from label proportions setting has been covered before, but this paper presents it in an overall clean and general way, testing it empirically on modern models and datasets, which is an interesting contribution.\n\nOther minor points:\n- The argument for / distinction between using eqns. (3) and (4) seems a bit ad hoc / informal (\"we argue that...\"). This is an important point...\n- Theorem 1 proof seems fine, but some intuition in the main body would be nice.\n- What does \"classification calibrated\" mean?\n- Saying that three U sets are needed, where this includes the test set, seems a bit non-standard? Also I'm confused - isn't a labeled test set used? So what is this third U set for?\n- The labels l_+ and l_- in Defn. 3 seem to imply that the two U sets are positive vs. negative; but this is not the case, correct…?\n- Stating both Lemma 5 and Thm 6 seems unnecessary\n- In Fig. 2, it seems like the models could have been trained for longer, and perhaps some of the losses would have continued decreasing? In particular, small PN? Also, a table of the final test set accuracies would have been very helpful.\n- More detail on the experimental protocol would be helpful: what kind of hyperparameter tuning was done? Averaging over repeated runs? It seems odd, for example in Fig. 3, that the green lines are so different in (a) vs. (c), and not in the way that one would expect given the decrease in theta\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 8, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "S1gL7qwf07", "rJgzB6LbAX", "iclr_2019_B1xWcj0qYm", "r1gZR2ahaQ", "ryxIBivipQ", "rJgzB6LbAX", "rJgzB6LbAX", "rJgzB6LbAX", "rJgzB6LbAX", "BkgahCMcpQ", "iclr_2019_B1xWcj0qYm", "rkeeoqWha7", "rkeeoqWha7", "rkeeoqWha7", "rkeeoqWha7", "H1xTSAEqnm", "iclr_2019_B1xWcj0qYm", "iclr_2019_B1xWcj0qYm", "iclr_2019_B1xWcj0qYm" ]
iclr_2019_B1xY-hRctX
Neural Logic Machines
We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone.
accepted-poster-papers
pros: - The paper presents an interesting forward chaining model which makes use of meta-level expansions and reductions on predicate arguments in a neat way to reduce complexity. As Reviewer 3 points out, there are a number of other papers from the neuro-symbolic community that learn relations (logic tensor networks is one good reference there). However, using these meta-rules you can mix predicates of different arities in a principled way in the construction of the rules, which is something I haven't seen. - The paper is reasonably well written (see cons for specific issues) - There is quite a broad evaluation across a number of different tasks. I appreciated that you integrated this into an RL setting for tasks like blocks world. - The results are good on small datasets and generalize well. cons: - (scalability) As both Reviewers 1 and 3 point out, there are scalability issues as a function of the predicate arity in computing the set of permutations for the output predicate computation. - (interpretability) As Reviewer 2 notes, unlike del-ILP, it is not obvious how symbolic rules can be extracted. This is an important point to address up front in the text. - (clarity) The paper is confusing or ambiguous in places: -Initially I read the 1,2,3 sequence at the top of 3 to be a deduction (and was confused) rather than three applications of the meta-rules. Maybe instead of calling that section "primitive logic rules" you can call them "logical meta-rules". -Another confusion, also mentioned by Reviewer 3, is that you are assuming that free variables (e.g. the "x" in the expression "Clear(x)") are implicitly considered universally quantified in your examples, but you don't say this anywhere. If I have the fact "Clear(x)" as an input fact, then presumably you will interpret this as "for all x Clear(x)" and provide an input tensor to the first layer which will have all 1.0's along the "Clear" relation dimension, right? -It seems that you are making the assumption that you will never need to apply a predicate to the same object in multiple arguments? If not, I don't see why you say that the shape of the tensor will be m x (m-1) instead of m^2. You need to be able to do this to get reflexivity, for example: "a <= a". -I think you are implicitly making the closed world assumption (CWA) and should say so. -On pg. 4 you say "The facts are tensors that encode relations among multiple objectives, as described in Sec. 2.2.". What do you mean by "objectives"? I would say the facts are tensors that encode relations among multiple objects. -On pg. 5 you say "We finish this subsection, continuing with the blocks world to illustrate the forward propagation in NLM". I see no mention of blocks world in this paragraph. It just seems like a description of what happens at one block, generically. -In many places you say that this model can compute deduction on first-order predicate calculus (FOPC), but it seems to me that you are limited to Horn logic (rule logic), in which there is at most one positive literal per clause (i.e. rules of the form: b1 AND b2 AND ... AND bn => h). From what I can tell you cannot handle deduction on clauses such as b1 AND b2 => h1 OR (h2 AND h3). -There is not enough description of the exact setup for each experiment. For example, in blocks world, how do you choose predicates for each layer? How many exactly for each experiment? You make it seem on p3 that you can handle recursive predicates, but this seems to not have been worked out completely in the appendix. You should make this clear. 
-In figure 1 you list Move as if it's a predicate like On, but it's a very different thing. On is a predicate describing a relation in one state. Move is an action which updates a state by changing the values of predicates. They should not be presented in the same way. -You use "min" and "max" for "and" and "or" respectively. Other approaches have found that using the product t-norm, t-norm(x, y) = x * y, helps with gradient propagation. del-ILP discusses this in more detail on p 19. Did you try these variations? -I think it would be helpful to somewhere explicitly describe the actual MLP model you use for deduction, including layer sizes and activation functions. -p. 5. typo: "Such a parameter sharing mechanism is crucial to the generalization ability of NLM to problems ov varying sizes." ("ov" -> "of") -p. 6. sec 3.1 typo: "For ∂ILP, the set of pre-conditions of the symbols is used direclty as input of the system." ("direclty" -> "directly") I think this is a valuable contribution, novel in the particulars of the architecture (e.g. expand/reduce), and am recommending acceptance. But I would like to see a real effort made to sharpen the writing and make the exposition crystal clear. Please in particular pay attention to Reviewer 3's comments.
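The meta-review's two technical points - the expand/reduce meta-rules and the min/max vs. product t-norm connectives - can be made concrete with a small schematic sketch. This is an illustration of the operations under discussion, written as a plausible PyTorch reading, not the authors' implementation; the paper's exact quantifier semantics may differ.

```python
import torch

# Soft connectives on truth values in [0, 1]; the meta-review contrasts
# min/max with the product t-norm (and its dual co-norm).
def and_min(x, y):  return torch.min(x, y)
def or_max(x, y):   return torch.max(x, y)
def and_prod(x, y): return x * y              # product t-norm
def or_prod(x, y):  return x + y - x * y      # probabilistic sum

def expand(t, m):
    # "expansion" meta-rule: arity r -> r+1, broadcast a new object axis
    return t.unsqueeze(-1).expand(*t.shape, m)

def reduce_exists(t):
    # "reduction" meta-rule: arity r -> r-1 via a soft existential quantifier
    return t.max(dim=-1).values

def reduce_forall(t):
    # soft universal quantifier over the last object axis
    return t.min(dim=-1).values
```

The product t-norm question in the meta-review amounts to swapping `and_min`/`or_max` for `and_prod`/`or_prod` inside whatever the learned MLP ends up approximating; the smoother gradients of the product forms are the reason del-ILP prefers them.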
train
[ "r1ee4DycyV", "HJxOd0tDJE", "S1e1Do49Cm", "rklvumwmCQ", "r1e_izwX0m", "S1l4CCI66Q", "rJxii0IT6Q", "H1g19nL6Tm", "rylWbydT3Q", "rkgpGkN52Q", "r1gMP1TKnQ" ]
[ "author", "public", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your pointers to the related papers. We will discuss them in the next version of our paper.", "... although it is not a differentiable model or even a neural model, the idea of learning to sort infinite arrays from short examples has been explored in the \"Generalized Planning\" literature, for example, \nhttp://rbr.cs.umass.edu/shlomo/papers/SIZaij11.pdf\nhttps://www.ijcai.org/Proceedings/11/Papers/159.pdf\nhttps://www.dtic.upf.edu/~jonsson/ker18.pdf\n", "Thanks for the clarification about the details and the scalability. I would like to keep my rating. This is an interesting direction and worth pursuing, so I support acceptance. But it is still unclear to me how the proposed approach can move beyond toy datasets. ", "1. Running time / training time.\nThe number of examples/episodes used is shown in Table 4. We plan to add training time / inference speed in our revision. Here, we show our results on Blocks World. We train our model on 12 CPUs (Xeon E5) and a single GPU (GTX 1080), It takes 3 hours to train our model (26000 episodes). During inference, the model runs in 1.43s per episode when the number of blocks is 50.\n\n2. Rules are not expressed in a logical formalism.\nThanks for the comment and suggestion --- Yes, your understanding is correct. Although the design of NLM’s neural architecture is highly motivated by FOPC logic formalism, NLM models do not explicitly encode FOPC logic forms. In contrast, the weights of the MLPs encodes how models should perform the deduction, and the output of the NLM can be regarded as the conclusions (0/1 indicating whether we should move the block, in a Blocks World).\n", "1. Model details.\nDetailed implementation details including the number of layers (a.k.a. the depth) can be found in Table 4 (Appendix B.2). As for the hyper-parameters of the MLPs, we use no hidden layer, and the hidden dimension (number of intermediate predicates) of each layer is set to 8 across all our experiments.\nWe thank the reviewer for the suggestion, and will make these information more clear in our revision. Moreover, we plan to release our code upon acceptance.\n\n2. Scalability\nIt should be clarified that scalability mentioned in the paper mainly refers to the complexity of reasoning (e.g., number of steps before producing a desired predicate), not the number of objects/entities or relations. For example, as shown in our general clarification, learning predicates that have a complex structure (such as the ShouldMove in the example) pose a scalability challenge to existing ILP methods. We also refer the reviewer to our clarification on scalability for more detailed analysis.\nIn general, we agree with the reviewer that an inductive logic system should be able to handle both complex reasoning rules (e.g., as the settings explored in our paper) and large-scale entity sets (e.g., as in knowledge graph-related literature). We hope the methods and insights we presented in this paper could help the whole community in this interesting direction.\n\n3. Permutation in MLP.\nPermutation is needed in two places. Consider two n-ary MLPs at a particular layer of the NLM (called “p”). As the reviewer correctly points out, the [m, m-1, …, m-n+1] dimensions represent permutations in the input of p. 
On the other hand, the permutation before the MLP is there to create new predicates that differ from the existing one only in the variable order, in order to compute compositions of these predicates; this is the second place where permutation is needed.\n\nAs an example, suppose “p” is the predicate HasEdge(x, y). By permuting its variables, we get another predicate, HasReverseEdge(x, y), which is TRUE if there is an edge from y to x. These two predicates can be used to compose a more complex predicate\n HasBidirectionalEdge(x, y) ← HasEdge(x, y) ∧ HasReverseEdge(x, y)\n", "6. The scalability discussion with ILP systems and SRL methods.\nThank you for the comment. Please see our response to the scalability claim. We will revise the paper accordingly to clarify.\n\n7. Generalization w.r.t. the number of objects.\nContrary to the reviewer’s hypothesis, our results actually verify that NLM models do generalize well to larger test instances. For example, Table 2 shows that our learned model achieves 100% accuracy on test instances with more blocks, and the same holds for Table 1. We have also conducted experiments testing this ability using several trained models in extreme cases consisting of 500 blocks (1000 numbers for sorting); no failure cases were found. The models will be made public along with our code after the paper decision. This ability is one of our main findings, as highlighted in the abstract (“NLMs ... generalize to arbitrarily large-scale tasks”).\n\n8. The goal configuration of Blocks World.\nWe present the generation of Blocks World instances in Appendix B.4. We will make it clearer in the revision. The goal configuration is randomly generated, independently of the initial configuration. One can compute that the expected optimal number of steps needed to solve the Blocks World is approximately 2m - o(m), where m is the number of blocks (50 in the test instances). An average of 84 steps thus means the model learns a fairly good solution. The reviewer is also welcome to check our demo in the footnote of Page 8: https://sites.google.com/view/neural-logic-machines .\n\n9. MDP formulation of the Blocks World.\nThanks for the nice suggestion. We discuss the MDP formulation in section 3.4, and we will make it clearer. We input the current world and the target world as tensors describing relations between objects. At each time step, the agents take actions to move one block onto another. We use sparse rewards to train the agents: the agents get a reward only when they finish the task.\n\n10. NLM learns the underlying logical rules.\nThanks for the comment. We mean that the learned NLM generalizes well to problems of varying sizes, in the same way logical rules do. We will reword the sentence to avoid confusion, and discuss rule extraction as future work.\n", "We thank the reviewer for many comments and pointers, and will revise our paper to further emphasize our contributions and novelties compared to previous work.\n\n1. Section 2.1 and the handling of free variables.\nSection 2.1 lists three primitive rules that serve as building blocks in later subsections to implement a Neural Logic Machine. This is necessary for providing terminology and notation used throughout the rest of the paper. We are not claiming them as novel contributions.\nSection 2.1 does *not* describe propositional logic. 
The rules for “Boolean logic” are used in NLM as components for realizing first-order logic (probabilistically, as described in section 2.2): they operate on predicates grounded on objects. An example in the Blocks World domain may look like:\n IsGround(A) V Clear(A) -> Placeable(A)\nwhere A is one object in the Blocks World domain; and notably, IsGround(.), Clear(.) and Placeable(.) are not manually specified but are learned by the network.\nOur model supports free variables. The arity of a predicate is its number of free variables. For example, the arity of a binary predicate is 2, and NLM uses a matrix (a tensor of dimension 2) to represent the predicate’s values for all possible groundings; the 1st paragraph of section 2.2 gives further details. The three rules (eqns 1-3) keep the same number of free variables, increase it by 1, and decrease it by 1, respectively.\n\n2. The probability distributions modeled by MLPs.\nWe would like to thank R3 for the comment about “joint distribution”, and briefly clarify technical details in Sections 2.2 & 2.3 to avoid potential misunderstanding.\n\nLet’s define the input of each layer k as H_k (each element of which is in [0, 1]) recursively as follows:\n\n(1) The initial layer is H_1 = prob(B), representing boolean values 0 or 1, where B is a set of base predicates.\n(2) For each layer k, the probabilistic boolean expression in the building block is defined above Eqn. 4:\n Expression(H_1, ... , H_k) ==> H_{k+1}\nwhere Expression in NLM is represented by some neural network structure. As illustrated in Figures 2 & 3, we use (a) a grouped MLP with weights \\theta_k and activation \\sigma, and (b) ReduceOrExpand, which compute\n H'_k = \\sigma(MLP(H_1, ... , H_k; \\theta_k)),\n H_{k+1} = ReduceOrExpand(H'_k).\nThis building block keeps all elements of H_{k+1} in [0, 1], and H_{k+1} becomes the input of the next layer, k+1. Therefore, such a series of building blocks is able to model a complex expression. \n\nWe will not use “joint distribution”, to avoid confusion, and will make this clearer in the revision.\n\n3. The difference with other approaches that encode the weights of weighted logic rules using neural networks.\nThanks for the pointers. We will cite and discuss the papers in the revision. Our work differs substantially from an MLN with weights computed by NNs, e.g., the mentioned L&F paper:\nTheir logic rules (called the “knowledge base” in L&F) are designed by experts (see sec 2.3 of L&F). Here, our NLM uses deep NNs to learn such rules from data. The Blocks World example in our response to the scalability question shows the complexity of the rules that NLMs can handle.\nConsequently, our NLM needs to learn weights that form those rules. In contrast, an MLN only needs to learn a real-valued weight for each hand-designed logic rule.\n\n4. The difference with the unrolled computation graph of MLN.\nOne of our main contributions is to use deep NNs to learn logic rules. Unrolling NN-parameterized MLNs is limited by the need for, and the quality of, expert-designed logic rules.\n\n5. The encoding of objects.\nIt is unclear to us what the reviewer means by “objects are … vector encodings” and hence the similarity to DeepProbLog, as we do *not* encode objects by vectors. Data representations in NLM are all tensors that encode the (probabilistic) true/false values of grounded predicates; see the 1st paragraph of section 2.2 (page 3).\n", "We thank all reviewers for their thoughts and comments. 
In addition to the specific responses below, here we clarify the scalability question asked by some reviewers. We will include related discussions in our revision.\n\nIt should be clarified that the scalability mentioned in the paper mainly refers to the complexity of reasoning (e.g., the number of steps before producing a desired predicate), not the number of objects/entities or relations. This is highlighted in #2 at the bottom of page 1: “We expect the learning system to scale with the number of logic rules. Existing logic-based algorithms like ILP suffer an exponential computational complexity with respect to the number of logic rules”.\n\nKnowledge-graph tasks involve many entities (e.g. > 10M) and relations, as reviewers pointed out, but the rules involved in the reasoning steps are usually restricted. For example, the rules considered in the knowledge base reasoning work (Yang et al., 2017) are restricted to a “chain-like” form (their Eqn. 1), which is query(Y, X) ← R_n(Y, Z_n) ∧ ··· ∧ R_1(Z_1, X), where R_1, ..., R_n are *known* relations in the knowledge base. Such knowledge-graph reasoning tasks represent an interesting yet different class of problems outside the current scope of our paper.\n\nIn contrast, learning predicates that have a complex structure (such as the ShouldMove example below) poses a scalability challenge to existing ILP methods. In dILP [Evans et al.], for example, suppose each rule has C possible choices from the templates and R rules need to be learned; then the search space is at least O(C^R) --- the number of possible rule sets is exponential w.r.t. the number of rules. On the other hand, our method is only quadratic in the number of rules (or, in this case, equivalently, the number of predicates).\n\n**********************************************************************\n A Blocks World Example \n**********************************************************************\nThis example shows what we mean by complex reasoning in the seemingly simple Blocks World domain. Suppose we are interested in knowing whether a block should be moved in order to reach the target configuration. Here, a block should be moved if (1) it is moveable; and (2) there is at least one block below it that does not match the target configuration. Call the desired predicate “ShouldMove(x)”.\n\nInput relations (as specified in the last paragraph of page 7):\nSameWorldID, SmallerWorldID, LargerWorldID;\nSameID, SmallerID, LargerID;\nLeft, SameX, Right, Below, SameY, Above.\nThe relations are given on all pairs of objects across both worlds.\n\nHere is one way to produce the desired predicate by defining several helper predicates, designed by “human experts”:\n1. IsGround(x) ← ∀y Above(y, x)\n2. SameXAbove(x, y) ← SameWorldID(x, y) ∧ SameX(x, y) ∧ Above(x, y)\n3. Clear(x) ← ∀y ¬SameXAbove(y, x)\n4. Moveable(x) ← Clear(x) ∧ ¬IsGround(x)\n5. InitialWorld(x) ← ∀y ¬SmallerWorldID(y, x)\n6. Match(x, y) ← ¬SameWorldID(x, y) ∧ SameID(x, y) ∧ SameX(x, y) ∧ SameY(x, y)\n7. Matched(x) ← ∃y Match(x, y)\n8. HaveUnmatchedBelow(x) ← ∃y SameXAbove(x, y) ∧ ¬Matched(y) \n9. 
ShouldMove(x) ← InitialWorld(x) ∧ Moveable(x) ∧ HaveUnmatchedBelow(x)\nWe can also write the logic forms in one line:\nShouldMove(x) ← (∀y ¬SmallerWorldID(y, x)) ∧ (∀y ¬(SameWorldID(y, x) ∧ SameX(y, x) ∧ Above(y, x))) ∧ ¬(∀y Above(y, x)) ∧ (∃y (SameWorldID(x, y) ∧ SameX(x, y) ∧ Above(x, y) ∧ ¬(∃z ¬SameWorldID(y, z) ∧ SameID(y, z) ∧ SameX(y, z) ∧ SameY(y, z))))\n\nNote that this is only a part of the logic needed to complete the Blocks World challenge. The learner also needs to figure out where the block should be moved. The proposed NLM can learn policies that solve the Blocks World from the sparse reward signal indicating only whether the agent has finished the game. More importantly, the learned policy generalizes well to larger instances (consisting of more blocks).\n**********************************************************************\n", "This paper presents a model to combine neural networks and logic programming. It proposes to use 3 primitive logic rules to model first-order predicate calculus in the neural networks. Specifically, relations with different numbers of arguments over all permutations of the groups of objects are represented as tensors with corresponding dimensions. In each layer, an MLP (shared among different permutations) is applied to transform the tensor. Multiple layers capture multiple steps of deduction. On several synthetic tasks, the proposed method is shown to outperform the memory network baseline and shows strong generalization. \n\nThe paper is well written, but some of the contents are still a bit dense, especially for readers who are not familiar with first-order predicate calculus. \n\nThe small Python example in the Appendix helps to clarify the details. It would be good to include the details of the architectures, for example, the number of layers, and the number of hidden sizes in each layer, in the experiment details in the appendix. \n\nThe idea of using the 3 primitive logic rules and applying the same MLP to all the permutations is interesting. However, due to the permutation step, my concern is whether it can scale to real-world problems with a large number of entities and different types of relations, for example, a real-world knowledge graph.\n\nSpecifically:\n\n1. Each step of the reasoning (one layer) is applied to all the permutations for each predicate over each group of objects, which might be prohibitive in real-world scenarios. For example, although there are usually only binary relations in real-world KGs, the number of entities is usually >10M. \n\n2. Although the inputs or preconditions could be sparse, and thus efficient to store and process, the intermediate representations are dense due to the probabilistic view, which makes the (soft) deduction computationally expensive. \n\nSome clarification questions: \n\nAre there some references for the Remark on page 3? \n\nWhy is there a permutation before the MLP? I thought the [m, m-1, …, m-n+1] dimensions represent the permutations. For example, if there are two objects, {x1, x2}, then [0, 1, 0] represents the first predicate applied to x1 and x2, and [1, 0, 0] represents the first predicate applied to x2 and x1. Some clarifications would definitely help here. \n\nI think this paper presents an interesting approach to model FOPC in neural networks. So I support the acceptance of the paper. However, I am concerned with its scalability beyond the toy datasets. 
\n", "In this paper the authors propose a neural-symbolic architecture, called Neural Logic Machines (NLMs), that can learn logic rules.\n\nThe paper is pretty clear and well-written and the proposed system is compelling. I have only some small concerns.\nOne issue concerns the learning time. In the experimental phase the authors do not state how long training is for different datasets.\nMoreover it seems that the “rules” learnt by NSMs cannot be expressed in a logical formalism, isn’t it? If I am right, I think this is a major difference between dILP (Evans et. al) and NLMs and the authors should discuss about that. If I am wrong, I think the authors should describe how to extract rules from NLMs.\nIn conclusion I think that, once these little issues are fixed, the paper could be considered for acceptance.\n\n[minor comments]\np. 4\n“tenary” -> “ternary”\n p. 5\n“ov varying size” -> “of varying size”\n“The number of parameters in the block described above is…”. It is not clear to me how the number of parameters is computed.\n“In Eq. equation 4” -> “In Eq. 4”\n\np. 16\n“Each lesson contains the example with same number of objects in our experiments.”. This sentence sounds odd.\n", "The paper introduces Neural Logic Machines, a particular way to combine neural networks and first order but finite logic. \n\nThe paper is very well written and structured. However, there are also some downsides.\n\nFirst of all, Section 2.1 is rather simple from a logical perspective and hence it is not clear what this gets a special term. Moreover, why do mix Boolean logic (propostional logic) and first order logic? Any how to you deal with the free variables, i.e., the variables that are not bounded by a quantifier? The semantics you define later actually assumes that all free variables (in your notation) are bounded by all quantifiers since you apply the same rule to all ground instances. Given that you argue that you want a neural extension of symbolic logic (\"NLM is a neural realization of (symbolic) logic machines\") this has to be clarified as it would not be an extension otherwise. \n\nFurthermore, Section 2.2 argues that we can use a MLP with a sigmoid output to encode any joint distribution. This should be proven. It particular, given that the input to the network are the marginals of the ground atoms. So this is more like a conditional distribution? Moreover, it is not clear how this is different to other approaches that encode the weight of weighted logical rule (e.g. in a MLN) using neural networks, see\ne.g. \n\nMarco Lippi, Paolo Frasconi:\nPrediction of protein beta-residue contacts by Markov logic networks with grounding-specific weights. \nBioinformatics 25(18): 2326-2333 (2009)\n\nNow of course, and this is the nice part of the present paper, by stacking several of the rules, we could directly specify that we may need a certain number of latent predicates. \nThis is nice but it is not argued that this is highly novel. Consider again the work by Lippi and Frasconi. We unroll a given NN-parameterized MLN for s fixed number of forward chaining steps. This gives us essentially a computational graph that could also be made differentiable and hence we could also have end2end training. The major difference seems to be that now objects are directly attached with vector encodings, which are not present in Lippi and Frasconi's approach. 
This is nice but also follows from Rocktaeschel and Riedel's differentiable Prolog work (when combined with Lippi and Frasconi's approach).\nMoreover, there have been other combinations of tensors and logic, see e.g. \n\nIvan Donadello, Luciano Serafini, Artur S. d'Avila Garcez:\nLogic Tensor Networks for Semantic Image Interpretation. \nIJCAI 2017: 1596-1602\n \nHere you can also have vector encodings of constants. This also holds for \n\nRobin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt:\nDeepProbLog: Neural Probabilistic Logic Programming. CoRR abs/1805.10872 (2018)\n\nThe authors should really discuss this missing related work. This should also involve\na clarification of the \"ILP systems do not scale\" statement. At least if one views statistical relational learning methods as an extension of ILP, this is not true. Probabilistic ILP aka statistical relational learning has been used to learn models on electronic health records, see e.g. the papers collectively discussed in \n\nSriraam Natarajan, Kristian Kersting, Tushar Khot, Jude W. Shavlik:\nBoosted Statistical Relational Learners - From Benchmarks to Data-Driven Medicine. Springer Briefs in Computer Science, Springer 2014, ISBN 978-3-319-13643-1, pp. 1-68\n\nSo the authors should either discuss SRL and its successes, separating SRL from ILP, or they cannot argue that ILP does not scale. In the related work section, they decided to view both as ILP, and, in turn, the statement that ILP does not scale is not true. Moreover, many of the learning tasks considered have been solved with ILP, too, of course in the ILP setting. Many ILP systems have been shown to scale beyond those toy domains. \nThis also includes the blocks world. Here relational MDP solvers can deal e.g. with BW worlds composed of 10 blocks, resulting in MDPs with several million states. And they can compute relational policies that solve e.g. the goal on(a,b) for an arbitrary number of blocks. This should be incorporated in the discussion in the introduction in order to avoid the wrong impression that existing methods just work for toy examples. \n\nComing back to scaling, the current examples are on rather small datasets, too, namely <12 training instances. Moreover, given that we learn a continuous approximation with a limited depth of reasoning, it is also very likely that the models do not generalize well to larger test instances. So the scaling issue has to be qualified to avoid giving the wrong impression that the present paper solves this issue. \n\nFinally, the BW experiments should indicate some more information on the goal configuration. This would help to understand whether an average number of moves of 84 is good or bad. Moreover, some hints about the MDP formulation should be provided, given that there have been relational MDPs that solve many of the probabilistic planning competition tasks. And, given that the conclusions argue that NLMs can learn the \"underlying logical rules\", the learned rules should actually be shown. \n\nNevertheless, the direction is really interesting, but there are several downsides that have to be addressed. " ]
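Reviewer 1's question about "permutation before the MLP" and the authors' HasEdge/HasReverseEdge example can be illustrated in a few lines. This is a hypothetical sketch - it assumes an r-ary predicate tensor whose first r axes index objects and whose last axis indexes predicates - and is not the released code.

```python
import itertools
import torch

def argument_permutations(t):
    """Stack all argument-order variants of an r-ary predicate tensor.

    For a binary predicate p this yields both p(x, y) and p(y, x), so a
    shared MLP can compose, e.g., HasEdge with HasReverseEdge into
    HasBidirectionalEdge(x, y) <- HasEdge(x, y) AND HasReverseEdge(x, y).
    """
    r = t.dim() - 1  # object axes; the last axis indexes predicate channels
    variants = [t.permute(*perm, r) for perm in itertools.permutations(range(r))]
    return torch.cat(variants, dim=-1)  # r! times as many predicate channels
```

The r! blow-up here is exactly the scalability concern raised in the reviews and the meta-review: it is modest for the binary and ternary predicates used in the paper, but grows quickly with predicate arity.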
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 5 ]
[ "HJxOd0tDJE", "iclr_2019_B1xY-hRctX", "r1e_izwX0m", "rkgpGkN52Q", "rylWbydT3Q", "rJxii0IT6Q", "r1gMP1TKnQ", "iclr_2019_B1xY-hRctX", "iclr_2019_B1xY-hRctX", "iclr_2019_B1xY-hRctX", "iclr_2019_B1xY-hRctX" ]
iclr_2019_B1xf9jAqFQ
Neural Speed Reading with Structural-Jump-LSTM
Recurrent neural networks (RNNs) can model natural language by sequentially ''reading'' input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as ''neural speed reading'', either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.
accepted-poster-papers
The authors obtain nice speed improvements by learning to skip and jump over input words when processing text with an LSTM. At some points the reviewers considered the work incremental, since similar ideas have already been explored, but in the end two of the reviewers endorsed the paper with strong support.
test
[ "H1liRmli2Q", "BygoH5N51N", "S1eAQNnPnm", "rygL7FSlRQ", "r1xklFrl07", "S1gJqdreRm", "SylYFwwu2m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a Structural-Jump-LSTM model to speed up machine reading, which is an extension of the previous speed reading models, such as LSTM-Jump, Skim-LSTM and LSTM-Shuffle. The major difference, as claimed by the authors, is that the proposed model has two agents instead of one. One agent decides whether the next input should be fed into the LSTM (skip) and the other determines whether the model should jump to the next punctuation (jump). The sentence-wise jumping makes the jumping more structural than models like LSTM-Jump, while the word-wise skipping operation has a finer skimming decision. The reinforcement learning algorithm in this paper is also different from LSTM-Jump, where LSTM-Jump uses REINFORCE, while this paper applies actor-critic approach. \n\nEmpirical studies show that Structural-Jump-LSTM is (slightly) better than state-of-the-art methods in terms of both accuracy and speed over most but few datasets. My feeling is that the proposed model should work much better than the previous models in very long texts, which I suggest the author should try on. Otherwise, the performance gain looks marginal and it is thus questionable whether the complicated modeling is necessary. \n\nI am confused by Figure 1: why are the “yes/no” placed in front of the “skipped”? “Previous LSTM” is confusing as well, which should be “Previous Output/hidden state”.\n\nMinor comment: The LSTM-Jump takes word2vec as the initialization in CBT, while this paper uses GLOVE. I wonder if this results in the performance difference in accuracy. From my experience, GLOVE is usually better than word2vec in most of the tasks. If this effect also applies to CBT, the experiment is not fair.\n", "I read the author response and feel my questions are well addressed. I will increase the score to champion its acceptance.", "The paper presents a novel model for neural speed reading. In this new model, the authors combined several existing ideas in a nice way, namely, the new reader has the ability to skip a word or to jump a sequence of words at once. The reward of the reader is mixed of the final prediction correctness and the amount of text been skipped. The problem is formulated as a reinforcement learning problem. The results compared with the existing techniques on several benchmark datasets show consistently good improvements.\n\nIn my view, one important (also a little surprising) finding of the paper is that the reader can make jump choices successfully with the help of punctuations. And, blindly jumping a sequence of words without even lightly read them can still make very good predictions.\n\nThe basic idea of the paper, the concepts of skip and jump, and the reinforcement learning formulation are not completely new, but the paper combined them in an effective way. The results show good improvements majorly in FLOPS.\n\nThe way of defining state, rewards and value function are not very clear to me. Two value estimates are defined separately for the skip agent and the jump agent. Why not define a common value function for a shared state? Two values will double count the rewards from reading. Also, the state of the jump agent may not capture all available information. For example, how many words until the end of the sentence if you make a jump. Will this make the problem not a MDP? \n\nOverall, this is a good paper.\n\nI read the authors' response. 
The paper should in its final version add a precise explanation of how the two states interact and how a joint state definition differs from the current one.", "Thank you for your review, questions and suggestions. We address both questions and suggestions below. \n\nQuestion1: \"The basic idea of the paper, the concepts of skip and jump, and the reinforcement learning formulation are not completely new, but the paper combined them in an effective way. The results show good improvements, mainly in FLOPS. The way of defining the state, rewards and value function is not very clear to me. Two value estimates are defined separately for the skip agent and the jump agent. Why not define a common value function for a shared state? Two values will double count the rewards from reading. [...]\"\n\nAnswer1: In our model we choose to have a value estimate for each agent, as we posit that reading some high-information words can change the state of the LSTM significantly, leading to a different value estimate from the skip agent (which is based on the old LSTM state and the input to the LSTM) and the jump agent (which is based on the updated LSTM state). In principle, if we assume the skip agent can learn how a given word will change the LSTM state, the value estimate from the skip agent could be used for the jump agent, if it is updated to reflect the cost associated with reading the word. \nWe have not included in the paper the impact of this on model training and performance explicitly, due to space constraints, but we will investigate it in future work.\n\nQuestion2: \"[...] Also, the state of the jump agent may not capture all available information. For example, how many words until the end of the sentence if you make a jump. Will this make the problem not an MDP?\"\n\nAnswer2: Whether it is an MDP depends on how we consider the setting when reading the texts. In a streaming setting where each word is continuously arriving, we would not have this information when the decision to jump is made. If we have access to the whole text, we could have access to this information, and our state therefore does not capture all relevant information when making the decision. We have chosen not to use this information, as no other related work uses “future” information when making a decision, but it can potentially give an advantage. \nSimilarly to Question1, we have not explicitly extended this discussion in the paper due to space constraints, but we believe it is an interesting idea to try, to see how the policies potentially change when this information is available.\n", "Thank you for your review, questions and suggestions. We address both questions and suggestions below. \n\nComment1: As a positive point the reviewer writes: “very fast reading model (?).”\n\nAnswer1: The overall aim of this work was indeed to create a very fast speed-reading model. However, we would also argue that the paper contains multiple contributions toward achieving this goal:\n1) As noted by reviewers 1 and 3, the idea of combining skipping and jumping through a multi-agent architecture has not been done previously and empirically provides state-of-the-art speed-reading results. \n2) We provide a more stable way of training the speed reading model compared to strong baselines such as LSTM-Jump and LSTM-Shuffle, which both require selecting 3 parameters describing the model constraints from a very large set of possible values. 
In contrast, because our model makes skip and jump decisions dynamically, we do not have the same tuning of model constraints, and as described in Section 4.1 our parameter tuning is relatively stable independently of the dataset.\n\nComment2: “although the paper is well written, the jump is not described in detail.”\n\nAnswer2: In the first paragraph of Section 3 we describe the idea of both the skip and jump agent. The skip agent can skip a single word, thus not updating the LSTM state. If the word is not skipped, the jump agent makes a decision. In practice, both agents output when to read the next word, as the skip agent can decide to ignore the current word and the jump agent can decide to ignore all words until e.g. the next comma. We have now updated the end of Section 3.1 to better describe how the jumping is made based on the sampled action.\n\nComment3: “using 'structural-jump' is a little misleading. The model will jump to \".,!\" or end of sentence. What is called \"structural\"? Note that those punctuation marks are not 100% correlated to sentence structure. For example, \"He hates fruits such as apples, pears, and oranges.\" The model should jump to the end of the sentence rather than the first \",\" when reading \"such\".”\n\nAnswer3: Thank you for pointing this out. By \"structure\" we indeed refer to \"punctuation structure\". We have now clarified this point throughout the paper. \n\nComment4: “maybe the authors should say a little bit about the used computation-cost-reduction method. (e.g. in an appendix).”\n\nAnswer4: The computation-cost-reduction method is inherent in the speed-reading model, since skipped or jumped words correspond to fewer LSTM update computations. To highlight this point, we have explained explicitly at the end of the first paragraph of Section 3 that the speed-up is due to the reduced number of LSTM state update computations.", "Thank you for your review, questions and suggestions. We address both questions and suggestions below. \n\nQuestion1: \"Empirical studies show that Structural-Jump-LSTM is (slightly) better than state-of-the-art methods in terms of both accuracy and speed on most, but not all, datasets. My feeling is that the proposed model should work much better than the previous models on very long texts, which I suggest the authors try. Otherwise, the performance gain looks marginal and it is thus questionable whether the complicated modeling is necessary.\"\n\nAnswer1: In our experiments we compare our model against the state of the art using more datasets than any other related work. This large selection of datasets includes texts of very different length. On the datasets with the longest texts (IMDB, CBT-CN, CBT-NE, Yelp) we obtain the largest FLOP reductions on 3 out of 4 of them. IMDB, CBT-CN, CBT-NE are also among the datasets where we obtain the lowest reading percentages (only 19.7% to 32.6%). So our model indeed performs very well on long text. However, we also observe that speed reading is very task-dependent, as one of the datasets with short texts (DBPedia) obtains the lowest reading percentage across all datasets (17.5%).\nRegarding whether “the complicated modeling is necessary”, we note that our model is not notably more complex than related models, as most related models (except Adaptive-LSTM) implement an agent for making speed-reading decisions. In our setting, we use a simple agent for skipping, followed by a potential decision by the structural jumping agent. 
This allows us to effectively combine the benefits of skipping and jumping. Additionally, in comparison to strong models such as LSTM-Jump and LSTM-Shuffle, our model makes parameter tuning notably easier: LSTM-Jump and LSTM-Shuffle both require tuning of 3 model constraint parameters describing the jumping behavior; however, these vary significantly from dataset to dataset, and are chosen from a large set of values. In contrast, because our model makes skip and jump decisions dynamically, we do not have the same tuning of model constraints, and as described in Section 4.1 our parameter tuning is relatively stable independently of the dataset.\n\nQuestion2: “I am confused by Figure 1: why are the “yes/no” placed in front of the “skipped”? “Previous LSTM” is confusing as well, which should be “Previous Output/hidden state”.”\n\nAnswer2: The Yes/No refers to which LSTM state and output is used for the next time step – if the word is skipped, then the previous state and output is used; otherwise the current state and output is used. We have now clarified this in the caption of Figure 1. We have also corrected “Previous LSTM” into “Previous Output/hidden state”. \n\nComment3: “Minor comment: LSTM-Jump takes word2vec as the initialization in CBT, while this paper uses GLOVE. I wonder if this results in the performance difference in accuracy. From my experience, GLOVE is usually better than word2vec in most tasks. If this effect also applies to CBT, the experiment is not fair.”\n\nAnswer3: This question highlights our reason for reporting accuracy differences, as opposed to absolute values, since the accuracy is dependent on the embedding and (most importantly) model architecture. To answer the question, we have re-run our model on CBT-CN and CBT-NE with the word2vec embedding used in LSTM-Jump and report the results below:\n\nCBT-CN Acc. Jump Read FLOP-reduction\nVanilla LSTM 0.506 \nStructural-Jump-LSTM 0.526 73.0% 26.8% 4.78x\n\nCBT-NE Acc. Jump Read FLOP-reduction\nVanilla LSTM 0.414\nStructural-Jump-LSTM 0.423 59.4% 33.1% 3.82x\n\nThe absolute accuracy scores are lower than when using the GLOVE embedding; however, the FLOP reductions and accuracy differences are similar to the GLOVE embedding setting (CBT-CN slightly better and CBT-NE slightly worse). If we replaced our original results on CBT-CN and CBT-NE with these new results, it would not change the ranking of the fastest models on those datasets.\n\nWe thank the reviewer for the insightful comments. We hope the above clarifications and paper changes related to Figure 1 sufficiently answer the questions and concerns raised by the reviewer. \n", "The paper proposes a fast-reading method using skip and jump actions. The paper shows that the proposed method is as accurate as LSTM but uses much less computation.\n\n* pros: \n- very fast reading model (?). \n\n* cons: \n- although the paper is well written, the jump is not described in detail. \n- using 'structural-jump' is a little misleading. The model will jump to \".,!\" or end of sentence. What is called \"structural\"? Note that those punctuation marks are not 100% correlated to sentence structure. For example, \"He hates fruits such as apples, pears, and oranges.\" The model should jump to the end of the sentence rather than the first \",\" when reading \"such\". \n- maybe the authors should say a little bit about the used computation-cost-reduction method. (e.g. in an appendix). " ]
[ 7, -1, 7, -1, -1, -1, 5 ]
[ 5, -1, 4, -1, -1, -1, 4 ]
[ "iclr_2019_B1xf9jAqFQ", "S1gJqdreRm", "iclr_2019_B1xf9jAqFQ", "S1eAQNnPnm", "SylYFwwu2m", "H1liRmli2Q", "iclr_2019_B1xf9jAqFQ" ]
iclr_2019_B1xhQhRcK7
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures
This paper addresses the problem of evaluating learning systems in safety critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios when learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations -- since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failures rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days.
accepted-poster-papers
* Strengths The paper addresses a timely topic, and reviewers generally agreed that the approach is reasonable and the experiments are convincing. Reviewers raised a number of specific concerns (which could be addressed in a revised version or future work), described below. * Weaknesses Some reviewers were concerned the baselines are weak. Several reviewers were concerned that relying on failures observed during training could create issues by narrowing the proposal distribution (Reviewer 3 characterizes this in a particularly precise manner). In addition, there was a general feeling that more steps are needed before the method can be used in practice (but this could be said of most research). * Recommendation All reviewers agreed that the paper should be accepted, although there was also consensus that the paper would benefit from stronger baselines and more close attention to issues that could be caused by an overly narrow proposal distribution. The authors should consider addressing or commenting on these issues in the final version.
train
[ "r1ggQ1XjCX", "BJex-sbiRm", "H1l_8M3gRm", "H1lea0SWpQ", "Byxi9c4fpm", "r1lgl9Iz6X", "r1xLqdIM6X", "BJxm6c4GaQ", "BJl92-Odhm", "B1lWFTj03X", "H1guoXzK37", "ByeinoLypQ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thanks for clarifying your concerns.\n\nWe understand the high-level question raised here to be: “when should practitioners deploying a system in the real world test this system with the FPP rather than VMC”? In short, the answer is *always*. \n\nFirst, for risk estimation, by mixing the FPP and VMC estimates, we can guarantee that we never do worse than VMC by over a small constant factor, even when the FPP does not generalize at all, while preserving orders of magnitudes improvement when the FPP generalizes. See the discussion on statistical efficiency in our response to R3, or the paper by Neufeld et al in our citations, for details.\n\nSecond, in the real world, practitioners have limited evaluation budgets - self-driving car companies can’t test the car for millions of miles before every code update. When we deploy ML in safety critical environments, existing methods will often find 0 failures under limited evaluation, even if the system is unsafe. We don’t claim that our method will work well in all such situations. But even if it finds failures in some safety critical domains, preventing the deployment of some unsafe systems is a very positive impact. Our experiments suggest that there are widely used domains where our method works. Further, conceptually, the proposed continuation method for learning the FPP seems much stronger to us than all existing methods and baselines we are aware of.\n\nIn the revised section 3.3, we gave detailed explanations for when the method would (not) be better than existing approaches. To add on, if the test agent fails in a subset of ways (at least some of) the training agents do, our method will work well. If all the training agents do well in a particular scenario, but the test agent fails, then we will do no better (but no worse) than existing methods at detecting such failures.\n\nFinally, we've also provided a number of tips for practitioners using our method in the revised section 4.4, for example using a DND/Bayesian Neural Network to prioritize sampling uncertain states.", "Thank the authors for clarifying some of the details. \n\nRegarding my concern about the practical performance of the proposed approach, I was not referring to the experiments but rather the use cases in the real world. As FPP depends on the generalization of the binary classification neural network, it is hard to give a confidence interval about its prediction. Furthermore, the distribution of the training data for this binary neural network may be significantly mismatched with that of the test given that the catastrophic failures are rare in nature. \n\nI do not have any concrete baselines in mind. I find it hard to justify the proposed approach by experimental comparisons only. Reiterating my earlier comments, it is better to lay out the conditions under which the proposed method would (not) work. The authors' current answer is somehow too obvious to be useful for practitioners: \"if the neural network severely underestimates the failure probability of a large fraction of failure cases\". ", "Dear Reviewers,\n\nThank you for the constructive feedback. All reviews expressed that we were formulating and tackling a significant problem, and that the experimental results were compelling. There were also positive comments about the soundness and novelty of the approach. We hope our work leads to an increased focus on robustness and adversarial examples in RL (and in general beyond norm-ball perturbations).\n\nWe have updated our paper to incorporate reviewer feedback. 
In particular, we added a paragraph at the end of section 4.1 to explain why classical baselines would not work in our context. We added section 4.4 to discuss practical considerations: lower bounds on statistical efficiency, as well as heuristics we use to robustify our method. We have revamped the exposition in section 3.3 to explain one of the key novelties of our approach: the continuation approach to learning FPPs. The other novelties were motivating an important, unaddressed problem, and the extension of the importance sampling framework to include stochasticity.\n\nWe believe these address the reviewer comments on statistical efficiency, baselines, and novelty. If the responses satisfy the reviewers, we hope they will consider raising their scores, or letting us know in what ways they think the paper should be improved. \n\nThanks,\nAuthors\n", "Thank you for taking the time to write very thoughtful comments.\n\n> \"I believe that this paper addresses an important problem in a novel manner (as far as I can tell) and the experiments are quite convincing.\"\n\nIt sounds like we’re on the same page regarding the importance of the problem, novelty, and experimental sections. You raised some really good points about the technical section, which we discuss below.\n\n> \"The main negative point is that I believe that the proposed method has some flaws which may actually decrease statistical efficiency in some cases... It seems to me that a weak point of the method is that it may also severely reduce the efficiency compared to a standard MC method.\"\n\nTheoretically, we can ensure that our method never does more than 2x worse than standard MC.\n\n(1) Here’s an intuitive approach for limiting slowdown by a constant factor. We can run both standard MC and our estimator in parallel. If standard MC finds at least a few failures, we can use standard MC. If not, we can use our method. This incurs a slow-down of 2x in the worst case, while remaining orders of magnitude better in safety critical domains such as the ones we test. Neufeld et al., which we mention in our related work, give even better guarantees when combining stochastic estimators.\n\n(2) Moreover, any method for variance reduction or choosing proposal distributions can be worse in certain cases. This is true for the cross-entropy method, subset simulation, control variates, and baselines, to name a few. Yet these methods are used in practice, with great success. Requiring 0 slowdown may be too demanding -- we suspect an analogue of the no free lunch theorem might hold -- but we can limit slowdown by a constant factor. We will make all this more clear in the manuscript.\n\nIn practice, we employ safeguards to protect us from the issues you describe. (1) For the humanoid experiment we used a Differentiable Neural Dictionary described in Appendix E.1 (Pritzel et al., 2017); this was Appendix D.1 in the original version. A DND is a kNN classifier in feature space, but uses a learned pseudo-count to output higher failure probabilities when the query point is far from training points. Intuitively, the DND model outputs higher failure probabilities for points on which it is uncertain, related to UCB. (2) We trained the FPP on weaker agents, so our method typically over-estimates failure probabilities. (3) Even so, if f underestimates the probability of failure at several points x, it will still typically converge much faster than standard MC. If all x are underestimated by at most a factor of k, then our method slows down on the order of sqrt(k). 
We show experimentally that our method does orders of magnitude better so this slowdown is not bad.\n\n> \"The proposed method relies on the ability to initialize the system in any desired state. However, on a physical system, where finding failure cases is particularly important, this is usually not possible. It would be interesting if the paper would discuss how the proposed approach would be used on such real systems.\"\n\nOur method actually does not initialize the system at arbitrary states. We only assume that the initial state x is sampled from some (unknown) distribution. Further, the initial system state only needs to be partially observable and the unobserved details can be absorbed into Z. We will make this more clear in the paper - does this address your concern?\n\n> \"On page 6, in the first paragraph, the state is called s instead of x as before. Furthermore, the arguments of f are switched.\"\n\nThanks for spotting this, we will fix this.\n\nReferences (also cited in the original paper):\nJames Neufeld, Andras Gyorgy, Csaba Szepesvari, Dale Schuurmans. Adaptive Monte Carlo via Bandit Allocation. In ICML 2014.\nAlexander Pritzel, Benigno Uria, Sriram Srinivasan, Adria Puigdomenech, Oriol Vinyals, Demis Hassabis, Daan Wierstra, and Charles Blundell. Neural episodic control. In ICML 2017.\n", "Thank you for the review and suggestions. We first address what we understand to be the main concerns in your review:\n\nWe believe there are two sources of novelty. (1) A long-term goal is robust RL agents. Testing agents when rewards are highly sparse is on the critical path to this goal. To our knowledge, this problem has gone unaddressed. Thus, one novelty is considering a practical and important class of rare event estimation problems. (2) Our setting is fairly different from classical settings. By exploiting its structure, we provide an effective approach, whereas prior approaches simply would not work.\n\n> Small amount of novelty; primarily an application of established techniques\n> The specific novelty of the approach seems to be fitting the proposal distribution to failures observed during training. \n\nWe believe there are several novel ideas in our approach which are missing in this summary. These novelties aren’t just small changes - we don’t see how existing approaches could handle our setting (failure search and risk estimation, with binary failure signals) without them. Admittedly, we emphasized importance over novelty in writing the paper, and will edit for clarity.\n\nThe main novelty in the continuation approach is to learn the proposal distribution from a family of related, but weaker, agents. Our method goes beyond simply fitting a function to data. Fitting a proposal distribution to failures observed for the final agent would not work well. For example, in Humanoid, the final agent fails once every 110k episodes, and was trained for 300k episodes. If we run existing methods like the cross-entropy method on the final agent, we would need significantly more than 300k episodes of data to get a good proposal distribution. \n\nAnother novel aspect is our extension of the standard importance sampling setup to include stochasticity. While this seems very fundamental, we are not aware of this in prior work. To reflect the practicalities of RL tasks, we separate controllable randomness (observed initial conditions) from unobservable, uncontrollable randomness (environment and agent randomness, or unobserved initial conditions). 
We show this changes the form of the minimum-variance proposal distribution (Proposition 3.2). Additionally, in our setup, the initial state distribution is arbitrary and unknown.\n\n> I wonder if learning the proposal distribution based on failures observed during training presents a risk of narrowing the range of possible failures being considered.\n\nThis is a good observation. In our humanoid experiments, we safeguard against this using a differentiable neural dictionary (Appendix D.1, moved to E.1 in the latest revision). This encourages higher failure probabilities for initial conditions far from those seen during training. Also see our response to R3 regarding statistical efficiency.", "> What is the certainty equivalence approach? A reference would be helpful and improve the presentation quality of the paper.\n\nThe certainty equivalence approach is described on page 3. The term has a long history in economics and control, going back to work by Stephen Turnovsky. We will add a reference:\nStephen Turnovsky. Optimal Stabilization Policies for Stochastic Linear Systems: The Case of Correlated Multiplicative and Additive Disturbances. Review of Economic Studies 1976. 43 (1): 191–94.\n\n> What is exactly the $\\theta_t$ in Section 3.3? What is the dimension of this vector in the experiments? What quantities should be encoded in this vector in practice? \n\nIn general, theta_t should contain any features which provide useful information about the failure probabilities of the policy, and are easy to condition on. In our experiments, theta_t encodes the training iteration, and the amount of noise applied to the policy (details in old appendix D.1, moved to E.1 in the upcoming version), so two dimensions. More features may improve performance, but this was just the simple thing we tried, and since the improvement was already so drastic, it didn’t seem there was much point pushing further.\n", "> Overall, this paper addresses a practically significant problem and has proposed reasonable approaches. While I still have concerns about the practical performance of the proposed methods, this work is along the right track in my opinion.\n\nThank you for the positive comments, and helpful feedback. Could you please explain what concerns you have about the practical performance of the proposed methods? How can we address these? We believe our approach is a large improvement over baselines, both in theory, and as supported by our experiments.\n\n> The reviewer is not familiar with this domain, but the baseline, naive search, seems straightforward and very weak. Are there any other methods for the same problem in the literature?\n\nWe assume you are talking about failure search, and not failure rate estimation? In our original paper, we did compare our method with an additional baseline: a prioritized replay baseline. This does significantly better than naive search, but significantly worse than our proposed method. \n\nWe seem to be the first to tackle this problem. The setting is sufficiently different from classical settings, so classical baselines would not work, as we explain in our response to R2. We’d be happy to compare to additional baselines though - are there any other baselines you would suggest we include?\n\n> I am still concerned about the fact that the FPP depends on the generalization of the binary classification neural network, although the authors tried to give intuitive examples and discussions. Nonetheless, I understand the difficulty. 
Could the authors give some conditions under which the approach would fail? Any alternative approaches to the binary neural network? What is a good principle to design the network architecture? \n\nThe main point we hope to convey is that approaches beyond VMC are crucial, and using an optimized adversary is a good idea in safety-critical settings. We can guarantee that we never do worse than VMC by more than a small constant factor (see the discussion on statistical efficiency in our response to R3 for details). However, as you point out, details can influence how much improvement we observe in practice. These details can be application specific, and are not the focus of our paper, but we expand on some of these details below.\n\nOur approach would not help if the neural network severely underestimates the failure probability of a large fraction of failure cases. This could occur for initial states that are very different from all the initial states we have seen during training. We could mitigate this issue: (1) In the humanoid domain, we use a differentiable neural dictionary. The DND outputs higher failure probabilities for points very far from those seen during training. (2) Since we train on weaker agents, we tend to overestimate the failure probabilities. In general, a guiding principle is to output higher failure probabilities for examples we are uncertain about.\n\nWe included architectural details in Appendix D.1, but will move the key ideas to the main paper in the next update. Does this address your concerns? We are happy to provide more details if that helps.\n", "> I think the method accomplishes what it sets out to do. However, as the paper notes, creating robust agents will require a combination of methodologies, of which this testing approach is only a part. \n\nAgreed, this is an exciting direction for future work. We believe our work is essential for this goal - if we cannot test whether an agent is robust or not, we cannot hope to develop robust agents. Note that in section 4.3 we use the FPP in a simple way to identify more robust agents. We hope future work extends this - one way is to learn the FPP online with the policy and apply it for adversarial training. This could yield large improvements in sample efficiency - if the FPP is 100x faster at failure search, the agent gets useful examples 100x as often.\n\n> I would suggest incorporating some of the descriptions of the models and methods in Appendix D into the main paper.\n\nWe’ve edited down the length of the paper, which allows us to move some important details to the main paper. We’ll mention some details regarding the training + architecture of the failure probability predictor in the next update. Are there any specific details you would suggest we include?\n\n> Sec 4.2: How are the confidence bounds for the results calculated?\n> What are the \"true\" failure probabilities in the experiments?\n\nThe ground truth failure probabilities are obtained by running the VMC estimator for 5e6 episodes on Driving and 2e7 episodes on Humanoid. Right now, this is mentioned in the footnote at the bottom of page 7, with additional details in the appendix. Thanks for raising this - we’ve definitely tried to make these details as clear as possible, but also realize there are a lot of such details, and they may still be unclear. Please let us know if the writing could be clearer.\n\nThe confidence bands in Figure 1 represent 2 standard errors. 
Each plot is generated by running the estimators many times, and plotting the probability of an unreliable estimate. We use a conservative estimate for standard errors, where if p^ is the empirical mean over n trials for the probability parameter for a Bernoulli RV, SE(p^) = sqrt(max(p^, 0.1) * (1-p^) / n). The max is just to avoid overly narrow confidence bands when p^ is very close to 0 (i.e. when none of the estimates from the estimator are unreliable).\n\n> Sec 4.3: There is a reference to non-existent \"Appendix X\"\n\nThanks, fixed.", "PAPER SUMMARY\n-------------\n\nThe paper proposes a method for evaluating the failure probability of a learned agent, which is important in safety critical domains. \n\nUsing plain Monte Carlo for this evaluation can be too expensive, since discovering a failure probability of epsilon requires on the order of 1/epsilon samples. Therefore the authors propose an adversarial approach, which focuses on scenarios which are difficult for the agent, while still yielding unbiased estimates of failure probabilities. \n\nThe key idea of the proposed approach is to learn a failure probability predictor (FPP). This function attempts to predict at which initial states the system will fail. This function is then used in an importance sampling scheme to sample the regions with higher failure probability more often, which leads to higher statistical efficiency.\nFinding the FPP is itself a problem which is just as hard as the original problem of estimating the overall failure probability. However, the FPP can be trained using data from different agents, not just the final agent to be evaluated (for instance the data from agent training, containing typically many failure cases). The approach hinges on the assumption that these agents tend to fail in the same states as the final agent, but with higher probability. \n\nThe paper shows that the proposed method finds failure cases orders of magnitude faster than standard MC in simulated driving as well as a simulated humanoid task. Since the proposed approach uses data acquired during the training of the agent, it has more information at its disposal than standard MC. However, the paper shows that the proposed method is also orders of magnitude more efficient than a naive approach using the failure cases during training.\n\n\nREVIEW SUMMARY\n--------------\n\nI believe that this paper addresses an important problem in a novel manner (as far as I can tell) and the experiments are quite convincing.\nThe main negative point is that I believe that the proposed method has some flaws which may actually decrease statistical efficiency in some cases (please see details below).\n\n\nDETAILED COMMENTS\n-----------------\n\n- It seems to me that a weak point of the method is that it may also severely reduce the efficiency compared to a standard MC method. If the function f underestimates the probability of failure at certain x, it would take a very long time to correct itself because these points would hardly ever be evaluated. It seems that the paper heuristically addresses this to some extent using the exponent alpha of the function. However, I think there should be a more in-depth discussion of this issue. An upper-confidence-bound type of algorithm may be a principled way of addressing this problem.\n\n- The proposed method relies on the ability to initialize the system in any desired state. However, on a physical system, where finding failure cases is particularly important, this is usually not possible. 
It would be interesting if the paper would discuss how the proposed approach would be used on such real systems.\n\n- On page 6, in the first paragraph, the state is called s instead of x as before. Furthermore, the arguments of f are switched.", "Summary:\nProposes an importance sampling approach to sampling failure cases for RL algorithms. The proposal distribution is based on a function learned via a neural network on failures that occur during agent training. The method is compared to random sampling on two problems where the \"true\" failure probability can be approximated through random sampling. The IS method requires substantially fewer samples to produce failure cases and to estimate the failure probability.\n\nReview:\nThe overall approach is technically sound, and the experiments demonstrate a significant savings in sampling compared to naive random sampling. The specific novelty of the approach seems to be fitting the proposal distribution to failures observed during training. \n\nI think the method accomplishes what it sets out to do. However, as the paper notes, creating robust agents will require a combination of methodologies, of which this testing approach is only a part. \n\nI wonder if learning the proposal distribution based on failures observed during training presents a risk of narrowing the range of possible failures being considered. Of course identifying any failure is valuable, but by biasing the search toward failures that are similar to failures observed in training, might we be decreasing the likelihood of discovering failures that are substantially different from those seen during training? One could imagine that if the agent has not explored some regions of the state space, we would actually like to sample test examples from the unexplored states, which becomes less likely if we preferentially sample in states that were encountered in training.\n\nThe paper is well-written with good coverage of related literature. I would suggest incorporating some of the descriptions of the models and methods in Appendix D into the main paper.\n\nComments / Questions:\n* Sec 4.2: How are the confidence bounds for the results calculated?\n* What are the \"true\" failure probabilities in the experiments?\n* Sec 4.3: There is a reference to non-existent \"Appendix X\"\n\nPros:\n* Overall approach is sound and achieves its objectives\n\nCons:\n* Small amount of novelty; primarily an application of established techniques", "This paper proposed an adversarial approach to identifying catastrophic failure cases in reinforcement learning. It is a timely topic and may have practical significance. The proposed approach is built on importance sampling for the failure search and function fitting for estimating the failure probabilities. Experiments on two simulated environments show significant gains of the proposed approaches over naive search. \n\nThe reviewer is not familiar with this domain, but the baseline, naive search, seems straightforward and very weak. Are there any other methods for the same problem in the literature? The authors may consider contrasting with them in the experiments. \n\nWhat is the certainty equivalence approach? A reference would be helpful and improve the presentation quality of the paper.\n\nWhat is exactly the $\\theta_t$ in Section 3.3? What is the dimension of this vector in the experiments? What quantities should be encoded in this vector in practice? 
\n\nI am still concerned about the fact that the FPP depends on the generalization of the binary classification neural network, although the authors tried to give intuitive examples and discussions. Nonetheless, I understand the difficulty. Could the authors give some conditions under which the approach would fail? Any alternative approaches to the binary neural network? What is a good principle to design the network architecture? \n\nOverall, this paper addresses a practically significant problem and has proposed reasonable approaches. While I still have concerns about the practical performance of the proposed methods, this work is along the right track in my opinion.\n\n", "Thank you for the specific feedback and helpful comments. We wanted to quickly clarify the correctness of Proposition 3.2, since it seemed to be a major point in your review.\n\n> \"It seems to me that Proposition 3.2 is wrong. In the proof it is written E[U^2] = E[W^2 c(X,Z)], which is wrong since U^2 = W^2 c^2(X,Z). This means that the proposal distribution Q_f* is not in fact the optimal proposal distribution. This is problematic because the entire approach is justified using this argument.\"\n\nWe believe the proof is correct, but this point is indeed subtle, and we’ll clarify it in the paper. In our case c(X, Z) is a Bernoulli random variable. So c^2(X, Z) = c(X, Z), as c(·, ·) is either 0 or 1 and in both cases the square is the identity. This means E[U^2] = E[W^2 c^2(X,Z)] = E[W^2 c(X,Z)]. In the case where c represents an arbitrary distribution, the optimal proposal distribution is more difficult to compute and is a worthwhile question for future work. \n\nWe also note that the standard analysis of the optimal proposal distribution under importance sampling does not account for unobserved stochasticity, which we model in Z. This is why the optimal proposal distribution we derive (for Bernoulli random variables) differs from the standard case.\n\nPlease let us know if this addresses your concern." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, -1 ]
[ "BJex-sbiRm", "r1xLqdIM6X", "iclr_2019_B1xhQhRcK7", "BJl92-Odhm", "B1lWFTj03X", "H1guoXzK37", "H1guoXzK37", "B1lWFTj03X", "iclr_2019_B1xhQhRcK7", "iclr_2019_B1xhQhRcK7", "iclr_2019_B1xhQhRcK7", "BJl92-Odhm" ]
iclr_2019_BJG0voC9YQ
Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search
Learning policies on data synthesized by models can in principle quench the thirst of reinforcement learning algorithms for large amounts of real experience, which is often costly to acquire. However, simulating plausible experience de novo is a hard problem for many complex environments, often resulting in biases for model-based policy evaluation and search. Instead of de novo synthesis of data, here we assume logged, real experience and model alternative outcomes of this experience under counterfactual actions, i.e. actions that were not actually taken. Based on this, we propose the Counterfactually-Guided Policy Search (CF-GPS) algorithm for learning policies in POMDPs from off-policy experience. It leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes. CF-GPS can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions. In contrast to off-policy algorithms based on Importance Sampling which re-weight data, CF-GPS leverages a model to explicitly consider alternative outcomes, allowing the algorithm to make better use of experience data. We find empirically that these advantages translate into improved policy evaluation and search results on a non-trivial grid-world task. Finally, we show that CF-GPS generalizes the previously proposed Guided Policy Search and that reparameterization-based algorithms such as Stochastic Value Gradient can be interpreted as counterfactual methods.
accepted-poster-papers
see my comment to the authors below
train
[ "Ske2pWsvg4", "Skl3eOkblN", "SklAR5DLpm", "rJxsxWflC7", "ryxd2ezlRQ", "H1l_SlGxCX", "H1xnN1GxAm", "Bye_P5EZT7", "B1lQbh_c37" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the area chair for pointing out the references, we will add them to our\nmanuscript. As stated in the response to the reviewers, we agree that our\nexperiments test our algorithm only in the idealized setting of known transition\nand reward kernels and unknown initial state. We will change the wording in the\nintroduction to better reflect the scope of our experiments. We maintain\nhowever, that the underlying idea of inferring scenarios (that can influence all\ntransitions) in hindsight from off-policy data, and re-using these for\ncounterfactual policy evaluation in principle applies to a wider setting. Given\nthe close connection of our proposed algorithm to the GPS algorithm as presented\nby [Levine, Abbeel. 2014], we prefer to keep it's name (CF-GPS) as well as the\ntitle of the paper as is.", "\n\n\nThis is a clear topic of increasing importance to the community-- combining causality / counterfactual reasoning with sequential decision making. The authors draw their perspective from the view of structural causal modeling and it would also be beneficial to reference the body of literature from more of the potential outcomes framework-- see below for a few of these references in the RL counterfactual/ off policy policy evaluation community. In particular, the proposed approach here is general but only instantiated (in terms of inference algorithms and experiments) for when the initial starting state is unknown in a deterministic POMDP environment, where the dynamics and reward model is known. The authors show that they can use inference over the full trajectory (or some multi-time-step subpart) to get a (often delta function) posterior over the initial starting state, which then allows them to build a more accurate initial state distribution for use in their model simulations than approaches that do not use more than 1 step to do so. This is interesting, but it’s not quite clear where this sort of situation would arise in practice, and the proposed experimental results are limited to one simulated toy domain. This is fine, but the title and introduction seem to suggest a much more general contribution, as opposed to this much more restricted (though interesting) setting of inferring the initial starting state distribution when the dynamics and reward model are known. Therefore I encourage the authors to update their title and introduction to narrow the scope of the proposed contribution. \n\nMandel, Liu, Levine, Brunskill, Popovic AAMAS 2014.\nThomas and Brunskill ICML 2016\nSchulam, Saria. NeurIPS 2017\nGuo, Thomas, Brunskill NeurIPS 2017\nParbhoo, Gottesman, Ross, Komorowski, Faisal, Bon, Roth, Doshi-Velez, PloS one 2018\nLiu, Gottesman, Raghu, Komorowski, Faisal, Doshi-Velez, Brunskill NeurIPS 2018\n", "Summary:\n\nThis paper proposes a policy evaluation and search method assisted by a counterfactual model, in contrast previous work using vanilla (non-causal) models. With “no model mismatch” assumption the policy evaluation estimator is unbiased. Empirically, the paper compares Guided Policy Search with counterfactual model (CF-GPS) with vanilla GPS, model based RL algorithm and show benefit in terms of (empirical) sample complexity.\n\nMain comments:\n\nThis paper studies several interesting problems: 1) policy learning with off-policy data; 2) model based RL and how to use model to help policy learning. 
By capturing a nice connection between causal models and MDP/POMDP models with off-policy data, this paper can leverage SCMs to help model-guided policy search in POMDPs. The combination of those ideas is novel and enjoyable.\n\nOn the negative side, I encountered several confusing points as a reader with more RL background and less causal inference background. It would be better if the authors could clarify what the prior distribution P(u) and posterior distribution P(u|h) exactly mean in terms of the CF-PE and MB-PE algorithms. I would also appreciate it if more detailed proofs of corollaries 1 and 2 were included in the appendix, and a higher-level intuition/justification about those two results in the main body. Maybe I am missing these points due to my limited background in causal inference, but I think those clarifications can definitely be helpful for an RL audience without that much knowledge in causal inference.\n\nThe main theoretical result seems to be based on the assumption of no model mismatch, and I guess here how the model is estimated from samples is ignored, unless I missed anything. Thus I assume the main contribution of this paper should be algorithmic and empirical. I expect to see the empirical study in more domains with more informative results about how this CF model gets the benefit of sampling from p(u|h) rather than p(u) (as evidence to support the motivation paragraph on page 5). ", "We added a paragraph on the “auto-regressive uniformization” in the appendix, showing how any joint distribution over random variables can be converted into independent noise variables and deterministic functions. Please also see our reply to reviewer 1.\n\nConcerning the choice of prior $p(u)$ in the experiments: $U$ was defined as the initial state or “level” of the environment. The prior, as well as the posterior, were chosen to be DRAW latent variable models. The only difference between these models was that the one encoding the posterior was conditioned on observed data $h$. This parametrization is also discussed in Appendix D.", "We agree that our algorithm makes strong assumptions about the model, and that we have not yet studied theoretically or experimentally the important question raised by the reviewer of how violations of these assumptions influence performance. \n\nWe want to clarify, however, that the assumption of the model class consisting of deterministic functions and independent noise variables is not restrictive in itself; any joint probability over random variables can be written in this way by iteratively applying the “inverse-CDF” method. For a joint Gaussian, for example, this corresponds to sampling one variable at a time (conditioned on the previous ones) by sampling an RV uniformly in [0,1], passing it through the inverse standard-Gaussian CDF and scaling it with the conditional standard deviation and adding the conditional mean. We added a paragraph in the appendix to clarify this point.", "For improved readability, we added a proof for Corollary 1 in the appendix. Corollary 2 is a direct application of Lemma 1 to the SCM representation of a POMDP.\n\nConcerning the difference between $p(u)$ vs $p(u\\vert h)$:\nStandard model-based RL (MBRL) algorithms usually try to learn a model over unobserved variables $U$ of the environment. If there is uncertainty over these given the observations, then a natural approach for MBRL would be to learn a distribution, i.e. a prior $p(u)$. 
At model test time, one usually samples from this prior to generate rollouts for policy evaluation (or learning). This corresponds to the MB-PE procedure. We propose, instead of sampling from the prior, given concrete observed data $h$, to sample from the posterior $p(u\vert h)$, yielding the CF-PE algorithm. As argued in the paper, $p(u\vert h)$ should be easier to learn than $p(u)$. We hope the “motivation” paragraphs in the introduction and Ch. 2 can give an intuitive understanding of the difference.", "We thank the reviewers for their thoughtful comments, some of which we address individually below.\n\nGenerally, we want to emphasize that the main contribution of the paper is to show quantitatively that counterfactual reasoning can be beneficial for learning policies in reinforcement learning, admittedly in a highly idealized but not trivial task. In our opinion, this is an important, novel result, given that humans almost constantly engage in counterfactual reasoning, for which a vague functional role was hypothesised but no learning mechanism has been proposed (see [Roese 97]). \nUltimately, we think our proposed method can contribute novel methods to the important problem of off-policy learning.\n\nWe are currently working on applying the proposed methods to partially observed problems in continuous control, to study if the observed benefits carry over to less idealized settings.", "Summary: by assuming a correct, strongly factored environment model, improved estimators useful for policy search can be derived by \"counterfactual reasoning\", where data sampled from experience is used to refine initial conditions in the model; this translates into improved estimators of policy values, which improves policy search.\n\nMajor comments:\n\nI enjoyed this paper. I think that model-based RL deserves more work, and I think that this is a simple, reasonably workable approach with some nice theoretical benefits. I like the idea of SCMs; I like the idea of counterfactual reasoning; I like the idea of leveraging models in this unique way.\n\nOn the negative side, I felt that the paper makes some rather strong assumptions - specifically, that the agent has access to a perfect model with no mismatch, and that the model decomposes neatly into noise variables plus deterministic functions. Given such a model, one wonders if there are other techniques, say, from classical planning, that could also be used for some sort of policy search.\n\nI have a few questions about approximations. First, I see that probabilistic inference is a core element of each algorithm (where p(u|h) must be computed). For large, complex models, I assume this must be approximate inference. This leads naturally to questions about accuracy (does approximate inference result in biased estimators? [probably yes]), efficacy (do the inaccuracies inherent in approximate inference outweigh the benefits of using p(u|h) vs. p(u)?) and scalability (how large of a model can we reasonably cope with before degradation is unacceptable, or no better than non-CF algorithms?). As far as I can tell, none of this was addressed in the paper, although I do not expect every paper to answer every question; this is a first step.\n\nI wish the experiments were a little more varied. The experimental results really only show marginal improvement in one small task. While I understand that this is not an empirical paper, neither does it fit strongly into the category of \"theory paper\". 
For example, there are no theory results indicating what sort of benefit we might expect from using the methods outlined here, and in the absence of such theory, we might reasonably look to various experiments to demonstrate its effectiveness.\n\nPros:\n+ Integration with SCMs is interesting\n+ Counterfactual variants of algorithms are clearly motivated and interesting\n+ Paper is generally well-written\n\nCons:\n- Assumption that the agent is given a model with no mismatch is very strong\n- Model class (noise variables + deterministic functions) seems potentially restrictive\n- Questions about impact of approximate inference\n- Experiments could have been more varied\n\n", "Summary:\nProposes Counterfactual Guided Policy Search (CF-GPS), which uses counterfactual inference from sampled trajectories to improve an approximate simulator that is used for policy evaluation. Counterfactual inference is formalized with structural causal models of the POMDP. The method is evaluated in partially-observed Sokoban problems. The dynamics model is assumed known, and a learned model maps observation histories to a conditional distribution on the starting state. CF-GPS outperforms model-based policy search and a \"GPS-like\" algorithm in these domains. GPS in MDPs is shown to be a particular case of CF-GPS, and a connection is also suggested between stochastic value gradient and CF-GPS.\n\nReview:\nThe work is an interesting approach to a relevant problem. Related literature is covered well, and the paper is well-written in an approachable, conversational style. \n\nThe approach is technically sound and generally presented clearly, with a few missing details. It is mainly a combination of existing tools, but the combination seems to be novel. \n\nThe experiments show that the method is effective for these Sokoban problems. A weakness is that the setting is very \"clean\" in several ways. The dynamics and rewards are assumed known and the problem itself is deterministic, so the only thing being inferred in hindsight is the initial state. This could be done without all of the machinery of CF-GPS. I realize that the CF-GPS approach is domain-agnostic, but it would be useful to see it applied in a more general setting to get an idea of the practical difficulties. The issue of inaccurate dynamics models seems especially relevant, and is not addressed by the Sokoban experiment. It's also notable that the agent cannot affect any of the random outcomes in this problem, which I would think would make counterfactual reasoning more difficult.\n\nComments / Questions:\n* Please expand on what \"auto-regressive uniformization\" is and how it ensures that every POMDP can be expressed as an SCM\n* What is the prior p(U) for the experiments? \n* \"lotion-scale\" -> \"location-scale\"\n\nPros:\n* An interesting and well-motivated approach to an important problem\n* Interesting connections to GPS in MDPs\n\nCons:\n* Experimental domain does not \"exercise\" the approach fully; the counterfactual inference task is limited in scope and the dynamics and rewards are deterministic and assumed known\n* Work may not be easily reproducible due to the large number of pieces and incomplete specification of (hyper-)parameter settings " ]
[ -1, -1, 7, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 2, -1, -1, -1, -1, 3, 3 ]
[ "Skl3eOkblN", "iclr_2019_BJG0voC9YQ", "iclr_2019_BJG0voC9YQ", "B1lQbh_c37", "Bye_P5EZT7", "SklAR5DLpm", "iclr_2019_BJG0voC9YQ", "iclr_2019_BJG0voC9YQ", "iclr_2019_BJG0voC9YQ" ]
iclr_2019_BJe-DsC5Fm
signSGD via Zeroth-Order Oracle
In this paper, we design and analyze a new zeroth-order (ZO) stochastic optimization algorithm, ZO-signSGD, which enjoys dual advantages of gradient-free operations and signSGD. The latter requires only the sign information of gradient estimates but is able to achieve a comparable or even better convergence speed than SGD-type algorithms. Our study shows that ZO signSGD requires sqrt(d) times more iterations than signSGD, leading to a convergence rate of O(sqrt(d)/sqrt(T)) under mild conditions, where d is the number of optimization variables, and T is the number of iterations. In addition, we analyze the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose two variants of ZO-signSGD that at least achieve O(sqrt(d)/sqrt(T)) convergence rate. On the application side we explore the connection between ZO-signSGD and black-box adversarial attacks in robust deep learning. Our empirical evaluations on image classification datasets MNIST and CIFAR-10 demonstrate the superior performance of ZO-signSGD on the generation of adversarial examples from black-box neural networks.
accepted-poster-papers
This is a solid paper that proposes and analyzes a sound approach to zero-order optimization, covering variants of a simple base algorithm. After resolving some issues during the response period, the reviewers concluded with a unanimous recommendation of acceptance. Some concerns regarding the necessity for such algorithms persisted, but the connection to adversarial examples provides an interesting motivation.
train
[ "S1lxUJxU6X", "Skemq-kZkV", "rkxd0tneyN", "HyxK1V69n7", "ryex1VPEC7", "r1xqTty5CQ", "SJgTdFJc0Q", "r1e7SFJcCQ", "SJewbwwVC7", "rJxHBMwE07", "SJxAFZDECm", "SklDel3Vam", "Hyl5D-vj3X", "r1xwxNwtnQ" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Thank you for pointing out the concurrent ICLR submission, which focused on the first-order Byzantine setting. The authors agreed that the extra unimodal symmetric assumption can improve the theoretical convergence bound. And indeed we showed that in the zeroth-order setting, this conclusion holds (Corollary 2). Most importantly, both papers showed the potential impact of zeroth-order and first-order signSGD on addressing practical ML problems.", "We thank Reviewer 3 for this insightful comment. We totally understand your concern. We did not mean that being convex is inferior, instead, we just want to show the convergence of a non-convex loss while using ZO-signSGD. To fully address your concern, we will remove the sentence \"The least squared formulation is commonly used for nonconvex machine learning (Xue et al. 2017).\" And we will add experiments on solving the standard logistic regression problem once the revision window reopens. \n\nWe also thank the reviewer very much for raising the evaluation score. Your previous comments really helped authors to improve the original version.", "I find the authors' explanation for choosing the peculiar least-square logistic regression problem unsatisfactory. It appears rather contrived than what the authors claimed: \"The least squared formulation is commonly used for nonconvex machine learning (Xue et al. 2017).\" Note that if it is indeed commonly used, then you should be able to provide more convincing references than an arxiv paper from last year. \n\nThe explanation \"given the fact that the standard logistic regression yields a convex problem\" is also problematic: why being convex is inferior? If yes, how does the least-squares logistic regression address the downside of convexity? Is there any other way? ", "The authors proposed a zero-order version of the recent signSGD algorithm, by replacing the stochastic gradient with a usual function difference estimate. Similar convergence rates as signSGD were obtained, with an additional sqrt(d) factor which is typical in zero-order methods. Three (typical) gradient estimates based on function values were discussed. Overall, the obtained results are relatively straightforward combination of signSGD with existing zero-order techniques. \n\nQuality: The technical part of this paper seems to be solid. The experiments, on the other hand, are quite ambiguous. First off, why do you choose that peculiar least squares binary classification problem on page 7? Is Assumption A2 satisfied for this problem? Why not use logistic regression? The experimental results are also strange: Why would ZO-signSGD converge faster than ZO-SGD or any other ZO variant? Shouldn't they enjoy similar rates of convergence? Why would taking the sign make the algorithm converge faster? Note that the original motivation for signSGD is not for faster convergence but less communication. For the second set of experiment, how do you apply ZO-SGD to generate adversarial examples? Again, why do we expect ZO-signSGD to perform better than ZO-SGD?\n\nClarity: This paper is mostly well-written, but the authors at times largely overclaim their contributions or exaggerate the technical challenges. \n-- Page 2, 2nd line: the authors claim that \"Our analysis removes the impractical assumption of b = O(T)\", but in the later examples (page 6, top), they require q = O(T). How is this any different than b = O(T)? 
Even worse, the former case also requires b = n, i.e., there is no stochasticity at all...\n-- Assumption A2: how crucial is this assumption for obtaining the convergence results? Note that not many functions have Lipschitz continuous bounded gradients... (logistic regression is an example)\n-- Page 4, top: \"ZO-signSGD has no restriction on the mini-batch size b\"? The rates at the end of page 5 suggest otherwise if we want the bound to go to 0 (due to the term sqrt(d/b)). \n-- Page 4, top: the last two technical challenges do not make sense: once we replace f by f_mu, these difficulties go away immediately, and it is well-known how to relate f_mu with f.\n\nOriginality: The originality seems to be limited. Contrary to what the authors claimed, I found the established results to be a relatively straightforward combination of signSGD and existing zero-order techniques. Can the authors elaborate on what additional difficulties they need to overcome in order to extend existing zero-order results to the signSGD case?\n\nSignificance: The proposed zero-order version of signSGD may potentially be significant in applications where gradient information is not available and yet distributed optimization is needed. This, however, is not demonstrated in the paper, as the authors never considered distributed optimization.\n\n\n##### added after author response #####\nI appreciate the authors' effort in trying to make their contributions precise and appropriate. The connection between ZO-signSGD and adversarial examples is further elaborated, which I agree is an interesting and potentially fruitful direction. I commend the authors for supplying further experiments to explain the pros and cons of the proposed algorithms. Many of the concerns in my original review were largely alleviated/addressed. As such, I have raised my original evaluation.", "Response to Reviewer 3 (Q: question; R: response):\n\nQ: The obtained results are a relatively straightforward combination of signSGD with existing zero-order techniques.\nAnd question on originality: Can the authors elaborate on what additional difficulties they need to overcome in order to extend existing zero-order results to the signSGD case?\n\nR: We are sorry to learn that the reviewer feels our work is a relatively straightforward combination of signSGD with existing zero-order techniques. Based on the reviewer’s comments, our paper has been largely improved. In what follows, we clarify our main contributions and 'additional difficulties'.\n\nFirst, beyond signSGD, our established results apply to the case of mini-batch sampling without replacement. Thus, ZO-signGD can be treated as a special case in our analysis. To derive the variance of the ZO gradient estimate, we need a careful analysis of the effects of the two types of mini-batch sampling as well as random direction sampling, which we then link with the statistics of a single random gradient estimate known from the existing ZO results.\n\nSecond, to derive the eventual convergence error of ZO-signSGD, we need to fill the gap between the L1 geometry of signSGD and the variance of the ZO gradient estimate in terms of the squared L2 norm. Moreover, we need to study the effects of different types of ZO gradient estimators on the convergence of ZO-signSGD. In particular, sign-based gradient estimators, (11)-(12) in Sec. 5, have not been well studied in the ZO literature. 
These estimators can be interpreted as the ZO counterparts of first-order gradient estimators with majority vote in the centralized and distributed settings.\n\nLast but not least, our goal is not to 'combine' ZO and signSGD. As a matter of fact, ZO-signSGD has been well motivated in the design of black-box adversarial examples (Ilyas et al., 2018a). However, the formal connection between optimization theory and adversarial ML was not fully established. Our work provides a comprehensive study of ZO-signSGD from multiple perspectives, including convergence analysis, gradient estimators, and applications. We really hope that the reviewer can recognize the contributions of this work in both theory and practice. \n\n\nQ: The technical part of this paper seems to be solid. The experiments, on the other hand, are quite ambiguous. First off, why do you choose that peculiar least squares binary classification problem on page 7? Is Assumption A2 satisfied for this problem? Why not use logistic regression? \n\nR: The least squared formulation is commonly used for nonconvex machine learning (Xu et al. 2017), given the fact that the standard logistic regression yields a convex problem. Since we study ZO-signSGD in the nonconvex setting, we choose to solve the least squared binary classification problem in order to make the empirical studies consistent with the theory. And Assumption A2 is indeed satisfied for the proposed problem. This is not difficult to prove using the boundedness of the sigmoid function. We have clarified this point in Sec. 6.\n\nP. Xu, F. Roosta-Khorasani, and M. W. Mahoney. Second-order optimization for non-convex machine learning: An empirical study. arXiv preprint arXiv:1708.07827, 2017
This, however, is not demonstrated in the paper, as the authors never considered distributed optimization.\n\n\nR: We agree with the reviewer that distributed optimization is an interesting setting in which to perform ZO-signSGD. However, even in the centralized setting, ZO (gradient-free) methods are attractive when the gradient is difficult or impossible to compute. For ZO-signSGD, we have shown in Sec. 3 and Sec. 6 that it is well motivated by centralized optimization problems, e.g., the design of black-box adversarial examples under limited queries.\n\nTo further address the reviewer’s concern, we have also added a new sign-based gradient estimator (12) used for ZO distributed optimization. This results in a distributed variant of ZO-signSGD, whose convergence rate is derived in Corollary 3 and whose empirical performance is compared with other variants of ZO-signSGD in Figure 2. We refer the reviewer to Sec. 5 for more details.\n\nHopefully, the reviewer agrees with us that our new version has been largely improved, and could re-evaluate our work towards a better score. We thank the reviewer for the effort in reviewing our work.\n", "Q: Clarity: This paper is mostly well-written, but the authors at times largely overclaim their contributions or exaggerate the technical challenges. \n\nR: In the revised version, we have tried our best to make our claims clearer and more accurate. We answer the reviewer’s specific questions below. \n\n\nQ: -- Page 2, 2nd line: the authors claim that \"Our analysis removes the impractical assumption of b = O(T)\", but in the later examples (page 6, top), they require q = O(T). How is this any different from b = O(T)? Even worse, the former case also requires b = n, i.e., there is no stochasticity at all…\n\nR: We apologize for the confusion we made in the initial version. Here the ‘impractical assumption b = O(T)’ meant the assumption of i.i.d. mini-batch samples (with replacement) used in signSGD (Bernstein et al. 2018). Based on such an assumption, signSGD in (Bernstein et al. 2018) cannot cover signGD as a special case, since the mini-batch of size b = n might NOT be equivalent to the entire set [n]. By contrast, our convergence analysis for ZO-signSGD applies to both mini-batch sampling schemes, with and without replacement. The use of mini-batch sampling without replacement makes ZO-signSGD equivalent to ZO-signGD at b = n. \n\nDifferent from the mini-batch size b, the number of random directions q is introduced by ZO gradient estimation. Compared to signSGD, ZO-signSGD involves an additional convergence error relying on d and q. This is the cost of gradient-free optimization methods using function-difference-based gradient estimates. The choice of making q proportional to d (or T) is commonly used in ZO methods (Duchi et al. 2015; Liu et al. 2018; Hajinezhad et al. 2017) to reduce the variance of ZO gradient estimates. We refer the reviewer to the last paragraph of Sec. 4 for a thorough discussion on b and q. \n\nJ. C. Duchi et al., Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE TIT, 2015.\nS. Liu et al., Zeroth-order stochastic variance reduction for nonconvex optimization, NIPS, 2018\nD. Hajinezhad, et al., “Zeroth order nonconvex multi-agent optimization over networks,” 2017.\n\n\nQ: Assumption A2: how crucial is this assumption for obtaining the convergence results? Note that not many functions have Lipschitz continuous bounded gradients... 
(logistic regression is an example)\n\nR: A2 is needed to bound the variance of ZO gradient estimates; see the proof of Proposition 2. The least squared form of logistic regression actually satisfies A2. We feel that A2 is not a strict assumption in nonconvex analysis, e.g., (Theorem 4, Reddi et al. 2018; Definition 3, Reddi et al. 2016). In practice, we only require that the gradient of the cost function at x_k is bounded at each iteration. \n\nReddi, Sashank J., Satyen Kale, and Sanjiv Kumar. \"On the convergence of Adam and beyond.\" (2018).\nReddi, S. J., Hefny, A., Sra, S., Poczos, B. and Smola, A. Stochastic variance reduction for nonconvex optimization, 2016.\n\n\nQ: Page 4, top: \"ZO-signSGD has no restriction on the mini-batch size b\"? The rates at the end of page 5 suggest otherwise if we want the bound to go to 0 (due to the term sqrt(d/b)). \n\nR: The restriction meant the assumption of i.i.d. mini-batch samples (with replacement). We wanted to emphasize that ZO-signSGD allows mini-batch sampling without replacement. The error term \\sqrt{d/b} can be eliminated when b = n and mini-batch sampling without replacement is used. However, this is not true while using i.i.d. mini-batch samples (even if b = n). In general, the error term \\sqrt{d/b} exists, which is induced by the variance of gradient estimates. We refer the reviewer to the last paragraph of Sec. 4 for a thorough discussion on the mini-batch size b. ", "Q: The experimental results are also strange: Why would ZO-signSGD converge faster than ZO-SGD or any other ZO variant? Shouldn't they enjoy similar rates of convergence? Why would taking the sign make the algorithm converge faster? Note that the original motivation for signSGD is not faster convergence but less communication. For the second set of experiments, how do you apply ZO-SGD to generate adversarial examples? Again, why do we expect ZO-signSGD to perform better than ZO-SGD?\n\nR: Based on the reviewer’s comment, we realize that our explanation of the possible fast convergence of ZO-signSGD was not sufficient. \n\nFirst, the original motivation for signSGD is both fast communication and fast convergence; see the abstract, Sec. 3 and Figure A1 in (Bernstein et al., 2018). Thus, the motivation of signSGD is not limited to the fact that it can significantly reduce communication overhead. \n\nSecond, it is not strange that ZO-signSGD could converge faster to at least moderate accuracy than ZO-SGD and other ZO variants. Our work, the previous work on signSGD (Bernstein et al., 2018), and many other white-box and black-box adversarial example generation methods (Goodfellow et al., 2015; Madry et al., 2018; Ilyas et al., 2018a) have shown that taking the sign could make the algorithm converge faster. We have added a subsection ‘Motivations of ZO-signSGD’ in Sec. 3 to provide a rationale for why the sign operation could be beneficial to fast convergence. We repeat our discussion below.\n\n“Compared to SGD-type methods, the fast empirical convergence of signSGD and ZO-signSGD has been shown in the application of generating white-box and black-box adversarial examples (Goodfellow et al., 2015; Madry et al., 2018; Ilyas et al., 2018a). As mentioned in (Bernstein et al., 2018), the sign operation could mitigate the negative effect of (coordinate-wise) gradient noise of large variance. Recall that the ZO gradient estimate is a biased approximation to the true gradient, and thus could suffer larger noise variance than (first-order) stochastic gradients. 
In this context, one could benefit from ZO-signSGD due to its robustness to gradient noise. In Appendix 1, we provide two concrete examples (Fig. A1 and Fig. A2) to confirm the aforementioned analysis. In Fig. A1, we show the robustness of ZO-signSGD against sparse noise perturbation through a toy quadratic optimization problem, first introduced by (Bernstein et al., 2018). In Fig. A2, we show that gradient estimation via the ZO oracle indeed encounters gradient noise of large variance. Thus, taking the sign of a gradient estimate might scale down the extremely noisy components.”\n\nThird, both our empirical results and theoretical results confirm that ZO-signSGD converges faster than ZO-SGD to moderate accuracy. In theory, the convergence rate of ZO-signSGD is measured through the L2 norm | \\nabla f(x_R) |_2 rather than its squared counterpart | \\nabla f(x_R) |_2^2, where the latter was used to evaluate the convergence of ZO-SGD. We recall from (Ghadimi & Lan, 2013, Theorem 3.2 & Corollary 3.3) that ZO-SGD yields the convergence error E [ | \\nabla f(x_R) |_2^2 ] \\leq O(\\sqrt{d}/\\sqrt{T}). Since | \\nabla f(x_R) |_2^2 \\leq | \\nabla f(x_R) |_2 as it converges, the established rate of ZO-signSGD meets a stricter convergence criterion than that of ZO-SGD. Thus, ZO-signSGD can converge faster (than ZO-SGD) to moderate accuracy, e.g., a neighborhood of a stationary point, where the size of the neighborhood is controlled by the mini-batch size b and the number of random direction vectors q. The application of black-box adversarial attacks further shows that the fast convergence of ZO-signSGD to the first successful adversarial attack significantly saves the cost of function queries. We also show the superior performance of ZO-signSGD over a benchmark black-box attack generation method. We refer the reviewer to Sec. 6 for more details.\n\nA. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Black-box adversarial attacks with limited queries and information. ICLR 2018.", "Reviewer #2 (Q: question; R: response):\n\nWe thank the reviewer for the positive comments on our paper. We provide a detailed response to each comment below.\n\nQ: 1) out of curiosity, can we improve the convergence rate of the zeroth-order sign SGD if we assume the mini-batch size is of order O(T)? This could help us better compare zeroth-order sign SGD and sign SGD.\n\nR: Yes, the large mini-batch size of b = O(T) indeed improves the convergence rate of ZO-signSGD. As b = O(T), the convergence rate given in (9) becomes O(\\sqrt{d}/\\sqrt{T} + \\alpha_b \\sqrt{d}/\\sqrt{T} + d/\\sqrt{Tq}), where the last error term O(d/\\sqrt{Tq}) is induced by the ZO gradient estimation error. In order to further improve the rate to O(\\sqrt{d}/\\sqrt{T}), it is required to make the number of random direction samples $q$ proportional to $d$. Similar to other ZO methods (Liu et al. 2018; Hajinezhad et al. 2017), the large q helps to reduce the variance of ZO gradient estimates. \n\nOn the other hand, the assumption of b = O(T) might not be necessary if n < O(T), where n is the total number of individual cost functions. Suppose that b = n and we use mini-batch sampling without replacement; then ZO-signSGD becomes ZO-signGD. This leads to the convergence rate O(\\sqrt{d}/\\sqrt{T} + d/\\sqrt{nq}). In this case, we can improve the rate to recover O(\\sqrt{d}/\\sqrt{T}) by only setting the number of random direction vectors induced by ZO gradient estimation, $q = O(dT/n)$. 
It is worth mentioning that such an improvement cannot be achieved by ZO-signSGD using mini-batch sampling with replacement, even if b = n with the same setting of q. We refer the reviewer to our detailed analysis in the last paragraph of Sec. 4.\n\nS. Liu, et al., Zeroth-order stochastic variance reduction for nonconvex optimization, NIPS, 2018\nD. Hajinezhad, et al., “Zeroth order nonconvex multi-agent optimization over networks,” arXiv preprint arXiv:1710.09997, 2017.\n\n\nQ: 2) Figure 2 is too small to be legible. Also, it seems that the adversarial examples generated by zeroth-order sign SGD have higher distortion than those found by zeroth-order SGD on the CIFAR-10 dataset. Is it true? If so, it would be beneficial to have a qualitative explanation of such behavior.\n\nR: We have enlarged Figure 2. Yes, given the first successful adversarial example, we observe that ZO-signSGD yields slightly higher L2 distortion than ZO-SGD. This is not surprising since, compared to ZO-SGD, the convergence rate of ZO-signSGD involves an additional error correction term (relying on b and q in (9)). Accordingly, ZO-signSGD might converge to moderate accuracy (e.g., a solution neighborhood) rather than a very high accuracy. However, the convergence of ZO-signSGD to moderate accuracy could be much faster than ZO-SGD since the former meets a stricter convergence criterion (L2 norm of the gradient) than that of ZO-SGD (squared L2 norm of the gradient). We refer the reviewer to the paragraph after Eq. (9) for more discussions. \n\nIn the example of generating black-box adversarial attacks, compared to convergence accuracy (in terms of attack distortion), the effectiveness of a black-box attack is measured by the number of function queries needed to achieve the first successful adversarial attack. Thus, ZO-signSGD is desired in this application due to its fast convergence to moderate accuracy. To further confirm this point, in Sec. 6 we have added an experiment to compare ZO-signSGD with a benchmark black-box attack generation method in (Ilyas et al., 2018a). Indeed, ZO-signSGD offers fast convergence to the first successful adversarial attack under limited queries.\n\nA. Ilyas, et al., Black-box adversarial attacks with limited queries and information. ICLR 2018.\n", "Response to Reviewer 1 (Q: question; R: response)\n\nQ: The paper was, overall, very well written and sufficient experiments were presented. The math also seems correct. However, I think they should have explained better the motivation for developing such an algorithm. Section 3 can be improved. \n\nR: Based on this comment, we have improved Sec. 3 and added a subsection ‘Motivations of ZO-signSGD’. In particular, two concrete motivating examples (Appendix 1) are presented to illustrate how ZO-signSGD could outperform ZO-SGD. In Fig. A1, we show the robustness of ZO-signSGD against sparse noise perturbation through a quadratic optimization problem, first introduced by (Bernstein et al., 2018). In Fig. A2, we show that ZO gradient estimates indeed encounter gradient noise of large variance. Thus, taking the sign of a gradient estimate might scale down the extremely noisy components. \n\nMoreover, in Sec. 6, we have added an experiment to compare ZO-signSGD with a benchmark black-box attack generation method (Ilyas et al., 2018a). As we can see, ZO-signSGD offers fast convergence to the first successful adversarial attack under limited queries.\n\nQ: I think this is an important paper because it provides a guaranteed algorithm for zero-order sign-gradient descent. 
However, the ideas and the estimators are not novel. They show the applicability of standard gradient estimators for zero-order oracles to the sign-SGD algorithm. \n\nR: We thank R1 for the positive comments on our paper. We would like to point out that sign-based gradient estimators, e.g., (11)-(12) in Sec. 5, have not been well studied in the ZO literature. These estimators can be interpreted as the ZO counterparts of first-order gradient estimators with majority vote in the centralized and distributed settings, respectively. Here the ZO gradient estimator (12) is newly introduced for ZO distributed optimization. Even though the gradient estimators (3) and (10) were used by existing ZO methods, how they affect the convergence of ZO-signSGD has not been well studied. Due to their popularity in designing black-box adversarial examples (Ilyas et al., 2018a), it is important to rigorously analyze the effect of standard gradient estimators on ZO-signSGD, in order to characterize their limitations or possible improvements.\n\nRefs: A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Black-box adversarial attacks with limited queries and information. ICLR 2018.\n\n", "General response to all reviewers:\n\nWe thank all reviewers for their insightful and valuable comments. Our paper has been greatly improved based on these comments. The major modifications are summarized below.\n\na) In Sec. 3, we have added the subsection ‘Motivations of ZO-signSGD’ to demonstrate the possible advantages of ZO-signSGD from a high-level point of view. We then presented two concrete examples (Fig. A1 and Fig. A2) to support our intuition prior to the rigorous study on the convergence rate of ZO-signSGD.\n\nb) In Sec. 5, we have added a new sign-based gradient estimator for ZO distributed optimization. This leads to a new variant of ZO-signSGD, whose convergence rate is illustrated in Corollary 3.\n\nc) In Sec. 6, we have added Figure 2 to show the empirical performance of different variants of ZO-signSGD (including the new one used for ZO distributed optimization). Moreover, we compare our approach with a benchmark black-box adversarial attack method (Ilyas et al., 2018a). The new empirical results show that ZO-signSGD outperforms the benchmark in terms of both query efficiency and attack distortion.\n\nd) Throughout the paper, we have tried our best to address reviewers’ comments and to make our presentation as clear as possible. \n\nA. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Black-box adversarial attacks with limited queries and information. ICLR 2018.\n", "Hi Authors,\n\nFor your interest, I just want to point out another ICLR submission (from different anonymous authors):\nhttps://openreview.net/forum?id=BJxhijAcY7.\n\nThis paper shows how small-batch convergence of signSGD can be guaranteed with the additional assumption of unimodal symmetric gradient noise (e.g. Gaussian). This paper does not address the zeroth-order case.\n\n", "The paper presents algorithms for optimization using sign-SGD when the access is restricted to a zero-order oracle only, and provides detailed analysis and convergence rates. They also run optimization experiments on synthetic data. Additionally, they demonstrate the superiority of the algorithm in the number of oracle calls for black-box adversarial attacks on MNIST and CIFAR-10. The provided algorithm has optimal iteration complexity from a theoretical viewpoint. \n\nThe paper was, overall, very well written and sufficient experiments were presented. The math also seems correct. 
However, I think they should have explained better the motivation for developing such an algorithm. Section 3 can be improved. \n\nI think this is an important paper because it provides a guaranteed algorithm for zero-order sign-gradient descent. However, the ideas and the estimators are not novel. They show the applicability of standard gradient estimators for zero-order oracles to the sign-SGD algorithm. ", "In this paper, the authors studied zeroth-order sign SGD. Sign SGD is commonly used in adversarial example generation. Compared to sign SGD, zeroth-order sign SGD does not require knowledge of the magnitude of the gradient, which makes it suitable for optimizing black-box systems. The authors studied the convergence rate of zeroth-order sign SGD, and showed that under common assumptions, zeroth-order sign SGD achieves an O(sqrt(d/T)) convergence rate, which is slower than sign SGD by a factor of sqrt(d). However, sign SGD requires an unrealistically large mini-batch size, which zeroth-order sign SGD does not. The authors demonstrated the performance of zeroth-order sign SGD in numerical experiments.\n\nOverall, this is a well written paper. The convergence property of the zeroth-order sign SGD is sufficiently studied. The proposal seems to be useful in real-world tasks.\n\nWeaknesses: \n1) out of curiosity, can we improve the convergence rate of the zeroth-order sign SGD if we assume the mini-batch size is of order O(T)? This could help us better compare zeroth-order sign SGD and sign SGD.\n2) Figure 2 is too small to be legible. Also, it seems that the adversarial examples generated by zeroth-order sign SGD have higher distortion than those found by zeroth-order SGD on the CIFAR-10 dataset. Is it true? If so, it would be beneficial to have a qualitative explanation of such behavior." ]
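The exchange above repeatedly contrasts plain ZO gradient estimates with sign-based, majority-vote variants for the distributed setting. A hedged sketch of how such a vote could be formed from per-worker two-point estimates follows; the worker count, toy objective, and step size are assumptions for illustration, not the paper's estimators (11)-(12).

```python
import numpy as np

def worker_sign_est(f, x, mu=1e-3):
    # One worker's two-point random-direction estimate, reduced to its sign.
    u = np.random.randn(x.size)
    g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return np.sign(g)

def majority_vote_direction(f, x, n_workers=11):
    # Element-wise majority vote over the workers' sign estimates.
    votes = sum(worker_sign_est(f, x) for _ in range(n_workers))
    return np.sign(votes)

f = lambda x: np.sum(x ** 2)
x = np.ones(4)
for _ in range(100):
    x -= 0.01 * majority_vote_direction(f, x)  # descend along the voted sign
print(f(x))
```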
[ -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "SklDel3Vam", "rkxd0tneyN", "ryex1VPEC7", "iclr_2019_BJe-DsC5Fm", "HyxK1V69n7", "SJgTdFJc0Q", "r1e7SFJcCQ", "ryex1VPEC7", "r1xwxNwtnQ", "Hyl5D-vj3X", "iclr_2019_BJe-DsC5Fm", "iclr_2019_BJe-DsC5Fm", "iclr_2019_BJe-DsC5Fm", "iclr_2019_BJe-DsC5Fm" ]
iclr_2019_BJe0Gn0cY7
Preventing Posterior Collapse with delta-VAEs
Due to the phenomenon of “posterior collapse,” current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires altering the training objective. We develop an alternative that utilizes the most powerful generative models as decoders, optimizes the variational lower bound, and ensures that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent variable models, our approach resembles the classic representation learning approach of slow feature analysis. We demonstrate our method’s efficacy at modeling text on LM1B and modeling images: learning representations, improving sample quality, and achieving state of the art log-likelihood on CIFAR-10 and ImageNet 32 × 32.
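To make the abstract's "minimum distance to the prior" constraint concrete: for a scalar Gaussian posterior N(mu, sigma^2) against a standard-normal prior, KL = 0.5(mu^2 + sigma^2 - 1 - ln sigma^2), which vanishes only at mu = 0, sigma = 1; clamping sigma into an interval bounded away from 1 therefore keeps the KL above a positive delta. A small illustrative check (the clamp range is an assumption, not the paper's exact parameterization):

```python
import numpy as np

def kl_gauss_std_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ) for scalars.
    return 0.5 * (mu**2 + sigma**2 - 1.0 - np.log(sigma**2))

sigma_lo, sigma_hi = 0.3, 0.7  # posterior scale clamped away from 1
# Smallest attainable KL over the clamp range (at mu = 0, sigma nearest to 1).
delta = min(kl_gauss_std_normal(0.0, s) for s in (sigma_lo, sigma_hi))
print(delta)  # > 0: the posterior can never collapse onto the prior
```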
accepted-poster-papers
Strengths: The proposed method is relatively principled. The paper also demonstrates a new ability: training VAEs with autoregressive decoders that have meaningful latents. The paper is clear and easy to read. Weaknesses: I wasn't entirely convinced by the causal/anti-causal formulation, and it's a bit unfortunate that the decoder couldn't have been copied without modification from another paper. Points of contention: It's not clear how general the proposed approach is, or how important the causal/anti-causal idea was, although the authors added an ablation study to check this last question. Consensus: All reviewers rated the paper above the bar, and the objections of the two 6's seem to have been satisfactorily addressed by the rebuttal and paper update.
train
[ "SyegyQ0LCm", "S1eyafA807", "BJxg9M0LCQ", "BJgVLMCIAm", "SJxpwu6A2m", "Byl_786uhQ", "r1ixv5dnX", "HJlopG40cX", "BJl4aRZTq7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We thank all the reviewers for their valuable feedback. All three reviewers agree that the paper is clear and well-written. R1 and R2 highlighted the convincing results of learning useful representations with autoregressive decoders and noted our extensive experiments. R3 was concerned about experiments demonstrating the utility of our technique over other approaches (like beta-VAE and free bits), so we have added additional experiments that show delta-VAEs perform best at learning representations for downstream tasks across a large range of rates (updated Figure 4).\n\nWe believe our revised paper presents compelling evidence that delta-VAEs are a simple and effective strategy for training VAEs by constraining the parameters of the variational family to target a minimum rate. We have demonstrated improvements in log-likelihood over prior work and an ability to leverage the most recent advances in autoregressive decoders while learning latent representations that are useful for downstream tasks.\n\nWe have addressed each of their reviews in detail individually.\n", "Thank you for the comments and questions!\n\n> importance of lower-bounding KL vs. encoder architecture\nWe have performed additional experiments to address these questions. We found that the anti-causal encoder structure alone is not sufficient for preventing posterior collapse, while the delta-VAE alone (constraining the rate) is sufficient. Combining the anti-causal encoder with a beta-VAE objective prevents posterior collapse with small beta, but resulted in worse representations for downstream classification than delta-VAEs (see: new Figure 4).\n\n> ablations\nAblations were performed on a smaller model on CIFAR-10. We have replaced the ablation table with a more extensive figure that shows the performance in terms of log-likelihood and linear classification accuracy for multiple techniques (beta-VAE, free bits, delta VAE in Fig. 4). We see that across all hyperparameter settings delta-VAEs results in better features for classification, and heldout ELBOs that are at least as good as other techniques.\n\n> claim of not altering the training objective\ndelta-VAEs impose a *hard* constraint on the variational family, which is enforced through parameterization of the variational family. This differs from the typical soft or functional constraints that require modifying the objective and solving a constrained optimization problem using e.g. dual ascent or ALM. As we discuss in the text, by imposing hard constraints through parameterization, we do not have to alter the ELBO objectived used at training time. \n", "Thank you for your positive comments and valuable feedback!\n\n> The quality of Figure 4 is too low.\nWe have improved the quality of Figure 4 and added plots of rate vs. distortion and accuracy for all techniques (beta-VAE, free bits, delta-VAE). This updated figure highlights the robustness of delta-VAEs across different hyperparameters and rates, and shows that it outperforms other approaches at all rates. This supersedes the earlier results we had in Table 1 that contained only the best achieved performance (in terms of ELBO) for each method.\n\n> auxiliary prior\nFor models that operate at higher rates, the auxiliary prior is critical to achieve SOTA performance and improve sample quality. Fig. 9 in the appendix shows that samples from the AR-1 prior are smoother and exhibit less fine-grained details than samples from the auxiliary prior. 
Quantitatively, for our best CIFAR-10 model, the difference in log-likelihood as reported per dimension does not seem large, but the auxiliary prior reduces the KL term by 72% (from 71 bits to 20 bits per image), which translates to a 263% increase in coding efficiency (i.e., reduction in distortion per transmitted bit)! \n\n> specific approach vs. framework?\nWe consider the temporal and independent versions of delta-VAEs as two instantiations of the general principle that the variational family should be chosen to not match the prior. Typically, variational families are chosen to be maximally flexible (e.g. the work on normalizing flows), and here we present evidence that simpler and more constrained variational families are effective at regularizing generative models with rich decoders to learn more useful representations.\n", "Thank you for your thoughtful review.\n\n> minimal experimentation\nIn the original text we performed experiments on CIFAR-10, ImageNet, and LM1B to highlight the versatility of our approach. We have performed additional ablations and experiments on CIFAR-10 that show that our proposed delta-VAE approach outperforms beta-VAE and free bits approaches for learning useful representations across a wide range of rates (Fig. 4).\n\n> lack of theory\nWhile we agree that different training methods may perform better in different settings, we present three reasons in the paper for why delta-VAEs may be preferable:\n\n\n1. Throughout the text we highlight that delta-VAEs do not require altering the training objective of the ELBO. For beta-VAEs, deviations from the ELBO at beta=1 result in an encoder, prior, and decoder that do not obey Bayes rule (Hoffman & Johnson 2016), and thus lead to worse performance in terms of log-likelihood. \n\n2. For representation learning, the temporal-VAE approach of pairing an independent posterior with a correlated prior resembles slow feature analysis, which has been argued to learn more robust invariant features (Turner & Sahani, 2007).\n\n3. Ease of hyperparameter tuning. Given a target rate, we can analytically determine and parameterize the variational family such that the rate is greater than or equal to the minimum target rate. This takes the form of a constraint on the means and variances for independent delta-VAEs, and a constraint on the correlation for temporal delta-VAEs. In contrast, the relationship between beta and rate is complicated and mode- and data-dependent; thus tuning beta in beta-VAEs can be challenging. Free bits can be unstable and difficult to train, as the gradient goes from 0 to large when the constraint becomes active (see: VLAE). 
This motivated the authors of VLAE to use beta-VAE (which they name “soft free bits”).\n", "The majority of approaches for preventing posterior collapse in VAEs equipped with powerful decoders to better model local structure involve either alteration of the ELBO training objective or a restriction on the decoder structure.\n\nThis paper presents an approach which broadly falls into the latter category; by limiting the family of the variational approximation to the posterior, the minimum KL divergence between the prior and posterior is bounded below by a 'delta' value, preventing collapse.\n\nThe paper is well written, and the methodology clearly explained.\n\nThe experiments show that the proposed approach (delta VAE combined with the 'anti-causal' architecture) captures both local and global structure, and appears to do so while preserving SOTA discriminative performance on some tasks. Tests are performed on both generative image and language tasks.\n\nI believe that the paper is of low-medium significance: whilst it does outline a different method of restricting the family of posteriors, it does not give a detailed reasoning (empirical or theoretical) as to why this should be a generally better solution as compared to other approaches.\n\nPros:\n- Very clear and well written.\n- Good execution and ablation/experimentation section.\n\nCons:\n- Lack of theory (and minimal experimentation) as to why this approach should be better than competing methods.\n", "General:\nThe paper attacks the problem of posterior collapse, which is one of the main issues encountered in deep generative models like VAEs. The idea of the paper relies on introducing a constraint on the family of variational posteriors in such a way that the KL term can be controlled.\n\nThe authors propose to use a linear autoregressive process (AR(1)) as the prior. Alternatively, they trained a single-layer LSTM network with conditional-Gaussian outputs as the prior (the auxiliary prior). Additionally, the authors claim that the encoder should contain anti-causal dependencies in order to introduce additional bias that may diminish the posterior collapse.\n\nThe experiments present various results on image and text datasets. Interestingly, the proposed techniques allowed the model to perform on a par with purely autoregressive models; however, the latent variables were utilized (i.e., no posterior collapse). For instance, in Figure 3(a) we can notice that a decoder is capable of generating similar images for a given latent variable. A similar situation is obtained for text data (e.g., Figure 12).\n\nIn general, I find the paper interesting and I believe it should be discussed during ICLR.\n\nPros:\n+ The paper is well-written and all ideas are clearly presented.\n+ The idea of “hard-coded” constraints is interesting and constitutes an alternative approach to utilizing either quantized values in the VAE (VQ-VAE) or a constrained family of variational posteriors (e.g., Hyperspherical VAE).\n+ The obtained results are convincing. Additionally, I would like to highlight that at first glance it might seem that there is no improvement over the autoregressive models. However, the proposed approach allows one to encode an image or a document and then decode it. 
This is not the case for purely autoregressive models.\n+ The introduction of the Slow Features into the VAE framework constitutes an interesting direction for future research.\n\nCons:\n- The quality of Figure 4 is too low.\n- I am not fully convinced that the auxiliary prior is significantly better than the AR(1) prior. Indeed, the samples seem to be a bit better for the aux. prior, but it is rather hard to notice by inspecting quantitative metrics.\n- In general, the proposed approach is a specific solution rather than a general framework. Nevertheless, I find it very interesting, with potential for future work.", "The paper proposes a method to prevent posterior collapse, which refers to the phenomenon that VAEs with powerful autoregressive decoders tend to ignore the latent code, i.e., the decoder models the data distribution independently of the code. Specifically, the encoder, decoder, and prior distribution families are chosen such that the KL-term in the ELBO is bounded away from 0, meaning that the encoder output cannot perfectly match the prior. Assuming temporal data, the authors employ a 1-step autoregressive (across) prior with an encoder whose codes are independent conditionally on the input. Furthermore, they propose to use a causal decoder together with an anti-causal or non-causal encoder, which translates into a PixelSNAIL/PixelCNN style decoder and an anti-causal version thereof as encoder in the case of image data. The proposed approach is evaluated on CIFAR10, Imagenet 32x32, and the LM1B data set (text).\n\nPros:\n\nThe method obtains state-of-the-art performance in image generation. The paper features extensive ablation experiments and is well-written. Furthermore, it is demonstrated that the code learns an abstract representation by repeatedly sampling from the decoder conditionally on the code.\n\nCons:\n\nOne question that remains is the relative contribution of 1) lower-bounding the KL-term and 2) using a causal decoder/anti-causal encoder to the overall result. Is the encoder-decoder structure alone enough to prevent posterior collapse? In this context it would also be interesting to see how the encoder-decoder structure performs without the \\delta-constraint, but with regularization as in \\beta-VAE.\n\nWhat data set are the ablation experiments performed on? As far as I could see, this is not specified.\n\nAlso, I suggest toning down the claims that the proposed method works \"without altering the ELBO training objective\" in the introduction and conclusion. After all, the encoding and decoding distributions are chosen such that the KL term in the ELBO is lower-bounded by \\delta. In other words, the authors impose a constraint on the ELBO.\n\nMinor comments:\n- Space missing in the first paragraph of p 5: \\kappaas\n- \"Auxiliary prior\"-paragraph on p 5: marginal posterior -> aggregate posterior?", "Thank you for the pointer. We will consider the paper carefully and will update our citations after the review period.\n\n", "Hi,\n\nJust wanted to point out our related paper https://arxiv.org/abs/1807.04863 . \n\n\n" ]
[ -1, -1, -1, -1, 6, 7, 6, -1, -1 ]
[ -1, -1, -1, -1, 3, 4, 3, -1, -1 ]
[ "iclr_2019_BJe0Gn0cY7", "r1ixv5dnX", "Byl_786uhQ", "SJxpwu6A2m", "iclr_2019_BJe0Gn0cY7", "iclr_2019_BJe0Gn0cY7", "iclr_2019_BJe0Gn0cY7", "BJl4aRZTq7", "iclr_2019_BJe0Gn0cY7" ]
iclr_2019_BJe1E2R5KX
Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees
Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms, Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves state-of-the-art performance when only 1M or fewer samples are permitted on a range of continuous control benchmark tasks.
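To illustrate the meta-algorithm the abstract sketches, here is a toy one-dimensional instantiation of the lower-bound idea: collect real trajectories, form "imaginary return under the model minus a model-discrepancy penalty," and jointly ascend it over the policy and model parameters. The scalar linear dynamics, quadratic reward, finite-difference gradients, and clipping are all assumptions for illustration, not the paper's SLBO.

```python
import numpy as np

A_TRUE = 0.8  # unknown true dynamics: s' = A_TRUE * s + u

def rollout_true(theta, s0=1.0, H=10):
    # Collect real transitions under the linear policy u = theta * s.
    s, data = s0, []
    for _ in range(H):
        u = theta * s
        s_next = A_TRUE * s + u
        data.append((s, u, s_next))
        s = s_next
    return data

def imag_return(theta, a_hat, s0=1.0, H=10):
    # Imaginary return: roll the learned model forward under the policy.
    s, ret = s0, 0.0
    for _ in range(H):
        s = a_hat * s + theta * s
        ret += -s ** 2
    return ret

def discrepancy(a_hat, data):
    # Model error on the real transitions; penalizes over-optimistic models.
    return np.mean([(a_hat * s + u - s_next) ** 2 for s, u, s_next in data])

theta, a_hat, lam, eps, lr = 0.0, 0.0, 10.0, 1e-4, 1e-2
for _ in range(50):
    data = rollout_true(theta)
    lb = lambda th, ah: imag_return(th, ah) - lam * discrepancy(ah, data)
    for _ in range(20):  # jointly ascend the lower bound over (policy, model)
        g_th = (lb(theta + eps, a_hat) - lb(theta - eps, a_hat)) / (2 * eps)
        g_ah = (lb(theta, a_hat + eps) - lb(theta, a_hat - eps)) / (2 * eps)
        theta = float(np.clip(theta + lr * g_th, -1.0, 1.0))  # clip for stability
        a_hat = float(np.clip(a_hat + lr * g_ah, -1.0, 1.0))
print(theta, a_hat)  # theta drifts toward -A_TRUE, a_hat toward A_TRUE
```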
accepted-poster-papers
This paper proposes model-based reinforcement learning algorithms that have theoretical guarantees. These methods are shown to achieve good results on MuJoCo benchmark tasks. All of the reviewers have given a reasonable score to the paper, and the paper can be accepted.
train
[ "Hkx29sJxC7", "rJxCLWG_Rm", "rJgrRkSDTX", "rygz4okeAQ", "rJlACiJXa7", "r1xWIcFhTm", "H1g1w5th6X", "r1gl8iJX6X", "H1e69okXam", "HJeBddsh37", "SyxTcTgq3Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We’ve added a paragraph below Theorem 3.1 and Appendix G, which contains a finite sample complexity results. We can obtain an approximate local maximum in $O(1/\\epsilon)$ iterations with sample complexity (in the number of trajectories) that is linear in the number of parameters and accuracy $\\epsilon$ and is logarithmic in certain smoothness parameters. \n\nWe note that the bound doesn’t directly apply to LQR because we require the reward is bounded (which is not true in LQR because the states can blow up.) If the reward (at a single step) is bounded --- which is a reasonable assumption for many practical applications --- then our sample complexity can be better (or at least comparable) to Abbasi-Yadkori, Szepesvari in dimension (it’s not clear how AS’s bound depends on the dimension --- it’s likely exponential in dimension.) We also note that AS applies to the adaptive setting which is stronger than the episodic setting that we work with. Finally, we’d like to mention again that our paper’s primary goal is to deal with non-linear setting without explicit uncertainty quantification. In this sense, our result is much stronger than AS because our result applies to any non-linear models with bounded rewards. \n", "Thank you for answering my questions. I have adjusted my score accordingly.", "The paper proposed a framework to design model-based RL algorithms. The framework is based on OFU and within this framework the authors develop an algorithm (a variant of SLBO) achieving SOTA performance on MuJoCo tasks.\n\nThe paper is very well written and the topic is important for the RL community. The authors do a good job at covering related works, the bounds are very interesting and the results quite convincing. \n\nQuestions/comments to the authors:\n1) In footnote 3 you state that \"[...] we only need to approximate the dynamical model accurately on the trajectories of the optimal policy\". Why only of the optimal policy? Don't you also need an accurate dynamic model for the current policy to perform a good policy improvement step? \n2) A major challenge in RL is that the state distribution \\rho^\\pi changes with \\pi and it is usually very hard to estimate. Therefore, many algorithms assume it does not change if the policy is subject to small changes (examples are PPO and TRPO). In Eq 4.3 it seems that you also do something similar, fixing \\rho^\\pi and constraining the KL of \\pi (and not of the joint distribution p(s,a)). Am I correct? Can you elaborate it a bit more, building a connection with other RL methods?\n3) In Eq. 6.1 and 6.2 you minimize the H-loss, defined as the prediction error of your model. Recently, Pathak et al. used the same loss function in many papers (such as Curiosity-driven Exploration by Self-supervised Prediction) and your Eq. 6.2 looks like theirs. The practical implementation of your algorithm looks very similar to theirs too. Can you comment on that? \n4) If I understood it correctly, your V-function directly depends on your model, i.e., you have V(M(s)) and you learn the model M parameters to maximize V. This means that you want to learn the model that, together with the policy, maximizes V. Am I correct? Can you comment a bit more on that? Did you try to optimize them (V and M) separately, i.e., to add a third parameter to learn (the V-function parameters)?\n5) How does you algorithm deal with environmental noise? The tasks used for the evaluation are all deterministic and I believe that this heavily simplifies the model learning. 
It would be interesting to see an evaluation on a simple problem (for example, the swing-up pendulum) in the presence of noise on the observations and/or the transition function.\n6) I appreciate that you provide many details about the implementation in the appendix. Can you comment a bit more? Which are the most important hyperparameters? The number of policy optimization steps n_policy or of model optimization steps n_model? You mention that you observed policy overfitting at the first iterations. Did you also experience model overfitting? Did normalizing the state help a lot? ", "As promised in the responses to the reviewers, we have updated the paper with the following changes: \n\n--- We’ve added the citations mentioned by the reviewers, and incorporated most of the clarifications in the responses in the paper. (E.g., in Appendix F.4, we discussed more on the most important hyperparameters.)\n\n--- We’ve added a paragraph below Theorem 3.1 and Appendix G, which contains finite sample complexity results. We can obtain an approximate local maximum in $O(1/\\epsilon)$ iterations with sample complexity (in the number of trajectories) that is linear in the number of parameters and accuracy $\\epsilon$ and is logarithmic in certain smoothness parameters. \n", "\n--- “Please explain in more detail what the effects are from relaxing the assumptions for the algorithm? I assume none of the monotonic improvement results can be transferred to the algorithm?” \n\nWe will still have monotone improvements if the algorithm uses any of the discrepancy bounds in Section 4. The monotonicity won’t hold in the worst case, if the model is not optimized in an optimistic fashion. The worst-case scenario would be that the lower bound is quite loose for some dynamical models, and accurate for others. In this case, we would really have to be optimistic about the lower bound and choose the best models and policy. However, such situations are unlikely to occur in MuJoCo tasks, since the looseness of the lower bound seems to be comparable in a neighborhood of the current model. This may be the reason why we can simplify the algorithm. \n\n\n--- “Could you elaborate why the algorithm was not implemented as suggested by Section 4? Is the problem that the algorithm did not perform well or that the discrepancy measure is hard to compute?”\n\nWe implemented the discrepancy bound in Section 4.1 as reported in the experiments. The discrepancy measure of Section 4.2 involves the value function, which requires another neural net approximator (and thus the resulting algorithm would update the model, the value, and the policy iteratively.) We have implemented this algorithm and it works fine, but not as well as the reported simpler version. This may be because either the MuJoCo environments satisfy the assumption in Section 4.1 well, so that the L2 norm model loss already performs great, or we have not pinned down the best ways to combine the updates of the model, value, and policy iteratively. \n\n--- “For the presented algorithm, the discrepancy does not depend on the policy any more. I did not understand why the iterative optimization should be useful in this case.” \n\nAs briefly mentioned in one of the previous paragraphs, the key benefit of iterative optimization is from the stochasticity in the model when we optimize the imaginary value V^{\\pi, M} over the policy. 
In other words, if we were to optimize M until convergence and then optimize pi, we may optimize the lower bound better, but the algorithm doesn’t use samples in a fully stochastic way. The stochasticity dramatically reduces the overfitting (of the policy to the estimated dynamical model) in a way similar to how SGD regularizes ordinary supervised training. To some extent, since the policy optimization involves stochastic iterates from the updates of the model learning loss, the learned policy has to be robust to a family of stochastic models instead of a single one. \n\n--- “The only difference between Algo 3 and Algo 2 seems to be the additional for loop. ….. Did you try Algo 3 with the same amount of Adam updates as Algo 2 (could be that I missed that). ” “The difference to a standard model-based RL algorithm is minor and the many advantages of the nice theoretical framework are lost”\n\nIndeed, we did try Algo 3 with the same amount of Adam updates as in Algo 2, and it performs worse than the current setting. In fact, we used the optimal number of Adam updates in Algo 3. Concretely, we tested the performance of Algo 3 with different hyper-parameters, including the number of Adam updates. Our experiments show that 200 (among 100, 200, 400, 800) is the optimal number of Adam updates in Ant, and we use it for all other environments. Note that when Algo 3 uses 800 Adam updates (per outer iteration), it has the same amount of updates (per outer iteration) as in Algo 2. \n\nTherefore, the differences of ours from the standard MBRL algorithms, though seemingly simple, are empirically important for the significant improvements in performance. As we try to argue above, these differences were indeed inspired by the theory. \n\n", "We thank the reviewer for the insightful and positive comments. We address the questions below: \n\n1) “In footnote 3 you state that \"[We note that such an assumption, though restricted, may not be very far from reality: optimistically speaking], we only need to approximate the dynamical model accurately on the trajectories of the optimal policy\". Why only for the optimal policy? Don't you also need an accurate dynamics model for the current policy to perform a good policy improvement step?”\n\nIn the most optimistic scenario, one only needs a not-so-accurate model around the trajectories of a non-optimal policy to make *some reasonable* progress. We note that it’s likely preferable to make decent progress with a non-perfect model compared to making optimal progress with a perfect model, because learning perfect models would require many more samples. \n\n2) A major challenge in RL is that the state distribution \\rho^\\pi changes with \\pi and it is usually very hard to estimate. Therefore, many algorithms assume it does not change if the policy is subject to small changes (examples are PPO and TRPO). In Eq 4.3 it seems that you also do something similar, fixing \\rho^\\pi and constraining the KL of \\pi (and not of the joint distribution p(s,a)). Am I correct? Can you elaborate a bit more, building a connection with other RL methods?\n\nYou are correct that we constrain the changes of \\rho^\\pi. We compare with PPO and TRPO from this perspective in Remark 4.5. We summarize the key point here (please see Remark 4.5 for a longer and more technical discussion): the main advantage of the MB approach over TRPO is that our constraint on the changes of \\rho^\\pi can be more relaxed than that in TRPO. 
Or in other words, the sensitivity of the reward approximation to the change of \\rho^\\pi is smaller in our algorithm than in TRPO. This is mostly because, in MB algorithms, the approximation error of the total reward by the imaginary total reward decreases as the model error decreases (even with a fixed change of \\rho^\\pi), whereas, in model-free algorithms, the approximation error of the total reward by the local linear approximation only depends on the change of \\rho^\\pi. Intuitively, we build a better local approximation of the reward using the models than the linear approximation in TRPO. \n\n3) In Eq. 6.1 and 6.2 you minimize the H-loss, defined as the prediction error of your model. Recently, Pathak et al. used the same loss function in many papers (such as Curiosity-driven Exploration by Self-supervised Prediction) and your Eq. 6.2 looks like theirs. The practical implementation of your algorithm looks very similar to theirs too. Can you comment on that?\n\nThis H-loss is not a contribution of ours (e.g., as mentioned in our paper, it has been used in Nagabandi et al. (2017) for evaluation). Our implementation differs from “Curiosity-driven Exploration by Self-supervised Prediction” in the sense that we consider the prediction after multiple steps while theirs only considers the prediction of the next state (thus one-step prediction). The Zero-shot visual imitation learning paper by Pathak et al. uses an auto-regressive recurrent model to predict a multi-step loss on a trajectory, which is closely related to ours. However, theirs differs from ours in the sense that they do not use the predicted output x_{t+1} as the input for the prediction of x_{t+2}, and so on and so forth. Thanks for pointing out the reference! We include this work in our references and discuss it further in our next revision.\n", "\n4) If I understood it correctly, your V-function directly depends on your model, i.e., you have V(M(s)) and you learn the model M parameters to maximize V. This means that you want to learn the model that, together with the policy, maximizes V. Am I correct? Can you comment a bit more on that? Did you try to optimize them (V and M) separately, i.e., to add a third parameter to learn (the V-function parameters)?\n\nYes, the V-function directly depends on the model, and we learn the M parameters and \\pi parameters to maximize V. In other words, in our current implementation, we don’t have a parameterized approximator for V, and V is computed by querying the model. It’s a fascinating idea to use a third function approximator for V and learn that as well. This is left as future work though. \n\n5) How does your algorithm deal with environmental noise? The tasks used for the evaluation are all deterministic and I believe that this heavily simplifies the model learning. It would be interesting to see an evaluation on a simple problem (for example the swing-up pendulum) in the presence of noise on the observations and/or the transition function.\n\nThe MuJoCo locomotion environments are deterministic yet very challenging. The dynamics of such environments are very complex (e.g., the humanoid dynamics), and thus they demonstrate the effectiveness of our method. Many reinforcement learning algorithms use these locomotion environments as testbeds. 
We will try to apply the algorithm to a stochastic environment empirically, and hopefully, we can add this to the revision soon.\n\n6) I appreciate that you provide many details about the implementation in the appendix. Can you comment a bit more? Which are the most important hyperparameters? The number of policy optimization steps n_policy or of model optimization steps n_model? You mention that you observed policy overfitting at the first iterations. Did you also experience model overfitting? Did normalizing the state help a lot? \n\nThe most important hyperparameters we found are n_policy and the coefficient in front of the entropy regularizer. It seems that once n_model is large enough we don’t see any significant changes. We did have a held-out set for model prediction (with the same distribution as the training set) and found that the model doesn’t overfit much. Normalizing the state helped a lot since the raw entries in the state have different magnitudes --- if we don’t normalize them, the loss will be dominated by the loss of some large entries.", "We thank the reviewer for the insightful review and positive comments on the theoretical framework. We address the reviewer’s comments/questions below: \n\n--- “The framework seems to be quite general but does not include any specific example, like what non-linear dynamical models could be included in detail, and will this framework cover the classical MDP setting”, “Previous model-based work with simpler models can already have such strong guarantees, such as linear dynamics (Y. Abbasi-Yadkori and Cs. Szepesvari (2011)) and MDPs (Agrawal and Jia (2017)). What kind of new insights will this framework give when the model reduces to a simpler one (a linear model)?”\n\nIndeed, our framework can capture all parameterized models (including linear models or even tabular MDPs); however, our focus is on non-linear models. The distinction from the previous papers is that ours is the first framework that can show monotone improvement and handle the uncertainty quantification (via a discrepancy bound) *for non-linear models*. As far as we understand, the existing papers’ techniques are difficult to extend to non-linear models. Our approach, restricted to linear models or classical MDPs, would give some sensible results but wouldn’t be as strong as the existing ones, and would probably not provide much more insight. However, the strength of the paper is that it works for non-linear models, and the key insight is that we don’t need explicit uncertainty quantification of the parameters in the traditional sense (instead, a discrepancy bound suffices).\n\n--- “in RL, people may care more about the regret or sample complexity.”\n\nWe can actually prove a polynomial (in dimension) sample complexity bound for Algorithm 1 with a very standard concentration inequality. We can prove uniform convergence results with standard machinery for the estimation of the discrepancy bounds via samples when the bound satisfies R3. Then we can show that the algorithm converges to an approximate local maximum with an error that depends on the estimation error of the discrepancy bound. Such a polynomial complexity bound will not be comparable to Y. Abbasi-Yadkori and Cs. Szepesvari (2011) when restricted to linear models, but it works generically for non-linear models (under the assumption of Theorem 3.1). This result is not written in the paper because we thought it’s relatively standard, but we would be more than happy to add it in the revision very soon. \n\n--- “1. 
In (3.2), what norm is considered here?” \n\nEquation (3.2) is a demonstration of a potential type of result we could hope for. In Section 4, we show that if the value function is L-Lipschitz with some norm, then (3.2) would be true with the same norm. In the experiments, we use the L2 norm. \n\n--- “2. On page 4, the authors mentioned their algorithm can be viewed as an extension of the optimism-in-face-of-uncertainty principle to the non-linear parameterized setting. This is a little bit confusing. How can this algorithm be viewed as following the OFU principle? How does it recover the result in the linear setting (Y. Abbasi-Yadkori and Cs. Szepesvari (2011))?”\n\nThe relationship to OFU is in the very conceptual sense that we optimize the model and the policy together in an optimistic fashion as in OFU. However, the way we quantify the uncertainty is through the discrepancy bound rather than a confidence interval as in typical OFU approaches. (But many OFU-based papers, such as Jaksch et al. (2010), implicitly use some sort of discrepancy bound that is similar to ours in nature in their proof techniques.) \n\n--- “- Is there any convergence rate guarantee for this stochastic optimization?” “And also, a neural network is used for deep RL. So there is also no guarantee for the actual algorithm which is used?”\n\nThe concrete implementation of the algorithm doesn’t have a convergence rate guarantee yet. We don’t expect it to work for all environments, but under some assumptions on the environments, we may be able to show convergence. This is left as future work. \n\nWe also thank the reviewer for the suggestion to add sub-sections in Section 1 and will revise in the next revision. We will also cite the two relevant papers mentioned by the reviewers in the revision. \n", "We thank the reviewer for the insightful review and positive comments on the theoretical framework. We address the reviewer’s comments/questions below: \n\n--- It seems that the reviewer thinks our empirical implementation is different from what the theory suggests: “the resulting algorithm is actually quite far away from the assumptions made for deriving the bounds”. \n\nWe would first like to mention/clarify that our proposed algorithm (Algorithm 1) is a meta-algorithm/framework for model-based RL. Our main goal is to develop some framework to mathematically reason about non-linear MB RL (such as how to design the model loss function). The meta-algorithm is designed to have provable monotone convergence, even for the worst-case environments. However, in the empirical implementation, since MuJoCo tasks have nice properties (e.g., the value functions tend to be Lipschitz in states), many components of the meta-algorithm are not necessary, and thus we only need a simplification of the meta-algorithm with a simple discrepancy bound in Section 4.1. \n\nWe tried hard to find the simplest instantiation of our meta-algorithm for MuJoCo tasks, instead of using an artificially complicated algorithm. That doesn’t necessarily mean that other instantiations wouldn’t work. (In fact, as mentioned below, some others are promising, though not entirely successful yet. Our current implementation also mostly just serves as a proof-of-concept demonstration that some instantiations of the framework are possible and helpful.) \nThe theoretical results in MBRL are very sparse. 
To some extent, we hope that our work can spark future work that either instantiates our meta-algorithm with strong and clever modifications or improves our meta-algorithm with stronger guarantees. \n\nMoreover, we would like to argue that the two new empirical ingredients pointed out by the reviewer are both inspired by the theory, in our opinion and our research process. First, the technique of optimizing the policy and model iteratively in an inner policy improvement loop may sound unrelated to the theory, but actually, it was very much inspired by it: our theory suggests that we should jointly optimize the model and the policy to maximize the lower bound for the real reward by SGD, and this would have perfectly justified the iterative optimization of the policy and the model in an inner loop. Later in the experiments, we found that stopping the gradient from one occurrence of the model parameter would not hurt the performance and would speed up the code. Doing so would a priori imply that alternating updates of the model and the policy in an inner loop are less useful, but in fact, the stochasticity introduced by the SGD on the model loss is still powerful in reducing overfitting, in a way similar to how SGD regularizes ordinary supervised training. (Please see the paragraph before Section 6.2, or the response below to the last two questions, for slightly more detailed discussions.) Therefore, we view this technique as crucially inspired by the theory, though disguised by the simplification of our algorithm. \n\nAs the reviewer agreed, the use of the L2 norm (instead of the MSE) is inspired and justified by the theory, and it also contributes significantly to the empirical improvements. \n\n--- “I was confused by section 4.2. Could you please explain why the transformation is needed and how it is used?”\n\nThe transformation is only to demonstrate that the norm-based model loss is not invariant to a potential hidden transformation of the state space, whereas the discrepancy bound proposed in Section 4.2 is. This is a feature of the algorithm: if the algorithm is somehow presented with states in a different representation space, it will still work the same, whereas the norm-based model loss will behave differently. If one is not concerned with the representation of the states, this section indeed only provides the formal error bound of the discrepancy bound in equation 4.6. \n", "The paper presents monotonic improvement bounds for model-based reinforcement learning algorithms. Based on these bounds, a new model-based RL algorithm is presented that performs well on standard benchmarks for deep RL.\n\nThe paper is well written and the bounds are very interesting. The algorithm is also interesting and seems to perform well. However, there is a slight disappointment after reading the paper because the resulting algorithm is actually quite far away from the assumptions made for deriving the bounds. The 2 innovations of the algorithm are:\n- Model and policy are optimized iteratively in an inner policy improvement loop. As far as I see it, this is independent of the presented theory. \n- The L2 norm is used to learn the model instead of the squared L2 norm. This is inspired by the theory.\n\nMore comments below:\n- I was confused by section 4.2. Could you please explain why the transformation is needed and how it is used? As I understand, this is not used at all in the algorithm any more? 
So what is the advantage of this derivation in comparison to Eq 4.6?\n- Please explain in more detail what the effects are from relaxing the assumptions for the algorithm? I assume none of the monotonic improvement results can be transferred to the algorithm?\n- Could you elaborate why the algorithm was not implemented as suggested by Section 4? Is the problem that the algorithm did not perform well or that the discrepancy measure is hard to compute?\n- For the presented algorithm, the discrepancy does not depend on the policy any more. I did not understand why the iterative optimization should be useful in this case.\n- The theory suggests that we have to do a combined optimization of the lower bound. However, effectively, the algorithm optimizes the policy over V and the model over the L2 multi-step prediction loss. The difference to a standard model-based RL algorithm is minor and the many advantages of the nice theoretical framework are lost.\n- The only difference between Algo 3 and Algo 2 seems to be the additional for loop. As I said, it's not clear to me why this should be useful as the optimization problems are independent of each other (except for the trajectories, but the model does not depend on the policy). Did you try Algo 3 with the same amount of Adam updates as Algo 2 (could be that I missed that)?\n\n ", "This paper proposed a new class of meta-algorithm for reinforcement learning and proved the monotone improvement for a local maximum of the expected reward, which could be used in the deep RL setting. The framework seems to be quite general but does not include any specific example, like what non-linear dynamical models could be included in detail, and will this framework cover the classical MDP setting? In theory, the dynamical model needs to be L-Lipschitz. So which dynamical model in reality could satisfy this assumption? It seems that the focus of this paper is the theoretical side. But the only guarantee is the non-decreasing value function of the policy. In RL, people may care more about the regret or sample complexity. Previous model-based work with simpler models can already have such strong guarantees, such as linear dynamics (Y. Abbasi-Yadkori and Cs. Szepesvari (2011)) and MDPs (Agrawal and Jia (2017)). What kind of new insights will this framework give when the model reduces to a simpler one (a linear model)?\n\nIn the practical implementation, the authors designed a Stochastic Lower Bound Optimization. Is there any convergence rate guarantee for this stochastic optimization? And also, a neural network is used for deep RL. So there is also no guarantee for the actual algorithm which is used?\n\nMinor:\n\n1. In (3.2), what norm is considered here?\n2. On page 4, the authors mentioned their algorithm can be viewed as an extension of the optimism-in-face-of-uncertainty principle to the non-linear parameterized setting. This is a little bit confusing. How can this algorithm be viewed as following the OFU principle? How does it recover the result in the linear setting (Y. Abbasi-Yadkori and Cs. Szepesvari (2011))?\n3. The organization could be more informative. For example, Section 1 has 13 paragraphs but without any subsection.\n\nY. Abbasi-Yadkori and Cs. Szepesvari, Regret Bounds for the Adaptive Control of Linear Quadratic Systems, COLT, 2011.\nShipra Agrawal and Randy Jia. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. NIPS, 2017" ]
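The alternating model/policy inner loop and the unsquared L2 multi-step prediction loss discussed in the responses above can be summarized in a short sketch. This is a minimal illustration, not the authors' implementation: the network sizes, horizon, learning rates, toy reward, and the surrogate policy objective (the paper uses TRPO-style updates on imaginary rollouts) are assumptions; only the interleaving pattern, the unsquared L2 norm, the feeding of predictions back as inputs, and the 200 model Adam updates per outer iteration come from the text.

```python
import torch
import torch.nn as nn

S, A_DIM, H = 8, 2, 5   # state dim, action dim, prediction horizon (assumed)

model = nn.Sequential(nn.Linear(S + A_DIM, 64), nn.ReLU(), nn.Linear(64, S))
policy = nn.Sequential(nn.Linear(S, 64), nn.Tanh(), nn.Linear(64, A_DIM))
opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-4)

def multi_step_l2(states, actions):
    # H-step prediction loss with an UNsquared L2 norm per step; each
    # prediction is fed back in as the input for the next step.
    s_hat, loss = states[:, 0], 0.0
    for t in range(H):
        s_hat = s_hat + model(torch.cat([s_hat, actions[:, t]], dim=-1))
        loss = loss + torch.norm(s_hat - states[:, t + 1], dim=-1).mean()
    return loss / H

def imaginary_value(start_states):
    # Placeholder surrogate: roll the policy through the learned model and
    # score visited states (a stand-in for the imaginary value V^{pi, M}).
    s, ret = start_states, 0.0
    for _ in range(H):
        s = s + model(torch.cat([s, policy(s)], dim=-1))
        ret = ret - (s ** 2).sum(dim=-1).mean()   # toy reward (assumed)
    return ret

states = torch.randn(32, H + 1, S)     # stand-ins for real rollout data
actions = torch.randn(32, H, A_DIM)
for _ in range(4):                     # inner loop: interleave both updates
    for _ in range(200):               # 200 model Adam steps, per the reply
        opt_model.zero_grad()
        multi_step_l2(states, actions).backward()
        opt_model.step()
    opt_policy.zero_grad()
    (-imaginary_value(states[:, 0])).backward()
    opt_policy.step()
```

Because the policy step sees whichever stochastic model iterate the loop has produced so far, the learned policy must be robust to a family of models rather than a single converged one, which is the regularizing effect the authors emphasize.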
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "SyxTcTgq3Q", "H1g1w5th6X", "iclr_2019_BJe1E2R5KX", "iclr_2019_BJe1E2R5KX", "HJeBddsh37", "rJgrRkSDTX", "rJgrRkSDTX", "SyxTcTgq3Q", "HJeBddsh37", "iclr_2019_BJe1E2R5KX", "iclr_2019_BJe1E2R5KX" ]
iclr_2019_BJeOioA9Y7
Knowledge Flow: Improve Upon Your Teachers
A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model. To address this issue, in this paper, we develop knowledge flow which moves ‘knowledge’ from multiple deep nets, referred to as teachers, to a new deep net model, called the student. The structure of the teachers and the student can differ arbitrarily and they can be trained on entirely different tasks with different output spaces too. Upon training with knowledge flow the student is independent of the teachers. We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other ‘knowledge exchange’ methods.
accepted-poster-papers
The authors have taken inspiration from recent publications that demonstrate transfer learning over sequential RL tasks and have proposed a method that trains individual learners from experts using layerwise connections, gradually forcing the features to distill into the student with a hard-coded annealing of coefficients. The authors have done thorough experiments and the value of the approach seems clear, especially compared against progressive nets and pathnets. The paper is well-written and interesting, and the approach is novel. The reviewers have discussed the paper in detail and agree, with the AC, that it should be accepted.
train
[ "Bkl3Lcd80m", "rklBpc_LC7", "r1liuBG50X", "r1lPW7iI07", "BkgiFx9nnQ", "BJlDEqbKA7", "ByeCKHTOA7", "SkgAysOIRm", "rkgVy1xs2m", "Byx-vLP5hQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Updated: Changed section numbers to fit latest revision.\n---------------------------------------------------------------------------\nWe thank the reviewer for time and feedback.\n\nRe 1: Use teachers with different architectures from the student. \nIn additional experiments, following the suggestion of the reviewer, we use architectures for the teacher which differ from the student model. The results are summarized in Fig. 10 and discussed in Sec. 7.4. We observed that learning with teachers, whose architecture differs from the student, to have similar performance as learning with teachers which have the same architecture. Consider Fig.10 (a) as an example, where the target task is KungFu Master, and the teachers are experts for Seaquest and Riverraid. At the end of training, learning with teachers of different architectures achieves an average reward of 37520, and learning with teachers of the same architecture achieves an average reward of 35012. More results are shown in Figs. 10 (b, c). The results illustrate that Knowledge Flow can benefit from the knowledge of teachers, and thus achieve higher rewards, even if the teachers and the student architectures differ. \n\nRe 2: Importance of KL term. \nThe KL term prevents the student’s output distribution over actions or labels to change too much when the teachers’ influence is decreasing. To investigate the importance of the KL term, we conduct an ablation study where the KL coefficient (\\lambda2) is set to zero. The results are summarized in Fig. 9 and discussed in Sec. 7.3.2. Considering Fig. 9 (a) as an example, where the target task is MsPacman and the teachers are Riverraid and Seaquest experts. Without the KL term the rewards drop drastically when the teacher’s influence decreases. In contrast, we don’t observe this performance drop with a KL term. At the end of training, learning with a KL term achieves an average reward of 2907 and learning without the KL term achieves an average reward of 1215. More results are presented in Figs. 9 (b, c), which show that training with the KL term achieves higher reward than training without the KL term. \n\nRe 3: Use of an average network as \\theta_{old}.\nAn average network, i.e., exponential averaging can be used to obtain \\theta_{old}. To investigate how usage of an average network for \\theta{old} affects the performance, we conduct an experiment setting \\theta_{old} to be the exponential running average of the model weight. More specifically, \\theta_{old} is updated as follows: \\theta_{old} \\leftarrow \\alpha * \\theta_{old} + (1 - \\alpha) * \\theta, where \\alpha = 0.9. The results are summarized in Fig. 11 and discussed in Sec. 7.5. We observed that using an exponential average to compute \\theta_{old} results in very similar performance to using a single model. Consider Fig.11 (a) as an example, where the target task is Boxing and the teacher is a Riverraid expert. At the end of training, computing \\theta_{old} via an exponential average achieves an average reward of 96.2 and using a single parameter to set \\theta_old achieves an average reward of 96.0. More results on using an exponential average to compute \\theta_{old} are shown in Figs. 11 (b, c).\n\n", "Updated: Changed section numbers to fit latest revision.\n---------------------------------------------------------------------------\nWe thank the reviewer for time and feedback.\n\nRe 1: plot p_w values for C10/C100 dataset. \nIn the newly added Fig. 4 and the corresponding discussion (Sec. 
4.2), we plot the weight (p_w) for teachers and the student in the C10/C100 experiment, where C100 and SVHN experts are teachers. As expected and intuitively, the C100 teacher should have a higher p_w value than the SVHN-based teacher, because C100 is more relevant to C10. The plot verifies this intuition: p_w of the C100 teacher is higher than that of the SVHN teacher during the entire training. Both teachers’ normalized weights approach zero at the end of training. \n\nRe 2: verify Knowledge Flow is not just NAS. \nAs the reviewer pointed out, one key difference between NAS and Knowledge Flow is that a student in Knowledge Flow benefits from teachers’ knowledge. To verify that the student really benefits from the knowledge of teachers, we conduct the ablation study suggested by the reviewer. In the newly added experiment, discussed in Sec. 7.3.1 and summarized in Fig. 8, the teachers are models that haven’t been trained at all. Intuitively, learning with untrained teachers should have worse performance than learning with knowledgeable teachers. Our experiments verify this intuition. Considering Fig. 8 (a), where the target task is Hero, learning with untrained teachers achieves an average reward of 15934, while learning with knowledgeable teachers (experts of Seaquest and Riverraid) achieves an average reward of 30928. Consistent with all other experiments, we average over five runs. More results are presented in Fig. 8 (b, c). The results show that Knowledge Flow achieves higher rewards than NAS in different environments and teacher-student settings.\n\nRe 3: training teacher networks jointly. \nWe did try to train teachers jointly with students. However, as the reviewer mentioned, the memory usage is large and training is very slow. Up until now, we didn’t observe any improvements. \n\nRe 4: memory requirement for matrices Q. \nThe upper bound on the number of Q matrices in our framework is O(L*M*T). In practice, we don’t link a student’s layer to every layer of a teacher network. For example, we observed that linking a teacher’s bottom layer to a student’s top layer generally doesn’t yield improvements. Intuitively, a teacher’s bottom layer features are very likely irrelevant to a student’s top layer features. Therefore, in practice, we recommend linking one teacher layer to one or two student layers, in which case the space complexity is O(L*T).\n\nRe 5: Captions of Table 1 and Table 2.\nWe updated the captions of Table 1 and Table 2. \n\nRe 6: Shorten paragraph 2 and paragraph 3.\nWe felt shortening paragraphs 2 and 3 would remove the motivation of this work. Shortening the related work section wouldn’t do justice to our peers. Therefore, at this point we prefer to maintain the current writing unless the majority of the reviewers and the AC feel strongly about shortening.\n", "Thanks a lot for additional time and feedback.\n\nRe 4: We added the comment regarding space complexity to Sec. 3.2 of the main paper.\n\nRe 6: We moved the detailed treatment of related work to the appendix and provide a shortened version in the main paper. We also moved Fig. 7 and the corresponding text to Sec. 4.2 of the main paper.\n", "2 and 3 are the same.\n\nMultiple-task learning approaches are rife in this area (see e.g. https://en.wikipedia.org/wiki/Multi-task_learning, and citations therein). This huge body of work establishes that using a proper regularisation scheme is central. The intuition in the present paper seems to align with those ideas. 
But since those are so standard by now, the authors can be expected to make the connection explicit. Note that the idea of 'lifelong learning' as cited does acknowledge this connection.\n\nMulti-task learning for DNNs is a standard theme (especially in this conference), and it is not clarified how this work relates to/improves over this body of work. One way to address this issue is to report empirical results on a standard benchmark (such as MNIST).\n\nThe introductory text (ch. 2) is not quite correct (especially the RL needs care), but can be patched up by citing relevant introductory texts (what is random, etc.) and adhering to their notation.\n\n", "This paper proposes a new set of heuristics for learning a NN for generalising a set of NNs trained for more specific tasks. This particular recipe might be reasonable, but the semi-formal flavour is distracting. The issue of model selection (clearly the main issue here) is not addressed. A quite severe issue with this report is that the authors don't report relevant learning results from before (+-) 2009, and empirical comparisons are only given w.r.t. other recent heuristics. This makes it impossible for me to advise publication as is.", "The new additions to the paper are very welcome, and definitely make the paper stronger in my opinion.\n\nRe 4: I recommend the authors include this statement somewhere in the paper/appendix.\n\nRe 6: If the authors feel paragraphs 2 & 3 are critical to motivate this work, then I still think you can instead shorten the parts of the related work section to make the information there less redundant. In my opinion, the main text would be better if you made space for Fig 7 from the appendix and the relevant text description, by reducing the redundancy in descriptions of the alternative methods and their shortcomings.", "Thanks a lot for additional time, feedback and clarifications.\n\nRe 1: Multi-task learning. \nNote that the challenge we address differs from multi-task learning. In multi-task learning, multiple tasks are addressed at the same time. In contrast, `Knowledge Flow’ focuses on a single task. Hence, what multi-task learning and `Knowledge Flow’ have in common is a transfer of information. However, in multi-task learning, information extracted from different tasks is shared to boost performance, while, in `Knowledge Flow,’ the information of multiple teachers is leveraged to help a student better learn a single, new, previously unseen task. We updated Section 5 to clarify the connection and differences.\n\nRe 2: Notation of Section 2. \nWe follow the notation of Mnih et al. (2016), i.e., the expectation is taken with respect to a trajectory \\tau = ({x_t, a_t, r_t}, {x_{t+1}, a_{t+1}, r_{t+1}}, ...) generated by following the policy \\pi. We clarified this and updated Sections 2 and 3. \n", "We thank the reviewer for their time and feedback. We think the questions aren’t precise enough for us to act upon:\n1. We’d appreciate it if the reviewer could point out the parts that are, in the reviewer’s opinion, `semi-formal’. We are more than happy to revise the text but are currently left guessing, particularly since another reviewer points out that the paper is `well written.’ \n2. We compare to recent baselines, in particular state-of-the-art methods like PNN and PathNet. If the reviewer would specify which papers from before 2009 we should compare to, we are very happy to include a statement, assuming that PNN and/or PathNet or their predecessors haven’t compared to those already. \n3. 
To the best of our knowledge, the two baselines (PNN and PathNet) we compare with are the state-of-the-art RL transfer frameworks.\n", "This paper proposes to feed the representations of various external \"teacher\" neural networks of a particular example as inputs to various layers of a student network. \nThe idea is quite intriguing and performs very well empirically, and the paper is also well written. While I view the performance experiments as extremely thorough, I believe the paper could possibly use some additional ablation-style experiments just to verify the method actually operates as one intuitively thinks it should. \n\nOther Comments:\n\n- Did you verify that in Table 3, the p_w values for the teachers trained on the more-relevant C10/C100 dataset are higher than the p_w value for the teacher trained on the SVHN data? It would be interesting to see the plots of these p_w over the course of training (similar to Fig 1c) to verify this method actually operates as one intuitively believes it should.\n\n- Integrating the teacher-network representations into various hidden layers of the student network might also be considered some form of neural architecture search (NAS) (by including parts of the teacher network into the student architecture). \nSee for example the DARTS paper: https://arxiv.org/abs/1806.09055\nwhich similarly employs mixtures of potential connections. \nUnder this NAS perspective, the dependence loss subsequently distills the optimal architecture network back into the student network architecture.\n\nHave you verified that this method is not just doing NAS, by for example, providing a small student network with a few teacher networks that haven't been trained at all? (i.e. should not permit any knowledge flow)\n\n- Have the authors considered training the teacher networks jointly with the student? This could be viewed as teachers learning how to improve their knowledge flow (although might require large amounts of memory depending on the size of the teacher networks).\n\n- Suppose we have an L-layer student network and T M-layer teacher networks.\nDoes this imply we have to consider O(L*M*T) additional weight matrices Q?\nCan you comment on the memory requirements?\n\n- The teacher-student setup should be made more clear in Tables 1 and 2 captions (took me some time to comprehend).\n\n- The second and third paragraphs are redundant given the Related Work section that appears later on. I would like to see these redundancies minimized and the freed up space used to include more results from the Appendix in the main text. \n", "This paper presents a method for distilling multiple teacher networks into a student, by linearly combining feature representations from all networks at multiple intermediate layers, and gradually forcing the student to \"take over\" the learned combination. Networks to be used as teachers are first pretrained on various initial tasks. A student network is then trained on a target task (possibly different from any teacher task), by combining corresponding hidden layers from each teacher using learned linear remappings and weighted combinations. Learning this combination allows the system to find appropriate teachers for the target task; eventually, a penalty on the combination weights forces all weight onto the student network, resulting in the distillation.\n\nApplications to both reinforcement learning (atari game) and supervised image classification (cifar, svhn) are evaluated. 
The reinforcement learning application is particularly fitting, since combining tasks together is less straightforward in this domain.\n\nI wonder whether any experiments were performed where the layer correspondence between teacher models was less clear --- say, using teachers with different architectures. Figure 1(a) (different teacher archs) as well as the text (\"candidate set\" on p.4) indicate this is possible, but the experiment details describe combinations of same-architecture teachers only.\n\nIn addition, I would have liked to see some further exploration of the KL term and use of \"theta_old\". This seems potentially important, and also has ties to self-ensembling through teachers with exponential weight averaging. Could an average network also be used here? And how important is this term in linking student to teachers as the weights change?\n\nOverall, I find this a very interesting approach. Rather than training a large joint model on multiple tasks simultaneously as a transfer initialization, this approach uses models already fully trained for different tasks. This results in a potentially advantageous trade-off: One no longer needs to carefully calibrate the different tasks and common task components in a joint model, but at the cost of requiring inference through multiple teachers when training the student.\n" ]
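The exponential-average variant of \theta_{old} described in the first response of this record reduces to a one-line parameter update. A minimal sketch, assuming a PyTorch module as a stand-in for the student network; only the update rule and \alpha = 0.9 come from the text:

```python
import copy
import torch
import torch.nn as nn

student = nn.Linear(10, 4)     # placeholder for the actual student network
old = copy.deepcopy(student)   # holds theta_old

@torch.no_grad()
def update_theta_old(old_net, net, alpha=0.9):
    # theta_old <- alpha * theta_old + (1 - alpha) * theta
    for p_old, p in zip(old_net.parameters(), net.parameters()):
        p_old.mul_(alpha).add_(p, alpha=1.0 - alpha)

# Called once per training step, after the optimizer update:
update_theta_old(old, student, alpha=0.9)
```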
[ -1, -1, -1, -1, 6, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, 5, 4 ]
[ "Byx-vLP5hQ", "rkgVy1xs2m", "BJlDEqbKA7", "SkgAysOIRm", "iclr_2019_BJeOioA9Y7", "rklBpc_LC7", "r1lPW7iI07", "BkgiFx9nnQ", "iclr_2019_BJeOioA9Y7", "iclr_2019_BJeOioA9Y7" ]
iclr_2019_BJeWUs05KQ
Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information
The use of imitation learning to learn a single policy for a complex task that has multiple modes or hierarchical structure can be challenging. In fact, previous work has shown that when the modes are known, learning separate policies for each mode or sub-task can greatly improve the performance of imitation learning. In this work, we discover the interaction between sub-tasks from their resulting state-action trajectory sequences using a directed graphical model. We propose a new algorithm based on the generative adversarial imitation learning framework which automatically learns sub-task policies from unsegmented demonstrations. Our approach maximizes the directed information flow in the graphical model between sub-task latent variables and their generated trajectories. We also show how our approach connects with the existing Options framework, which is commonly used to learn hierarchical policies.
accepted-poster-papers
This paper proposes an approach for imitation learning from unsegmented demonstrations. The paper addresses an important problem and is well-motivated. Many of the concerns about the experiments have been addressed with follow-up comments. We strongly encourage the authors to integrate the new results and additional literature into the final version. With these changes, the reviewers agree that the paper exceeds the bar for acceptance. Thus, I recommend acceptance.
val
[ "S1gRVWWc0m", "HkehIf6w0Q", "rJxs7IUPAQ", "rklVVJy5pX", "SJx_MRAKaX", "ryeTN6CYpm", "H1l3BLbJpm", "rJel6IwA3Q", "Ske4Ltaws7" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "For completeness, here is the table of results on the FetchPickandPlace-v1 environment with results of the VAE baseline included:\n\nDirected Info GAIL + L2 loss: Mean = -9.47, Std dev. = 4.84\nGAIL + L2 loss: Mean = -12. 05, Std dev. = 4.94\nDirected-Info GAIL: Mean = -11.74, Std dev. = 5.87\nGAIL: Mean = -13.29, Std dev. = 5.84\nVAE: Mean = -14.07, Std dev. = 5.57", "We would like to thank the reviewers for their constructive feedback on our paper. We are encouraged by the positive reviews. The reviewers noted that our work makes a relevant contribution and that our approach is novel and interesting. They agreed unanimously that the paper is clearly written and well motivated. R1 and R2 recommended performing experiments on more complicated benchmark tasks. Given this feedback, we performed further experiments on a challenging recent manipulation environment from OpenAI gym. We discuss the results of these experiments in detail in a comment to R2 and have also added these details to the appendix section of the paper. Our results clearly demonstrate the merits of our approach over state-of-the-art baselines. We will integrate these new results and include the suggested improvements to the literature review in the final version of the paper.\n\nWe hope that the reviewers and the area chair will take these new experiments into account when assessing the final scores.", "As mentioned in our previous comment, we observed that agents trained with both GAIL and our proposed approach failed to learn to grasp the object, pointing to a need for stronger supervision, especially at states close to the grasping states. We performed some more experiments, where we provide this supervision by training the policy to minimize the L2 distance between the policy action and the expert action on states in the expert demonstrations.\n\nAt every training step, we compute the discriminator and policy (generator) gradient using the Directed-Info GAIL (or in the baseline, GAIL) loss using states and actions generated by the policy. Along with this gradient, we also sample a batch of states from the expert demonstrations and compute the policy gradient that minimizes the L2 loss between actions that the policy takes at these states and the actions taken by the expert. We weigh these two gradients to train the policy.\n\nThe mean and standard deviation of returns on 100 episodes is as follows (higher is better)- \n\nDirected Info GAIL + L2 loss: Mean = -9.47, Std dev. = 4.84\nGAIL + L2 loss: Mean = -12. 05, Std dev. = 4.94\nDirected-Info GAIL: Mean = -11.74, Std dev. = 5.87\nGAIL: Mean = -13.29, Std dev. = 5.84\n\nAdding the L2 measure as an additional loss led to significant improvement. Our proposed approach Directed-Info GAIL + L2 loss outperforms the baseline. Moreover, we believe that this quantitative improvement does not reflect the true performance gain obtained using our method. The reward function is such that a correct grasp but incorrect movement (e.g. motion in the opposite direction or dropping of the object) is penalized more than a failed grasp. Thus, the reward function does not capture the extent to which the task was completed. \n\nQualitatively, we observed a much more significant difference in performance between the proposed approach and the baseline. 
This can be seen in the sample videos of the success and failure cases for our method and the baseline at https://sites.google.com/view/directedinfo-gail/home#h.p_qM39qD8xQhJQ \n\nWe observed that our proposed method succeeds much more often than the baseline method. The most common failure cases for our method include the agent picking up the object, but not reaching the goal state before the end of the episode, moving the object to an incorrect location or dropping the object while moving it to the goal. Agents trained using GAIL + L2 loss, on the other hand, often fail to grasp the object, either not closing the gripper or closing the gripper prematurely. We believe that our approach helps the agent alleviate this issue by providing it with the sub-task code, helping it disambiguate between the very similar states the agent observes just before and just after grasping.", "Thank you for your feedback on our paper.\n\n1. The latent codes can be categorical or continuous, single or multi-dimensional. Since our approach utilizes a VAE, any distribution which allows for sampling using the reparameterization trick can be used. Our approach does not put any additional constraints. While we reported results with categorical latent codes, we also experimented with continuous variables with multi-dimensional Gaussian priors in early experiments. Using continuous latent variables often allowed faster training and lower loss for the VAE step due to their rich representational power. However, using continuous variables as options has a drawback because the network can select very different values of context for seemingly identical sub-tasks in different states. This goes against the idea of having sub-task specific policies, as now the latent codes become much more susceptible to the state as opposed to the sub-task. By discretizing the context into 1-of-n values using categorical variables, we found that the sub-tasks correlate much better with sub-tasks that are intuitive to humans.\n\n2. Thank you for pointing us to the line of work on Discovery of Deep Options (DDO). While very relevant, their approach is different from our proposed approach and is similar to the work from Daniel et al. we cited. DDO proposes to extend the EM-based approach to multiple levels of option hierarchies. Their work on Discovery of Deep Continuous Options allows the option policy to also select a continuous action in states where none of the options are applicable. Here, we would like to point out that the title of the paper is somewhat misleading since the options are still modeled as categorical variables in their paper and are not continuous. Note that their approaches belong to the domain of behavior cloning. In contrast, we propose a method to integrate GAIL with the options framework. GAIL and other works that build on it use Inverse Reinforcement Learning to learn policies as well as rewards, overcoming problems such as compounding errors, and have been shown to need fewer expert demonstrations than behavior cloning. Moreover, our proposed approach can also be extended to multi-level hierarchies (e.g., by learning VAEs with multiple sampling layers) or hybrid categorical-continuous macro-policies (e.g., using both categorical and continuous hidden units in the sampling layer of the VAE). We will add this discussion to the paper.\n\n3. We would like to clarify that Eq 1 is not the loss used in Info-GAIL. The graphical model used in Info-GAIL (Fig. 1 left) does not model sub-tasks, while in contrast Fig. 
1 (right) shows the graphical model that we propose to use. This enables us to model expert demonstrations as an interaction between sub-tasks and their resulting state-action trajectory, and to learn re-usable sub-task specific policies. Eq 1 is the loss function under this graphical model when using mutual information. The equation can be modified to have a Markov assumption where c_{t} only depends on c_{t-1}. However, the dependence on future states still remains since that is by ‘definition’ of mutual information, which cannot be altered using a Markov assumption. A Markov assumption only allows us to remove dependence on the past (not the future) given the most recent history. The assumption you propose is precisely the effect of utilizing directed information. We will make this point clearer.\n\nWe agree that a recurrent model better captures the mathematical intuition. In the room and circle world tasks, the encoder (but not the decoder/policy) did get a history of the previous 5 time steps as an approximation for the trajectory until the current time step. In the Mujoco environments, we found that the state was representative enough to not make a difference in practice.\n\n4. While we agree that generating unseen gaits would be ideal, unfortunately, the one time-step option paradigm makes unseen composition hard. This difficulty is further compounded by demonstrations with unnatural and asymmetric gaits (e.g. the two feet of the agent do not support the agent’s motion equally and play different roles). We would also argue that it is not easy to decompose a walk into 2 separate limping motions, just as two independent limping movements do not fully constitute a walk. However, we provide more evidence that the policy does indeed use the different latent variables to perform different sub-tasks. We have uploaded two new videos at https://sites.google.com/view/directedinfo-gail/home#h.p_cEMQy28s4Jkb where we give just one latent code to the policy in each video - the code to put the pink leg down in video 1, and the code to use the brown leg in video 2. As can be seen, both latent codes give rise to different behaviors. Please also see the response to R2 on experiments on environments with clearer hierarchies.\n\n5. This is by the rule P(A, B) = P(A|B)P(B). We will make the steps in the derivation clearer.", "Thank you for your constructive comments on the paper.\n\n1. Implementation details and hyper-parameter settings can be found in Table 2 and Section A.2 of the appendix. The networks were trained for different numbers of iterations, and with different batch sizes for each environment, as listed in the table. The networks were 2-layer MLPs, trained using Adam optimization with a learning rate of 3e-4. The VAEs were trained on the expert trajectories using the Gumbel-softmax trick with an exponential decay in temperature. The initial temperature was set to 5, and was decayed to around 0.1 by the end of the training. We used Proximal Policy Optimization for the policy updates while optimizing the Directed-Info loss. Batch sizes for all environments are listed in Table 2. In all experiments, the number of expert episodes was selected to approximately have an equal number of generated and expert state-action pairs in a batch. The lambda parameter was set differently for each environment, and these settings can be seen in Table 2 (posterior lambda column). We used 4 latent codes in the room environment, 2 in circle world, and 3 in each of the Mujoco environments. 
We used 5 different seeds in the OpenAI Gym environments. The results were computed by averaging over 300 episodes.\n\n2. While we did not try pre-training c_{t}s as continuous variables, we use temperature annealing, starting with a high initial temperature of 5, which is decreased over the epochs of VAE training to around 0.1 (this schedule is sketched in code after the reviews below). This means that during the initial epochs of the training, the latent variables are continuous, and only later in the training do they approximate categorical variables. Also, as noted in the response to R1, we tried using continuous latent variables in early experiments. Although this led to a lower L2 loss during VAE training, the high representational power of continuous variables meant that the network learned to assign different latent codes to sub-tasks which were intuitively similar but in different states. Since our goal was to learn sub-task specific policies, we switched to using discrete latent variables. By forcing the network to use only 1-of-n possible codes, the network was forced to assign the same code to similar behaviors, even if they occur in different states. \n\n3. We did try using variational RNNs during early experiments on simple discrete environments (that we did not report here). We did not find much advantage in using recurrent models over providing state history to an MLP in those environments, and hence all later experiments were done using feed-forward architectures with a history of appropriate length.\n\n4. No, we haven't tried training our models on pixels for the continuous control tasks. However, in principle, this can be done using convolutional variational autoencoders during the VAE step and then using CNNs for the generator and discriminator (similar to a Deep Convolutional GAN).\n\nExperiments on OpenAI Robotics environments - Following suggestions, we tried to test the baselines and our approach on the FetchPickandPlace task in OpenAI Gym. While our method was able to learn to segment the expert demonstrations into the Pick and Place sub-tasks correctly, as can be seen in the videos at https://sites.google.com/view/directedinfo-gail/home#h.p_4dsbuC5expkZ , neither our approach nor GAIL was able to successfully complete the task. In our preliminary results, we found that the robot, in both our proposed approach and GAIL, would reach the object but fail to grasp it despite repeated attempts. To the best of our knowledge, no other work has successfully trained GAIL on this task either. Our preliminary experiments seem to suggest that stronger supervision may be necessary to teach the agent the subtle action of grasping.\n\nExperiments on problems with hierarchical structure - We had also tried some experiments on tasks with clearer hierarchical structure prior to submission. We constructed rewards for a monoped agent to perform three different sub-tasks: walk forward, walk backward, and jump. Then, we trained RL agents to perform 2 of these sub-tasks one after the other in an episode. We found that RL agents with MLP policies trained using PPO and DDPG failed to learn any combination of these sub-tasks. We were able to train agents using phase-functioned policies [1, 2] to perform 4 of 6 combinations. However, we found that the gait of the agent was strongly dependent on the ordering of the sub-tasks. This made identifying common sub-tasks hard. We found training phase policies for imitation learning with such noisy segmentations to be challenging. 
We believe that this is beyond the scope of the paper and should be left to future work.\n\n[1] Phase-Functioned Neural Networks for Character Control. ACM Transactions on Graphics, 2017\n[2] Phase-Parametric Policies for Reinforcement Learning in Cyclic Environments. AAAI 2018", "Thank you for your encouraging comments on the paper.\n\nWe agree that further investigation of the dependence on the number of latent variables will be useful. Empirically, we found that the VAE often learns to ignore excess latent codes when the number of latent variables is close to the actual number of sub-tasks. For example, in the hopper and walker tasks, even when the latent code size is set to 4, the VAE ends up only utilizing 3 codes. In the manipulation experiments we did at the suggestion of R2, when using 3 or 4 latent codes, the VAE only uses 2. These observations motivated us to perform the analysis that we report in the appendix. However, we agree that future work should analyse this further.\n\nThe problem of discovering meaningful sub-tasks is certainly an interesting open problem. Using intuition-driven loss functions, as we did in the circle world experiment, could be one way to allow networks to find sub-tasks meaningful to humans. Exploring other ways of introducing problem structure is definitely an important future direction.\n", "The paper presents a learning-based method for learning the latent context codes from demonstrations along with a GAIL model. \nThis amounts to learning the option segments and the policies simultaneously. \nThe main contribution is to model the problem as a time-dependent context and then use a directed information flow loss instead of the mutual information loss.\n\n1. What is the effect of the model of the underlying distribution of latent codes? \nCan it be categorical only, or can it be continuous? \nCould we also model it as multidimensional?\nThe current results only provide a single-dimensional categorical distribution as latent codes. \n\n2. The paper missed an important line of work which solves nearly the same problem -- option discovery and policy learning. \nKrishnan -- Discovery of Deep Options (1703.08294). This work was used by the authors in continuous options and then again for program generation (https://openreview.net/pdf?id=rJl63fZRb). \n\nThey explicitly infer the option parameters, along with termination conditions, with the Expectation Propagation method. \nThe results are in very similar domains, hence comments, if not a comparison, would be useful. \n\n3. The authors state that the main problem with an InfoGAIL-style method is dependence on the full trajectory as in eq 1. Hence the directed info flow is required to solve the problem. However, in the actual model, the authors make a sequence of variational approximations -- (a) reduction of eq2 to eq1 with a variational lower bound on the posterior p(c|c,\\tau) and then replacing the prior p(c) with q(c|c,\\tau) in eq 5. But looking at the model diagram in Fig. 2, the VAE actually makes the Markovian assumption -- i.e. c only depends on c_{t-1} and s_{t}. If that is true, then how would this be very different from the InfoGAIL mutual information loss? \nIt appears that, to capture the authors' mathematical intuition, the VAE should have a recurrent generator with a hidden state passed in to capture dependence on the history up to the current time. \n\n3a. In fact, the first term in eq 6 looks closer to the actually used model. If that is not true, then the authors should clarify. \n\n4. 
Experiments do capture the notion of option discovery. But the simplicity of the data leaves much to be desired. \nOne of the main differences of this work in comparison to unsupervised segmentation models such as GMMs or BP-AR-HMM is the fact that the options learned are composable. But the authors only show this composability on the circle domain -- which is arguably a toy-domain. \nA reasonable confirmation that the model indeed learns composition is to generate a trajectory for a sequence of latent codes not seen in data -- like walking -- normal -- left-right-left can be converted to a limping gait -- left-left-right-right. This is only a suggestive example. \n\n5. In appendix eq 8, how is the reduction from line 3 to line 4 of the equation made -- what is the implicit assumption? \nThe joint distribution p(c, \\tau) is written out as p(\\tau|c) p(c) without an integral.\n", "This paper proposes an extension over the popular GAIL method for imitation learning for multi-modal data or tasks that have hierarchical structure in them. To achieve that, the paper introduces an unsupervised variational objective by maximizing the directed mutual information between the latents c’s and the trajectories. The advantage of using directed information instead of a regular MI-based criterion is two-fold: 1) Being able to express the causal and temporal dependencies among the c’s changing across time. 2) Being able to learn a macro-policy without needing to condition on the future trajectories. The authors present results both on continuous and discrete environments.\n\nQuestions: \n1) Can you give more detailed information about the hyperparameters of your model? For example, how many seeds have you used?\n2) Have you tried pre-training c_t’s as continuous latent variables?\n3) Have you tried pre-training your model as a Variational RNN instead of a VAE?\n4) Have you tried training your model on the pixels on the continuous control tasks?\n\nPros:\n* Although the approach bears some similarity to the Info-GAIL approach, the idea of using directed information for GAIL is novel and very interesting. This approach can be in particular useful for the tasks that have a hierarchical structure. \n* The paper is very well-written; the goal and motivation of the paper are quite clear.\n\nCons:\n* Experiments are quite weak. Both the discrete and the continuous environment experiments are conducted on very simplistic and toyish tasks. There are much more complicated and modern continuous control environments such as control suite [1] or manipulation suite [2]. In particular, tasks where there is a clearer hierarchy would be interesting to investigate.\n* Experimental results are underwhelming. For example, in Table 1, the results of the proposed approach are only barely better than the baseline.\n\n[1] https://github.com/deepmind/dm_control\n[2] Learning by Playing-Solving Sparse Reward Tasks from Scratch, M Riedmiller, R Hafner, T Lampe, M Neunert et al - arXiv preprint arXiv:1802.10567, 2018\n\n", "The paper describes a new learning framework, based on generative\nadversarial imitation learning (GAIL), that is able to learn sub-task\npolicies from unsegmented demonstrations. In particular, it follows\nthe ideas presented in InfoGAIL, which depends on a latent variable,\nand extends them to include a sequence of latent variables representing\nthe sequence of different subtasks. The proposed approach uses a\npre-training step, based on a variational auto-encoder (VAE), to\nestimate latent variable sequences. 
The paper is well written and\nrelates the approach to the Options framework. It also shows,\nexperimentally, its performance against current state-of-the-art\nalgorithms. \n\nAlthough the authors claim in the appendix that the approach is\nrelatively independent of the dimensionality of the context variable,\nthis statement needs further evidence. The approach is similar to HMMs,\nwhere the number of hidden states or latent variables can make a\ndifference in the performance of the system.\n\nAlso, it seems that the learned contexts do not necessarily correspond\nto meaningful sub-tasks, as shown in the circle-world. In this sense,\nit is not only relevant to determine the \"right\" size of the context\nvariable, but also how to ensure a meaningful sub-task segmentation. \n" ]
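The reviews above turn repeatedly on how the per-timestep latent context c_t is modeled (categorical vs. continuous, and whether the posterior is Markovian, i.e. q(c_t | c_{t-1}, s_t)). A minimal sketch of such a posterior network may help make the object of that discussion concrete; the module, dimensions, and hidden size below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextPosterior(nn.Module):
    """Markovian posterior q(c_t | c_{t-1}, s_t) over categorical context
    codes, as debated in the reviews above. All names/sizes are hypothetical."""
    def __init__(self, state_dim, num_codes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_codes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_codes),
        )

    def forward(self, s_t, c_prev_onehot):
        # Condition on the current state and the previous (one-hot) code.
        logits = self.net(torch.cat([s_t, c_prev_onehot], dim=-1))
        return F.softmax(logits, dim=-1)  # distribution over c_t
```

Rolling this posterior along a trajectory yields the code sequence whose directed information with the trajectory the method maximizes; a recurrent variant (as review point 3 suggests) would replace the feed-forward net with an RNN cell to carry history beyond c_{t-1}.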
[ -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "rJxs7IUPAQ", "iclr_2019_BJeWUs05KQ", "SJx_MRAKaX", "H1l3BLbJpm", "rJel6IwA3Q", "Ske4Ltaws7", "iclr_2019_BJeWUs05KQ", "iclr_2019_BJeWUs05KQ", "iclr_2019_BJeWUs05KQ" ]
iclr_2019_BJej72AqF7
A Max-Affine Spline Perspective of Recurrent Neural Networks
We develop a framework for understanding and improving recurrent neural networks (RNNs) using max-affine spline operators (MASOs). We prove that RNNs using piecewise affine and convex nonlinearities can be written as a simple piecewise affine spline operator. The resulting representation provides several new perspectives for analyzing RNNs, three of which we study in this paper. First, we show that an RNN internally partitions the input space during training and that it builds up the partition through time. Second, we show that the affine slope parameter of an RNN corresponds to an input-specific template, from which we can interpret an RNN as performing a simple template matching (matched filtering) given the input. Third, by carefully examining the MASO RNN affine mapping, we prove that using a random initial hidden state corresponds to an explicit L2 regularization of the affine parameters, which can mollify exploding gradients and improve generalization. Extensive experiments on several datasets of various modalities demonstrate and validate each of the above conclusions. In particular, using a random initial hidden state elevates simple RNNs to near state-of-the-art performers on these datasets.
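The abstract's third claim - that a random initial hidden state acts as an explicit L2 regularizer - is straightforward to reproduce in code. Below is a minimal sketch, assuming a single-layer Elman RNN with a ReLU nonlinearity; the noise scale sigma is a hypothetical hyperparameter, not a value from the paper.

```python
import torch
import torch.nn as nn

class NoisyInitRNN(nn.Module):
    """Elman RNN whose initial hidden state is drawn from N(0, sigma^2 I)
    at training time, illustrating the paper's random-initial-state idea."""
    def __init__(self, input_dim, hidden_dim, sigma=0.1):
        super().__init__()
        self.rnn = nn.RNN(input_dim, hidden_dim, nonlinearity="relu",
                          batch_first=True)
        self.hidden_dim = hidden_dim
        self.sigma = sigma

    def forward(self, x):  # x: (batch, time, input_dim)
        h0 = torch.zeros(1, x.size(0), self.hidden_dim, device=x.device)
        if self.training:
            h0 = h0 + self.sigma * torch.randn_like(h0)  # noisy initial state
        out, _ = self.rnn(x, h0)
        return out  # hidden states at every time step
```

At test time the module falls back to the usual zero initial state, so the noise acts purely as a training-time regularizer.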
accepted-poster-papers
While the reformulation of RNNs is not practical, as it is missing the sigmoids and tanhs that are common in LSTMs, it does provide an interesting analysis of traditional RNNs and a technique that's novel for many in the ICLR community.
train
[ "SkePFUZVam", "HyeMiQWEaQ", "S1eTiBWVpm", "B1eby2B5n7", "B1e0FnM93X", "r1e_41DDhQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their careful reading and constructive suggestions. We agree that the MASO framework sheds new light on the inner workings of RNNs. We have made significant simplifications and revisions to the mathematical notation, particularly in Sections 1.1, 1.2, and 2, that should address most of your concerns. Below we respond to your specific questions.\n\na) We removed the exponents \\ell in Section 1.2. The reason for not using S in the remainder of the paper is that each operation in an RNN cell is a MASO S, e.g., we could have written an RNN cell operation as z_t = S_cell ( x , z_{t-1} ) = S_sigma ( S_W * x + S_z_{t-1} * z_{t-1} + b ), but this would make the notation a bit more confusing. Therefore, we only use the notation S to introduce the definition of MASO, and omit it in the remainder of the paper. \n\nb) Implicitly, yes, Q is dependent on the affine parameters A and B of the MASO and the region in which the input x belongs. Here is a bit more detail on Q: given the parameters A and B and the input x, the MASO calculates the output through the internal maximization mechanism of the max-affine splines (see Eqs. 4 and 5 of the updated paper). This process infers (for each output dimension k) the region $r_k$ in which the input x belongs to, and adapts the rows of A and entries of B (of the affine mapping) accordingly. This process is highlighted in the paper through the tensor Q, in which the region inferred by the max-affine splines are stored as one-hot vectors. Stacking these region selection vectors for all output dimensions (all max-affine splines) row-wise, we obtain the partition section matrix Q. We have added a discussion about Q in Section 1.2 and revised Section 3 to make the explanation much cleaner.\n\nc) We have included the notation for A_\\sigma in Proposition 1.\n\nd) We agree that the bracket notation is nonideal. The notation A[z]z is intended to indicate that the matrix A depends on the value of z (actually the partition region into which z falls). In an attempt to clarify the notation, in our revised paper, we use brackets strictly to denote matrix/vector value selection or concatenation. For example, [x]_k denotes the value of the k-th entry of the vector x, and [x_1, …, x_n] denotes the concatenation of the vectors x_1, …, x_n. Accordingly, we have omitted the input-dependency of the affine parameters. Instead, we make a note on page 3 to remind the reader that all affine parameters are input-dependent even though they are not explicitly written as such.\n\ne) We have added a footnote in the statement of Proposition 1 to reflect that \\sigma is assumed to be piecewise affine and convex.\n\nf) Yes, we have unified our notation and now both a layer of an RNN and the overall RNN are referred to as a “piecewise affine spline operator” in their MASO formulation.\n\ng) “f” here denotes the RNN function, where the input is the concatenated input sequence and the output is the concatenated hidden states at the last layer. We have removed “f” in Theorem 2 to make it cleaner.\n\nh) We have made significant revisions and simplifications to the notation that hopefully improve the flow of the paper. 
Since Section 4 is an important section that contains the matched filterbank view of an RNN, we have kept this section in the main text.\n\ni) We have added a short overview of our contributions, including the noisy initial hidden state, in the second paragraph of the Introduction.\n\nPlease let us know if the above addresses your concerns and if you have further inquiries. ", "We thank the reviewer for their constructive comments. First, all the typos have been corrected in the updated manuscript. Second, we have made significant simplifications to the mathematical notation in Sections 1 and 2 that improve the clarity of presentation. We address the remaining concerns below.\n \n1) Regarding the seemingly insufficient experimental evaluation:\nWe actually evaluated the use of noise in the initial hidden state on not one but four datasets of four different modalities: simulated toy data (artificial), MNIST (imagery), SST-2 (text), and bird detection (audio). Our goal (which was achieved) was to demonstrate that, for simple RNNs, injecting noise into the initial hidden state improves performance for all four modalities. We present additional successful experimental results in Appendix F of the Supplementary Material; we could not include these in the main text due to space limitations. Our experiments on these four datasets/modalities provide strong evidence of the utility of the noisy initial hidden state. Additional results/visualizations of the input space partitioning and matched filtering are included in Appendices D and E, respectively. \n\nFor the exploratory experiments (last part in Section 5.2), we have added experimental results on the MNIST and permuted MNIST datasets using a one-layer GRU that similarly demonstrate the potential gain in classification accuracy when using a noisy initial hidden state in more complex models where the nonlinearities are no longer piecewise affine and convex. \n\n2) Regarding the Bird Detection Dataset not being well benchmarked: \nThis dataset is, in fact, well benchmarked; perhaps we failed to make it clear in the main text. Please see this website for the task description (http://machine-listening.eecs.qmul.ac.uk/bird-audio-detection-challenge) and this website for a list of benchmarks (http://c4dm.eecs.qmul.ac.uk/events/badchallenge_results). We have included the link to the benchmarks in the main text; see the new footnote on page 8. \n\nPlease let us know if the above addresses your concerns or if you have further inquiries. \n", "We thank the reviewer for their constructive comments and suggested edits. We address each of them below.\n\n1) Lack of application of the MASO formulation: \nIn addition to improving the performance of RNNs using our suggestion of a noisy initial hidden state, our paper provides two additional insights/applications: (i) visualizing the progression through time of the RNN MASO input space partitioning and (ii) interpreting an RNN as a template matching machine (matched filterbank). These two applications are detailed in Sections 3 and 4; they provide new ways to visualize and interpret RNNs that complement related prior work on RNN visualization and interpretation. \n\nFuture research directions and applications include the following, which have been added to the Conclusions of the paper (see Section 6). We can study whether enforcing an orthogonality constraint on the slope parameter A improves RNN performance, similar to what has been observed in [1] for deep feedforward networks. 
We can use the recently developed random matrix theory of deep learning [2] to analyze the affine slope parameter A (e.g., study how the distribution of its singular values changes during training) to analyze the implicit regularization that the optimizer performs when training RNNs. \n\n2) Limitation of the analysis to convex activation functions: \nFirst, we acknowledge that focusing on piecewise affine and convex nonlinearities in RNNs might be limiting, since more elaborate models like LSTM and GRU use sigmoid and hyperbolic tangent activations. Nevertheless, having a solid understanding of piecewise affine and convex nonlinearities in RNNs will guide subsequent theoretical development on other nonlinearities used in RNNs. Moreover, ReLU RNNs have recently gained considerable attention due to their simplicity, competitive performance, and ability to combat the exploding gradient problem provided they are parametrized and initialized properly. We have added a concise discussion in the third paragraph of the Introduction about ReLU RNNs to provide additional motivation for our work. As a future work direction, we expect that we can extend our convex/affine analysis to non-convex nonlinearities like the sigmoid and hyperbolic tangent by leveraging the development of the recent paper [3], which extends the MASO framework to more general nonlinearities. This development, however, is beyond the scope (and available space) of the current paper.\n\nPlease let us know if the above addresses your concerns and if you have further inquiries. \n\n[1] Mad Max: Affine Spline Insights into Deep Learning (Balestriero and Baraniuk, 2018), https://arxiv.org/abs/1805.06576\n[2] Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning (Martin and Mahoney, 2018), https://arxiv.org/abs/1810.01075\n[3] From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference (Balestriero and Baraniuk, 2018), https://arxiv.org/abs/1810.09274", "In this paper, the authors provide a novel approach towards understanding\nRNNs using max-affine spline operators (MASO). Specifically, they rewrite RNNs\nwith piecewise affine and convex activations as MASOs and provide some\nexplanation of the use of a noisy initial hidden state. \n\nThe paper can be improved in presentation. More high-level explanation should\nbe given of MASOs and why this new view of RNNs is better. \n\nTo the best of my knowledge, this is the first paper that relates RNNs with MASOs\nand provides insights on this re-formulation. 
However, the authors failed to\nfind more useful applications of this new formulation other than finding that a\nnoisy initial hidden state helps in regularization. Also, the re-formulation\nis restricted to piecewise affine and convex activation functions (ReLU and\nleaky ReLU). \n\nIn general, I think this is an original work providing an interesting\nviewpoint, but it could be further improved if the authors find more applications of\nthe MASO form. \n", "This paper builds upon recent work by Balestriero and Baraniuk (ICML 2018) that concerns the max-affine spline operator (MASO) interpretation of a substantial class of deep networks. In the new paper, a special focus is put on Recurrent Neural Networks (RNNs), and it is highlighted, based on theoretical considerations leveraging the MASO and numerical experiments, that in the case of a piecewise affine and convex activation function, using noise in the initial hidden state acts as regularization. \nOverall I was impressed by the volume of contributions presented throughout the paper, and also I very much liked the light shed on important classes of models that turn out to be not as black box as they could seem. My enthusiasm was somewhat tempered when discovering that the MASO modelling here was in fact a special case of Balestriero and Baraniuk (ICML 2018), but it seems that despite this the specific contribution is well motivated and justified, especially regarding application results. Yet, the other thing that has annoyed me and is causing me to only moderately champion the paper so far is that I found the notation heavy, not always well introduced nor explained, and while I believe that the authors have a clear understanding of things, it appears to me that the opening sections 1 and 2 lack notation and/or conceptual clarity, making the paper hard to accept without additional care. To take a few examples:\na) In equation (3), the exponent (\ell) in A and B is not discussed. On a different level, the term "S" is used here but doesn't seem to be employed much in subsequent instances of MASOs...why? \nb) In equation (4), sure you can write a max as a sum with an appropriate indicator (modulo unicity, I guess), but then what is called Q^{(\ell)} here becomes a function of A^{(\ell)}, B^{(\ell)}, z^{(\ell-1)}...?\nc) In proposition 1, the notation A_sigma is not introduced. Of course, there is a notation table later, but it would help (to preserve the flow and sometimes clarify things) to introduce notations upon first usage...\nd) Still in prop 1, the bracket notation is not so easy to grasp. What is A[z]z? \ne) Still in prop 1, recall that sigma is assumed piecewise-linear and convex? \nf) In th1, abusive to say that the layer "is" a mapping, isn't it? \ng) In Theorem 2, what is f? A generic term for a deterministic function? \nAlso, below the Theorem, "affine" or "piecewise affine"? \nh) I found section 4 somehow disconnected and flow-breaking. Put it in the appendix and use the space to better explain the rest? \ni) Section 5 is a strong and original bit, it seems. Should it be put more to the fore in the abstract/intro/conclusion? " ]
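For readers puzzling over the Q^{(\ell)} selection tensor raised in point (b) above, a small numerical illustration of a single MASO layer may help; the shapes and region count R are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def maso(A, B, x):
    """One max-affine spline operator S[A, B](x) = max_r (A[k, r] @ x + B[k, r])
    per output dimension k, plus the one-hot region-selection matrix Q
    discussed in the author response above (illustrative only).
    A: (K, R, D), B: (K, R), x: (D,)."""
    affine = np.einsum("krd,d->kr", A, x) + B   # value of every affine piece
    regions = affine.argmax(axis=1)             # region r_k the input falls in
    Q = np.eye(A.shape[1])[regions]             # (K, R) one-hot selection
    return affine.max(axis=1), Q

# With R = 2 pieces, one of them the zero map, each output reduces to a
# ReLU-like nonlinearity, matching the piecewise affine and convex setting.
A = np.stack([np.stack([w, np.zeros_like(w)]) for w in np.random.randn(3, 4)])
y, Q = maso(A, np.zeros((3, 2)), np.random.randn(4))
```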
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "r1e_41DDhQ", "B1eby2B5n7", "B1e0FnM93X", "iclr_2019_BJej72AqF7", "iclr_2019_BJej72AqF7", "iclr_2019_BJej72AqF7" ]
iclr_2019_BJemQ209FQ
Learning to Navigate the Web
Learning in environments with large state and action spaces, and sparse rewards, can hinder a Reinforcement Learning (RL) agent’s learning through trial-and-error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where the input vocabulary and the number of actionable elements on a page can grow very large. Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions. We approach the aforementioned problems from a different perspective and propose guided RL approaches that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which an agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. In addition, when expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction following tasks and trains the agent more effectively. We train a DQN, a deep reinforcement learning agent, with its Q-value function approximated by a novel QWeb neural network architecture on these smaller, synthetic instructions. We evaluate the ability of our agent to generalize to new instructions on the World of Bits benchmark, on forms with up to 100 elements, supporting 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstrations, achieving a 100% success rate on several difficult environments.
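The abstract's curriculum idea - task the agent with a gradually growing subset of the decomposed sub-instructions - can be sketched in a few lines. The sampling schedule below is an illustrative assumption, not the paper's exact procedure (which the responses below say is given as an explicit algorithm in the revision).

```python
import random

def curriculum_episode(sub_instructions, progress):
    """Sample a subset of sub-instructions whose size grows with training
    progress in [0, 1]; the unsampled fields are assumed to be pre-filled
    so the agent only has to solve the sampled part. Purely illustrative."""
    k = max(1, round(progress * len(sub_instructions)))
    return random.sample(sub_instructions, k)

subs = ["fill origin", "fill destination", "pick date", "click submit"]
for step in (0, 25, 50, 75, 100):
    print(step, curriculum_episode(subs, progress=step / 100))
```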
accepted-poster-papers
All reviewers (including those with substantial expertise in RL) were solid in their praise for this paper, which is also tackling an interesting application that is much less well studied but deserves attention.
train
[ "SylnSTaghX", "Bklaz-qtRm", "SJeSplqYR7", "HJlCjlqKAX", "BJexNkcF0Q", "ryejZrK9h7", "HkxIVs6PsQ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "UPDATE:\n\nThank you to the authors for a comprehensive response. I have increased my score based on these changes. I apologize for the misunderstanding about ArXiV papers and indeed the authors are correct on that point. Thank you as well for reporting the learning speeds. As you mentioned, they confirm our intuitions and complete the picture of the algorithm’s behavior. The addition of pseudo-code does make the paper and algorithm easier to follow. Thank you for adding it. The rewritten section 5 is indeed much easier to follow and makes the coordination between the agents clear. Seeing that the instructor is a fixed policy resolves the game theoretic issue form the original review.\n\n\nSummary:\n\nThe paper proposes a deep reinforcement learning approach to filling out web forms, called QWeb. In addition to both deep and shallow embeddings of the states, the authors evaluate various methods for improving the learning system, including reward shaping, introducing subgoals, and even a meta-learning algorithm that is used as an instructor. These variations are tested in several environments and basic QWeb is shown to outperform the baselines and many of the adaptations perform even better than that in more complex domains.\n\nReview:\n\nOverall, the problem the paper considers is important and their results seem significant. The authors have derived a novel architecture and are the first to tackle the problem of filling in web forms at this scale with an autonomous learning agent rather than one that is taught mostly by demonstration. \n\nThe related work section is very well written with topical references to recent results and solid differentiations to the new algorithm. However, I see many references in the paper are not from peer reviewed conferences or journals. Unless absolutely necessary, such papers should not be cited because they have not been properly peer reviewed. If the papers cited have actually been in a conference or journal, please add the correct attribution.\n\nThe experiments seem well conducted. I liked that each new addition to the algorithm was tested incrementally in Figure 7 to give a realistic view of the gains introduced by each change. I also thought the earlier comparisons to the baselines were well done and I liked that they were done against modern cutting-edge LfD demonstrations. The only thing I would have liked to seen beyond these results are actual learning curves showing, after X iterations, what percentage of the tasks could be completed. I suspect that in many domains the baseline LfD techniques are learning much faster since learning from teachers tends to be more targeted and sample efficient. Learning curves would show us whether or not this is the case. \n\nThe weakest part of the paper was the description of the instructor network and the Meta-training in general. This portion seemed ill-described and largely speculative, despite the promising results in Figure 7. In particular, Section 5 is very unclear on how exactly the Meta-Learning works. Pseudocode is definitely needed in this portion well beyond the quick descriptions in Figure 4 and 5, which I could not understand, despite multiple readings. I suggest eliminating those figures and providing concrete pseudo—code describing the meta learning and also addressing the following open questions in the text:\n•\tWhy is a rule based randomized policy good to learn from? How is this different from learning from demonstration in the baselines?\n•\tHow is a “fine grained signal” generated? 
What does that mean? Is it a reward?\n•\tIn Section 5.1, are there two RL agents, an instructor and a learner with different reward functions? If so, isn’t this becoming game theoretic, and is this likely to converge in most scenarios?\n•\tWhat does Q_D^I actually represent? Why is maximizing these values a good thing?\n\nThere are a few grammatical mistakes in the paper including:\n\nAbstract – simpler environments -> simple environments\nAbstract- with gradually increasing -> with a gradually increasing\nPage 2 – generate unbounded -> generate an unbounded\nPage 7 – correct value -> correct values\nPage 9 – episode length -> episode lengths\n\n", "We thank the reviewer for the comments and questions. Below are our responses.\n\n> “In the first set of experiments, the improved performance of QWeb over Shi17 and Liu18 is clear; however, it is not clear why QWeb is not able to learn in the social-media-all problem. The authors tested only one of the possible variants (AR) of the proposed approach with good performance.” \n\nThe main reason is that in the social-media-all environment, the size of the vocabulary is more than 7000 and the task length is 12, which are both considerably larger compared to other environments. Another reason is that QWeb cannot learn the correct action by focusing on a single node; it needs to incorporate siblings of a node in the DOM tree to generate the correct action. Without adding shallow encoding (SE) and one of the proposed approaches (such as AR), QWeb is not able to train purely from trial-and-error, as the number of successful episodes is very small. \n\nWe updated Section 6.1 of the paper with these explanations, and we plan to conduct more experiments in future work.\n\n> “It is not clear in the book-flight-form environment, why the QWeb+SE+AR obtained 100% success while the MetaQWeb, which includes one of the main components in this paper, has a lower performance.”\n\nThe main reason for the performance difference between the QWeb+SE+AR and the MetaQWeb can be explained by the difference between the generated experience that these models learn from. In training QWeb+SE+AR, we use the original and clean instructions that the environment sets at the beginning of an episode. MetaQWeb, however, is trained with the instructions that the instructor agent generates. These instructions are sometimes incorrect (as indicated by the error rate of INET: $4\%$) and might not reflect the goal accurately. These noisy experiences hurt the performance of MetaQWeb slightly and cause the $1\%$ drop in performance. \n\nWe updated Section 6.2 with this explanation.\n\n> “The proposed method uses a large number of components/methods, but the relevance of each of them is not clear. The paper reads like, \"I have a very complex problem to solve so I try all the methods that I think will be useful\". The paper will benefit from an individual assessment of the different components.”\n\nThank you for the comment. We have revised the Introduction, and Sections 4 and 5, to clarify the differences between the methods and contributions. Below is the summary that hopefully brings more clarity to the reasoning behind the approaches.\n\nWe aim to solve the web navigation tasks in two situations: when expert demonstrations are available and when they are not. When the expert demonstrations are available, we need to make several improvements to the training to outperform the baselines. These improvements are: a better neural network architecture (QWeb), and more dense rewards. 
We get the more dense rewards by using the reward potentials and setting up a curriculum over the given demonstrations. \n\nThe second case is when the expert demonstrations are not available. In that situation, we use the meta-trainer to generate new demonstrations. \n\n> “The authors should include a section of conclusions and future work.”\nThank you for pointing it out. The section is added to the paper.", "> “In Section 5.1, are there two RL agents, an instructor and a learner with different reward functions? If so, isn’t this becoming game theoretic and is this likely to converge in most scenarios?”\n\nThere are two different RL agents: the instructor agent (INET) and the navigator agent (QWeb). These are trained in two phases: (i) we first train INET (a DQN agent with the Q value function defined at the end of Section 5.1) using the instruction generation environment that we described in Section 5.1; (ii) next, the parameters of the INET agent are fixed and we train QWeb using the instruction and goal pairs that the meta-trainer generates by running INET at the beginning of each episode. Hence, we avoid the problems that could have arisen by jointly training two different RL agents with different objectives.\n\n> “What does Q_D^I actually represent? Why is maximizing these values a good thing?”\nQ_D^I is the Q value function that we used to train the instructor agent (INET), as we described in Section 5.1.\n\n> “There are a few grammatical mistakes in the paper including.”\nThank you for pointing it out. We fixed these in the paper, and will make another pass for the final version if accepted.", "We thank the reviewer for the insightful comments. Below are our responses.\n\n> “However, I see many references in the paper are not from peer reviewed conferences or journals. Unless absolutely necessary, such papers should not be cited because they have not been properly peer reviewed.”\n\nThank you for pointing that out. We updated the references where the archival versions became available, and will do so again before camera-ready if accepted. At the same time, we also wanted to kindly point out that the ICLR reviewer guidelines consider publication on Arxiv as prior work that should be properly cited: https://iclr.cc/Conferences/2019/Reviewer_Guidelines\n\n\n> “The only thing I would have liked to have seen beyond these results are actual learning curves showing, after X iterations, what percentage of the tasks could be completed. I suspect that in many domains the baseline LfD techniques are learning much faster since learning from teachers tends to be more targeted and sample efficient. Learning curves would show us whether or not this is the case.”\n\nWe collected the number of steps (k=1000) needed to reach the top performance:\n___________________________________________________\n| environment \ method | QWeb | LIU18 | \n--------------------------------------------------------------\n| click-pie | 175k | 13k |\n| login-user | 96k | < 1k |\n| click-dialog | 5k | < 1k |\n| enter-password | 3k | < 1k |\n--------------------------------------------------------------\n\nThese numbers reflect the reviewer’s intuition that LfD techniques learn faster; however, there is a drop in success rate for some environments. We updated the experimental results in Section 6.1 with these results.\n\n> “The weakest part of the paper was the description of the instructor network and the Meta-training in general. This portion seemed ill-described and largely speculative, despite the promising results in Figure 7. 
In particular, Section 5 is very unclear on how exactly the Meta-Learning works. Pseudocode is definitely needed in this portion well beyond the quick descriptions in Figures 4 and 5, which I could not understand, despite multiple readings. I suggest eliminating those figures and providing concrete pseudo-code describing the meta-learning and also addressing the following open questions in the text:”\n\nThank you so much for the suggestion. We added Algorithms 1, 2, and 3 for the curriculum learning, DQN training, and meta-learning. We removed Figure 5, and put Figures 3 and 4 side-by-side, since they both depict neural network architectures. We have also rewritten Section 5. We hope that these changes improve the clarity.\n\n> “Why is a rule-based randomized policy good to learn from? How is this different from learning from demonstration in the baselines?”\nWhen the expert demonstrations are not available, we can use any policy (random or rule-based) and pretend that the policy is following some instruction known to us. The instructor agent learns to recover those hidden instructions, in effect creating new demonstrations. Once the instructor is trained to recover the instructions for a given policy, we generate new instruction/goal paths so that we can train QWeb. The choice of policy is arbitrary, and it was a design choice to select a simple, rule-based policy that visits each DOM element in web navigation environments.\n\nOur meta-training approach has two main advantages over learning from demonstrations:\n1) By learning to generate new instruction following tasks, we can generate an unbounded number of episodes for any environment where collecting a large number of episodes is costly.\n2) Similar to our curriculum generation with simulated goals approach, generated goal states are allowed to be incomplete. For example, if we constrain our rule-based policy to run only a small number of steps, the generated goal state could be incomplete and some DOM elements in the web page could be unvisited. In this case, QWeb can still leverage these experiences while also learning from the original instructions and sparse rewards that the environment generates.\n\nThe paper’s introduction and Section 5 are updated to clarify the role and selection of the rule-based policy, and its advantages over the baselines.\n\n> “How is a “fine-grained signal” generated? What does that mean? Is it a reward?”\nThank you for pointing it out. Yes, it is a dense reward. We updated the paper to use the more commonly used term: dense reward.\n", "We thank the reviewer for the kind words and questions that help us improve the paper. We detail our responses below.\n\n> “There are a few notations used without definition, for example DOM tree, Potential (in equation (4))”\n\nWe updated these in the paper.\nOn Page 3, line 3: “the Document Object Model (DOM) trees, a hierarchy of web elements in a web page.”\n\nSection 4.2: “we define a potential function ($Potential(s, g)$) that counts the number of matching DOM elements between a given state (s) and the goal state (g), normalized by the number of DOM elements in the goal state. Potential-based reward is then computed as the scaled difference between the two potentials for the next state and the current state”\n\n> “Some justification regarding the Q value function specified in (1) might be helpful; otherwise it looks very ad hoc”.\n\nOur Q value function in Eq. 
(1) is motivated by the design of our composite actions (click(e) and type(e, y)) and the nature of web pages in general. A DOM element (e) in a web page mostly identifies which composite action to select, e.g., a text box such as the destination airport is typed with an airport code, while a date picker is clicked. This motivates the dependency graph that we sketched in Figure 2. We define our Q value function for each composite action based on this dependency graph, via a separate value function to model each node in the graph given its dependencies. We also added this motivation to Section 3.\n\n\n> “Although using both shallow encoding and augmented reward leads to good empirical results, it might be useful to give more insights; for example, does a limited sample size cause overfitting for deep models?”\n\nWe would like to give more insights into the overfitting of deep models without and with augmented rewards. Without augmented rewards, the Q function overfits very early to the minimum Q value possible, since the majority of the episodes are unsuccessful and the reward is highly unbalanced towards negative. Escaping this bad minimum via purely random exploration is difficult, especially in environments that require longer episodes. We observe that in the majority of these cases the policy converges to terminating the episode as early as possible to get the least step penalty. With augmented rewards, the Q function recovers from these cases very quickly and gradually learns from more successful episodes. We also added these insights into Section 6.1.\n\n\n> “What are the sizes of the state and action spaces?”\n\nOur action and state spaces are mainly defined by the number of DOM elements in web pages and the number of fields in the instructions. For example, in the flight-booking-form environment, the number of DOM elements is capped at 100, the number of fields is 3, and there are two types of actions (click or type). Hence, the number of possible actions can reach 600 and the number of possible variables in a state can reach 300. These numbers, however, do not reflect the possible “realization” of a DOM element or a field; they just reflect a sketch. For example, the “from” field can take a value from 700 possible airports, or the “destination” input DOM element can be repeatedly typed with any value from the instruction. These greatly increase the space of both states and actions. We added this description into Section 6.1.\n\n\n> “The conclusion part is missing.”\nThank you for pointing that out. We added the conclusion to the paper.\n", "This paper developed a curriculum learning method for training an RL agent to navigate the web. It is based on the idea of decomposing an instruction into multiple sub-instructions, which is equivalent to decomposing the original task into multiple easy-to-solve sub-tasks. The paper is well motivated and easily accessible. The problem tackled in this work is an interesting application of RL dealing with large action and state spaces. 
It also demonstrates superior performance over the state-of-the-art methods on the same domains.\n\nHere are the comments for improving this manuscript:\n \nThere are a few notations used without definition, for example DOM tree, Potential (in equation (4)).\n\nSome justification regarding the Q value function specified in (1) might be helpful; otherwise it looks very ad hoc.\n\nAlthough using both shallow encoding and augmented reward leads to good empirical results, it might be useful to give more insights; for example, does a limited sample size cause overfitting for deep models?\n\nWhat are the sizes of the state and action spaces?\n\nThe conclusion part is missing.\n", "The paper proposes a framework to deal with large state and action\nspaces with sparse rewards in reinforcement learning. In particular,\nthey propose to use a meta-learner to generate experience for the agent\nand to decompose the learning task into simpler sub-tasks. The authors\ntrain a DQN with a novel architecture to navigate the Web.\nIn addition, the authors propose to use several strategies: shallow\nencoding (SE), reward shaping (AR) and curriculum learning (CI/CG). \nIt is shown how the proposed method outperforms state-of-the-art\nsystems on several tasks.\n\nIn the first set of experiments, the improved performance\nof QWeb over Shi17 and Liu18 is clear; however, it is not clear why QWeb is not\nable to learn in the social-media-all problem. The authors tested only\none of the possible variants (AR) of the proposed approach with good\nperformance. \n\nIt is not clear in the book-flight-form environment, why the\nQWeb+SE+AR obtained 100% success while the MetaQWeb, which includes\none of the main components in this paper, has a lower performance.\n\nThe proposed method uses a large number of components/methods, but the\nrelevance of each of them is not clear. The paper reads like, \"I\nhave a very complex problem to solve so I try all the methods that I\nthink will be useful\". The paper will benefit from an individual\nassessment of the different components.\n\nThe authors should include a section of conclusions and future work.\n" ]
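The potential-based reward that the author responses above quote from Section 4.2 (count the goal DOM elements matched in the current state, normalize, and reward the scaled difference of potentials between consecutive states) is simple enough to state in code. The (element, value)-pair representation of DOM elements below is a hypothetical stand-in.

```python
def potential(state_dom, goal_dom):
    """Fraction of goal DOM elements already matched in the current state,
    per the description quoted above; DOM elements are modeled here as
    plain (element, value) pairs purely for illustration."""
    return sum(1 for e in goal_dom if e in state_dom) / len(goal_dom)

def shaped_reward(state_dom, next_dom, goal_dom, scale=1.0):
    # Scaled difference of potentials between the next and current state.
    return scale * (potential(next_dom, goal_dom) - potential(state_dom, goal_dom))

goal = [("from", "SFO"), ("to", "BOS"), ("date", "2019-01-01")]
s = [("from", "SFO")]
s_next = [("from", "SFO"), ("to", "BOS")]
print(shaped_reward(s, s_next, goal))  # positive: the agent moved toward the goal
```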
[ 8, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2019_BJemQ209FQ", "HkxIVs6PsQ", "HJlCjlqKAX", "SylnSTaghX", "ryejZrK9h7", "iclr_2019_BJemQ209FQ", "iclr_2019_BJemQ209FQ" ]
iclr_2019_BJfIVjAcKm
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models - weight sparsity and so-called ReLU stability - that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4-13x speedup in verification times. An important feature of our methodology is its "universality," in the sense that it can be used with a broad range of training procedures and verification approaches.
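To make the abstract's notion of ReLU stability concrete: a unit with pre-activation bounds l <= z <= u over the perturbation set is stable when l and u share a sign, and unstable when the interval straddles zero. The sketch below propagates naive interval bounds through one linear layer and penalizes unstable units; it illustrates the idea only and is a stand-in for, not a reproduction of, the paper's RS Loss.

```python
import numpy as np

def interval_bounds(W, b, x, eps):
    """Interval arithmetic for pre-activations z = W x + b when the input is
    perturbed within an L_inf ball of radius eps (illustrative)."""
    center = W @ x + b
    radius = eps * np.abs(W).sum(axis=1)  # worst case over the L_inf ball
    return center - radius, center + radius  # elementwise (l, u)

def unstable_penalty(l, u):
    # Positive exactly when l < 0 < u, i.e. when a ReLU is unstable;
    # a smooth surrogate in the spirit of (but not identical to) RS Loss.
    return np.sum(np.maximum(0.0, -l) * np.maximum(0.0, u))

W, b = np.random.randn(5, 3), np.zeros(5)
l, u = interval_bounds(W, b, np.random.randn(3), eps=0.1)
print("unstable ReLUs:", int(np.sum((l < 0) & (u > 0))),
      "penalty:", unstable_penalty(l, u))
```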
accepted-poster-papers
This paper introduced a concept called ReLU stability to motivate regularization and enable fast verification. Most of the analysis was presented empirically on two simple datasets and with low-performing models. I feel theoretical analysis and more comprehensive and realistic empirical studies would make the paper stronger. In general, the contribution of this paper is original and interesting.
train
[ "rkgwBQQKCQ", "HJlXjAodT7", "rkxpILquA7", "Bke0ohlmoQ", "SklV2h91A7", "BJlht5PcaX", "Hkgl-jv5p7", "rkefO5yq67", "H1e9xkx5TQ", "ryl5Ywpd6Q", "ByxrBih_aX", "H1lQYCj_6Q", "H1gHm0i_pm", "Skxfe0sdpX", "Skg1L6iuTm", "SylfYwH5hX", "rkeN2o7FhQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers and commenters for their suggestions on improving the manuscript. We have revised our submission based on the feedback we received, and uploaded our revision.", "We thank the reviewer for their helpful comments. We are glad you found the paper pleasant to read!\n\nWe agree that labeling unstable ReLUs properly is an important aspect of our technique. The upper and lower bounds we compute on each ReLU are conservative - thus, every unstable ReLU will always be correctly labeled as unstable, while stable ReLUs can be labeled as either stable or unstable. Importantly, every unstable ReLU is correctly labeled and penalized by the RS Loss we propose. The tradeoff is that stable ReLUs mislabeled as unstable will also be penalized, which can be an unnecessary regularization of the model.\n\nWe showed empirically that we could achieve the following two objectives at once using RS Loss\n1) Reduce the number of ReLUs labeled as unstable, which is an upper bound on the true number of unstable ReLUs\n2) Achieve similarly good test set accuracy and PGD-adversarial accuracy as a model trained without RS Loss\n\nFor example, when comparing the Control and “+RS” networks for MNIST and eps=0.1, we decreased the average number of ReLUs labeled as unstable (using bounds from Improved Interval Arithmetic) from 290.5 to 105.4 with just a 0.26% loss in test set accuracy (cf. Appendix C.3, Appendix E). \n\nThe same trends hold for deeper networks (we only showed results for a 3-layer network in Appendix C.3, but we will include details about a 6-layer network in the revision). For the deeper 6-layer “+RS” network for MNIST and eps=0.1 that we presented, it had a test set accuracy of 98.93% and just 184.6 ReLUs labeled as unstable at the end of training [*]. Training the exact same network without the RS Loss penalty had a slightly higher test set accuracy (99.09%) but also had far more ReLUs labeled as unstable (1028.3). Thus, we could effectively reduce the number of ReLUs labeled as unstable without significantly degrading test set accuracy.\n\nWe will clarify these points better in Appendix C where we discuss ReLU bounds when revising the paper.\n\n[*] Edit made: Previously, we wrote test set accuracy of 98.95% and 150.3 ReLUs labeled as unstable, which matches Appendix E. Those are the correct numbers after post-processing (weight pruning and ReLU pruning) is applied, whereas the updated numbers we write here are before post-processing, to match the fact that the 99.09%/1028.3 unstable ReLUs numbers are also computed before post-processing.", "Thank you for taking the time to carefully read through our responses and paper. We appreciate the time and expertise you put into your review.\n\nWe still believe that improving verification is an important research topic. We state that \"using and improving formal verification methods should still be our focus\" because the ultimate end goal is formal verification of properties such as adversarial robustness. If verification was computationally infeasible in most settings, which seemed to be the case prior to our work, then using the “shortcut” of certification would indeed be the only viable approach. However, as we show in our paper, verification can be made more feasible using our techniques for training easily verifiable models, and thus verification is viable after all. 
We thus believe improving verification further will lead to better results regarding properties such as adversarial robustness, and, as such, it is an important research topic. Finally, we do not think we have to view verification vs. certification as an either-or “choice.” After all, our technique could potentially improve certification too. As the anonymous commenter pointed out, the certification relaxation of Wong [1, 10] becomes tighter if there are fewer unstable ReLUs.\n\nWe chose to compare our technique to SOTA certification results to show its relative effectiveness. We do not believe it makes as much sense to compare with methods that are not SOTA. We do agree though that it is worth explaining that we specifically compare to Wong [1] and Dvijotham [8], as opposed to the other works listed in section 2, because [1] and [8] are SOTA.\n\nYou correctly note that other works have not considered the MNIST, eps=0.2 case. Dvijotham [8] does not have publicly available code, while Wong [1] does. When using Wong’s code on the eps=0.2 case a few months ago, we got 80.29% certifiable accuracy, which is lower than our 89.79% provable accuracy. However, we believe it is fairer to present the best results that each original author had previously reported in the literature, as we did not want to run their code with incorrect settings. With that said, our best attempts to use Wong’s [1] code can definitely be documented in the Appendix to provide a datapoint for comparison.\n\nWe agree that 50% test accuracy on CIFAR is not ideal. In this paper, our focus was on obtaining higher provable adversarial accuracy via verification on CIFAR, as this had not been achieved before. We believe that there is much additional research to be done toward understanding how to obtain more easily verifiable networks without sacrificing as much test accuracy, or toward understanding if/when the tradeoff is necessary. We believe that our work is a step in that direction.\n", "The paper presents several ways to regularize plain ReLU networks to optimize three things:\n\n- the adversarial robustness, defined as the fraction of examples for which an adversarial perturbation exists\n- the provable adversarial robustness, defined as the fraction of examples for which some method can show that there exists no adversarial example within a certain time budget\n- the verification speed, i.e. the amount of time it takes some method to verify whether there is an adversarial example or not\n \nOverall, the ideas are sound and the analysis is solid. My main concern is the comparison between the authors' method and the 'certification' methods, both conceptually and regarding performance.\n\nThe authors note that their method falls under 'verification', whereas many competing methods fall under 'certification'. They point to two advantages of verification over certification: (1) the ability to provide true negatives, i.e. prove that an adversarial example exists when it does, and (2) certification requires that 'models must be trained and optimized for a specific certification method'. However, neither argument convinces me regarding the utility of the authors' method. \n\nRegarding (2): The authors' method also requires training the network in a specific way (with RS loss), and it is only compatible with verifiers that care about ReLU stability. \n\nRegarding (1): It is not clear that this would be helpful at all. 
Is it really that much better if method A has 80% proven robustness and 20% proven non-robustness versus method B that has 80% proven robustness and 20% unknown? One could make the case that method B is actually even better.\n\nSo overall, I think one has to compare the authors' method and the certification methods head-to-head. And in table 3, where this is done, Dvijotham comes out on top 2 out of 2 times and Wong comes out on top 2 out of 4 times. That does not seem convincing. Also, what about the performance numbers from other papers discussed in section 2?\n\n-------\n\nOther issues:\n\nAt first glance, the fact that the paper only deals with (small) plain ReLU networks seems to be a huge downside. While I'm not familiar with the verification / certification literature, from reading the paper, I suspect that all the other verification / certification methods also only deal with that or highly similar architectures. However, I will defer to the other reviewers if this is not the case.\n\nTo expand upon my comment above, I think the paper should discuss true adversarial accuracy on top of provable adversarial robustness. Looking at table 1, for instance, for rows 2, 3 and 4, it seems that the verifier used much less than 120 seconds on average. Does that mean the verifier finished for all test examples? And wouldn't that mean that the verifier determined for each test example exactly whether an adversarial example existed or not? In that case, I would write "true adversarial accuracy" instead of "provable adversarial accuracy" as column header. If the verifiers did not finish, I would include in the paper for how many examples the result was "adversarial example exists" and for how many the result was "timeout". I would also include that information in table 3, and I would also include proving / certification times there. \n\nBased on the paper, I'm not quite sure whether the idea of training with L1 regularization and/or small weight pruning and/or ReLU pruning for the purpose of improving robustness / verifiability was an original idea of this paper. In either case, this should be made clear. Also, the paper seems to use networks with adversarial training, small weight pruning, L1 and ReLU pruning as its baseline in most cases (all figures except table 1). If some of these techniques are original contributions, this might not be an appropriate baseline to use, even if it is a strong baseline.\n\nWhy are most experiments presented outside of the "experiments" section? This seems to be bad presentation.\n\nI would include all test set accuracy values instead of writing "its almost as high". Also, in table 3, it appears as if using RS loss DOES in fact reduce test accuracy significantly, at least for CIFAR. Why is that?\n\nWhile, again, I'm not familiar with the background work on verification / certification, it appears to me from reading this paper that all known verification algorithms perform terribly and are restricted to a narrow range of network architectures. If that is the case, one has to wonder whether that line of research should be encouraged to continue.\n\n--------\n\nMinor issues:\n\n- "our focus will be on the most common architecture for state-of-the-art models: k-layer fully-connected feed-forward DNN classifiers" Citation needed. 
Otherwise, I would suggest removing this statement.\n- "such models can be viewed as a function f(.,W)" - you also need to include the bias in the formula, I think\n- "convolutional layers can be represented as fully-connected layers". I think what you mean is "convolutional layers can be represented as matrix multiplication"\n- could you make the difference between co-design and co-training clearer?\n- The paper could include in the appendix a section outlining the verification method of Tjeng", "I'm not convinced by your statement "using and improving formal verification methods should still be our focus", and I'm not convinced that knowing that there is an adversarial example has great value. After all, we can simply assume that all examples that are not certified by some certification method have an adversarial example, as a worst case.\n\n"We view RS Loss as a regularization method, similar to L1 regularization. It can be added to any training procedure, and it is designed for a natural goal - encouraging stable ReLUs." In all experiments, RS loss reduced test accuracy. However, the goal of "natural" regularization is to decrease test error by reducing the generalization gap. So I don't see how RS loss would compete with L1 regularization or weight decay.\n\nI'm still unimpressed by your experimental results. If you outperform many of the methods cited in section 2, I would include those results in the paper. Right now, for MNIST with \epsilon=0.2, you don't show any comparable method. Also, if Dvijotham et al. is SOTA, why not run it yourself for all the scenarios you study? I'm also still unconvinced that verification via linear program solvers is a fruitful direction for research in general, as all results presented (both from your method, the baseline and competing papers) seem horrendously bad to me (50% test accuracy on CIFAR ... ?).\n\nBecause L1 regularization, weight pruning, and ReLU pruning are all original contributions; because of Appendix E; because your method is "universal" with regards to current verification methods; because the nets you use are the largest in terms of size within the verification literature; and because of your responsiveness to my and other criticisms overall, I increase my score to 5. I'm not an expert on the topic of the paper and wouldn't mind seeing this paper accepted, or deferring to more knowledgeable reviewers.", "We absolutely agree with you on these points, and believe that we had a misunderstanding regarding terminology earlier. Thank you for clarifying; we will clarify our viewpoint here, which we believe does not conflict with yours.\n\nThis is how we view the overall procedure to train a neural network with high true adversarial accuracy, and then verify/certify that accuracy.\n\nStep 1: Training.\nStep 2: Verification/Certification.\n\nIt is very difficult to compute the true adversarial accuracy during training. Thus, current training procedures optimize for an approximation of it. In our work, we use standard adversarial training for Step 1, which optimizes for PGD-adversarial accuracy. PGD-adversarial accuracy is an upper-bound approximation of true adversarial accuracy, while “relaxation”-based approaches are a lower-bound approximation of true adversarial accuracy. Our contribution is to add regularization to Step 1, which improves ease-of-verification in Step 2.\n\nTo compare the main approaches of the 4 papers being discussed (ours, Gowal et al. [9], and Wong et al. [1] / Dvijotham et 
al. [8]):\nOur paper does\nStep 1: upper-bound approximation and regularization, Step 2: (MILP) Verification\n[9] does\nStep 1: lower-bound approximation, Step 2: (MILP) Verification\n[1] and [8] do\nStep 1: lower-bound approximation, Step 2: Certification\n\nOur contention regarding verification/certification is simply that we should try to use verification in Step 2. Our work presents one possible Step 1 to make the use of verification in Step 2 easier. The work of [9], which was posted just two weeks ago, achieves great results and is another step in the same direction as our work.\n\nWith that said, we think improving certification as Step 2 is certainly an important line of research as well - it was just not the focus of our work. If the gap in robustness guarantees between verification as Step 2 and certification as Step 2 can be decreased, we feel that is also a valuable research contribution, as certification has the advantage of being faster than verification.\n\nWe agree that comparing the runtimes of an exact verifier on networks trained using [9] and our approach can provide further insight, and would be interesting future work.\n\nYou also make a great point that exact verification is important in more general settings beyond adversarial robustness. Ultimately, we believe that our techniques can be useful in those other settings of verification as well.\n\nIn light of the discussion here regarding certification and verification, we feel that it would improve clarity to add an additional section clarifying these terms and their relation, much like we have tried to do here, in the Appendix. We also plan to revise our manuscript to cite the very recent work of [9].\n\nFinally, we address the potential similarity between our results and https://arxiv.org/pdf/1711.00851.pdf [10] in our next response below.\n\n[9] Gowal, S., Dvijotham, K., Stanforth, R., Bunel, R., Qin, C., Uesato, J., Mann, T. and Kohli, P., 2018. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models. arXiv preprint arXiv:1810.12715.\n\n[10] Eric Wong and J. Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning (ICML), 2018.\n", "We agree here as well - ReLU stability and weight sparsity are not properties unique to the networks we train. In fact, we specifically acknowledge what you point out here in our Appendix B - that training as in [10], as well as the adversarial training approach of Madry et al. [11], already seem to improve weight sparsity. However, as shown in Table 1, we found that adversarial training alone was not enough for easily verifiable networks, so we used additional regularization for the natural goals of ReLU stability and weight sparsity. All of these methods appear complementary, rather than conflicting.\n\n[11] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.\n", "The paper achieves these SOTA results using a \"relaxation-based\" method for training, not exact verification. It relaxes every non-linear node in the network using interval analysis, and then uses these bounds to guide the training procedure. \n\nThey use a MILP solver to measure the robustness, however. Perhaps a comparison of the run-times for networks trained with \"relaxations\" vs your approach is useful here? 
\n\nI strongly suspect sparse weights/ReLU-stability might already be a consequence of training with relaxations as both of these things tend to make relaxations tighter. The relaxations would be tightest when the network is simply linear in the neighborhood of a point. More local linearity -> tighter relaxations, and tighter relaxations -> easier to optimize. I am concerned that this is just another path to arriving at weights that have similar properties to the ones discussed in https://arxiv.org/pdf/1711.00851.pdf. \n\nI do believe exact verification has merit in verifying more generalized properties, but training for \"adversarial\" robustness seems doable with relaxation-based approaches, particularly as the relaxations keep getting tighter!", "We thank the reviewer for their useful comments. Your comments will help us in revising this paper.\n\nWe agree that addressing norms other than L_\infty is an important direction. The techniques explored in our paper are, in general, applicable to other L_p norms (as well as to broader sets of perturbations). Inducing sparsity via L1-regularization and/or weight pruning will still reduce the number of variables in the formulation of verification problems and should improve verification speed. 
ReLU stability will also help and can still be encouraged via our proposed RS Loss. We do acknowledge that the L_\infty norm will give the tightest bounds on the input layer, which could mean that ReLU stability is easier to optimize for in the L_\infty case.\n\nTo clarify with a quick example - if we have a 784-dimensional MNIST input (x1, x2, … x784) with values in the range [0, 1], a reasonable L_\infty norm bound on allowed perturbations may be eps=0.3. On the other hand, a reasonable L_2 norm bound on allowed perturbations may be eps=3. This means that for the L_\infty case, a perturbed input x’ with first dimension x1’ is bounded by x1 - 0.3 < x1’ < x1 + 0.3, while for the L_2 case, the tightest bounds on x1’ are 0 < x1’ < 1. Even though these bounds are looser, encouraging ReLU stability will still improve verification speed.\n\nFinally, as of now, most literature in verification and certification that we are aware of has also focused on the L_\infty norm. Therefore, we similarly chose to focus on it as the most common and natural benchmark. We will be sure to discuss addressing other L_p norms and input constraints in more detail in a revised version of this paper.\n\nAdditionally, thank you for pointing out the Mirman et al. 2018 paper - we will absolutely add those relevant results to our comparison tables and references section.", "> Minor issues:\n\n> - \"our focus will be on the most common architecture for state-of-the-art models: k-layer fully-connected feed-forward DNN classifiers\" Citation needed. Otherwise, I would suggest removing this statement.\n> - \"such models can be viewed as a function f(.,W)\" - you also need to include the bias in the formula I think\n> - \"convolutional layers can be represented as fully-connected layers\". I think what you mean is \"convolutional layers can be represented as matrix multiplication\"\n> - could you make the difference between co-design and co-training more clear?\n> - The paper could include in the appendix a section outlining the verification method of Tjeng\n\nThese are all helpful comments that we will address while revising. To clarify here, co-design means optimizing for multiple design objectives during training, and co-training means using a specific training procedure combined with a specific certification procedure.\n\nREFERENCES\n-------------------------------------------------------------------------------------------------------------------------------------\nAll references here are also cited in the submitted paper.\n\n[1] Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J. Zico Kolter. Scaling provable adversarial defenses. NIPS, 2018.\n\n[2] A. Raghunathan, J. Steinhardt, and P. Liang. Certified defenses against adversarial examples. In International Conference on Learning Representations (ICLR), 2018.\n\n[3] Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In Rupak Majumdar and Viktor Kunčak, editors, Computer Aided Verification, pages 97–117, Cham, 2017. Springer International Publishing. ISBN 978-3-319-63387-9.\n\n[4] Rüdiger Ehlers. Formal verification of piece-wise linear feed-forward neural networks. In Deepak D'Souza and K. Narayan Kumar, editors, Automated Technology for Verification and Analysis, pages 269–286, Cham, 2017. Springer International Publishing. ISBN 978-3-319-68167-2.\n\n[5] Vincent Tjeng, Kai Xiao, and Russ Tedrake. Verifying neural networks with mixed integer programming. 
CoRR, abs/1711.07356, 2017. URL http://arxiv.org/abs/1711.07356.\n\n[6] Alessio Lomuscio and Lalit Maganti. An approach to reachability analysis for feed-forward ReLU neural networks. CoRR, abs/1706.07351, 2017. URL http://arxiv.org/abs/1706.07351.\n\n[7] Chih-Hong Cheng, Georg Nührenberg, and Harald Ruess. Maximum resilience of artificial neural networks. CoRR, abs/1705.01040, 2017a. URL http://arxiv.org/abs/1705.01040.\n\n[8] Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O’Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018.", "Finally, we would like to address your more specific comments:\n\n> Is it really that much better if method A has 80% proven robustness and 20% proven non-robustness versus method B that has 80% proven robustness and 20% unknown? One could make the case that method B is actually even better.\n\nWith regards to the example you describe involving method A and method B, we would argue that the results of method A are better. It is better to know exactly in which cases your model is robust and in which cases it is not. If model B is close to 20% non-robust (if all or most of the 20% unknown cases are non-robust), it is better to know than not to know.\n\nIf you can get better proven robustness with method B and method A is fundamentally limited, then perhaps method B is better. But in the case where method A is verification and method B is certification, as of now, method B does not seem to offer significant advantages in robustness, and method A seems to have limitations that can be addressed - our work is one step toward doing so.\n\n> Also, what about the performance numbers from other papers discussed in section 2?\n\nThe performance of other papers discussed in Section 2 (related works) is worse than the results of Wong [1] and Dvijotham [8], which is why we primarily compare to those two works. In comparisons to [1] and [8], our results are within a few % or better.\n\n> Looking at table 1, for instance, for rows 2, 3 and 4, it seems that the verifier used much less than 120 seconds on average. Does that mean the verifier finished for all test examples?\n\nRows 2, 3, and 4 in Table 1 show the average verification time over all test examples, given that the “timeout” is 120 seconds. We count examples that “timeout” as taking 120 seconds. Thus, the average will always be less than 120 seconds, and we observe that this average decreases as we use more natural regularization techniques during training. All of Rows 2, 3, and 4 have some proportion of test examples where the verifier reaches timeout and does not finish (13.81%, 5.8%, 2.87%, respectively - cf. Appendix E). \n\n> Based on the paper, I'm not quite sure whether the idea of training with L1 regularization and/or small weight pruning and/or ReLU pruning for the purpose of improving robustness / verifiability was an original idea of this paper. In either case, this should be made clear. Also, the paper seems to use networks with adversarial training, small weight pruning, L1 and ReLU pruning as its baseline in most cases (all figures except table 1). If some of these techniques are original contributions, this might not be an appropriate baseline to use, even if it is a strong baseline.\n\nL1 regularization, weight pruning, and ReLU pruning are original contributions in terms of their application to improving the ease-of-verification of networks. 
We felt that these ideas are natural and/or commonly used in other settings, and thus did not want to over-emphasize our applying them as major contributions.\n\nWe chose to use a strong baseline as our control in order to isolate the effect of adding RS Loss. Simply using adversarial training as a baseline vs. our final +RS network (which includes adversarial training, L1, small weight pruning, ReLU pruning, and RS Loss) would show a drastic speedup gain in support of the combination of all of our methods, but it would not isolate the effect of each individual method.\n\n> Why are most experiments presented outside of the \"experiments\" section? This seems to be bad presentation.\n\nWe aim to present the most relevant experimental results that support our claims in the body of the paper, and leave the more in-depth details for the “Experiments” section and the Appendix.\n\n> I would include all test set accuracy values instead of writing \"its almost as high\".\n\nThis is a good suggestion - we will include the test set accuracy in Fig. 2.\n\n> Also, in table 3, it appears as if using RS loss DOES in fact increase test error significantly, at least for CIFAR. Why is that?\n\nFor CIFAR, we chose a much higher weight on RS Loss to improve ease-of-verifiability further. We chose weights that did cause test set accuracy degradation to achieve this goal. We remarked in section 4.1 that a potential limitation of our current method is that CIFAR may require more unstable ReLUs to fit properly, which will cause a more noticeable tradeoff between test set accuracy and ReLU stability.", "We thank the reviewer for their detailed comments; they will be helpful in revising our manuscript.\n\nWe appreciate the point about the importance of “verification” compared to “certification” as it is indeed a great question!\nNow, one should note that, in this context, formal verification is the ultimate end goal we strive for; certification is just a fast “shortcut” that can get us closer to this goal, at the expense of sacrificing part of the robustness guarantee.\nOf course, if certification already gives us a satisfactory level of robustness, this tradeoff can be beneficial. However, it is unclear if the current state-of-the-art (SOTA) certification methods like Wong et al. 2018 [1] are at this point able to deliver such robustness once we move beyond the smallest perturbation sizes (see the MNIST, eps=0.3 case in Table 3 of our manuscript). Additionally, when applied to neural networks not specifically trained for that certification method, they give vacuous bounds [2].\nThus, as of now, using and improving formal verification methods should still be our focus.\n\n-------------------------------------------------------------------------------------------------------------------------------------\n\nNow, we would like to address your high-level comments in more detail, followed by your specific comments:\n\n> Regarding (2): The authors' method also requires training the network in a specific way (with RS loss), and it is only compatible with verifiers that care about ReLU stability.\n\nWe view RS Loss as a regularization method, similar to L1 regularization. 
It can be added to any training procedure, and it is designed for a natural goal - encouraging stable ReLUs.\n\nEven if one could, in principle, imagine a verification approach that does not benefit from the natural goal of ReLU stability, all effective verification methods that we are aware of, either falling under the broader class of SMT-based verifiers [3,4] or MIP-based verifiers [5,6,7], can benefit from ReLU stability. For example, [3] states that “When tighter bounds are derived for ReLU variables, these variables can sometimes be eliminated, i.e., fixed to the active or inactive state, without splitting.” [6] writes: “we conjecture that the large increase of binary variables in the problem caused by the binary constraints on the input creates the large performance gap between the Reuters dataset and MNIST.” (the Reuters dataset had more binary variables and took much longer to verify)\n\n> Regarding (1): While the authors' method can provide true negatives, they are not discussed in the paper at all.\n\n> I would include in the paper for how many examples the result was \"adversarial example exists\" and for how many the result was \"timeout\".\n\nWe do discuss upper bounds for the true robustness of all of our models, as well as the number of timeouts, in Appendix E. We appreciate that you bring up this point though, and we will work to point readers to these relevant details in Appendix E from the main body of the paper when revising.\n\nIn Appendix E, the column labeled “Verifier Upper Bound” is simply 100% minus the number of true negatives (“adversarial example exists” cases) - it describes the maximum possible value for the true adversarial accuracy. The difference between the upper bound (“Verifier Upper Bound”) and the lower bound (“Provable Adversarial Accuracy”) on the true adversarial accuracy equals how many examples reached their “timeout,” as we cannot determine which category they belong to. \n\n> At first glance, the fact that the paper only deals with (small) plain ReLU networks seems to be a huge downside.\n\n> While, again, I'm not familiar with the background work on verification / certification, it appears to me from reading this paper that all known verification algorithms perform terribly and are restricted to a narrow range of network architectures.\n\nIndeed, we purposely chose our architectures to match those of prior works in certification literature for the fairest possible comparison.\n\nWe agree that expanding beyond our current capabilities for verification and certification is an important direction for further research. Our contribution in this manuscript is to show that training for ease-of-verification via inducing weight sparsity and ReLU stability can help scale verification. Prior to our work, verification methods struggled for neural networks with just a few hundred ReLUs in total. Using our methods, networks as large as our “large” convolutional CIFAR network, which has over 60,000 ReLUs (most of which can be made stable), can be verified.", "Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability\n\n\nAs I am familiar yet not an expert on adversarial training and robustness, my review will focus mainly on the overall soundness of the manuscript. I also only went superficially into the quantitative results.\n\nSummary:\n\nThe authors are interested in the problem of verifying neural network models trained to be robust against adversarial attacks. 
The focus is on networks with ReLU activations and adversarial perturbations within an epsilon l∞-ball around each input, and the verification problem consists in proving that the network performs as intended for all possible perturbations (infinitely many).\n\nThe review on verification is clear. \nElements that affect verification time are introduced and well explained in the main text or appendix from both an intuitive and a theoretical perspective: L1 penalty, weight pruning, ReLU stability. These can be summarized as: you want few neurons, and you want them to operate in the same regime for all inputs, both to avoid branching. ReLU stability is apparently a new concept and the proposed regularization approximately enforces it.\nThe approximation-based bounds [themselves using the novel improved interval arithmetic] on unit activations propagated through the network seem not to scale well with depth (more units are mis-labelled as ReLU unstable, hence wrongly regularized if I understand correctly). The authors acknowledge and document this fact but I would like to hear more discussion on this feature and on the trade-off that still makes this approach worthwhile for deeper networks.\n\nThis regularization does not help performance but only paves the way for a faster verification; for this reason the term co-design is used.\n\nThe rest of the manuscript is a thorough empirical analysis of the effect of the penalties/regularizations on the network and ultimately on the verification time, keeping an eye on not deteriorating the performance of the network.\nHow much regularization can be added seems to be indeed an empirical question since networks are ‘over-parametrized in the first place’ with no clear way to a priori quantify task or model complexity.\n\nThe devil is in the details and in practice implementation seems not straightforward, with a complex optimization with varying learning rates and different regularizations applied at different times along the way. But this seems to be the case for most deep learning papers.\n\nThe authors claim and provide evidence to be able to verify networks well beyond the scope of what was achievable before due to the obtained speed-ups, which is a notable feature.\n\nOverall, this manuscript is well structured, thorough and pleasant to read, and I recommend it to be accepted for publication at ICLR\n", "This paper proposes methods to train robust neural networks that can also be verified faster. Specifically, it uses pruning methods to encourage weight sparsity and uses regularization to encourage ReLU stability. Both weight sparsity and ReLU stability reduce the time needed for verification. The verified robust accuracy reported in this paper is close to previous SOTA certified robust accuracy, although not beating SOTA.\n\nThe paper is clearly written and easy to follow.\n\nThe reviewer is familiar with the literature on certifiably robust networks, but not familiar with verification literature. To the best knowledge of the reviewer, the proposed method is well motivated and novel, and provides a scalable method for verifying (instead of lower bounding) robustness.\n\nOther comments:\n\nI think there should be some discussion of applicability to different robustness measures. The paper focuses on L_\infty norm bounded attacks; is this method extendable to other norms?\n\nRe: robust accuracy comparison, I found some previous SOTA results missing from Table 3. 
For example, Mirman et al., 2018 (Appendix Table 6) reached 82% provable robust accuracy (higher than the 80.68% achieved in this paper) for the MNIST eps=0.3 case, and this is not reported in Table 3. The CIFAR10 results in Mirman et al., 2018 are also better than the best SOTA accuracy in Table 3.\n\n\nMatthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3575–3583, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/mirman18b.html.\n" ]
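As a concrete companion to the RS Loss debated throughout this record, here is a hedged sketch of a ReLU-stability penalty driven by the interval bounds on pre-activations. The -tanh(1 + lo*hi) form follows my reading of the paper under discussion, so treat the exact formula and the usage pattern as assumptions rather than the authors' verified implementation:

```python
import torch

def rs_loss(pre_lo: torch.Tensor, pre_hi: torch.Tensor) -> torch.Tensor:
    """ReLU-stability penalty: largest when [pre_lo, pre_hi] straddles zero.

    pre_lo * pre_hi < 0 exactly when a ReLU is unstable (its pre-activation
    can take either sign within the perturbation ball), so -tanh(1 + lo*hi)
    is high for unstable units and close to -1 for provably stable ones.
    """
    return -torch.tanh(1.0 + pre_lo * pre_hi).mean()

# Hypothetical usage inside a training step (rs_weight and layer_bounds are
# placeholders, not names from the paper):
#   total = task_loss + rs_weight * sum(rs_loss(lo, hi)
#                                       for lo, hi in layer_bounds)
```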
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2019_BJfIVjAcKm", "SylfYwH5hX", "SklV2h91A7", "iclr_2019_BJfIVjAcKm", "Hkgl-jv5p7", "rkefO5yq67", "H1e9xkx5TQ", "ryl5Ywpd6Q", "rkefO5yq67", "ByxrBih_aX", "H1gHm0i_pm", "rkeN2o7FhQ", "Skxfe0sdpX", "Skg1L6iuTm", "Bke0ohlmoQ", "iclr_2019_BJfIVjAcKm", "iclr_2019_BJfIVjAcKm" ]
iclr_2019_BJfOXnActQ
Learning to Learn with Conditional Class Dependencies
Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning. Although some label structure can implicitly be obtained when training on huge amounts of data, in a few-shot learning context where little data is available, making explicit use of the label structure can inform the model to reshape the representation space to reflect a global sense of class dependencies. We propose a meta-learning framework, Conditional class-Aware Meta-Learning (CAML), that conditionally transforms feature representations based on a metric space that is trained to capture inter-class dependencies. This enables a conditional modulation of the feature representations of the base-learner to impose regularities informed by the label space. Experiments show that the conditional transformation in CAML leads to more disentangled representations and achieves competitive results on the miniImageNet benchmark.
accepted-poster-papers
The reviewers think that incorporating class conditional dependencies into the metric space of a few-shot learner is a sufficiently good idea to merit acceptance. The performance isn't necessarily better than the state-of-the-art approaches like LEO, but it is nonetheless competitive. One reviewer suggests incorporating a pre-training strategy to strengthen your results. In terms of experimental details, one reviewer pointed out that the embedding network architecture is quite a bit more powerful than the base learner and would like some additional justification for this. They would also like more detail on computing the MAML gradients in the context of this method. Beyond this, please ensure that you have incorporated all of the clarifications that were required during the discussion phase.
train
[ "r1xVbbV9RQ", "H1x2TeE90m", "B1gPog4c0Q", "Hyg7wl45AQ", "HJxo10uFh7", "ByxxsLKVn7", "H1lyuO0foX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the very detailed and constructive comments.\n\n1. The motivation\n1.1 How the metric space is trained?\nThe metric space is trained in a pre-training step and it is not updated while training the base-learner. The embeddings obtained from the metric space is different from other popular pre-training techniques, e.g. in LEO the embeddings are pre-trained as a supervised classification task. The pre-trained metric space provides a representation for class dependency as it is trained to provide good separation/clustering from randomly sampled classes. This is in contrast with supervised pre-training which aims to provide discriminative feature representations.\n\n1.2 “Will it introduce more information w.r.t. only using embedding space to do the classification?”\nThe proposed CAML makes use of two views of the data: a global view through the metric space and a local view via the base classifier. The global view of the data, i.e., the embeddings, may not capture all the necessary information for some classification tasks, such as classifying different breeds of dogs which may have similar embeddings. In such cases, the local view of data from the pixel space could help compensate for the lack of information in the global view.\n\n3.5 “How about build MAML directly on the embedding space?”\nWe are aware that meta-learning on the embedding space is a powerful idea, as shown in LEO. We have added an experiment that directly trains maml on the learned 512 dimensional metric space using three fully-connected layers. We were only able to obtain 47.43% on 1-shot tasks and 57.33% on 5-shot tasks. This suggests that applying conditional transformations on the metric space is more effective than directly using the metric space as input.\n\n2. Novelty.\nThe proposed CAML does have close relation to TADAM. However, they have three main differences.\n(i) Different goals: TADAM uses conditional transformation for metric scaling while CAML for developing better gradient-based representations.\n(ii) Task-level vs. example-level representation. TADAM uses task-level representation to modulate the inference from a task perspective, while CAML uses example-level representation to modulate the representation at the content level.\n(iii) The conditional transformation in TADAM is homogeneous in the sense that the conditional information is retrieved from the metric space and also applied to the metric space. However, the proposed CAML uses conditional transformation under the heterogeneous setup where the conditional information is retrieved from the embedding space but applied to a different base learner.\n\n3. Method details\n3.1 “Since CBN is example induced, will it prone to overfitting?”\nThe metric space (ResNet-12) is pre-trained and not updated while training the base learner. The gradients of the meta learner only affect the base learner and conditional transformation. We choose 30 convolutional channels out of computational considerations, and the skip connection has a bigger impact than the number of conv channels. Using 64 conv channels without the skip connection, we obtain 54.63% on 1-shot and 70.38% on 5-shot.\n\n3.2 “Is this skip connection very important for this particular model?”\nYes, the skip connection is very important. The use of skip connections is to improve the gradient flow. MAML unfolds the inner loop into one large graph which may cause gradient issues. 
Without skip connections, our model obtains 56.07% on 1-shot tasks and 71.26% on 5-shot tasks.\n\n3.3 “Will the MAML objective influence the embedding network?”\nWe would like to clarify that the metric is pre-trained and not updated in MAML updates. We empirically observe that training the metric space and meta-learner end-to-end is overly complex and tends to over-fit.\n\n3.4 “How many epochs does CAML need?”\nIt takes 50,000 episodes to train CAML, and another 30,000 episodes to pre-train the metric space.\n", "Thank you for your valuable review.\n\n1. Clarification on the metric learning step\nThank you for the suggestion. The metric is indeed learned in a K-means-flavored way and we have updated our manuscript to reflect that $\phi$ is learned.\n\n2. How are confidence intervals constructed?\nWe sample 600 evaluation tasks from the meta-test classes and report the confidence intervals across all the evaluation tasks. We have updated our manuscript to reflect this.\n", "Thank you for your constructive review.\n\n1. Is the use of class dependency general or specific to MAML-based methods?\n(1) The benefits of class dependency are not restricted to MAML-based methods. The goal of class dependency is to provide complementary information to the meta-learner; this is especially important in few-shot learning due to insufficient data.\n(2) Relating to other SOA: (a) TADAM makes use of conditional transformations based on task representations for metric scaling; class dependencies can also be incorporated into TADAM with the additional benefit of capturing example-level class relationships. (b) LEO can also make use of the class dependency for improving the conditional generation of model parameters.\n\n2. The relationship between the metric space and the base-learner.\nThe proposed framework captures the dual views of a classification task: a global view that is aware of the relationships among all classes, and a local view of the current N-way K-shot classification task. The metric space, or the global view, is pre-trained in a way that is independent of the current N-way K-shot task, while the base-learner, or the local view, attempts to develop representations for the current classification task alone.\n\n3. “What would happen if a similar process keeps on? E.g., by building a third stage that modulates the features from the previous two?”\nThis is a very interesting question. One can build different stages of conditional transformations associated with different granularities of class-dependency. With metric spaces trained to capture different levels of class-dependency, one could modulate the base-learner in a hierarchical manner.\n\n4. How to make use of hierarchical class structure?\nOne can employ a curriculum learning strategy to learn the metric space at different levels of the hierarchy. As mentioned in 3, the hierarchical class structure can also be used to train different metric spaces and conditionally modulate representations in a hierarchical manner.\n", "We thank the reviewers for their valuable feedback. 
The main changes we have made in the manuscript include:\n\n(1) Clarifications on metric learning notations and the fact that the metric space is pre-trained.\n(2) Additional discussions about the relationships between the metric space and the base classifier.\n(3) Highlighting the differences between CAML and TADAM.\n(4) Hyperparameters and other small edits.", "[Summary]\nThe paper presents an enhancement to the Model-Agnostic Meta-Learning (MAML) framework to integrate class dependency into the gradient-based meta-learning procedure. Specifically, the class dependency is encoded by embedding the training examples via a clustering network into a metric space where semantic similarity is preserved via affinity under Euclidean distance. Embedding of an example in this space is further employed to modulate (scale and shift) features of the example extracted by the base-learner via a transformation network, and the final prediction is made on top of the modulated features. Experiments on miniImageNet show that the proposed approach improves the baseline of MAML. \n\nPros\n- An interesting idea of leveraging class dependency in meta-learning.\n- Solid implementation with reasonable technical solutions.\n\nCons\n- Some relevant interesting areas/cases were not exploited/tested.\n- Improvement over state-of-the-arts (SOA) is marginal or none. \n\n[Originality]\nThe paper is motivated by an interesting observation that class dependency in the label space can also provide insights for meta-learning. This seems to be first introduced in the context of meta-learning.\n\n[Quality]\nOverall the paper is well executed in some aspects, including motivation and technical implementation. There are, however, a few areas I would like to see more from it so as to make a stronger case. \n\nIn terms of generalization, the proposed enhancement to MAML is claimed to be orthogonal to other SOAs that are also within the framework based on gradient-descent, e.g. LEO. It is not quite clear to me whether the use of class dependency can lead to general benefits for similar methods like LEO, or if it is just a specific case for the MAML baseline. Actually, it would be interesting to see how the proposed class-conditional modulation can help other SOA in table 1. Also, more empirical results from other use cases (e.g., other datasets or problems) would also help provide more insights here. These augmentations can better justify the value or significance of this work. \n\nIn the specific formulation of the approach in Fig 2, it looks to me that the whole system is a compounded framework that combines two classifiers, with one (base-learner) producing base representation, and the second injecting side-information (e.g., from class-dependency in this case) to modulate the base representation before the final prediction. I just wonder what would happen if a similar process keeps on? E.g., by building a third stage that modulates the features from the previous two? Or what if we swap the roles of base-learner and the embedding from the metric space (i.e., using the base-learner to modulate the embedding)? It looks to me that the feature/embedding from both components (in Fig 5 and 6) are optimized to improve separability. The roles they play in this process would also be very interesting to elucidate further. \n \nAnother point worth discussion is that the class dependency currently imposed does not seem to include hierarchical structure among classes, i.e., the label space is still flat. 
It would be great if this can be briefly discussed with respect to the current formulation to better inspire the future work.\n\n[Clarity]\nThe paper is generally well written and I did not have much difficulty following it. \n\n[Significance]\nWhile the paper is built on an interesting idea, there are still a few areas for further improvement to justify its significance (see the comments above). \n", "TL;DR. Significant contribution to meta-learning by incorporating latent metrics on labels.\n\n* Summary\n\nThe manuscript builds on the observation that using structured information from the label space improves learning accuracy. The proposed method --CAML-- is an instance of MAML (Finn et al., 2017), where an additional embedding is used to characterize the dissimilarity among labels.\n\nWhile quite natural, the proposed method is supported by a clever metric learning step. The classes are first represented by centroids and an optimal mapping $\phi$ is then learnt by minimizing a clustering entropy (similarly to what is performed in a K-means-flavored algorithm, though this connection is not made in the manuscript). A conditional batch normalization (Dumoulin et al., 2017) is then used to model how closeness (in the embedding space $f_\phi$) among labels is taken into account at the meta-learning level.\n\nExisting literature is well acknowledged and I find the numerical experiments to be convincing. In my opinion, a clear accept.\n\n* Minor issues\n\n- I would suggest adding a footnote explaining why Table 1 reports confidence intervals and not just standard deviations. How are those intervals constructed?\n- Section 3.2 bears ambiguity as the manuscript reads "We first define centroids [...]" depending on $f_\phi$, which is then defined as the argument of the minimum of the entropy term. What appears as a circular definition is merely the effect of loose writing, yet I am afraid it would confuse readers. I would suggest rewriting this part, maybe using pseudo-code to better make the point that $f_\phi$ is learnt.", "This paper proposes a new few-shot learning method with class dependencies. To consider the structure in the label space, the authors propose to use conditional batch normalization to help change the embedding based on class-wise statistics, based on which the final classifier can be learned by the gradient-based meta-learning method, i.e., MAML. Experiments on MiniImageNet show the proposed method can achieve high performance, and the proposed part is shown to be effective based on the ablation study.\n\nThere are three main concerns about this paper, and the final rating depends on the authors' response.\n1. The motivation\nThe authors claim the label structure is helpful in few-shot learning. If the reviewer understands correctly, it is the change of the embedding network based on class statistics that considers such a label structure. From the objective perspective, there are no terms related to this purpose, and the embedding space learning is also based on the same few-shot objective. Will it introduce more information w.r.t. only using embedding space to do the classification?\n\n2. The novelty.\nThis paper looks like a MAML version of TADAM. Both of the methods use the conditional batch normalization in the embedding network, while CAML uses MAML to learn another classifier based on the embedding. Although CAML uses the CBN at the example level and considers the class information in a transductive setting, it is not very novel. 
From the results, the proposed method uses a stronger network but does not improve a lot w.r.t. TADAM.\n\n3. Method details\n3.1 Since CBN is example induced, will it be prone to overfitting?\n3.2 About the model architecture. \nCAML uses a 4*4 skip connection from input to output. It is OK to use this to improve the final performance, but the authors also need to show the results without the skip connection to fairly compare with other methods. Is this skip connection very important for this particular model? Most methods use 64 channels in the ConvNet while 30 channels are used in this paper. Is this a computational consideration or to avoid overfitting? It is a bit strange that the main network is just four layers but the conditional network is a larger and stronger ResNet.\n3.3 About the MAML gradients\nHow to compute the gradient in the MAML flow? Will the embedding network be updated simultaneously? In other words, will the MAML objective influence the embedding network?\n3.4 The training details are not clear. \nThe concrete training setting is not clear. For example, does the method need model pre-training? What is the learning rate, and how to adapt it? For the MAML, we also need the inner-update learning rate. How many epochs does CAML need?\n3.5 How about building MAML directly on the embedding space?" ]
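For readers unfamiliar with the conditional batch normalization (CBN) debated in the reviews above, here is a minimal FiLM/CBN-style sketch of how an embedding can scale and shift a base learner's features. The module names and shapes are illustrative (the 512-d embedding and 30 channels echo numbers mentioned in the responses), not CAML's actual code:

```python
import torch
import torch.nn as nn

class ConditionalModulation(nn.Module):
    """An embedding predicts per-channel scale (gamma) and shift (beta)
    that modulate the base learner's convolutional features."""

    def __init__(self, embed_dim: int, num_channels: int):
        super().__init__()
        self.to_gamma = nn.Linear(embed_dim, num_channels)  # predicts scales
        self.to_beta = nn.Linear(embed_dim, num_channels)   # predicts shifts

    def forward(self, feats: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); emb: (B, D) from a pre-trained metric space
        gamma = 1.0 + self.to_gamma(emb)[:, :, None, None]  # init near identity
        beta = self.to_beta(emb)[:, :, None, None]
        return gamma * feats + beta

# Toy usage with made-up sizes:
mod = ConditionalModulation(embed_dim=512, num_channels=30)
out = mod(torch.randn(4, 30, 5, 5), torch.randn(4, 512))  # same shape as feats
```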
[ -1, -1, -1, -1, 6, 8, 4 ]
[ -1, -1, -1, -1, 3, 3, 5 ]
[ "H1lyuO0foX", "ByxxsLKVn7", "HJxo10uFh7", "iclr_2019_BJfOXnActQ", "iclr_2019_BJfOXnActQ", "iclr_2019_BJfOXnActQ", "iclr_2019_BJfOXnActQ" ]
iclr_2019_BJfYvo09Y7
Hierarchical Visuomotor Control of Humanoids
We aim to build complex humanoid agents that integrate perception, motor control, and memory. In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision. We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies. The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment. Supplementary video link: https://youtu.be/fBoir7PNxPk
accepted-poster-papers
A hierarchical method is presented for developing humanoid motion control, using low-level control fragments, egocentric visual input, recurrent high-level control. It is likely the first demonstration of 3D humanoids learning to do memory-enabled tasks using only proprioception and head-based ego-centric vision. The use of control fragments as opposed to mocap-clip-based skills allows for finer-grained repurposing of pieces of motion, while still allowing for mocap-based learning. Weaknesses: It is largely a mashup of previously known results (R2). Caveat: this can be said for all research at some sufficient level of abstraction. The motions are jerky when transitions happen between control fragments (R2,R3). There are some concerns as to whether the method compares against other methods; the authors note that they are either not directly comparable, i.e., solving a different problem, or are implicitly contained in some of the comparisons that are performed in the paper. Overall, the reviewers and AC are in broad agreement regarding the strengths and weaknesses of the paper. The AC believes that the work will be of broad interest. Demonstrating memory-enabled, vision-driven, mocap-imitating skills is a broad step forward. The paper also provides a further datapoint as to which combinations of methods work well, and some of the specific features required to make them work. The paper could acknowledge motion quality artifacts, as noted by the reviewers and in the online discussion. Suggest including [Peng et al 2017] as some of the most relevant related HRL humanoid control work, as per the reviews & discussion.
test
[ "r1gObOE-Am", "HJg6uB8Pam", "HkeENL21C7", "S1xdTHWs6m", "BketCjOKp7", "HJxPPo_FTm", "rJlfZ-ut6m", "Sygyex8D67", "SJxBek8vTX", "r1lL-jzbTm", "Hke4Tu33nX", "BkgV2s7537", "r1e-UMBoiQ", "BkltQstIi7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The authors claim that this method improves upon the earlier work by substantially decreasing the amount of manual curation needed, however I still cannot see any real difference in the level of manual work required. This method as well as the earlier work (Peng et al. 2017 and Peng et al 2018) use existing pre-cleaned mocap data. Though I understand that importing the data to be usable in the RL framework is burdensome, I still believe that this step is shared between the approach in this work as well as the earlier works. Therefore, although I recognize the value in the authors' goal of using as little manual curation as possible, I don't believe this paper takes a substantial step towards this goal.\n\nIn regards to the hierarchical structure that was presented, I don't see much in terms of novelty in this framework and I am not convinced that this method is effective enough in making the task much easier for the higher-level controller.", "1) Summary\nThe authors propose an interesting hierarchical reinforcement learning method that makes use of visual inputs as well as proprioception for locomotion of humanoid agents. The low-level controllers make use of “motion capture” data and are expected to form a set of movement primitives that can be used by a higher-level controller that has vision and memory. Their method is tested on a variety of tasks and different choices of low-level controller are explored.\n\n2) Pros\n+ Combining vision, memory, and motor control\n+ Allows the high-level controller to operate at a coarser time scale\n+ The set of low-level movement primitives can be extended by using more mocap data\n\n3) Cons\n- No comparison to earlier work\n- Highly unnatural motions even though it makes use of mocap data\n- Sample inefficient: more than 1 billion time-steps to train the high-level controller\n\n4) Comments\nShowing that the agent can provide suitable solutions for these tasks using raw vision input is indeed interesting, however it is not clear what the main contribution of the paper is as the authors fail to compare their results with earlier work. It would be useful if the authors could cover the related work in more depth in order to motivate their method and contrast it with the existing solutions. As an example, DeepLoco (Peng et al. 2017) solves a similar problem in which they use an egocentric heightmap instead of direct visual input, hence a formal consideration of the trade-offs would be informative.\n\nIn addition, the appeal of using hierarchical reinforcement learning is to divide up the task into easier chunks that can be solved easier, however it is not obvious how well this method succeeds at this task, keeping in mind that the high-level controller takes in the order of 1 billion time-steps to learn most tasks (5 billion in the case of “Heterogeneous Forage”).\n\nIn the end, an ablation study could be useful since the authors make plenty of novel design decision, yet their effect on the final performance is not clear.\n\n\n6) Questions\n- Is is possible to entirely remove proprioception from the input to the high-level controller or at least use just a small portion of it? How do the results compare in this case?\n\n- How robust is “cold-switching” between control fragments? Is it possible to transition between most fragments without losing balance or does the high-level controller have to be extremely careful as to which combination it should use? The former case would suggest that this method is indeed useful as a hierarchical method. 
However, the latter case might imply that the hierarchical method is failing and the higher level controller’s task has not been made much easier than the original problem itself.\n\n- Table 2 describes the mocap clips used to train the low-level controllers in each task. What is the effect of choosing different sets of motions? Specifically, how well does the steerable controller work if walking motions were used for the “Go To Target” and “Walls” tasks rather than running motions? Presumably, this can result in a more flexible controller which allows sharper turns without losing balance.\n\n- The network in Figure A.1 gets the last action as an input. Why is this required? Especially since the LSTM unit can learn to remember any information related to the previous actions.\n\n- How does the supervised pre-training described in section 2.1 affect the training of low-level controllers? Is it used as a speed-up mechanism or a way of escaping local minima?\n\n- In section 2.1 the authors mention that the episodes are “terminated when the pose deviates too far from the trajectory”. I believe this termination criterion was not present in the earlier works (Peng et al. 2018), so what is the effect of adding such a criterion? Can this make the learned agent less robust as it will not learn to recover from larger perturbations?\n\n\n6) Conclusion\nThe method and the results are interesting but further comparison with existing work is required.", "Thanks to everyone for the detailed reviews, and the authors for their detailed replies.\n\nReviewers: please advise as to whether the replies have influenced your evaluation and your score for the paper.\nYour input is greatly appreciated. \n\nNote that there is a convenient way to see the revision differences: select \"Show Revisions\" on the review page, and then select the check-boxes for the two versions you wish to compare. \n\n-- area chair\n", "Taking all of the reviewer comments into consideration, the first round of exchange has prompted us to revise some of how we communicated the ideas. Specifically, in addition to localized updates in direct response to reviewer comments, the substantial changes are that we revised the intro paragraphs to section 2.2, now titled “Varieties of low-level motor control”, and we substantially changed the discussion (section 4). We hope these revisions make clearer how what we have done relates to existing approaches. We again thank the reviewers for their input.", "Replies to specific questions:\n1) We have not considered the case where high level controllers are information limited -- rather, we view the more interesting asymmetry as being that the low level controller only has proprioception. Depending on the task, it seems likely that only providing vision to the high level controller may do as well as also providing proprioception.\n\n2) Switching among control fragments cannot really be assessed for only a single transition, as it might only become clear that a switch from fragment A → B was a bad choice after realizing that from the state arrived upon due to the sequence of actions (select A, select B), there are no good subsequent options. As such, the appropriate way to examine how flexibly it is possible to switch among control fragments is to examine transition behavior of trained policies. We depicted an example of this in Figure A.5 (for the go-to-target task). We see some diversity of transitions, especially within the fast walk and turn clip, which makes sense for this task. 
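To make the control-fragment switching described in 2) concrete, here is a schematic of cold-switching: every k low-level steps a high-level policy (driven by vision and proprioception) picks a fragment index, and the selected fragment maps proprioception to actions until the next switch. The environment interface and all names are placeholders, not the authors' code:

```python
def run_episode(env, high_level_policy, fragments, k=3, max_steps=1000):
    """Cold-switching over low-level control fragments (a sketch)."""
    obs = env.reset()                       # assumed: {'vision', 'proprio'}
    total_reward, fragment = 0.0, None
    for t in range(max_steps):
        if t % k == 0:                      # HL acts on a coarser timescale
            idx = high_level_policy(obs['vision'], obs['proprio'])
            fragment = fragments[idx]       # no blending: a "cold" switch
        action = fragment(obs['proprio'])   # LL control from proprioception only
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```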
\n\n3) The graph transition and steerable approaches require significant manual curation -- mocap clips must be segmented by hand, possibly manipulated by blending/smoothing clips from the end of one clip to the beginning of another. This process takes human labor and to do it well requires considerable skill as an animator. As researchers with a machine learning orientation, it seems implausible to us that the most productive path forward for motor control is to hand-curate and animate specific behavioral transitions, but indeed we have made a sincere attempt to implement some of these baselines. Through this work, we have found that we can much more rapidly develop methods that scale to more complex tasks if we avoid hand-designing the reuse of low level skills and instead use methods that require little to no human curation. This message we view as a strong takeaway for ourselves, and we wanted to communicate this to the readers of the paper. \n\n4) Providing the previous action as an observation to the policy is a minor design choice that is not critical to this approach. We followed similar agent architectures to those in Mnih et al. 2016 and Espeholt et al. 2018.\n\n5) The supervised pretraining for the mocap tracking controllers helps both to avoid local optima in the RL training and to speed up the training. We can clarify this in the text.\n\n6) It is relatively common for early termination to be used. Heess et al. 2017 and Peng et al. 2018 have both used early terminations, and early terminations can generally be based on contact of body parts with the ground. Often papers are slightly unclear about what termination criterion they use, so we stated that previous work “terminated when the pose deviates too far from the trajectory or when the body falls”. The subsequent sentence clarifies that “Our specific termination condition triggers if parts of the body other than hands or feet make contact with the ground.” We will edit this section further for clarity.\n", "We thank the reviewer for a careful reading and for these questions. We will first address the three “cons” described above. \n\nComparison to earlier work: We have provided a thorough set of comparisons and investigation of the components of the system where possible. In particular, our low-level controllers are built using a variety of techniques, encompassing the techniques in Liu et al., 2017, and Peng et al., 2018. Therefore, at the low-level policy level, we are indeed primarily benchmarking and scaling up existing techniques. At the composite system level, however, we feel that the question is ill-posed. Consider, for example, the DeepLoco paper (Peng et al., 2017): the motion capture data used in this work were manually selected and preprocessed. While undoubtedly excellent work, this step of manual curation makes it difficult to understand what an objective algorithm comparison would mean. Our work aimed to minimize this manual curation. As we demonstrated, the control fragments approach scaled well with the inclusion of redundant or irrelevant motion capture data. We examined the application of existing techniques requiring little manual curation to the problem of hierarchical, memory-based visuomotor control of humanoids.\n\nJerkiness: The movements were admittedly slightly jerky in the control fragment model due to switching among the fragments. In the appendix, we also demonstrate the trade-off between longer fragments, which would result in smoother motion, and task performance. 
For the steering and graph switching approaches, any jerkiness was merely a consequence of the artistry with which mocap is curated: we generally preferred methods that enable scaling to large numbers of clips and the solution of new high-level tasks, instead of manual curation of motion capture data.\n\nSample efficiency: While we agree that 1 billion timesteps seems large, we would emphasize that this is standard at present (e.g. Peng et al. 2018 uses within 1 order of magnitude of that number for learning skill-selection behavior even without vision-based control). Additionally, for each higher-level task, we provided a comparison with a simple rolling-ball body. This body was used to demonstrate the difficulty of the task independent of the control problem.\n\nWe would like to emphasize that there is limited learning-based work on humanoids in simulation reusing motor skills to solve new tasks, and this work is novel in this regard: certainly, the use of an egocentric camera to guide visuomotor behavior of humanoids is little studied. Other work on simulated humanoid control has primarily used state features designed to provide input to the agent about terrain (Peng et al. 2017, Heess et al. 2017, Peng et al. 2018). A contribution of the present work was to move past hand-designed features towards a more ecological observation setting. \n\nA scientific contribution of this work was also to show that hierarchical motor skill reuse enabled tasks that were unsolvable with flat policy learning to be solved. We clearly demonstrated this. For the walls and go-to-target tasks, learning from scratch was slower and produced less robust behavior; for the forage tasks, learning from scratch failed completely. Since control fragments were the most compelling LL control approach among those we considered, we did a thorough study of the effect of fragment length, number of fragments, as well as introduction of redundant clips (see appendix B). ", "Thanks for the reply! But it seems like some points still have to be clarified. I know that this Humanoid_CMU model is available in the DeepMind Control Suite. But this fact doesn't shed any light on some of the details of the design choices:\n\n1) Why are the hands much smaller than for a human of the same height, and the feet bigger?\n\n2) How robust is your method? Is it highly sensitive to the parameters above and will it work only for these proportions and with feet larger than real human ones?\n\n", "We thank the reviewer for appreciating the difficulty of the problem and the novelty of the setting, including the use of egocentric vision.\n\nThe main concern of this reviewer seems to be about comparison to other methods. Our core intent with this paper has indeed been a comparison of different methods, adapted from the literature for reusing low-level motor skills. Previous work (e.g. Merel et al 2017, Peng et al 2018, and others) has attempted to build low-level controllers that incorporate transitions and are conditioned on a pre-specified input parameterization (such as heading direction). This style of approach is represented in our comparisons by the steering and graph-transitioning approaches. We also explored the use of control fragments for skill reuse (proposed by Liu et al. 2017) -- we presented results using a default version of this as well as a novel variant. As far as we are aware, these approaches amount to the current most competitive approaches for motor skill reuse. 
Our work goes beyond this by demonstrating that these forms of motor reuse enable visuomotor control and can, in some cases, be used to solve more complex whole-body tasks than previously done. Overall, we view this current paper as an integrative work that helps establish and clarify the state of the art involving motor reuse for generic humanoid movement tasks. \n\nConcerning jerky transitions and other visual idiosyncrasies -- we agree that transitions between sub-behaviors are jerky here. One could attempt to enforce smoothness; however, without adjusting the low level controllers, this would merely amount to a prior or constraint on which transitions would occur and would reduce task performance. We also agree that there are numerous ways to train the low-level controllers to be a bit smoother -- this is important for computer graphics and would make movements more visually pleasing, but does not necessarily affect task performance. Importantly, these manual adjustments can require considerable human effort to tune, and for this paper we wanted to focus on general techniques for reuse that require as little hand-tuning as possible, as we believe this will be critical for scaling to large skill repertoires.\n", "We thank the reviewer for their feedback.\n\nWe share your interest in reproducibility. The body we have used is only a slight variation on the DM control suite CMU humanoid (https://github.com/deepmind/dm_control; Tassa et al. 2018); the motion capture data are already available, as is code in the suite to register mocap data to the body. We intend to release the updated body and task environments used in this work. Unfortunately, because of the complexity and inter-linkedness of the agent-level code, along with the requirements for cluster-specific, high-performance compute, we cannot at present easily release it, but elements of the algorithms are available (e.g., the high-level controllers were trained with the same algorithm as in Espeholt et al., 2018; http://github.com/deepmind/scalable_agent).\n\nPeng et al. 2018 described a few approaches for multi-skill integration:\nThe multi-clip reward and skill selector approaches are similar to our conditional tracking along a graph as they involve training a low-level controller based on a few clips. The details differ slightly, but these approaches are similar in spirit to our baseline of training a low-level controller to switch based on a parametric input. We will clarify this relationship in the text.\nThe composite policy approach in Peng et al. 2018 is more specific and involves selecting subsequent skills based on the relative value function for transitioning to subsequent behaviors -- it is essentially autonomous, so it is less immediately amenable to re-purposing in the context of new tasks and less relevant in the context of the present work. \nOverall though, we emphasize that the Peng et al. 2018 style approaches require manual curation of the mocap data (similar to our transition-graph and steerable tracking approaches), and we aimed with this work as well to explore approaches (such as control fragments) that require very little/no manual curation.\n\nThe comparison with Heess et al. 2017 is a good question. Sensible looking locomotion behavior can be achieved in a variety of ways (environmental constraints, appropriate shaping rewards, task setup including multiple tasks or curricula). Heess et al.
2017 uses a simpler reward that encourages a constant forward velocity, together with environmental variations, on a simpler humanoid model. In our own end-to-end experiments, slow movements and transitions between standing and moving, in particular, have proven relatively difficult. The comparison here is mostly meant to demonstrate that the particular go-to-target task setup does not easily give rise to naturalistic-looking behavior when trained end-to-end. However, we can expand on the details of the end-to-end training in the text.\n\nIn each episode of a task, the environment is randomized. In the Walls and Gaps task, the layout of the track is randomized, requiring the agent to use vision to navigate. In the Forage task, the maze layout and placement of rewards are randomized. Finally, in the heterogeneous forage task, each orb color is assigned a random positive or negative reward. In this case, the agent must match color to reward within the episode in order to eat only the “good” orbs. We can clarify in the text that not all tasks require memory. More broadly, the use of memory is a little subtle insofar as the high-level agent sees information at each low-level timestep, but only acts to switch among the low-level controllers (for control fragments, e.g. every 3 timesteps); it at least makes sense that some state information seen by the high-level controller between selection timesteps could inform subsequent selection, but we agree we don’t assess this explicitly.\n\nWe agree that adapting low-level skills is important, and this is a clear direction for future research.\n", "The paper proposes a control architecture for learning task-oriented whole-body behaviors in simulated humanoid robots, bootstrapped with motion capture data.\n\nThe authors use a hierarchical approach, where the low-level controllers are trained to follow motion-capture data, whereas the high-level control combines them. \nThe topic of the paper is interesting and the language is understandable. \n\nThe paper discusses and compares different ways to achieve such higher-level control.\nIt probably won’t be useful for real robots, but may well be useful for computer graphics.\nI suspect that code will not be published anytime soon, and I am afraid it will be hard to reproduce without it. There is solid software engineering involved, and the system has many parameters. \n\nThe related work section (or lack thereof) can be improved. What is the advantage of this work over the multi-skill integration in Peng et al. 2018? Please explain explicitly in the paper.\n\nThe end-to-end approach seems a bit too weak to me. The video shows more artifacts than other similar papers (cf. Heess et al. 2017). What are the details of the training for the end-to-end baseline?\n\nAre the environments randomized in each rollout? If not, then this would need an ablation study which ablates memory/vision to prove the claim of integrating vision and memory. \nHow much is the memory used in the tasks where nothing needs to be memorized?\nIs there any noise in the simulations?\n\nOne weakness is that the low-level controllers are not adapted any further. 
That is probably why the fragments outperformed the transition policies etc., because the higher-level policy has more flexibility.\n\nOverall, from the perspective of deep learning, I think the paper is novel and provides some insights into different approaches to the problem.\n", "1) Summary\nThis paper proposes a hierarchical reinforcement learning (HRL) method for visual motor control of humanoid agents. The method is decomposed into a high-level controller that takes in visual input and proprioceptive information, and a low-level controller (they compare many ways of doing this) that takes care of the agent’s motor control. In experiments, the proposed method is tested on a variety of RL tasks where the many low-level controllers presented in the paper are compared against each other.\n\n2) Pros:\n+ Novel high-level controller that takes in front-view visual information\n+ Novel multi-policy low-level controller\n+ Interesting experimental section\n\n3) Cons:\nNumerical comparison to previous methods:\n- The only issue I found with this paper is that there is no comparison with other methods. Even if the other methods do not take in front-view visual input, it would be nice to compare with them. Maybe visual input results in a better high-level controller? Or even showing that performance is similar would be an interesting result.\n\n4) Comments:\nJerky transitions in the switching controller:\n- Due to the fact that one policy takes over after another based on the high-level controller's choice, there is a jerk artifact that shows when the policies are being changed/executed. Did you try to add a connection in feature space between policies rather than only passing the state of the agent? This may be able to help with that artifact that sampling noise adds to the actions. Can the authors comment on this?\n\nSteerable controller limited rotation:\n- From observing the steerable controller policy in action, it seems the policy learned a steering that is somewhat independent of what the limbs are doing. Maybe adding a mechanism where the leg motion intensity depends more on the direction of movement could be a way to fix the issue where this policy moves too fast for the turning it tries to do. Maybe an energy-based objective to minimize the torques, or something along those lines.\n\n5) Conclusion:\nTo the best of my knowledge, this paper proposes a novel and interesting method for modeling humanoid motor skills with front-view visual input. However, as mentioned above, the paper lacks numerical comparisons with other methods, and only compares against its own variations, which is more of an ablation study. I am willing to increase my review score if the authors successfully address the concerns mentioned above.\n", "This body was created in other work and is publicly available -- see the DM control suite github (https://github.com/deepmind/dm_control), which has two branches with variants of the “Humanoid_CMU”. As stated in the DM control suite write-up, the Humanoid_CMU segment lengths and pose parameterization are based on a subject in the CMU mocap database, so the body is easy to set to poses obtained from that database. \n\nHumans vary quite significantly in actuation strength. The actuation strengths of the model can be seen in the DM control suite model. These numbers appear relatively strong for a body of this mass, and in other experiments it is indeed capable of dynamic, humanlike movements, including running, jumping, and acrobatics. 
The present work does not emphasize or require highly dynamic movements; rather, we study schemes for reusing basic motor skills for solving high-level tasks with minimal manual curation.\n", "Hi,\n\nWhat was a reasoning behind the creation and using such a strange human model? It looks and behaves very unrealistically, for example, hands are much smaller than a real human with the same height should have. \n\nActuators look too weak - humanoid can't jump and run well enough, running looks very heavy more like a usual walking.\n\nWhat was a motivation for such a design? This humanoid model has much more degrees of freedom and looks like it was supposed to be more realistic and closer to the real human, compared to the traditional 23 DoF but it's not the case with its wrong proportions and motor strengths." ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, -1, -1 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "BketCjOKp7", "iclr_2019_BJfYvo09Y7", "iclr_2019_BJfYvo09Y7", "iclr_2019_BJfYvo09Y7", "HJxPPo_FTm", "HJg6uB8Pam", "r1e-UMBoiQ", "BkgV2s7537", "Hke4Tu33nX", "Hke4Tu33nX", "iclr_2019_BJfYvo09Y7", "iclr_2019_BJfYvo09Y7", "BkltQstIi7", "iclr_2019_BJfYvo09Y7" ]
iclr_2019_BJg4Z3RqF7
Unsupervised Adversarial Image Reconstruction
We address the problem of recovering an underlying signal from lossy, inaccurate observations in an unsupervised setting. Specifically, we consider situations where there is little to no background knowledge on the structure of the underlying signal, no access to signal-measurement pairs, nor even unpaired signal-measurement data. The only available information is provided by the observations and the measurement process statistics. We cast the problem as finding the \textit{maximum a posteriori} estimate of the signal given each measurement, and propose a general framework for the reconstruction problem. We use a formulation of generative adversarial networks, where the generator takes as input a corrupted observation in order to produce realistic reconstructions, and add a penalty term tying the reconstruction to the associated observation. We evaluate our reconstructions on several image datasets with different types of corruptions. The proposed approach yields better results than alternative baselines, and comparable performance to model variants trained with additional supervision.
accepted-poster-papers
This paper proposes a GAN-based method to recover an image from a noisy version of it. The paper builds upon existing works on AmbientGAN and CS-GAN. By combining the two approaches, the work arrives at a new method that performs better than existing approaches. The paper clearly has new, interesting ideas which have been executed well. Two of the reviewers have voted in favour of acceptance, with one of the reviewers providing an extensive and detailed review. The third reviewer, however, has some doubts which were not resolved completely after the rebuttal. Upon reading the work myself, I am convinced that this will be interesting to the community. However, I recommend that the authors take the comments of Reviewer 2 into account and do whatever it takes to resolve the issues pointed out by the reviewer. During the review process, another related work was found to be very similar to the approach discussed in this work. This work should be cited in the paper, as a prior work that the authors were unaware of. https://arxiv.org/abs/1812.04744 Please also discuss any new insights this work offers on top of this existing work. Given that the above suggestions are taken into account, I recommend accepting this paper.
test
[ "HkgUbQ_WgN", "BJguc0OE0Q", "BJlcvR_VRm", "H1xoXRON0m", "H1xo8T_4RQ", "SklY001CnQ", "BylSgJYp2Q", "B1lziIvI3m" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have released the code used in this paper : https://github.com/UNIR-Anonymous/UNIR", "Thank you for your feedback. We have taken note of your comments and have been actively working to take them into account.\nYou raised two main questions , one concerning the measurement process and the second one concerning the need to test the model on additional datasets.\n\nConcerning the first question, we have rewritten the sections explaining to the measurement process (please, see also the general comments about the measurement process above). Below is an extract from Section 2.1. “Problem Setting” of the updated paper version:\n\n“Suppose there exists a signal X ~ p_X we wish to acquire, but we only have access to this signal through lossy, inaccurate observation Y ~ p_Y. The measurement process is modeled through a stochastic operator F mapping signals X to their associated observations Y. We will refer to F as the measurement process, which corrupts the input signal. F is parameterized by a random variable \\Theta ~ p_\\Theta following an underlying distribution p_\\Theta we can sample from, which represents the factors of corruption. Thus, given a specific signal x, we can simulate its measurement by first sampling \\theta from p_\\Theta, and then computing F(x; \\theta). Additional sources of uncertainty, e.g. due to unknown factors, can be modeled using additive i.i.d. Gaussian noise \\Eps ~ \\mathcal{N}(0, \\sigma^2 I), so that the overall acquisition process becomes: \nDifferent instances of F will be considered, e.g. like random occlusions, information acquisition from a sparse subset of the signal, overly smoothing out and corrupting the original distribution with additive noise, etc... In such cases, the factors of corruption \\Theta might respectively represent the position of the occlusion, the coordinates of the acquired information, or simply the values of the additive noise.”\n\n\nFor different measurement processes instances, also called corruptions, please refer to the Corruptions section (4.2) in the Experiments Section.\n\nAs for the second remark, we have added experiments conducted on two additional datasets: LSUN Bedrooms, and Recipe1M. The results are provided in section 5 and in appendix 3. Overall this confirms the good results of the model already obtained on the first dataset.\n", "Thank you very much for your review and comments : they are very much appreciated. \n\n“If I understand correctly, this is the 'Conditional AmbientGAN' approach that is used as a baseline. This is a sensible approach given prior work. However, the authors show that their method ('Unpaired Supervision') performs significantly better compared to the Conditional AmbientGAN baseline. This is very surprising and interesting to me. Please discuss this a bit more ? As far as I understand the proposed method is a merging of AmbientGAN and CS-GAN, but much better than the naive separation. Could you give a bit more intuition on why ?”\n\n\nIndeed, this is correct. The conditional AmbientGan baseline combines the approaches of AmbientGan and CS-GAN. First, a generative model G of the data is learned without having access to samples of the signal distribution using the AmbientGAN framework. Then, in order to reconstruct the signal from a corrupted measurement y, we look for an input vector z of G that produces a simulated measurement G(z) that looks like y, by minimizing the Euclidean distance between G(z) and y. 
This method suffers from several drawbacks, which we believe can explain the poor results:\n\n* First drawback: suboptimality of the generator. In theory, if the generator were optimal, under suitable conditions on the measurement process F, it would generate outputs belonging to the manifold of uncorrupted images (which we shall call M). Thus, projecting a measurement onto M should recover an uncorrupted image. However, this is never the case: in practice, GANs suffer from a number of problems. This means that images from the manifold of generated images may not correspond to true samples: applying gradient descent to minimize the aforementioned distance tends to generate images similar to the corrupted images y, and not to uncorrupted images x. Our model does not suffer from this problem because it maximizes the log-likelihood and the prior term jointly. If G generates a signal that does not belong to M in order to maximize the log-likelihood term (similarly to what happens with the Conditional AmbientGAN baseline), the discriminator will easily be able to detect this, and consequently the reconstruction network G is corrected in order to avoid this behaviour.\n\n* Second drawback: the Euclidean distance used in Conditional AmbientGAN is not suited to the general case considered in the paper. The natural thing to do would be to find a reconstruction from M that maximizes the likelihood p(y|x). If the corruption in the measurement process corresponds to i.i.d. additive noise, it is possible to show that the problem reduces to minimizing the Euclidean distance between x and y, as in Conditional AmbientGAN. However, this is not necessarily the case for other measurement processes. Indeed, in the general formulation, the likelihood is intractable; it requires marginalizing over the noise variables \\theta, and for each SGD step we would need to approximate it, which would be very costly. The likelihood term in our cost function better reflects the true likelihood.\n\n\n\nIn the appendix, where is the proposed method in Figs. 5-8?\nFigs. 5-8 (now 11-14) are samples from our baselines. The corresponding samples from our model were in Figures 9 to 14. We are adding our model to Figures 5-8 (11-14). Note that we are now providing samples from other datasets (see the general comments).\n\nDoes the proposed method outperform Deep Image Prior?\n\nOur experiments show that for strong corruption functions, DIP yields poor results compared to our model (see Figures 11-14). One of the main explanations is that it does not capture semantic information from the other images of the dataset. \n\nFor the measurement processes Patch-Band, Remove-Pixel and Remove-Pixel-Channel, Deep Image Prior (DIP) has access to the corruption parameter \\theta of the associated measurement (we have used the inpainting formulation of DIP). In other words, it has access to the mask, as opposed to our model. We have conducted experiments where DIP does not have the mask (the normal formulation of DIP), and have observed very poor results (which were actually quite similar to the poor results of Conditional AmbientGAN). \n", "Thank you for the review. We are sorry that you found the overall presentation confusing, and we have been working actively to make the paper much clearer. We have thus submitted a revised version of the paper taking into account your comments and answering your questions. Please see also the general comments. 
Specifically, we have:\n* Rewritten Section 2.1 (Problem Setting), describing the abstract measurement process and the role of theta, taking into account your comments.\n* Modified the Method section (Section 3) in order to make the explanations more straightforward and less abstract. In particular, we moved some mathematical results to the appendix for more fluent reading.\n* Added experiments on two additional datasets: LSUN and Recipe-1M (Section 4.1 + Appendix C). They illustrate the behavior of the model and of the baselines on image datasets with different characteristics, and confirm the good results obtained by our model.\n* Provided additional details on the hyperparameters and the architecture for overall reproducibility (Section 4.1). Note that we will be releasing the code shortly.\n* Added details regarding the specific measurement instances (also called corruptions) used in the experiments (Section 4.2, Corruptions).\n* Added details on the different baselines in Section 4.3 (+ figures visually describing them in the appendix). \n\nTo answer your question regarding the structure of the measurement process: the measurement (or corruption) process described in equation (1) is assumed known. This means that, as in most problem formulations for signal recovery, the structure of the stochastic function F is known. For example, let us consider the additive Gaussian noise case: F(X, \\Theta) = X + \\Theta, where X is the signal random variable to be recovered, and \\Theta is the noise random variable (also called the corruption parameter) whose underlying distribution p_\\Theta is Gaussian. This distribution p_\\Theta is assumed known, although for a specific measurement we do not know the precise value \\theta that contributed to its corruption. In other cases, e.g. when the measurement process induces a more structured corruption, such as our Patch-Band corruption, which randomly places a band occluding the original image (introduced in Section 4.2), \\Theta follows a uniform distribution taking its values from the space of pixel coordinates. To simulate this corruption process, one samples a \\theta from the prior p_\\Theta, and uses it to corrupt the signal x, resulting in the measurement y = F(x, \\theta). In this case, F places a band using \\theta as the position of the top of the band. This is exactly the same formulation as the one used for AmbientGAN: the associated corruption parameter \\Theta for “DropPatch”, which is very similar to our “PatchBand”, corresponds to the position of the occluding patch (refer to the official implementation [1]). Note that it would also be possible to sample the size of the box, if its size varies in the corrupted data. \n\nPaired/Unpaired variant explanation:\n\n\nFor the two model variants that use the additional information, the *Unpaired and Paired Variants*, we have added additional details in the Baselines Section 4.3, and additional figures describing them in the Baselines Appendix C. Below is an extract of the Baselines section of the updated paper:\n\nUnpaired Variant:\n“Here, we have access to samples of the signal distribution p_X. This means that although we have no paired samples from the joint p_X,Y, we have access to unpaired samples from p_X and p_Y. 
This baseline is similar to our model although, instead of discriminating between a measurement from the data y and a simulated measurement \\hat{y}, we directly discriminate between samples from the signal distribution and the output of the reconstruction network \\hat{x}.”\n\nPaired Variant:\n“This baseline has access to signal-measurement pairs (y, x) from the joint distribution p_X,Y. Given an input measurement y, the reconstruction is obtained by regressing to the associated signal x using an MSE loss. In order to avoid blurry samples, we add an adversarial term to the objective in order to constrain G to produce realistic samples, as in Pix2Pix [2]. The model is trained using the same architectures as our model, and the hyperparameters have been found using cross-validation. ”\n\n\n[1]: https://github.com/AshishBora/ambient-gan/blob/master/src/commons/measure.py#L176\n[2]: https://phillipi.github.io/pix2pix/\n", "Thanks to all the reviewers for their comments and suggestions. We tried to take all of them into account: we reorganized the paper accordingly and hope that it now provides all the required clarifications. We address below some general comments/questions raised by the reviewers, and then give detailed answers for each review.\n\nThe model presentation has been rewritten, highlighting the main ideas and results (Section 3) while deferring some mathematical details to Appendix A. We have added figures illustrating the different components of the model (Figs. 1, 2, 3).\nDetails on the model parameters used for the experiments are provided in Section 4.1, details on the corruption processes used for the experiments in Section 4.2, and the baselines used for comparison are described quite extensively in Section 4.3.\nWe performed tests on two additional datasets (LSUN Bedrooms and Recipe-1M). The three datasets have different characteristics; these experiments thus illustrate the model's behavior in these different contexts. In the initial version, tests were performed on the CelebA dataset only, and two reviewers mentioned that this was too limited.\n Finally, the reviewers raised questions on the nature of the perturbation mechanism (the F(x; theta) function in the text). We agree that the description might have been unclear. This is now fully described in Section 2.1. In a few words, we suppose that there exists a signal x we wish to reconstruct, but we only have access to x through lossy measurements y. The measurement process is modeled by a stochastic function with corruption parameters theta associated with a prior distribution p_Theta. The observations y are then supposed to be generated as y = F(x; theta). We have added discussions in the text explaining the instances of F and p_Theta associated with the different types of corruptions used in the experiments.\n", "The authors address the problem of recovering an underlying signal from lossy and inaccurate measurements in an unsupervised fashion. They use a GAN framework to recover plausible signals from the measurements in the data. \n\n* The authors need to test other datasets; the CelebA dataset is too limited. \n* Similarly, experiments with different corruption processes are required. \n* What is the definition of F? The \"measurement process\" is not clearly defined.\n", "This is a very interesting paper that achieves something that seems initially impossible: \nto learn to reconstruct clear images from only seeing noisy or blurry images. 
\n\nThe paper builds on the closely related prior work AmbientGAN, which shows that it is possible to learn the *distribution* of uncorrupted samples using only corrupted samples, again a very surprising finding. \nHowever, AmbientGAN does not try to reconstruct a single image, only to learn the clear image distribution. The key idea that makes this possible is knowledge of the statistics of the corruption process: the generator tries to create images that, *after they have been corrupted*, look indistinguishable from real corrupted images. This surprisingly works and provably recovers the true distribution under a very wide set of corruption distributions, but tells us nothing about reconstructing an actual image from measurements. \n\nGiven access to a generative model for clear images, an image can be reconstructed from measurements by maximizing the likelihood term. This method (CS-GAN) was introduced by Bora et al. in 2017. Therefore, one approach to solve the problem that this paper tackles is to first use AmbientGAN to get a generative model for clear images and then apply CS-GAN with the learned GAN. If I understand correctly, this is the 'Conditional AmbientGAN' approach that is used as a baseline. This is a sensible approach given prior work. However, the authors show that their method ('Unpaired Supervision') performs significantly better compared to the Conditional AmbientGAN baseline. This is very surprising and interesting to me. Please discuss this a bit more? As far as I understand the proposed method is a merging of AmbientGAN and CS-GAN, but much better than the naive separation. Could you give a bit more intuition on why?\n\nI would also like to add that the authors can use their approach to learn a better AmbientGAN. After getting their denoised images, these can be used to train a new AmbientGAN, with cleaner images as input, which should be even better, no?\n\nIn the appendix, where is the proposed method in Figs. 5-8?\n\nDoes the proposed method outperform Deep Image Prior?\n\n\n", "This paper presents a method to reconstruct images using only noisy measurements. This problem is practically interesting, since the noiseless signal may be unavailable in many applications. The approach combines ideas from recent developments in compressed sensing and GANs. However, the model’s presentation is confusing, and many important details of the experiments are missing.\n\nPros:\n\n* The problem is interesting and important\n* The combination of compressed sensing and GANs for image reconstruction is novel\n\nCons:\n\n* The model structure is unclear: for example, what is the role of the variable \\theta? Section 2.1 says it is known, but the algorithm samples from its prior(?). Since there is no further explanation with respect to the experiments, I am not sure how the values of \\theta or its distributions were determined. Although \\theta is formally similar to the \\theta parameters of the measurement function in AmbientGANs, this interpretation is at odds with the example given in the paper (below Eq. 1, saying \\theta can be positions or sizes).\n* A few important details of the model are missing. For example, what is the exact structure of the measurement function F?\n* The baseline models are a bit confusing. 
More detail about unpaired vs. paired supervision would also be helpful for understanding how these baseline models use the additional information.\n* Although the paper mentions that parameters are obtained via cross-validation, it would still be helpful to describe a few important ones (e.g., neural network size, weight \\lambda) for comparison with other models.\n* The experiments on only the CelebA dataset are too limited." ]
[ -1, -1, -1, -1, -1, 6, 8, 4 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2019_BJg4Z3RqF7", "SklY001CnQ", "BylSgJYp2Q", "B1lziIvI3m", "iclr_2019_BJg4Z3RqF7", "iclr_2019_BJg4Z3RqF7", "iclr_2019_BJg4Z3RqF7", "iclr_2019_BJg4Z3RqF7" ]
iclr_2019_BJg9DoR9t7
Max-MIG: an Information Theoretic Approach for Joint Learning from Crowds
Eliciting labels from crowds is a promising way to obtain large labeled datasets. Despite a variety of methods developed for learning from crowds, a key challenge remains unsolved: \emph{learning from crowds without knowing the information structure among the crowds a priori, when some members of the crowd make highly correlated mistakes and some label effortlessly (e.g. randomly)}. We propose an information theoretic approach, Max-MIG, for joint learning from crowds, with a common assumption: the crowdsourced labels and the data are independent conditioning on the ground truth. Max-MIG simultaneously aggregates the crowdsourced labels and learns an accurate data classifier. Furthermore, we devise an accurate data-crowds forecaster that employs both the data and the crowdsourced labels to forecast the ground truth. To the best of our knowledge, this is the first algorithm that solves the aforementioned challenge of learning from crowds. In addition to the theoretical validation, we also empirically show that our algorithm achieves new state-of-the-art results in most settings, including on real-world data, and is the first algorithm that is robust to various information structures. Code is available at https://github.com/Newbeeer/Max-MIG .
accepted-poster-papers
This paper proposes an interesting approach to leveraging crowd-sourced labels, along with an ML model learned from the data itself. The reviewers were unanimous in their vote to accept.
train
[ "rygf8BMF37", "BklLgntDhX", "B1g5uv1N07", "B1xLvvU0pQ", "B1e5ewz9pQ", "Syxqx4gdp7", "SJg2zPgOpQ", "SkgHhklOTQ", "HylQebguTX", "HygvOxIC3Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Update after feedback: I would like to thank the authors for their detailed answers, it would be great to see some revisions in the paper also though (except new experimental results).\nEspecially thank you for providing details of a training procedure which I was missing in the initial draft. I hope to see them in the paper (at least some of them).\n\nI have increased the rating to 6. Given new experimental results both on real data and forecaster comparison I would like to increase the rating to 7. However, I am not sure that this is fair to other authors who would might not be physically able to provide new experimental results due to computational constraints, please note that the experiments in this paper are rather 'light' in the standards of modern deep learning experiments and can be done within the rebuttal period. \n====================================================\n\n\nThe paper finds a practical implementation of ideas from Kong & Schoenebeck (2018) for the learning with crowd problem. It proofs the claims from Kong & Schoenebeck (2018) for the specific family of data classifiers and crowd aggregators. From the general perspective, the papers proposes a method for joint training a classifier and a crowd label aggregator with particular consideration of correlated crowd labels. \n\nThe paper is fairly well-written and well-balanced between theoretical and empirical justification of the method. I see 1 major and 1 big issues with the paper.\n\nMajor issue: I am missing details of the actual procedure of training the model. Is MIG set as a loss function for the data classifier NN? Is crowd aggregator trained also as an NN with MIG as a loss function? How do the authors find the optimal p? Also, in order all the provided theory to work all the found data classifier NN, the aggregator and p should be exact maximisers of MIG as far as I understand. How do the author ensure that they find the exact maximisers? Also related to understanding how training works: on p.15 the authors claim “Note that our method can handle this simple correlated mistakes case and will give all useless experts weight zero based on Theorem 3.4.” I have trouble understanding why the proposed method should find these zero weights rather than it is just able to find them?\n\nI am willing to change my judgement if the authors provide convincing details on the training procedure.\n\nBig issue: Experimental settings. \na) Though it is interesting to see the analysis of the method under controlled environments of synthetic crowd labels with different properties that show benefits of the proposed method (such as dealing with correlated crowd labels), it would be also appealing to see the results with real labels, for example, Rodrigues & Pereira (2017) provide Amazon MTurk crowd labels for the LabelMe data\nb) Is the proposed data-crowd forecaster the only method that uses crowd labels on the test data? While it can be argued that it is not straightforward in the test regime to include crowd labels into Crowd Layer, for example, without retraining the neural net, AggNet can use crowd labels without retraining the neural net part. In the presented format, it is unfair to compare the forecaster with the other methods because it uses more information, and essentially, the forecaster is not compared with anything (that uses the same information). It can be compared, at least, with pure Majority Voting, or more advanced pure crowdsourcing aggregation methods. 
Yes, they won’t use image data, but at least they can use the same amount of crowd label information, which would make a nice comparison with the presented related work and the proposed NN: this is what you can get using just image data at test time (Crowd Layer, Max-MIG, and others from the current paper), this is what you can get using just crowd labels at test time (Majority Voting or, preferably, more advanced pure crowdsourcing aggregators), and this is what you can get using both images and crowd labels at test time (the proposed forecaster and AggNet, for example).\n\nQuestions out of curiosity: \ni). Does Max-MIG handle missing crowd labels for some data points? Did the authors use missing labels in the experiments?\nii). Both the Dogs vs. Cats and CIFAR-10 datasets have more or less balanced data, i.e., the number of data points belonging to each ground truth class is similar between classes. Is this true for the LUNA16 dataset? If yes, have the authors tried their method with heavily imbalanced data? In my experience, some crowdsourcing methods may suffer from imbalanced data; for example, Crowd Layer does so on some data. This tendency of Crowd Layer is somewhat confirmed on the provided Dogs vs. Cats in the naïve majority case, where based on crowd labels the first class dominates the second.\n\nOther questions/issues/suggestions:\n1. Until the formal introduction of the forecaster on page 4, it is not entirely clear what the difference is between the data classifier and the data-crowd forecaster. It should be explained more clearly at the beginning that the 3 concepts (data classifier, crowd label aggregator and \"data-crowd forecaster\") are separate. Also, some motivation for why we should care about the forecaster would be beneficial, because one can argue that if we could train an NN that makes good enough predictions, why should we waste resources on crowd labels? For example, the provided empirical results can be used as an argument for this.\n2. From the introduction it is unclear that there are methods in crowdsourcing that do not rely on the assumption that data and crowd labels are independent given the ground truth labels. As mentioned in the related work, there are methods dealing with the difficulty of data points, where models assume that crowd labels may be biased on some data points due to their difficulty, e.g., if images are blurred, which violates this assumption.\nAlso, the note that considering image difficulty violates the independence assumption could be added on page 3 around \"[we] do not consider the image difficulty\".\n3. The beginning of page 4: I think it would be clearer to replace \"5 experts' labels:\" by $y^{[5]}=$\n4. I suggest moving the caption of Figure 3 into the main text. \n5. p.3 \"However, these works are still not robust to correlated mistakes\" - Why? \n6. The data-crowds forecaster equation: it would be good to add some intuition about this choice. The product between the classifier and aggregator predictions seems reasonable; the division by p_c is not that obvious. This expression presumably maximises the information gain introduced below, so some link between this equation and the introduction of the gain would be nice. Also, a minor point – it would be better to enlarge the inner brackets ()_c\n7. The formulation “To the best of our knowledge, our approach is a very early algorithm”, and namely “a very early algorithm”, is unclear to me\n8. Dual usage of “information intersection” as an assumption and as something that Max-MIG finds is confusing\n9. 
Any comments on how the learning rates were chosen would be beneficial\n10. Proof of Proposition C.3: “Based on the result of Lemma C.2, by assuming that h ∗ ∈ H_{NN} , we can see (h ∗ , g∗ ,p ∗ ) is a maximizer of max_{h∈H_{NN} ,g∈G_{W A},p∈∆_C} MIGf (h, g,p)” – is an expectation missing in the max equation? Is this shown below on page 13? If yes, then the authors should rephrase this sentence, as it does not imply that this is actually shown below\n11. p.12 (and below) – what is $\\mathbf{C}^m$? Is it $\\mathbf{W}^m$?\n12. p.15 (at the end of the proof) $p \\log q$ and $p \\log p$ are not formally defined\n\nMinor:\n1. p.1 \"of THE data-driven-based machine learning paradigm\"\n2. \"crowds aggregator\" -> \"crowd aggregator\"?\n3. p.2 (and below) \"between the data and crowdsourced labels i.e. the ground truth labelS\"\n4. Rodrigues & Pereira (2017) has a published version (AAAI) of their paper\n5. p.2 \"that model multiple experts individually and explicitly in A neural network\"\n6. p.3 \"model the crowds by A Gaussian process\"\n7. p.3 \"We model the crowds via confusion matriCES\"\n8. p.3 \"only provide A theoretic framework and assume AN extremely high model complexity\"\n9. p.4 \"forecast\" for h and g -> \"prediction\"?\n10. p.6 “between the data and the crowdsourced labelS”?\n11. p.6 “However, in practice, with A finite number of datapoints”\n12. p.6 “the experiment section will show that our picked H_{NN} and G_{W A} are sufficientLY simple to avoid over-fitting”\n13. p.6 “We call them A Bayesian posterior data classifier / crowds aggregator / data-crowds forecaster, RESPECTIVELY”\n14. p.6 “Theorem 3.4. With assumptionS 3.1, 3.3”\n15. p.7 “DoctOr Net, the method proposed by Guan et al. (2017)”\n16. p.7 “including the naive majority case since naive expert is independent with everything” – rephrasing is required; it is unclear what “independent with everything” means and who the “naïve expert” is\n17. Please capitalise the names of conferences and journals in the References\n18. p.10 “she labels the image as “dog”/“cat” with THE probability 0.6/0.8 respectively”, “(e.g. B labels the image as “cat” with THE probability 0.5 and “dog” with THE probability 0.5 when the image has cats or dogs)”\n19. p.12 “Lemma C.2. (Kong & Schoenebeck, 2018) With assumptionS 3.1, 3.3”, “Proposition C.3. [Independent mistakes] With assumptionS 3.1, 3.3”\n\n\n\n", "EDIT: I thank the authors for providing all clarifications. I think this paper is a useful contribution. It will be of interest to the audience at the conference.\n\nSummary:\nThis paper provides a method to jointly learn from crowdsourced worker labels and the actual data. The key claimed difference is that previous works on crowdsourced worker labels ignored the data. At a higher level, the algorithm comprises maximizing the mutual information gain between the worker labels and the output of a neural network (or more generally any ML model) on the data. \n\nEvaluation:\nI like the idea behind the algorithm. However, there are several issues on which I ask the authors to provide some clarity. I will provide a formal \"evaluation\" after that. (For the moment, please ignore the \"rating\". I will provide one after the rebuttal.) \n\n(1) As the authors clarified, one key aspect of the \"information intersection\" assumption is that the crowdsourced labels are statistically independent from the data when conditioned on the ground truth. How strongly does this coincide with reality? 
Since the work is primarily empirical, is there any evidence on this front?\n\n(2) In the abstract, introduction etc., what does it mean to say that the algorithm is an \"early algorithm\"?\n-- Thanks for the clarification. I would suggest using the term \"first algorithm\" in such cases. However, is this the first algorithm towards this goal? See point (3).\n\n(3) The submitted paper misses an extremely relevant piece of literature: \"Learning From Noisy Singly-labeled Data\" (arXiv:1712.04577). This paper also aims to solve the label + features problem together. How do the results of that paper compare to those of this submission?\n\n(4) \"Model and assumptions\": Is the i.i.d. assumption across the values of \"i\"? Then does that not violate the earlier claim of accommodating correlated mistakes?\n\n(5) Recent papers on crowdsourcing (such as Achieving budget-optimality with adaptive schemes in crowdsourcing arXiv:1602.03481 and A Permutation-based Model for Crowd Labeling: Optimal Estimation and Robustness arXiv:1606.09632) go beyond restricting workers to have a common confusion matrix for all questions. In this respect, these are better aligned with the realistic scenario where the error in labeling may depend on the closeness to the decision boundary. How do these settings and algorithms relate to the submission?\n\n(6) Page 5: \"Later we will show....\" Later where? Please provide a reference.\n\n(7) Theorem 3.4, the assumption of the existence of experts such that Y^S is a sufficient statistic for Y: For instance, suppose there are 10 experts who all have a 0.999 probability of correctness (assume symmetric confusion matrices) and there are 5 non-experts who have a 0.001 probability of correctness; even if we suppose all are mutually independent given the true label, does this satisfy this sufficient statistic assumption? This appears to be a very strong assumption, but perhaps the authors have better intuition?\n\n(8) The experiments comprise only some simulations. The main point of experiments (particularly in the absence of any theoretical results) towards bolstering the paper is to ensure that the assumptions are at least somewhat reasonable. I believe there are several datasets collected from Amazon Mechanical Turk available online? Otherwise, would it be possible to run realistic experiments on some crowdsourcing platforms?\n", "The training procedure of Max-MIG is illustrated below.\n\nDenote the data classifier as h and the crowds aggregator as g. The parameters of h (resp. g) are \\theta_h (resp. \\theta_g); the learning rate of h (resp. g) is \\alpha_h (resp. \\alpha_g). The crowdsourced dataset is denoted as D. An (X, Y^{M}) pair denotes a batch of images and their corresponding crowdsourced labels Y^{M} from M experts. We tune the prior p as a hyperparameter.\n\nThe implementation details, such as batch size, learning rate and network architecture for the different datasets, are given on Pages 12-13 of our paper.\n\nStep 1: Initialization of the experts' parameters in the crowds aggregator (please refer to Page 13 of our paper for more details)\n\nStep 2:\n\n For t in 1, 2, ..., T\n\n\t sample mini-batch (X, Y^{M}) from D\n\n\t left_output = h(X)\n\n\t right_output = g(Y^{M})\n\n\t Loss = -MIG(left_output, right_output, p) (please refer to Page 5 of our paper for more details)\n\n\t \\theta_h = \\theta_h - \\alpha_h * \\nabla_{\\theta_h} Loss\n\n\t \\theta_g = \\theta_g - \\alpha_g * \\nabla_{\\theta_g} Loss\n
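For further concreteness, here is a hedged PyTorch-style sketch of a weighted-average aggregator of this kind and of the joint update (written for this response; "mig_loss" stands in for the MIG objective from Page 5, and the identity initialization below is only a placeholder for the actual initialization described on Page 13):

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedAverageAggregator(nn.Module):
    # g: one C x C weight matrix per expert; crowd labels enter as one-hot vectors,
    # so a useless expert can be suppressed by driving its matrix toward zero.
    def __init__(self, n_experts, n_classes):
        super().__init__()
        self.W = nn.Parameter(torch.eye(n_classes).repeat(n_experts, 1, 1))
        self.b = nn.Parameter(torch.zeros(n_classes))

    def forward(self, Y):  # Y: (batch, n_experts) integer crowd labels
        onehot = F.one_hot(Y, self.b.numel()).float()           # (batch, M, C)
        logits = torch.einsum('bmc,mcd->bd', onehot, self.W) + self.b
        return F.softmax(logits, dim=-1)

# One joint step, with h any image classifier ending in a softmax:
#   loss = -mig_loss(h(X), g(Y), p)   # shared loss; p tuned as a hyperparameter
#   loss.backward(); opt_h.step(); opt_g.step()
", "Thank you for your questions.\n\nQ: My question is about Theorem 3.4. 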
I repeat my question: Does the example in my question satisfy the sufficient statistic condition? If so, then is there an easy way to see that it does?\nA: Yes. The short explanation is: in your example, we can make the set of senior experts S consist of both the experts and the non-experts. \n\nThe long explanation is: in the example of your question, all experts are mutually independent conditioning on the ground truth. All independent mistakes cases satisfy SSC since (1) SSC requires that there EXISTS a subset of experts S (we call them senior experts), whose identities are unknown, such that the experts in S have mutually independent labeling biases and it is sufficient to use only the information of the experts in S to predict the ground truth label; (2) in the independent mistakes case, we can make S = M, where M is the set of all experts. \n\nQ: Unfortunately, this apparently important requirement is quite hidden within all notation etc. In the revision, please clarify the meaning and implications of this condition (in the main text or appendix).\nA: Thanks for your suggestion. In addition to the explanation of this requirement in the last paragraph of our intro, in our revised version we have also clarified it after the statement of our main theorem. \n\nQ: The simple setting of (5), which is highly prevalent in practice, does NOT satisfy the \"information intersection\" assumption. I am fine with this assumption since it appears often in earlier works on crowdsourced labeling, but the paper needs to be very clear about the benefits as well as the limitations of this assumption. In the revision, please make a very careful comparison of pros and cons with respect to the references in (3) and (5).\n\nA: The \"information intersection\" assumption involves both the crowdsourced labels and the datapoints, while both settings of (5), Khetan & Oh 2016 and Shah et al. 2016, are for pure crowdsourcing methods. Thus, we assume that this question means that the settings of (5), i.e. Khetan & Oh 2016 and Shah et al. 2016, do not use the original Dawid-Skene model to model the experts, while our crowdsourcing part uses the original Dawid-Skene model to model the experts. \n\nKhetan & Oh 2016 and Shah et al. 2016 employ the generalized Dawid-Skene model, which considers task difficulty, while we do not, as we use the original Dawid-Skene model to model the experts. However, by employing the information from the datapoints, our results are robust to the correlated mistakes cases while theirs are not. We agree that combining the generalized Dawid-Skene model with our Max-MIG framework is an important future direction to explore. We will add this comparison, and the comparison with (3), in our revised version. \n\n\nKhetan, Ashish, and Sewoong Oh. \"Achieving budget-optimality with adaptive schemes in crowdsourcing.\" Advances in Neural Information Processing Systems. 2016.\n\nShah, Nihar B., Sivaraman Balakrishnan, and Martin J. Wainwright. \"A permutation-based model for crowd labeling: Optimal estimation and robustness.\" arXiv preprint arXiv:1606.09632 (2016).", "Thank you for your response. The experimental results are indeed very positive. I have two follow-up comments:\n\n- Regarding earlier question (7): My question is about Theorem 3.4. I repeat my question: Does the example in my question satisfy the sufficient statistic condition? If so, then is there an easy way to see that it does? 
If not, then is the main theorem missing a very important case (and does it perhaps call for the later proposition to be brought into the main text)? If not, then what is the implication and/or meaning of this sufficient statistic condition? Unfortunately, this apparently important requirement is quite hidden within all notation etc. In the revision, please clarify the meaning and implications of this condition (in the main text or appendix).\n\n- Regarding earlier questions (1), (3), (5): The simple setting of (5), which is highly prevalent in practice, does NOT satisfy the \"information intersection\" assumption. I am fine with this assumption since it appears often in earlier works on crowdsourced labeling, but the paper needs to be very clear about the benefits as well as the limitations of this assumption. In the revision, please make a very careful comparison of pros and cons with respect to the references in (3) and (5).\n\n", "Thanks for your comments and questions. We will release our code after review. \n\nQ: >>Is MIG set as a loss function for the data classifier NN? Is the crowd aggregator also trained as an NN with MIG as a loss function? \nA: <<We train the data classifier and the crowd aggregator together using -MIG(data classifier, crowd aggregator, p) as the loss function, i.e. they share the loss function. \n\nQ: >>How do the authors find the optimal p? \nA: <<We tune p as a hyperparameter to maximize MIG(data classifier, crowd aggregator, p). \n\nQ: >>Also, in order for all the provided theory to work, the found data classifier NN, the aggregator and p should be exact maximisers of MIG as far as I understand. How do the authors ensure that they find the exact maximisers? \nA: <<We are not sure we understand this question. If this question asks about the robustness of our algorithm, then our empirical results show that our algorithm is robust. \n\n\nQ: >>Also related to understanding how training works: on p.15 the authors claim “Note that our method can handle this simple correlated mistakes case and will give all useless experts weight zero based on Theorem 3.4.” I have trouble understanding why the proposed method should find these zero weights rather than merely being able to find them.\nA: <<Theoretically, our method should be, and is, able to give the useless experts weight zero. In the simple correlated mistakes case, the best crowd aggregator gives the only useful expert all of the weight and the other useless experts zero weight. During the training process, in order to maximize the mutual information between the classifier and the aggregator, the SGD process of our algorithm will increase the weights of the useful experts and decrease the weights of the useless experts to a relatively small number, such that the trained aggregator approximates the best crowd aggregator. We will clarify this in our final version.\n\nQ: >>Real crowdsourced data\nA: <<Thanks for your suggestion. See our top comments. \n\nQ: >>Compare our data-crowd forecaster with AggNet\nA: <<Thanks for your suggestion. We compared our data-crowd forecaster with AggNet. The results still match our theory. When there are no correlated mistakes, we outperform AggNet or have very similar performance. When there are correlated mistakes, we outperform AggNet by a lot (e.g. +30%). We have revised our paper and added this result. \n\nQ: >>Compare our crowd aggregator with pure crowdsourcing methods (Majority Voting or, preferably, more advanced pure crowdsourcing aggregators)\nA: <<This is still an unfair comparison. 
Although our crowd aggregator only takes the crowdsourced labels as input, the training process of our crowd aggregator incorporates the information from the images. \n\nQ: >>Does Max-MIG handle missing crowd labels for some data points? Did the authors use missing labels in the experiments?\nA: <<The LabelMe data is in the missing-label setting; the empirical results (our top comments) show that our algorithm handles this setting. \n\nQ: >>Both the Dogs vs. Cats and CIFAR-10 datasets have more or less balanced data, i.e., the number of data points belonging to each ground truth class is similar between classes. Is this true for the LUNA16 dataset? If yes, have the authors tried their method with heavily imbalanced data? In my experience, some crowdsourcing methods may suffer from imbalanced data; for example, Crowd Layer does so on some data. This tendency of Crowd Layer is somewhat confirmed on the provided Dogs vs. Cats in the naïve majority case, where based on crowd labels the first class dominates the second.\nA: <<LUNA16 is highly imbalanced (85%, 15%). We will clarify it in our final version. \n\nWe thank you for your careful review and will follow your suggestions on the writing and fix the typos. ", "Thank you for your review and comments. \n\nQ: >>As the authors clarified, one key aspect of the \"information intersection\" assumption is that the crowdsourced labels are statistically independent from the data when conditioned on the ground truth. How strongly does this coincide with reality? Since the work is primarily empirical, is there any evidence on this front?\nA: <<1) Let's consider the case where we ask the turkers to label \"dogs vs cats\". This assumption says that the turkers' labels are noisy versions of the ground truth class and that the noise is independent of other aspects of the images (e.g. whether the image scene is indoor or outdoor). When the assumption is violated, in the sense that the turkers' noises are highly correlated with other aspects of the images (e.g. whether the image scene is indoor or outdoor), without other assumptions no algorithm can train a classifier that avoids the influence of the ``indoor or outdoor'' information. \n2) This assumption is commonly used in most of the crowd-learning literature (Dawid & Skene (1979), Raykar et al. (2010), Albarqouni et al. (2016), Guan et al. (2017), Rodrigues & Pereira (2017)).\n\nQ: >>Is this the first algorithm towards this goal? \nA: <<It's not the first algorithm for ``joint'' learning (Raykar et al. 2010 is the first). It is the first algorithm that is robust to various information structures, theoretically and experimentally. Learning From Noisy Singly-labeled Data is not robust to correlated mistakes (see the following detailed comparison). \n\nQ: >>The submitted paper misses an extremely relevant piece of literature: \"Learning From Noisy Singly-labeled Data\" (arXiv:1712.04577). This paper also aims to solve the label + features problem together. How do the results of that paper compare to those of this submission?\nA: <<Thanks for your information. We will cite this ICLR 18 paper. Theoretically, this paper still requires the experts to be mutually conditionally independent while we do not. Empirically, we tested this method on the LabelMe data, which has real Amazon MTurk crowd labels, and our method still outperforms it: Max-MIG 86.42 +/- 0.36, MBEM (ICLR 18) 81.24 +/- 1.60.\n\nQ: >>\"Model and assumptions\": Is the i.i.d. assumption across the values of \"i\"? 
Then does that not violate the earlier claim of accommodating correlated mistakes?\nA: <<It means that {(x_1, y_1^1, ..., y_1^M), (x_2, y_2^1, ..., y_2^M), ...} = (x_i, y_i^1, ..., y_i^M)_i are i.i.d. samples of the joint random variables (X, Y^1, ..., Y^M). A non-i.i.d. example is one where all the (x_i, y_i^1, ..., y_i^M)_i are the same. "Experts make correlated mistakes" means that the random variables Y^1, ..., Y^M are correlated even conditioning on the ground truth. There is no contradiction here. \n\nQ: >>Recent papers on crowdsourcing (such as Achieving budget-optimality with adaptive schemes in crowdsourcing arXiv:1602.03481 and A Permutation-based Model for Crowd Labeling: Optimal Estimation and Robustness arXiv:1606.09632) go beyond restricting workers to have a common confusion matrix for all questions. In this respect, these are better aligned with the realistic scenario where the error in labeling may depend on the closeness to the decision boundary. How do these settings and algorithms relate to the submission?\nA: <<Thanks for your information. One possible direction is to treat both the ground truth and the image difficulty as the information intersection and to find them jointly. We agree that taking image difficulty into account is an interesting direction to explore in future work, and we will try to combine our framework with the relevant papers in the future. \n\nQ: >>Page 5: \"Later we will show....\" Later where? Please provide a reference.\nA: <<The formal statement is in Appendix C, Theorem 3.4 (this is a detailed statement compared with the Theorem 3.4 in the main body). We will clarify it in our revised paper. \n\nQ: >>Theorem 3.4, the assumption of the existence of experts such that Y^S is a sufficient statistic for Y: For instance, suppose there are 10 experts who all have a 0.999 probability of correctness (assume symmetric confusion matrices) and there are 5 non-experts who have a 0.001 probability of correctness; even if we suppose all are mutually independent given the true label, does this satisfy this sufficient statistic assumption? This appears to be a very strong assumption, but perhaps the authors have better intuition?\nA: <<1) Our algorithm can handle all mutually independent cases, which includes the above 0.999 example, since the mutually independent case satisfies our assumption automatically, with all experts seen as senior experts (see Proposition C.3 for details). We will clarify this in our revised paper. 2) We test our algorithm on real data (see our top comments) and the results show that our algorithm is robust to the real case. \n\n\nQ: >>Realistic experiments on some crowdsourcing platforms?\nA: <<See our top comments. ", "To all reviewers:\n\nThank you all for suggesting experiments on real Amazon MTurk data. We followed the second reviewer's suggestion and ran our algorithm on the LabelMe data from Rodrigues and Pereira (2017), which has real Amazon MTurk crowd labels (it is also a missing-label setting, with 59 annotators and each image labeled by an average of 2.547 workers). We also achieve the state of the art in this real-data case. Here is the result: Max-MIG 86.42 +/- 0.36, Majority vote 80.41 +/- 0.56, Crowd Layer 83.65 +/- 0.50, Doctor Net 80.56 +/- 0.59, AggNet 85.20 +/- 0.26. \n\nWe have revised our paper and added this result. We will also release our code after review. ", "Thanks for your comments and questions. 
There might be some misunderstanding and we want to clarify it here: our algorithm is not ad hoc, and it is independent of any prior knowledge about the information structures and identities of the senior/junior experts; i.e., our algorithm learns from crowds without knowing the information structure among the crowds a priori.\n\nQ: >>If the labels are collected from an unknown setup (e.g. on AMT), where it is hard to establish the dependency structure of the experts, how can we use such approaches effectively? \n>>So it is not surprising that the proposed approach outperforms other approaches. It's also interesting to see that AggNet isn't that bad in general compared to the proposed approach (except on LUNA16). What if we combine all experts in one setting and apply the proposed approach without prior knowledge of who are senior/junior?\n\nA: <<Our algorithm does not need to know the dependency structure of the experts or the identity of the senior or junior experts. In detail, our algorithm's input is only the data points and crowdsourced labels, and the initialization is also independent of the dependency structure of the experts and the identity of the senior or junior experts.\n\nQ: >>(Top cons) Hard-to-check assumption for Theorem 3.4 for real-world problems, on the sufficiency of senior experts' info to predict the true class label\n>>Even if there exists a clear line between senior/junior experts in the labeling process, how do we know or check that the senior experts' opinion can sufficiently estimate the true labels? \nA: <<To implement our algorithm, we also do not need to check the sufficient statistic assumption. \n\nQ: >>(Top cons) Fairly strong assumption on the existence of mutually independent senior experts in the labeling process\nA: <<1) The general MAX-MIG framework does not need this assumption, and the assumption can be relaxed by providing a more complicated aggregator model (see the last paragraph of the conclusion section). We agree that this can be an interesting direction to explore in future work. 2) Our results on real data show that our current implementation of MAX-MIG, with the weighted-average aggregator model, is still robust to the real situation empirically (see our top comments). \n\nQ: >>Did you require all experts to label ALL the data points or only a subset of training data points? \nA: <<The LabelMe data is in the missing-label setting, and the empirical results (our top comments) show that our algorithm handles this setting. \n\nQ: >>I don't believe \"Naive majority\" is an interesting setting - we can easily detect those junior experts that always label cases with one class, and remove these experts from the system, in practice. \nA: <<In our experiments, we make the naive majority always label 1 to show that other methods (e.g. majority vote) cannot handle this setting. However, in fact, if the naive majority labels 1 with prob 0.9 and 0 with prob 0.1, we cannot easily remove them, and other methods still cannot handle this setting, while our algorithm will not be affected by this kind of naive majority, based on our theory. \n\nQ: >>The term ``early''\nA: <<We will revise this term. ", "Top pros:\n- Well-motivated approach with good examples from a clinical setting\n- Sound proof of why an information-theoretical approach is better than MLE-based approaches\n- Experiments on diverse data sets to show their approach's performance, with good implementation details. 
\n\nTop cons:\n- Fairly strong assumption on the existence of mutually independent senior experts in the labeling process\n- Hard-to-check assumption for Theorem 3.4 for real-world problems, on the sufficiency of senior experts' info to predict the true class label\n\nThe paper is in general well written, and builds upon existing work on crowdsourced data mining and co-training. I believe this line of work will benefit the community in taking a more information-theoretical approach with relaxed assumptions on the data collection process. My main feedback is how to check the existence of senior experts in real-world applications. In particular,\n- If the labels are collected from an unknown setup (e.g. on AMT), where it is hard to establish the dependency structure of the experts, how can we use such approaches effectively? \n- Even if there exists a clear line between senior/junior experts in the labeling process, how do we know or check that the senior experts' opinion can sufficiently estimate the true labels? \n\nIn the experiment section, the label data was collected with a built-in assumption of senior/junior labelers, and we also know exactly who the senior/junior experts are. So it is not surprising that the proposed approach outperforms other approaches. It's also interesting to see that AggNet isn't that bad in general compared to the proposed approach (except on LUNA16). What if we combine all experts in one setting and apply the proposed approach without prior knowledge of who are senior/junior? Also, did you require all experts to label ALL the data points or only a subset of training data points? \n\nMinor points:\n- I don't believe \"Naive majority\" is an interesting setting - we can easily detect those junior experts that always label cases with one class, and remove these experts from the system, in practice. \n- I wouldn't call this an \"early\" algorithm as it indicates it's somewhat premature. Just call this a novel approach that is in the early phase; more sophisticated approaches can be further developed. " ]
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_BJg9DoR9t7", "iclr_2019_BJg9DoR9t7", "rygf8BMF37", "B1e5ewz9pQ", "SJg2zPgOpQ", "rygf8BMF37", "BklLgntDhX", "iclr_2019_BJg9DoR9t7", "HygvOxIC3Q", "iclr_2019_BJg9DoR9t7" ]
iclr_2019_BJgK6iA5KX
AutoLoss: Learning Discrete Schedule for Alternate Optimization
Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters. Appropriately scheduling the optimization of a task objective or a set of parameters is usually crucial to the quality of convergence. In this paper, we present AutoLoss, a meta-learning framework that automatically learns and determines the optimization schedule. AutoLoss provides a generic way to represent and learn the discrete optimization schedule from metadata, and allows for a dynamic and data-driven schedule in ML problems that involve alternating updates of different parameters or from different loss objectives. We apply AutoLoss on four ML tasks: d-ary quadratic regression, classification using a multi-layer perceptron (MLP), image generation using GANs, and multi-task neural machine translation (NMT). We show that the AutoLoss controller is able to capture the distribution of better optimization schedules that result in higher quality of convergence on all four tasks. The trained AutoLoss controller is generalizable -- it can guide and improve the learning of a new task model with different specifications, or on different datasets.
accepted-poster-papers
The paper suggests using meta-learning to tune the optimization schedule of alternate optimization problems. All of the reviewers agree that the paper is worthy of publication at ICLR. The authors have engaged with the reviewers and improved the paper since the submission. I asked the authors to address the rest of the comments in the camera-ready version.
train
[ "BJgG8g-Kx4", "SkxBNIaEeV", "Bke9KJljRX", "B1lSm_S9AX", "BkgdU-e16m", "SJl_vpqYRQ", "rkgn_XcxRm", "ByxRxkQFpX", "BJlq-BGFa7", "rylM2EzFpX", "ByeGCYgKTX", "BklC6-auT7", "rked778JaQ", "HklV1vNp2m" ]
[ "author", "public", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for pointing us to your work [1], which studies the similar topic concurrently with us. Both works focus on designing methods to introducing dynamicas into objectives/loss functions. Specifically, [1] tries to directly cast the objective function as a a learnable neural network (learned by measuring the similarity between model prediction and ground-truth). By contrast, we focus on learning the update schdules (parameterized as NNs) in problems where multiple objectives or/and sets of parameters are involved. Our formulation allows for tackling alternate optimization problems such as (1) GANs, where multiple objectives have clear difference with each other and are combined in a minimax form; (2) multi-task learning, that each objective of interest is well-defined and prefixed but an update order is missing; (3) or even EM-based maximum likelihood estimation where some inference procedures involved (e.g. MCMC) aren't in the form of a gradient-based optimization -- In all these cases, the objective itself might be difficult to be represented or approximated by neural networks. We will cite your paper in a future version and include the above discussion.\n\n[1] Wu, L., Tian, F., Xia, Y., Fan, Y., Qin, T., Jian-Huang, L., & Liu, T. Y. (2018). Learning to Teach with Dynamic Loss Functions. In Advances in Neural Information Processing Systems (pp. 6465-6476).", "Dear the authors,\n\nThank you for referring to our ICLR'18 work \"Learning to Teach\" in your work. We have an extension of L2T in NeurIPS this year: \"Learning to Teach with Dynamic Loss Functions\" (https://papers.nips.cc/paper/7882-learning-to-teach-with-dynamic-loss-functions.pdf), which studies the automatic discovery of better objectives/loss functions adaptively in the optimization process, and therefore is quite related with your work. It'll be more comprehensive to position this one in your paper. Thanks.\n\nBest,\nFei Tian\n", "Thanks for the great suggestions again! We're working on generating new results using suggested metrics on the GAN and NMT experiments and will add the new results in the next version.", "Thanks for the comments again! We have fixed this typo in the latest version.", "The authors proposed an AutoLoss controller that can learn to take actions of updating different parameters and using different loss functions.\n\nPros\n1. Propose a unified framework for different loss objectives and parameters.\n2. An interesting idea in meta learning for learning loss objectives/schedule.\n\nCons: \n1. The formulation uses REINFORCE, which is often known with high variance. Are the results averaged across different runs? Can you show the variance? It is hard to understand the results without discussing it. The sample complexity should be also higher than traditional approaches.\n2. It is hard to understand what the model has learned compared to hand-crafted schedule. Are there any analysis other than the results alone?\n3. Why do you set S=1 in the experiments? What’s the importance of S?\n4. I think it is quite surprising the AutoLoss can resolve mode collapse in GANs. I think more analysis is needed to support this claim. \n5. The evaluation metric of multi-task MT is quite weird. Normally people report BLEU, whereas the authors use PPL. \n6. According to https://github.com/pfnet-research/chainer-gan-lib, I think the bested reported DCGAN results is not 6.16 on CIFAR-10 and people still found other tricks such as spectral-norm is needed to prevent mode-collapse. \n\nMinor: \n1. 
The usage of footnote 2 is incorrect.\n2. In references, some words should be capitalized properly, such as gan->GAN.\n", "Thanks to the authors for addressing my comments. I’ve adjusted my score accordingly. I still think there are some weaknesses in terms of evaluation.\n1. IS is not the only qualitative metric in GAN, and DCGAN is not the state-of-the-art baseline. I would be curious to see how AutoLoss performs using some more recent GAN architectures. In addition to IS, the FID score is also a recent complementary metric to show the effectiveness. \n2. I understand the response to comment 5, but reporting the metrics that the community cares about is also important. Sometimes, PPL is not directly correlated with BLEU or other indirect measures. Without reporting proper metrics, it is hard to know how the approach performed compared to Niehues & Cho 2017.\n\n", "Thank you for your detailed comments.\n\nThe addition of the appendix sections will greatly aid in reproducibility!\n\n@ Horizon bias: Interesting that you observe it in GAN but not in NMT.\n\nOne other small typo:\n\nA.8. Double reference to Algorithm 1 in the GAN section. You probably mean one to be Algorithm 2.", "We thank all reviewers for giving valuable feedback on this paper. We have uploaded a revised manuscript in which we have incorporated the suggestions from the comments. \n\nWe want to highlight the following revisions:\n- We have added to Appendix A.1 the detailed algorithm for how PPO is incorporated into AutoLoss.\n- We have added Appendix A.8 to disclose the detailed hyperparameters used to produce the presented results.\n- We have added Appendix A.9 to discuss the potential limitations of AutoLoss, as suggested by AnonReviewer3.\n- We have updated Figure 4(b) to a scatter plot for clarity, as suggested by AnonReviewer4.\n- We have added several references suggested by AnonReviewer4 and revised several claims to be more accurate.\n", ">> Comment #8, #9\nThanks for pointing us to these two works. In [1], the authors investigate several features and develop a controller that can adaptively adjust the learning rate of the ML problem at hand, similarly in a data-driven way. In [2], the authors propose to manually balance the training of G and D by monitoring how good G and D are, assessed by three quantities and realized by simple thresholding. By contrast, AutoLoss offers a more generic way to parametrize and learn the update schedule. Hence, AutoLoss fits into more problems (as we’ve shown in the paper).\nWe have appropriately revised the two claims and cited them in the latest version.\n\n>> Comment #10\nEmpirically, IS^2 or IS do not make much difference in performance. The scaling term is a flexible parameter that controls the scale of the reward, though we do not tune it very much.\n\n>> Comment #12\nYes, in WGAN, it is preferable to train the critic till optimality. We have revised the statement for accuracy -- we observe in our experiments that, for DCGANs with the vanilla GAN objective (JSD), more generator training than discriminator training generally performs better (but this may not be an effective hint for other GAN objectives as they behave very differently).\n\n>> Comment #13\nWe have added Appendix A.8 to disclose all hyperparameters. All code and model weights used in this paper will be made available. 
\n\n>> Comment #14\nWe’ve revised our statements to be more accurate: for all GANs and NMT experiments, we observe AutoLoss reaches better final convergence; for GAN 1:1 and GAN 1:9, AutoLoss trains faster; for the NMT experiments, AutoLoss not only trains faster but also converges better.\n\nWe’d like to clarify that for all our GANs and NMT experiments, the stopping criterion of an experiment is either divergence or when we don’t observe improvement of convergence for 20 consecutive epochs. This is why in Fig.2, Fig.3(L) and Fig.4(c), it looks as if different methods are given different training time.\n\n>> Comment #15\nWe have updated Figure 4(b) to a scatter plot, and fixed the mentioned typos in the current version.", "Thanks for the detailed and encouraging feedback! We reply to all comments below (relevant ones are put together):\n\n>> Comments #1, #11\nWe mainly attribute the success of this simple training strategy to the simplicity of the model, the relatively low dimensionality of our input features, and the simplified action space (though all three suffice to obtain a good controller in the current settings). They make the training of the controller much easier compared to other RL tasks with higher-dimensional features or larger output spaces.\n\nWe have added the detailed PPO-based training algorithm in Appendix A.1. While AutoLoss is amenable to different policy optimization algorithms, we empirically find PPO performs better on NMT, but REINFORCE performs better on GANs. As to the online setting, thanks for pointing us to the “short-horizon bias” paper. We have indicated in the revision the existence of this bias -- this bias was observed on the GAN task -- overtraining G can increase IS in the short term, but may lead to divergence in the long term as G becomes too strong. On the other hand, we didn’t observe it noticeably harming the NMT task. We hypothesize the tradeoff is insignificant on NMT, as in our multi-task setting, slightly over-optimizing one task objective usually does not have an irreversible negative impact on the MT model (as long as the other objectives are optimized appropriately later on). \n\n>> Comments #2, #3\nWe’d like to clarify that S=1 is consistent between the overhead section and Algorithm 1. S controls how many sequences to generate to perform a (batched) policy update (i.e. S is the batch size), and we set S=1 for all tasks. Only T differs across tasks, but we always update \\phi whenever a reward is generated.\n\nBack to comment #2: for regression and classification, we have experimented with larger S and found the improvement marginal. As each reward is generated via an independent experiment, the correlations among gradients are not obvious. For large-scale tasks, we use memory replay to alleviate correlations in online settings (please see Algorithm 2 in Appendix A.1 of our revised version). \nPerforming batched updates with a larger S might help reduce correlations; however, a large S, as a major drawback, requires performing ST (S>>1) steps of task model training in order to perform one step of controller update. This yields better per-step convergence, but longer overall training (wallclock) time for the controller to converge. There might exist sweet spots for S where one can achieve both good per-step convergence and short training time, but we skip the search of S and simply use S=1 as it performs well. 
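To make the S=1 scheme described above concrete, here is a minimal sketch of a single controller update. This is our own illustration rather than the authors' released code, and the feature dimensionality, network sizes, clipping range, and the exact placement of baseline subtraction relative to reward clipping are all assumptions:

```python
import torch

# Hypothetical sketch of one S = 1 update: a single sampled schedule of T
# controller decisions yields one reward, which is baseline-corrected and
# clipped before the REINFORCE step. All sizes below are illustrative.
controller = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)
baseline, decay = 0.0, 0.9  # moving-average baseline B

def controller_step(features, reward, clip=1.0):
    """features: list of T status vectors; reward: scalar from the task model."""
    global baseline
    log_probs = []
    for x in features:
        dist = torch.distributions.Categorical(logits=controller(x))
        action = dist.sample()  # which objective/parameter set to update next
        log_probs.append(dist.log_prob(action))
    advantage = max(min(reward - baseline, clip), -clip)
    loss = -advantage * torch.stack(log_probs).sum()  # REINFORCE surrogate
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    baseline = decay * baseline + (1 - decay) * reward  # update moving average
```

In a real run the sampled actions would drive the task model as they are generated, with the reward arriving only afterwards; the sketch only shows why one sampled sequence suffices for one policy-gradient step.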
\nIt is worth noting that some recent literature uses a stochastic estimation of the policy gradient with batch size 1 as well, and reports strong empirical results [1].\n\n[1] Efficient Neural Architecture Search via Parameter Sharing. ICML 2018\n\n>> Comment #4\nWe observe the controller performance on all 4 tasks is insensitive to initialization. A good initialization (e.g. in NMT, equally assigning probabilities to each loss at the start of the training) indeed leads to faster learning, but most experiments with random initializations manage to converge to a good optimum, thanks to \\epsilon-greedy sampling used in training.\n\n>> Comment #5\nThey are the same -- there is a typo leading to confusion in the sentence “...in Figure 1 where we set different \\lambda in l_2 = \\lambda |\\Theta|_2...”, which should be “...in Figure 1 where we set different \\lambda in l_2 = \\lambda |\\Theta|_1...”. We have fixed it in the latest version.\n\n>> Comment #6\nPlease see the last paragraph on page 5. For regression, classification and NMT, we split the data into 5 partitions D_{train}^C, D_{val}^C, D_{train}^T, D_{val}^T, D_{test}. AutoLoss uses D_{train}^C and D_{val}^C to train the controller. Once trained, the controller guides the training of a new task model on the other two partitions D_{train}^T, D_{val}^T. Trained task models are evaluated on D_{test}. Baseline methods use the union of D_{train}^C, D_{val}^C, D_{train}^T, D_{val}^T for training/validation. For GANs, which do not need a validation or test set, we follow the same setting as in [1] for all methods.\n\n[1] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. ICLR 2016.\n\n>> Comment #7\nThanks for pointing this out -- we apologize for misusing “exploding or vanishing gradients” and have revised the paper to be accurate. We simply intended to clip the reward to reduce variance, and found it effectively improved training. \n", "We have fixed the footnote and capitalization problems. Below are replies to other comments.\n\n>> Comment #1\nWe agree vanilla REINFORCE can exhibit high variance. However, as we have elaborated in the text below Eq.2, to reduce the variance and stabilize the training, we have made the following adaptations, referring to previous works [1,2]:\n- Subtract a moving average B (defined in the text) from the reward\n- Clip the final reward to a given range\nWe empirically found the two techniques significantly stabilize the controller training.\nMoreover, AutoLoss is not restricted to REINFORCE, but is open to any off-the-shelf policy optimization method; e.g. for large-scale tasks such as NMT, we introduce PPO to replace REINFORCE, and adjust the reward generation scheme accordingly (see the paragraph “Discussion”). We’ve also revised Appendix A.1 to cover details of how PPO is incorporated. Empirically, with random parameter initialization most experiments manage to converge and give fairly good controllers. \n\nAlmost all main results are averaged over multiple runs, as explicitly indicated in the main text and the table or figure captions (e.g. see captions of Table 1 and Fig.2). See Fig.2 and Fig.3(R), where vertical bars indicate variances. We have also updated Table 1 to show the variance. \n\nWe will release all code and trained models for reproducibility. \n\n>> Comment #2\nWe have provided substantial analysis and visualizations on what AutoLoss has learned in our *initial submission*. 
Below, we summarize them for your reference:\n\n- d-ary regression and MLP classification\n*See sec 5.1, the 3rd paragraph on P6 for analysis, and Table 1 for comparisons to handcrafted schedules*: we observe AutoLoss optimizes L1 whenever needed during the optimization. By contrast, linear-combination objectives optimize both at each step, while handcrafted schedules (e.g. S1-S3) optimize L1 strictly following the given schedule, ignoring the optimization status. We believe AutoLoss manages to detect the potential risk of overfitting using the designed features, and combat it by optimizing L1 only when necessary.\n- GANs\nPer our observation, AutoLoss gives more flexible schedules than manually designed ones. It can determine when to optimize G or D by being aware of the current optimization status (e.g. how G and D are balanced) using its parametric controller.\n- NMT\n*See sec 5.1, the 3rd paragraph on P7 and Fig.3(M)*: we have explicitly visualized in Fig.3(M) the softmax output of a learned controller and explain in the text: "...the controller meta-learns to up-weight the target NMT objective at a later phase…resemble the "fine-tuning the target task" strategy...".\n\n>> Comment #3\nWe experimented with S>1 and found the improvement marginal. However, a large S requires more task model training steps to perform one PG (or PPO) update, meaning longer overall wallclock time for the controller to converge. We hence use S=1 as it performs satisfactorily. Note that some recent meta-learning literature uses policy gradient with batch size 1 and reports strong empirical results [3].\n\n>> Comment #4\nWe’d like to clarify that we have *not* claimed that "AutoLoss can resolve mode collapse in GANs". AutoLoss improves the performance of GANs by enabling an adaptive optimization schedule rather than a pre-fixed one. Our point is better and faster convergence of the model training. In the GAN experiments we *qualitatively* observed that the generated images are of satisfactory quality and exhibit no mode collapse. But we never claimed we aim to or can resolve mode collapse.\n\n>> Comment #5\nWe respectfully disagree with this comment. The NMT experiments aim to verify that AutoLoss can guide the multi-task optimization toward faster and better convergence on the target task; i.e. our interest is to see how the optimization goes instead of how the MT performs. Held-out PPL is the direct indicator of the quality of convergence, while BLEU evaluates the MT performance. Hence we believe PPL suffices as a metric to evaluate the performance of AutoLoss.\n\n>> Comment #6\nWe acknowledge that there may exist DCGAN implementations that achieve higher IS on CIFAR-10, but note the following facts:\n- The link verifies in a table that the best official IS (reported in the literature) is 6.16 (the number we report).\n- The self-implemented DCGAN 1:1 baseline used in our paper (see Fig.4(c)) achieves an IS=6.7, higher than 6.16.\n- Still, AutoLoss-guided DCGAN achieves IS=7, higher than the 6.16 reported in the literature, our own implementation, and the result from your link.\n\nThanks again for mentioning spectral norm. However, these techniques are *completely orthogonal* to the scope of this paper, where we focus on whether AutoLoss can improve the convergence instead of resolving mode collapse. \n[1] Device Placement Optimization with Reinforcement Learning. ICML’17\n[2] Neural Optimizer Search with Reinforcement Learning. ICML’17\n[3] Efficient Neural Architecture Search via Parameter Sharing. 
ICML’18", "Thank you for the valuable and encouraging feedback! Below, please see our replies.\n\n>> What are the key limitations of AutoLoss? Did we observe some undesirable behavior of the learned optimization schedule, especially when transferring between different datasets or different models? More discussion on these questions can be very helpful to further understand the proposed method. \n\nThese are indeed good questions. We list several limitations we discovered during the development of AutoLoss:\n- Bounded transferability\nWe observe AutoLoss has bounded transferability -- while we successfully transfer a controller across different CNNs, we can hardly transfer a controller trained for CNNs to RNNs. This is slightly different from some related AutoML works, such as [1], where auto-learned neural optimizers are able to produce decent results even on different families of neural networks. We hypothesize that the optimization behaviors or trajectories of CNNs and RNNs are very different, hence the function mappings from status features to actions are different. We leave it as future work to study where the clear boundary is.\n- Designing white-box features to capture the optimization status\nAnother limitation of AutoLoss is the necessity of designing the feature vector X, which might require some prior knowledge of the task of interest, such as being aware of a rough range of the possible values of validation metrics, etc. In fact, we initially experimented with directly feeding black-box features (e.g. raw vectors of parameters, gradients, momentum, etc.) into the controller, but found they empirically contributed little to the prediction, and sometimes hindered transferability (as different models have their parameter or gradient values at different scales).\n- Non-differentiable optimization\nMeta-learning discrete schedules involves non-differentiable optimization, which is by nature difficult. Therefore, a lot of techniques in addition to vanilla REINFORCE are required to stabilize the training. Please also see our answer to the next question for more details.\nAs potential future work, we will seek continuous representations of the update schedules and end-to-end training methodologies, as has arisen in recent works [2].\n\nWe have added the above discussion to the latest version as Appendix A.9.\n\n>> As the problem is formulated as an RL problem, which is well-known for its difficulty in training, did we encounter similar issues? More details on the implementation can be very helpful for reproducibility. \n>> Any plan for open source?\n\nWe acknowledge the difficulties of training controllers using vanilla REINFORCE. During our development of the training algorithm (see Eq.2, the "discussion" section in Sec.4, and Appendix A.1), we found the vanilla form of the REINFORCE algorithm leads to unstable training. We therefore have made many improvements and adaptations, either by referring to existing literature or depending on the specific tasks. They include:\n- Subtracting from the reward a baseline term, which is a moving average (see section 3, Eq.2)\n- Reward clipping (see section 3, under Eq.2)\n- Using different values of T for different tasks (see "discussion" in section 4)\n- Using improved training algorithms (e.g. PPO) for more challenging tasks, and slightly adjusting the reward generation schemes (see "discussion" in section 4, and Appendix A.1).\n\nWe have also revised the submission to disclose more details on how we make these improvements. 
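For readers unfamiliar with the clipped surrogate that PPO optimizes (the improved algorithm mentioned in the list above; the paper's own variant lives in its Appendix A.1), here is the generic form from Schulman et al. (2017). This is a standard textbook sketch, not a detail taken from the paper:

```python
import torch

def ppo_clipped_objective(new_log_prob, old_log_prob, advantage, eps=0.2):
    """Generic PPO clipped surrogate (to be maximized) for a controller update.

    new_log_prob / old_log_prob: log-probabilities of the sampled schedule
    under the current and behavior policies; advantage: baseline-corrected
    reward. eps = 0.2 is the standard clipping range.
    """
    ratio = torch.exp(new_log_prob - old_log_prob)
    return torch.min(ratio * advantage,
                     torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage)
```

Compared with vanilla REINFORCE, clipping the probability ratio bounds how far a single reward can push the policy, which is one reason such updates tend to be more stable on larger tasks.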
We will make all code and models trained in this paper available for reproducibility.\n\n[1] Neural optimizer search with reinforcement learning. ICML 2017.\n[2] DARTS: Differentiable Architecture Search. Arxiv 1806.09055.\n", "Summary: This paper proposes a meta-learning solution for problems involving optimizing multiple loss values. They use a simple (small MLP), discrete, stochastic controller to control applications of updates among a finite number of different update procedures. This controller is a function of heuristic features derived from the optimization problem, and is optimized using policy gradient either exactly in toy settings or in an online / truncated manner on larger problems. They present results on 4 settings: quadratic regression, MLP classification, GAN, and multi-task NMT. They show promising performance on a number of tasks as well as show the controller's ability to generalize to novel tasks.\n\nThis is an interesting method and tackles an impactful problem. The setup and formulation (using PG to meta-optimize a hyperparameter controller) is not extremely novel (there has been similar work learning hyperparameter controllers), but the structure, the problem domain, and applications are. The experimental results are thorough, and provide compelling proof that this method works as well as exploration as to why the method works (analyzing output softmax). Additionally the "transfer to different models" experiment is compelling.\n\nComments vaguely in order of importance:\n1. I am a little surprised that this training strategy works. In the online setting for larger scale problems, your gradients are highly correlated and highly biased. As far as I can tell, you are performing something akin to truncated backprop through time with policy gradients. The bias introduced via this truncation has been studied in great depth in [3] and shown to be harmful. As of now, the greedy nature of the algorithm is hidden across a number of sections (not introduced when presenting the main algorithm). Some comment as to this bias -- or even suggesting that it might exist -- would be useful. As of now, it is implied that the gradient estimator is unbiased.\n\n2. Second, even ignoring this bias, the resulting gradients are heavily correlated. Algorithm 1 shows no sign of performing batched updates on \\phi or anything to remove these correlations. Despite these concerns, your results seem solid. Nevertheless, further understanding as to this would be useful.\n\n3. The structure of the meta-training loop was unclear to me. Algorithm 1 states S=1 for all tasks, while in the body -- the overhead section -- you suggest multiple trainings are required (S>1?).\n\n4. If the appendix is correct and learning is done entirely online, I believe the initialization of the meta-parameters would matter greatly -- if the default task performed poorly with a uniform distribution for sampling losses, performance would be horrible. This seems like a limitation of the method if this is the case.\n\n5. Clarity: The first half of this paper was easy to follow and clear. The experimental section had a couple of areas that left me confused. In particular:\n5.1/Figure 1: I think there is an overloaded use of lambda? My understanding as written is that lambda is both used in the grid search (Table 1) to find the best loss l_1 and then used in a second location, as a modification of l_2, completely separate from the grid search?\n\n6. 
Validation data / test sets: Throughout this work, it is unclear what / how validation is performed. It seems you are performing controller optimization (optimizing phi) on the validation set loss, while also reporting scores on this validation set. This should most likely instead be a 3rd dataset. You have 3 datasets worth of data for the regression task (it is still unclear, however, what is being used for evaluation), but it doesn't look like this is addressed in the larger scale experiments at all. Given the low meta-parameter count of the controller, I don't think this represents a huge risk, and baselines also suffer from this issue (hyperparameter search on the validation set) so I expect results to be similar. \n\n7. Page 4: "Whenever applicable, the final reward $$ is clipped to a given range to avoid exploding or vanishing gradients". It is unclear to me how this will avoid these. In particular, the "exploding" will come from the \\nabla log p term, not from the reward (unless you have reason to believe the rewards will grow exponentially). Additionally, it is unclear how you will have vanishing rewards given the structure of the learned controller. This clipping will also introduce bias -- this is not discussed -- and will probably lower variance. This is a trade-off made in a number of RL papers so it seems reasonable, but not for this reason.\n\n8. "Beyond fixed schedules, automatically adjusting the training of G and D remains untacked" -- this is not 100% true. While not a published paper, some early GAN work [2] does contain a dynamic schedule, but you are correct that this family of methods is not commonplace in modern GAN research.\n\n9. Related work: While not exactly the same setting, I think [1] is worth looking at. This is quite similar, causing me pause at this comment: "first framework that tries to learn the optimization schedule in a data-driven way". Like this work, they also learn a controller over hyper-parameters (in their case the learning rate) with RL, using hand-designed features.\n\n10. There seem to be a fair number of heuristic choices throughout. Why is IS squared in the reward for GAN training, for example? Why is the scaling term required on all rewards? Having some guiding idea or theory for these choices, or a rationale, would be appreciated.\n\n11. Why is PPO introduced? In Algorithm 1, it is unclear how PPO would fit in. More details or an alternative algorithm in the appendix would be useful. Why wasn't PPO used on all larger scale models? Does the training / performance of the meta-optimizer (policy gradient vs PPO) matter? I would expect it would. This detail is not discussed in this paper, and some details -- such as the learning rate for the meta-optimizer -- I was unable to find.\n\n12. "It is worth noting that all GAN K:1 baselines perform worse than the rest and are skipped in Figure 2, echoing statements (Arjovsky, Gulrajani, Deng) that more updates of G than D might be preferable in GAN training." I disagree with this statement. The WGAN framework is built upon a loss that can be optimized, and should be optimized, until convergence (the discriminator loss is non-saturating) -- not the reverse (more G steps than D steps) as suggested here. Arjovsky does discuss issues with training D to convergence, but I don't believe there is any exploration into multiple G steps per D step as a solution.\n\n13. Reproducibility seems like it would be hard. 
There are a few parameters (meta-learning rates, meta-optimizers) that I could not find, for example, and there is a lot of complexity.\n\n14. Claims in the paper seem a little bold / overstated. The inception gain is marginal compared to previous methods, and training is slower than for other baselines. This is also true of the NMT section -- there, the best baseline model is not even given equal training time! There are highly positive points here, such as requiring less hyperparameter search / fewer model evaluations to find performant models.\n\n15. Figure 4a. Consider reformatting the data (maybe a histogram of differences? Or a scatter plot). The current representation is difficult to read / parse.\n\nTypos:\npage 2, "objective term. on GANs, the AutoLoss": capital O is needed.\nPage 3: in the Parameter Learning heading, the period is not bolded.\n\n[1] Learning step size controllers for robust neural network training. Christian Daniel et al.\n[2] http://torch.ch/blog/2015/11/13/gan.html\n[3] Understanding Short-Horizon Bias in Stochastic Meta-Optimization, Wu et al.\n\nGiven the positives, and in spite of the negatives, I would recommend accepting this paper, as it discusses an interesting and novel approach to controlling multiple loss values.", "This paper addresses a novel variant of AutoML: to automatically learn and generate optimization schedules for iterative alternate optimization problems. The problem is formulated as an RL problem, and comprehensive experiments on four different applications have demonstrated that the optimization schedule produced can guide the task model to achieve better quality of convergence and better sample efficiency, and that the trained controller is transferable between datasets and models. Overall, the writing is quite clear, the problem is interesting and important, and the results are promising. \n\nSome suggestions:\n\n1. What are the key limitations of AutoLoss? Did we observe some undesirable behavior of the learned optimization schedule, especially when transferring between different datasets or different models? More discussion on these questions can be very helpful to further understand the proposed method. \n\n2. As the problem is formulated as an RL problem, which is well-known for its difficulty in training, did we encounter similar issues? More details on the implementation can be very helpful for reproducibility. \n\n3. Any plan for open source? " ]
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "SkxBNIaEeV", "iclr_2019_BJgK6iA5KX", "SJl_vpqYRQ", "rkgn_XcxRm", "iclr_2019_BJgK6iA5KX", "ByeGCYgKTX", "rylM2EzFpX", "iclr_2019_BJgK6iA5KX", "rked778JaQ", "rked778JaQ", "BkgdU-e16m", "HklV1vNp2m", "iclr_2019_BJgK6iA5KX", "iclr_2019_BJgK6iA5KX" ]
iclr_2019_BJgLg3R9KQ
Learning what and where to attend
Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived "top-down" attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than "bottom-up" saliency features for rapid image categorization. As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers.
accepted-poster-papers
This paper presents a large-scale annotation of human-derived attention maps for the ImageNet dataset. This annotation can be used for training more accurate and more interpretable attention models (deep neural networks) for object recognition. All reviewers and the AC agree that this work is clearly of interest to ICLR and that extensive empirical evaluations show clear advantages of the proposed approach in terms of improved classification accuracy. In the initial review, R3 put this paper below the acceptance bar, requesting a major revision of the manuscript addressing three important weaknesses: (1) no analysis of interpretability; (2) no details about the statistical analysis; (3) design choices of the experiments are not motivated. I am pleased to report that, based on the author response, the reviewer was convinced that the most crucial concerns have been addressed in the revision. R3 subsequently increased the assigned score to 6. As a result, the paper is no longer in the borderline bucket. The specific recommendation for the authors is therefore to further revise the paper, taking into account a better split of the material between the main paper and its appendix. The additional experiments conducted during the rebuttal (on interpretability) would be better included in the main text, as would the explanation regarding the statistical analysis.
train
[ "r1eVYyoh3Q", "H1gDHhddR7", "Sklu7PO_C7", "Bke5rI__CX", "ryg0WL__07", "HyeS0BduCm", "HJxxoO1ram", "S1xOmfJSTQ", "rylRhZkSam", "S1gAO-ySTQ", "S1gU756h3X", "HygHbsciim" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nSUMMARY\n\nThis paper argues that most recent gains in visual recognition are due to the use of visual attention mechanisms in deep convolutional networks (DCNs). According to the authors; the networks learn where to focus through a weak form of supervision based on image class labels. This paper introduces a data set that complements ImageNet with circa 500,000 human-derived attention maps, obtained through a large-scale online experiment called ClickMe. These attention maps can be used in conjunction with DCNs to add a human-in-the-loop feature that significantly improves accuracy.\n\nREVIEW\n\nThis paper is clearly within scope of the ICLR conference and addresses a relevant and challenging problem: that of directing the learning process in visual recognition tasks to focus on interesting or useful regions. This is achieved by leveraging a human-in-the-loop approach.\n\nThe paper does a fair job in motivating the research problem and describing what has been done so far in the literature to address the problem. The proposed architecture and the data collection online experiment are also described to a sufficient extent.\n\nIn my view, the main issue with this paper is the reporting of the experiment design and the analysis of the results. Many of the design choices of the experiments are simply listed and not motivated at all. The reader has to accept the design choices without any justification. The results for accuracy are simply listed in a table and some results are indicated as “p<0.01” but the statistical analysis is never described. Interpretability is highlighted in the abstract and introduction as an important feature of the proposed approach but the evaluation of interpretability is limited to a few anecdotes from the authors’ review of the results. The paper does not present a procedure or measure for evaluating interpretability.\n\nOTHER SUGGESTIONS FOR IMPROVEMENT\n\n- The verb “attend” is used in many places where “focus” seems to be more appropriate.\n\n- “we ran a rapid experiment”: what does rapid mean in this context?\n\n- “the proposed GALA architecture is grounded in visual neuroscience” : this and many other statements are only elaborated upon in the appendix. I understand that page limit is always an issue but I think it is important to prioritise this and similar motivations and put at least a basic description in the main body\n\nUPDATE\n\nMy most serious concerns have been addressed in the revised version.\n", "We have uploaded two versions of the revision. (1) The most recent version is the revision. (2) The second-most recent version is a diff between the revision and our original ICLR submission. We hope this will help in evaluating our work.", "In this newest draft we have overhauled explanations and readability of the entire manuscript. We have also fixed the notation issues you raised and included a clearer description of the operations of GALA. We performed another analysis of participant learning on the ClickMe game, as suggested, and found no difference in performance on the first ten versus the second set of ten trials (49.30% vs. 52.20%; this result is now included in the Appendix). Finally, we have removed the “human-in-the-loop” description of GALA training with ClickMe maps. 
We have also changed the title of the manuscript to: “Learning what and where to attend.”", "In this newest draft we have expanded our explanations for experiments and results, detailed all statistical tests that were used, and incorporated a discussion of the computational neuroscience inspiration for GALA into the main text. We have also included a new analysis in which we quantify attention interpretability on images from Microsoft COCO, and emphasized our quantification of interpretability on ClickMe images.", "In this newest draft we have reworked our descriptions of methods, changed our model schematic figure, and detailed all statistical tests. Thank you for these suggestions!", "We have uploaded a revision of the manuscript that addresses each of the points that we outlined in the meta response below. We would like to draw your attention in particular to a new analysis introduced in this draft, in which we quantified the “zero-shot” model interpretability of the GALA module trained with ClickMe maps on a large set of images from Microsoft COCO with a method inspired by [1]. As we mention in Section 4.4, GALA trained with ClickMe is significantly more interpretable by this metric than GALA trained without ClickMe (significance testing done with randomization tests, as is now described in the manuscript). We have also included Appendix Figure 8, which shows examples of the visual features favored by each model: the difference between the two models is dramatic. In total, we now have quantitative and qualitative evidence that GALA attention is more interpretable when it is co-trained with ClickMe on the ClickMe dataset (it explains a greater fraction of human ClickMe map variability) and on Microsoft COCO (more interpretable attention according to this new analysis). \n\nWe believe this version of the manuscript is greatly improved and we thank you all for your comments. We hope the manuscript now answers any remaining questions or concerns you may have.\n\n[1] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network Dissection: Quantifying Interpretability of Deep Visual Representations. Computer Vision and Pattern Recognition (CVPR), 2017.", "Thank you for the detailed comments and the very thorough review! Below are our responses to your suggestions on improving the paper.\n\n1. We are overhauling Sections 3 and 4 to fix notation issues, improve readability, and clarify the figure. Along these lines and as you suggested, we will include a brief description of the GALA at the beginning of Section 3. The W_expand and W_shrink operations are borrowed from the manuscript of the original Squeeze-and-Excitation [1] module. We will revamp our description of these, which will also incorporate more of the neuroscience motivation. \n\n2. The regularization term forces attention maps in the network to be similar to human feature importance maps. We agree that this is why the maps for different layers in Fig. 4 look similar vs. the attention maps from a GALA trained without such constraints, which are distinct. We felt that the improved interpretability, performance, and similarity to human feature maps that fell out of using this attention supervision justified its use at each layer. We also agree that the right pairing of properly supervised attention with a much shallower network could yield a far more parsimonious architecture for problems like object recognition than the very deep and very powerful ResNets.\n\n3. 
We agree that the image dataset we used to compare ClickMe with Clicktionary maps is far from ideal, and we will note this in the manuscript. However, these were the only images available for such an analysis. Although it is underpowered, this analysis is also consistent with the other results we report about how the feature importance maps derived from these games are highly consistent and stereotyped between participants (section 2).\n\nAlso, you raise a good point about the split-half comparison we use to demonstrate that participants do not learn CNN strategies in ClickMe. However, such a strategy would amount to a sensitivity analysis of the CNN without knowing how much of the image it was looking at: expanded versions of the bubbles placed by human players were used to unveil those regions to the CNN. The average CNN performance of 53.64% in the first half vs. 53.61% in the second half of participants' trials also does not suggest an effective sensitivity analysis. We will perform another analysis of participant performance to see if learning took place within the first ten trials, and report this in the manuscript.\n\n4. This is a good point. How about: “Learning what and where to attend with human feedback”?\n\n[1] Hu J, Shen L, and Sun G. Squeeze-and-excitation networks. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.", "We really appreciate the comments and we are working to correct the issues you raised.\n\nWe have devised an analysis that we hope can address your main critique, which involves measuring the similarity of the attention masks from GALA to object instance annotations using intersection-over-union (IOU), similar to [1]. We would like to note, however, that this is another flavor of an analysis that we already present in the paper, which we believe is an even more direct way of measuring interpretability: the similarity between attention masks and ClickMe maps, which describe visual features important to human observers. Please let us know if you have anything else in mind that would improve our argument for the interpretability of the attention maps from the GALA-ResNet-50 trained with ClickMe.\n\nTo address your other comments, as we detailed to Reviewer 2, we will expand our description of the statistical tests used in the manuscript. We will also improve our justification for the experimental design, including a definition and more context for rapid visual recognition experiments. This experimental design has been used extensively in visual neuroscience (e.g., [2-3]), and we apologize for presenting it without appropriate context and motivation for why we chose it and the kinds of constraints that it places on participants to make visual decisions. Along these lines, we will add a discussion of the neuroscience inspiration of the GALA module to the main text. Finally, we chose the verb “attend” over one like “focus” because of its meaning in neuroscience and how the GALA module works, but will gladly re-evaluate the usage if you can point to where in the manuscript it does not make sense to you.\n\n[1] Bau D, Zhou B, Khosla A, Oliva A, and Torralba A. Network dissection: Quantifying interpretability of deep visual representations. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.\n[2] Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature, 1996.\n[3] Serre T, Oliva A, Poggio T. A feedforward architecture accounts for rapid categorization. 
Proceedings of the National Academy of Sciences, 2006.\n", "Thank you for the review and comments. We are working on fixing the issues that you raised, and believe that correcting them will greatly improve the quality of the manuscript.\n\nWe are fixing the issues with notation, defining the variables that we neglected to define in the original draft, overhauling our model figure, and improving the transitions between sections of the manuscript. We thank you for pointing out that the statistical tests were unclear. We will incorporate the following test descriptions into the manuscript.\n\nFor the behavioral experiment, this involved randomization tests, which compared the performance between the ClickMe and Salicon groups at every “percentage of image revealed by feature source” bin. A null distribution of “no difference between groups” was constructed by randomly switching participants’ group memberships (e.g., a participant who viewed ClickMe-mapped images was called a Salicon viewer instead), and calculating a new difference in accuracies between the two groups. This procedure was repeated 10,000 times, and the proportion of these randomized scores that exceeded the actual observed difference was taken as the p-value. This randomization procedure is a common tool in the biological sciences [1].\n\nA similar procedure was used to derive p-values for the correlations between model features and ClickMe maps. As we mention in the manuscript in our description of calculating the null inter-participant reliability of ClickMe maps: “We also derived a null inter-participant reliability by calculating the correlation of ClickMe maps between two randomly selected players on two randomly selected images. Across 10,000 randomly paired images, the average null correlation was $\rho_r=0.18$, reinforcing the strength of the observed reliability.” The p-values of correlations between model features and ClickMe maps are the proportion of per-image correlation coefficients that are less than this value.\n\n[1] Edgington, E. Randomization tests. The Journal of Psychology: Interdisciplinary and Applied, 1964.\n", "We thank the reviewers for their detailed and constructive comments. In this initial response, we want to acknowledge the raised critiques and present our plan for addressing them. Please let us know if you feel we have omitted anything. We believe that these revisions will greatly improve the manuscript.\n\nTo summarize, the revisions will address the following points:\n\n1. We will clarify and improve the methods section by replacing our model figure, fixing notational issues, explaining our statistical testing procedures, and defining terms noted by the reviewers.\n2. We will improve the flow and organization of the manuscript. This includes moving the computational neuroscience background to the related work, and expanding it.\n3. We will improve our motivation for the experimental design, and take more care to walk the reader through the results as well as the effect of ClickMe-map supervision on attention.\n4. We will include a link to a GitHub repository with TensorFlow code for the model.\n5. We will add a new analysis to quantify how co-training a GALA-ResNet with ClickMe maps increases the interpretability of its attention maps.", "The paper presents a new take on attention in which a large attention dataset is collected (crowdsourced) and used to train a NN (with a new module) in a supervised manner to exploit self-reported human attention. 
The empirical results demonstrate the advantages of this approach.\n\n*Pro*:\n-\tWell-written and relatively easily accessible paper (even for a non-expert in attention like myself)\n-\tWell-designed crowdsourcing experiment leading to a novel dataset (which is linked to a state-of-the-art benchmark)\n-\tAn empirical study demonstrates a clear advantage of using human (attention) supervision in a relevant comparison \n\n*Cons*\n-\tSome notational confusion/uncertainty in sec 3.1 and Fig 3 (perhaps also Sec 4.1): e.g. $\mathbf{M}$ and $L_{clickmaps}$ are undefined in Sec 3.1.\n\n*Significance:* I believe this work would be of general interest to the image community at ICLR as it provides a new high-quality dataset and an attention module for grounding investigations into attention mechanisms for DNNs (and beyond). \n\n*Further comments/questions:*\n-\tThe transition between sec 2 and sec 3 seems abrupt; consider providing a smoother transition. \n-\tFigure 3: reconsider the logical flow in the figure; it took me a while to figure out what was going on (especially the feedback path to U').\n-\tIt would be beneficial to provide some more insight into the statistical tests casually reported (i.e., where did the p-values come from?)\n-\tThe dataset appears to be available online, but will the code for the GALA module also be published?\n\n", "This paper proposes a new approach to use more informative signals (than only class labels), specifically, regions humans deem important on images, to improve deep convolutional neural networks. They collected a large dataset by implementing a game on clickme.ai and showed that using this information results in both i) improved classification accuracy and ii) more interpretable features. \n\nI think this is good work and should be accepted. The main contribution is threefold: i) a publicly available dataset that many researchers can use, ii) a network module to incorporate this human information that might be inserted into many networks to improve performance, and iii) some insights on the effect of such human supervision and the relation between features that humans deem important and those that neural nets deem important. \n\nSome suggestions on how to improve the paper:\n1. I find Sections 3 & 4 hard to track - some missing details and notation issues. Several variables are introduced without detailing the proper dimensions, e.g., the global feature attention vector g (which is shown in the figure actually). The relation between U and u_k isn't clear. Also, it will help to put a one-sentence summary of what this module does at the beginning of Section 3, like the last half-sentence in the caption of Figure 3. I was quite lost until I saw that. Some more intuition is needed on W_expand and W_shrink; maybe moving some of the "neuroscience motivation" paragraph up into the main text will help. Bold letters are used to denote many different things - in Section 4 as a set of layers, in other places a matrix/tensor, and even an operation (F). \n\n2. Is there any explanation of why you add the regularization term to every layer in a network? This setup seems to make it easy to explain what happens in Figure 4. One interesting observation is that after your regularization, the GALA features with ClickMe maps exhibit minimal variation across layers (those shown). But without this supervision the features are highly different. What does this mean? Is this caused entirely by the regularization? 
Or there's something else going on, e.g., this is evidence suggesting that with proper supervision like human attention regions, one might be able to use a much shallower network to achieve the same performance as a very deep one?\n\n3. Using a set of 10 images to compute the correlation between ClickMe and Clicktionary maps isn't ideal - this is even fewer than the number of categories among the images. I'm also not entirely convinced that \"game outcomes from the first and second half are roughly equal\" says much about humans not using a neural net-specific strategy, since you can't rule out the case that they learned to play the game very quickly (in the first 10 of the total 380 rounds). \n\n4. Title - this paper sounds more like \"human feedback\" to me than \"humans-in-the-loop\", because the loop has only 1 iteration: you are collecting feedback from humans but not yet giving anything back to them. Maybe change the title?" ]
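The randomization test described in the author responses above (randomly switch participants' group memberships, recompute the accuracy difference 10,000 times, and report the proportion of resampled differences that exceed the observed one as the p-value) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the accuracy values are made up.

```python
import numpy as np

def randomization_test(acc_a, acc_b, n_resamples=10000, seed=0):
    """One-sided permutation test on the difference in mean accuracy."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(acc_a, float), np.asarray(acc_b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    exceed = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # randomly switch group memberships
        if pooled[:len(a)].mean() - pooled[len(a):].mean() >= observed:
            exceed += 1
    return observed, exceed / n_resamples  # (observed diff, p-value)

# e.g., per-participant accuracies for ClickMe vs. Salicon viewers
diff, p = randomization_test([.82, .79, .85, .81], [.74, .77, .72, .78])
```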
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2019_BJgLg3R9KQ", "HyeS0BduCm", "HygHbsciim", "r1eVYyoh3Q", "S1gU756h3X", "iclr_2019_BJgLg3R9KQ", "HygHbsciim", "r1eVYyoh3Q", "S1gU756h3X", "iclr_2019_BJgLg3R9KQ", "iclr_2019_BJgLg3R9KQ", "iclr_2019_BJgLg3R9KQ" ]
iclr_2019_BJgRDjR9tQ
ROBUST ESTIMATION VIA GENERATIVE ADVERSARIAL NETWORKS
Robust estimation under Huber's ϵ-contamination model has become an important topic in statistics and theoretical computer science. Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning. Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning. This connection opens the door to computing robust estimators using tools developed for training GANs. In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's ϵ-contamination model. Interestingly, the hidden layers of the neural net structure in the discriminator class are shown to be necessary for robust estimation.
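As a concrete illustration of the ε-contamination model the abstract refers to, the sketch below draws samples from (1−ε)P + εQ with P = N(μ, I_p). The particular contamination Q is an arbitrary choice for illustration only; the model places no constraint on Q.

```python
import numpy as np

def sample_huber(n, p, eps, mu, rng):
    """Draw n points from (1 - eps) * N(mu, I_p) + eps * Q."""
    x = rng.normal(loc=mu, scale=1.0, size=(n, p))
    contaminated = rng.random(n) < eps
    # Illustrative contamination Q: a shifted Gaussian cluster.
    x[contaminated] = 5.0 + rng.normal(size=(contaminated.sum(), p))
    return x

rng = np.random.default_rng(0)
data = sample_huber(n=1000, p=100, eps=0.2, mu=np.zeros(100), rng=rng)
```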
accepted-poster-papers
* Strengths This paper presents a very interesting connection between GANs and robust estimation in the presence of corrupted training data. The conceptual ideas are novel and can likely be extended in many further directions. I would not be surprised if this opens up a new line of research. * Weaknesses The paper is poorly written. Due to disagreement among the reviewers and my interest in the topic, I read the paper in detail myself. I think it would be difficult for a non-expert to understand the key ideas and I strongly encourage the authors to carefully revise the paper to reach a broader audience and highlight the key insights. Additionally, the experiments are only on toy data. * Discussion One of the reviewers was concerned about the lack of efficiency guarantees for the proposed algorithm (indeed, the algorithm requires training GANs, which are currently beyond the reach of theory and finicky in practice). That reviewer points to the fact that most papers in the robustness literature are concerned with computational efficiency and is concerned that ignoring this sidesteps one of the key challenges. The reviewer is also concerned about the restriction to parametric or nearly-parametric families (e.g. Gaussians and elliptical distributions). Other reviewers were more positive and did not see these as major issues. * Decision In my opinion, the lack of efficiency guarantees is not a huge issue, as the primary contribution of the paper is pointing out a non-obvious conceptual connection between two literatures. The restriction to parametric families is more concerning, but it seems possible this could be removed with further developments. The main reason for accepting the paper (despite concerns about the writing) is the importance of the conceptual connection. I think this connection is likely to lead to a new line of research and would like to get it out there as soon as possible. * Comments Despite the accept decision, I again urge the authors to improve the quality of exposition to ensure that a large audience can appreciate the ideas.
test
[ "rkx90sfIJ4", "rklmCB5rk4", "rkef55Kc0Q", "SyeWL0ScAm", "B1xPE0H507", "rkx0ascORX", "SJgijaKR6Q", "S1lJt6YR6X", "ryeV9nKRpQ", "ByluFd5mpm", "rkxfaWqihQ", "B1e-NIOq3Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the question. \n\nYes, the statement is a bit confusing. In the formulation $\\min_Q\\max_{\\tilde{Q}}$, notice that $\\min_Q$ is before $\\max_{\\tilde{Q}}$. Thus, the class that we maximize over $\\tilde{Q}$ is allowed to depend on $Q$. To be specific, for example in location estimation (Proposition 2.1 (1)), we should regard the definition $\\tilde{\\mathcal{Q}}_{\\eta,r}=\\{N(\\tilde{\\eta},I_p): \\|\\tilde{\\eta}-\\eta\\|\\leq r\\}$, as a class that depends on a given $\\eta$ and a given $r$. Then, we define $\\mathcal{Q}=\\{N(\\eta,I_p): \\eta\\in\\mathbb{R}^p\\}$. In this way, we have $\\min_{\\eta}\\max_{\\tilde{\\eta}: \\|\\tilde{\\eta}-\\eta\\leq r\\|}$, which means we first maximize $\\tilde{\\eta}$ near $\\eta$, and then minimize over $\\eta$. This is exactly equivalent to Equation (20) in Appendix C.", "Proposition 2.1 doesn't type-check for me: The family \\tilde{Q} references an unbound variable \\eta (as far as I can tell \\eta is used as a temporary variable in the definition of Q but does not exist outside of that). Can you please clarify the statement of the proposition?", "Thanks for your response. The result of elliptical distributions is interesting. ", "(Part 1 of this comment is shown below this one.) Now we give a specific response to each of your comments. \n\n- \"TV-GAN is theoretically optimal, but it does not work in practice when the contamination distribution is not close to the true model\" I find this comment a bit puzzling. It is true that if one could truly optimize the TV-GAN objective the solution would recover the ground truth, but the algorithm presented does not do this, as the TV-GAN algorithm runs some greedy first order method to attempt to approximate this. However, the authors experiments demonstrate that the TV-GAN algorithm does not always converge, and as a result, TV-GAN is far from theoretically optimal.\n\nWhen we say “theoretical”, we mean for the global optimum. By “in practice”, we mean its numerical performance in the experiments.\n\n- In regards to this comment as well as their response to Reviewer 3, I am a bit confused. I do not see why any of their derivations should hold for anything beyond specifically Gaussian distributions. The authors make a lot of claims about the nice properties of JS-GAN which I do not believe they can support (see my comment at the bottom).\n\nThese claims are all for the global optimum. For example, Equation (16) in the manuscript.\n\n- If indeed the algorithm gets the same error rate in the presence of stronger adversaries, then it seems extremely unlikely that the algorithm can be made algorithmic. This is because there are strong SQ lower bounds against getting O(eps) error for estimating the mean of a Gaussian under these stronger adversaries [6], and it is easy to check that the sorts of SGD operations that the GAN methods use fall into this class of algorithms.\n\nWe have realized the paper [6], and your comment is true that this rules out SGD on GAN under the strong contamination model. Our guarantee is only for the global optimum and it does not imply anything on the computational complexity. As we have discussed before, it may be possible to prove convergence of GAN for robust estimation using the techniques in [4,5], and we do think in doing that, further assumptions on the contamination must be necessary.\n\n[1] Kevin A Lai, Anup B Rao, and Santosh Vempala. Agnostic estimation of mean and covariance. 
In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pp. 665–674. IEEE, 2016.\n\n[2] Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Being robust (in high dimensions) can be practical. arXiv:1703.00893, 2017.\n\n[3] Yu Bai, Tengyu Ma, and Andrej Risteski. Approximability of Discriminators Implies Diversity in GANs. arXiv:1806.10586, 2018.\n\n[4] A. Cherukuri, B. Gharesifard, and J. Cortes. Saddle-point dynamics: conditions for asymptotic stability of saddle points. SIAM, 2017.\n\n[5] M. Heusel, H. Ramsauer, T. Unterthiner, and B. Nessler. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. NIPS, 2017.\n\n[6] I. Diakonikolas, D. Kane, A. Stewart. Statistical Query Lower Bounds for Robust Estimation of High-dimensional Gaussians and Gaussian Mixtures. In FOCS 2017.\n", "Thank you again for your additional comments. Before giving a response to each of your specific comments, please allow us to clarify our main contributions:\n\nThe paper’s main contribution is to establish a connection between the framework of GANs and the framework of various data depth functions. We believe the connection is important because GAN is important in the deep learning literature and depth functions are important to robust statistics. The connection makes it possible to use tools in one area to solve problems in the other one.\n\nWe understand and fully agree with your assessment of optimal estimation under the contamination model. It is very clear that in terms of provable algorithms that achieve optimal estimation rates, the approach by GANs may suffer from computational intractability in the worst case compared to [1, 2]. Then what is the point of studying an old problem using such a new method?\n\nHowever, the connection and the framework that we build may provide new insights to both the deep learning and robust statistics areas: \n\n(i) From the deep learning perspective, it is important to understand how to design the discriminator class to optimally learn a parameter of interest or a generative process using GAN. The paper [3] is such an example for Wasserstein-GAN. The paper [3] does not give a specific algorithm to optimize W-GAN, but even the property of the objective function is already very interesting. Our manuscript is the first to study how to use GAN in the robust setting, and probably even the first to theoretically study how to design the discriminator class for JS-GAN towards statistical optimality.\n\n(ii) From the robust statistics perspective, there is not just Tukey depth, but various depth functions that solve all kinds of robust estimation problems. Our connection might not give a provably polynomial algorithm, but it opens the door to using deep learning tools to optimize the corresponding JS-GAN. Previously, the state of the art in using depth functions only worked for data of at most 10 dimensions. Now we can handle 5000 dimensions without a problem. This is an improvement. Even for just Tukey depth, we show that the global optima of Tukey depth and JS-GAN both achieve information-theoretic optimality for elliptical distributions. This generality is not met by [1, 2]. (To be fair, [1, 2] estimate the mean, while depth and JS-GAN estimate the center, so they are different.)\n\nThere is little theory on convergence and computational complexity of GAN, but this does not prevent it from being so useful in learning complex distributions of images. 
If people use GAN to learn image distributions in practice, why not also use that for robustly learning model parameters? We are not trying to overly manipulate this connection between GAN and depth functions. It does work well in our extensive experiments and the neural net structures are not hard to tune.\n\nEstablishing some convergence theory of GAN will be a fascinating and important topic. However, this is very hard given the current techniques that we have. The state-of-the-art works in this area [4,5] analyze the dynamics of the corresponding ODE, and the key lies in the construction of a Lyapunov function. This is one of the future projects that we consider, and robust estimation will be a good place to start, given the connection established in this submission.\n\nLet us clarify that the theorems in the submission are all proved for the global minimum, and we make this clear in the paper. We did not claim anything about the alternating stochastic gradient algorithm except showing its good numerical results.\n\nOur response may be a bit aggressive, and we apologize for that. Please understand our enthusiasm behind the work.", "[edit: forgot to add reference]\n\n- \"TV-GAN is theoretically optimal, but it does not work in practice when the contamination distribution is not close to the true model\" I find this comment a bit puzzling. It is true that if one could truly optimize the TV-GAN objective the solution would recover the ground truth, but the algorithm presented does not do this, as the TV-GAN algorithm runs some greedy first-order method to attempt to approximate this. However, the authors' experiments demonstrate that the TV-GAN algorithm does not always converge, and as a result, TV-GAN is far from theoretically optimal.\n\n- In regards to this comment as well as their response to Reviewer 3, I am a bit confused. I do not see why any of their derivations should hold for anything beyond specifically Gaussian distributions. The authors make a lot of claims about the nice properties of JS-GAN which I do not believe they can support (see my comment at the bottom).\n\n- If indeed the algorithm gets the same error rate in the presence of stronger adversaries, then it seems extremely unlikely that the approach can be made algorithmic. This is because there are strong SQ lower bounds against getting O(eps) error for estimating the mean of a Gaussian under these stronger adversaries [1], and it is easy to check that the sorts of SGD operations that the GAN methods use fall into this class of algorithms.\n\nGenerally, it seems to me that the authors are conflating two things, which I think is potentially dangerous. There is the true objective that the authors write down for something such as TV-GAN or JS-GAN, and then there is the actual output of the algorithm based on training via SGD. While I agree that the actual objective has very nice properties (as these are more or less classical statistical objectives), I am very unconvinced that the output of the algorithm has any of these nice properties.\n\n[1] I. Diakonikolas, D. Kane, A. Stewart. Statistical Query Lower Bounds for Robust Estimation of High-dimensional Gaussians and Gaussian Mixtures. In FOCS 2017.\n", "Thank you for your comment. \n\nYour major question is whether or not the approach can be used to solve robust estimation problems in more general settings. The answer is yes. 
Even though the submission only considers estimating the Gaussian mean, the JS-GAN also works for robust estimation of the location vector of a general elliptical distribution. This includes the multivariate Cauchy distribution, where the mean does not even exist. As a modification, we only need to change the generator class in the JS-GAN from Gaussian to elliptical. There is no need to change the discriminator class. The estimator is minimax optimal under the general elliptical family. Our numerical results demonstrate the good performance of the estimator under multivariate Cauchy data.\n\nThe revised manuscript is uploaded with changes highlighted in red.", "Thank you for your comments. We give a response to each of your comments in Cons and Questions.\n\nCons:\n\n- We agree that we do not have any convergence guarantee. This is indeed an important problem we hope to address in future work. In this work, we focus on the connection between the depth function and GAN. Compared with the existing polynomial-time methods on robust estimation, this framework is more general in developing robust estimation methods for problems other than mean estimation. For example, the problem of robust covariance matrix estimation and robust regression can be studied within the same framework given the connections to regression depth and matrix depth. Another contribution we would like to emphasize is the study on the effect of the discriminator class, which is of its own importance in understanding GAN. For example, we show that a one-layer net does not work for robust mean estimation using JS-GAN. One has to use a two-layer net.\n- We agree with this comment. JS-GAN is the one that we recommend in the paper. TV-GAN is theoretically optimal, but it does not work in practice when the contamination distribution is not close to the true model, and this is reflected in our numerical results. We somehow need TV-GAN to serve as a connection between depth functions and other f-GANs, and we also need to show its numerical results to convince readers that TV-GAN is not a good choice in practice.\n- This is a very important comment. Tukey’s median is attractive not only because it achieves the minimax rate under the contamination model, but also because of the following four properties: 1). It has a clean objective function that allows easy-to-understand extensions to other problems (regression depth and matrix depth). 2). It does not require the knowledge of the contamination proportion \\epsilon. 3). It is adaptive to the unknown covariance structure. 4). It is adaptive and optimal for location estimation under general elliptical distributions. These four properties distinguish Tukey’s median from the existing polynomial-time methods in the literature. The 4th property is especially important, which is a fundamental difference between Tukey’s median and the robust mean estimators in the literature. The existing methods estimate the population mean, while Tukey’s median estimates the population median. The two can be different for many multivariate distributions. In particular, for multivariate Cauchy, there is no mean, but Tukey’s median is still able to achieve the minimax rate under the contamination model of estimating the Cauchy location. The proposed estimator JS-GAN is indeed adaptive to the general class of elliptical distributions, and the new theorem will be included in the revised manuscript. In fact, we only need to change the generator class from Gaussian to the class of elliptical distributions. 
There is no need to change the discriminator class. Our numerical results also show that if the data is generated from heavy-tailed elliptical distributions such as Cauchy, JS-GAN works very well, but dimension halving and iterative filtering do not work as well as our method, because these methods are designed only for robust mean estimation, which is for a different purpose. With this revision, JS-GAN also shares the four properties of Tukey’s median, and is computationally much better than Tukey’s median. We agree with you that we use the Scheffe set between two Gaussians, which is a half-plane, to derive TV-GAN. However, for JS-GAN, the Scheffe set, which can be regarded as a one-layer neural net, does not work (see discussion after Proposition 3.1). One has to use two-layer neural nets, which is not the Scheffe set between two Gaussians anymore. The overall connection between GAN and depth functions is most clear in a Gaussian framework, but the derived estimator works for general elliptical distributions.\n\nQuestions:\n\n- The computational cost is comparable to, but slower than, both dimension halving and iterative filtering. This is because training a two-layer net is a harder optimization problem. The good news is that the plot showing the relation between dimension and computational time is approximately linear, so the method is scalable. Previously, Tukey’s median never works when the dimension exceeds 10, but now we can compute JS-GAN, which shares the good properties of Tukey’s median, in thousands of dimensions.\n- Yes, we will get the same error rate. This is because for TV(P_1,P_2)<\\epsilon, there exist Q_1 and Q_2, such that P_1=P_2-\\epsilon Q_1 + \\epsilon Q_2. Use this fact, and the proof will go through easily. The new Theorem for elliptical distributions is now proved under strong contamination.\n\nThe revised manuscript is uploaded with changes highlighted in red.", "Thank you for your comments. The response to each point is listed below:\n- Here, Tau, \\mathcal{Q} and \\tilde{\\mathcal{Q}} are arbitrary function classes. They will only be specified in specific problems such as mean estimation and covariance matrix estimation.\n- We agree. It should be f^*.\n- We agree. This is a typo.\n- In (12), D(x) is the same as T(x) in (4). The reason we use a new notation is that for JS-GAN, log D(x) in (14) is T(x) in (4). We will make a clarification in the revision.\n- The constants C,C’ do not depend on c anymore, as long as c is smaller than some absolute number, say c<1/100.\n- Theoretically speaking, TV-GAN should be the best, because of its close connection to depth-based estimators. The problem with TV-GAN is its optimization property, which is illustrated in Figure 1. Whenever the contamination distribution is not close to the true model, TV-GAN suffers from this problem, and then it is outperformed by JS-GAN.\n- JS-GAN does not have the optimization difficulty that TV-GAN does. Moreover, we prove that JS-GAN is minimax optimal, and therefore, it has stable performance and it is the one that we recommend.\n\nThe revised manuscript is uploaded with changes highlighted in red.", "This paper considers the robust estimation problem under Huber's \\epsilon-contamination model. This problem is a hot topic in the theoretical statistics and theoretical computer science communities in recent years. From the theoretical statistics community, the main approach is through depth functions. Solving the robust estimation problem can be reduced to solving a min-max problem. 
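For concreteness, the canonical instance of such a min-max problem is Tukey's median; a standard formulation (stated here for reference, not quoted from the paper) is

```latex
\hat{\eta} \;=\; \operatorname*{argmax}_{\eta \in \mathbb{R}^p} \;
\inf_{\|u\|=1} \; \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\{\, u^\top (X_i - \eta) \ge 0 \,\},
```

i.e., the point maximizing, over all halfspaces through it, the smallest empirical mass on one side.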
While the formulation is clean and can achieve the optimal statistical rate, solving the min-max problem is computationally intractable in general. On the other hand, approaches from the TCS community are more involved and sometimes cannot achieve the optimal statistical rate (especially for general distributions). \n\nThis paper tries to make the approach from the theoretical statistics community computationally tractable. This paper builds an interesting connection between f-GAN and depth functions. Importantly, the authors show that by carefully choosing the discriminator's neural network architecture and constraining the norms of the weight matrices, the generator achieves the optimal rates. This is an interesting theoretical discovery.\n\nMy major question is whether this approach can be used to solve robust estimation problems in more general settings. For example, we want to solve the robust mean estimation problem and the only assumption on P is that it is sub-Gaussian. Is it possible to design a generator-discriminator pair to solve this problem? Theorems in this paper only focus on the Gaussian case. \n\n\nOverall, I like this paper. This paper provides a new angle toward a classical statistical problem. The computational issue has not been resolved yet. However, given recent progress from optimization in deep learning, it is quite possible that the optimization problem in this paper can be solved (approximately). Therefore, I recommend accepting. \n\n\n", "The paper considers the problem of robust high dimensional estimation in Huber’s contamination model. The algorithm is given samples from a distribution (1 - eps) * P + eps * Q, where P is a “nice” distribution (e.g. a Gaussian), eps is the fraction of contaminated points, and Q is some unconstrained noise distribution. The goal is then to estimate parameters of P as well as possible, given this noise. The settings they primarily consider in this paper are when P is a Gaussian with unknown mean and identity covariance, or when it is a Gaussian with unknown covariance. Classical estimators such as Tukey depth or matrix depth for these problems achieve optimal minimax rates, but are computationally expensive. However, recent work [1, 2] proposes efficient estimators for this problem that (nearly) achieve these rates.\n\nThis paper considers a different approach to this problem. They observe that in the case when P is a Gaussian, these classical depth functions (or minor variations thereof) can be written as the asymptotic limits of certain types of GANs. They then demonstrate that for specific choices for the architecture and regularization of the discriminator, the global optima of this GAN objective achieve minimax optimal error rates in Huber’s contamination model. Unfortunately, they do not prove that their algorithm achieves these global minima. As a result they do not have any provable guarantees for their algorithms. However, they show experimentally that against many choices of noise distribution, their algorithms obtain good error, both for mean estimation and covariance estimation (at least, the JS-GAN seems to consistently succeed; they acknowledge that the TV-GAN seems to be unstable in certain regimes).\n\nPros: \n\n- I think the question of finding algorithmic equivalents of the Tukey median is a very interesting question, and this is an interesting attempt.\n- I did not replicate their experiments on GANs, but the experimental numbers seem promising. 
However, I have some mixed feelings about this (see below).\n\nCons:\n\n- A clear disadvantage of the approach to prior algorithmic work is that the algorithms proposed in the paper do not have provable guarantees. For settings such as secure machine learning, the lack of such guarantees is problematic. Given that previous works give efficient (i.e. practical) algorithms for these problems with provable guarantees, I am unclear how much impact this will have in practice.\n\n- Given that TV-GAN is known to fail (as shown in Table 6), it is unclear how useful the numbers for it are in Table 1. Without these numbers, it then appears that JS-GAN and the filtering algorithm often achieve comparable results, although it is very interesting that JS-GAN is consistently slightly better.\n\n- I feel that the authors fall short of their goal to make a good algorithmic analog of these depth-based estimators. This is a subtle but important point, so let me justify this. As the authors explain, the major advantage of such estimators would be that they are model-free: they should give robustness for a number of settings, not just Gaussians, but also elliptical distributions, sub-gaussian distributions, etc. However, the correspondence that the authors derive to their GAN formulation of depth heavily leverages the Gaussianity of the underlying distribution. Specifically, it leverages the fact that the Scheffe set between two Gaussians is a half-plane, which clearly fails for more general distributions. As a result, it appears to me that this variational formulation of depth succeeds only in a very model-specific setting. As a result, from a theoretical perspective it is unclear what advantage this formulation has. \n\n\nQuestions:\n\n- How long does it take to train the GANs? Is it comparable to the runtime of the other algorithms?\n\n- Can these algorithms work in the stronger notions of corruption considered in [1, 2]?\n\nOverall conclusion:\n\nThe paper proposes a novel framework for robust estimation. However, in light of the previous provable and much simpler algorithms for robust estimation, in the end it seems to me that deep learning is an unnecessarily complicated approach to this problem. While the authors demonstrate some experimental improvement in the test cases they tried, the lack of provable guarantees for their approach limits the theoretical appeal of their paper. More conceptually, I am unconvinced that their approach is the correct approach to understanding algorithmic notions of depth, for the reasons described above.\n\n[1] Kevin A Lai, Anup B Rao, and Santosh Vempala. Agnostic estimation of mean and covariance. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pp. 665–674. IEEE, 2016.\n\n[2] Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Being robust (in high dimensions) can be practical. arXiv preprint arXiv:1703.00893, 2017.", "The authors considered Huber contamination model.\nThey use f-divergence and its variational lower bound to get a criterion for probability distribution function estimation.\nThey showed that under different functions f in f-divergence they can get different criteria used in robust depth-based estimation of a mean and/or covariance matrix.\nFor f, corresponding to the Total Variation divergence and discriminator being a logistic regression, they proved that the robust estimate can achieve the minimax rate, although there could be difficulties to optimize the criterion. 
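For reference, the variational lower bound mentioned here is presumably the standard f-GAN (Nguyen-Wainwright) bound: for a convex f with convex conjugate f^* and any class of test functions \mathcal{T},

```latex
D_f(P \,\|\, Q) \;\ge\; \sup_{T \in \mathcal{T}} \;
\mathbb{E}_{X \sim P}\, T(X) \;-\; \mathbb{E}_{X \sim Q}\, f^*(T(X)),
```

with equality when \mathcal{T} is rich enough.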
Then the authors showed that for the JS-divergence with a discriminator in the form of a one-layer neural network one can get a robust and optimal estimate, while the criterion itself can be efficiently optimized.\n\nComments\n- it could be good to define what Tau is right after formula (3). Analogously for the class of probability distributions $\\mathcal{Q}$ in (4), and for $\\tilde{\\mathcal{Q}}$ in (5) \n- page 3, line 12 from above: “and f’(t) = e^{t-1}.” In fact, here we should use $f^*(t)$\n- page 3, proposition 2.1, subsection 1 of the proposition: $\\tilde{\\mathcal{Q}}$ instead of $\\tilde{Q}$ should be used as a notation for a class of probability distributions\n- in (12) the authors unexpectedly introduced a new notation $D$. I guess they should specify right after formula (12) what $D$ is\n- theorem 3.1. If it is possible, it could be good at least to speculate on how $C, C’$ depend on $c$ in the displayed formula\n- axis labels in figure 2 are almost impossible to read. This should somehow be improved\n- in table 1 we clearly see that TV-GAN is better for some of the problems, and JS-GAN is better for others. Why? Any comments? At least intuition?\n- page 8, “On the other hand, JS-GAN stably achieves the lowest error in separable cases and also shows competitive performances for non-separable ones.” Why? Any comments?\n\nConclusion\n- in general, the paper is well written\n- it contains a sufficient number of experiments to prove that the proposed approach is reasonable\n- the connection between GANs based on f-divergence and robust estimation seems to be important. Thus I’d like to propose to accept this paper\n" ]
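To make the alternating optimization discussed throughout this record concrete, here is a minimal PyTorch sketch of a JS-GAN mean estimator of the kind described: the generator is N(η, I_p), the discriminator has one hidden layer, and both are updated by alternating gradient steps. Network sizes, learning rates, and step counts are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn

p = 100
x_real = torch.randn(1000, p)  # stand-in for eps-contaminated observations

eta = torch.zeros(p, requires_grad=True)  # generator parameter: N(eta, I_p)
disc = nn.Sequential(nn.Linear(p, 20), nn.ReLU(), nn.Linear(20, 1))
opt_g = torch.optim.SGD([eta], lr=1e-2)
opt_d = torch.optim.SGD(disc.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(1000, 1), torch.zeros(1000, 1)

for step in range(2000):
    x_fake = eta + torch.randn_like(x_real)  # reparameterized N(eta, I_p)
    # Discriminator ascends E_P[log D(X)] + E_{N(eta,I)}[log(1 - D(X))].
    d_loss = bce(disc(x_real), ones) + bce(disc(x_fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # The estimate eta descends the same JS objective (minimax form).
    g_loss = -bce(disc(eta + torch.randn_like(x_real)), zeros)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(eta.norm())  # should stay near 0 for uncontaminated N(0, I_p) data
```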
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "rklmCB5rk4", "iclr_2019_BJgRDjR9tQ", "SJgijaKR6Q", "rkx0ascORX", "rkx0ascORX", "S1lJt6YR6X", "ByluFd5mpm", "rkxfaWqihQ", "B1e-NIOq3Q", "iclr_2019_BJgRDjR9tQ", "iclr_2019_BJgRDjR9tQ", "iclr_2019_BJgRDjR9tQ" ]
iclr_2019_BJg_roAcK7
INVASE: Instance-wise Variable Selection using Neural Networks
The advent of big data brings with it data with more and more dimensions and thus a growing need to be able to efficiently select which features to use for a variety of problems. While global feature selection has been a well-studied problem for quite some time, only recently has the paradigm of instance-wise feature selection been developed. In this paper, we propose a new instance-wise feature selection method, which we term INVASE. INVASE consists of 3 neural networks: a selector network, a predictor network and a baseline network, which are used to train the selector network using the actor-critic methodology. Using this methodology, INVASE is capable of flexibly discovering feature subsets of a different size for each instance, addressing a key limitation of existing state-of-the-art methods. We demonstrate through a mixture of synthetic and real data experiments that INVASE significantly outperforms state-of-the-art benchmarks.
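In symbols (a hedged reconstruction assembled from the reviews and responses below, not the paper's exact notation), the selector S is trained to approximately solve

```latex
\min_{S} \; \mathbb{E}_{x}\!\left[ \mathrm{KL}\!\left( p(y \mid x) \,\big\|\, p\big(y \mid x^{(S(x))}\big) \right) + \lambda \, \lVert S(x) \rVert_0 \right],
```

where x^{(S(x))} keeps only the selected features and λ controls sparsity.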
accepted-poster-papers
This manuscript proposes a new algorithm for instance-wise feature selection. To this end, the selection is achieved by combining three neural networks trained via an actor-critic methodology. The manuscript highlights that, beyond prior work, this strategy enables the selection of a different number of features for each example. Encouraging results are provided on simulated data in comparison to related work, and on real data. The reviewers and AC note issues with the evaluation of the proposed method. In particular, evaluation on computer vision and natural language processing datasets may have further highlighted the performance of the proposed method. Further, while technically innovative, the approach is closely related to prior work (L2X) -- limiting the novelty.
val
[ "SkgNuvW707", "BkehMR6shm", "Byl06-RnTQ", "H1lgbw8K3X", "S1xeFW7oT7", "SylXL-XspQ", "r1eVBWms6Q", "Hke-ey7oTQ", "rkxFiB2Tom" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "I would like to thank the authors for clarifying my concerns in details, especially for the first point. I think this is a straightforward idea that relaxes the need for a predefined k in L2X and has good performance. I have updated my score accordingly.", "This paper proposes an instance-wise feature selection method, which chooses relevant features for each individual sample. The basic idea is to minimize the KL divergence between the distribution p(Y|X) and p(Y|X^{(s)}). The authors consider the classification problem and construct three frameworks: 1) a selector network to calculate the selection probability of each feature; 2) a baseline network for classification on all features; 3) a predictor network for classification on selected features. The goal is to minimize the difference between the baseline loss and predictor loss.\n\nThe motivation of the paper is clear and the presentation is easy to follow. However, I have some questions on the model and experiments:\n\n1. How is Eq. (5) formulated? As the selector network does not impact the baseline network, an intuition regarding Eq. (5) is to maximize the predictor loss, which seems not reasonable. It seems more appropriate to use an absolute value of the difference in Eq. (5). Some explanation for the formulation of Eq. (5) would be helpful.\n\n2. The model introduces an extra hyper-parameter, $\\lambda$, to adjust the sparsity of selected features. I was curious how sensitive is the performance w.r.t. this hyper-parameter. How is $\\lambda$ determined in the experiments?\n\n3. After the selector network is constructed, how are the features selected on testing data? Is the selection conducted by sampling from the Bernoulli distribution as in training or by directly cutting off the features with lower probabilities?\n", "First, I want to thank the authors for providing detailed replies to my questions. I have detailed comments for how I think of this paper in my original review. The authors have done a good work to address my concerns. I write my current opinions in short to support my updated score. I see the value of this work is proposing a new instance-wise feature selection method, INVASE, which has a tight relation with L2X. The most important value of INVASE compared to L2X is that one does not need to choose k, the number of relevant features, in advance. The authors have demonstrated that INVASE outperforms L2X and other methods for their synthetic data. They use MAGGIC medical dataset to show that INVASE is doing instance-wise feature selection qualitatively. The authors mention in the feedback that they will run experiments on Kaggle Dog vs Cat dataset and provide those results in the Appendix. I think the work has the value that it does not need to choose k in advance compared to L2X and the authors have detailed synthetic data results, so I tend to accept this work and updated my score. I do not see that instance-wise feature selection is useful for medical dataset MAGGIC in practice and I think that instance-wise feature selection is useful for CV and NLP applications. The lack of CV and NLP applications is a weak point. It is nice to include Kaggle Dog vs Cat dataset in the Appendix, the authors can consider applications in L2X and other previous works as well.\n\nOne can check my original questions and authors' feedback for experiments on synthetic dataset and MAGGIC dataset. I write a summary about my technical questions and authors' feedback. I think that they addressed my concerns. 
The first question is whether one can use w_\\theta(x) in L2X to select features. As can be seen in the authors' feedback, w_\\theta(x) in L2X has to depend on k and it does not work well. The second question is the meaning of the baseline network. Though it is a constant term in optimization, it is used to reduce variance in actor-critic models. Intuitively, the update of the baseline network can help to get the predictor network and baseline network to a similar \"level of convergence\" in each state. As the authors mentioned, if one trains an optimal baseline network and uses it, the performance difference is marginal. The authors have addressed my technical concerns. I am satisfied with the feedback.", "This paper proposes a new instance-wise feature selection method, INVASE. It is closely related to the prior work L2X (Learning to Explain). There are three differences compared to L2X. The most important difference is about how to backpropagate through subset sampling to select features. L2X uses the Gumbel-softmax trick and this paper uses actor-critic models.\n\nThe paper is written well. It is easy to follow the paper. The contribution of this paper is that it provides a new way, compared to L2X, to backpropagate through subset sampling in order to select features. The authors compare INVASE with L2X and several other approaches on synthetic data and show outperforming results. In the real-world experiments, the authors do not compare INVASE with other approaches. \n\nRegarding experiments, instance-wise feature selection is often applied on computer vision or natural language processing applications, where global feature selection is not enough. This paper lacks experiments on CV or NLP applications. For the MAGGIC dataset, I expect to see subgroup patterns. The patterns that the authors show in Figure 2 are very different for all 20 randomly selected patients. The authors do not explain why it is preferred to see very different feature patterns for all patients instead of subgroup patterns.\n\nI have questions about the other two differences from L2X pointed out by the authors. First, the selector function outputs a probability for selecting each feature \\hat{S}^\\theta(x). In the paper of L2X, it also produces a weight vector w_\\theta(x) as described in section 3.4. I think the \\hat{S}^\\theta(x) has a similar meaning to w_\\theta(x) in L2X. In the synthetic data experiment, the authors fix the number of selected features for L2X so that it is forced to overselect or underselect features in the example of Syn4. Did the authors try to relax this constraint for L2X and use w_\\theta(x) in L2X to select features as \\hat{S}^\\theta(x) is used in INVASE? \n\nSecond, I agree with the authors that L2X is inspired by maximizing mutual information between Y and X_S and INVASE is inspired by minimizing KL divergence between Y|X and Y|X_S. Both intuitions lead to similar objective functions, except that INVASE has the extra terms \\log p(y|x) and \\lambda ||S(x)||. INVASE is able to add an l_0 penalty on S(x) since it uses actor-critic models. For the \\log p(y|x) term, as the authors mentioned, it helps to reduce the variance in actor-critic models. This \\log p(y|x) term is a constant in the optimization of S(x). In Algorithm 1, line 12, the update of \\gamma does not depend on other parameters related to the predictor network and selector network. Could the authors first train a baseline network and use it as a fixed function in Algorithm 1? 
I don't understand the meaning of updating \\gamma iteratively with the other parameters, since it does not depend on the learning of the other parameters. Does this constant term \\log p(y|x) have other benefits besides reducing variance in actor-critic models?\n\nI have another minor question about scaling. How does the scaling of X affect the feature importance learned by INVASE?\n\nNote: I have another concern about the experiments. Previous instance-wise variable selection methods are often tested on CV or NLP applications; could the authors present such experiments, as in previous works?", "Thank you for the insightful comments.\n\nA1: We performed extensive experiments in the synthetic setting on all methods (we both reproduced and extended the settings from L2X). In addition to this, results for semi-synthetic data (where the underlying features are from real data but the label is generated synthetically) can be found in the Appendix on page 16. It is necessary to perform experiments on synthetic data if we wish to be able to compare the TPR and FDR of the different methods since we require knowledge of the ground truth relevant features.\n\nFor the real-world results, our focus was on qualitative results (believing we had already demonstrated the method's efficacy in the synthetic - and in the appendix the semi-synthetic - settings). We will move the semi-synthetic results to the main body of the paper to make clear that we have demonstrated the performance in this setting.\n\nFor the real-data experiment in which we report prediction performance, we have extended our results to include the other approaches. We use the same predictive model as the INVASE predictor network (to allow a fair comparison) but use only the selected features of each approach. As can be seen in the table below (for the PLCO dataset), INVASE does significantly outperform the other approaches. 
Detailed results will be added to the revised manuscript.\n\n----------------------------------------------------------------------------------------------------\n Labels | 5-year | 10-year |\n Metrics | AUROC | AUPRC | AUROC | AUPRC |\n----------------------------------------------------------------------------------------------------\n INVASE | 0.637 | 0.329 | 0.673 | 0.506 |\n L2X | 0.558 | 0.170 | 0.583 | 0.365 |\n LIME | 0.597 | 0.183 | 0.601 | 0.374 |\n Shapley | 0.614 | 0.194 | 0.615 | 0.381 |\n Knockoff | 0.619 | 0.230 | 0.658 | 0.475 |\n Tree | 0.632 | 0.269 | 0.655 | 0.469 |\n SCFS | 0.632 | 0.231 | 0.632 | 0.444 |\n LASSO | 0.623 | 0.218 | 0.656 | 0.467 |\n----------------------------------------------------------------------------------------------------\n\nA2: Our method can definitely be applied to CV or NLP, though in the paper we focus on what we believe to be an equally important application where global feature selection is not enough, i.e. medicine.\nWe will provide qualitative results in the Appendix of the revised manuscript using the Kaggle Dog vs Cat dataset (https://www.kaggle.com/c/dogs-vs-cats).\n\nA3: The results shown in Figure 2 for the MAGGIC dataset are entirely qualitative. We are not suggesting that the patterns shown are preferred (or expected) but rather showing that when we use INVASE to discover features for MAGGIC, we find that the patterns are different (though, if you look at, for example, patients 9, 10 and 11 we see a similar pattern for all 3). To us, this simply reinforces the fact that instance-wise feature selection is necessary - if MAGGIC did indeed only contain subgroup patterns then we would expect INVASE to pick these out (as it does in the synthetic and semi-synthetic experiments where, for example in Syn4, Syn5 and Syn6, there are two distinct subgroups).", "\nA4: The main problem in doing this for L2X is in the training stage (not the testing stage). As can be seen in the equations to compute V values (page 4, end of the left column of the L2X paper), they must provide some k to train with, which is, in general, unknown in real-world datasets (because we don’t know how many features are relevant in the real-world datasets). The weights w(X) are optimized according to a specific feature selection strategy during training; using them in a different strategy during testing would not make sense, as they are no longer optimized for this strategy. While intuitively possible, consider that, due to the way they’ve been trained, the weights w(X) are expected to “spit out” k features. Because of this, it might be that the weights for the unselected features are essentially random (but lower than those of the selected k features). We have no reason to believe that the weights beyond the selected k features would be meaningful (since during training the method only ever selected precisely k features).\n\nWe have, however, conducted an experiment in the Syn4 and Syn5 settings with 100 features in which we directly use w(X) and threshold it to select features. As can be seen below, the results are significantly worse than for INVASE and the large increase in FDR is indeed consistent with the fact that the weights beyond the top k are not well-disciplined. We will clarify this in the revised manuscript. 
Note that the published code of the L2X paper is also forced to select k features in both the training and testing stages.\n----------------------------------------------------------------------------------------\n Datasets | Syn4 | Syn5 |\n Thresholds | TPR | FDR | TPR | FDR |\n----------------------------------------------------------------------------------------\n L2X | 0.1 | 87.4 | 93.5 | 79.5 | 95.3 |\n L2X | 0.3 | 69.9 | 83.8 | 77.2 | 77.1 |\n L2X | 0.5 | 69.8 | 64.1 | 66.4 | 84.6 |\n L2X | 0.7 | 59.1 | 61.2 | 54.4 | 65.7 |\n L2X | 0.9 | 52.7 | 44.8 | 51.2 | 50.5 |\n INVASE | 66.3 | 40.5 | 73.2 | 23.7 |\n----------------------------------------------------------------------------------------\n\nA5: The baseline network does not have to be trained iteratively with the other networks, however in actor-critic models it typically is. This is because the baseline is used in some sense to “normalize” the predictor network. For this reason, it is therefore good to have the baseline and predictor at a similar “level of convergence”. However, the performance differences are marginal between the two methods, and so we found that it was not important which training method we used.\n\nA6: The scaling of X is not important. At no point do we multiply the feature vector (X) by the “importance weights”. The weights are used to obtain a binary mask vector which is then multiplied (element-wise) with the feature vector. As such, the unselected features end up being 0 and the selected features retain their original value.", "Thank you for the insightful comments.\n\nA1: Equation (5) is the difference between the cross-entropies of the predictor and baseline networks. The first term (-sum_y log f_i^\\phi (x^(s), s)) is the cross-entropy of the predictor network and the second term (-sum y log f_i^\\gamma (x)) is the cross-entropy of the baseline network. The loss in equation (5) is defined as the “first term – second term”. The selector network is trained to minimize this, not maximize it. Note that the baseline network is introduced to reduce the variance of this quantity, and not as a term that the selector network can change (this is a standard technique used in the actor-critic literature).\n\nAlso note that if the baseline network term (the second term) in equation (5) is removed, then we simply end up with the predictor loss defined in the “Predictor Network” section (l_1).\n\nIf instead we were to use absolute value, then when the baseline network loss is larger than the predictor network loss, the method would actually be trying to maximise the predictor network loss (which we do not want).\n\nIt is important to note that we are not trying to minimize the difference between the predictor and baseline losses - we are using the baseline to reduce the variance of the overall loss and we are simply trying to minimize the predictor loss.\n\nA2: As can be seen in page 13 (subsection “Details of INVASE”), we explain that “We use cross-validation to select lambda among {0.1,0.3,0.5,1,2,5,10}”. We select the lambda which maximizes the predictor accuracy in terms of AUROC. We will clarify this in the revised manuscript. Below, we give the results for various values of lambda in the Syn4, Syn5, and Syn6 settings. 
More detailed results will be added to the revised manuscript.\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Datasets | Syn4 | Syn5 | Syn6 |\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Lambda / Metrics (%) | TPR | FDR | TPR | FDR | TPR | FDR | \n--------------------------------------------------------------------------------------------------------------------------------------------------\n 0.1 | 98.0 | 94.3 | 90.0 | 93.4 | 99.2 | 92.3 |\n 0.3 | 93.7 | 87.9 | 84.2 | 88.9 | 96.9 | 86.7 |\n 0.5 | 99.0 | 43.1 | 88.3 | 50.6 | 99.6 | 31.7 |\n 1 | 66.3 | 40.5 | 73.2 | 23.7 | 90.5 | 15.4 |\n 2 | 0.0 | 0.0 | 25.4 | 4.1 | 67.1 | 3.6 |\n 5 | 0.0 | 0.0 | 7.5 | 2.7 | 7.6 | 2.5 |\n 10 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | \n-------------------------------------------------------------------------------------------------------------------------------------------------- \n\nA3: As can be seen in the GitHub code (anonymously published on https://github.com/iclr2018invase/INVASE), on testing data, we select the features whose selection probabilities are larger than 0.5. (see line 225 in INVASE-.py and line 274 in INVASE.py) We will clarify this in the revised manuscript.", "In the paper, the authors proposed a new algorithm for instance-wise feature selection. In the proposed algorithm, we prepare three DNNs, which are a predictor network, a baseline network, and a selector network. The predictor network and the baseline network are trained so that they fit the data well, where the predictor network uses only selected features sampled from the selector network. The selector network is trained to minimize the KL-divergence between the predictor network and the baseline network. In this way, one can train a selector network that selects different feature sets for each given instance.\n\nI think the idea is quite simple: the use of three DNNs and the proposed loss functions seem to be reasonable. The experimental results also look promising.\n\nI have a concern on the scheduling of training. Too fast training of the predictor network can lead to a suboptimal selector network. I have checked the implementations on GitHub, and found that all the networks used Adam with the same learning rates. Is there any issue of training instability? And, if so, how can we confirm that a good selector network has been trained?\n\nAnother concern of mine is the implementations on GitHub. The repository originally had INVASE.py. In the middle of the reviewing period, I found that INVASE+.py had been added. I am not sure which implementation is used for this manuscript. It seems that INVASE.py contains only two networks, while INVASE+.py contains three networks. I therefore think the latter is the implementation used for this manuscript. If this is the case, what is INVASE.py for?\nI am also not sure if it is appropriate to \"communicate\" through external repositories during the reviewing period." ]
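The TPR and FDR percentages quoted throughout these exchanges compare each method's selected features against the ground-truth relevant features of the synthetic setting; a small helper of the kind presumably used (names illustrative, not from the published code):

```python
import numpy as np

def tpr_fdr(selected, relevant):
    """TPR/FDR (%) of a selected feature set against the ground truth."""
    selected = np.asarray(selected, dtype=bool)
    relevant = np.asarray(relevant, dtype=bool)
    tpr = (selected & relevant).sum() / max(relevant.sum(), 1)
    fdr = (selected & ~relevant).sum() / max(selected.sum(), 1)
    return 100.0 * tpr, 100.0 * fdr

# 5 features: the first two selections are correct, the third is spurious
print(tpr_fdr([1, 1, 0, 0, 1], [1, 1, 1, 0, 0]))  # (66.7, 33.3)
```

Pulling together A1-A3 above (the Eq. (5) loss, the sampled Bernoulli mask, the variance-reducing baseline, and the 0.5 threshold at test time), one INVASE training step could be sketched as below. This is a non-authoritative PyTorch reconstruction; the published implementation uses Keras and different layer sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n_cls, lam = 11, 2, 1.0  # feature dim, classes, sparsity weight (illustrative)
selector  = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, n_cls))
baseline  = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, n_cls))
opts = [torch.optim.Adam(m.parameters(), lr=1e-3)
        for m in (selector, predictor, baseline)]

def train_step(x, y):
    probs = torch.sigmoid(selector(x))        # per-feature selection probs
    s = torch.bernoulli(probs).detach()       # sampled binary mask
    loss_pred = F.cross_entropy(predictor(x * s), y, reduction="none")
    loss_base = F.cross_entropy(baseline(x), y, reduction="none")
    # Selector loss (Eq. (5)): predictor CE minus baseline CE, weighted by
    # the log-probability of the sampled mask (REINFORCE-style), plus the
    # expected number of selected features as an l_0-style penalty.
    log_pi = (s * torch.log(probs + 1e-8)
              + (1 - s) * torch.log(1 - probs + 1e-8)).sum(1)
    loss_sel = ((loss_pred - loss_base).detach() * log_pi).mean() \
               + lam * probs.sum(1).mean()
    for o in opts:
        o.zero_grad()
    (loss_sel + loss_pred.mean() + loss_base.mean()).backward()
    for o in opts:
        o.step()

x, y = torch.randn(64, d), torch.randint(0, n_cls, (64,))
train_step(x, y)
test_mask = torch.sigmoid(selector(x)) > 0.5  # test-time selection (A3)
```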
[ -1, 6, -1, 6, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, 3 ]
[ "Hke-ey7oTQ", "iclr_2019_BJg_roAcK7", "r1eVBWms6Q", "iclr_2019_BJg_roAcK7", "rkxFiB2Tom", "H1lgbw8K3X", "H1lgbw8K3X", "BkehMR6shm", "iclr_2019_BJg_roAcK7" ]
iclr_2019_BJgklhAcK7
Meta-Learning with Latent Embedding Optimization
Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
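In symbols, the latent-space adaptation the abstract describes is plausibly of the form (our notation: g is the decoder mapping latent codes to model parameters, and L^train is the inner-loop loss on the few-shot examples)

```latex
z' \;=\; z \;-\; \alpha \, \nabla_{z} \, \mathcal{L}^{\mathrm{train}}\!\big( g(z) \big),
```

so that gradient steps are taken in the low-dimensional code z rather than in parameter space.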
accepted-poster-papers
This work builds on MAML by (1) switching from a single underlying set of parameters to a distribution in a latent lower-dimensional space, and (2) conditioning the initial parameter of each subproblem on the input data. All reviewers agree that the solid experimental results are impressive, with careful ablation studies to show how conditional parameter generation and optimization in the lower-dimensional space both contribute to the performance. While there were some initial concerns on clarity and experimental details, we feel the revised version has addressed those in a satisfying way.
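A rough PyTorch sketch of the two ideas highlighted above, namely a data-conditioned latent initialization and gradient-based adaptation in the low-dimensional latent space, is given below. Dimensions, step counts, and the plain linear encoder/decoder are illustrative; the actual model additionally uses a relation network and stochastic latent codes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_z, n_cls, n_steps, alpha = 640, 64, 5, 5, 1.0
encoder = nn.Linear(feat_dim, n_z)
decoder = nn.Linear(n_z, feat_dim)  # latent code -> top-layer classifier weights

def inner_loop(x_train, y_train):
    # Data-dependent initialization: encode the few-shot examples and
    # average per class to get one latent code per class.
    z = torch.stack([encoder(x_train[y_train == c]).mean(0)
                     for c in range(n_cls)])           # (n_cls, n_z)
    z = z.detach().requires_grad_(True)
    for _ in range(n_steps):
        w = decoder(z)                                 # (n_cls, feat_dim)
        loss = F.cross_entropy(x_train @ w.t(), y_train)
        # create_graph=True would let an outer meta-update differentiate
        # through this adaptation, as in MAML-style training.
        (grad,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - alpha * grad                           # adapt in latent space
    return decoder(z)                                  # final adapted weights

x = torch.randn(25, feat_dim)                          # 5-way, 5-shot features
y = torch.arange(n_cls).repeat_interleave(5)
w_adapted = inner_loop(x, y)
```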
train
[ "BJx4Oa-Be4", "BJl60tXjJN", "SkeNDEQoJN", "H1gS9SOFk4", "HJlZ5J93hm", "rkgrGgvgRX", "BygPCp8lRQ", "H1ec3GPg0m", "ryxI0Wwx0X", "S1xuqePgAQ", "SJler0LgAQ", "ryeVsW1Ha7", "r1gYWGRjnQ", "ryeYsYpe2m", "SyxZkTAacm", "B1gQ7PdK5Q" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Hi, \n\nMay I ask what happens if the relation net is removed? How much will it affect the performance?", "Thanks for your constructive comments! We are happy to address any remaining concerns.", "Thank you for helping us improve the paper! We are in the process of open-sourcing our code and embeddings.", "Thanks for the clarification. Most of my concerns are resolved, and so I increase the rating accordingly. ", "This paper presents a new meta-learning framework that learns data-dependent latent space and performs fast adaptation in the latent space. To this end, an encoder that maps data to latent codes and a decoder that maps latent codes to parameters are also learned. Experimental results demonstrate its effectiveness for few-shot learning.\n\nInterestingly, the initialization for adaptation is task-dependent, which is different from conventional MAML-like methods. Furthermore, the results on multimodal few-shot regression seems to show that it works well for multimodal task distribution, which is important for generalization of meta-learning. However, there are quite a few questions that are less clear and require more discussions and insights. \n\n1. Since this work relies heavily on an encoding process and a decoding process, more details and discussions on the design and requirement of the encoder and decoder are necessary. Particularly, the inclusion of a relation network in the encoding process seems ad hoc. More explanations may be helpful. \n\n2. It is less clear why this method can deal with multimodal task distribution as shown in the regression experiment. Is it related to the data-dependent model initialization?\n\n3. It applies meta-learning in a learned latent space, which seems quite related to a recent work, Deep Meta-Learning: Learning to Learn in the Concept Space, Arxiv 2018, where meta-learning is performed in a learned concept space. A discussion on its difference to this prior work seems necessary. ", "Thanks for your comments and appreciation of our empirical results! We address the concerns as follows: (1) We have plotted the architecture of LEO in a new diagram; (2) we state that z_n is a 64-dimensional code, similar in semantics to typical VAE latents; (3) we have clarified that the testing phase is identical to training, except for disabling stochastic behavior during few-shot classification, which is not atypical for generative models.\nDue to space constraints, a lengthy description of the underlying implementation is given in the Appendices.\n\nThe key changes to the paper are detailed as follows:\n\n(1) While architectures and parameter settings will have a big impact empirically, we felt that they distracted from the reader’s understanding of the algorithm. Thus, as done by recent work, we opted to keep the description of the algorithm clean, to maximize clarity, and show all the details we believe necessary for reproducibility in the Appendix. \n\nConcretely, we give the precise architectures and exact sizes of network layers and outputs in Appendix B.3.:\n\n“We used the same network architecture of parameter generator for all datasets and tasks. The encoder and decoder networks were linear with the bottleneck embedding space of size n_s= 64. The relation network was a 3-layer fully connected network with 128 units per layer and rectifier nonlinearities. For simplicity we did not use biases in any layer of the encoder, decoder nor the relation network. Table 4 summarizes this information. 
Note that the last dimension of the outputs of the relation network and the decoder are two times larger than n_z and dim(x) respectively, as they are used to parameterize both means and variances of the corresponding Gaussian distributions.”\n----------------\n\nWe also added a new intuition diagram of the LEO architecture to the main paper, with some motivation:\n \n“Figure 2 shows the architecture of the resulting network. Intuitively, the decoder is akin to a generative model, mapping from a low-dimensional latent code to a distribution over model parameters. The encoding process ensures that the initial latent code and parameters before gradient-based adaptation are already data-dependent. This encoding process also exploits a relation network that allows the latent code to be context-dependent, considering the pairwise relationship between all classes in the problem instance.”\n--------------\n\n\n(2) Such parameterizations of Gaussian distributions are quite common with generative models, and appear for example in the standard variational autoencoder (see https://arxiv.org/abs/1312.6114 ). For few-shot classification, z_n is a latent vector with 64 dimensions (as stated in Appendix B.3), and is passed through the decoder to produce the means and variances which parameterize the output distribution over inference model parameters. We highlight the relationship to generative models in the main text for clarity, though it should be pointed out that our model cannot be considered an autoencoder (we do not use a reconstruction loss and the input and output spaces of the encoder/decoder are different: data and parameters respectively).\n\n\n(3) In the meta-testing phase, the procedure is the same as in training, except that instead of computing the (outer) meta-training loss, the adapted parameters are only used to perform a task. This is the same procedure as in MAML and other previous work; we have added this information to Appendix B.8 Overview of the evaluation procedure:\n\n“The procedure for evaluation is similar to meta-training, except that we disable stochasticity and dropout. Naturally, instead of computing the meta-training loss, the parameters (adapted based on Loss_train) are only used for inference on that particular task. That is:\n1. A problem instance is drawn from the evaluation meta-set.\n2. The few-shot samples are encoded to latent space, then decoded; the means are used to initialize the parameters of the inference model.\n3. A few steps of adaptation are performed in latent space, followed (optionally) by a few steps of adaptation in parameter space.\n4. The resulting parameters are used as the final adapted model for that particular problem instance.”\n--------\n\nWe hope that these changes clarify the paper for future readers.\n", "We thank the reviewer for their constructive comments, which we have addressed as follows: (1) we have added clarifications on the architecture of LEO, highlighting the role of the relation network; (2) we added an intuitive explanation for why LEO can express multimodal parameter distributions; (3) we discuss (and compare with) DEML.\n\nBelow we detail the changes and further clarifications: \n\n(1) We have added a diagram (Figure 2) in Section 2.3 that characterizes the different steps in the process (including encoding, decoding, adaptation, etc) and have consolidated further intuition and justification into Section 2.3.1: Model Overview. 
We hope this makes the conceptual design of the LEO network architecture clearer:\n\n“Figure 2 shows the architecture of the resulting network. Intuitively, the decoder is akin to a generative model, mapping from a low-dimensional latent code to a distribution over model parameters. The encoding process ensures that the initial latent code and parameters before gradient-based adaptation are already data-dependent. This encoding process also exploits a relation network that allows the latent code to be context-dependent, considering the pairwise relationship between all classes in the problem instance.”\n--------------\n\nFor generality, we have decided to first give an overview of LEO independently of the exact underlying model details, but we make the design of the parametric form of “f” concrete in Section 2.3.2. Decoding. Our parameter generative model needs to only produce the final output layer of a deep model for few-shot classification. This is a typical approach in recent state-of-the-art works, e.g. DEML, Qiao et. al 2017. We give the exact sizes of network layers and outputs in Appendix B.3. \n\n“Without loss of generality, for few-shot classification, we can use the class-specific latent codes to instantiate just the top layer weights of the classifier. This allows the meta-learning in latent space to modulate the important high-level parameters of the classifier, without requiring the generator to produce very high-dimensional parameters.”\n--------------\n\nRelation nets are particularly useful for our problem because they allow us to consider “context” when obtaining a parameter initialization. That is, the latent code and the resulting parameters for a particular class will depend on which other classes are present in the problem instance, making the encoder not only data-, but also context-dependent. Oreshkin et. al (2018) and Sung et al. (2018) have previously exploited this for meta-learning, as we mentioned in section 2.3.1. We have made this argument more explicit in Section 2.3.2 for clarity:\n\n“The first stage is to instantiate the model parameters that will be adapted to each task instance. Whereas MAML explicitly maintains a single set of model parameters, LEO utilises a data-dependent latent encoding which is then decoded to generate the actual initial parameters. In what follows, we describe an encoding scheme which leverages a relation network to map the few-shot examples into a single latent vector. This design choice allows the approach to consider context when producing a parameter initialization. Intuitively, decision boundaries required for fine-grained distinctions between similar classes might need to be different from those for broader classification.”\n--------------\n\n(2) Using a stochastic parameter generative model enables LEO to represent multimodal parameter distributions, in a similar way to how standard generative models can capture multimodal distributions of rich, high-dimensional input data (see VAE: https://arxiv.org/abs/1312.6114 ). This motivates our use of a parameter generator, and we have now highlighted this more clearly in Section 4.1: When meta-learning a clearly bi-modal task distribution, e.g. 
random sines and lines,\n\n“learning a generative distribution of model parameters should allow several different likely models to be sampled, in a similar way to how generative models such as VAEs can capture different modes of a multimodal data distribution.”\n--------------", "We thank the reviewers for their feedback, which has helped to considerably strengthen the paper. We have addressed all of the comments, and the key changes we have made include:\n\n(1) A diagram showing the architecture for LEO, providing a visual representation of how the different components work together.\n(2) Additional intuition and discussion in the text to better explain and justify the role of the encoder, decoder, and relation net in our approach.\n(3) Strengthening the appendix with more parameter settings and architectural details to further facilitate reproducibility.\n(4) Other small clarifications throughout the text to address reviewers’ comments and improve readability.\n\nWe hope the above changes will make the paper clearer for future readers.\n", "We thank you for your review and comments, which we look to address below.\n\n- My one suggestion would be to test the model on realistic data from beyond the image domain, perhaps on something sequential like language (consider the few-shot PTB setting from Vinyals et al. (2016)).\n\nThanks for the suggestion! We are definitely interested in extending this approach to other domains (including RL and sequential tasks), and have clarified this in the future work section.\n\n- Relation Networks are computationally intensive, although in few-shot learning the sets encoded are fairly small. Can you discuss the computational cost and training time of the full framework?\n\nThe LEO training process is actually quite short, for example taking 1-2 hours (plus pre-training an embedding) on a multi-core CPU for miniImagenet. We have put this information into the appendix B.6. While the relation net may appear to be computationally intensive, note that it is only performed once per problem instance (for the data-dependent initialization and not for each of the adaptation steps), and can be trivially parallelised on a GPU.\n\n- What happens empirically when you generate parameters for more than just the output layer in, eg, your convolutional networks?\n\nThe first step in that direction was taken for the few-shot regression task, where LEO is used to instantiate the parameters of a 3-layer MLP, yielding good results.\nFor few-shot classification, this a promising direction for future work. We anticipate that this will be a more difficult optimization problem but the results on the regression problem are encouraging. \n\n- What happens if you try to learn networks from scratch through the meta-learning process rather than pre-training and fine-tuning them? Some of the methods you compare against do so, to my understanding.\n\nTo clarify, the pre-training phase refers to the feature extractor whose output is fed into the softmax classifier, and it is kept fixed. Meta-learning is performed for the parameters of the softmax classifier, and the “fine-tuning” that is specified in Section 4.2.3 refers to additional adaptation directly in parameter space (i.e. the softmax parameters) rather than in the latent space. 
We have added this information to Section 4.2.3 to make this clear.\n\nNote from the ablation study in Table 2 that we observe only marginal gains from fine-tuning.\n\nGetting rid of the separate pre-training phase and learning everything end-to-end is an interesting problem, though an orthogonal one to the contributions of our work. We haven't tried this yet; our intuition is that, due to different learning dynamics of the feature extractor and the meta-learner, it will be much harder optimization-wise. But obviously, having a broader model space will likely lead to the existence of a better global optimum.\n", "Thank you for your interest in our paper and for the comment! \n\nWe hope that the ablation study in Table 2 helps with an incremental implementation of our model, since it measures how each component contributes towards performance; the architecture details, experiments, and hyperparameters to reproduce the approach are provided in the appendices.\n\nWe are committed to making reproducing our results easy for all readers; could you please glance at the updated manuscript and point out any omissions so that we can clarify them shortly? \n", "(3) Thank you for drawing our attention to this work. It is relevant and we now cite and compare to it in the related work section and in Table 1. However, it is important to point out that what we call the latent space is fundamentally different from the concept space in Deep Meta-Learning (DEML). In our work, the latent space (while data-dependent) is used to generate the parameters of the model, and can hence be viewed as a compressed representation of the model parameter space. In contrast, DEML uses a deep representation of the data upon which meta-learning is performed. While our work performs adaptation of these latent codes (and hence the resulting higher-dimensional parameter space), the DEML approach adapts parameters of the network directly, whose inputs are learned representations belonging to the concept space of DEML.\n\nActually, the DEML concept space plays a similar role to our feature embedding space (Section 4.2.2) which we use as an input to the model. Thus, the approaches are orthogonal and could potentially be combined in future work.\n\n“Zhou et al. (2018) train a deep input representation, or “concept space”, and use it as input to an MLP meta-learner, but perform gradient-based adaptation directly in its parameter space, which is still comparatively high-dimensional. As we will show, performing adaptation in latent space to generate a simple linear layer can lead to superior generalization.”\n----------------\n\nWe hope that the aforementioned changes will clarify the details and justification of our approach for future readers.", "I am not a performance-driven guy, but the 61.76% (1-shot) and 77.59% (5-shot) accuracy results look really impressive. There are not too many details on how to achieve this yet.", "This paper proposes a latent embedding optimization (LEO) method for meta-learning, in particular, few-shot learning. The proposed model has three meta components: an encoding network, a relation network, and a decoding network. It claims the contribution is to decouple the optimization-based meta-learning techniques from the high-dimensional space of model parameters. \n\nThe proposed work focuses on the standard few-shot learning scenario. The notable merit of this work is that it presented the so-far best empirical results. On miniImageNet, it produced 61.76% (1-shot) and 77.59% (5-shot) accuracy results. 
This is quite amazing. \n\nThe presentation of the work however lacks sufficient details and motivations, which makes it difficult to judge the proposed model. (1) It is not clear what are the specific architectures and model parameter settings for the encoding, decoding and relation networks. (2) In Eq.(4), it defines \\mu_n^d,\\sigma_n^d as the output of the decoding network which takes the single z_n as input. I doubt a single z_n input can provide information on both \\mu_n^d,\\sigma_n^d. (3) How to use the developed model in the testing phase?\n", "This work presents an extension of the MAML framework for \"learning to learn.\" This extension changes the space in which \"inner-loop\" gradient steps are taken to adapt the model to a new task, and also introduces stochasticity. The authors validate their proposed method with regression experiments in a toy setting and few-shot classification experiments on mini- and tiered-Imagenet. The latter are well known and competitive benchmarks in few-shot learning.\n\nThe primary innovations that distinguish this work from previous gradient-based approaches to meta-learning (namely MAML) are that (i) the initial set of parameters is data-dependent and drawn from a generative distribution; and (ii) the adaptation of model parameters proceeds in a lower-dimensional latent space rather than in the higher-dimensional parameter space. Specifically, model parameters are generated from a distribution parameterized by an adapted latent code at each adaptation step. I find both of these innovations novel.\n\nThe experimental results, in which LEO outperforms the state of the art on two benchmarks derived from ImageNet by \"comfortable margins,\" and the ablation study demonstrate convincingly that these innovations are also significant. I also found the curvature analysis and embedding visualization illuminating of the model's function. My one suggestion would be to test the model on realistic data from beyond the image domain, perhaps on something sequential like language (consider the few-shot PTB setting from Vinyals et al. (2016)). I'm aware anecdotally that MAML struggles with adapting RNNs and I wonder if LEO overcomes that weakness.\n\nThe paper is clearly written and I had little difficulty in following the algorithmic details, although I'm sure it helped to be familiar with the convoluted meta-learning and inner-/outer- loop frameworks. I recommend it for publication.\n\nPros:\n- Natural, novel extension to gradient-based meta-learning\n- state of the art results on two competitive few-shot benchmarks\n- good analysis\n- clear writing\n\nCons:\n- realistic, high-dim data is only from the image domain\n\nMinor questions for the authors:\n- Relation Networks are computationally intensive, although in few-shot learning the sets encoded are fairly small. Can you discuss the computational cost and training time of the full framework?\n- What happens empirically when you generate parameters for more than just the output layer in, eg, your convolutional networks?\n- What happens if you try to learn networks from scratch through the meta-learning process rather than pre-training and fine-tuning them? Some of the methods you compare against do so, to my understanding.", "Thank you for your comment and question! 
The paper and appendices describe the 3 stages in our meta-learning approach:\n- In the first stage we use 64-way classification to pre-train the feature embedding only on the meta-training set, hence without the meta-validation classes.\n- In the second stage we train LEO on the meta-training set with early stopping on meta-validation, and we choose the best hyperparameters using random grid search.\n- In the third stage we train LEO again from scratch 5 times using the embedding trained in stage 1 and the chosen set of hyperparameters from stage 2. However, in this stage we meta-learn on embeddings from both meta-train and meta-validation sets, with early stopping on meta-validation.\nWhile it may not be intuitive to use early stopping on meta-validation in stage 3, it is still a proxy for good generalization, since it favors models with high performance on classes excluded during feature embedding pre-training. We will update the text to better reflect our reasoning for this choice!\nOf course, the meta-test set was not used in any stage for training or selecting our models.\n", "Congratulations on your superb result on the miniImageNet benchmark.\nFrom Appendix A.3, my understanding is that you guys use the meta-training and meta-validation sets for meta-training.\nTo prevent overfitting, early stopping based on the meta-loss evaluated on the meta-validation set is required. I cannot clearly see how such early stopping can be implemented if the meta-validation set is used for meta-training." ]
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 8, -1, -1 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 5, -1, -1 ]
[ "rkgrGgvgRX", "rkgrGgvgRX", "H1gS9SOFk4", "BygPCp8lRQ", "iclr_2019_BJgklhAcK7", "r1gYWGRjnQ", "HJlZ5J93hm", "iclr_2019_BJgklhAcK7", "ryeYsYpe2m", "ryeVsW1Ha7", "BygPCp8lRQ", "r1gYWGRjnQ", "iclr_2019_BJgklhAcK7", "iclr_2019_BJgklhAcK7", "B1gQ7PdK5Q", "iclr_2019_BJgklhAcK7" ]
iclr_2019_BJgqqsAct7
Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach
Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees. In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. Additionally, we show that compressibility of models that tend to overfit is limited. Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network.
accepted-poster-papers
The paper combines a PAC-Bayes bound with network compression to derive a generalization bound for large-scale neural nets, such as those trained on ImageNet. The approach is novel and interesting, and the paper is well-written. The authors provided detailed replies and improvements in response to reviewers' questions, and all reviewers agree this is a very nice contribution.
train
[ "HklR-CdX67", "BygfLTdX67", "BJl6K3uXa7", "BklTXaUjnm", "H1gYrD3Sn7", "SkxzLGFojm", "ryll1c-6oQ", "HJxGoDyTjm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "Thank you for your careful reading and detailed questions and comments. .\n\n0. We have added a remark following Theorem 2.1 noting that this form is relatively complicated, explaining the reason we use it, and providing references to a unified treatment of the different PAC-Bayes bounds. In particular, Laviolette (slide 16) gives a general formulation which encompasses all existing formulas. Catoni’s formulation is significantly tighter for large values of KL, which is the case in our paper. We provide a plot comparing the different bounds here: https://github.com/anonymous-108794/nnet-compression/blob/master/artifacts/plots/README.md In our application to ImageNet, we have that KL / n is approximately 1.5.\n\n\n1. Thank you for pointing out the notational overload. In the revised version, we have adjusted the notation in Theorem 4.3 (the prior variance is now called tau) to prevent confusion, as the two lambdas were indeed distinct. We have also added a section in the appendix (A.1) to better explain how the bound is adjusted for this choice.\n\n2. The posterior variance sigma is chosen as to provide significant improvement in the bounds (by witnessing noise robustness) while minimally affecting the performance of the estimator. Sigma is part of the posterior and can be chosen in an arbitrary (data dependent) manner without affecting the validity of the bound. We choose sigma to minimally affect the estimator to ensure that bounds on the stochastic estimator are reflective of the performance of the deterministic estimator. We have included some more details to this effect in Appendix B.\n\n3 and 4: \nWe agree that a main contribution of the paper is the implementation that allows us to demonstrate that the compression-generalization link has real explanatory power for realistic deep learning applications. A strength of our bound is that it is compatible with any compression scheme. The particular the compression strategy we use was chosen because it was state of the art for compression at time of writing. We use the strategy of Han et al. (2016) with the pruning method of Guo et al. (2016). We do not make modifications to the procedures they describe, beyond hyperparameter tuning. \n\nWe anticipate that better neural network compression schemes will be developed, and future work can use our work with these better compression schemes. Accordingly, we feel that reproducing detailed descriptions of the particular compression scheme would be somewhat misleading. The respective authors provide pseudocode and detailed explanations in their original papers.\n \nAs an aside, we started with the deep compression scheme of Han et al. (2016) and modified the pruning strategy to that of Guo et al. (2016). The pruning schedule was taken from Zhu and Gupta (2018) and final sparsity values were inspired by Iandola et al. (2016).\n\nLaviolette: https://bguedj.github.io/nips2017/pdf/laviolette_nips2017.pdf\n", "Thank you for the detailed and insightful review.\n\nAs you point out, the empirical loss used in the bound is that of the stochastic classifier. We confirm that we did use the value for the stochastic network, which is indeed slightly worse than that of the non-perturbed network (65% training accuracy vs. 67% training accuracy). We have clarified these details in Appendix B (experimental details).\n\nWe agree that Catoni’s bound is unfortunately more opaque than other variants of PAC-Bayes. 
We have added a remark to the paper noting this and giving references to unified treatments of the different bounds. As we note in the remark, Catoni's variant is significantly stronger when KL / n is large, as in our case. We provide a comparison of the different bounds here: https://github.com/anonymous-108794/nnet-compression/blob/master/artifacts/plots/README.md In our application for ImageNet, we have that KL / n is approximately 1.5.\n\nYour suggestion about incorporating the initialization weights is very interesting. We have previously experimented with a similar idea, also inspired by Dziugaite and Roy. We represented the weights as the difference between the initial and final values, with the hope that this would afford a more compressed representation. Unfortunately, we were not able to witness clear improvements with such strategies. Your suggestion seems like a promising direction!\n", "Thank you for your positive comments, careful review, and insightful questions. We have corrected the typos in the new version of the manuscript. We now address your two questions. \n\n1) As you correctly state, the bound holds for any fixed lambda (including those smaller than 1). However, the bound is vacuous (its value is larger than 1) when epsilon is small and lambda < 1: indeed, note that phi^{-1}(x) > 1 when x > 1, and the argument is larger than 1 when lambda < 1 and epsilon is small. We introduce alpha (and the log terms) to allow for optimization over lambda. See [Catoni, p. 13] for a full derivation.\n\n2) We are not certain what the question "why are your bounds non-vacuous?" means. Could you please elaborate? By non-vacuous, we mean that the obtained generalization error is better than guessing at random (which is 0.999 for top-1 on the 1000-class ImageNet problem). This is not an inherent property of Theorem 4.3, but an observation of the application of the bound in this specific context. \n\nUnfortunately, fair quantitative comparisons with existing bounds are difficult. In particular, many authors do not include constants required to evaluate the bound (e.g. Neyshabur et al. 2018, Theorem 1). To the best of our knowledge, attempts to evaluate these bounds have shown that they are tens of orders of magnitude too large to give non-vacuous bounds in realistic applications (see Arora et al. figure 4), even ignoring all constants and logarithmic terms.\n\nNeyshabur et al. 2018: https://arxiv.org/abs/1802.05296\nArora et al. 2018: https://arxiv.org/abs/1802.05296\n", "This paper tries to push forward in important directions the seemingly increasingly powerful approach of using the PAC-Bayesian formalism to explain the low risks of training neural nets on real-life data. They take an interesting approach to evaluating these bounds by setting up a prior distribution as a mixture of Gaussians centered on possible heuristic compressions of the net, and this prior's variances are obtained by doing a layerwise grid search. This seems to give good risk bounds on certain known compressible nets using image data sets. 
\n\nLet me list out a bunch of issues that seem to be somewhat confusing in this paper (some of these were in the comment thread I had with the authors, but I am repeating them nonetheless for completeness). \n\n0.\nFirstly, the form of the PAC-Bayes formula used here (Theorem 2.1) is more complicated than what has been previously used in, say, these papers: https://arxiv.org/abs/1707.09564 Given this, I strongly feel that there is a need for an explanation connecting this formalism to the usual one - particularly something that proves how this is stronger than the one in the paper I referred to earlier. \n\n1. \nIn the statement of Theorem 2.1 there is a \\lambda parameter over which the infimum is being taken. If I understand right, in the experiments one is substituting the upper bound on KL from Theorem 4.3 into the RHS of Theorem 2.1 and evaluating this. Now there is also a \\lambda parameter in Theorem 4.3. Is this the same \\lambda as in Theorem 2.1, and when a grid search is being done over \\lambda, is the "whole" thing (the Theorem 2.1 upper bound with Theorem 4.3 substituted) being minimized by choosing a good \\lambda? \n\nIf the two \\lambda s are different, then is the choice of the 2 \\lambda s being optimized separately? \n\n(...the authors had earlier clarified that this is so, and I strongly feel this is a very important clarification that should be added to the paper...)\n\n2. \nHow is the \\sigma of Theorem 4.3 chosen in the experiments? Am I right in thinking that this \\sigma is the posterior variance about which it is being said towards the end of page 6 that "We add Gaussian noise with standard deviation equal to 5% of the difference between the largest and smallest weight in the filter." ? \n\nSo am I to understand that this is an arbitrary choice? Or is this choice dictated by some need to ensure that the posterior variance sigma is chosen so that under this distribution the sampled nets approximately compute the same function on the training data? (If yes, then what in the theory is motivating this?) \n\nTo the best of my understanding the results are highly dependent on this choice of sigma, but there is virtually no explanation for this choice, which was not even found by grid search. (As of now this is merely reflective of the fact that trained nets often have some noise resilience, but it's not a priori clear as to why that should be important to the PAC-Bayes formalism here.) \n\n3.\nThe code-based compression seems a bit mysterious to me given that I do not have enough familiarity with the algorithm that is being referred to. Hence it seems a bit weird as to why there is a sum over codebooks in the proof of Theorem 4.3. Naively I would have thought that there is a fixed codebook for a given compression scheme, but here it feels that the compression scheme is a randomized algorithm which also generates a new codebook in every run of it. This seems unusual and seems to need more explanation and at the very least a detailed pseudocode explaining exactly how this compression is working. \n\nThis point ties in with a somewhat larger issue I describe next...\n\n4.\nIn the previous reply to my comment the authors had shared their anonymized code and I had a look through the code. It's pretty evident from the code that there are an enormous number of tweaks and hyperparameter tunings to make this work. There is very little insight otherwise as to why "Dynamic Network Surgery" should work, and it's great that the authors have found an implementation that works on their image data. 
\n\nBut then the question arises that there should have been a cleanly abstracted-out pseudocode explaining how the compression was done and how the dynamic network surgery was done. To my mind, this implementation is the main contribution of the paper, and giving the pseudocode for it in the paper seems not only important for the essential completeness of the current paper, but it could also then act as a springboard for many future attempts at trying to come up with a theory for these mysterious procedures. \n\n", "The paper presents an application of PAC-Bayesian bounds to the problem of \nImageNet classification (a deep neural network model). The authors provide \ninteresting empirical bounds for the risk of the ImageNet classifier. More specifically, \nthe authors introduce some clever choices for the prior distribution (on the \nhypothesis space) that allow one to incorporate a compression scheme and obtain \na (non-vacuous) bound for the predictor. \nOverall, this is an original work with clear presentation.\n\nMajor comments:\n1) In Theorem 2.1, why do you need \\lambda > 1?\nTo my knowledge, \\lambda only needs to be positive.\nWhy do you have to introduce the parameter \\alpha here, \nand consequently the additional log term?\n2) It is unclear to me why your bounds are non-vacuous.\nProbably, a clearer explanation of Theorem 4.3 is required.\nAlso, some comparisons with the bounds in [Neyshabur et al 2018] and [Bartlett et al 2017]\nwould make the paper more significant and interesting.\n\nMinor comments:\n1) in Theorem 2.1, after the formula (3), the \\Phi^{-1} should be \\Phi^{-1}_{\\gamma}.\n2) in the sentence on page 4: "To strengthen a naïve Occam bound, we use the idea that that deep networks are insensitive to mild... " an extra "that" should be removed.\n3) in Section 5, the first paragraph, in the sentence: "The lone exception is Dziugaite & Roy (2017), which succeeds by ...."\nshould be "The one exception...."\n\n\n", "This paper gives the first nonvacuous generalization bounds for\nmeaningful ImageNet models. These bounds are given in terms of the\nbit length of compressions of learned models together with a method\nfor taking into account symmetries of the uncompressed parameters.\n\nThese bounds are nonvacuous only when the compressed models are small\n--- on the order of 500 kilobytes. State-of-the-art compressed models\nof this size achieve ImageNet accuracies slightly better than AlexNet,\n16% error for top 5, and this paper reports a nonvacuous\ngeneralization guarantee of 89% error for top 5. While there is\nstill a large gap between the actual generalization and the guarantee,\nthis would still be a significant accomplishment.\n\nI have one major concern. The generalization bound involves adding an\nempirical loss and a regularization term computed from a KL\ndivergence. I am convinced that the authors have correctly handled\nthe KL divergence term. But the paper does not contain sufficient\ndetail to determine if the authors correctly handle the empirical loss\nterm. It is NOT correct to use the training loss of the\n(deterministic) compressed model. The generalization bound requires\nthat the training loss be measured under the parameter noise of the\nposterior distribution. The paper needs to be clear that this has\nbeen done. 
The comments in Appendix B on noise robustness are\ndisturbing in this regard.\n\nIf the training loss has been calculated correctly in the bound,\nthe results are significant.\n\nAssuming correctness, I would comment that the Catoni bound, while squeaking\nout all available tightness, is very opaque. It might be good to\nconsider the more transparent bounds, claimed to be essentially the\nsame, given in McAllester's tutorial. If the more transparent bounds\nachieve equivalent numerical results, they would make the nature of\nthe bounds clearer.\n\nAnother comment involves a largely ignored detail in (Dziugaite and\nRoy 17). Their bounds become vacuous if they center their Gaussian\nprior at zero. Instead they center the prior on the initial value of\nthe parameters. This yields a dramatic improvement in the bound. In\nthe context of the present paper, this suggests a modification of the\nprior distribution on the compressed model. We represent the model by\nfirst selecting the r code values. I think a distribution could be\ndefined on the code book that would improve its log probability, but I\nwill ignore that. Given the r code values we can define a\ndistribution over the possible compressed representations of a weight\nw_i in terms of a prior on w_i defined in terms of its initial value.\nThis gives a probability distribution over the compressed\nrepresentation. Using the log probability of the compressed\nrepresentation should then be a significant improvement on the first\nterm in (8). This shift in the prior on compressed models has no\neffect on the second term of (8), so things should only get better.\n", "Thank you for the detailed reading of the paper and the comprehensive questions.\n\n1. Your description of the selection procedure for lambda is correct, along with your description of the procedure\nfor several lambdas (they are optimized separately, which is equivalent to optimizing jointly as the upper bound is separable). You are correct that due to the selection, we are not directly applying Theorem 4.3, but also combining it with a union bound to ensure correctness. One way to view it is the following: let \\pi_\\lambda denote the prior distribution in Theorem 4.3 with \\lambda fixed. We can define a new prior \\pi, which is the uniform mixture of \\pi_\\lambda for \\lambda varying over all 2^32 values corresponding to IEEE-754 single-precision floating point numbers. Let \\pi_{\\lambda^*} denote the prior selected by our grid search. Then we have that \\pi(x) \\geq \\pi_{\\lambda^*}(x) / 2^{32}, and hence KL(\\rho, \\pi) \\leq KL(\\rho, \\pi_{\\lambda^*}) + 32 \\log 2. We apply the PAC-Bayesian bound with the prior \\pi instead of \\pi_\\sigma, and use the above bound (note: in practice we select a lambda for each layer, thus selecting 20 or so parameters; a similar argument applies). The cost paid in terms of KL divergence is thus 32 bits for each parameter, or less than 1000 bits in total, which is negligible (but taken into account) compared to the total effective size - we have thus not included this detail, although we can certainly clarify in the appendix if necessary.\n\n2. The value of \\sigma is chosen "by wanting that the stochastic net have w.h.p. the same function values [performance] as the original net". There is no constraint from a theoretical perspective in the choice of \\sigma (as it is part of the posterior, it can be chosen in an arbitrary, including data-dependent, manner). 
Our choice of sigma captures the intuition that neural networks tend to be somewhat robust to low levels of noise.\n\n3. Unfortunately, due to technological constraints, we were unable to upload the supplementary material to ICLR. We have created an anonymized GitHub repository with the code at: https://github.com/anonymous-108794/nnet-compression.\n\n4. Your interpretation of Table 1 is right. The theory (Theorem 2.1) can be applied to any {0,1}-valued loss function, which includes both top-1 and top-5 accuracy (which is equal to 1 if the true label is in the top-1 (resp. top-5) most likely predicted labels, and 0 otherwise). We have chosen these two metrics as they are the most commonly used metrics on ImageNet.", "I request a few clarifications from the authors to help review this paper. \n\n1. \nIn the statement of Theorem 2.1 there is a \\lambda parameter over which the infimum is being taken. If I understand right, in the experiments one is substituting the upper bound on KL from Theorem 4.3 into the RHS of Theorem 2.1 and is evaluating this. Now there is also a \\lambda parameter in Theorem 4.3. Is this the same \\lambda as in Theorem 2.1, and when a grid search is being done over \\lambda, is the "whole" thing (the Theorem 2.1 upper bound with Theorem 4.3 substituted) being minimized by choosing a good \\lambda? \n\n(At some point the paper says that "we choose the prior variance \\lambda^2 layerwise by a grid search". Does this mean that the formula being computed in the code uses a different \\lambda for each layer, and hence it's not Theorem 4.3's RHS that is being computed in the code?) \n\nIf the two \\lambda s are different, then is the choice of the 2 \\lambda s being optimized separately? \n\n2. \nHow is the \\sigma of Theorem 4.3 chosen in the experiments? Am I right in thinking that this \\sigma is the posterior variance about which it is being said towards the end of page 6 that "We add Gaussian noise with standard deviation equal to 5% of the difference between the largest and smallest weight in the filter." ? \n\nSo am I to understand that this is an arbitrary choice of \\sigma, or has something been optimized to get this? Or was this choice of \\sigma constrained by wanting that the stochastic net have w.h.p. almost the same function values as the original trained net? If yes, then where in the theory is such a constraint arising from? \n\n3. \nThe footnote of page 6 says, "Code to reproduce the experiments is available in the supplementary material." but I don't see any such thing anywhere. Neither is any pseudocode available to check. \n\n4. \nAt the very end of Section 5 one finds this line, "stochastic network has a top-1 accuracy of 65 % on the training data, and top-5 accuracy of 87 % on the training data". So this means that your top-1 training error is 35% and top-5 training error is 13%. And I guess the second row of your table is what corresponds to this, where you claim that your numerically optimized theoretical bound is giving upper bounds of 96.5% and 87% respectively. Am I right? \n\nThen the question arises as to where and how in the theory (the combination of Theorems 2.1 and 4.3?) were you able to specify that it should evaluate the top-1 and top-5 error? The paper does not seem to specify any loss function where such a thing can be incorporated, either. 
[ -1, -1, -1, 6, 6, 8, -1, -1 ]
[ -1, -1, -1, 4, 5, 4, -1, -1 ]
[ "BklTXaUjnm", "SkxzLGFojm", "H1gYrD3Sn7", "iclr_2019_BJgqqsAct7", "iclr_2019_BJgqqsAct7", "iclr_2019_BJgqqsAct7", "HJxGoDyTjm", "iclr_2019_BJgqqsAct7" ]
iclr_2019_BJl6AjC5F7
Learning to Represent Edits
We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
accepted-poster-papers
This paper investigates learning to represent edit operations for two domains: text and source code. The primary contributions of the paper are in the specific task formulation and the new dataset (for source code edits). The technical novelty is relatively weak. Pros: The paper introduces a new dataset for source code edits. Cons: Reviewers raised various concerns about human evaluation and many other experimental details, most of which the rebuttal has successfully addressed. As a result, R3 updated their score from 4 to 6. Verdict: Possible weak accept. None of the remaining issues after the rebuttal is a serious deal breaker (e.g., task simplification by assuming the knowledge of when and where the edit must be applied, simplifying the real-world application of the automatic edits). However, the overall impact and novelty of the paper are relatively weak.
val
[ "B1l0SBlOJE", "B1xPkXivJV", "H1eeIsRo3Q", "Sye616mI0m", "HJgMTr4LAm", "HkebczdFCm", "HklnK9E8RX", "rkeXF8NUCQ", "H1gbD8dQ6Q", "Hyg1fVtXaQ", "rJeUUtum6X", "r1g6GvdXp7", "HylSf_0a27", "HkeCSi493X" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for clarifying! We certainly agree that this needs to be stated as prominently as possible and we will make changes to state this more prominently and clearly in the next version of the paper.", "Thank you for the updates!\n\nIn agreement with R3's concerns, I do think it's important to state (prominently) that the annotation was performed by the authors. It seems fairly clear that there are significant qualitative differences, especially between the output of the BoW and seq encoders, and that it would be difficult to avoid bias here. That being said, I think this /does/ reinforce that the differences between models are consistent and measurable.", "The authors state nicely and clearly the main contributions they see in their work (Intro, last paragraph). Specifically the state the paper: 1) present a new and important machine learning task, 2) present a family of models that capture the structure of edits and compute efficient representations, 3) create a new source code edit dataset, 4) perform a set of experiments on the learned edit representations and present promising empirical evidence that the models succeed in capturing the semantics of edits. \n\nWe decided to organize this review by commenting on the above-stated contributions one at a time:\n\n“A new and important machine learning task”\n\nRegarding “new task”:\n\nPRO: We are unfamiliar with past work which presents this precise task; the task is new. Section 5 makes a good case for the novelty of this work.\n\nCON: None.\n\n\nRegarding “important task”:\n\nPRO: The authors motivate the task with tantalizing prospective applications-- automatically editing text and code, e.g. for grammar, clarity, and style. Conceptualizing edits as NLP objects of interest that can be concretely represented, clustered, and used for prediction is an advance.\n\nCON: Many text editors, office suites, and coding IDEs already include features which automatically suggest or apply edits for grammar, clarity, and style. The authors do not describe shortcomings in existing tools that might be better addressed using distributed representations of edits. Consequently, the significance of the proposed contribution is unclear.\n\n\n“A family of models that capture the structure of edits and compute efficient representations”\n\nRegarding “a family of models”:\n\nPRO: The family of models presented by the authors clearly generalizes: such models may be utilized for computational experiments on datasets and edit types beyond those specifically utilized in this evaluation. The authors apply well-utilized neural network architectures that may be trained and applied to large datasets. The architecture of the neural editor permits evaluation of the degree to which the editor successfully predicts the correct edit given a pre-edit input and a known representation of a similar edit.\n\nCON: The authors do not propose any scheme under which edit representations might be utilized for automatically editing text or code when an edit very similar to the desired edit is not already known and its representation available as input. Hence, we find the authors do not sufficiently motivate the input scheme of their neural editor. The input scheme of the neural editor makes trivial the case in which no edit is needed, as the editor would learn during training that the output x+ should be the same as the input x- when the representation of the “zero edit” is given as input. 
While the authors discuss the importance of “bottlenecking” the edit encoder so that it does not simply learn to encode the desired output x+, they do not concretely demonstrate that the edit encoder has done otherwise in the final experiments. Related to that: If the authors aimed to actually solve automated edits in text/code then it seems crucial their data contained \"negative examples\" i.e. segments which require no edits. In such an evaluation one would test also when the algorithm introduces unnecessary/erroneous edits. \n\n\nRegarding “capture structure of edits”:\n\nPRO: The authors present evidence that edit encoders tightly cluster relatively simple edits which involve adding or removing common tokens. The authors present evidence that relatively simple edits completed automatically by a “fixer” often cluster together, i.e. a known signal is retained in clustering. The authors present evidence that the nearest neighbors of edits in an edit-representation space often are semantically or structurally similar, as judged by human annotators. Section 4.3 includes interesting observations comparing edit patterns better captured by the graph or seq edit encoders. \n\nCON: The details of the human annotation tasks which generated the numerical results in Tables 1 and 2 are unclear: were unbiased third parties utilized? Were the edits stripped of their source-encoder label when evaluated? Objectively, what separates an “unrelated” from a “similar” edit, and what separates a “similar” from a “same” edit? Did multiple human annotators undertake this task in parallel, and what was their overall concordance (e.g. “intercoder reliability”)? Without concrete answers to these questions, the validity and significance of the DCG/NDCG results reported in Tables 1 and 2 are unclear. It is not clear from the two examples given in Table 1 that the three nearest neighbors embedded by the Seq encoder are “better”, i.e. overall more semantically and/or syntactically similar to the example edit, than those embedded by the Bag of Words model. It is unclear which specific aspects of “edit structure” are better captured by the Seq encoder than the Bag of Words model. The overall structure of Tables 1 and 2 is awkward, with concrete numerical results dominated by a spatially large section containing a small number of examples.\n\n\n“create a new source code edit dataset”\n\nPRO: The authors create a new source code edit dataset, an important contribution to the study of this new task.\n\nCON: Minor: is the provided dataset large enough to do more than simple experiments? See note below on sample size.\n\n\n“present promising empirical evidence that the models succeed in capturing the semantics of edits”\n\nPRO: The experiment results show how frequently the end-to-end system successfully predicted the correct edit given a pre-edit input and a known representation of a similar edit. Gold standard accuracies of more than 70%, and averaged transfer learning accuracies of more than 30%, suggest that this system shows promise for capturing the semantics of edits.\n\nCON: Due to concerns expressed above about the model design and evaluation of the edit representations, it remains unclear to what degree the models succeed in capturing the semantics of edits. Table 11 shows dramatic variation in success levels across fixer ID in the transfer learning task, yet the authors do not propose ways their end-to-end system might be adjusted to address areas of weak performance. 
The authors do not discuss the impact of training set size on their evaluation metrics. The authors do not discuss the degree to which their model training task would scale to larger language datasets such as those needed for the motivating applications.\n\n##############\nBased on the authors' response, revisions, and discussions, we have updated the review and the score. ", "We thank again all reviewers for their insightful comments! We have updated our submission reflecting your comments and suggestions. Here is a brief summary of the changes:\n\n**Text Updates**\n\n**Potential Impact of the Task** We revised Section 2, describing the differences between our proposed neural approach and existing rule-based editing systems. We also illustrate some interesting potential downstream applications facilitated by this task.\n\n**Details of Data Annotation** We included our annotation instructions, and the inter-rater agreement score in Appendix E.\n\n**Human Evaluation on WikiAtomicEdits** Based on comments from Reviewer-#3, we revised the format of Table 1 and the corresponding discussion in Section 4.2 to highlight the differences between the neural editing model and a simple bag-of-words baseline.\n\n**Detailed Analysis of the Transfer Learning Experiments** We expanded Appendix C, presenting more analysis and discussions for some challenging C# fixer categories where our model underperformed.\n\n\n**New Experiments and Analysis**\n\nWe also included three new experiments with analysis:\n\n**Comparison with the Guu et al. Bag-of-Edits Encoder** As pointed out by Reviewer-#2, Guu et al. (2017) introduced a generative language model of natural language sentences by editing prototypes. We have included a more detailed explanation in Section 5 (first Para.) to distinguish our work from Guu et al. (2017). While we remark that our work and Guu et al. (2017) are not directly comparable, we have implemented the deterministic version of the "Bag-of-Edits" edit encoding model in Guu et al. (2017) as a baseline edit encoder for our end-to-end experiments in Section 4.4, Table 4. The results confirm the advantage of our edit encoder models proposed in Section 3.2, which go beyond the simple "Bag-of-Edits" scheme and can capture the context and positional information of edits. We also present interesting analysis regarding the contextual and positional sensitivity of edits in Appendix B.\n\n**"Lower-bounds" of the Transfer Learning Task** As suggested by Reviewer-#2, we included "lower-bounds" accuracies for the transfer learning experiments in Table 5. To approximate the lower-bounds, we trained Seq2Seq and Graph2Tree transduction models without using edit encoders, and tested the model's accuracies in directly transducing an original input code $x-$ into the edited one $x+$.\n\n**Performance with Varying Training Data Size** To address the concern raised by Reviewer-#3, we evaluated the precision of our neural editor models with varying amounts of training data. We present the results in Appendix D. The results suggest that our proposed approach is relatively data efficient: our Graph2Tree (on GithubEdits) and Seq2Seq (on WikiAtomicEdits) editors achieve around 90% of the accuracies achieved using the full training set with only 60% of the training data.\n\nFinally, we would like to thank again the reviewers for their time and insightful comments, which have helped make this paper better. 
We believe that learning to represent edits is an important yet underexplored problem in representation learning for natural language, source code, and other structured data. We hope that this work inspires future research and that the provided datasets/evaluation protocols will further facilitate future exploration of this task in the community.\n\n", "Thanks again for your insightful comments! We have updated our submission. Below is a brief summary of the changes we made reflecting your comments:\n\n1. (Regarding Data Annotation) We included our annotation instructions, and the inter-rater agreement score in Appendix E.\n\n2. (Regarding \"important task\" and \"a family of models\") We added descriptions in Section 2 describing the difference of our proposed neural approach with existing rule-based editing systems and potential downstream applications facilitated by the task. We leave the problem of identifying which edit representation to apply to an input as interesting future work\n\n3. (Regarding “capture structure of edits” and Human Evaluation) We include human evaluation details in Appendix E. We also apologize for the confusion in interpreting Table 1, and have revised its format accordingly. Example 1 in Table 1 shows that the three nearest neighbors returned by the neural editing model are clearly semantically and syntactically relevant to the seed edit (i.e., both the seed edit and the returned neighbors inserted a sentence describing the profession and date of the birth of the topic person), while the nearest neighbors returned by the bag-of-words baseline only rely on surface token overlap, and are not syntactically/semantically similar to the seed edit. We also include discussions about the contextual and positional sensitivity of edits in Appendix B.\n\n4. (Regarding results in Table 11) We expanded Appendix C, presenting more analysis and discussions for some challenging C# fixer categories (RCS1077, RCSRCS1197, RCS1207, RCS1032).\n\n5. (Regarding the Impact of Training Set Size) We evaluated the precision of our neural editor models with varying amount of training data in Appendix D. The results indicate that our proposed approach is relatively data efficient: our Graph2Tree (on GithubEdits) and Seq2Seq (on WikiAtomicEdits) editors achieve around 90% of the accuracies achieved using the full training set with only 60% of the training data.\n", "Thanks again for your review. We are wondering if our comments have sufficiently addressed your concerns or if there is something that we might have missed.\n\nOverall, we would kindly ask that you reconsider your rating given the additional experimental results, evaluations and explanation. Alternatively, could you please provide any further guidance on how to improve the paper?\n\n", "Thanks again for your insightful comments! We have updated our submission. Below is a brief summary of the changes we made reflecting your comments:\n\n1. Question: “what would be enabled by accurate prediction of atomic edits … elaborate on the motivation and significance for this new task”\n We have presented a detailed explanation in our previous response to your comment. We also revised Section 2 to illustrate some interesting potential downstream applications facilitated by this task. We will include more discussion in the final version given more pages. \n\n2. Question: \"human evaluation is not described in detail...\"\n We included our annotation instructions, and the inter-rater agreement score in Appendix E.\n\n3. 
Question: "what it means when they say better prediction performance does not necessarily mean it generalizes better..."\n We have presented a detailed explanation in our previous response to your comment. We have also rephrased our discussion in Section 4.4 to make the logical flow clearer.", "Thanks again for your insightful comments! We have updated our submission. Below is a brief summary of the changes we made reflecting your comments:\n\n**Details of Data Annotation** We included our annotation instructions, and the inter-rater agreement score in Appendix E.\n\n**Comparison with the Guu et al. Bag-of-Edits Encoder** As you pointed out, Guu et al. (2017) introduced a generative language model of natural language sentences by editing prototypes. We have included a more detailed explanation in Section 5 (first Para.) to distinguish our work from Guu et al. (2017). While we remark that our work and Guu et al. (2017) are not directly comparable, we have implemented the deterministic version of the "Bag-of-Edits" edit encoding model in Guu et al. (2017) as a baseline edit encoder for our end-to-end experiments in Section 4.4, Table 4. The results confirm the advantage of our edit encoder models proposed in Section 3.2, which go beyond the simple "Bag-of-Edits" scheme and can capture the context and positional information of edits. \n\nAs in our previous response to your comment, we have presented further analysis regarding the contextual and positional sensitivity of edits in Appendix B, illustrating the importance of using more advanced edit encoders than "Bag-of-Edits" encoders to capture such information.\n\n**"Lower-bounds" of the Transfer Learning Task** We included "lower-bounds" accuracies for the transfer learning experiments in Table 5. To approximate the lower-bounds, we trained Seq2Seq and Graph2Tree transduction models without using edit encoders, and tested the model's accuracies in directly transducing an original input code $x-$ into the edited one $x+$.\n", "To all reviewers:\n\nWe thank all reviewers for their insightful comments!\n\n**Regarding Data Annotation**\n\nWe apologize for not detailing the annotation rubric and will make this clearer. We will update the main text to clarify the most important points, and provide the instructions and examples for the rating system in the supplementary material.\n\nAs also noted by Reviewer-#1, we realized that it is difficult to come up with a fine-grained rating system (e.g., using a 5-element scale) for characterizing semantic/syntactic similarity between edits, especially for free-form natural language data. We believe this problem alone would be an interesting research issue, reminiscent of studies in categorizing syntactic transformations in natural language (e.g., He et al., 2015). \n\nTherefore, we chose to use a simpler 3-element scale (semantically/syntactically equivalent edits, related edits, unrelated). For both natural language and code data, we designed detailed annotation instructions with illustrative examples (to be included in the supplementary material of the next version of our paper). Admittedly, this grading scheme is not perfect, as the category of "relevant edits" could be further divided, and it does not distinguish semantically similar edits from syntactically similar ones. However, we found no way to exactly define how to do such finer-grained annotations, and thus used our simple scheme. Note that this simple grading system is already effective in comparing the performance of different models. 
For example, we observe a clear win of Seq2Seq models over the bag-of-words baseline in both natural language and code datasets (Tables 1 and 2), and of Graph2Tree with the sequential edit encoder over Seq2Seq (Table 2), especially in Acc@1. \n\nThe annotation was carried out by three of the authors, and we anonymized the source of the systems that generated the output. Due to time constraints, we assigned different sampled edits to different annotators. We will provide the inter-rater agreement score shortly.\n\nReference:\n\n1. H. He, A. G. II, J. Boyd-Graber, and H. D. III. Syntax-based rewriting for simultaneous machine translation. In Empirical Methods in Natural Language Processing (EMNLP 2015)\n", "*robustness of edit encodings*: Thanks for the comment! Directly measuring the robustness of edit encodings is non-trivial, but our one-shot learning experiments (Sec. 4.4) serve as a good proxy by testing the editing accuracy using the edit encoding from a similar example.\n\n*applicability to other tasks*: Our proposed method is general and could be applied to other structured transduction tasks. We perform experiments on natural language edits (sequential) and source code commit data (tree-structured), since these are two commonly occurring sources of edits. We leave applying our model to other data sources as interesting future work.\n\n*comparison with Guu et al., 2017*: Thanks for pointing out the related work by Guu et al.! As discussed in Section 5, we remark that our motivation and research issues are very different, and these two models are not directly comparable --- Guu et al. focus on learning a generative language model by marginalizing over latent edits, while our work focuses on discriminative learning of (1) representing edits given the original (x-) and edited (x+) data, and (2) applying the learned edit to new input data. We therefore directly evaluate the quality of neighboring edit representations via human annotation, and the end-to-end performance of applying edits both to parallel data and in a novel one-shot learning scenario, which are not covered in Guu et al.\n\nNevertheless, our model architecture shares a similar spirit with Guu et al. For example, the model in Guu et al. also has an edit encoder based on “Bag-of-Edits” (i.e., the posterior distribution $q(z|x-, x+)$) and a seq2seq generation (reconstruction) model of x+ given x- and the edit representation z. In some sense, our seq2seq editor with a “Bag-of-Edits” edit encoder would be similar to a “discriminative” version of Guu et al. We will make the difference between this research and Guu et al. clearer in an updated version of the paper. Please also refer below to our response regarding the “Bag-of-Edits” edit encoder. \n\nResponse to your specific questions:\n\n*lower-bounding transfer learning results*: Thanks for the comments! Having a lower bound is helpful in understanding the relative advantage of our proposed method; however, it is not clear what a reasonable lower-bounding baseline would be. One baseline would be an editor model (e.g., Graph2Tree with the sequential edit encoder) that doesn’t use edit encodings. \n\n*constrained versions of the edit encoder*: First, we remark that our Bag-of-Words edit encoder (Tables 1 and 2) is similar to a “Bag-of-Edits” model, where the representation of an edit is modeled by a vector of added/deleted tokens (we use different vocabularies for added and deleted words). Our neural edit encoders have access to the full sequences x- and x+. 
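To make this contrast concrete, here is a minimal sketch (our own illustration, not the authors' code; the class and argument names are our assumptions) of the two kinds of edit encoders being discussed: an order- and context-insensitive bag-of-edits encoder versus a sequence encoder that reads the token-aligned diff of (x-, x+) and can therefore capture edit context and position:

```python
import torch
import torch.nn as nn

class BagOfEditsEncoder(nn.Module):
    """Order- and context-insensitive: an edit is the sum of embeddings of its
    added tokens plus the sum of embeddings of its deleted tokens."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.add_emb = nn.Embedding(vocab_size, dim)  # separate vocabularies for
        self.del_emb = nn.Embedding(vocab_size, dim)  # added vs. deleted tokens

    def forward(self, added_ids, deleted_ids):
        # added_ids, deleted_ids: 1-D LongTensors of token ids
        return self.add_emb(added_ids).sum(0) + self.del_emb(deleted_ids).sum(0)

class SequenceEditEncoder(nn.Module):
    """Context- and position-sensitive: a BiLSTM over the aligned diff of
    (x-, x+), so unchanged context tokens and edit positions are visible."""
    def __init__(self, vocab_size, n_tags, dim):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.tag_emb = nn.Embedding(n_tags, dim)  # e.g. KEEP / ADD / DELETE / REPLACE
        self.lstm = nn.LSTM(2 * dim, dim, bidirectional=True)

    def forward(self, aligned_ids, tag_ids):
        inp = torch.cat([self.tok_emb(aligned_ids), self.tag_emb(tag_ids)], dim=-1)
        _, (h, _) = self.lstm(inp.unsqueeze(1))      # shape (seq, batch=1, feat)
        return torch.cat([h[0, 0], h[1, 0]], dim=-1) # final states, both directions
```

By construction, the bag-of-edits variant produces the same representation regardless of where in the sentence the tokens change or what surrounds them, which is exactly the limitation the response discusses next.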
\n\nWe also tried a distributional bag-of-edits model like the one used in Guu et al., using an LSTM to summarize only the changed tokens. This model had worse performance in our end-to-end experiment (Table 4), and we therefore did not include the results. Through error analysis we found that many edits are **context- and position-sensitive**, and encoding context (i.e., the full sequences) is important. For instance, the WikiAtomicEdits examples we present in Table 9 clearly indicate that semantically similar insertions also share similar editing positions, which cannot be captured by the bag-of-edits encoder as in Guu et al. This might be even more obvious for structured data sources like code edits (cf. Table 10). For instance, in the first example in Table 10, `Equal()` can be changed to `Empty()` **only** in the `Assert` namespace (i.e., the context). We apologize for the confusion and will include more results and analysis in the final version, facilitating a more direct comparison with the editor encoder in Guu et al. Nevertheless, we remark that, as discussed above, our work is not directly comparable with Guu et al.\n\n*subsampling WikiAtomicEdits*: At the time of submission the WikiAtomicEdits dataset could not be downloaded in full, due to an error with the zip file provided. We managed to extract the first 1M edits from the dataset. We believe that the full corpus would not present significantly different statistical properties from the 1M samples we used.\n\n*human evaluation*: please refer to our response regarding annotation. The idea of separating syntactically and semantically similar edits is also very interesting, and we will explore it in our final version.\n\n*soft metric*: Thanks for the comment! We can definitely do BLEU evaluation on WikiAtomicEdits. For source code data, a sensible “soft” metric on source code still remains an open research issue (Yin and Neubig, 2017). We will include more discussion in our final version.\n\n*classifying Wikipedia edits*: This is a great idea; thanks for suggesting it. Given the time constraints, we will examine the feasibility of doing something like this for the final version of the paper.\n\n\n
For example, this could be used to drive the development of new rules for existing edit tools, by identifying common patterns not covered by existing capabilities. We apologize and will make this clearer. \n\n* Regarding \"a family of models\"\n\nResponse: We agree that our current system is not able to identify places where an edit should be performed, and that this is important future work. In this work, we have focused on (1) computing representations of edits that allow us to group similar changes, and (2) applying such representations in a new context. Both of these scenarios are already useful in human-in-the-loop settings. For example, a good solution to problem (1) can inform the development of new edit and refactoring tools (by observing common changes), whereas (2) can be used to propose changes that can be accepted/rejected by a human.\n\nWe will make this aspect of future work clearer in the next version of our paper.\n\n* Regarding “capture structure of edits” and Human Evaluation\n\nResponse: please refer to our response regarding annotation.\n\n* Regarding “present promising empirical evidence that the models succeed in capturing the semantics of edits” and Results in Table 11\n\nResponse: we thank the reviewer for their effort in analyzing the many statistics we present in Table 11! We remark that this transfer learning task is indeed non-trivial. For instance, some fixer categories cover many different types of edits (e.g., RCS1077 (https://github.com/JosefPihrt/Roslynator/blob/master/docs/analyzers/RCS1077.md) handles 12 different ways of optimizing LINQ expressions). In these cases, edits are semantically related (“improving a LINQ expression”), but this relationship only exists at a high level and is not directly reflected in the syntactic transformations required by the fixer.\n\nOther categories contain complex refactoring rules that require reasoning about a chain of expressions, which our current models are unable to do (e.g., RCS1197 (https://github.com/JosefPihrt/Roslynator/blob/master/docs/analyzers/RCS1197.md) turns sb.Append(s1 + s2 + … + sN) into sb.Append(s1).Append(s2).[...]Append(sN)). We believe that further advances in (general) learning from source code are required to correctly handle these cases.\n\nWe will expand Appendix C with a more fine-grained analysis of the results in Table 11, providing more background on categories whose results deviate substantially from the average.\n\n[Impact of training set size and scalability]: Thanks for the comments! We will discuss this in our final version.\n\n\n", "Question: “what would be enabled by accurate prediction of atomic edits … elaborate on the motivation and significance for this new task”\n\nResponse: Our work focuses on developing a generic approach to represent and apply edits. On the WikiAtomicEdits data, one interesting application of our model would be facilitating the development of data exploration toolkits that cluster and visualize semantically and syntactically similar edits (e.g., the example clusters shown in Table 9). Since our proposed approach is relatively general, we believe we could explore more interesting applications given access to parallel data of other forms of natural language edits. 
For example, our model could be used to represent and apply syntactic transfer given parallel corpora of sentences with different syntactic structures.\n\nIn the source code domain, our work enjoys more intriguing and immediate applications, like learning to represent and apply code fixes from commit data, similar to the one-shot learning task we present in Section 4.4. Our work could also enable human-in-the-loop machine learning applications, like clustering commit streams on GitHub at large scale and helping users identify emerging “best practices” or bug fixes. Indeed, the initial motivation for our research was to automatically identify common improvements to source code that are not covered by existing tools.\n\nQuestion: \"human evaluation is not described in detail...\"\n\nResponse: please refer to our general response regarding data annotation.\n\nQuestion: “what it means when they say better prediction performance does not necessarily mean it generalizes better...”\n\nResponse: This observation is grounded in the comparison of the results displayed in Tables 4 and 5 in our end-to-end experiment on GitHubEdits data (Section 4.4). Table 4 indicates that given the encoding of an edit (x-, x+), the Seq2Seq editor is most precise in generating x+ from x-, (slightly) outperforming the Graph2Tree editor. We evaluate the generality of edit representations in our “one-shot” experiment, where we use the encoding of a related edit (x-, x+) to reconstruct x+’ from x-’. There, the Graph2Tree editor performs significantly better than the Seq2Seq editor. The latter experiment serves as a good proxy for evaluating the generalization ability of different system configurations, from whose results we derive the hypothesis that better performance with gold-standard edit encodings might not imply better performance with noisy edit encodings.\n\nWe apologize for the confusion and will update the text of the paper to clarify what we mean by generalizable and how we draw that conclusion from our experiments.\n", "This paper looks at learning to represent edits for text revisions and code changes. The main contributions are as follows:\n* They define a new task of representing and predicting textual and code changes \n* They make available a new dataset of code changes (the text edit dataset was already available) with labels of the type of change\n* They try simple neural network models that show good performance in representing and predicting the changes\n\nThe NLP community has recently defined the problem of predicting atomic edits for text data (Faruqui et al., EMNLP 2018, cited in the paper), and that is the source of their Wikipedia revision dataset. Although it is an interesting problem, it is not immediately clear from the Introduction of this paper what would be enabled by accurate prediction of atomic edits (i.e. simple insertions and deletions), and I hope the next version will elaborate on the motivation and significance of this new task. \n\nThe \"Fixer\" dataset that they created is interesting. Those edits supposedly make the code better, so modeling those edits could lead to \"better\" code. Having that as labeled data enables a clean and convincing evaluation task of predicting similar edits.\n\nThe paper focuses on the novelty of the task and the dataset, so the models are simple variations of the existing bidirectional LSTM and the gated graph neural network. Because much of the input text (or code) does not change, the decoder gets to directly copy parts of the input. 
For code data, the AST is used instead of the flat text of the code. These small changes seem reasonable and work well for this problem.\n\nEvaluation is not easy for this task. For the task of representing the edits, they show visualizations of the clusters of similar edits and conduct a human evaluation to see how similar these edits actually are. This human evaluation is not described in detail, as they do not say how many people rated the similarity, who they were (how they were recruited), how they were instructed, and what the inter-rater agreement was. The edit prediction evaluation is done well, but it is not clear what it means when they say better prediction performance does not necessarily mean it generalizes better. That may be true, but then without another metric for generalization, one cannot say that better performance means worse generalization. \n\nDespite these minor issues, the paper contributes a significantly novel task, dataset, and results. I believe it will lead to interesting future research in representing text and code changes.", "The main contributions of the paper are an edit encoder model similar to (Guu et al. 2017 http://aclweb.org/anthology/Q18-1031), a new dataset of tree-structured source code edits, and a thorough and well-thought-out analysis of the edit encodings. The paper is clearly written, and provides clear support for each of its main claims.\n\nI think this would be of interest to NLP researchers and others working on sequence- and graph-transduction models, but I think the authors could have gone further to demonstrate the robustness of their edit encodings and their applicability to other tasks. This would also benefit greatly from a more direct comparison to Guu et al. 2017, which presents a very similar \"neural editor\" model.\n\nSome more specific points:\n\n- I really like the idea of transferring edits from one context to another. The one-shot experiment is well designed; however, it would benefit from also having a lower bound to get a better sense of how good the encodings are.\n\n- If I'm reading it correctly, the edit encoder has access to the full sequences x- and x+, in addition to the alignment symbols. I wonder if this hurts the quality of the representations, since it's possible (albeit not efficient) to memorize the output sequence x+ and decode it directly from the 512-dimensional vector. Have you explored more constrained versions of the edit encoder (such as the bag-of-edits from Guu et al. 2017) or alternate learning objectives to control for this?\n\n- The WikiAtomicEdits corpus has 13.7 million English insertions - why did you subsample this to only 1M? There is also a human-annotated subset that you might use as evaluation data, similar to the C#Fixers set.\n\n- On the human evaluation: Who were the annotators? The categories \"similar edit\" and \"semantically or syntactically same edit\" seem to leave a lot to interpretation; were more specific instructions given? It also might be interesting, if possible, to separately classify syntactically similar and semantically similar edits.\n\n- On the automatic evaluation: accuracy seems brittle for evaluating sequence output. Did you consider reporting BLEU, ROUGE, or another \"soft\" sequence metric?\n\n- It would be worth citing existing literature on classification of Wikipedia edits, for example Yang et al. 2017 (https://www.cs.cmu.edu/~diyiy/docs/emnlp17.pdf). An interesting experiment would be to correlate your edit encodings with their taxonomy." ]
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "B1xPkXivJV", "rkeXF8NUCQ", "iclr_2019_BJl6AjC5F7", "iclr_2019_BJl6AjC5F7", "rJeUUtum6X", "HJgMTr4LAm", "r1g6GvdXp7", "Hyg1fVtXaQ", "iclr_2019_BJl6AjC5F7", "HkeCSi493X", "H1eeIsRo3Q", "HylSf_0a27", "iclr_2019_BJl6AjC5F7", "iclr_2019_BJl6AjC5F7" ]
iclr_2019_BJl6TjRcY7
Neural Probabilistic Motor Primitives for Humanoid Control
We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience-efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA) summarizing our results.
accepted-poster-papers
Strengths: One-shot physics-based imitation at a scale and with efficiency not seen before. Clear video, paper, and related work. Weaknesses described include: the description of a secondary contribution (LFPC) takes up too much space (R1,4); results are not compelling (R1,4); prior art in graphics and robotics (R2,6); concerns about the potential limitations of the linearization used by LFPC. The original reviews are negative overall (6,3,4). The authors have posted detailed replies. R1 has posted a follow-up, standing by their score. We have not heard more from R2 and R3. The AC has read the paper, watched the video, and read all the reviews. Based on expertise in this area, the AC endorses the authors' responses to R1 and R2. Being able to compare LFPC to more standard behavior cloning is a valuable data point for the community; there is value in testing simple and efficient models first. The AC identifies the following recent (Nov 2018) paper as the closest work, which is not identified by the authors or the reviewers. The approach being proposed in the submitted paper demonstrates equal-or-better scalability, learning efficiency, and motion quality, and includes examples of learned high-level behaviors. An elaboration on HL/LL control: the DeepLoco work also learns mocap-based LL control with learned HL behaviors, although with a more dedicated structure. Physics-based motion capture imitation with deep reinforcement learning: https://dl.acm.org/citation.cfm?id=3274506 Overall, the AC recommends this paper be accepted as a paper of interest to ICLR. This does partially discount R3 and R1, who may not have worked as directly on these specific problems before. The AC is rating the confidence as "not sure" to flag this for the program committee chairs, in light of the fact that this discounts the R1 and R3 reviews. The AC is quite certain in terms of the technical contributions of the paper.
train
[ "HyxwKQnF1N", "ryevUebMRm", "SygmMXpJAm", "Sylz2k9o6Q", "SJlewPYtpX", "HJlsxPYYam", "HkllTLYYaQ", "HJxBuUYY6m", "S1eS0BTpnX", "S1gTWJZ6hX", "HJeCcTtms7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are reaching the end of the discussion period.\nThere remain mixed opinions on the paper.\nAny further thoughts from R2 and R3? Stating pros + cons and summarizing any change in opinion would be very useful.\nThe main contribution is centred around one-shot imitation as well as reuse of low-level motor behaviors in the context of new tasks. Issues being discussed include related prior art, demonstrated benefit of method in results, importance of LFPC.\nOf course we recognize that reviewer & author time is limited.\n-- area chair\n", "I have read all of the comments (from the reviewers and the authors) and have also read the revised version of the paper. I am still not convinced that the paper makes a strong contribution. Here are my comments:\n\n- The revised version of the paper still has LPFC as a major portion of the paper. In particular, the real estate in terms of pages devoted to explaining LPFC is more than that devoted to neural probabilistic motor primitives (which the authors claim is the main contribution of the paper). The conclusion of the paper also highlights LPFC (including its limitations). I do not think that the revised version of the paper adequately de-emphasizes LPFC.\n\n- The results from the simulation experiments showcasing neural probabilistic motor primitives (NPMP) presented in the paper are not particularly compelling. In particular, Figure 4 (which presents the relative performance of NPMP as compared to the expert) suggests that NPMP is not really doing a good job at capturing the expert's behavior. In particular, for both training and test data, the relative performance is around 0.5, which doesn't seem particularly good. Moreover, as noted by AnonReviewer2, the target following example is not particularly compelling, since it has previously been demonstrated by many other papers. I would thus have liked to have seen a thorough comparison of NPMP with other methods on this example. Moreover, as noted in my previous review, the results for LPFC are also quite weak.\n\nBased on this, I retain my original rating for the paper.\n\nSmall comments:\n- For clarity, I would recommend using \\eqref{} when referencing equations. For example, on pg. 6, \"Objective 5\" should be \"Objective (5)\".", "This paper has seen detailed reviews and detailed responses by the authors. Thank you to all.\n\nReviewers: please do provide further feedback based on the authors replies, \nand note whether it changes your evaluation and your score for the paper.\nAlso note that a revised draft has been submitted. \nYour input is greatly appreciated, as the opinions are mixed and they focus on different aspects of the work.\n\nFor revision differences of the revised draft: \nselect \"Show Revisions\" on the review page, and then select the check-boxes for the versions you wish to compare. \n\n-- area chair", "In response to reviewer feedback, we have revised our abstract and contributions portion of the introduction to better communicate the focus of the paper. We consider the neural probabilistic motor primitive module to be the primary contribution and LFPC as an auxiliary contribution. As judged by reviewer reception, this did not come across as intended. We hope the revision better reflects this.", "We thank the reviewer for their detailed discussion of the LFPC method and address concerns below. 
However, as pointed out in the introductory remarks, this is only one aspect of the paper, and we would also like to encourage the reviewer to include our main result, the neural probabilistic motor primitive module, in their assessment.\n\nAddressing the concerns about LFPC in turn:\nC1: For single-behavior experts, indeed we intended Fig 3 to indicate (perhaps surprisingly) that linear-feedback policies perform well, and that LFPC can transfer that level of performance into a new neural network (from a single rollout of behavior). For a single behavior, this is merely a validation that the new neural network can be as robust as even the linear feedback policy. Our real aim is to be able to distill many experts into a single network, as we demonstrate subsequently.\n\nC2: Both LFPC and the behavioral cloning baseline were able to train the NPMP and permit skill reuse, but in our specific one-shot imitation comparisons the behavior-cloning approach performed better. Behavioral cloning from arbitrary amounts of data is an arbitrarily strong baseline. The two considerations that motivate LFPC are that we can store fewer data from experts and that we can query fewer trajectories from the expert system (in settings where rollouts are costly, such as real platforms).\n\nC3, C4: The general setting for our approach is that we assume the existence of experts that perform single behaviors -- as of late, this is a reasonable assumption, enabled by previous research (e.g. Liu et al. 2010, 2015, 2018, Merel et al. 2017, Peng et al. 2018). What has not been done prior to this work is to exhibit single policies capable of flexibly generating a wide range of skills, and this is the problem we are focusing on. For our purposes, it is not critical how experts are obtained, and this paper does not advocate any particular way of generating expert policies. That being said, neural network experts have been successfully trained in some recent work, so we expected this would work, and a priori, it was not obvious that directly training a linear feedback policy might suffice. Moreover, in preliminary experiments done when beginning this work (not reported here), we found that it can be quite data inefficient to directly train a time-indexed linear feedback policy for tracking motion capture using RL -- we believe due to the lack of parameter sharing across timesteps -- so we did not pursue this further. \n\nNevertheless, our single-behavior expert transfer experiments demonstrated empirically that linear feedback policies extracted from the expert neural networks were essentially as performant as RL-trained neural network experts in terms of robust tracking of single behaviors (Fig. 3). That linear feedback policies work as well here is a statement about the dynamics of the environment and the complexity of the behaviors (i.e. that the behaviors here are sufficiently unimodal). It seems that, for a wide range of stereotyped behaviors, the policies required to execute them might be “surprisingly simple”, depending on your initial preconceptions. \n\nC5: In contemporary neural network frameworks, it is straightforward to compute the Jacobian of the actions with respect to the observation inputs. As described in eqn 2, this directly provides the linearization of the policy when evaluated at the nominal trajectory. 
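To illustrate what this looks like in practice, here is a minimal sketch (our illustration, not the authors' code; the function and variable names are our own assumptions) of extracting the linear feedback policy of eqn 2 from a trained expert via automatic differentiation:

```python
import torch

def linearize_expert(expert, s_star):
    """Build the linear feedback policy of eqn 2 around a nominal state s*:
    a(s) = mu(s*) + J (s - s*), where J is the Jacobian of the expert's mean
    action with respect to its observation input."""
    with torch.no_grad():
        mu_star = expert(s_star)  # nominal action mu(s*)
    # Jacobian of actions w.r.t. observations, evaluated at the nominal state.
    J = torch.autograd.functional.jacobian(expert, s_star)

    def feedback_policy(s):
        return mu_star + J @ (s - s_star)  # first-order expansion of the expert
    return feedback_policy

# In the time-indexed setting, one (mu*_t, J_t) pair would be stored per
# timestep of the expert's nominal trajectory and looked up at execution time.
```

The point being made in this response is that J never needs to be estimated from rollouts: a backward pass through the expert network yields it exactly.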
In section 3.1, we use the same network architecture for cloning from noisy rollouts and from the linear feedback policy (an MLP with two hidden layers: 1024, 512; we can add this detail to the text).\n\n\nReferences:\nLiu, L., Yin, K., van de Panne, M., Shao, T. and Xu, W., 2010. Sampling-based contact-rich motion control. ACM Transactions on Graphics (TOG), 29(4), p.128.\n\nLiu, L., Yin, K. and Guo, B., 2015. Improving Sampling-based Motion Control. In Computer Graphics Forum (Vol. 34, No. 2, pp. 415-423).\n\nLiu, L. and Hodgins, J., 2018. Learning basketball dribbling skills using trajectory optimization and deep reinforcement learning. ACM Transactions on Graphics (TOG), 37(4), p.142.\n\nMerel, J., Tassa, Y., Srinivasan, S., Lemmon, J., Wang, Z., Wayne, G. and Heess, N., 2017. Learning human behaviors from motion capture by adversarial imitation. arXiv preprint arXiv:1707.02201.\n\nPeng, X.B., Abbeel, P., Levine, S. and van de Panne, M., 2018. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. arXiv preprint arXiv:1804.02717.\n", "We thank the reviewer for appreciating the difficult problem we’re tackling. However, we disagree with the reviewer about the level of similarity between this work and previous work. We have discussed a number of relationships between this work and existing approaches in the robotics, ML, and graphics communities. As far as we are aware, no existing work learns a rich embedding space for physics-based control. For kinematic sequence modeling, there is abundant work in computer graphics that learns to blend/reuse/compose movement trajectories (e.g. Holden et al. 2017). To our knowledge, for the much more challenging problem of flexible physics-based control, there is no prior work which results in a robustly reusable skill space that is as comprehensive in scope as what was demonstrated here. We would sincerely appreciate references to any previous papers that the reviewer thinks overlap in terms of successfully demonstrating the learning of a skill space which is reusable for physics-based control, especially for humanoids.\n\nOne-shot imitation has been demonstrated by a few groups in the past couple of years for mounted robotic arms. But we are aware of considerably less work (primarily Wang et al. 2017; discussed in the paper) in which humanoids perform one-shot behaviors. The reason this is difficult in the physics-based case is that the humanoid must balance and remain upright in addition to imitating the demonstration. Moreover, while one-shot imitation is the core systematic test of the model, since the architecture was trained for this setting, we emphasize that the demonstration of reuse is considerably more interesting to us. After producing this module, a fresh HL policy can learn to set the “intention” of the LL controller and produce fairly human-like behaviors by reusing the learned skill space. We selected the go-to-target task because we wanted to heavily tax the LL movement space by demanding sudden, jerky changes of movement, and what resulted were strikingly human-like movement changes, with only a very simple reward (reward = 0 everywhere except when the target is reached) and no additional constraints on the human-likeness of the behavior. 
While simpler bodies can solve this problem from scratch, for a complex humanoid the movements produced by learning from scratch are generally far from human-like.\n\nSimultaneously reusing the upper body for manipulation while having the lower body locomote is indeed a great challenge problem for future work. We have already included imitation of arm movements in our evaluation, but our training distribution does not contain any manipulation demonstrations. We are optimistic that this approach can scale to this setting, but it is beyond the scope of the present paper. We do believe that what we have demonstrated here advances the state of the art for reusable physics-based locomotion behaviors. \n\nReferences: \nHolden, D., Komura, T. and Saito, J., 2017. Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG), 36(4), p.42.\n\nWang, Z., Merel, J., Reed, S., de Freitas, N., Wayne, G. and Heess, N., 2017. Robust imitation of diverse behaviors. In Advances in Neural Information Processing Systems, pp. 5320-5329.\n", "Concerning LFPC, we note that bipedal locomotion is highly nonlinear, and despite this, the linear feedback policy empirically works rather robustly (despite the high-D observation space), as shown in section 3.1. The term linear-feedback-stabilized policy refers to the linear feedback policy in equation 2, which is stabilized with linear feedback (relative to the naive open-loop policy that simply executes a fixed sequence of actions). \n\nWe consider it clear from our results that time-indexed linear feedback policies suffice to capture the behavior of experts around nominal trajectories in our setting. Correspondingly, LFPC is capable of transferring expert functionality. We would like to point out that in our scenario there is no need to estimate J -- it is simply the Jacobian of a neural network with respect to the inputs, which is readily available in standard neural network frameworks (see eqn 2). \n\nThere seems to be some confusion about delta s -- it has very little to do with the “various optimal controllers”, and indeed we state in the paper (page 4) that the approach is fairly insensitive to the precise selection of this distribution. One possible reason for this is that the distribution does not matter much as long as it covers the states visited by the linear feedback policy, which appears to stay pretty close to the nominal trajectory.\n\nFinally, the reviewer expresses concerns with respect to the applicability of our approach to the real robot setting. Our paper primarily targets the control of simulated physical humanoids, and we do not make any further claims. However, recent approaches in a similar imitation learning setting have been shown to be effective for real robots (e.g. Laskey et al. 2017), so we do believe, as we speculate in the discussion, that this is a plausible direction for future work.\n\nWe thank the reviewer for spotting a typo in equation 5, which we will correct.\n\nReferences:\nLaskey, M., Lee, J., Fox, R., Dragan, A. and Goldberg, K., 2017. DART: Noise injection for robust imitation learning. arXiv preprint arXiv:1703.09327.\n", "We thank all reviewers for their time and comments. \n\nWe would like to emphasize that there are two contributions in this work. The focal motivation is the production of a single trained motor architecture which can execute and reuse motor skills of a large, diverse set of experts with minimal manual segmentation or curation. 
The architecture that we develop permits one-shot imitation as well as reuse of low-level motor behaviors in the context of new tasks. \n\nOur main results involve one-shot imitation and motor reuse, using our trained module for a humanoid body with relatively high action DoF. We believe this novel architecture enables more generic behavior and motor flexibility than other work involving learning to control physically simulated humanoids. \n\nAnonReviewer3 and AnonReviewer1 essentially restrict criticism of the work to the LFPC approach, which is only one aspect of our research contribution. We address these concerns in detail below. But we would also encourage the reviewers to assess the quality and novelty of the core architectural contributions as well as the quality of the experimental results. We are not aware of previous work for control of a physically simulated humanoid that demonstrates a learned module that can execute many behavioral skills and permits reuse. \n", "This paper mainly focuses on the imitation of expert policies as well as compression of expert skills via a latent variable model. Overall, I feel this paper is not quite readable, even though the proposed methods are simple and straightforward. \n\nAs one major contribution of this paper, the authors introduce a first-order approximation to estimate the action of an expert, where perturbations are considered. However, this linear treatment could yield large errors when the residuals in (1) are still large, which is very common in high-dimensional and highly nonlinear cases. Specifically, the estimation of “J” could be hard. In addition, just below (1), the authors mention that (1) yields a “stabilized policy”, so what do you mean by “stabilized”?\n\nAnother crucial issue lies in the treatment of “\\Delta(s)”, which is often unknown and hard to model. Thus, various optimal controllers are introduced so as to obtain robust controllers. Similarly, in (9) it is also difficult to decide what a “suitable perturbation distribution” is.\n\nOverall, the linear treatment in (2) and the assumption on “\\Delta(s)” in (5) actually oversimplify the imitation learning problem, which may not be applicable in real robot applications.\n\nOther small comments:\n- Section 2.1 could be moved to the supplementary material or appendix, as this part is indeed not a contribution.\n\n- In (5), it should be “-J_{i}^{*}”.\n", "The paper tackles the problem of distilling large numbers of expert demonstrations into a single policy that can both recreate the original demonstrations in a physically simulated environment on a humanoid platform and generalize to novel motions. Towards this, the paper presents two approaches: learning policies from expert demonstrations without involving costly closed-loop RL training, and distilling these individual experts into a shared policy by learning latent time-varying codes.\n\nThe paper is well written and the method is well evaluated within the scope in which it is proposed. Both components of the proposed approach have previously been explored in the literature - there is extensive work on learning local controllers for physics-based environments from demonstrations in both open-loop and closed-loop settings, as well as work on mixtures of these controllers in the machine learning, robotics, and computer graphics communities. While the paper proposes these two components as a contribution, I would like to see a more detailed argument for what this work contributes over previous such approaches. 
\n\nAnother part where I wish the paper could make a more compelling argument is that the distilled policy can perform non-trivial generalization. Target following is a good illustrative example, but has been showcased by a multitude of prior work. The paper talks about compositionality, and it would have been compelling to see examples of that if the method can achieve it. For example, simultaneously performing locomotion skills with upper-body manipulation skills is something that approaches based on mixtures of expert demonstrations still struggle with, and it would have been great to see this paper investigate the approach on this problem. \n\nOverall, this is a sound and well-written submission, but the existence of very related prior work with similar capabilities makes me reluctant to recommend this paper.", "This paper considers the problem of transferring motor skills from multiple experts to a student policy. To this end, the paper proposes two approaches: (1) an approach for policy cloning that learns to mimic the (local) linear feedback behavior of an expert (where the expert takes the form of a neural network), and (2) an approach that learns to compress a large number of experts via a latent space model. The approaches are applied to the problem of one-shot imitation from motion capture data (using the CMU motion capture database). The paper also considers an extension of the proposed approach to the problem of high-level planning; this is done by treating the learned latent space as a new action space and training a high-level policy that operates in this space. \n\nStrengths:\nS1. The supplementary video was clear and helpful in understanding the setup.\nS2. The paper is written in a generally readable fashion.\nS3. The related work section does a thorough job of describing the context of the work. \n\nHowever, I have some significant concerns with the paper. These are described below. \n\nSignificant concerns:\nC1. My biggest concern is that the paper does not make a strong case for the benefits of LFPC over simpler strategies. The results in Figure 3 demonstrate that a linear feedback policy computed along the expert's nominal trajectory performs as well as (and occasionally even better than) LFPC. This is quite concerning.\nC2. Moreover, as the authors themselves admit, \"while LFPC did not work quite as well in the full-scale model as cloning from noisy rollouts, we believe it holds promise insofar as it may be useful in rollout-limited settings...\". However, the paper does not present any theoretical/experimental evidence that would suggest this.\nC3. Another concern has to do with the two-step procedure for LFPC (Section 2.2), where the first step is to learn an expert policy (in the form of a neural network) and the second step is to perform behavior cloning by finding a policy that tries to match the local behavior of the expert (i.e., finding a policy that attempts to produce similar actions to the expert policy linearized about the nominal trajectory). This two-step procedure seems unnecessary; the paper does not make a case for why the expert policies are not chosen as linear feedback controllers (along nominal trajectories) in the first place.\nC4. The linearization of the expert policy produced in (1) may not lead to a stabilizing feedback controller and could easily destabilize the system. 
It is easy to imagine cases where the expert neural network policy maintains trajectories of the system in a tube around the nominal trajectory, but its linearization does not lead to a stabilizing feedback controller. Do you see this in practice? If not, is there any intuition for why this doesn't occur? If this doesn't occur in practice, this would suggest that the expert policies are not highly nonlinear in the neighborhood of states under consideration (in which case, why learn neural network experts in the first place instead of directly learning a linear feedback controller as the expert policy, as suggested in C3?).\nC5. I would have liked to have seen more implementation details in Section 3. In particular, how exactly was the linear feedback policy along the expert's nominal trajectory computed? Is this the same as (2)? Or did you estimate a linear dynamical model (along the expert's nominal trajectory) and then compute an LQR controller? More details on the architecture used for the behavioral cloning baseline would also have been helpful (was this an MLP? How many layers?).\n\nMinor comments:\n- There are some periods missing at the end of equations (eqs. (1), (2), (6), (8), (9))." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2019_BJl6TjRcY7", "iclr_2019_BJl6TjRcY7", "iclr_2019_BJl6TjRcY7", "iclr_2019_BJl6TjRcY7", "HJeCcTtms7", "S1gTWJZ6hX", "S1eS0BTpnX", "iclr_2019_BJl6TjRcY7", "iclr_2019_BJl6TjRcY7", "iclr_2019_BJl6TjRcY7", "iclr_2019_BJl6TjRcY7" ]
iclr_2019_BJlgNh0qKQ
Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder
Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages. A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features (i.e. beyond word embeddings). To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing. As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters. Our method (Differentiable Perturb-and-Parse) relies on differentiable dynamic programming over stochastically perturbed edge scores. We demonstrate the effectiveness of our approach with experiments on English, French and Swedish.
accepted-poster-papers
This paper proposes a method that uses a latent-variable generative model for semi-supervised dependency parsing. The key learning method consists of making perturbations to the logits going into a parsing algorithm, making it possible to sample within the variational auto-encoder framework. Significant gains are found through semi-supervised learning. The largest reviewer concern was that the baselines were potentially not strong enough, as significantly better numbers have been reported in previous work, which may have the effect of overstating the method's perceived utility. Overall, though, it seems that the reviewers appreciated the novel solution to an important problem and in general would like to see the paper accepted.
val
[ "r1gciHq_pQ", "SJlN1H5up7", "ByxR94q_p7", "SylgfKWC27", "HJlknmEq2X", "Bkgmgia43Q", "rJgRXtqO3Q", "HJebIHYwh7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "Thank you for your comments and for finding the method novel and interesting.\n\nWe would like first to clarify that we are not making claiming that our method is appropriate in the high resource scenario (i.e. full in-domain English PTB parsing). However, large datasets are available only for a few languages, so the lower resource setting we study here is important and common. We use a sufficiently strong baseline (e.g., already using external word embeddings) and obtain improvements across all 3 languages. Interestingly, we observe that there are certain phenomena which our semi-supervised parser captures considerably more accurately than the baseline model (e.g., long distance dependencies and multi-word expression, see reply to R1). Very few studies have been done for semi-supervised structured prediction with neural generative models, especially for the more challenging parsing task, so we think these results are interesting.\n\nWe also think that our differentiable perturb-and-parse operator is interesting on its own, and has other potential applications. For example, it could be used in the context of latent structure induction, where there is no supervision (i.e. no treebank). Our sampling technique has properties which are different from those of previously proposed latent induction methods:\n- unlike structured attention [4], we sample global structures rather than compute marginals (e.g., we preserve higher-order statistics)\n- unlike SPIGOT [2], we can impose tree constraints directly rather than compute an approximation\n- unlike us, [3] relies on sparse distributions so that marginalization is feasible. While sparse distributions have many interesting properties, they yield flat areas in the optimization landscape that can be difficult to escape from. \n- unlike sampling with shift-reduce parsing models, we do not seem to have issues with bias which was argued to negatively affect its results [1].\n\n\n > A performance curve with different amount of labeled and unlabeled data \n\nWe will do our best to include these results in a subsequent revision. Using more unlabeled data is harder for Swedish and French, as we would need to re-tokenize in the form consistent with our labeled data. \n\n\n> What's the impact of perturbation?\n\nIn our experiments, using sampling is beneficial so that improvements are consistent across languages. For example, UAS results in French for the model that does not us sentence embeddings are as follows:\n- supervised: 84.09\n- semi-supervised without sampling: 84.27\n- semi-supervised with sampling: 84.69\n\n\n> What's the impact of keeping the tree constraint on dependencies during backpropagation?\n\nWe thought that the main motivation for dropping the constraint in previous work (e.g., SPIGOT) was efficiency. Since it does not seriously affect computation cost in our approach, we have not experimented with dropping it. \n\n\n> Are sentence embedding and trees generated from two separate LSTM encoders?\n\nYes. There are no shared parameters in our model: the LSTM of the parser, the LSTM generating the sentence embeddings and the decoder are all separate. Introducing parameter sharing would likely be beneficial. However, our set-up is more controlled, as we can make sure that the improvements are due to modeling latent syntactic structure rather than getting better word representations (i.e. from using the multi-task learning objective). 
\n\n\n\n[1] Andrew Drozdov and Samuel Bowman, The Coadaptation Problem when Learning How and What to Compose (2nd Workshop on Representation Learning for NLP, 2017)\n[2] Hao Peng, Sam Thomson and Noah Smith, Backpropagating through Structured Argmax using a SPIGOT (ACL 2018)\n[3] Vlad Niculae, André Martins and Claire Cardie, Towards Dynamic Computation Graphs via Sparse Latent Structure (EMNLP 2018)\n[4] Yoon Kim, Carl Denton, Luong Hoang and Alexander Rush, Structured Attention Networks (ICLR 2017)\n", "Thank you for your suggestions and the positive feedback.\n\n> hand-wavy explanations\n\nWe toned down our speculation and incorporated your suggestions. Please let us know if you think we could improve this further.\n\n> A number of important details are missing in the submitted version of the paper which the authors addressed in their reply to my public comment.\n\n\nThe submission has now been updated, reflecting what we described in our public comment.\n", "Many thanks for the positive feedback and suggestions.\n\n> Varying amounts of unlabeled data \n\nWe will do our best to include these results in a subsequent revision. Using more unlabeled data is harder for Swedish and French, as we would need to re-tokenize it in a form consistent with our labeled data. \n\n\n> Are there natural generalizations to multi-lingual data, for example settings where supervised data is only available for languages other than the language of interest?\n\nThis is a very interesting direction. We hope that using ‘unlabeled’ and ‘labeled’ terms in the objective would make the multilingual model capture correspondences between surface regularities and the underlying syntax for a given language. This should be especially helpful in the suggested one-shot learning scenario, where only the unlabeled term would be present for the target language. We suspect that part-of-speech tags (not currently used in our model) would be needed to facilitate learning the cross-lingual correspondences. \n\n\n> I wonder also if this method would be particularly helpful in domain transfer\n\nYes, we would like to look into this in future work.\n\n\n> It would be interesting to see an analysis of accuracy improvements on different dependency labels.\n\nWe performed this analysis on English; there are some interesting cases:\n1. Multi-word expressions: the recall / precision scores of the semi-supervised model are 90.70 / 84.78, while those of the supervised model are 75.58 / 81.25. We suspect that the reason is that MWEs are relatively infrequent.\n2. Adverbial modifiers: we observe an increase in precision without compromising on recall: 87.32 / 87.51 versus 87.27 / 85.95.\n3. Appositional modifiers: we also observe a significant increase in recall in this category: 81.39 / 81.03 versus 77.49 / 80.27.\nWe included the results in the new version of the paper.\n
I would strongly\nrecommend acceptance.\n\nSome comments:\n\n* I do wonder how well this approach would work with orders of magnitude\nmore unlabeled data. The amount of unlabeled data used is quite small.\n\n* Similarly, I wonder how well the approach works as the amount of\nunlabeled data is decreased (or increased, for that matter). It should\nbe possible to provide graphs showing this.\n\n* Are there natural generalizations to multi-lingual data, for example\nsettings where supervised data is only available for languages other\nthan the language of interest?\n\n* It would be interesting to see an analysis of accuracy improvements\non different dependency labels. The \"root\" case is in some sense just\none of the labels (nsubj, dobj, prep, etc.) that could be analyzed.\n\n* I wonder also if this method would be particularly helpful in \ndomain transfer, for example from Wall Street Journal text to\nWikipedia or Web data in general. The improvements could be more\ndramatic in this case - that kind of effect has been seen with \nELMO for example.", "[Summary]\nThis paper proposes to do semi-supervised learning , via a generative model, of an arc-factored dependency parser by using amortized variational inference. The parse tree is the latent variable, the parser is the encoder that maps a sentence to a distribution over parse-trees, and the decoder is a generative model that maps a parse tree to a distribution over sentences. \n\n[Pros]\nSemi-supervised learning for dependency parsing is both important and difficult and this paper presents a novel approach using variational auto-encoders. And the semi-supervised learning method in this paper gives a small but non-zero improvement over a reasonably strong baseline. \n\n[Cons]\n1. My main concern with this paper currently are the \"explanations\" provided in the paper which are quite hand-wavy. E.g. the authors state that using a KL term in semi-supervised learning is exactly opposite to the \"low density separation assumption\". And therefore they set the KL term to be zero. One has to wonder that why is the \"low density separation assumption\" so critical for dependency parsing only? VAEs have been used with a prior for semi-supervised learning before, why didn't this assumption affect those models ? \n\nA better explanation will have been that since the authors first trained the parser in a supervised fashion, therefore their inference network already represents a \"good\" distribution over parses, even though this distribution is specified only upto sampling but not in a mathematically closed form. Finally, setting the KL divergence between the posterior of the inference network and the prior to be zero is the same as dynamically specifying the prior to be the same as the inference network's distribution. \n\n2. A number of important details are missing in the submitted version of the paper which the authors addressed in their reply to my public comment.\n\n3. The current paper does not contain any comparison to self-training which is a natural baseline for this work. The authors replied to my comment saying that self-training requires a number of heuristics but it's not clear to me how much more difficult can these heuristics be than the tuning required for training their VAE.", "This paper proposed a variational autoencoder-based method for semi-supervised dependency parsing. 
Given an input sentence s, an LSTM-based encoder generates a sentence embedding z, and the NN of Kiperwasser & Goldberg (2016) generates a dependency structure T. Gradients over the tree encoder are approximated by (1) adding a perturbation matrix to the weight matrix and (2) relaxing the dynamic-programming-based parsing algorithm to a differentiable form. The decoder combines a standard LSTM and a Graph Convolutional Network to generate the input sentence from z and T. The authors evaluated the proposed method on three languages, using 10% of the original training data as labeled and the rest as unlabeled data.\n\nPros\n1. I like the idea of this sentence->tree->sentence autoencoder for semi-supervised parsing. The authors proposed a novel and nice way to tackle key challenges in gradient computation. The VAE involves marginalization over all possible dependency trees, which is computationally infeasible, and the proposed method uses a Gumbel-Max trick to approximate it. The tree inference procedure involves non-differentiable structured prediction, and the authors used a peaked-softmax method to address the issue. The whole model is fully differentiable and can thus be trained end to end.\n\n2. The direction of semi-supervised parsing is useful and promising, not only for resource-poor languages, but also for popular languages like English. Successful research in this direction could be potentially helpful for lots of future work.\n\nCons, and suggestions on experiments\nMy main concerns are around the experiments. Overall I think they are not strong enough to demonstrate that this paper makes a sufficient contribution to semi-supervised parsing. Below are details.\n\n1. The current version only used 10% of the original training data as labeled and the rest as unlabeled data. This makes the reported numbers way below existing state-of-the-art performance. For example, the SOTA UAS on English PTB has been >95%. Ideally, the authors should be able to train a competitive supervised parser on the full training data (English or other languages), and get a huge amount of unlabeled data from other sources (e.g. News) to further push up the performance. The current setting makes it hard to justify how useful the proposed method could be in practice.\n\n2. The best numbers from the proposed model are lower than the baseline (Kiperwasser & Goldberg) on English, and only marginally better on Swedish. This probably means the supervised baseline is weak, and it's hard to tell whether the gains from the VAE would be retained when applied to a stronger supervised baseline.\n\n3. A performance curve with different amounts of labeled and unlabeled data would be useful to better understand the impact of semi-supervised learning.\n\n4. What's the impact of perturbation? One could simply use T=Eisner(W) as an approximation. Did you observe any significant benefits from sampling?\n\nOther questions\n1. What's the impact of keeping the tree constraint on dependencies during backpropagation? Have you tried removing the tree constraint like previous work?\n\n2. Are the sentence embedding and trees generated from two separate LSTM encoders? Is there any parameter sharing between the two?\n\n", "Thank you for your comments and finding the idea exciting. Please find our replies to your questions.\n\n1. Thank you for pointing this out. We experimented with a version where the prior was the uniform distribution over all projective trees. It was not effective: downweighting or removing the KL term yielded the best results. 
We realize that this prior may not be quite appropriate (linguistic trees are not samples from the uniform distribution), but given that our model is generative / not conditional (e.g., we do not condition even on PoS tags), the distribution would not be sharp anyway (even if we estimate it). This makes us sceptical about using the KL term in our semi-supervised learning: using KL with respect to a high-entropy distribution forces our model to be uncertain on unlabelled sentences. This is exactly the opposite of the standard “low density separation assumption”: our preference should be for models which are confident on datapoints (roughly speaking, decision boundaries should not cross datapoints). This motivated us to try another alternative (also not yielding ELBO), where instead of the KL term we used an adversarial term forcing our model to draw trees similar to linguistic ones. Unfortunately, it was not effective either. We will clarify these extra experiments in a new version of the paper. Note that not using ELBO should not prevent us from using the term VAE: many recent VAE versions (e.g., beta-VAE) cannot be interpreted as optimizing ELBO.\n\n2. Again, we should have clarified this. We rely on perturb-and-map to sample a single tree from the posterior distribution. However, the MAP procedure is not differentiable, so we replace it with a differentiable surrogate. In our model, the weights in T represent neither probabilities nor log-probabilities but a soft-selection of arcs. GCN can be run over weighted graphs: the message passed between nodes is simply multiplied by the continuous weights. This is actually a motivation for using GCN rather than a Recursive LSTM/RNN. On the one hand, running a GCN with a matrix that represents a soft-selection of arcs (i.e. with real values) has the same computational cost as using a standard adjacency matrix (i.e. with binary elements) if we use matrix multiplication on GPU (optimization with sparse matrix multiplication is helpful on CPU, but not always on GPU). On the other hand, a recursive network over a soft-selection of arcs requires building an n^2 set of RNN-cells that follow the dynamic programming chart, where the possible inputs of a cell are multiplied by their corresponding soft-selection in T, which is expensive and not GPU-friendly. We also experimented with using straight-through estimators, where the GCN computation is performed over a discretized version of the graph, whereas the backpropagation step is done over the soft version. We did not see much of a difference in performance.\n\n3. Self-training is an option, though all (?) previous applications of self-training to syntactic parsing used quite a number of tricks and parameters (e.g., McClosky et al 2006; Reichart and Rappoport 2007; Yu and Bohnet 2017). Even if self-training works, we believe that our approach provides an interesting alternative, and one of very few methods for semi-supervised learning for structured prediction where improvements over a strong supervised baseline can be seen (recall that our baseline already uses external embeddings). What is also interesting is that the parse trees predicted by the semi-supervised model are qualitatively different from the ones produced by the supervised baseline. E.g., as we discuss in the experimental section, it predicts many more long-distance dependencies than the supervised one. 
We speculate that this is an artefact of using the RNN+GCN decoder, which does not care about short edges as they are too easy for the RNN to encode, and so encourages longer-range dependencies. This won’t happen for self-trained parsers, as self-training reinforces the predictions. Co-training is even harder to make work than self-training, as we need to come up with two models, and it would be more orthogonal to our method (we could use a co-training loss in combination with ours). Previous work suggests that co-training does not work out-of-the-box for syntactic parsing, so a meaningful baseline would be hard to construct.", "This paper proposes to do semi-supervised learning, via a generative model, of an arc-factored dependency parser by using amortized variational inference. The parse tree is the latent variable, the parser is the encoder that maps a sentence to a distribution over parse-trees, and the decoder is a generative model that maps a parse tree to a distribution over sentences. While this idea itself is exciting, a few important details that are needed to review the paper are missing.\n\n1. A VAE requires a generative story for the latent variables. What exactly is the distribution of p(T|n)? This distribution is not mentioned anywhere in the paper. More importantly, Section 5 focuses entirely on the first term of the ELBO objective. What about the second term, the negative KL term, of the ELBO? How exactly do you compute KL[q_φ(T , z|s)|p(T , z)] in equation (3)? You mention that you use a weight of 0 for the KL term during optimization in the experiments section because you did not see any benefit from the KL term. But what was the form of the prior that you used earlier? \n\n2. As you mention, Smith and Eisner (2008) showed how to frame dependency parsing as an MRF, and Perturb and MAP is a method for sampling from the posterior for general MRFs. However, you are adding a further relaxation and replacing the argmax with a softmax operation (where you set τ = 1 in all experiments). So in the end you no longer get true dependency trees but continuous entries in T. How exactly do you compute log p_θ( s | RELAXATION of Eisner(W + P) ) in this scenario? How do you feed soft connections to the GCN? Does T contain probabilities or log-probabilities in this case? \n\n3. You mention a number of other fairly simple methods for semi-supervised learning, such as self-training and co-training, in the related work section. Clearly these types of methods would be the right baseline to evaluate against since they do not use word-embeddings or any manual feature engineering. What was the reason to not evaluate against such simple methods? " ]
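The exchange above turns on the perturb-and-MAP step and its differentiable surrogate. The sketch below is a minimal illustration under stated assumptions, not the authors' code: all names are ours, PyTorch is used for convenience, and the hard projective-tree constraint (Eisner's algorithm) is replaced by an unconstrained per-dependent peaked softmax over heads, a simplification that the reviewer's own question about dropping the tree constraint anticipates.

```python
# Illustrative sketch (hypothetical names, not the authors' code): perturb arc
# scores with Gumbel noise, relax the non-differentiable MAP parse with a peaked
# softmax per dependent, then run a GCN-style layer over the soft adjacency.
import torch
import torch.nn.functional as F

def soft_heads(W, tau=1.0):
    """W: (n, n) arc scores, W[h, d] = score of head h for dependent d.
    Returns a soft adjacency T of the same shape (each column sums to 1)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(W)))  # perturbation P
    return F.softmax((W + gumbel) / tau, dim=0)          # peaked softmax over heads

def gcn_layer(T, H, W_msg):
    """One GCN-style layer over a weighted (soft) graph.
    T: (n, n) soft adjacency; H: (n, d) node states; W_msg: (d, d) parameters.
    Row i of T.t() holds the soft head weights for dependent i, so each
    dependent aggregates a weighted message from its (soft) head."""
    return torch.relu(T.t() @ (H @ W_msg))

n, d = 5, 8
W = torch.randn(n, n, requires_grad=True)    # arc scores from the parser
H = torch.randn(n, d)                        # word representations
W_msg = torch.randn(d, d, requires_grad=True)

T = soft_heads(W, tau=1.0)                   # differentiable "sampled" structure
out = gcn_layer(T, H, W_msg)
out.sum().backward()                         # gradients flow into the arc scores W
print(T.sum(dim=0))                          # each column is a distribution over heads
```

Lowering tau pushes each column of T toward a one-hot head choice, mimicking a discrete parse while keeping the whole pipeline differentiable, which is exactly the trade-off the authors describe.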
[ -1, -1, -1, 8, 7, 5, -1, -1 ]
[ -1, -1, -1, 4, 3, 3, -1, -1 ]
[ "Bkgmgia43Q", "HJlknmEq2X", "SylgfKWC27", "iclr_2019_BJlgNh0qKQ", "iclr_2019_BJlgNh0qKQ", "iclr_2019_BJlgNh0qKQ", "HJebIHYwh7", "iclr_2019_BJlgNh0qKQ" ]
iclr_2019_BJluy2RcFm
Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs
We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.
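Since the definition in this abstract is compact, here is a brute-force worked example of it. Everything in the sketch, including the order-sensitive function f, is our own illustrative choice rather than the paper's code: exact Janossy pooling averages f over every reordering of the input, and the k-ary variant trades expressiveness for tractability by averaging over ordered k-tuples instead.

```python
# Illustrative sketch of Janossy pooling (hypothetical names, toy f).
from itertools import permutations
import numpy as np

def f(seq):
    """A deliberately order-sensitive function of a sequence."""
    return sum(x * (i + 1) for i, x in enumerate(seq))

def janossy_exact(h):
    """Exact Janossy pooling: average f over all |h|! reorderings.
    Intractable for large |h|, which motivates the paper's approximations."""
    return np.mean([f(p) for p in permutations(h)])

def janossy_kary(h, k):
    """k-ary approximation: average f over all ordered k-tuples of elements."""
    return np.mean([f(t) for t in permutations(h, k)])

h = [3.0, 1.0, 4.0, 1.5]
# The exact pooled value is identical for any reordering of the input:
assert np.isclose(janossy_exact(h), janossy_exact(h[::-1]))
print(janossy_exact(h), janossy_kary(h, k=2))
```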
accepted-poster-papers
AR1 is concerned about whether higher-order interactions are modeled explicitly and whether the pi-SGD convergence conditions can be easily satisfied. AR2 is concerned that basic JP has been conceptually discussed in the literature and that \pi-SGD is not novel because it was already used by Hamilton et al. (2017) and Moore & Neville (2017). However, the authors provide some theoretical analysis for this setting, in contrast to prior works. AR1 is also concerned that the effect of higher-order information has not been 'disentangled' experimentally from order invariance. AR4 is concerned about the poorer performance of higher-order Janossy pooling compared to the k=1 case and asks about the number of parameters. In response, the authors added a harder task: computing the variance of a sequence of numbers. On balance, despite the justified concerns of AR2 about novelty and AR1 about experimental verification, the work tackles an interesting topic. Reviewers find the problem interesting and see some promise in the proposed solutions. AC therefore recommends that this paper be accepted at ICLR. The authors are asked to update the manuscript to honestly reflect the weaknesses expressed by reviewers, e.g. the question of whether the effects of 'higher-order information' can be 'disentangled' from order invariance.
train
[ "B1ey5ZDp2X", "Skgp6BtKAm", "BklEWNrmC7", "SJxGsKnlA7", "r1x2XvP107", "B1gpO5L10X", "B1eC2L8J0m", "B1gYh_tiTm", "HJlOsS79pm", "HklWWHbbT7", "S1xNEC4q2X" ]
[ "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors presented a new pooling method called Janossy Pooling (JP), which is designed to better capture high-order information by addressing two limitations of existing works - fixed pooling function and fixed-size inputs. The studied problem is important and the motivation is clear, where the inputs are sets of objects such as values or vectors and how we can learn a good aggregation function to maximally preserve the information in the original sequence. The authors attacked this problem by firstly formally formulating this problem and introducing a general approach as well as a few of approximation methods to realize it in practice. They also discussed the connections of this work and some existing works such as deep set, which I found is quite useful. \n\nIn general, JP was proposed to learn permutation-invariant function for aggregating the information of the input sequence. The basic idea of JP is to simply take all generated order of sequences from the original sequence input, which however I found is not new since it has been conceptually discussed already in the literature. Since this approach is computationally prohibitive, there are several ways of approximations to approach the solution. As the authors are aware of the existing works in the literature, these approaches were discussed before either in the same context or in some particular learning tasks. From this perspective, the proposed solutions are not novel either. \n\nThe experimental results are particularly weak. It is little interesting on the first toy problem and the results on graph embedding are not promising. In Table 2, it is clearly shown that the LSTM aggregation functions on the randomly sampled sequences are really beating the simple mean aggregation function. I think the authors need much more experiments to demonstrate why we need LSTM based pooling for realizing JP in terms of both the final accuracy and computational cost. \n\n-------------------------------------------------\nAfter reading the authors' rebuttals, they have addressed part of my concerns but I still think the current form is not below the acceptance threshold due to its weak experimental results and unclear technical details. \n\n", "Thanks again for your question. We have updated the submission to include the comments we made and a few other minor improvements.", "Thank you, the comment raises an interesting point.\n\n0. GraphSAGE is an instantiation of Janossy Pooling (JP) -- k-ary JP trained with pi-SGD (see our remark \"Combining pi-SGD and Janossy with k-ary Dependencies\" in our revised submission). This was the motive behind our second set of experiments, where we study the impact of proper test-time inference, in contrast to the ad hoc LSTM aggregator of Hamilton et al., 2017. As predicted by our theory, proper inference led to significant gains in performance. \n\n1. FASTGCN similarly falls under the framework of k-ary JP with pi-SGD optimization, applying importance sampling for variance reduction (the FASTGCN paper may be somewhat confusing to readers because it uses Lebesgue integrals and dP() measures instead of sums, but its Algorithm 2 is just an importance sampling procedure). In our submission we mention one interesting variance-reduction technique used by Zolna et al., 2018, but we now realize that it would be valuable to add additional traditional techniques such as importance sampling (used in FASTGCN), Rao-Blackwellization, and control variates to our discussion. 
Variance reduction helps the pi-SGD objective in equation 10 (\\doublebar{J}) become more like the \"original\" objective in equation 6 (\\doublebar{L}) (see the paragraph below Proposition 2.2). \n\n2. We are excited by the prospect of applying the lessons learned from Janossy Pooling to Graph Neural Networks (GNNs) in our future work. We believe that these insights will inspire new forms of aggregation functions in GNNs. A promising avenue is to combine JP with other variance-reduction techniques such as layer normalization and ensemble techniques like dropout, which have demonstrated strong performance on GNN tasks like PPI (Chen and Zhu, 2017).\n\n3. Please note that Hamilton et al., 2017 did not use exact mean pooling in their experiments but sampled a subset of neighbors. Our experiments deviate from theirs on this point. Also note that for Cora and PubMed, sampling 25 neighbors (as in our updated draft) effectively samples all neighbors, but not for PPI. \n\n\n(Chen and Zhu, 2017) - J. Chen and J. Zhu. Stochastic training of graph convolutional networks. arXiv preprint arXiv:1710.10568, 2017.\n\n(Zolna et al., 2018) - Konrad Zolna, Devansh Arpit, Dendi Suhubdy, and Yoshua Bengio. Fraternal dropout. ICLR, 2018.", "Thank you. I also really enjoyed reading the paper. \nFor the vertex classification, without the aggregating function, minibatch sampling approaches (e.g. FASTGCN) also achieve almost similar or better accuracy than exact mean-pool (GRAPHSAGE) according to the paper. FASTGCN samples a few nodes to learn the graph neural network, so it is already fast and can avoid the additional computational burden of the aggregator. This means that the need for an aggregation function is not clear for the GCN-based models. I think more analysis with variance or other better measures is needed to show why it is meaningful for the vertex classification task as well. \n\n It may be useful to test it with the work (Moore & Neville, 2017).\n\n(Chen, Ma, and Xiao, 2018) Jie Chen, Tengfei Ma, Cao Xiao. FASTGCN: Fast Learning with graph convolutional networks via importance sampling.\n(Moore & Neville, 2017) John Moore and Jennifer Neville. Deep collective inference.", "Thank you for your comments. We address the two issues, experimental evaluation and novelty, below.\n\n1) \"Experiments weak\"\n\n(1.a) \"Toy problems:\" Our arithmetic tasks are more challenging than those of Deep Sets, whose experimental methodology we extend. That paper evaluated the model on a task that adds a set of digits. While summation does not require exploiting dependencies among elements in the sequence, we consider tasks such as “range” where doing so is imperative. Our updated submission adds the task of computing the variance of a sequence of 10 integers. This update also makes our former preliminary results part of the main paper with a discussion of the new insights (here summarized in points 1.b and 1.c).\n\nOverall, our work is focused on generalizing today's pooling methods rather than any specific task. With that view, we make our tasks as simple as they can be to avoid spurious effects, but not so simple that the effects of increased pooling expressiveness do not apply. We welcome suggestions of ways to improve them.\n\n(1.b) \"No significant gains\". \"LSTM does not beat mean pooling.\" The new variance task shows pi-SGD + GRUs + MLP \\rho significantly outperforms other methods, including sum-pooling (variance requires better modeling of high-order interactions). 
Overall, pi-SGD + GRUs + MLP \\rho yields equal or superior performance across all arithmetic tasks. In general, using RNNs has the benefits of accepting variable-length sequences and seamlessly exploiting dependencies within the sequence.\n\n(1.c) \"Graph tasks\": We followed the tasks found in Hamilton et al. (2017), which we found to be quite easy. Our main interest was in evaluating differences between the different JP approaches (different choices of k and the impact of proper inference) on a task distinct from the arithmetic ones, and the results confirmed the anticipated benefits of using better inference at test time. In particular, proper inference of the \\pi-SGD + LSTM model via Remark 2.2 can yield performance gains \"for free\" simply by averaging over forwarded permutations of the input sequence at test time. \n\n2) \"Novelty\": \"The basic idea of JP is to simply average over all orderings of the original input sequence, which, however, I find ... has been conceptually discussed already in the literature.\" We will try to answer this in a few different ways.\n\n(2.a) \"There is prior modeling work summing permutations in pooling layers\". To the best of our knowledge, our pooling framework is the first that generalizes pooling, unless the reviewer is aware of other work we haven't cited. As we answer next, Hamilton et al. (2017) and Moore & Neville (2017) performed pi-SGD in an ad hoc manner; at the time it was not clear that it was a sound optimization procedure. Our work provides the theoretical underpinnings for their approach. \n\n(2.b) \"\\pi-SGD is not novel because it has already been tried\". Hamilton et al. (2017) and Moore & Neville (2017) did not provide a theoretical justification for their approach, and it was not obvious how to extend it. Our framework provides a theoretical justification for why and how pi-SGD works (SGD on the aforementioned ideal objective), as well as a characterization of the *correct* way to do inference at test time (missing in Hamilton et al. (2017)).\n\n(2.c) \"Novelty of k-ary Janossy pooling\". To the best of our knowledge, k=3,... in full generality has not been tried. Deep Sets (with k=1) shows that sum-pooling is a universal approximator to permutation-invariant functions if the upper layers are universal approximators. We show that if the upper layers are not universal approximators (or if the universal approximation is hard to learn), k-ary Janossy pooling (k > 1) is more powerful. Moreover, we show that this pooling approach is equivalent to summing over permutation-sensitive functions and achieves tractability via a restricted model class (functions with k inputs) rather than an approximate algorithm. Thus, our framework links two views of pooling: inductive biases imposed on the model to capture dependencies in the sequence are inextricably linked with tractability strategies and present a tradeoff with learnability. \n", "Thank you for your positive comments. We also see Janossy Pooling as simultaneously providing theory that highlights limitations of existing methods and an overarching framework for developing pooling functions. We also agree that more experiments (as always) are beneficial; our revision includes a more thorough experiments section upon which we elaborate below.\n\n(1) In the experimental section, the authors show that [k-ary Janossy Pooling] recovers some of the performance lost by using sum / mean pooling... Is it the fact that you're explicitly modelling higher-order interactions that improves performance? 
Or is it that you're doing Janossy pooling over the higher order interactions (i.e. summing over permutations of non-invariant functions)?\n\nOur development of k-ary Janossy Pooling (JP) demonstrates that the increased performance associated with k>1 does not rely upon using permutation-sensitive Janossy functions \\harrow{f}. Indeed, our proof that (k-1)-ary JP is less expressive than k-ary JP constructs a permutation-invariant k-ary Janossy function (\\harrow{f}) which cannot be expressed by any (k-1)-ary Janossy function, permutation-sensitive or otherwise. We modeled \\harrow{f} as permutation-sensitive in our experiments since basic neural network building blocks are permutation-sensitive. \n\n(2) I don't follow the relevance of Proposition 2.2? I see that it gives conditions under which we can expect \\pi-SGD to converge, but we aren't provided with any guidance about how likely those conditions are to be satisfied? Furthermore - these conditions don't seem to be specific to \\pi-SGD - any SGD algorithm with ``slightly biased'' gradients that satisfy these conditions would converge.\n\nWe sought to reassure the reader of the appropriateness of randomly sampling and forwarding just one permutation of the sequence during training -- which at first glance may appear inappropriate. We agree that this can be achieved simply by pointing out the similarity to ''typical'' SGD and we have revised our paper accordingly and moved the detailed proof to the appendix.\n\n(3) My view is that the experimental section is too limited to support reading (1) which asserts that k-ary pooling or LSTM + sampling approaches are the right solution to this problem. \n\nWhile we agree that reading (2) is our preferred reading too, we have added experiments to the revised version which provide further support of the power of proposed JP models. These include (a) the addition of a more complex \\rho (the function composed with the output of pooling) to all models for the arithmetic tasks, which presents a more competitive baseline, (b) the addition of a harder arithmetic task -- computing the variance of a sequence of integers -- and (c) further analysis of the impact of increasing the number of permutations sampled at test-time for prediction in a pi-SGD model. The latter was performed on the PPI graph, a new dataset evaluated in this submission.\n\n(a and b) Whereas \\rho was previously a linear layer only, we have added results where \\rho is an MLP with a single hidden layer. Our results show that Janossy pooling architectures achieve superior or similar performance to the baseline of sum pooling across all tasks -- including the variance task -- for either choice of \\rho. Notice that the GRU model with an MLP \\rho achieves a mean RMSE of 0.40 on the variance task, beating the sum-pooling baseline by a substantial margin.\n\n(c) Our arithmetic tasks show a clear benefit of averaging over more permutations at test time; doing so either improved the mean performance or left it unchanged (especially when performance was already saturated). We have also expanded our investigation of this phenomenon in the graphs tasks, where we plotted performance as a function of the number of permutations sampled at test time across different models. We saw that simply sampling just a few permutations led to consistent and statistically significant gains in performance. These gains level off but do not degrade as more permutations are sampled. \n\n", "Thank you for your positive comments. 
We address your concerns below.\n\n\"- Is there any reason as to why higher-order Janossy poolings do not perform as well as k=1 for the sum experiment?\" \n\nThe sum task is an easy task, designed for k=1. Our revised manuscript shows sum task results with more runs and more epochs, and the difference is not statistically significant. \n\n“- The whole development seems not as effective as k=1 in Table 2....”\n\nTheorem 2.1 shows that Janossy Pooling (JP) with k-ary dependencies includes and is more expressive than JP with (k-1)-ary dependencies, but there will be tasks where it is sufficient to let k=1 (and also easier to optimize). This is especially true for easy tasks like the sum task, which do not require exploiting dependencies within the input sequence. Our revised manuscript now considers the harder task of computing the variance of a sequence of numbers. For this harder task, full-sequence Janossy (k = |h|) is significantly more accurate than k = 1,2,3, by using pi-SGD to train the model (which optimizes \\doublebar{J} rather than \\doublebar{L}). In the range task, full Janossy (k = |h|) + GRU + pi-SGD also shows significant gains over k=1,2,3. For all other tasks, Janossy k=|h| + GRU + pi-SGD performs as well as the other approaches. \n\n\"- One wonders why, for k=2, k=1 is not included. That is, can the formulation be changed in a way that the \\downarrow operator represents l \\in {1 \\cdots k} projections? In the end, the method creates k-tuples and feeds them through specific fs, so why not have smaller tuples?\"\n\nTheoretically it is not necessary (by Theorem 2.1), but it is an interesting direction for future work that could help in practice. It is clear, however, that Janossy k = |h| with GRU + pi-SGD is hard to beat in more challenging tasks.\n\n\"- Can you report the number of parameters for the developments (Janossy-k)? Some examples according to the experiments would help.\"\n\nWe have added the number of parameters in the Supplementary Material (Table 7 and Table 9) together with more details about our experimental setting. We have also tested k=2,3 with more complex models for \\arrow{f}; the Supplementary Material shows the improved results.\n\n\"- I am a bit lost trying to grasp the paragraph below Eq.4, can you rephrase it and possibly provide references?\"\n\nThank you, we rephrased our observations to simplify the exposition. We also considered the pros and cons of including a proof that Eq.4 captures any permutation-invariant function with an expressive-enough set of permutation-sensitive functions: the proof is straightforward, as one can simply add all possible asymmetries (that cancel out when summing over all permutations) to the set of all permutation-invariant functions and make this a set of permutation-sensitive functions. It could be useful as a Proposition but, given the page limit, we have chosen to omit this straightforward proof in favor of other observations.\n\n“- When it comes to testing, how do you use Eq.13? Do you sample a few permutations and compute Eq.13? If yes, how many in practice?”\n\nWe have rewritten our experimental section to clarify how Eq.13 is used. We recommend looking at the new Table 1, which now more clearly defines \"infr samples\" to describe how many samples we use to estimate Eq.13. \n\n\" - In Proposition 2.1, n seems confusing, why not |h| \"\n\nThat was a typo; we have changed it to |h|. Thank you!\n\n- In P6, x_i is a sequence. This needs to be mentioned. \n\nThank you. 
We have made changes in the notation to clarify that x(i) is the i-th sequence from the training (test) data.", "Our primary interest was in permutation-invariant functions, and we only used the term “set function” to follow Zaheer 2017. Please note that Deep Sets performs the sum task on “sets” of integers {0, 1, 2, …, 9} of size 50, which must have duplicates. In following their design, our input sequences also have duplicates. \n\nWe also note that the Deep Sets theorem relating permutation-invariance and sum pooling was recently extended to include multisets in [Xu et al 2018].\n\nWe will add a line to the paper to clarify this. Thanks.\n\n[Xu et al 2018] Xu, Keyulu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. \"How Powerful are Graph Neural Networks?\" arXiv preprint arXiv:1810.00826 (2018).", "I didn't pick this up in my initial review, but the \"unique sum\" and \"unique count\" tasks in the synthetic experiments go beyond the scope of the Deep Sets work, since that paper refers to sets, not multisets. \"Unique\" doesn't make sense for sets. These experiments should be removed (or at the very least this should be made clear). Similarly for the other tasks, sampling without replacement makes more sense to ensure the input is in fact a set. ", "I have found the ideas proposed in the paper very insightful and interesting. The paper, in general, is written very well and is accessible. My most important concern is \n\n The whole development seems not as effective as k=1 in Table 2 (BTW, there is a typo there). One wonders why, for k=2, k=1 is not included. That is, can the formulation be changed in a way that the \\downarrow operator represents l \\in {1 \\cdots k} projections? In the end, the method creates k-tuples and feeds them through specific fs, so why not have smaller tuples?\n\nThe rest of my review below can hopefully help improve the paper:\n\n\n- Is there any reason as to why higher-order Janossy poolings do not perform as well as k=1 for the sum experiment? \n\n- Can you report the number of parameters for the developments (Janossy-k)? Some examples according to the experiments would help.\n\n- I am a bit lost trying to grasp the paragraph below Eq.4, can you rephrase it and possibly provide references?\n\n- When it comes to testing, how do you use Eq.13? Do you sample a few permutations and compute Eq.13? If yes, how many in practice? \n\n- In Proposition 2.1, n seems confusing, why not |h|\n\n- In P6, x_i is a sequence. This needs to be mentioned. \n\n", "I really enjoyed this paper. It takes an idea which at first glance seems to be obviously bad (if you want permutation invariance, build a model that considers all permutations) and uses it to make the important point that the universal approximation results contained in Deep Sets [Zaheer et al. 2017] are not the last word on pooling. Janossy Pooling is intractable for most problems of interest (because it sums over all n! permutations of the input set), so the authors suggest 3 tractable alternatives: canonical orderings, k-ary dependencies, and SGD / sampling-based approaches. Only the latter two are explored in detail, so I’ll focus on them:\n\nK-ary dependencies\nFunctions that are restricted to k-ary dependencies in Janossy Pooling require summing over only |h|! / (|h| - k)! terms - that is, they sum over the permutations of subsets of h of length k. 
In the experimental section, the authors show that this recovers some of the performance lost by using sum / mean pooling (as in Deep Sets), but this suggests the natural question: is it the fact that you’re explicitly modelling higher-order interactions that improves performance? Or is it that you’re doing Janossy pooling over the higher order interactions (i.e. summing over permutations of non-invariant functions)? \n\nThese two effects could be separated by comparing to invariant models that allow higher order interactions. E.g. you could compare against Santoro et al. [2017] who explicitly model pairwise interactions (or similarly any of the graph convolutional models [Kipf and Welling 2016, Hamilton et al 2017, etc.] with a fully connected graph would do the same); similarly Hartford et al. [2018] allow for k-wise interactions by extending Deep Sets to exchangeable tensors - the permutation invariant analog of k-ary Janossy Pooling. All of these approaches model k-wise interactions through sum-pooling over permutation invariant functions, so this lets you address the question: is it the permutation invariance that’s the problem (necessitating k-ary Janossy pooling) or is it the lack of higher-order interaction terms? \n\nSGD approaches:\nI think that the point that the sampling-based approaches are biased with respect to the Janossy sum is important to make and I liked the discussion around it, but I don’t follow the relevance of Proposition 2.2? I see that it gives conditions under which we can expect \\pi-SGD to converge, but we aren’t provided with any guidance about how likely those conditions are to be satisfied? Furthermore - these conditions don’t seem to be specific to \\pi-SGD - any SGD algorithm with “slightly biased” gradients that satisfy these conditions would converge. The regularization idea is interesting, but it isn’t evaluated, so we’re left with theory that doesn’t provide guidance and isn’t evaluated.\n\nSummary:\nThere are two ways to read this paper:\n 1. Janossy pooling as a framework & proposed pooling approach implemented in one of the two ways discussed above.\n 2. Janossy pooling as an intractable upper bound on what we might want from a pooling method (with approximations in the form of the LSTM approaches) and a demonstration that our current invariant pooling methods are insufficient.\n\nI liked the paper based on reading (2). Janossy pooling clearly demonstrates limitations of sum / mean pooling, which is widely used in practice; this shows the need for better alternatives, and it is on this basis that I’m arguing for its acceptance. My view is that the experimental section is too limited to support reading (1), which asserts that k-ary pooling or LSTM + sampling approaches are the right solution to this problem. \n\n[Zaheer et al. 2017] - Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, and Alexander Smola. Deep Sets.\n[Santoro et al. 2017] - Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. A simple neural network module for relational reasoning.\n[Kipf and Welling 2016] - Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks.\n[Hamilton et al 2017] - William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs.\n[Hartford et al. 2018] - Jason S. Hartford, Devon R. Graham, Kevin Leyton-Brown, and Siamak Ravanbakhsh. Deep models of interactions across sets." ]
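Much of the thread above concerns pi-SGD training and proper test-time inference. The sketch below is our own minimal rendering of that recipe under assumed settings (a GRU regressor on a toy variance task; none of the names or hyperparameters are the authors'): each training step forwards one fresh random permutation, and prediction averages the model over several sampled permutations, in the spirit of the paper's "infr samples" setting.

```python
# Illustrative pi-SGD sketch (hypothetical names, not the authors' code).
import torch
import torch.nn as nn

class SeqRegressor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.rho = nn.Linear(hidden, 1)  # a simple rho on top of the pooled state

    def forward(self, x):                # x: (batch, length, 1)
        _, h = self.rnn(x)
        return self.rho(h[-1]).squeeze(-1)

def shuffle(x):
    """Forward a random reordering of the sequence positions."""
    return x[:, torch.randperm(x.size(1)), :]

model = SeqRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(0, 10, (64, 5, 1)).float()
y = x.var(dim=1, unbiased=False).squeeze(-1)   # toy "variance" task

for _ in range(10):                    # pi-SGD: one fresh permutation per step
    loss = ((model(shuffle(x)) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                  # test time: average over permutations
    pred = torch.stack([model(shuffle(x)) for _ in range(20)]).mean(dim=0)
```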
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2019_BJluy2RcFm", "BklEWNrmC7", "SJxGsKnlA7", "iclr_2019_BJluy2RcFm", "B1ey5ZDp2X", "S1xNEC4q2X", "HklWWHbbT7", "HJlOsS79pm", "S1xNEC4q2X", "iclr_2019_BJluy2RcFm", "iclr_2019_BJluy2RcFm" ]
iclr_2019_BJlxm30cKm
An Empirical Study of Example Forgetting during Deep Neural Network Learning
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a ``forgetting event'' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
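A toy rendering of the forgetting-event statistic this abstract defines, using our own illustrative names and a hand-made correctness history rather than the paper's code: a forgetting event for example i is a transition from correctly to incorrectly classified between consecutive evaluations.

```python
# Illustrative sketch of counting forgetting events (hypothetical names).
import numpy as np

def count_forgetting_events(acc_history):
    """acc_history: (num_epochs, num_examples) boolean matrix of per-example
    correctness. Returns per-example counts of correct -> incorrect transitions."""
    prev = acc_history[:-1].astype(int)
    curr = acc_history[1:].astype(int)
    return ((prev == 1) & (curr == 0)).sum(axis=0)

# Toy history for 4 examples over 6 epochs:
hist = np.array([
    [0, 0, 1, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 1],   # example 0 is forgotten here
    [1, 1, 1, 1],
    [1, 0, 1, 1],   # example 1 is forgotten here
    [1, 1, 1, 1],
], dtype=bool)
events = count_forgetting_events(hist)
print(events)                              # [1 1 0 0]
unforgettable = np.where(events == 0)[0]   # learned and never forgotten (2 and 3)
```

Examples with a count of zero that also end up learned are the "unforgettable" ones the abstract says can often be removed from the training set without hurting generalization.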
accepted-poster-papers
This paper is an analysis of the phenomenon of example forgetting in deep neural net training. The empirical study is the first of its kind and features convincing experiments with architectures that achieve near state-of-the-art results. It shows that a portion of the training set can be seen as support examples. The reviewers noted weaknesses such as in the measurement of the forgetting itself and the training regimen. However, they agreed that their concerns were addressed by the rebuttal. They also noted that the paper is not forthcoming with insights, but found enough value in the systematic empirical study it provides.
test
[ "S1gAKB7QC7", "rklya7Xm0Q", "rkl2xZmXCQ", "SyxkXUThiQ", "SkgKResg0X", "rygcHkH53m", "S1lWe2ceRQ", "ryeU8m5xRQ", "Skx4CMsha7", "BJgqy28cTQ", "ryxipBQcpX", "B1eemyXqp7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Thanks for your review and suggestions, your suggested additional experiments have strengthened the paper and we will acknowledge them in the paper, if accepted. Applying some of our results towards solving catastrophic forgetting is one of the promising directions we hope to investigate in the future. One of the paths we are currently investigating is whether we can build focused memories of representative examples from previous tasks. Nonetheless, we believe our current analysis to be general, and, as such we keep the hope that our results could potentially be helpful in an even larger set of problems.", "Thank you for your review and suggestions. \n\nWe performed two additional experiments in CIFAR-10 and have presented the results in the updated supplementary. We are happy to include any parts that the reviewer finds helpful in the main paper.\n\n1. We corrupt all training images with additive Gaussian noise with mean 0 and increasing standard deviation (std 0.5, 1, 2, 10), and track the forgetting events during training as usual. Note that we add the noise after a channel-wise standard normalization step of the training images (zero mean, unit variance). Therefore, noise with standard deviation of 2 has twice the standard deviation of the unperturbed training data.\n\nWe present the results in Figure 11 in Appendix 10. We observe that adding increasing amount of noise decreases the amount of unforgettable examples and increases the amount of examples in the second mode of the forgetting distribution.\n\n2. We follow the label noise experiments presented in Figure 3, and augment only random 20% of the training data with additive Gaussian noise (mean 0, std 10). We present the results of comparing the forgetting distribution of the 20% of examples before and after pixel noise was added in Figure 12 (Left) in Appendix 10. We observe that the forgetting distribution under pixel noise resembles the one under label noise. It is a very interesting observation that we plan to investigate in the future.\n\nWe agree that it is important to follow-up with a dataset like Imagenet and will pursue this direction in our future work.", "Thank you for all of your important remarks -- they have substantially contributed to improving the paper and we will make sure to acknowledge it in the final version of the paper, if accepted.", "This paper studies the forgetting behavior of the training examples during SGD. Empirically it shows there are forgettable and unforgettable examples, unforgettable examples are like \"support examples\", one can achieve similar performance by training only on these \"support examples\". The paper also shows this phenomenon is consistent across different network architectures.\n\nPros:\nThis paper is written in high quality, clearly presented. It is original in the sense that this is the first empirical study on the forgettability of examples in during neural network training.\n\nComments and Questions on the experiment details:\n1. Is the dataset randomly shuffled after every epoch? One concern is that if the order is fixed, some of the examples will be unforgettable simply because the previous batches have similar examples , and training the model on the previous batches makes it good on some examples in the current batch.\n2. It would be more interesting to also include datasets like cifar100, which has more labels. The current datasets all have only 10 categories.\n3. An addition figure can be provided which switches the order of training in figure 4b. 
Namely, start with training on b.2.\n\nCons:\nLack of insight. Subjectively, I usually expect empirical analysis papers to either come up with unexpected observations or provide guidance for practice. In my opinion, the findings of this work are within expectation, and there is a gap for practice.\n\nOverall this paper is worth publishing for the systematic experiments, which empirically verify that there are support examples in neural networks.", "Thank you for the clarification and extra experiments on CIFAR-100. \nOverall, this is a paper of high quality: the experiments are complete and the paper is well written. I'm increasing the score to 7.\nI'm not giving a higher score because I think the impact of this paper on solving the catastrophic forgetting problem seems limited.", "UPDATE 2 (Nov 19, 2018): The paper has improved very substantially since the initial submission, and the authors have addressed almost all of my comments. I have therefore increased my score to an 8 and recommend acceptance.\n------------------------------------------------------------------------------------------------------------------------------\n\nUPDATE (Nov 16, 2018): In light of the author response, I have increased my score to a 6.\n------------------------------------------------------------------------------------------------------------------------------\n\nThis paper aims to analyze the extent to which networks learn to correctly classify specific examples and then “forget” these examples over the course of training. The authors provide several examples of forgettable and unforgettable examples, demonstrating, among other things, that examples with noisy labels are more forgettable and that a reasonable fraction of unforgettable examples can be removed from the training set without harming performance. \n\nThe paper is clearly written, and the work is novel -- to my knowledge, this is the first investigation of example forgetting over training. There is an interesting and likely important set of ideas here, and portions of the paper are quite strong -- in particular, the experiment demonstrating that examples with noisy labels are more forgettable is quite nice. However, there are several experimental oversights which make this paper difficult to recommend for publication in its current form.\n\nMajor points:\n\n1) The most critical issue is with the measurement of forgetting itself: the authors do not take into account the chance forgetting rate in any of their experiments. Simply due to chance, some examples will be correctly labeled at some point in training (especially in the datasets analyzed, which only contain 10 classes). This makes it difficult to distinguish whether a “forgotten” example was actually ever learned in the first place. In order to properly ground this metric, measurements of chance forgetting rates will be necessary (for example, what are the forgetting rates when random steps are taken at each update step?). \n\n2) Were the networks trained on MNIST, permutedMNIST, and CIFAR-10 trained for the same number of epochs? Related to point 1, the forgetting rate should increase with the number of epochs used in training, as the probability of each example being correctly classified should increase. 
If the CIFAR-10 models were trained for more epochs, this would explain the observation that more CIFAR-10 examples were “forgettable.”\n\n3) In the experiment presented in Figure 4b, it is difficult to tell whether the never forgotten set suffers less degradation in the third training regime because the examples were never forgotten or because the model had twice as much prior experience. Please include a control where the order is flipped (e.g., forgotten, never forgotten, forgotten in addition to the included never forgotten, forgotten, never forgotten order currently present).\n\n4) The visual inspection of forgettable and unforgettable examples in Figure 2 is extremely anecdotal, and moreover, does not even appear to clearly support the claims made in the paper.\n\nMinor points:\n\n1) In the discussion of previous studies which attempted to assess the importance of particular examples to classification decisions, a citation to [1] should be added. \n\n2) The point regarding similarity across seeds is absolutely critical (especially wrt major comment 1), and should be included earlier in the paper and more prominently.\n\n3) The histograms in Figure 1 are misleading in the cropped state. While I appreciate that the authors included the full histogram in the supplement, these full histograms should be included in the main figure as well, perhaps as an inset.\n\n4) The inclusion of a space after the commas in numbers (e.g., 50, 245) is quite confusing, especially when multiple numbers are listed as in the first line on page 4.\n\n[1] Koh, Pang Wei and Percy Liang. “Understanding Black-box Predictions via Influence Functions.” ICML (2017).\n", "Thank you for providing the additional experiments and updating the text. The new section on \"Forgetting by chance\" is very nice and the multiple runs for Figure 4 make the point much more convincingly. \n\nOverall, the paper has improved dramatically since the initial submission, and I appreciate the authors' effort to provide additional controls to clarify and provide additional substantiation for the claims made in the paper. The observations in this work are significant and novel, and as such, I am raising my score to an 8, and clearly recommend acceptance to ICLR.", "Thank you for the response and for the additional comments. Please find our responses below:\n\n1) We’ve included the histogram of forgetting events under the true gradient steps in Figure 12 in the updated Appendix 11. We also included a discussion about confidence bounds in the paragraph “Stability across seeds” in Section 4 and we created a new paragraph “Forgetting by chance” to discuss the new results.\n\n3) We’ve updated Figure 4 with the mean and standard errors over 5 runs of the experiments. In each run, we randomly sample the ‘never forgotten’ set and the ‘forgotten at least once’ set from all of the examples of their respective kind. The initial stability of the forgotten set in the first half of c.3 is reproducible. This is an interesting observation that we plan to investigate in the future.\n\n4) The examples in the supplemental figure are the least and most forgotten examples of each class, when all examples are sorted by number of forgetting events (ties are broken randomly). We clarified this in Appendix 13 in the updated paper.\n", "Thank you for your response and the additional experiments provided. 
Please find my comments below:\n\n1) Both of the additional experiments (Appendices 11 and 12) are quite nice and provide clear evidence that the results observed are not merely due to chance forgetting. For Figure 12, please include a comparison to the histogram of forgetting events under true gradient steps as well. In addition, I could not find discussion of chance forgetting in the manuscript itself. Please include several sentences discussing both of these experiments in the main text (it's fine to leave the figures and details in the appendix).\n\n2) Thank you for the clarification.\n\n3) Thank you for including the additional ordering in Figure 4. While these experiments definitely show that the degradation in section 2 is greater for the forgotten set than the never forgotten set, it's interesting that the forgotten set is relatively stable for the first half of c.3, such that the difference between c.3 and b.3 is only present between epochs 50 and 60. I wonder if this is simply due to chance in the training run. It would be helpful to redo this experiment once more with multiple runs and error bars to assess whether this is real or simply an artifact.\n\n4) Thanks for including additional examples in the supplemental figure. Just to clarify, were these examples chosen randomly or hand-selected? \n\nIn light of the updated results, I have increased my score to a 6. Should the authors include a new version of Figure 4 with multiple runs and address the other post-rebuttal comments, I would be happy to further increase my score. \n", "This is an excellent analysis paper of a very interesting phenomenon in deep neural networks.\n\nQuality, Clarity, Originality:\nAs far as I know, the paper explores a very relevant and original question -- studying how the learning process of different examples in the dataset varies. In particular, the authors study whether some examples are harder to learn than others (examples that are forgotten and relearned multiple times through learning). We can imagine that such examples are \"support vectors\" for neural networks, helping define the decision boundary.\n\nThe paper is very clear and the experiments are of very high quality. I particularly appreciated the effort of the authors to use architectures that achieve close to SOTA on all datasets to ensure conclusions are valid in this setting. I also thought the multiple repetitions and analysing rank correlation over different random seeds was a good additional test.\n\nSignificance\nThis paper has some very interesting and significant takeaways.\nOne of the other experiments I thought was particularly insightful was the comparison of the effect on test error of removing examples that aren't forgotten versus examples that are forgotten more often. In summary, the \"harder\" examples are more crucial for defining the right decision boundaries. I also liked the experiment with noisy labels, showing that this results in networks forgetting faster.\n\nMy one suggestion would be to try this experiment with noisy *data* instead of noisy labels, as we are especially curious about the effect of the data (as opposed to a different labelling task).\n\nI encourage the authors to follow up with a larger-scale version of their experiments. 
It's possible that for a harder task like Imagenet, a combination of \"easy\" and \"hard\" examples might be needed to enable learning and define good decision boundaries.\n\nI argue strongly for this paper to be accepted to ICLR; I think it will be of great interest to the community.", "Thanks for your interesting review. We try to address your remarks below:\n\n1) We randomly shuffle all datasets at the start of each epoch.\n\n2) As suggested, we investigated forgetting in CIFAR-100. We show the detailed results in Appendix 14 of the updated paper. In short, we observe that about 8% of examples in CIFAR-100 are unforgettable, which is the lowest percentage out of all investigated datasets: CIFAR-100 contains 10 times fewer examples per class (500 examples per class) than CIFAR-10 or the MNIST datasets, making each image all the more useful for the learning problem.\n\nUnexpectedly, we observed that the distribution of forgetting events in CIFAR-100 resembles the distribution of forgetting events in the noisy CIFAR-10 (with 20% randomly changed labels). This led us to suspect that a portion of CIFAR-100 examples could have noisy labels. Upon visualization of the most forgotten examples in CIFAR-100, we discovered that there are several images that appear under multiple labels, introducing noise to the dataset and possibly diminishing the proportion of unforgettable examples.\n\nFor completeness, we added the removal experiments from Figure 5 (Left) for CIFAR-100 to Appendix 14. The results align with those from the other datasets -- we are able to remove all unforgettable examples and maintain generalization performance, while outperforming a random removal baseline.\n\n3) We have included the experiment in the main paper in Figure 4 (right). Note that the \"never forgotten\" set continues to suffer from less degradation when training on the \"forgotten at least once\" set.\n", "Thanks for your detailed review. We tried to improve the paper according to your comments:\n\n-- Major points:\n\n1) We do acknowledge the importance of considering the possibility of forgetting occurring by chance, suggesting the need for confidence bounds on the number of forgetting events. Before addressing it with additional experiments, we wish to point out that the paper in its current form suggests that it is highly unlikely for the ordering produced by the metric to be the by-product of another unrelated random cause:\n\n1/ The correlation between the orderings obtained from two sets of 5 random seeds is 97.6%. We will highlight this fact more prominently in the paper (according to your minor point 2).\n2/ Removing unforgettable examples has a stronger effect than removing randomly chosen examples, suggesting that the vast majority of removed examples with low forgetting events are not picked due to some unrelated random phenomenon.\n \nWe followed your interesting suggestion and applied random steps to collect chance forgetting events on CIFAR-10. The results are shown in Appendix 11 of the updated paper. We report the histogram of ``chance forgetting events'' (please see the text in the paper for more details) averaged over 5 seeds. This gives an idea of the chance forgetting rate across examples. In this setting, examples are being forgotten “by chance” at most twice and most of the time less than once. 
We are happy to include parts of that section in the main text if it answers your concerns, as we believe it makes the paper stronger.\nWe also ran the original experiment on 100 seeds to devise 95% confidence bounds on the average (over 5 seeds) number of forgetting events per example (see Appendix 12). The confidence interval of the least forgotten examples is tight, confirming that examples with a small number of forgetting events can be ranked confidently.\n\n2) We trained on all datasets for the same number of epochs (200) to study the number of forgetting events. We’ll clarify this in the paper.\n\n3) Not including the figure with the opposite alternating sequence of tasks was an oversight (we intended to include it in the supplementary). We have now included it in the main paper in Figure 4 (right). Note that the “never forgotten” set continues to suffer from less degradation when training on the “forgotten at least once” set.\n\n4) We have updated Figure 2 to include a forgettable and unforgettable example from each class, and have included 12 more examples per class in the supplementary (Figure 14). Our main claim is that the unforgettable examples are supported by other examples in the training set, and thus can be removed without impacting generalization. The visualization shows that the unforgettable examples indeed are prototypical of their class (e.g. unobstructed full view of the entire object, commonly observed background), especially when compared to the forgettable examples, which contain more peculiar features (e.g. obstructed view of object or only parts of the object, uncommon color or context).\n\n-- Minor points\n\n1) We thank the reviewer for pointing us to this work and have included it in the discussion (Section 2 / Paragraph 1)\n2) We have moved this discussion to Section 4 where we mention experimental results and mentioned the finding at the end of the Introduction.\n3) We have updated Figure 1 to include the full histograms.\n4) We’ve updated all numbers to improve readability.\n" ]
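The removal experiments discussed in these responses can be summarized in a short sketch; the forgetting counts below are simulated and the retraining call is a commented placeholder, so everything here is an assumption-laden illustration rather than the authors' pipeline: rank examples by forgetting events, drop the least-forgotten fraction, and compare against dropping a random subset of the same size.

```python
# Illustrative removal-experiment sketch (hypothetical names, simulated counts).
import numpy as np

rng = np.random.default_rng(0)
events = rng.poisson(1.5, size=50_000)        # stand-in for per-example counts

def keep_indices_by_forgetting(events, remove_frac):
    order = np.argsort(events)                # least forgotten first
    n_remove = int(remove_frac * len(events))
    return order[n_remove:]                   # keep the more-forgotten examples

def keep_indices_random(events, remove_frac, rng):
    n_keep = len(events) - int(remove_frac * len(events))
    return rng.choice(len(events), size=n_keep, replace=False)

kept = keep_indices_by_forgetting(events, remove_frac=0.3)
baseline = keep_indices_random(events, remove_frac=0.3, rng)
# retrain(model, dataset.subset(kept)) vs retrain(model, dataset.subset(baseline))
```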
[ -1, -1, -1, 7, -1, 8, -1, -1, -1, 9, -1, -1 ]
[ -1, -1, -1, 4, -1, 4, -1, -1, -1, 5, -1, -1 ]
[ "SkgKResg0X", "BJgqy28cTQ", "S1lWe2ceRQ", "iclr_2019_BJlxm30cKm", "ryxipBQcpX", "iclr_2019_BJlxm30cKm", "ryeU8m5xRQ", "Skx4CMsha7", "B1eemyXqp7", "iclr_2019_BJlxm30cKm", "SyxkXUThiQ", "rygcHkH53m" ]
iclr_2019_BJx0sjC5FX
RNNs implicitly implement tensor-product representations
Recurrent neural networks (RNNs) can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities (analogies). Such regularities motivate our hypothesis that RNNs that show such regularities implicitly compile symbolic structures into tensor product representations (TPRs; Smolensky, 1990), which additively combine tensor products of vectors representing roles (e.g., sequence positions) and vectors representing fillers (e.g., particular words). To test this hypothesis, we introduce Tensor Product Decomposition Networks (TPDNs), which use TPRs to approximate existing vector representations. We demonstrate using synthetic data that TPDNs can successfully approximate linear and tree-based RNN autoencoder representations, suggesting that these representations exhibit interpretable compositional structure; we explore the settings that lead RNNs to induce such structure-sensitive representations. By contrast, further TPDN experiments show that the representations of four models trained to encode naturally-occurring sentences can be largely approximated with a bag of words, with only marginal improvements from more sophisticated structures. We conclude that TPDNs provide a powerful method for interpreting vector representations, and that standard RNNs can induce compositional sequence representations that are remarkably well approximated by TPRs; at the same time, existing training tasks for sentence representation learning may not be sufficient for inducing robust structural representations.
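As a worked example of the TPRs this abstract builds on, the toy construction below binds filler vectors to role vectors via outer products and sums the results; dimensions, names, and the orthonormal-role assumption are ours, not the paper's TPDN code.

```python
# Illustrative TPR sketch (hypothetical names): bind fillers to roles, then unbind.
import numpy as np

rng = np.random.default_rng(0)
d_f, d_r, n = 8, 6, 4                     # filler dim, role dim, sequence length
fillers = rng.standard_normal((n, d_f))   # e.g. word embeddings
# Orthonormal position roles: the first n rows of a random orthogonal matrix.
roles = np.linalg.qr(rng.standard_normal((d_r, d_r)))[0][:n]

# TPR: sum_i filler_i (outer product) role_i, a (d_f, d_r) matrix that can be
# flattened into a single vector representation of the sequence.
T = sum(np.outer(fillers[i], roles[i]) for i in range(n))

# Unbinding: with orthonormal roles, T @ role_i recovers filler_i exactly.
recovered = T @ roles[2]
assert np.allclose(recovered, fillers[2], atol=1e-8)
print(np.abs(recovered - fillers[2]).max())
```

With non-orthonormal roles the unbinding is only approximate, which is the regime a TPDN fitted to an RNN's learned representations must cope with.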
accepted-poster-papers
AR1 asks for the paper to be more standalone and easier to read. As this comment comes from a reviewer who is very experienced in tensor models, it is highly recommended that the authors make further efforts to make the paper easier to follow. AR2 is concerned about the manually crafted role schemes and the discrepancy between the results of these schemes and of the RNNs. To this end, the authors hypothesized further reasons as to why this discrepancy occurs. AC encourages the authors to make further efforts to clarify this point without overstating the ability of tensors to model RNNs (it would be interesting to see where these schemes and RNNs differ). Lastly, AR3 seeks more clarification on contributions. While the paper is not groundbreaking, it offers a starting point for relating tensors and RNNs. Thus, AC recommends an accept. Kindly note that tensor outer products have been used heavily in computer vision, e.g.:\n- Higher-Order Occurrence Pooling for Bags-of-Words: Visual Concept Detection by Koniusz et al. (e.g. section 3 considers a bi-modal outer tensor product for combining multiple sources: one source can be considered a filter, another a role (similar to Smolensky et al. 1990), e.g. a spatial grid number refining the local role of a visual word. This is further extended to multi-modal cases (multiple filter or role modes etc.))\n- Multilinear image analysis for facial recognition (e.g. so-called tensor-faces) by Vasilescu et al.\n- Multilinear independent components analysis by Vasilescu et al.\n- Tensor decompositions for learning latent variable models by Anandkumar et al.\nKindly make connections to these works in your final draft (and to more prior works).
train
[ "H1gY4uvYJE", "SklIQYXqCQ", "HyxC1Km907", "SkeR9OQ50X", "rJl1E7lHTX", "r1eg-zgrT7", "rkgLvpJraX", "S1x9a0e2hX", "S1l7y8esh7", "HJgWEWbPnQ" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have created an anonymized webpage with interactive demos to accompany this paper. The page can be found here:\nhttps://tpdn-iclr.github.io/tpdn-demo/tpr_demo.html", "Thank you again for the suggestions. We have uploaded a new version of the paper that incorporates the changes discussed in our response.", "Thank you again for your comments. We have uploaded a new version of the paper that incorporates the changes discussed in our response.", "We have uploaded a revised version of the paper that incorporates the change mentioned above.", "Thank you for the feedback! Here are replies to the concerns you raise:\n\nPoint 1:\nWe will edit the introduction to make the contributions clearer. \n\nPoint 2:\nSome of these details are available in the appendices, and we will add the ones that are not already there. We will also make it clearer in the main text that such information is available in the appendices. \n\nWe will also clarify our discussion of the results in Table 2. We do not have a strong hypothesis for why Skip-thought is approximated less well than the other models. For the other models, our conjecture is that the models’ representations consist of a combination of a bag-of-words representation and some structural information that is occasionally, but not reliably, present as well. This conjecture is consistent with the finding that these representations could be approximated well, though not perfectly, with a bag-of-words role scheme. \n\nWe argue that such representations arise because the training tasks for these sentence embedding models do not depend much on the structure of the input; our results in Table 3b indicate that only structure-sensitive training tasks will induce models to learn structured representations. \n\nHowever, we will also clarify the other two possible explanation for the results in Table 2, namely that the models could be well-approximated by some role scheme that we did not test, or that the models are using some systematic but non-TPR structural representation.\n\nPoint 3:\nTables 9 and 10 show the actual performance on downstream tasks of TPDNs trained to approximate the sentence embedding models. We did not emphasize these results, however, because we are presenting the TPDN as a tool for analyzing existent models, not as a new architecture for performing tasks of interest. Therefore, the most relevant metrics are ones showing how the TPDN approximates existing models, not how it performs in its own right. For this same reason, we have not tried training the TPDN end-to-end on these specific tasks rather than training it to approximate existing models.\n\nPoint 4:\nYes, we have considered applying the TPDN to other models. \n\nFor example, TPDNs might be used to analyze transformer models by seeing whether the representations generated for each word via self-attention can be approximated as tensor product representations based on the structure of the surrounding context. We are further interested in expanding the domain of inquiry to computer vision to see if convolutional neural networks learn structured representations of scenes that can be approximated by tensor product representations. \n\nFinally, we hope that, by making our code available on GitHub, we will enable others to use this technique to analyze the models they are interested in.\n\n", "Thank you for these comments! Here are replies to the specific concerns you discuss:\n\nPoint 1:\nThere are two issues raised here. The first is the limitation of using handcrafted role schemes. 
What this paper attempts to do is explicit, discrete model comparisons between different candidate role schemes. We take this to be a necessary first step on the way to automatically exploring the space of logically possible role schemes, and thus \"learning\" the optimal role scheme, thereby ruling out this kind of omission. \n\nHowever, such a project is an ambitious goal, and we feel it is important to establish the basic methodology, and some basic results, first. Figure 3c and Table 3b show cases where handcrafted role schemes have succeeded near-perfectly, serving as a proof of concept that, given the right role scheme (whether it be hand-crafted or learned), TPDNs can reveal striking levels of systematic structure in RNN representations. \n\nThe second issue is the possibility that RNNs do use a systematic structural representation whose representational space cannot be approximated with a TPR. We agree that this is a possibility; although TPRs are capable of capturing complex structural relations, they rely upon certain assumptions about the structure of the representational space. RNNs are not constrained in any way that enforces these assumptions - indeed, this fact is partly why we find the successful TPDN approximations so striking in Figure 3c and Table 3b. \n\nIn the final version of the paper, we will emphasize the possibility that RNNs may sometimes use non-TPR structural representations. \n\nPoint 2:\nThe MSE is informative on a relative level: It allows us to compare role schemes within a model. To allow comparisons across models, we normalize by dividing by the random-vector performance to factor out overall vector magnitude differences across different models. The other metrics besides MSE allow for absolute measurements of performance. We will clarify the contributions of these different metrics.\n\nPoint 3:\nWe will edit the paper to clarify the three possibilities for why the alignments in Table 2 are not perfect. \n\nTwo of the possibilities, as discussed in our response to your first point, are that the RNNs are using some role scheme other than the ones we tested, or that the RNNs are using some structural representation that cannot be approximated with any tensor product representation. \n\nHowever, we argue for a third possibility: the representation can be characterized as a combination of a bag-of-words representation, plus some incomplete (not always encoded) structural information. Such a result is consistent with our observation that bag-of-words roles yield a strong but imperfect approximation for the sentence embedding models. We will edit the text to emphasize that this is merely a conjecture and that the other two possibilities must also be considered. \n\nFinally, we agree with your comment that these results do not indicate that RNNs *only* learn tensor-product representations, but we had not intended to make that claim (we meant the title to be read as “RNNs *sometimes* implement tensor-product representations”).\n", "Thank you for the feedback. We believe it would be difficult to make a paper completely stand-alone, but it is, indeed, not our goal to discuss sentence/sequence embeddings per se (note that the models we use are sentence models, not document models), but, rather, to describe a general analysis method applied to the special case of these models. 
\n\nTo help make the paper understandable with less context, we will integrate a very short description of what we currently refer to as \"the standard left-to-right sequence-to-sequence setup\" on page 3.\n", "This paper is not standalone. A section on the basics of document analysis would have been nice.", "The work proposes Tensor Product Decomposition Networks (TPDN) as a way to uncover the representation learned in recurrent neural networks (RNNs). TPDN trains a Tensor Product Representation, which additively combines tensor products of role (e.g., sequence position) embeddings and filler (e.g., word) embeddings to approximate the encoding produced by RNNs. TPDN as a result sheds light on inspecting and interpreting the representations learned by RNNs. The authors suggest that the structures captured in RNNs are largely compositional and can be well captured by TPRs without recurrence and nonlinearity.\n\npros:\n1. The paper is mostly clearly written and easy to follow. The diagrams shown in Figure 2 are illustrative;\n2. TPDN offers headway in looking into and interpreting the representations learned in RNNs, which have remained largely incomprehensible;\n3. The analysis provided in section 4 is interesting and insightful. In particular, how the training task influences the kinds of structural representations learned. \n\n\ncons:\n1. The method relies heavily on these manually crafted role schemes as shown in section 2.1; It is unclear whether the gap in the approximation of TPRs to the encodings learned in RNNs is due to inaccurate role definitions or to the fact that RNNs learn more complex structural dependencies which TPRs cannot capture;\n2. The MSE approximation errors shown in Table 1 are not informative. How should these numbers be interpreted? Why normalize by dividing by the MSE from training a TPDN on random vectors?\n3. The alignment between predictions using RNN representations and TPDN approximations shown in Table 2 is far from perfect, which would contradict the claim that RNNs only learn tensor-product representations. ", "This paper presents an analysis of the structure modeling abilities of popularly-used RNN models by designing Tensor Product Decomposition Networks to approximate the encoder. The results show that the representations exhibit interpretable compositional structure. To provide better understanding, the paper evaluates the performance on synthesized digit sequence data as well as several sentence-encoding tasks.\n\nPros:\n1. The paper is well-written and easy to follow. The design of the TPDN and corresponding settings (including what a filler is and what roles are included) for experiments are understandable. It makes a good point at the end of the paper (section 4) on how these analyses contribute to further design of RNN models, which seems useful.\n2. The experiments are extensive and support the claims. Experiments are conducted and compared not only on synthetic data but also on several popularly-used datasets and models. The addition of an analogy dataset further demonstrates the effect of TPDN on modeling structural regularities.\n\nCons:\n1. More detailed and extensive discussion on the contribution of the paper should be included in the introduction part to help readers understand the point of investigating TPDNs on RNN models.\n2. Some details are missing to better understand the construction. For example, on page 4, Evaluation, it is unclear how the TPDN encoder is trained, specifically, which parameters are updated? What's the objective for training? 
It is also unclear whether the models in Figure 3(c) use a bidirectional, unidirectional, or tree decoder. In Section 3, it could be better to roughly introduce each of the existing 4 models. How the TPDNs are trained for these 4 sentence encoding models needs to be further illustrated. More reasons should be discussed for the results in Table 2 (why bag-of-words roles seem to be adequate, why skip-thought cannot be approximated well).\n3. It could be better to provide the actual performance (accuracy) given by TPDN on the 4 existing tasks.\n4. Further thoughts: have you considered applying these analyses to other models besides RNNs?" ]
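The training question raised in the last review (which parameters are updated, and under what objective) is addressed in the author responses: the TPDN's own embeddings are fit to minimize the MSE against the frozen encoder's vectors. Below is a minimal PyTorch sketch of that setup under assumed sizes and a hypothetical left-to-right role scheme; it illustrates the objective, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class TPDN(nn.Module):
    """Learnable filler/role embeddings whose summed outer products, followed
    by a linear map, approximate a frozen encoder's vectors (sizes illustrative)."""
    def __init__(self, n_fillers, n_roles, f_dim=20, r_dim=10, enc_dim=200):
        super().__init__()
        self.filler = nn.Embedding(n_fillers, f_dim)
        self.role = nn.Embedding(n_roles, r_dim)
        self.out = nn.Linear(f_dim * r_dim, enc_dim)

    def forward(self, filler_ids, role_ids):
        f, r = self.filler(filler_ids), self.role(role_ids)  # (batch, seq, dim)
        bound = torch.einsum("bsf,bsr->bfr", f, r)           # summed outer products
        return self.out(bound.flatten(1))

model = TPDN(n_fillers=10, n_roles=6)
opt = torch.optim.Adam(model.parameters())
filler_ids = torch.randint(10, (32, 6))     # e.g. digit sequences
role_ids = torch.arange(6).expand(32, 6)    # left-to-right positional roles
rnn_encodings = torch.randn(32, 200)        # stand-in for the frozen RNN's vectors
loss = nn.functional.mse_loss(model(filler_ids, role_ids), rnn_encodings)
opt.zero_grad(); loss.backward(); opt.step()
```

Per the responses above, the reported scores then normalize this MSE by that of a TPDN fit to random vectors, which factors out vector-magnitude differences and makes numbers comparable across models.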
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2019_BJx0sjC5FX", "rJl1E7lHTX", "r1eg-zgrT7", "rkgLvpJraX", "HJgWEWbPnQ", "S1l7y8esh7", "S1x9a0e2hX", "iclr_2019_BJx0sjC5FX", "iclr_2019_BJx0sjC5FX", "iclr_2019_BJx0sjC5FX" ]
iclr_2019_BJxgz2R9t7
Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach
Recent efforts to combine Representation Learning with Formal Methods, commonly known as Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem. Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem. The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method.
accepted-poster-papers
This paper introduces a new graph neural network architecture designed to learn to solve the Circuit SAT problem, a fundamental problem in computer science. The key innovation is the ability to use the DAG structure as an input, as opposed to typical undirected (factor graph style) representations of SAT problems. The reviewers appreciated the novelty of the approach as well as the empirical results provided that demonstrate the effectiveness of the approach. The writing is clear. While the comparison with NeuroSAT is interesting and useful, there is no comparison with existing SAT solvers that are not based on learning methods. So it is not clear how large the gap to the state of the art is. Overall, I recommend acceptance, as the results are promising and this could inspire other researchers working on neural-symbolic approaches to search and optimization problems.
train
[ "BkekmsMohm", "HklvKK14aX", "Skx8EY1Nam", "rJgGgF1V6Q", "SJg8puyN6m", "HyeLzuJVTX", "rJxGBgb527", "HyevmThdhX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a graph neural network architecture that is designed to use the DAG structure in the input to learn to solve Circuit SAT problems. Unlike graph neural nets for undirected graphs, the proposed network propagates information according to the edge directions, using a deep sets representation to aggregate over predecessors of each vertex and GRUs to implement recurrent steps. The network is trained by using a \"satisfiability function\" which takes soft variable assignments computed by the network and applying a relaxed version of the circuit to be solved (replacing AND with softmax, OR with softmin, and NOT with 1 - variable value) to compute a continuous score that measures how satisfying the assignment is. Training is done by maximizing this score on a dataset of problem instances that are satisfiable. Results are shown on random k-SAT and graph coloring problems.\n\nThe paper is reasonably well-written and easy to follow. The idea of using the relaxed version of the circuit for training is nice. Combining ideas from DAG-RNNs and Deep Sets is interesting, although incremental.\n\nCriticisms:\n- How much does tailoring the network architecture to the DAG structure of the circuit actually help? A comparison to a regular undirected graph neural network on the circuit input without edge directions would be useful. In particular, since both edge directions are used in the current architecture but represented as two different DAGs, it naturally raises the question of whether a regular undirected graph neural net would also work well.\n- How does the proposed approach compare to the current state-of-the-art non-learning approaches to SAT (CDCL, local search, etc.)? There is a huge literature on SAT, and ignoring all that work and comparing to only NeuroSAT seems unjustified. Without such comparisons, it is hard to say what is the benefit learning approaches in general, and the specific approach in this paper, provide in this domain. Even basic sanity-check baselines, e.g., random search, can be valuable given that the domain is somewhat new to learning approaches.\n- One way to interpret the proposed approach is that it is learning to propose soft assignments that can be easily rounded. It would be good to compare to a Linear Programming relaxation-based approach that represents the SAT instance as an integer program with binary variables, relaxes the variables to be in [0,1], solves the resulting linear program, and rounds the solution. Do these approaches share the same failure modes, how does their performance differ, etc.\n- The proposed approach has an obvious advantage over NeuroSAT in that it has access to the circuit structure, in addition to the flat representation of the SAT instance. According to the paper, not providing the circuit structure to the proposed approach hurts its performance. It would be useful to devise an experiment where a modified version of NeuroSAT is given the circuit structure as an additional input to see whether that closes the gap between the approaches.\n", "- In figure 1 (a), what are x11, x12, etc?\nx represents the node feature vector that specify the nodes of the input DAG; in particular, x_{ij} is the j-th feature of the i-th node. As explained in the paper, in the Circuit-SAT problem, node feature vectors represent the type of node operation (i.e. AND, OR, NOT or VARIABLE) in the input circuit, represented as one-hot vectors.\n\n- Correctness guarantee:\nThis is a great point indeed that we have briefly mentioned in the paper. 
In our framework, an input circuit is deemed SAT if and only if the Solver network can produce an assignment that satisfies the Evaluator network. The test-time Evaluator network (or the train-time Evaluator network at low temperatures) mimics the exact behavior of the input circuit for continuous soft assignments; that is, if the soft assignment produced by the Solver network satisfies the test-time Evaluator network, its 0/1 hard counterpart will also satisfy the original circuit. Using this mechanism has two implications: (a) our framework does NOT produce false positives; if the input circuit is deemed SAT, it means we have already found a satisfying solution for it. (b) if the satisfiability value for an input circuit is less than 0.5, all we can say about its SAT status is \"unknown\"; in other words, our method does not provide any proof of unsatisfiability.\n\n- Clarifying \"SAT Solving\":\nWe have clarified this in the revised draft.\n\n- Clarifying \"solutions\":\nWe have clarified this in the revised draft.\n\n- min() < S_min() < S_max() < max():\nAgain the reviewer is correct and the ordering relation holds for all the inputs (a_1, ... a_n); we have clarified this in the revised draft.\n\n- The effect of temperature on the Evaluator network:\nAt the beginning of training, when the temperature is high, all the AND and OR gates in the Evaluator network (represented as S_min and S_max functions, respectively) act almost as an arithmetic mean, so the training can be seen as maximizing the average values over the soft assignments (or their negations) while the gradient signal propagates back through all paths in the circuit; this is the exploration phase. As the training gradually progresses and the temperature anneals toward zero, the S_min and S_max functions converge toward the min and max functions, respectively, which in turn mimic the behavior of AND and OR gates for soft assignments. At this stage, the gradient signal will, for the most part, travel back only through the active paths in the circuit; this is the exploitation phase of learning.\n\n- Proving (circuit is UNSAT iff S_\\theta <= 0.5 for all soft assignments):\nHere we give a sketch of the proof, but we are hoping to add an entire Appendix in the camera-ready version detailing the proof.\n\n(1) circuit is UNSAT if S_\\theta <= 0.5 for all soft assignments:\nThe proof can simply be achieved by contradiction.\n\n(2) S_\\theta <= 0.5 for all soft assignments if circuit is UNSAT:\nThe proof can be achieved by induction on the size (i.e. the number of gates) of the circuit: each time, isolate and remove the sink node of the circuit DAG (i.e. the last logical operator evaluated in the expression) and show that the output of the circuit is always less than or equal to 0.5 for all types of sink gates by assuming the statement of the theorem holds for the resulting sub-circuits, which have strictly smaller sizes than the original circuit. \n\nAs for false positives, yes, our model never produces false positives. Please refer to the above proof as well as the correctness explanation above.\n\n- Figure 2 explanation:\nYes, all the test cases are SAT instances and, as the reviewer mentioned, there are some SAT examples where neither of the two models can decode within the allowed T_max iterations. We suspect these examples belong to the region of the K-SAT instances that is very close to the SAT-UNSAT phase transition point. 
This region mostly contains the hardest K-SAT instances.\n\n- Real-world datasets:\nWe completely agree with the reviewer that one of the main benefits of using learning methods for SAT solving is the ability of these methods to adapt to the target distribution of specific domains. This is indeed one of our current, ongoing efforts to adapt our framework to specific real-world domains. Nevertheless, we should emphasize that in our experiments, despite being random, both the training and the test examples are drawn from the hardest region of the SAT problems (the area close to the SAT-UNSAT phase transition). This is achieved by using the data generation process proposed in the NeuroSAT paper.\n\n- The training time and testing time:\nThank you for bringing up this important point. We have included a new paragraph in Section 5.1 detailing the time complexity.", "We have been working to improve the clarity of the paper including the figures. We hope the final version will address all the clarity concerns.\n\nAs for pre-processing the input data, we only perform a CNF-to-Circuit conversion step as fully explained in Appendices B and C. As mentioned in the paper, although there are some problem domains where the input instances naturally come in the circuit format (e.g. circuit verification), in order to make a fair comparison with NeuroSAT, we decided to use the same CNF datasets that we used for NeuroSAT and as such we needed a pre-processing step to convert those CNFs to circuits. Nevertheless, we made sure this pre-processing step is O(N) in the worst case. As an added advantage, as mentioned in Appendix C, since our framework is capable of harnessing the circuit structure, domain-specific heuristics can be injected into the circuit structure during the pre-processing step - e.g. in graph k-coloring.", "-SAT as an Integer Linear Program (ILP):\nModeling the SAT problem as a (relaxed) ILP is a very interesting idea and there are some prior works on that in the literature. Nevertheless, such a methodology would require solving an optimization problem for every problem instance at test time. However, our proposed methodology is quite different (even though we also work with relaxed assignments): after training, our framework produces a recursive neural network (the Solver network) that can be run on test problem instances on GPU *without* needing to solve any optimization problem at test time. That said, one interesting idea would be to replace our Evaluator network (i.e. the relaxed circuit) with a network that encodes the relaxed ILP and study the effects of that on training the Solver network. Exploring different options for the Evaluator network is indeed a future direction on our agenda.\n\n-A modified version of NeuroSAT to take in circuit structure:\nOur understanding is that the main ingredients that make NeuroSAT NeuroSAT are (a) a graph neural network for bi-partite graphs to embed the input CNF and (b) training this network toward SAT classification. In order for NeuroSAT to consume circuit structure, one would need to replace the first part with another sophisticated graph neural network that can process and understand variable-sized and topologically-diverse DAGs (circuits). But that's exactly what we have developed in this paper: the DG-DAGRNN architecture. So while we can in theory replace a fundamental ingredient of NeuroSAT with our proposed model, we are not sure we can still call the resulting framework NeuroSAT and close the gap. 
In other words, upgrading NeuroSAT to understand circuit structure is a non-trivial task and in fact one of the main contributions of the present work.", "-Tailoring to DAG structure / directed vs undirected propagation: \nWe would like to emphasize that our experimental setup does NOT aim at comparing directed vs undirected message passing on graphs. In particular, any form of message passing on graphs by definition imposes (momentary) directions on the edges of the graph even if the underlying graph is undirected; that is, message passing is always directed. On the other hand, what we are contrasting in this paper is *sequential* propagation based on some specific node order vs *synchronous* propagation based on no order. Furthermore, we argue the \"specific order\" for sequential propagation cannot be just any random order, but it has to arise from the semantics of the problem. In particular, in the Circuit-SAT problem, the node order (and its reverse version) is induced by the order in which the logical operators are evaluated in the circuit (i.e. the topological order of the input DAG). In theory, given unbounded training data and training time, one should still be able to learn the target Circuit-SAT function while ignoring this order and using synchronous propagation, but in practice with finite data and time, the learning is intractable for general circuits. In fact, before fully developing our DG-DAGRNN framework, we experimented with synchronous propagation for general circuits, but we were not able to learn the SAT function. The reason is somewhat intuitive: if we want to consume general (non-flat) circuits, ignoring the evaluation order of operators (i.e. using synchronous propagation) adds an extra task of figuring out the correct expression structure on top of learning to solve the SAT problem itself, which makes the learning task far more difficult. And that's why providing this structure explicitly via the DG-DAGRNN framework makes a huge improvement. In contrast, synchronous propagation is NOT problematic for the CNF-SAT problem because the clauses in a flat CNF do not adhere to any specific order and can be evaluated in any order, and therefore, synchronous propagation works well in NeuroSAT, which only consumes CNFs. \n\n-Comparison against modern SAT solvers:\nThis is indeed a very reasonable concern; nevertheless, we should emphasize that neither our framework nor NeuroSAT lays any claim to being on par with modern SAT solvers at the moment. But that's not the goal here. This specific area of representation learning is relatively new and we are still in the feasibility study phase to see how much signal we can extract for SAT solving via deep learning. For practical purposes, however, our intuition is that a successful approach that can potentially beat the classical solvers would be a hybrid of both learned models and traditional heuristic search components. But before getting there, we would need to gain a good understanding of what kind of useful signals we can or cannot extract from the problem's structure via pure learning.\nThat said, we have made a time comparison with MiniSAT (a popular, highly-optimized solver for moderate-size problems). Even though MiniSAT runs faster per example, our model, being a neural network, is far more parallelizable and can solve many problems concurrently in a single batch. This would in turn make our method much faster than MiniSAT when applied to large sets of problems. 
We have included a new paragraph in the revised version describing this phenomenon. ", "We would like to thank all the reviewers for bringing up some important questions and for their detailed, constructive feedback. We have uploaded the first revised version of the paper addressing some of these concerns. In particular, the new draft includes:\n\n1) A revised version of Figure 1 to fix an error in the figure.\n2) A few clarifying statements to address some reviewers' concerns regarding clarity.\n3) A new paragraph detailing the time comparisons between the competing methods as well as the off-the-shelf MiniSAT solver.\n\nIn what follows, we will address the reviewers' questions and concerns in more detail.", "The authors of this paper investigate Neuro-Symbolic methods in the context of learning a SAT solver generalized to the Circuit-SAT problem. They use a reinforcement learning-inspired approach to demonstrate a framework that is capable of (unsupervised) learning by means of an end-to-end differentiable training procedure. Their formulation incorporates the solving of a given SAT problem into the architecture, meaning the algorithm is trained to produce a solution if a given problem is satisfiable. This is in contrast to previous similar work by Selsam et al. (2018), where the framework was trained as a SAT classifier. Their results outline the performance increase over the previous work (Selsam et al. 2018) on finding a given solution for a SAT problem, on in-sample and out-of-sample results.\n\nNeg: \nFigure descriptions are not very clear.\nWhen it comes to comparing the results, they do use a preprocessing step for their algorithm which they do not incorporate into the results.\n\nPros:\nClear outline of the data sets used for benchmarks.\nGood literature review, expressing in-depth knowledge of the current state-of-the-art formulations for the same/similar tasks.\nExtensive background section that explains the theoretical concepts and their architecture well.\nClear outline of the Solver, where the individual parts/networks are explained and justified in detail.\nVery well outlined argumentation for approaching this particular problem with the proposed method.\nThe experimental results are likewise easy to follow and show promising results for the proposed framework.\nThe proposed method is also novel and outperforms similar algorithms in the experimental evaluation.\n\n\nThe paper is very well written, proposes a novel Neuro-Symbolic approach to the classical SAT problem, and demonstrates promising results.\n ", "The paper makes a nice contribution to solving the Circuit-SAT problem from a Neuro-Symbolic approach, particularly: 1) a novel DAG embedding with a forward layer and a reverse layer that captures the structural information of a Circuit-SAT input. 2) Compared with Selsam et al.'s work on NeuroSAT, the proposed model in this paper, DG-DAGRNN, directly produces an assignment of variables, and the method is unsupervised and end-to-end differentiable. 3) Empirical experiments on random k-SAT and random graph k-coloring instances that support the authors' claim of better generalization ability.\n\nThe paper is lucid and well written; I would support its acceptance at ICLR. 
Though I have a few comments and questions for the authors to consider.\n\n- In figure 1 (a), what are x11, x12, etc?\n\n- When comparing the two approaches of Neuro-Symbolic methods, besides the angles of optimality and training cost, it is worth mentioning that the first one, based on classical algorithms, always has a correctness guarantee, while the second one (learning the entire solution from scratch) usually does not.\n\n- Section 4.1, as a pure decision problem, solving SAT means giving a yes/no answer (i.e., a classification); while for practical purposes, solving SAT means producing a model (i.e., a witness) of the formula if it is SAT. This can be misleading for some readers when the authors mention \"solving SAT\", and it would be clearer if the authors could make a distinction when using such terms.\n\n- Section 4.1, \"without requiring to see the actual SAT solutions during training\", again, the meaning of \"solutions\" is not very clear at this point. Readers may realize the experiments in the paper only train with satisfiable formulae from the subsequent description, so \"solutions\" indicates the assignments of variables. But it would be better to make it clear.\n\n- Section 4.1/The Evaluator Network, \"one can show also show that min() < S_min() <= S_max() < max()\", what is the ordering relation (i.e., < and <=) here? It is a bit confusing if a forall quantifier for inputs (a_1, ... a_n) is required here.\n\n- Section 4.1/The Evaluator Network, how does the temperature affect the results of R_G? It would be helpful to show their dynamics.\n\n- Section 4.1/Optimization, \"if the input circuit is UNSAT, one can show that the maximum achievable values for S_\\theta is 0.5\", it would be better to provide a brief description of how it is guaranteed. Also, this seems to be suggesting the DG-DAGRNN solver has no false positives, i.e., it will never produce a satisfiable result for unsatisfiable formulae? This would be interesting toward some semi-correctness if the answer is yes.\n\n- Section 5.1, are the testing data all satisfiable formulae? If yes, then Figure 2 shows there are a number of satisfiable formulae for which neither model can produce correct results -- is that a correct understanding of Figure 2? If not, then what is the ground truth?\n\n- I would love to see more experiments on SAT instances with a moderate number of variables but from real-world applications. It would be interesting to see how the model utilizes the rich structural information of instances from real applications (instead of randomly generated formulae).\n\n- The training time and testing time (per instance) are not reported in the experiments.\n" ]
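The exchange above describes the Evaluator network as a smoothed copy of the circuit: AND gates become a smooth min (S_min), OR gates a smooth max (S_max), NOT becomes 1 - x, and a temperature anneals the gates from near-arithmetic-mean behavior toward exact min/max. The sketch below uses the standard softmax-weighted form, which has exactly these two limits; the paper's precise S_min/S_max definitions and annealing schedule may differ.

```python
import torch

def s_max(a, t):
    """Smooth max over the last dim: near the arithmetic mean at high
    temperature t, converging to max (a soft OR gate) as t -> 0."""
    w = torch.softmax(a / t, dim=-1)
    return (w * a).sum(-1)

def s_min(a, t):
    """Smooth min (a soft AND gate); the mirror image of s_max."""
    w = torch.softmax(-a / t, dim=-1)
    return (w * a).sum(-1)

def soft_not(a):
    return 1.0 - a

# Evaluate (x1 AND NOT x2) OR x3 on a soft assignment in [0, 1].
x1, x2, x3 = torch.tensor([0.9, 0.1, 0.2])
t = 0.01  # low temperature: gates behave almost exactly like min/max
and_gate = s_min(torch.stack([x1, soft_not(x2)]), t)
output = s_max(torch.stack([and_gate, x3]), t)
print(output.item() > 0.5)  # deemed SAT only if the rounded assignment satisfies the circuit
```

At low temperature this evaluation tracks the hard circuit, which is why a soft assignment scoring above 0.5 rounds to a genuine satisfying assignment and the framework produces no false positives.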
[ 6, -1, -1, -1, -1, -1, 8, 7 ]
[ 5, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2019_BJxgz2R9t7", "HyevmThdhX", "rJxGBgb527", "BkekmsMohm", "BkekmsMohm", "iclr_2019_BJxgz2R9t7", "iclr_2019_BJxgz2R9t7", "iclr_2019_BJxgz2R9t7" ]
iclr_2019_BJxh2j0qYm
Dynamic Channel Pruning: Feature Boosting and Suppression
Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.
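As a rough illustration of the mechanism in this abstract (and detailed in the author responses below: each input channel is reduced to a scalar, a small auxiliary layer predicts output-channel saliencies, and only the most salient fraction is kept and amplified), here is a minimal PyTorch sketch. The layer sizes, ReLU saliency predictor, and dense masking are illustrative assumptions; a real implementation skips the suppressed channels' computation rather than masking it afterwards.

```python
import torch
import torch.nn as nn

class FBSConv(nn.Module):
    """Convolution with feature boosting and suppression: a cheap auxiliary
    path predicts per-output-channel saliencies from the input, and only the
    top-k most salient output channels are kept and scaled."""
    def __init__(self, c_in, c_out, density=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.saliency = nn.Linear(c_in, c_out)  # the small auxiliary connection
        self.k = max(1, int(density * c_out))

    def forward(self, x):
        s = x.mean(dim=(2, 3))                   # each input channel -> one scalar
        g = torch.relu(self.saliency(s))         # predicted output-channel saliencies
        threshold = g.topk(self.k, dim=1).values[:, -1:]
        gate = torch.where(g >= threshold, g, torch.zeros_like(g))
        # Suppressed channels are exactly zero, so the next layer can skip
        # them; kept channels are boosted by their predicted saliency. In this
        # dense sketch the full convolution is still computed and then masked.
        return self.conv(x) * gate[:, :, None, None]

y = FBSConv(16, 32)(torch.randn(2, 16, 8, 8))  # (2, 32, 8, 8), half the channels zeroed
```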
accepted-poster-papers
The authors propose a dynamic inference technique for accelerating neural network prediction with minimal accuracy loss. The method is simple and effective. The paper is clear and easy to follow. However, the real speedup on CPU/GPU is not demonstrated beyond the theoretical FLOPs reduction. Reviewers are also concerned that the idea of dynamic channel pruning is not novel. The evaluation is on fairly old networks.
train
[ "BJlgEOGIl4", "SJgBir11l4", "H1g7fwG3JE", "SkgSXSGny4", "B1l4hUboJE", "B1e6h0GUy4", "Bkgib72i2X", "rJxv5KuXRm", "SJevI4GXRQ", "Bkx2UnWm0m", "rye68ibXCm", "rklHZy28TX", "S1euMaPJ67", "SJlk4ADkaQ", "B1eUMSd16X" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "We tested VGG-16 and ResNet-18 with FBS against their respective baselines, the experiments were repeated 1000 times and we recorded the average wall-clock time results for each model.\n\nThe VGG-16 baseline observed on average 520.80 ms for each inference. FBS was applied to VGG-16 and reduced the amount of computation by a factor of 3.01x. Inference now took 175.13ms, thus achieving a speedup of 2.97x (in terms of wall-clock time). Similarly, a model with 4.00x computation reduction took 142.17 ms, which translates to a 3.66x actual speedup. This means that the overhead of our PyTorch implementation is less than 10%.\n\nA line-by-line profiling of our implementation revealed that the overhead of the extra computations introduced by FBS in convolutional layers are fairly minimal (we have annotated the percentage of execution time of each component here: https://imgur.com/YVQormC ). We found that the excessive data movements we mentioned earlier contribute to the majority of the observed overhead, while actual computations introduced by FBS amount to only 3.0% of the total time required to compute the layers. As we have suggested, the data movements are entirely redundant due to API limitations.\n\nOur FBS-based ResNet-18 provided a 1.98x reduction in the amount of computation, which took 63.73 ms for each inference, while the baseline required 101.82 ms, thus achieving a 1.60x real performance gain. We found that in addition to the overhead introduced by the FBS implementation above, the add operations for residuals cannot be accelerated in PyTorch for channel-wise sparse activations, and incur excessive copy operations as a result of the API limitations. Even with these limitations, the real speedup provided by FBS surpasses/matches the actual speedups of all other works compared in Table 1:\n------------------------------------------------------------------ -------------- ------------- ------\nMethod Top-5 error Theoretical Real\n------------------------------------------------------------------ -------------- ------------- ------\nSoft Filter Pruning (He et al., 2018) 12.22% 1.72x 1.38x\nDiscrimination-aware Channel Pruning (Zhuang et al., 2018) 12.40% 1.85x 1.60x\nLow-cost Collaborative Layers (Dong et al., 2017) 13.06% 1.53x 1.25x\nFeature Boosting and Suppression (this work) 11.78% 1.98x 1.60x\n------------------------------------------------------------------ -------------- ------------- ------\n\nWe hope this answers your concern regarding the actual performance gains.", "Please report the wall-clock time running the *whole network* on VGG-16 and ResNet-18, rather than cherry picking a specific layer to show speedup. The last column of Table 1 is not \"speedup\", but \"FLOP reduction\". ", "We would like to thank the reviewer for the positive comments. \nA comparison to AMC [1] is included in Table 2, it is difficult for us to compare to Netadapt [2] since the networks considered are different.\nWe would like to point out that Dong et al. [3] considered spatial dynamic execution, which eliminates computations at a finer granularity and is thus harder to accelerate compared to our channel-wise dynamic execution. 
On a CPU, we recently found that a single layer using FBS can increase inference speed by 3.91x, given a theoretical speedup of 3.98x.\n\n[3] More is Less: A More Complicated Network with Less Inference Complexity, CVPR 2017, https://arxiv.org/pdf/1703.08651.pdf\n", "Thanks for the correction, we will change this to a more precise statement: \"FBS can reduce the FLOPs of VGG-16 by 5x and ResNet-18 by 2x\".\n\nWe tested one layer of VGG-16 (the 2nd convolutional layer) with FBS on CPU using the new PyTorch 1.0 (JIT enabled), and achieved a 3.91x speedup in wall-clock time when the FBS density is set to 0.5 (which yields a theoretical speedup of 3.98x). FBS achieves a wall-clock time of 12.780 ms and the original convolution takes 49.942 ms. This minor overhead is mostly due to the excessive data movements to dynamically gather a subset of weight parameters, which cannot be eliminated because of the API limitations. We will put the details of this wall-clock time test in the Appendix with open-source code if accepted.\n\nGiven that relatively large blocks of compute can be omitted, it is realistic to suggest that in this case, FLOP reduction will translate into wall-clock time savings. We foresee no particular problems in doing this, but existing hardware and tool-chains may currently prevent the necessary optimisations. We would certainly agree that if the optimisations focused on eliminating computations at a finer granularity, actual gains may be difficult to obtain.\n", "It's misleading to the community to report \"FLOP reduction\" as \"speedup\". FLOP reduction doesn't translate to speedup on hardware. If the authors want to report the speedup, please report the wall-clock time to support the claim below: \"FBS can accelerate VGG-16 by 5× and improve the speed of ResNet-18 by 2×\"", "In the revision, the authors have made significant improvements over the original submission. I also appreciate that my main concerns regarding the original submission have been addressed. ", "Summary: \n\nThis paper proposes a feature boosting and suppression method for dynamic channel pruning. To be specific, the proposed method firstly predicts the importance of each channel and then uses an affine function to amplify/suppress the importance of different channels. However, the idea of dynamic channel pruning is not novel. Moreover, the comparisons in the experiments are quite limited. \n\nMy detailed comments are as follows.\n\n\nStrengths:\n\n1. The motivation for this paper is reasonable and very important. \n\n2. The authors proposed a new method for dynamic channel pruning.\n\nWeaknesses:\n\n1. The idea of dynamic channel pruning is not novel. In my opinion, this paper is only an extension to Network Slimming (Liu et al., 2017). What is the essential difference between the proposed method and Network Slimming?\n\n2. The writing and organization of this paper need to be significantly improved. There are many grammatical errors and this paper should be carefully proof-read.\n\n3. The authors argued that the importance of features is highly input-dependent. This problem is reasonable but the proposed method still cannot handle it. According to Eqn. (7), the prediction of channel saliency relies on a data batch rather than a single data point. Given different inputs in a batch, the selected channels should be different for each input rather than a general one for the whole batch. Please comment on this issue.\n\n4. The proposed method does not remove any channels from the original model. 
As a result, neither the memory nor the computational cost will be reduced. It is confusing why the proposed method can yield a significant speed-up in the experiments.\n\n5. The authors only evaluate the proposed method on shallow models, e.g., VGG and ResNet18. What about deeper models like ResNet50 on ImageNet?\n\n6. It is very confusing why the authors only reported the top-5 error of VGG. Top-1 error results for VGG should also be reported in the experiments.\n\n7. Several state-of-the-art channel pruning methods should be considered as the baselines, such as ThiNet (Luo et al., 2017), Channel pruning (He et al., 2017) and DCP (Zhuang et al., 2018).\n[1] Channel pruning for accelerating very deep neural networks. CVPR 2017.\n[2] Thinet: A filter level pruning method for deep neural network compression. CVPR 2017.\n[3] Discrimination-aware Channel Pruning for Deep Neural Networks. NIPS 2018.\n", "This paper proposes a channel pruning method for dynamically selecting channels during testing. The analysis has shown that some channels are not always active. \n\nPros:\n- The results on ImageNet are promising. FBS achieves state-of-the-art results on VGG-16 and ResNet-18.\n- The method is simple yet effective.\n- The paper is clear and easy to follow.\n\nCons:\n- Lack of experiments on mobile networks like shufflenets and mobilenets\n- Missing citations of some state-of-the-art methods [1] [2].\n- The speed-up ratios on GPU or CPU are not demonstrated. The dynamic design of Dong et al., 2017 did not achieve good GPU speedup.\n- Some small typos.\n\n[1] Amc: Automl for model compression and acceleration on mobile devices\n[2] Netadapt: Platform-aware neural network adaptation for mobile applications ", "I hope this addresses weaknesses 2, 6 and 7 identified by your comments. We additionally included more comparisons against other works in Tables 1 and 2.", "> \"the authors did not present a real-world application in \n> which it is important to speed up by 2 or 3 times at a small \n> cost, so it is hard to judge the real\n> impact of the proposed method.\"\n\nOf course, all real systems are constrained by power and memory bandwidth. The proposed scheme offers very significant savings (2-3X in both compute and memory bandwidth) that would be beneficial in almost all scenarios, either to reduce power, increase performance or trade for better accuracy.\n\nAdditionally, we would like to point out that FBS works as a technique to accelerate network inference. Although it is entirely feasible to use it to accelerate training, we have not conducted relevant experiments.\n", "Thank you for your comments.\n\n1. Re. motivation, to clarify, we do increase performance as you state (2--5x) but in addition also make significant savings in terms of compute and memory bandwidth. These savings would be beneficial in almost all scenarios, either to reduce power, increase performance or trade for better accuracy. We have clarified this in our introduction.\n\n2. I think there is some misunderstanding here. By dynamically gating computation, FBS reduces both compute and memory requirements. We simply don't load/store the weights/activations for the suppressed channels. The newly added Table 3 quantifies these savings.\n\n3. We are working on generating data for newer models, but this might be limited by the amount of time available.", "The authors propose a dynamic inference technique for accelerating neural network prediction with minimal accuracy loss. 
The technique prunes channels in an input-dependent way through the addition of auxiliary channel saliency prediction+pruning connections.\n\nPros:\n- The paper is well-written and clearly explains the technique, and Figure 1 nicely summarizes the weakness of static channel pruning\n- The technique itself is simple and memory-efficient\n- The performance decrease is small\n\nCons:\n- There is no clear motivation for the setting (keeping model accuracy while increasing inference speed by 2x or 5x)\n- In contrast to methods that prune weights, the model size is not reduced, decreasing the utility in many settings where faster inference and smaller models are desired (e.g. mobile, real-time)\n- The experiments are limited to classification and fairly dated architectures (VGG16, ResNet-18)\n\nOverall, the method is nicely explained but the motivation is not clear. Provided that speeding up inference without reducing the size of the model is desirable, this paper gives a good technique for preserving accuracy.", "3. \"The authors argued that the importance of features is highly input-dependent. This problem is reasonable but the proposed method still cannot handle it.\nAccording to Eqn. (7), the prediction of channel saliency relies on a data batch rather than a single data point. Given different inputs in a batch, the selected channels should be different for each input rather than a general one for the whole batch. Please comment on this issue.\"\n\nThe prediction of channel saliency *does not* rely on a batch of data. In equation (7), x_(l-1) is the output of the (l-1)-th layer, which comprises C_(l-1) features; each feature has the spatial dimensions H_(l-1) * W_(l-1), as defined in Section 3.1. Throughout this paper, x_l for all layers is computed from a single input image, which consists of multiple channels. Equation (7) reduces each channel in an image to a scalar, which is then used to predict the output channel saliencies in equation (8). Although this process is identical for each input image, each evaluation of equation (8) may produce drastically different predicted channel saliencies depending on the input image.\n\nWe would like to update this section to remove any sources of ambiguity; would it be possible for you to describe how our intended meaning was lost?\n\n4. \"The proposed method does not remove any channels from the original model. As a result, neither the memory nor the computational cost will be reduced. It is confusing why the proposed method can yield a significant speed-up in the experiments.\"\n\nIt is hopefully clear from previous comments that this is not the case.\n\nTypically, convolutional layers are stacked to form a sequential convolutional network. Prior to computing the costly convolution, FBS uses the input (or the output from the previous layer) to predict the saliencies of output channels of the costly convolution. If an output channel is predicted to have a zero saliency, the evaluation of this output channel can be entirely skipped, as the entire output channel is predicted to contain only zero entries.\n\nIn addition, each convolutional layer takes as its input the output of the previous layer. This input can have channel-wise sparsity (channels consisting of only zero entries), if the previous layer is a convolutional layer. It is clear that these inactive input channels can always be skipped when computing the convolution.\n\nThe input- and output-side sparsities therefore doubly accelerate the expensive convolution and thus achieve a huge reduction in compute. 
Such reduction in computation is also seen in [2], as it shares the same goal but uses an entirely different method.\n\n5. \"The authors only evaluate the proposed method on shallow models, e.g., VGG and ResNet18. What about deeper models like ResNet50 on ImageNet?\"\n\nThe method we propose is a per-layer method, which should not make a difference when targeting deeper models. Unlike NS, we do not rank channel importance globally to produce pruning decisions. We are working on generating results on deeper models, but this might be limited by the amount of time available.\n\n6. \"It is very confusing why the authors only reported the top-5 error of VGG. Top-1 error results for VGG should also be reported in the experiments.\"\n\nWe will update Table 2 to include top-1 errors. However, some works we compare to, e.g. He et al.'s channel pruning [4], may have missing top-1 errors as they were not reported.\n\n7. \"Several state-of-the-art channel pruning methods should be considered as the baselines, such as ThiNet (Luo et al., 2017), Channel pruning (He et al., 2017) and DCP (Zhuang et al., 2018).\"\n\nThank you for pointing out these works. These are all static techniques. We will be including them in our comparisons. In addition, it should be noted that Channel pruning [4] is already in our comparison of Table 2.\n\n\nWe thank the reviewer for providing this review.\n\nWe are in the process of updating this paper, and will notify you by comment of the new revision and its changes.\n\n[1]: Squeeze-and-Excitation Networks, CVPR 2018, https://arxiv.org/abs/1709.01507\n[2]: Runtime Neural Pruning, NIPS 2017, https://papers.nips.cc/paper/6813-runtime-neural-pruning\n[3]: Conditional Computation in Neural Networks for Faster Models, ICLR 2016, https://arxiv.org/abs/1511.06297\n[4]: Channel pruning for accelerating very deep neural networks, ICCV 2017, https://arxiv.org/abs/1707.06168", "Thanks for your review.\n\nWe would like to clarify some points to avoid misunderstandings.\n\nOur paper proposes a method called Feature Boosting and Suppression (FBS). FBS adds small auxiliary layers on top of each existing convolution. These auxiliary layers have trainable parameters that are optimized using SGD and control whether individual channels are evaluated at run-time or not. Using this conditional execution, the overall computation required is reduced significantly. Furthermore, the output of the auxiliary layers is used to scale each channel output. Channel saliencies are computed by the auxiliary layers on a per-input basis. FBS utilizes sparse input channels (from the previous dynamically pruned convolutional layer) to predict which channels to skip in the output channels, so that we achieve a large reduction in computation, as we exploit both input- and output-side sparsities.\n\nThe weaknesses identified by the reviewer (1, 3 and 4) do not hold for the approach described above. We will address each of these comments in turn.\n\nIntroductory statement:\n\"firstly predicts the importance of each channel and then uses an affine function to amplify/suppress the importance of different channels\"\n\nThis statement is not true. To clarify, the amplification of channels is dependent on the input (Equation 5), whereas the suppression process effectively performs important channel selection (Equation 6). Both yield strictly non-affine transformations on the batch-normalized channel output. \n\n1. \"The idea of dynamic channel pruning is not novel. 
In my opinion, this paper is only an extension to Network Slimming (Liu et al., 2017).\nWhat is the essential difference between the proposed method and Network Slimming?\"\n\nThe Network Slimming (NS) procedure is applied statically and only prunes channels away. Our technique is applied at run-time and is input-dependent. We prune channels away and boost important channels at run-time.\n\nWe consider our method, FBS, to be very different from Network Slimming. For each input image during inference, FBS predicts the relative importance of each channel, and selectively evaluates a subset of output channels that are important for the subsequent layer, given the activation of the previous layer. Different input images would therefore activate drastically different execution paths in the model.\n\nFigure 3b corroborates this observation, as the heat maps show that many channels demonstrate highly varying probabilities of being suppressed when shown images of different categories. Our work is more related to runtime neural pruning [2] and conditional computation [3], where channels are dynamically selected for evaluation in each convolution, yet [2], [3] and FBS use very different methods to achieve this goal. In contrast, NS does not employ dynamic execution, as the pruned channels are *permanently removed* from the model, resulting in a network structure that remains static for all inputs, with some capabilities permanently lost. \n\nIn addition, FBS preemptively steers feature attention: FBS not only uses the saliency metrics to predictively prune unimportant channels at run-time, but also amplifies important channels. The non-linearity added to the network is conceptually similar to Squeeze-and-Excitation (SE) [1], as FBS captures inter-dependencies among input channels and adaptively recalibrates output features in a channel-wise fashion. Even without pruning, FBS can improve the baseline accuracies of CIFAR-10 and ImageNet models (Section 4.2), which is absent from static/dynamic channel pruning methods including NS, RNP, [4] and others.\n\nBecause of the above differences, FBS can achieve a much improved accuracy/compute trade-off when compared to other channel pruning methods.\n\n2. \"The writing and organization of this paper need to be significantly improved. There are many grammatical errors and this paper should be carefully proof-read.\"\n\nWe will complete another round of polishing to address any shortcomings. Could you suggest how/where the organization of the paper could be improved?\n" ]
[ -1, -1, -1, -1, -1, -1, 6, 7, -1, -1, -1, 6, -1, -1, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, -1, -1, -1, 3, -1, -1, 4 ]
[ "SJgBir11l4", "SkgSXSGny4", "rJxv5KuXRm", "B1l4hUboJE", "iclr_2019_BJxh2j0qYm", "SJevI4GXRQ", "iclr_2019_BJxh2j0qYm", "iclr_2019_BJxh2j0qYm", "Bkgib72i2X", "B1eUMSd16X", "rklHZy28TX", "iclr_2019_BJxh2j0qYm", "Bkgib72i2X", "Bkgib72i2X", "iclr_2019_BJxh2j0qYm" ]
iclr_2019_BJxhijAcY7
signSGD with Majority Vote is Communication Efficient and Fault Tolerant
Training neural networks on large datasets can be accelerated by distributing the workload over a network of machines. As datasets grow ever larger, networks of hundreds or thousands of machines become economically viable. The time cost of communicating gradients limits the effectiveness of using such large machine counts, as may the increased chance of network faults. We explore a particularly simple algorithm for robust, communication-efficient learning---signSGD. Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote. This algorithm uses 32x less communication per iteration than full-precision, distributed SGD. Under natural conditions verified by experiment, we prove that signSGD converges in the large and mini-batch settings, establishing convergence for a parameter regime of Adam as a byproduct. Aggregating sign gradients by majority vote means that no individual worker has too much power. We prove that unlike SGD, majority vote is robust when up to 50% of workers behave adversarially. The class of adversaries we consider includes as special cases those that invert or randomise their gradient estimate. On the practical side, we built our distributed training system in Pytorch. Benchmarking against the state of the art collective communications library (NCCL), our framework---with the parameter server housed entirely on one machine---led to a 25% reduction in time for training resnet50 on Imagenet when using 15 AWS p3.2xlarge machines.
accepted-poster-papers
The reviewers noticed that the paper has undergone many revisions and raised concerns about the content. They encourage improving the experimental section further and strengthening the message of the paper.
test
[ "SkgZHKcaJE", "B1eUDnU037", "HkgS6925hX", "r1lMfYylTQ", "r1lxIcJxpm", "SJg-hh75C7", "SJg5q27qAX", "r1g4OhQqRQ", "Byx4HczqC7", "S1eaXqJgpX", "SygCGdygTQ", "BJezmXgjt7", "r1eEYTEq2m" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Dear AC and AnonReviewer1,\n\nThe reviewers’ scores show a consensus to accept. Still, AnonReviewer1 raises important points that we want to address here.\n\n1. QSGD precision. We agree, thanks for pointing it out. We are running experiments on 2 and 4bit QSGD and will add these to the paper.\n\n2. Bulyan. We disagree. We believe this comparison is unnecessary for the following reasons:\n———(A) our comparison with Krum is “good”—Krum successfully detects and eliminates the adversaries in our experiments. The only drawback of Krum is that is has a requirement for total num workers “n” to exceed num adversaries “f” by n > 2f + 2, therefore for n=7, Krum already breaks down at num adversaries f=3, whereas majority vote still works at f=3.\n———(B) Bulyan, on the other hand, only tolerates up to 25% adversaries, requiring n > 4f + 3. For our case of 7 workers this means it only tolerates 1 adversary (f=1). Clearly Bulyan will perform worse than Krum on these experiments.\n———(TL;DR) Krum already “aces” our experiments, except for the fact it has max security level f=2, therefore we didn’t see the need to compare to Bulyan which only serves to lower the max security level to f=1.\n———(extra) There is another drawback of Krum and Bulyan, in that they throw away workers even when there are no adversaries—they have a “paranoid” regime. Majority vote does not do this. But this effect was not visible in our experiments (probably the batch size was too large to see it).\n\nWe therefore see no reason why the paper should not be accepted for this round of submission. In particular we think presenting the small batch theory (Theorem 1) would be an important and timely contribution to the understanding of adaptive gradient methods like Adam, which closely relate to signSGD. The paper may also spur further research into the combination of gradient compression and fault tolerance, which seem like a natural mix for large scale distributed learning.\n\nFinally, we want to thank all the reviewers for their thorough, critical and constructive reviews.", "The authors present a distributed implementation of signSGD with majority vote as aggregation. The result is a communication efficient and byzantine robust distributed training method. This is an interesting and relevant problem. There are two parts in this paper: first the authors prove a convergence guarantee for signSGD, and then they prove that under a weak adversary attack signSGD will be robust to a constant fraction of adversarial nodes. The authors conclude with some limited experiments.\n\nOverall, the idea of combining low-communication methods with byzantine resilience is quite interesting. That is, by limiting the domain of the gradients one expects that the power of an adversary would be limited too. The application of the majority vote on the gradients is an intuitive technique that can resolve weak adversarial attacks. Overall, I found the premise quite interesting.\n\nThere are several issues that if fixed this could be a great paper, however I am not sure if there is enough time between rebuttals to achieve this for this round of submissions. 
I will summarize these key issues below.\n\n\n1) Although the authors claim that this is a communication efficient technique, signSGD (on its communication merit) is not compared with any state-of-the-art communication-efficient training algorithm, for example:\n- 1Bit SGD [1]\n- QSGD [2]\n- TernGrad [3]\n- Deep Gradient Compression [4]\nI think it is important to include at least one of those algorithms in a comparison. Due to the lack of comparisons with the state of the art it is hard to argue about the relative performance of signSGD.\n\n2) Although the authors claim byzantine resilience, this is against a very weak type of adversary, e.g. one that only sends back the opposite sign of the local stochastic gradient. An omniscient adversary can craft attacks that are significantly more sophisticated, for which a simple majority vote would not work. Please see the results in [b1].\n\n3) Although the authors reference some limited literature on byzantine ML, they do not compare with other byzantine-tolerant ML methods. For example, check [b1-b4] below. Again, due to the lack of comparisons with the state of the art it is hard to argue about the relative performance of signSGD.\n\nOverall, although the presented ideas are promising, a substantial revision is needed before this paper is accepted for publication. I think it is extremely important that an extensive comparison is carried out with respect to both communication efficient algorithms, and/or byzantine tolerant algorithms, since signSGD aims to be competitive with both of these lines of work. This is a paper that has potential, but is currently limited by its lack of appropriate comparisons.\n\n\n\n[1] https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/IS140694.pdf\n[2] https://papers.nips.cc/paper/6768-qsgd-communication-efficient-sgd-via-gradient-quantization-and-encoding.pdf\n[3] https://papers.nips.cc/paper/6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf\n[4] https://arxiv.org/pdf/1712.01887.pdf\n\n[b1] https://arxiv.org/pdf/1802.07927.pdf\n[b2] https://arxiv.org/pdf/1803.01498.pdf\n[b3] https://dl.acm.org/citation.cfm?id=2933105\n[b4] https://arxiv.org/pdf/1804.10140.pdf\n[b5] https://arxiv.org/pdf/1802.10116.pdf\n\n########################\n\nI would like to commend the authors for making a significant effort in revising their manuscript. Specifically, I think adding the experiments for QSGD and Krum is an important addition. However, I still have a few major concerns that in my opinion are significant:\n\n- The experiments for QSGD are only carried out for the 1-bit version of the algorithm. It has been well observed that this is by far the least well performing variant of QSGD. That is, 4- or 8-bit QSGD seems to be significantly more accurate for a given time budget. I think the goal of the experiments should not be to compare against other 1-bit algorithms (though to be precise, 1-bit QSGD is a ternary algorithm), but against the fastest low-communication algorithm. As such, although the authors made an effort in adding more experiments, I am still not convinced that signSGD will be faster than 4- or 8-bit QSGD. I want to also acknowledge in this comment the fact that these experiments do take time, and are not easy to run, so I commend them again for this effort.\n\n- My second comment relates to comparisons with state-of-the-art algorithms in byzantine ML. The authors indeed did compare against Krum; however, as noted in my original review, there are many works following Blanchard et al. 
\n\nFor example, as I noted, https://arxiv.org/pdf/1802.07927.pdf (the Bulyan algorithm) shows that there exist significantly stronger defense mechanisms for byzantine attacks. I think it would have been a much stronger comparison to compare with Bulyan.\n\nOverall, I think the paper has good content, and the authors significantly revised their paper according to the reviews. However, several more experiments are needed to convince a potential reader of the main claims of the paper, i.e., that signSGD is a state-of-the-art communication-efficient and byzantine-tolerant algorithm. \n\nI will increase my score from 5 to 6, and I will not oppose the paper being rejected or accepted. My personal opinion is that a resubmission for a future venue would yield a much stronger and more convincing paper, assuming more extensive and thorough comparisons are added.", "This paper continues the study of the signSGD algorithm due to (Balles & Hennig, Bernstein et al), where only the sign of a stochastic gradient is used for updating. There are two main results: (1) a slightly refined analysis of two results in Bernstein et al. The authors proved that signSGD continues to converge at the 1/sqrt(T) rate even with minibatch size 1 (instead of T as in Bernstein et al), if the gradient noise is symmetric and unimodal; (2) a similar convergence rate is obtained even when half of the worker machines flip the sign of their stochastic gradients. These results appear to be relatively straightforward extensions of those in Bernstein et al.\n\nClarity: The paper is mostly nicely written, with some occasionally imprecise claims. \n\nPage 5, right before Remark 1: it is wrongly claimed that signSGD converges to a critical point of the objective. This cannot be inferred from Theorem 1. (If the authors disagree, please give the complete details on how the random sequence x_t converges to some critical point x^*. Or perhaps you are using the word \"convergence\" differently from its usual meaning?)\n\nPage 6, after Lemma 1. The authors claimed that \"the bound is elegant since ... even at low SNR we still have ... <= 1/2.\" In my opinion, this is not elegant at all. This is just your symmetric assumption on the noise, nothing more...\n\nEq (1): are you assuming g_i > 0 here? This inequality is false as stated; you need to discuss the two cases. \n\n\"Therefore signSGD cannot converge for these noise distributions, ..... point in the wrong direction.\" This is a claim based on intuitive arguments but not a proven fact. Please refrain from using definitive sentences like this.\n\nFootnote 1: where is the discussion?\n\n\nOriginality: Compared to the existing work of Bernstein et al, the novelty of the current submission is moderate. The main results appear to be relatively straightforward refinements of those in Bernstein. The observation that majority voting is Byzantine fault tolerant is perhaps not very surprising but it is certainly nice to have a formal justification.\n\nQuality: At times this submission feels half-baked:\n-- The theoretical results are about signSGD while the experiments are about Signum\n-- The adversaries must send the negation of the sign? 
why can't they send an arbitrary bit vector?\n-- From the authors' discussion \" we will include this feature in our open source code release\", \"plan to run more extensive experiments in the immediate future and will update the paper...\", and \"should be possible to extend the result to the mini-batch setting by combining ...\"\n\nSignificance: This paper is certainly a nice addition to our understanding of signSGD. However, the current obtained results are not very significant compared to the existing results: Theorem 1 is a minor refinement of the two results in Bernstein et al, while Theorem 2 in its current form is not very interesting, as it heavily restricts what an adversary worker machine can do. It would be more realistic if the adversaries could send random bits (still non-cooperating, though).\n\n\n\n##### added after author response #####\nI appreciate the authors' efforts in trying to improve the draft by incorporating the reviewers' comments. While I do like the authors' continued study of signSGD, the submission has gone through some significant revision (more complete experiments + stronger adversary). ", "Dear AnonReviewer1,\n\nThank you for your clear and precise review. We appreciate the comment that our work “could be a great paper” if we add some comparisons during the rebuttal. We want to contest your take on the weakness of our adversarial model, yet wholeheartedly agree with the need for adequate experimental comparisons to other techniques.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>>>>> Comparison expts >>>>>>\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nWe have added comparisons to QSGD (compression) and Multi-Krum (Byzantine fault tolerance). Please see the revisions in the post above.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>>>>> Adversarial model >>>>>>\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n> the adversary is “very weak” since it “only sends back the opposite sign of the local stochastic gradient”\nWe have formulated an entire class of adversaries that our algorithm is robust to. Please see our revisions above.\n\nThank you for pointing us to the paper [b1] saying that “convergence is not enough” since, for example, a powerful adversary can steer convergence to bad local minimisers. This is a great point. For this reason we do not recommend using our algorithm to protect against “omniscient” adversaries. But for “mere mortal” adversaries, our results are interesting. An example of a “mere mortal” adversary could be a broken machine that sends random bits or stale gradients.", "Dear AnonReviewer2,\n\nThank you for your clear and thorough review. We appreciate your comment that the paper is a “nice addition to our understanding of signSGD”. \n\nWe will first contest the criticism about the significance of the work. We will then respond to the other comments in detail.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>> On matters of significance >>>\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n> “it heavily restricts what an adversary worker machine can do”\nWe have now formulated an entire class of adversaries that our algorithm is robust to. Please see our revisions above. This class contains machines that send random bits as a special case.\n\n> “Theorem 1 is a minor refinement”.\nWhilst \"algebraically\" the result is a minor refinement, conceptually it is a larger shift. It brings the signSGD work in line with modern machine learning practice. And we expect that it has ramifications on other active areas of ML research. For example:\n\nReddi et al. 
(2018) showed how bimodal noise distributions can lead to divergence of Adam. This leaves a major outstanding question in the community: if Adam generally diverges, why does it work so well in practice? Theorem 1 shows how signSGD---a special limit of Adam---may be guaranteed to converge in natural settings such as Gaussian noise distributions. It suggests that we may be able to prove convergence of Adam for Gaussian noise distributions.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>>>>> Minor comments >>>>>>>\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n> “signSGD converges to a critical point of the objective”\nTo clarify, we mean convergence in the sense that the gradient norm goes to zero as N increases, which is exactly what Theorem 1 tells us. Points with zero gradient norm are critical points. The mixed norm on the left hand side is unusual, but by inspection it is clear that the mixed norm shrinking to zero implies that the L2-norm shrinks to zero. We will clarify this in the paper.\n\n> “are you assuming g_i > 0 here”\nThanks for mentioning this. We did not signpost it, but we assumed, without loss of generality, that g_i > 0. (The case that g_i < 0 follows by totally analogous reasoning.)\n\n> The claim “signSGD cannot converge for these noise distributions” is only “based on intuitive arguments”. \nThank you for pointing this out; we decided to simplify the discussion by just giving a simple example.\n\n> ”The theoretical results are about signSGD while the experiments are about Signum”\nSee [1, Appendix, Figure A.4] for experiments across a range of momentum values. [1] also discusses the theoretical relation between Signum and signSGD. In general we suggest practitioners use Signum instead of signSGD in practice since it is only fair to give our algorithm as many hyperparameters as momentum SGD.\n\n[1] signSGD, compressed optimisation for non-convex problems https://arxiv.org/abs/1802.04434.", "Dear AnonReviewer3,\n\nWe have updated your individualised response, and also summarised our revisions to the paper in a post above.\n\nBest wishes,\nAnonAuthors", "Dear AnonReviewer2,\n\nWe have updated your individualised response, and also summarised our revisions to the paper in a post above.\n\nBest wishes,\nAnonAuthors", "Dear AnonReviewer1,\n\nWe have updated your individualised response, and also summarised our revisions to the paper in a post above.\n\nBest wishes,\nAnonAuthors", "Dear AnonReviewers and AC,\n\nWe have updated the paper. The new version includes the following additions:\n\n1. Added comparison to the Multi-Krum [1] Byzantine fault tolerant method (p9)\n2. Added comparison to the QSGD [2] gradient compression method (p9)\n3. Added a natural language task benchmark (QRNN [3] model on the Wikitext-103 dataset) (p8)\n4. Extended the robustness theorem to an entire class of adversaries that we term \"blind multiplicative adversaries\" (p7) \n\nWe are grateful to Rev1 and Rev3 for encouraging us to run the additional experiments, and to Rev2 for encouraging us to extend the robustness theorem. \n\nWe will now go into more detail:\n\n1. Multi-Krum experiment. Multi-Krum is a Byzantine fault tolerant method that defines a security level f, and always removes f workers from the gradient aggregation (even when there are no adversaries present). Majority Vote, in contrast, always keeps all workers. We found that when the number of adversaries exceeds f, Multi-Krum deteriorates dramatically, whereas Majority Vote deteriorates more gracefully.\n\n2. QSGD experiment. 
For a resnet-18 model on Cifar-10, we found that majority vote converges much faster than the \"theory version\" [2, p5] of the QSGD algorithm, but it converges at a similar rate to the \"experiment version\" [2, p7] where the QSGD authors normalise by the max instead of the L2 norm. We found the max-norm version of QSGD had about 5x higher compression than the 32x compression of signSGD for this problem, but this gain represents a diminishing return since the cost of backpropagation has already started to dominate at that compression level. \n\nTo be explicit, for this network with SGD and NCCL, one epoch costs \n=========> 6 sec computing + 12 sec communicating = 18 sec\nFor signSGD a very efficient implementation should reduce communication by 32x, therefore we expect one epoch to cost\n=========> 6 sec computing + 12/32 sec communicating = 6.375 sec\nFor QSGD a very efficient implementation should reduce communication by (32x5)x, therefore one epoch should cost\n=========> 6 sec computing + 12/(32x5) sec communicating = 6.075 sec\nAnd we see the marginal gain of QSGD is small, whilst the algorithm is much more complicated.\n\n3. Natural language experiment. We found that using signSGD with majority vote to train QRNN led to a 3x speedup per epoch over Adam with NCCL. That said, there was a deterioration in the converged solution. This meant that overall the performance after 2 hours of training was very similar.\n\n4. Extended the robustness theorem. We show that majority vote is robust to an entire class of adversaries that we call \"blind multiplicative adversaries\". This class includes adversaries that invert or randomise their gradient estimate as special cases. We are particularly interested in randomised attacks as a model of network faults. This class of adversaries is more rigorously defined than the class of \"non-cooperative\" adversaries that we discussed previously.\n\n[1] https://papers.nips.cc/paper/6617-machine-learning-with-adversaries-byzantine-tolerant-gradient-descent\n[2] https://papers.nips.cc/paper/6768-qsgd-communication-efficient-sgd-via-gradient-quantization-and-encoding\n[3] https://openreview.net/forum?id=H1zJ-v5xl", "Dear AnonReviewer3,\n\nThank you for your positive review. We really appreciate the remarks that our “experiments are extensive” and our paper is “solid and interesting”.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>>>>> More experiments >>>>>>\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n> “More experiments on different tasks and DNN architectures could be performed”\n\nThanks for the suggestion; we have added experiments training the QRNN language model on the Wikitext-103 dataset. Please see the revisions above.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n>>>>>>>> Further thoughts >>>>>>>\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n> “some workers might be lost during one iteration”\nIntuitively, dropping workers will slow down convergence but not prevent it. You can see this immediately since a dropped worker is strictly better for convergence than an adversarial worker. This is one of the reasons we are excited about our Byzantine fault tolerance results.\n\n> what “regularization technique would be suitable for signed update kind of method”?\nWe are particularly excited about this question for future work; thanks for suggesting it.\n", "Dear AnonReviewers,\n\nThank you for your thoughtful and thorough reviews. We will summarise the content of the reviews here.\n\nFirst some high notes:\n\nRev3 says our “experiments are extensive” and our paper is “solid and interesting”. 
Rev2 says the paper is a “nice addition to our understanding of signSGD”. Rev1 says our work “could be a great paper” if we add sufficient comparisons during the rebuttal.\n\nThe reviewers’ main concerns:\n\n1. Rev1 and Rev2 question the strength of the adversarial model;\n2. Rev1 asks for comparison experiments for the communication and/or Byzantine properties;\n3. Rev3 would like to see additional datasets and network architectures.\n", "Dear anonReviewers,\n\nHere's a Jupyter notebook in case you'd like to play with the algorithm: https://colab.research.google.com/drive/1PlD2jXoXr2a8e57aIDINCw1-7RIttRTt\n\nIt can be run in the browser, or you can just download it and run on your machine.\n\nBest wishes,\nanonAuthors", "The paper proposes a distributed optimization method based on signSGD. Majority vote is used when aggregating the updates from different workers.\n The method itself is naturally communication efficient. Convergence analysis is provided under certain assumptions on the gradient. It also theoretically shows that the method is robust even when up to half of the workers behave independently adversarially. Experiments are carried out in a parameter server environment and are shown to be effective in speeding up training. \n\nI find the paper to be solid and interesting. The idea of using signSGD for distributed optimization makes it attractive as it is naturally communication efficient. The work provides theoretical convergence analysis under the small batch setting by further assuming the gradient is unimodal and symmetric, which is the main theoretical contribution. Another main theoretical contribution is showing it is Byzantine fault tolerant. The experiments are extensive, demonstrating running time speed-up comparison to normal SGD. \n\nIt is interesting to see a test set gap in the experiments. Further experiments are needed to see whether the method itself inherently suffers from generalization problems or whether this is a result of imperfect parameter tuning. \n\nOne thing that would be interesting to explore further is how asynchronous updates of signSGD affect the convergence, both in theory and practice. For example, some workers might be lost during one iteration; how will this affect the overall convergence?\nAlso, it would be interesting to see the comparison of the proposed method with SGD + batch normalization, especially on their generalization performance. It might be interesting to explore what kind of regularization technique would be suitable for signed-update methods. \n\nOverall, I think the paper proposes a novel distributed optimization algorithm that has both theoretical and experimental contributions. The presentation of the paper is clear and easy to follow. \n\nSuggestions: I feel the experiments part could still be improved, as also mentioned in the paper, to achieve competitive results. More experiments on different tasks and DNN architectures could be performed. \n" ]
[ -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_BJxhijAcY7", "iclr_2019_BJxhijAcY7", "iclr_2019_BJxhijAcY7", "B1eUDnU037", "HkgS6925hX", "r1eEYTEq2m", "HkgS6925hX", "B1eUDnU037", "iclr_2019_BJxhijAcY7", "r1eEYTEq2m", "iclr_2019_BJxhijAcY7", "iclr_2019_BJxhijAcY7", "iclr_2019_BJxhijAcY7" ]
iclr_2019_BJxssoA5KX
Bounce and Learn: Modeling Scene Dynamics with Real-World Bounces
We introduce an approach to model surface properties governing bounces in everyday scenes. Our model learns end-to-end, starting from sensor inputs, to predict post-bounce trajectories and infer two underlying physical properties that govern bouncing - restitution and effective collision normals. Our model, Bounce and Learn, comprises two modules -- a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model physical interactions for the prediction task given physical parameters and observed pre-collision 3D trajectories. To achieve our results, we introduce the Bounce Dataset comprising 5K RGB-D videos of bouncing trajectories of a foam ball to probe surfaces of varying shapes and materials in everyday scenes including homes and offices. Our proposed model learns from our collected dataset of real-world bounces and is bootstrapped with additional information from simple physics simulations. We show on our newly collected dataset that our model out-performs baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene.
accepted-poster-papers
This paper proposes a novel dataset of bouncing balls and a way to learn the dynamics of the balls when colliding. The reviewers found the paper well-written, tackling an interesting and hard problem in a novel way. The main concern (that I share with one of the reviewers) is about the fact that the paper proposes both a new dataset/environment *and* a solution for the problem. This made it difficult for the authors to provide baselines to compare to. The ensuing back and forth had the authors relax some of the assumptions from the environment and made it possible to evaluate with interaction nets. The main weakness of the paper is the relatively contrived setup that the authors have come up with. I will summarize some of the discussion that happened as a result of this point: it is relatively difficult to see how the setup that the authors have built and studied (esp. knowing the groundtruth impact locations and the timing of the impact) can generalize outside of the proposed approach. There is some concern that the comparison with interaction nets was not entirely fair. I would recommend the authors redo the comparisons with interaction nets in a careful way, with the right ablations, and understand whether the methods have access to the same input data (e.g. are interaction nets provided with the bounce location?). Despite the relatively high average score, I think of this paper as quite borderline, specifically because of the issues related to the setup being too niche. Nonetheless, the work does have a lot of scientific value to it, in addition to a new simulation environment/dataset that other researchers can then use. Assuming the baselines are done in a way that is trustworthy, the ablation experiments and discussion will be something interesting to the ICLR community.
train
[ "ryeBDwsMs7", "SJloGnWRnQ", "Hyl5ady1JV", "rkgYYukJ1N", "S1lJ8Ylo0Q", "SJe2hHXj6Q", "H1gfIEmspm", "r1eXfEXiaX", "ByekAQQo6Q", "HkegBTQcnm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Paper summary:\nThe paper proposes to predict bouncing behavior from visual data. The model has two main components: (1) Physics Interface Module, which predicts the output trajectory from a given incoming trajectory and the physical properties of the contact surface. (2) Visual Interface Module, which predicts the surface properties from a single image and the impact location. A new dataset called Bounce Dataset is proposed for this task.\n\nPaper strengths:\n- The paper tackles an interesting and important problem.\n- The data has been collected in various real scenes.\n- The idea of training the physics part of the network with synthetic data and later fine-tuning it with real images is interesting.\n- The experiments are thorough and well-thought-out.\n\nPaper weaknesses:\n- It would be more interesting if the dataset was created using multiple types of probe objects. Currently, it is only a ball.\n\n- It is not clear how the evaluation is performed. For instance, the length of the groundtruth and predicted trajectories might be different. How is the difference computed?\n\n- The impact location (x,y) corresponds to multiple locations in 3D. Why not using a 3D point as input? It seems the 3D information is available for both the real and synthetic cases.\n\n- Why is it non-trivial to use a deconvolution network for predicting the output point cloud trajectory?\n\n- The length of the input trajectory can vary, but it seems the proposed architecture assumes a fixed-length trajectory. I am wondering how it handles a variable-length input.\n\n- How is the bounce location encoded in VIM?\n\n- I don't see any statistics about the objects being used for data collection. That should be added to the paper.\n\n>>>>> Final score: The authors have addressed my concerns in the rebuttal. I believe this paper tackles an interesting problem, and the experiments are good enough since this is one of the first papers that tackle this problem. So I keep the initial score. \n", "The authors present both a dataset of videos of a real-world foam ball bouncing and a model to learn the trajectory of the ball at collision (bounce) points in these videos. The model is comprised of a Physics Inference Module (PIM) and a Visual Inference Module (VIM). The PIM takes in both a vector of physical parameters (coefficient of restitution and collision normal) and a point cloud representation of the pre-bounce trajectory, and produces a point cloud representation of the post-bounce trajectory (or, rather, an encoded version of such). The VIM takes in an image and ground-truth bounce location and produces the physical parameters of the surface at that location.\n\nI find the paper well-written and clear. The motivation in the introduction is persuasive and the related work section is complete. However, the authors are introducing both a new training paradigm (to my knowledge unused in the literature) and a new model, and without any existing baselines to compare against I find it a bit difficult to understand how well the model works. \n\nOverall, the authors’ model is somewhat complicated and not as general as it initially seems. To justify this complication I would like to see more convincing results and benchmarking or application to more than one single dataset (e.g. non-spheres bouncing).\n\nHere are some specific concerns:\n\n1) I could not find a link to an open-sourced version of the dataset(s). 
Given that the authors emphasize the dataset as a main contribution of the paper, they should open-source it and make the link prominent in the main text (apologies if I somehow missed it).\n\n2) The authors claim in multiple places that the model is trained end-to-end, but this does not seem to be the case. Specifically, the PIM is pre-trained on an auxiliary dataset from simulation. The trajectory encoder also seems to be pre-trained (though I could be wrong about that, see my question below). Furthermore, there is a bit of hand-holding: The PIM uses ground-truth state for pre-training, and the VIM gets the ground-truth bounce location. In light of this, the model seems a lot less general and end-to-end than implied in the abstract and introduction.\n\n3) No comparison to existing baselines. I would like to see how the authors’ model compares to standard video prediction algorithms. The authors could evaluate their model with respect to pixel loss (after ground-truth rendering) and compare to a video prediction algorithm (such as PredNet by Lotter, Kreiman, & Cox, 2016). Given that the authors’ method uses some extra “privileged” information (as described in point 2), it should far out-perform algorithms that train only on video data, and such a result would strengthen the paper a lot.\n\n4) Table 1 is not a very convincing demonstration of performance. Regardless of baselines, the table does not show confidence intervals. I would love to see training curves with errorbars of the models on the most important metrics (e.g. Dist and COR Median Absolute Error).\n\nI also was confused about a couple of things:\n\n1) How was the PointNet trajectory encoder trained? I did not see this mentioned anywhere. Were gradients passed through from the PIM? Was the same network used for both the simulation and real-world data?\n\n2) The performance of the center-based model in Table 1 seems surprisingly low. The center-based model should be as good as the Train core, Fix traj. enc. model, since it has access to the ball’s position. Why is it worse? Is the VIM at fault? Or is the sphere-fitting sub-optimal? How does it compare on the simulated data with ground truth physical parameters?\n\n3) Lastly, the color-scheme is a bit confusing. It looks like the foam ball in the videos was rainbow-colored. However, in the model outputs in trajectory figures time is also rainbow-colored. This was initially a bit confusing. Perhaps grayscale for the model outputs would be clearer.\n", "\n2) “Lots of hand-holding, lack of generality. Giving the ground-truth bounce position on the real dataset is a serious assumption.”\n While we make the assumption of knowing the impact spatial location, it can be automatically estimated. We demonstrate this by using the RANSAC-estimated point cloud center in the collision frame as the collision point and retraining the best model (Row 5, Table 1: “Train core, Fix traj. enc.”).\n\nModel | Dist. (Mean, Std) | % Normals (Mean, Std) | COR Median Abs Err (Mean, Std)\nKnown Impact Loc | 21.9, 0.006 | 27.06, 0.09 | 0.158, 0.01\nAuto. Estimated Impact Loc | 21.7, 0.009 | 27.58, 0.04 | 0.153, 0.01\n\nThe assumption of impact location simply allowed us to create the experimental setup to investigate the main goal of our work: estimation of physical parameters by learning from observed bounces.\n\n\n
“Confidence intervals should be in the paper itself in Tables 1 and 2”\n
“Training curves should be plotted”\n We will add these to the final version of the paper, if accepted. While the PDF revision deadline has passed, we are hoping to incorporate all discussions with the reviewers.\n\n\n\nDiscussion about the problem:\nPoint 1: We are agreed “not niche”.\nPoint 2: We hope this concern is now addressed by our removal of the assumption of knowing the spatial location of the impact (in the experiment above) by training with automatically estimated bounce positions.\n\nPoints 3 and 4: Collision observation, detection and simulation of the non-rigid surfaces that are encountered in everyday scenes remains a challenging task[a] and the subject of active research. Further, in our setting, we have the added challenge of approximate and noisy estimates of the scene geometry; collision detection and processing with uncertainty is just recently considered [b] and there are no standardized codes or methods in the deformable setting. We show post-bounce predictions in Figures 1 and 3 and will release videos of our post-bounce predictions (ICLR’s openreview did not have a way for us to submit the videos anonymously as supplemental material). Note that rollouts require collision detection, which is challenging as previously noted. Collision detection and rollouts are interesting for future work.\n\nPoint 5: As discussed in the first paragraph in the introduction, modeling single-object bounces has potential application in augmented reality for dynamic object compositing. Handling multiple interacting objects is interesting future work, but the single object setting is a first and necessary step towards it.\n\n[a] Collision Detection for Deformable Objects. M. Teschner, S. Kimmerle, G Zachmann, B. Heidelberger, L Raghupathi, A. Fuhrmann, M-P Cani, F Faure, N. Magnenat-Thalmann, and W. Strasser. Eurographics 2004, State-of-the-Art Report\n[b] Fast and Bounded Probabilistic Collision Detection for High-DOF Trajectory Planning in Dynamic Environments. C. Park, J.S. Park, and D. Manocha. IEEE Transactions on Automation Science and Engineering. 2018.\n\n", "We thank the reviewer for their response and useful suggestions. We have conducted additional experiments that specifically address the major concerns of the reviewer. We compare the PIM model to Interaction Networks (Battaglia et al., 2016) and also relax the assumption of knowledge of impact spatial location.  We show that our PIM model significantly outperforms Interaction Networks on synthetic data and that a model trained without knowledge of ground truth spatial location of impact performs as well as our proposed model. First, we would like to address the high-level concerns of the reviewer.\n\nIn the context of applications, it is true that our proposed model can be applied to a single ball collision under knowledge of impact time. However, as a research problem, we believe that this setting and dataset exposes numerous important challenges that currently hinder progress in this direction (discussed in the Introduction). In fact, we believe that what seems like a niche problem is actually a challenging unaddressed elementary problem in modeling real-world collisions. \n\n

\nWe now present experimental results to address the mentioned concerns:\n1) “Also on the simulated data you could compare to state-based prediction models, such as (Battaglia et al., 2016) that you reference.”\n Thank you for the suggestion. Comparing PIM to Interaction Networks (IN) is indeed an interesting experiment. We have used the simulation data from our experiments to train two versions of the IN model (IN-position-velocity and IN-pos1-pos2-pos3, described below) using the available codebase. Forward prediction error at 0.1s post-bounce (“Dist.” in the main text) from the simulation-based PIM (as described in “Pretraining the PIM” in Section 3.1) and the two IN models is as follows:\n\nCenter-based PIM: 11.72cm (stdev: 0.009)\nPointNet-based PIM: 12.87cm (stdev: 0.005)\nIN-position-velocity: 36.14cm (stdev: 0.023)\nIN-pos1-pos2-pos3: 23.22cm (stdev: 0.015)\n\nWe observe that the PIM model performs significantly better than the Interaction Networks models due to the iterative nature of IN, which leads to compounding of errors over time. The Center-based and PointNet-based PIM models perform similarly on the simulation data, but the PointNet-based model is more robust to sensor noise on real data as shown in Table 1. \n\nIN-position-velocity: State vector of the object at t=1 contains [x, y, z, v_x, v_y, v_z] (used in the original Interaction Networks paper)\nIN-pos1-pos2-pos3: State vector of the object at t=3 contains [x1, y1, z1, x2, y2, z2, x3, y3, z3] 

\n\n\nResults continued below", "Dear Authors,\n\nThank you for your reply. However, many of my previous comments still apply (also, the paper itself looks to have not been revised much if at all).\n\nSpecifically, my main concerns are these:\n\n1) No comparison to existing baselines. There are many baselines against which you could compare. For example, you can compare to video prediction baselines on the simulation data (where you can use the simulation renderer to render trajectories). Also on the simulated data you could compare to state-based prediction models, such as (Battaglia et al., 2016) that you reference. Ultimately, as a reader I have no idea how well your model actually models physics. Given some of the trajectories in Figure 12 it is clear that the model does in fact make mistakes, so this must be compared to existing baselines (even if they don't use exactly the same training paradigm) to verify that it is actually learning the physics well.\n\n2) Lots of hand-holding, lack of generality. Giving the ground-truth bounce position on the real dataset is a serious assumption. For more general data, this could be a highly non-trivial preprocessing step and limits the generality of the model. Similarly for ground-truth knowledge of the impact time.\n\nAlso, a few of my minor comments remain unresolved:\n1) Confidence intervals should be in the paper itself in Tables 1 and 2, preferably as 90% or 95% confidence intervals.\n2) Training curves should be plotted (at least in the supplementary material); curves corresponding to the tables would be good to see. The shape of the training curves would indicate how fast the model learns and whether the fine-tuning asymptotes or results in seed-dependent instability (which is common for fine-tuning physics prediction models).\n\nStepping back a bit, this paper addresses a very niche problem, because the paradigm involves:\n1) A large simulated dataset and a small real-world dataset (not niche)\n2) Ground-truth impact locations yet no other physical parameters for the real-world dataset (very niche)\n3) Ground-truth knowledge of when the impact occurs in time, and specificity of the model to this time-point (very niche)\n4) Your aim is to infer some unknown physical parameters without actually being able to do rollouts or video prediction (somewhat niche)\n5) The only environment is a single object bouncing (very niche).\n\nYour model is also very specific to this particular paradigm. So without strong results (which, given no benchmarking with existing methods, the reader cannot evaluate) I'm struggling to see how this paper could be of interest to the wider ICLR audience.\n\nWhile I appreciate your reply, I cannot in good conscience give a rating higher than 5.", "We thank the reviewer for their appreciation of our work. We address the reviewer’s concerns here:\n\n1) “It would be more interesting if the dataset was created using multiple types of probe objects. Currently, it is only a ball.”\n We agree that the eventual goal for research in this direction should be to generalize to multiple types of probe objects. We discuss this further in the response to the review from AnonReviewer2. (https://openreview.net/forum?id=BJxssoA5KX&noteId=ByekAQQo6Q)\n\n2) “The length of the groundtruth and predicted trajectories might be different. How is the difference computed?”\n The evaluation is not dependent on the length of the trajectories recorded. 
The distance between the predicted center and the ground-truth center is computed at timestep 10 (0.1 seconds post-bounce). All trajectories in the dataset have length greater than 10 timesteps.\n\n3) “The impact location (x,y) corresponds to multiple locations in 3D. Why not use a 3D point as input? It seems the 3D information is available for both the real and synthetic cases.”\n In the physics model, the 3D collision point is currently used since the point cloud is represented with the collision as origin. In the VIM model, using a 3D point is similar to using a 2D (x,y) point since we eventually need to extract visual features from 2D input images.\n\n4) “Why is it non-trivial to use a deconvolution network for predicting the output point cloud trajectory?”\n There is very limited work on generating point clouds from embeddings. Integrating a deconvolution model would have added an additional obstacle to an already challenging problem. Furthermore, it would make localizing the errors more difficult.\n\nSome relevant literature that demonstrates the challenges of generating point clouds:\n[1] Achlioptas, Panos, et al. \"Representation learning and adversarial generation of 3D point clouds.\" arXiv preprint arXiv:1707.02392 (2017).\n[2] Insafutdinov, Eldar, and Alexey Dosovitskiy. \"Unsupervised Learning of Shape and Pose with Differentiable Point Clouds.\" arXiv preprint arXiv:1810.09381 (2018).\n[3] Lin, Chen-Hsuan, Chen Kong, and Simon Lucey. \"Learning efficient point cloud generation for dense 3D object reconstruction.\" arXiv preprint arXiv:1706.07036 (2017).\n[4] Achlioptas, Panos, et al. \"Learning Representations and Generative Models for 3D Point Clouds.\" (2018).\n\n\n5) “The length of the input trajectory can vary, but it seems the proposed architecture assumes a fixed-length trajectory. I am wondering how it handles a variable-length input.”\n We observed that the 10 frames before and after the collision contain sufficient information. Therefore, we used these 20 frames in the proposed model. For videos where more frames are available, we use only the 10 frames before and after collision. \n\n6) “How is the bounce location encoded in VIM?”\n The bounce location is used to index the feature map which is the output of the VIM. We present this in the “Training” paragraph of Subsection 3.2: $\\rho_{x,y}$ is obtained by indexing the output $\\mathcal{V}(I)$.\n\n7) “I don't see any statistics about the objects being used for data collection. That should be added to the paper.”\n Thank you for the suggestion. That would indeed be informative. We shall add this to the final version of the paper since this would require some additional effort to label the objects. ", "We thank the reviewer for their time and appreciation of our work.", "3) “The authors could evaluate their model with respect to pixel loss (after ground-truth rendering) and compare to a video prediction algorithm (such as PredNet by Lotter, Kreiman, & Cox, 2016).”\n The goal of our work was to investigate whether real-world data can be used to learn models of physics and also simultaneously estimate physical parameters in real-world scenes. We do not, however, deal with the realistic rendering of the predicted outputs from the learned physics model. Therefore, we cannot directly compare to future-prediction models like PredNet [Lotter et al], since we do not predict the pixels in the future frames. \n\n4) “I would love to see training curves with errorbars of the models on the most important metrics (e.g. 
Dist and COR Median Absolute Error)”\n We have now computed the error bars for the forward prediction distance error and COR median absolute error over multiple training/testing runs with different initializations. These results confirm the conclusions of our ablative study.\n\nExperiment | Dist (Mean, Std) | COR Med Abs Err (Mean, Std)\nCenter based | 28.2, 0.005 | 0.173, 0.01\nFix core and traj. enc. | 38.4, 0.008 | 0.258, 0.008\nTrain core and traj. enc. | 24.7, 0.004 | 0.169, 0.006\nTrain core, Fix traj. enc. | 21.9, 0.006 | 0.158, 0.01\n\n\nClarifications:\n1) “How was the PointNet trajectory encoder trained? Were gradients passed through from the PIM? Was the same network used for both the simulation and real-world data?”\n Yes, the PointNet trajectory encoder is actually part of the PIM in our proposed approach. The gradients for the trajectory encoder are computed with respect to the objectives mentioned in Equations (2) and (3). \nYes, the same network is used for simulation and real-world data.\n\n2) “The performance of the center-based model in Table 1 seems surprisingly low. Is the VIM at fault? Or is the sphere-fitting sub-optimal?”\n In theory, if accurate centers and point clouds are available, both models should perform similarly. The sphere-fitting in our data is sub-optimal due to the noise in the stereo-depth estimates. We believe that this highlights the advantage of using a PointNet-based model to avoid dealing with hand-crafted estimates of centers.\n\n[a] Jui-Hsien Wang, Rajsekhar Setaluri, Dinesh K. Pai, and Doug L. James. Bounce maps: An improved restitution model for real-time rigid-body impact. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017), 36(4), July 2017. doi: https://doi.org/10.1145/3072959.3073634.\n", "We thank the reviewer for their feedback. We address the concerns of the reviewer below.\n\n1) “The authors are introducing both a new training paradigm (to my knowledge unused in the literature) and a new model, and without any existing baselines to compare against I find it a bit difficult to understand how well the model works.”\n We agree that due to the novelty of our training paradigm, model and data, there is a lack of existing literature/baselines to compare against. This is an unavoidable challenge we face. However, in order to better provide context for the performance of our models, we have conducted extensive quantitative and qualitative experiments and compared to relevant baselines (as also noted by other reviewers) including: (a) experiments dissecting the proposed model to localize the performance gains obtained due to the PointNet trajectory encoders; (b) training the PIM on real-world data; and (c) a ground-truth-normals-based experiment for reference. Overall, we hope that our proposed approach can also serve as a useful baseline for future work in this direction. \n\n2) “Overall, the authors’ model is somewhat complicated and not as general as it initially seems. To justify this complication I would like to see more convincing results and benchmarking or application to more than one single dataset (e.g. non-spheres bouncing).”\n As previously noted by the reviewer, prior work along the lines of estimating physical parameters and learning models of physics from real-world data is extremely scarce. Therefore, there are no relevant datasets that can directly be used to benchmark our approach, which also emphasizes the need for such a dataset. 
\nIn the nascent stages of this field, we believe that addressing the problem with a spherical probe object provides a good starting point. Non-spherical probe objects introduce additional complexity, making exploration in this direction more challenging. For example, results in [a] show how much the physical properties vary across the surface of an object. The controlled setup of a spherical probe object ensures that the outcomes of bounces are dependent only on the physical properties of one object. However, we agree that non-spherical probe objects could definitely be an interesting and essential next step to pursue as future work. \n\nSpecific concerns:\n1) “A link to an open-sourced version of the dataset is not available”\n The double-blind submission of ICLR constrains our ability to provide the dataset publicly without revealing our identity. The data will be made publicly available with the final version of the paper. \n\n2) “The authors claim in multiple places that the model is trained end-to-end, but this does not seem to be the case. Specifically, the PIM is pre-trained on an auxiliary dataset from simulation. The trajectory encoder also seems to be pre-trained (though I could be wrong about that, see my question below). Furthermore, there is a bit of hand-holding: The PIM uses ground-truth state for pre-training, and the VIM gets the ground-truth bounce location. In light of this, the model seems a lot less general and end-to-end than implied in the abstract and introduction.”\n The PIM (including the trajectory encoder) is pretrained initially using simulation data. The VIM+PIM pipeline is then finetuned in an end-to-end manner on the real data. In the abstract/introduction, we refer to this end-to-end training. It is true that the PIM uses simulation parameters in the pretraining phase and the VIM uses the ground truth location to index the feature maps. However, the training is still “end-to-end” in the conventional usage of the term, since the model is fully differentiable and the gradients for the objective in Equation 3 are computed w.r.t. all the parameters of both the VIM and PIM. This is analogous to pretraining on ImageNet and finetuning with added parameters for other tasks, which is also referred to as end-to-end training. \n\n\n(Continued below)\n\n", "This paper presents a method for inferring physical properties of the world (specifically, normals and coefficients of restitution) from both visual and dynamic information. Objects are represented as trajectories of point clouds under an encoder/decoder neural network architecture. Another network is then learned to predict the post-bounce trajectory representation given the pre-bounce trajectory representation and the surface parameters. This is used both to predict the post-bounce trajectory (with a forward pass) and to estimate the surface parameters through an optimization procedure. This is coupled with a network which attempts to learn these properties from visual cues as well. This model can be either pretrained and fixed or updated to account for new information about a scene.\n\nThe proposed model is trained on a newly collected dataset that includes a mixture of real sequences (with RGB, depth, surface normals, etc.) and simulated sequences (additionally with physical parameters) generated with the help of a physics engine. It is compared with a number of relevant baseline approaches and ablation models. 
The results suggest that the proposed model is effective at estimating the physical properties of the scene.\n\nOverall the paper is well written and thoroughly evaluated. The problem is interesting and novel, the collected dataset is likely to be useful and the proposed solution to the problem is reasonable." ]
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_BJxssoA5KX", "iclr_2019_BJxssoA5KX", "rkgYYukJ1N", "S1lJ8Ylo0Q", "r1eXfEXiaX", "ryeBDwsMs7", "HkegBTQcnm", "ByekAQQo6Q", "SJloGnWRnQ", "iclr_2019_BJxssoA5KX" ]
iclr_2019_BJxvEh0cFQ
K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning
We introduce a novel method that enables parameter-efficient transfer and multi-task learning with deep neural networks. The basic approach is to learn a model patch - a small set of parameters - that will specialize to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g. converting a Single Shot MultiBox Detection (SSD) model into a 1000-class image classification model while reusing 98% of parameters of the SSD feature extractor). Similarly, we show that re-learning existing low-parameter layers (such as depth-wise convolutions) while keeping the rest of the network frozen also improves transfer-learning accuracy significantly. Our approach allows both simultaneous (multi-task) as well as sequential transfer learning. In several multi-task learning problems, despite using much fewer parameters than traditional logits-only fine-tuning, we match single-task performance.
accepted-poster-papers
Reviewers largely agree that the proposed method for fine-tuning deep neural networks is interesting and that the empirical results clearly show its benefits over fine-tuning only the last layer. I recommend acceptance.
train
[ "S1eTI1PuAX", "BJgEyp8gAQ", "SJgssRVq3X", "SJeD6ZMT6Q", "B1lPdZz6a7", "H1lFC1M6p7", "BygmSOC2hm", "S1e7BbGqj7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks to the authors for their reply. I am satisfied with the current state of the paper and tend to keep my score.", "Several changes have been made to my comments, thanks for pointing out the mistakes. ", "This paper explored the means of tuning the neural network models using less parameters. The authors evaluated the case where only the batch normalisation related parameters are fine tuned, along with the last layer, would generate competitive classification results, while using very few parameters comparing with fine tuning the whole network model. However, several questions are raised concerning the experiment design and analysis:\n1. Only MobilenetV2 and InceptionV3 are evaluated as classification model, while other mainstream models such as ResNet, DenseNet are not included. Would it be very different regarding the conclusion of this paper?\n2. It seems that the only effective manner is by fine tuning the parameters of both batch normalisation related and lasts layer, while fine tuning last layer seems to be having the main impact on the final result. In Table 4, authors do not even provide the results fine tuning last layer only.\n3. The organisation of the paper and the order of illustration is a bit confusing. e.g. later sections are frequently referred in the earlier sections. Personally I would prefer a plain sequence than keep turning pages for confirmation.", "We thank AnonReviewer1 for the review. Below are our responses inline. \n\n>> * explain the choice of the hyper-parameters of RMSProp (paragraph under Table 1).\n\nThe hyper-parameters are the same as those in the standard setup for MobilenetV2 or InceptionV3. We have added a line in the experiments section mentioning this.\n\n>> * fix Figure 3, it's impossible to read in the paper-printed version\n\nThe four subfigures are now split into two rows and are now hopefully easily readable. \n\n>> * explain how the average number of parameters per model in computed in Tables 4 and 5. E.g. 700K params/model in the first column of Table 4 is misleading - I suppose the shared parameters are not taken into account. The same holds for 0 in the second column, etc.\n\nThank you for pointing this out. We had mistakenly only counted the non-shared parameters in the models, and forgot to include the last layer parameters in the second column. This has now been corrected to simply the total number of parameters trained. \n\n>> * add a proper discussion for domain adaptation part. The simple \"The results are shown in Table 5\" is not enough. \n\nDone. \n\n>> * consider leaving the discussion of cost-efficient model cascades out. The presented details are too condensed and do not add value to the paper.\n\nMakes sense. We moved these results to the appendix to be included in the full version.\n\n>> * explain how different resolutions are managed by the same model in the domain adaptation experiments.\n\nWe added a line in the paper stating the images are brought to the right resolution using bilinear interpolation before passing as input to each model. \n", "We thank AnonReviewer2 for the review. Below is our detailed response.\n\n>> 1. Only MobilenetV2 and InceptionV3 are evaluated as classification model, while the residual connection based models such as ResNet, DenseNet are not included. 
Would the conclusion of this paper be very different?\n\nWe experimented extensively with multiple tasks (classification, detection, multi-task learning) and datasets instead of trying more models for the same task, as we intended to test the effectiveness of our method in various situations. Further, MobileNetV2 has residual connections, which encouraged us to believe that the results on other residual-connection-based models would be similar. \n\nWe ran experiments with ResNet and got similar results. For instance, transfer learning accuracy from ImageNet to Cars goes up from 61.4% (last layer fine-tuning) to 73% (S/B patch + last layer fine-tuning). From ImageNet to Aircraft, accuracy goes up from 51.8% (last layer) to 62.5% (S/B patch + last layer). In the interest of space, we did not think it added much to the experimental section of the paper.\n\n>> 2. It seems that the only effective manner is fine-tuning both the batch-normalisation-related parameters and the last layer, while fine-tuning the last layer seems to have the main impact on the final result. In Table 4, the authors do not even provide results for fine-tuning the last layer only.\n\nFine-tuning the last layer is not always required. For instance, in domain adaptation (Sec 5.4), the model patch consists of only the batch normalization parameters, and the resulting accuracies match or exceed those of individually trained models. \n\nFrom Figure 3 and Table 4, we see that fine-tuning scales, biases (S/B) and depthwise (DW) layers along with the last layer causes an average 50% relative improvement in accuracy over fine-tuning only the last layer, while being only a small (4%) increase in the number of parameters over the last layer.\n\nWhen performing multi-task or transfer learning across different tasks (e.g. ImageNet → Places365), it becomes necessary to have different last layers as the output spaces are different. In Table 4, the second column corresponds to the case where only the last layer is separate for each task. We apologize if this was not clear - we have now updated the Table 4 headers to explicitly reflect this fact. \n \n>> 3. The organisation of the paper and the order of illustration is a bit confusing.\n\nWe will be happy to modify the paper if the reviewer elaborates on this point.\n", "We thank AnonReviewer3 for the review. Below are our responses to specific comments. \n\n>> 1. The memory benefit is obvious, it would be interesting to know the training speed compared to fine-tuning methods (both the last layer and the entire network)?\n\nGenerally, we did not see a large variation in training speed on the datasets that we tried. All fine-tuning approaches needed 50-200K steps depending on the learning rate and the training method. While different approaches definitely differ in the number of steps necessary for convergence, we find these changes to be comparable to changes caused by other hyper-parameters such as learning rate, and generally not providing a clear signal worth articulating in the paper. \n\n>> 2. It seems that DW patch has limited effects compared to S/B patch. It would be nice to have some analysis of this aspect.\n\nYes, the DW patch seems to be less powerful than the S/B patch. Generally, the DW patch resulted in about 5-10 percentage points lower accuracy than the S/B patch while having a comparable number of parameters. However, it does add a lot of value when used in conjunction with the S/B patch. 
For instance, from the top two figures in Figure 3, we see that fine-tuning the combination of DW and S/B patches (4% of the network parameters) closes the accuracy gap between the S/B patch (1% of the network parameters) and fine-tuning the last layer (37% of the network parameters). \n\nIf the reviewer thinks that adding the performance of the DW-only patch would be a useful addition to Figure 3, we are happy to do that. We had excluded it in the interest of not crowding the plots.", "The authors proposed an interesting method for parameter-efficient transfer learning and multi-task learning. The authors show that in transfer learning, fine-tuning the last layer plus the BN layers significantly improves the performance over fine-tuning only the last layer. The results are surprisingly good, and the authors also did an analysis of the relationship between the embedding space and biases. \n\n1. The memory benefit is obvious, it would be interesting to know the training speed compared to fine-tuning methods (both the last layer and the entire network)?\n2. It seems that DW patch has limited effects compared to S/B patch. It would be nice to have some analysis of this aspect.\n", "Summary: the paper introduces a new way of fine-tuning neural networks. Instead of re-training the whole model or fine-tuning the last few layers, the authors propose to fine-tune a small set of model patches that affect the network at different layers. The results show that this way of fine-tuning is superior to the above-mentioned typical approaches either in accuracy or in the number of tuned parameters in three different settings: transfer learning, multi-task learning and domain adaptation.\n\nQuality: the introduced way of fine-tuning is an interesting alternative to the typical last-layer re-training. I like that the authors present an intuition behind their approach and justify it by an illustrative example. The experiments are fair, assuming the authors explain the choice of hyper-parameters during the revision.\n\nClarity: in general the paper is well-written. The discussion of the multi-task and domain adaptation parts can be improved though.\n\nOriginality: the contributions are novel to the best of my knowledge.\n\nSignificance: high, I believe the paper may facilitate further developments in the area.\n\nI ask the authors to address the following during the rebuttal stage:\n* explain the choice of the hyper-parameters of RMSProp (paragraph under Table 1).\n* fix Figure 3, it's impossible to read in the paper-printed version\n* explain how the average number of parameters per model is computed in Tables 4 and 5. E.g. 700K params/model in the first column of Table 4 is misleading - I suppose the shared parameters are not taken into account. The same holds for 0 in the second column, etc.\n* add a proper discussion for the domain adaptation part. The simple \"The results are shown in Table 5\" is not enough. \n* consider leaving the discussion of cost-efficient model cascades out. The presented details are too condensed and do not add value to the paper.\n* explain how different resolutions are managed by the same model in the domain adaptation experiments." ]
[ -1, -1, 6, -1, -1, -1, 7, 8 ]
[ -1, -1, 3, -1, -1, -1, 5, 4 ]
[ "SJeD6ZMT6Q", "B1lPdZz6a7", "iclr_2019_BJxvEh0cFQ", "S1e7BbGqj7", "SJgssRVq3X", "BygmSOC2hm", "iclr_2019_BJxvEh0cFQ", "iclr_2019_BJxvEh0cFQ" ]
iclr_2019_BJzbG20cFQ
Towards Metamerism via Foveated Style Transfer
The problem of visual metamerism is defined as finding a family of perceptually indistinguishable, yet physically different images. In this paper, we propose our NeuroFovea metamer model, a foveated generative model that is based on a mixture of peripheral representations and style transfer forward-pass algorithms. Our gradient-descent free model is parametrized by a foveated VGG19 encoder-decoder which allows us to encode images in high dimensional space and interpolate between the content and texture information with adaptive instance normalization anywhere in the visual field. Our contributions include: 1) A framework for computing metamers that resembles a noisy communication system via a foveated feed-forward encoder-decoder network – We observe that metamerism arises as a byproduct of noisy perturbations that partially lie in the perceptual null space; 2) A perceptual optimization scheme as a solution to the hyperparametric nature of our metamer model that requires tuning of the image-texture tradeoff coefficients everywhere in the visual field which are a consequence of internal noise; 3) An ABX psychophysical evaluation of our metamers where we also find that the rate of growth of the receptive fields in our model matches V1 for reference metamers and V2 between synthesized samples. Our model also renders metamers in roughly a second, presenting a ×1000 speed-up compared to the previous work, which now allows for tractable data-driven metamer experiments.
accepted-poster-papers
1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion. - The problem is well-motivated and related work is thoroughly discussed - The evaluation is compelling and extensive. 2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision. - Very dense. Clarity could be improved in some sections. 3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately. No major points of contention. 4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another. The reviewers reached a consensus that the paper should be accepted.
train
[ "S1gispPmRX", "r1e9ZnwmA7", "ryx7JhwmRX", "BygPHovmC7", "HyeGxiD7C7", "rJeH99PXA7", "B1li_tDmAm", "HJedMX8Na7", "rJxupYl0hm", "Byxt733Fhm" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We’d like thank all reviewers for the feedback and assessment of our paper. We hope to have individually addressed all your concerns. We have uploaded a modified version of our paper where we have addresses such concerns, re-arranged figures, and fixed minor typos and corrections. These include:\n\nMoving Figure 13 to the Supplementary Material for a detailed discussion on the potential interpretability of V1 and V2 metamers in the human visual system.\n\nEnhancing Figure 4 with the subfigures of Figure 3 where we show how the interpolation is done in the encoded space.\n\nAdding the clarification in Section 3 that the FS model includes structural constraints in the metamer generation pipeline as shown in Wallis et al. 2016.\n\nAn extended version of Figure 9 where we tune a gamma function via the perceptual optimization framework of Experiment 1, but using other IQA metrics such as MS-SSIM and IW-SSIM. This figure is supplementary and added in the Supplementary Material.\n\nThe histograms of the permutation tests that verify the scale invariance of the gamma function.\n\nConfidence intervals for the estimates of critical scaling and absorbing factors, as well as the lapse rate for each observer in Experiment 2.", "- How is the model trained? Do the authors use the pre-trained model of Huang & Belongie or is the training different in the context of the proposed method? I could only find the statement that the decoder is trained to invert the encoder, but that doesn't seem to be what Huang & Belongie's model does and the paper does not say anything about how it's trained to invert. Please clarify.\n\nWe use the pre-trained decoder by Huang and Belongie as stated in our paper. The decoder does invert the encoder with high fidelity if the input images for style and content are the same, both in theory (all the statistics are matched in the VGG19), and in practice (visual inspection and alpha=0 SSIM scores as reported in the paper). In the training pipeline, the encoder is fixed and the decoder is trained to learn how to invert the structure of the content image, and the texture of the style image, thus when the content and style image are the same, then the decoder approximates the inverse of the encoder. In the supplementary material we provide details of such training as uploaded in our submission, the content images were natural scenes from ImageNet and the style images were a collection of paintings and texture-like images.\n\nWe would like to emphasize as stated in our paper that within our pipeline, there is no explicit training to render metamers, but rather to invert image structure and texture-driven distortions in the encoded space.\n", "- I would like to know some details about the inference of the critical scaling. It seems surprisingly spot on 0.5 as in F&S for synth vs. synth, but looking at the data in Fig. 12 (rightmost panel), I find the value 0.5 highly surprising given that all the blue points lie more or less on a straight line and the point at a scaling factor of 0.5 is clearly above chance level. Similarly, the fit for original vs. synth does not seem to fit the data all that well and a substantially shallower slope seems equally plausible given the data. How reliable are these estimates, what are the confidence intervals, and was a lapse rate included in the fits (see Wichmann & Hill 2001)?\n\nWe followed your suggestion and reported the lapse rates in the updated manuscript (all under 3%). 
There was little variability in the fits, as the absorbing factor generally takes care of modulating the asymptotic performance of the psychometric function for each subtask in our roving experiment. We elaborate more on the fitting procedure as well as the characteristics of the psychometric function and lapse rates in the Supplementary Material. We also included confidence intervals for all the estimates of critical scaling factor, absorbing factor and lapse rates in our updated version (see Figure 12).\n\nYou are correct. In order to compute the average fitted values for the pooled observer, we averaged the fitted values for the 3 observers and labelled them as the average fit, rather than performing least squares regression on the average values (which was done individually). \n\nWe added these clarifications in the supplementary material of the paper, as well as a derivation of how to compute the lapse rate for our ABX task.\n\n- I don't get the point of Figs. 4, 13 and 14. I think they could as well be removed without the paper losing anything. Similarly, I don't think sections 2.1 and the lengthy discussion (section 5) are useful at all. Moreover, section 3 seems bogus. I don't understand the arguments made here, especially because the obvious options (alpha=1 or overlapping pooling regions; see above) are not even mentioned.\n\nWe moved Figure 13 to the Supplementary Material.\n\nWe believe Figures 4 and 14 provide a clear interpretation of the model in terms of distortions in the encoded space and how distortions when viewed by the human observer change as a function of alpha and the geometry of the surface in the encoded space. We think there is a lot of room and work to do in terms of integrating the literature of visual metamerism within the context of differential geometry -- which is outside of the scope of this paper, but we provide hints on why it could be appropriate, and it is currently being developed as follow-up work. The recent work of Henaff (2018) on the perceptual straightening hypothesis, as well as Eidolon distortions by Koenderink et al. (2017), are both examples of integrations between vision science and differential geometry. \n\nSection 3 provides mathematical insight into the psychophysical tractability of the metamer rendering problem, given that a structural constraint should be included as we clarified previously. For example, a per-pooling-region tuning of maximal distortion is psychophysically intractable given that the experimenter would have to explore all pooling regions, over many values of alpha and over a wide collection of images, over many scales and for multiple trials. If we assume (similar to our settings): 100 pooling regions, 5 scales, 10 steps for alpha, 10 images, 30 trials, and 2 seconds per trial for observer response, this amounts to roughly 1 month of raw psychophysics time. If we take into account that observers usually do a maximum of 2 hours per day -- this would extrapolate to a year in actual data collection time. It is possible, but unreasonable. This is the benefit of the perceptual optimization simulated experiment we propose.\n\nSection 3 also formalizes the psychophysical optimization to be performed to find the critical scaling value for the FS metamers and motivates the need for Experiment 1, where we reduce the hyper-parametric nature of our model to a single parameter.\n
We would like to address your concerns:\n\n- The motivation for introducing alpha not clear to me. Wasn't the idea of F&S that you can reduce the image to its summary statistics within a pooling region whose size scales with eccentricity? Why do you need to retain some content information in the first place? How do images with alpha=1 (i.e. keep only texture) look?\n\nYou are correct on the idea that F&S introduce the texture matching hypothesis for peripheral processing (as preceded by Balas et al., 2009), however while the original work of Balas et al. 2009 uses pure texture matching with no structural constraints to explain losses in performance for a visual search task, the model and implementation of FS includes a prior of image structure at each step when performing gradient descent for texture matching. You could think of this as globally trying to minimize the mean square error (MSE) between the initial noise seed and the final image in pixel space, and locally at the same time, they are matching the MSE in texture space (via the Portilla-Simoncelli statistics) between the initial noise seed and the image content that lies within each receptive field. Images with pure alpha=1 for each receptive field show highly aberrant distortions, as would FS metamers if the content/structure matching restriction would not be added in their implementation. Consequently, a pure texture matching approach does not work for visual metamerism. This has been clarified with great detail in Wallis et al. (Journal of Vision) 2016 -- See Figure 7., General Discussion (Texture Statistics & Metamerism), and Acknowledgements of their paper, and also been suggested recently in Wallis, Funke et al., 2018. In addition the pioneering work of Rosenholtz et al. 2012 (Journal of Vision) on Mongrels as well as the Texforms of Long, Yu & Konkle (PNAS, 2018) provide the same intuitions and clarifications with regards to preserving structure. It is a subtle detail present in the original FS code, and might have not been emphasized in the original paper.\n\nHere is a link to the line in the code, where they project the image to its low pass residual (a way of enforcing structural constraints) for every step of the texture matching procedure that is done via a set of coarse-to-fine sub-iterations:\n https://github.com/freeman-lab/metamers/blob/master/main/metamerSynthesis.m#L191\n\nWe would really like to thank you for pointing this out, as it is a detail that if not properly addressed, defeats the whole purpose of trying to preserve image structure -- and introducing an alpha parameter in the first place. Hopefully we have addressed your main concern, and appreciate the rigorous feedback that has propelled this work forward from previous versions.\n\n- Related to above, why does alpha need to change with eccentricity? Experiment 1 seems to suggest that changing alpha leads to similar SSIM differences between synths and originals as F&S does, but what's the evidence that SSIM is a useful/important metric here?\n\nAlpha should change as a function of eccentricity given higher effects of crowding. We empirically verified this is the case by fitting a gamma function that tunes each alpha coefficient as a function of receptive field size which increase with eccentricity. With regards to the choice of SSIM over other IQA metrics, please see our detailed response to AnonReviewer 3 who has suggested trying Experiment 1 with other IQA metrics. 
We have done so, finding that the tuning properties of the gamma function still hold, and have added these results in the updated Supplementary Material (Section 6.7).\n\n- Again related to above, why do you not use the same approach of blending pooling regions like F&S did instead of introducing alpha?\n\nWe do indeed use blended pooling regions as in F&S, and would like to clarify that the interpolation in Figures 3 and 4 is done for each pooling region, rather than the whole image. You could think of Figure 3 as a ‘zoomed in’ pooling region, as we wanted to magnify the effects of the distortions within a receptive field. These smoothly blended pooling regions are used for local style transfer for each receptive field. Figure 9 (top) shows how we assign an alpha coefficient to each pooling region (receptive field), and Section 6.2 in the supplementary material provides details on the construction of blended pooling regions. \n", "--- (Also not necessarily a negative) Exercising SSIM is a valid decision given its widespread use. I am curious if MS-SSIM, IW-SSIM or other metrics make any significant difference. \n\nThis is also a great observation. In principle we chose SSIM because it has been empirically shown to be monotonic with human judgments of visual perception in terms of distortions. Other important factors in our choice of SSIM, which we did not include in the paper, are that SSIM is based on changes of luminance, contrast, and structure (via normalized contrast), all of which are critical aspects when analyzing distortions. In addition, SSIM is upper bounded, symmetric and has a unique maximum, which are all ideal traits to have for the perceptual optimization pipeline proposed in Experiment 1 (Section 4.1). MS-SSIM (multiscale SSIM) and IW-SSIM (image content weighted SSIM, computed via mutual information between the encoded reference and distorted image) also share these properties, and following your suggestion we decided to re-run Experiment 1 with these IQA metrics to analyze the robustness of the choice of SSIM vs other metrics as well as the potential change of shape of the gamma function. This experiment served as a great control, as it showed that our optimization scheme is extendible to other IQA metrics (see Algorithm 1 in the Supplementary Material in our original and updated submission).\n\nWe have added a page in the Supplementary Material (Section 6.7) with the updated results, figures and permutation tests, where we discuss what we found. We have copied the findings here:\n\nThere are 3 key observations that stem from these additional results:\n\n1) The sigmoidal nature of the gamma function is found again and is also scale independent, showing the broad applicability of our perceptual optimization scheme and how it is extendable to other IQA metrics that satisfy SSIM-like properties (upper bounded, symmetric and unique maximum).\n\n2) The tuning curves of MS-SSIM and IW-SSIM look almost identical, given that IW-SSIM is nothing more than a weighted version of MS-SSIM where the weighting function is the mutual information between the encoded representations of the reference and distorted image across multiple resolutions. Differences are stronger in IW-SSIM when the region over which it is evaluated is quite large (i.e., an entire image); however, given that our pooling regions are quite small in size, the IW-SSIM score asymptotes to the MS-SSIM score. 
In addition, both scores converge to very similar values given that we are averaging these scores over the images and over all the pooling regions that lie within the same eccentricity ring. We found that ~90% of the maximum alphas had the same values given the 20-point sampling grid that we use in our optimization. Perhaps a different selection of IW hyperparameters (we used the default set), finer sampling schemes for the optimal value search, as well as averaging over more images, may produce visible differences between the two metrics.\n\n3) The sigmoidal slope is smaller for both IW-SSIM and MS-SSIM vs SSIM, which yields more conservative distortions (as alpha is smaller for each receptive field). This implies that the model can still create metamers but potentially with different critical scaling factors for the reference vs synth experiment, and for the synth vs synth experiment. Future work should focus on psychophysically finding these critical scaling factors, and on whether they still lie within the range of growth rates of receptive field sizes of V1 and V2.\n", "Thank you for having a very positive outlook on our paper; we will address some of your comments and questions.\n\n--- At the extreme tradeoff between intrinsic structure and texture, the notion of a metamer seems somewhat obscured. At what point is a metamer no longer a metamer?\n\nThis is a great question. In general, two stimuli are metameric to each other when they are perceptually indistinguishable under certain viewing conditions. In our experiments the viewing condition is restricted to a forced fixation task at the center of each image. To answer your question, this happens when the scaling value that is used to construct the size of the pooling regions exceeds their critical limit. All images below such critical scaling values remain metameric to each other contingent on the testing paradigm: Reference vs Synthesis (s=0.25) and Synthesis vs Synthesis (s=0.5). Indeed, you could imagine a small alteration in an image, such as modifying a specific pixel by 1 bit, that could also produce a metamer. Yet that distortion is somewhat uninteresting, and most importantly it does not provide theoretical insights on the computations done by the human visual system (texture matching in the periphery as proposed in Balas et al., 2009 and Freeman and Simoncelli, 2011). Moreover, we find a function (the gamma function) that modulates how much distortion (quantified by alpha) to insert contingent on the size of each receptive field, for any scaling factor. Figure 4 illustrates this idea with the blue contour around the blue dot, which we call the metameric boundary: if a distortion exceeds such a value, the synthesized image will fail to be metameric locally for a receptive field, and thus for the entire image. \n", "Thanks for taking the time to review our paper. We also share your enthusiasm with regards to metamerism. Below we address some of the comments:\n\n--- The quantitative evaluation is somewhat lacking in that there are no quantitative psychophysical experiments to compare this model to competing ones across different observers. For example, it would have been interesting to compare the ability of observers to distinguish between original images and metamers generated by different models.\n\nThis is an excellent point and we are currently working in that direction. 
The current submission represents a good first step: to fully describe our model and psychophysically evaluate it under 2 conditions (synth vs synth, and synth vs reference). A next step is to evaluate our model against other models, including FS, on the same set of images. One current limitation when considering such rigorous evaluation is that both the SideEye model and the CNN Synthesis model are not publicly available -- thus differences in performance might be driven by hyperparameter/implementation settings for each model, rather than by the model itself. Along these lines, we are looking forward to releasing our code and making it public, similar to the FS model, to promote the development of improved metamer generation models as well as to see potential applications of metamerism in computer vision as suggested in the discussion section.\n\n--- Additional comments: On page 10, you show Fig. 13; however, you mention at the end of the first paragraph that you further elaborate on Fig. 13 in the Supplementary Materials. I think it would be better to either provide more discussion in the text and refer to the figure, or just move it fully to Supplementary materials. \n\nThanks for pointing this out. We moved Figure 13 to the Supplementary Material, where we elaborate more on the geometrical interpretation of these distortions in the encoded space and how a human observer might not be able to discriminate between such distortions.\n\n--- Additional comments: Also, in the qualitative comparison of various models you mention that SideEye runs in milliseconds whereas NF runs in seconds. It would be interesting to discuss the potential trade-off between speed and the quality of generated metamers between the models.\n\nWe agree, and this goes back to the point we mentioned earlier with regards to publicly available code from the authors. One of the main differences that we can comment on is that the models differ in distortions given the difference in texture statistics. We have verified this via visual inspection. The SideEye model uses a Fully Convolutional Network to approximate a Texture Tiling Model (Mongrel) in O(1) time that locally matches texture distortions everywhere in the visual field, analogous to the metamers of FS. These Mongrels use Portilla-Simoncelli texture statistics, as compared to the output of the VGG-Net that we use in our parametrization.\n\nComparing all models is a next step in metamer research, and we will begin conversations with some of the other authors to see if we can share/distribute our code for such comparisons. In addition, the work of Wallis, Funke et al., 2018 has also shown that the choice of images for evaluation (texture-like, scene-like and man-made) affects the difficulty of metameric rendering. Thus, the field is not only limited by access to models and code, but also by the lack of a standardized set of images and a psychophysical paradigm for evaluation.\n", "Summary\nThis paper proposes a NeuroFovea (NF) model for the generation of point-of-fixation metamers. 
As opposed to previous algorithms which use gradient descent to match the local texture and image statistics, NF proposes to use a style transfer approach via an Encoder-Decoder style architecture, which allows it to produce metamers in a single forward pass and thus achieve a significant speed-up compared to earlier approaches.\n\nPros\n-The paper tackles a very intriguing topic.\n-The paper is very well written using concise and clear language, allowing it to present a large amount of information in the 10 pages + appendix.\n-The paper provides a thorough discussion of the problem, related work and the model itself.\n-The single-forward-pass nature of the model allows it to achieve a 1000x speed-up in generating metamers as opposed to previous GD based approaches.\n-The authors provide enough details to allow for reproducibility.\n\nCons\n-(Not necessarily a negative) Requires a very careful reading as the paper provides a lot of information (though as mentioned it is very well written)\n-The quantitative evaluation is somewhat lacking in that there are no quantitative psychophysical experiments to compare this model to competing ones across different observers. For example, it would have been interesting to compare the ability of observers to distinguish between original images and metamers generated by different models. \n\nAdditional comments\nOn page 10, you show Fig. 13; however, you mention at the end of the first paragraph that you further elaborate on Fig. 13 in the Supplementary Materials. I think it would be better to either provide more discussion in the text and refer to the figure, or just move it fully to Supplementary materials.\n\nAlso, in the qualitative comparison of various models you mention that SideEye runs in milliseconds whereas NF runs in seconds. It would be interesting to discuss the potential trade-off between speed and the quality of generated metamers between the models.", "This paper presents an interesting analysis of metamerism and a model capable of rapidly producing metamers of value for experimental psychophysics and other domains.\n\nOverall I found this work to be well written and executed and the experiments thorough. Specific points on positives and negatives of the work follow:\n\nPositives:\n- The paper shows a solid understanding of the literature in this domain and presents a strong motivation\n- The problem itself is addressed at a deep level with many nuanced (but important) considerations discussed\n- Ultimately the results of the model seem convincing in particular with the accompanying psychophysical experiments\n\nNegatives:\n- (Maybe not a negative, but a question) At the extreme tradeoff between intrinsic structure and texture, the notion of a metamer seems somewhat obscured. At what point is a metamer no longer a metamer?\n- (Also not necessarily a negative) Exercising SSIM is a valid decision given its widespread use. I am curious if MS-SSIM, IW-SSIM or other metrics make any significant difference. ", "Summary:\nThe paper proposes a fast method for generating visual metamers – physically different images that cannot be told apart from an original – via foveated, fast, arbitrary style transfer. The method achieves the same goal as an earlier approach (Freeman & Simoncelli 2011): locally texturizing images in pooling regions that increase with eccentricity, but is orders of magnitude faster. 
The authors perform a psychophysical evaluation to test how (in)discriminable their synthesized images are amongst each other and compared with originals. Their experiment replicates the result of Freeman & Simoncelli of a V2-like critical scaling in the synth vs. synth condition, but shows that V1-like or smaller scaling is necessary for the original vs. synth condition.\n\nI reviewed an earlier version of this paper for a different venue, where I recommended rejection. The authors have since addressed some of my concerns, which is why I am more positive about the paper now.\n\nStrengths:\n+ The motivation for the work is clear and the implementation straightforward, combining existing tools from style transfer in a novel way.\n+ It's fast. Rendering speed is indeed a bottleneck in existing methods, so a fast method is useful.\n+ The perceptual quality of the rendered images is quantified by psychophysical testing.\n+ The role of the scaling factor for the pooling regions is investigated and the key result of Freeman & Simoncelli (pooling regions scale with 0.5*eccentricity) is replicated with the new method. In addition, the result of Wallis et al. (2018) that lower scale factors are required for original vs. synth is replicated as well.\n\n\nWeaknesses:\n- Compared with earlier work, an additional fudge parameter (alpha) is introduced. It is not clear why it is necessary and it complicates interpretation.\n- The paper contains a number of sections with obscure mathiness and figures that I can't follow and whose significance is unclear.\n\n\nConclusion:\nThe work is well motivated, the method holds up to its promise of being fast and is empirically validated. However, it feels quite ad-hoc and the writing of the paper is very obscure at various places, which leaves room for improvement.\n\n\nDetails:\n\n- The motivation for introducing alpha is not clear to me. Wasn't the idea of F&S that you can reduce the image to its summary statistics within a pooling region whose size scales with eccentricity? Why do you need to retain some content information in the first place? How do images with alpha=1 (i.e. keep only texture) look?\n\n- Related to above, why does alpha need to change with eccentricity? Experiment 1 seems to suggest that changing alpha leads to similar SSIM differences between synths and originals as F&S does, but what's the evidence that SSIM is a useful/important metric here?\n\n- Again related to above, why do you not use the same approach of blending pooling regions like F&S did instead of introducing alpha?\n\n- I would like to know some details about the inference of the critical scaling. It seems surprisingly spot on 0.5 as in F&S for synth vs. synth, but looking at the data in Fig. 12 (rightmost panel), I find the value 0.5 highly surprising given that all the blue points lie more or less on a straight line and the point at a scaling factor of 0.5 is clearly above chance level. Similarly, the fit for original vs. synth does not seem to fit the data all that well and a substantially shallower slope seems equally plausible given the data. How reliable are these estimates, what are the confidence intervals, and was a lapse rate included in the fits (see Wichmann & Hill 2001)?\n\n- I don't get the point of Figs. 4, 13 and 14. I think they could as well be removed without the paper losing anything. Similarly, I don't think sections 2.1 and the lengthy discussion (section 5) are useful at all. Moreover, section 3 seems bogus. 
I don't understand the arguments made here, especially because the obvious options (alpha=1 or overlapping pooling regions; see above) are not even mentioned.\n\n- How is the model trained? Do the authors use the pre-trained model of Huang & Belongie or is the training different in the context of the proposed method? I could only find the statement that the decoder is trained to invert the encoder, but that doesn't seem to be what Huang & Belongie's model does and the paper does not say anything about how it's trained to invert. Please clarify.\n\n- At various places the writing is somewhat sloppy (missing words, commas, broken sentences), which could have been avoided by carefully proof-reading the paper." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2019_BJzbG20cFQ", "ryx7JhwmRX", "BygPHovmC7", "Byxt733Fhm", "rJeH99PXA7", "rJxupYl0hm", "HJedMX8Na7", "iclr_2019_BJzbG20cFQ", "iclr_2019_BJzbG20cFQ", "iclr_2019_BJzbG20cFQ" ]
iclr_2019_BkG5SjR5YQ
Post Selection Inference with Incomplete Maximum Mean Discrepancy Estimator
Measuring divergence between two distributions is essential in machine learning and statistics and has various applications including binary classification, change point detection, and two-sample testing. Furthermore, in the era of big data, designing a divergence measure that is interpretable and can handle high-dimensional and complex data becomes extremely important. In this paper, we propose a post selection inference (PSI) framework for divergence measures, which can select a set of statistically significant features that discriminate two distributions. Specifically, we employ an additive variant of maximum mean discrepancy (MMD) for features and introduce a general hypothesis test for PSI. A novel MMD estimator using incomplete U-statistics, which has an asymptotically normal distribution (under mild assumptions) and gives high detection power in PSI, is also proposed and analyzed theoretically. Through synthetic and real-world feature selection experiments, we show that the proposed framework can successfully detect statistically significant features. Lastly, we propose a sample selection framework for analyzing different members of the Generative Adversarial Networks (GANs) family.
accepted-poster-papers
The submission evaluates maximum mean discrepancy estimators for post selection inference. It combines two contributions: (i) it proposes an incomplete U-statistic estimator for MMD, (ii) it evaluates this and existing estimators in a post selection inference setting. The method extends the post selection inference approach of (Lee et al. 2016) to the current U-statistic approach for MMD. The top-k selection problem is phrased as a linear constraint, reducing it to the problem of Lee et al. The approach is illustrated on toy examples and a GAN application. The main criticism of the paper concerns its novelty. R1 feels that it is largely just the combination of two known approaches (although it appears that the incomplete estimator is key), while R3 was significantly more impressed. Both are senior experts in the topic. On balance, the reviewers were more positive than negative. R2 felt that the authors' comments helped to address their concerns, while R3 gave detailed arguments in favor of the submission and championed the paper. The paper provides an additional interesting framework for evaluation of estimators, and considers their application in the broader context of post-selection inference.
train
[ "H1lpAZgWAQ", "S1xJY1jeC7", "HJeK_589a7", "S1lGXvh8T7", "ryeN2LnIaX", "Syl46BhIpX", "BkxV1bD02Q", "HkgHB2j23X", "SyghNyEchQ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We really appreciate your feedback. We have already fixed typos. ", "A few additional typos to fix:\n-Section 1: 2nd paragraph: 'i.e. higher order' -> 'i.e., higher order',\n-Section 2: 1st paragraph: 'larges score' -> 'largest score',\n-Section 3.3.: last paragraph: 'see theoretical analysis section' -> 'see Section 4',\n-Corollary 3: ',incomplete' -> ', incomplete'.", "Thank you again for your valuable comments. \n\nAfter the first revision, we noticed that the reference section was not updated properly.\nThus, we have updated the reference section based on your comments and upload the new version to the system.", "We really appreciate the thoughtful and detailed comments. \nWe have corrected the grammatical errors and modified the manuscript based on the suggestions above.\nSpecifically, in Section 1, we modified the discerption of MMD; in section 2 we added more information on divergence measures. In section 4 and the supplementary material, we defined constant $c$ as the degeneracy of the U-stats kernel. And for the block estimate, we clarified that the block size is selected to be $\\sqrt{n}$ in all our experiments.\n", ">The paper is a nice combination of incomplete mmd and post selection inference technique. However, the combination is straightforward: the asymptotic Gaussian property of the incomplete mmd is the key.\n\nWe would like to emphasize that, as far as we know this is the first selective inference method for distribution comparison (or two-sample test). To achieve this, combining MMD and PSI is one of the simple approaches. To deal with the normality constraint in PSI, we also proposed an incomplete U-statistics estimator for MMD and showed basic theoretical properties of the estimator. Although the combination itself is intuitive, since MMD is heavily used in machine learning community in two-sample testing and generative modeling, we believe that this new estimator and application to PSI can shed new light to further improvements of MMD-based algorithms.\n\n>Furthermore, I think the applications (feature selection and test for GAN objective) is not exciting from the machine learning point of view. A better application which can show-case the obtained p-value is very useful will make the paper more interesting.\n\nFeature selection (or variable selection) is one of important machine learning problems including sparse learning. For instance, Lasso is a widely known feature selection method and there exist a large number of related papers in top machine learning venues. Even among the ICLR submissions this year, we found an another interesting feature selection paper:\nKnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks\nhttps://openreview.net/forum?id=ByeZ5jC5YQ \n\nMoreover, feature selection has a number of important applications in biology and healthcare. In these filed, estimating p-value is extremely important. As for comparing generative models, analyzing GAN is one of hot topics in machine learning in particular for deep learning communities; although the result that we get (all null hypotheses rejected) is not the most surprising, we believe that the problem itself is important and further study needed.", "Thank you for your valuable comments.\n\n>However, I wish the paper could have explained the main idea clearly. Right now it is hard to me to judge whether or not the proposed estimator is correct (the only place that seems to support this is Fig. 4(a). Where the p-value seems to be clear to a uniform distribution. 
\n\nThe main idea is to use an estimator of MMD with a normal response to perform testing and selective inference. The proposed incomplete estimator randomly subsamples the U-stats kernels of MMD, and asymptotic normality follows from properties of incomplete U-statistics. In finite sample cases, the correctness of this estimator is supported by the uniform p-value distribution in Figure 4(a), and also the fact that the false positive rate is successfully controlled at the desired \\alpha in both synthetic and real datasets.\n\n>For instance, for the proposed PSI estimator, we will need to estimate the covariance matrix. This was explained in Section 3.3, and it was said that the algorithm in Fan et al. (2013) was used for the estimation. However, I think more detailed discussions and explanation should be provided here. In order to obtain a correct p-value estimate, I believe getting an accurate covariance matrix estimate is crucial. How large a sample size is needed in order for us to get an accurate enough covariance matrix to perform subsequent post selection inference?\n\nFor hypothesis testing, the consistency of the covariance matrix estimator is necessary to get accurate covariance matrix estimates. In our PSI setup, the consistency of the covariance matrix estimator, which corresponds to the result of Theorem 2 and its corollaries, is derived under standard regularity conditions. \n\nIn practice, for a small number of samples, the estimation of the covariance matrix is extremely hard. As shown in Figure 6 of the supplementary material, we experimentally found that the estimation error of the incomplete MMD covariance is smaller than that of the block MMD covariance. Moreover, for the incomplete U-statistics MMD, we have $\\ell=rn$ samples to estimate the covariance matrix, while we have only $n/B$ samples for the block MMD. That is, the number of samples that can be used for covariance estimation of the incomplete U-statistics estimator is $rB$ times larger than that of the block estimator. In the setup of Figure 6, we have $rB = 100$. For example, for $n = 1000$, we can use $5000$ samples for the incomplete U-statistics, while we have only $50$ samples for the block estimator. Thus, for the block MMD, this is problematic in the high-dimensional case (e.g., d = 100), and the estimated covariance matrix is not full-rank. To alleviate this issue, we employ the POET algorithm for the block estimator. \n\nMoreover, we ran the PSI algorithms on the white wine dataset while changing the number of samples as 200, 400, …, 4000. As can be seen, both PSI algorithms can control the FPR with 2000 samples. Moreover, it is clear that the incomplete U-statistics estimator achieves better performance than the block estimator in both TPR and FPR. We included this experimental result in the revised supplementary material (see Figure 8).", "The paper proposes a new post selection inference method for MMD statistics, i.e., identifying the p-values for the dimensions of a vector. I believe this is an important problem that has not been addressed in previous literature. The work provides an extension to the original post selection inference work for lasso (Lee et al. 2016). \n\nHowever, I wish the paper could have explained the main idea clearly. Right now it is hard for me to judge whether or not the proposed estimator is correct (the only place that seems to support this is Fig. 4(a), where the p-value seems to be close to a uniform distribution). 
\n\nFor instance, for the proposed PSI estimator, we will need to estimate the covariance matrix. This was explained in Section 3.3, and it was said that the algorithm in Fan et al. (2013) was used for the estimation. However, I think more detailed discussions and explanation should be provided here. In order to obtain a correct p-value estimate, I believe getting an accurate covariance matrix estimate is crucial. How large a sample size is needed in order for us to get an accurate enough covariance matrix to perform subsequent post selection inference? More discussions are needed here. ", "The paper proposes a method for post feature selection inference in the case where the distribution is non-Gaussian. The paper developed a statistic called incomplete mmd and showed its asymptotic normality. Then the incomplete mmd can be plugged into the post feature selection framework for computing the p-value. \n\nThe paper is a nice combination of incomplete mmd and post selection inference techniques. \nHowever, the combination is straightforward: the asymptotic Gaussian property of the incomplete mmd is the key. \n\nFurthermore, I think the applications (feature selection and test for GAN objective) are not exciting from the machine learning point of view. A better application which can showcase that the obtained p-value is very useful would make the paper more interesting. ", "The authors focus on the selection problem of k statistically significant features discriminating 2 probability distributions accessible via samples. They propose a non-parametric approach under the PSI (post selection inference) umbrella using MMD (maximum mean discrepancy) as a discrepancy measure between probability distributions. The idea is to apply (asymptotically) normal MMD estimators, rephrase the top-k selection problem as a linear constraint, and reduce the problem to Lee et al., 2016. The efficiency of the approach is illustrated on toy examples and in the GAN (generative adversarial network) context. The technique complements the PSI-based independence testing approach recently proposed by Yamada et al., 2018. \n\nThe submission is a well-organized, clearly written, nice contribution; it can be relevant to the machine learning community.\n\nBelow I enlist a few suggestions to improve the manuscript:\n-Section 1: The notion of a characteristic kernel (a kernel for which MMD is a metric) has not been defined, but it was referred to. 'Due to the mean embeddings in RKHS, all moment information is stored.': This sentence is somewhat vague.\n-Section 1: 'MMD can be computed in closed form'. This is rarely the case (except for e.g. Gaussian distributions with Gaussian or polynomial kernels). I assume that the authors wanted to refer to the estimation of MMD.\n-Section 1: 'K nearest neighbor approaches (Poczos & Schneider, 2011)'. The citation to this specific estimator can go under alpha-divergences. The Wasserstein metric could also be mentioned.\n-Section 3.1: k is used to denote the number of selected features and also the kernel used in MMD. I suggest using different notations.\n-Theorem 1: '\\Phi is the CDF...'. There is no \\Phi in the theorem.\n-Section 3.2: The existence of MMD (mean embedding) requires certain assumptions: E_{x\\sim p}\\sqrt{k(x,x)} < \\infty, E_{x\\sim q}\\sqrt{k(x,x)} < \\infty.\n-Section 3.2.: block estimator: 'B_1 and B_2 are finite'. 
'fixed'?\n-Section 3.2.: MMD_{inc}: \n i) 'S_{n,k}': k looks superfluous.\n ii) 'l': it has not been introduced (cardinality of D).\n-Section 3.3: typo: 'covraiance' (2x)\n-Section 3.3: Fan et al. 2013: The citation can go to \\citep{}. \n-Theorem 2: \n i) 'c' is left undefined.\n ii) Comma is missing before 'where'.\n iii) \\xrightarrow{d} (Theorem 2, Corollary 3-4): Given that 'd' also denotes dimension in the submission, I suggest using a different notation for convergence in distribution.\n-At the introduction of block-MMD the block size (B) was fixed, while in the experiments (e.g. Figure 3) it is growing with the sample size (B=\\sqrt{n}). The assumption on B should be clearly stated.\n-Section 5.1: (b) mean shift: comma is missing before 'where'.\n-References: \n i) Abbreviations and names in the titles should be capitalized (such as cramer, wasserstein, hilbert-schmidt, gan, nash). \n ii) Scholkopf should be Sch\\\"{o}lkopf (in the ALT 2005 work).\n iii) 'Exact post-selection inference, with application to the lasso': All the authors are listed; 'et al.' is not needed." ]
[ -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "S1xJY1jeC7", "HJeK_589a7", "S1lGXvh8T7", "SyghNyEchQ", "HkgHB2j23X", "BkxV1bD02Q", "iclr_2019_BkG5SjR5YQ", "iclr_2019_BkG5SjR5YQ", "iclr_2019_BkG5SjR5YQ" ]
iclr_2019_BkG8sjR5Km
Emergent Coordination Through Competition
We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to co-operative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded by game theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.
accepted-poster-papers
The paper studies population-based training for MARL with co-play, in MuJoCo (continuous control) soccer. It shows that (long-term) cooperative behaviors can emerge from simple rewards, shaped but not towards cooperation. The paper is overall well written and includes a thorough study/ablation. The weaknesses are the lack of strong comparisons (or at least easy-to-grasp baselines) on a new task, and the lack of some of the experimental details (about reward shaping, about hyperparameters). The reviewers reached an agreement. This paper is welcomed for publication at ICLR.
train
[ "rygaShhcn7", "BJlGC9UhpX", "HkgfkJPnaX", "Sylf0sIn6X", "BJl-oTGeaX", "Skx1dt70hX" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a new multiagent research environment---a simplified version of 2x2 RoboSoccer using the MuJoCo physics engine with spherical players that can rotate laterally, move forwards / backwards, and jump.\n\nThe paper deploys a fine-tuned version of population-based sampling on top of a stochastic value gradient reinforcement learning algorithm to train the agents. Some of the fine-tunings used include deploying different discount factors on multiple different reward channels for reward shaping.\n\nThe claimed novel contributions of the paper are (1) a new multiagent testbed, (2) a decentralized training procedure, (3) fine-tuning reward shaping, and (4) highlighting the challenges in evaluation in novel multiagent competitive environments.\n\nOverall, my judgment is that the paper is fine, but the authors have not helped me to understand the significance of their contributions.\n\nTaking each in turn:\n\n(1) What is the significance of the new environment? What unique characteristics make it difficult? What makes this environment an importantly different testbed or development environment? The connection to RoboSoccer is motivating but tenuous. The new environment should have particular characteristics that expose problems with past algorithms or offer new challenges existing algorithms have not addressed at all.\n\n(2) Why is it important to have a decentralized training procedure when the authors have control over all the agents? If it will allow faster training, has the authors' algorithm been demonstrated to accomplish that goal? \n\n(3) It's hard to evaluate new algorithms when the domain studied is also new. We have no sense for state-of-the-art performance on this domain across a range of algorithms. The authors conduct a careful ablation study on their new algorithm but do not compare their approach to other classes of algorithms.\n\n(4) The authors indicate that evaluating the quality of an algorithm for a competitive context is hard in absence of established benchmarks---whereas in single-agent or cooperative environments progress can be measured against the goal of the environment, progress in competitive environments requires comparison to approaches that are thought to be good. Here the authors are themselves pointing out a fundamental problem with introducing new competitive multiagent testbeds, and the authors don't resolve this tension. Since the main contribution of the work is the environment, it's hard to see how this point the authors themselves make doesn't undermine that central contribution.\n\nBesides other comments mentioned above, a couple other ways to improve the paper would be:\n- Clarify why this environment is important to be introducing---what are the unique things that can be studied with this new environment?\n- Hold an open competition to get benchmarks created by other teams of researchers\n\nSome minor comments:\n- $n_r$ is not defined explicitly in the text as far as I have found\n- The authors state: \"The specific shaping rewards use for soccer are detailed in Section 4.2\" but I couldn't find them there. \n\n---\n\nPost-rebuttal \n\nMy main concern was assessing the value of the overall contribution of the paper. The other reviewers seem to appreciate both the new environment being offered and the combination of techniques deployed in the authors' solution. 
If there is an audience that will appreciate this work at ICLR as seems to be indicated by those reviews, then I would increase my score to marginally above the acceptance threshold.", "We thank the reviewer for their constructive feedback.", "We thank the reviewer for constructive feedback. The contribution of our work extends beyond the introduction of a novel environment. We use the domain to study the emergence of coordination by analyzing the behaviors of decentralized agents. We carried out ablation studies to surface important ingredients for effective learning in multi-agent cooperative-competitive games. Our work highlights a fundamental difficulty in evaluation on multi-agent domains, with or without benchmarks, which we alleviate through a principled Nash averaging evaluation scheme.\n\nWe address each point individually:\n\n1) Q) “What makes this environment an importantly different testbed or development environment?”\nA) The environment will provide the ML community with a cooperative-competitive multi-agent environment in a simulated physical world which is accessible and flexible. It is accessible because it uses a widely adopted physics simulator and research platform. It is also accessible in the sense that we have demonstrated a solution using end-to-end RL. It is flexible because although the current paper describes a relatively simple agent embodiment (chosen to draw attention to multi-agent coordination), the environment can be extended in terms of body complexity as well as the number of players and could become part of a wider multi-task suite with consistent physics. We believe it is an important contribution to create such an environment, release it, and publish the first set of results on it. Further, the environment rules are simple but complexity emerges from sophisticated behavior and interactions between independent physically embodied agents. As such we have seen a level of emergent cooperation in a simulated physical world, which has not been witnessed before by end-to-end RL.\n\nQ) “The new environment should [...] offer new challenges existing algorithms have not addressed at all.''\nA) Learned cooperation of embodied independent RL agents in physical worlds is an unsolved problem, and a significant challenge for all existing approaches. To our knowledge there is no published environment that allows us to study this problem with realistic simulated physics where agents must acquire and leverage physical motor skills in order to coordinate with others in an open-ended manner.\n\n2) Q) “Why is it important to have a decentralized training procedure when the authors have control over all the agents?”\nA) We agree that the environment could be used to investigate centralized approaches which could yield faster learning in this particular problem (but may not in general scale to more agents). However, we chose to study the emergence of coordination in decentralized, non-communicating agents, which is a significant unsolved problem important for real-world multi-agent problems (e.g. interaction between self-driving cars from different manufacturers, or human-agent interactions) where centralized solutions may not be feasible, and is more consistent with human learning.\n\n3) Q) “It's hard to evaluate new algorithms when the domain studied is also new.” & “We have no sense for state-of-the-art performance on this domain across a range of algorithms”\nA) We agree that evaluation is difficult in the absence of clear baselines on a novel domain. 
We have combined state-of-the-art distributed RL and continuous control, with additional improvements, and suggest that this is a sensible reference solution for future investigations. We performed a detailed ablation study precisely to answer the question: what are the important ingredients for successful multi-agent learning on this novel, challenging domain?\n\n4) Q) “The authors indicate that evaluating the quality of an algorithm for a competitive context is hard in the absence of established benchmarks”\nA) We disagree with reviewer’s assessment that highlighting difficulties in evaluation undermines the contribution of this work. There have been multiple studies (sec 4.3) where conclusions have been drawn according to simple multi-agent evaluation schemes. Our work shows where existing evaluation procedures fall short. We adopted an evaluation scheme via Nash averaging and demonstrated the discrepancy between our methods and a tournament (Figure 10). We do not claim that our evaluation method resolves the issue completely, but we believe it provides a more principled evaluation scheme. Even for domains where we possess human baselines or programmed bots evaluation is still difficult for the same underlying reason. It is important to introduce domains in which these problems arise, such as this one.\n\nQ) “what are the unique things that can be studied with this new environment?”\nA) See 1)\n\nQ) “Hold an open competition to get benchmarks created by other teams of researchers”\nA) we agree that our environment would be suitable for a competition, since the environment is an easily accessible MuJoCo environment. This could be an exciting future project, beyond the current paper scope.", "We thank the reviewer for their constructive feedback. We address each point individually: \n\nRe. correlation of rewards within and across teams:\n\nIn our setup we distinguish between the raw sparse reward events / raw continuous performance metrics (all denoted by r), and the individual agent’s preferences for these (denoted by alpha). While the binary reward events ‘goal’ and ‘concede’ are correlated within team, but anti-correlated across teams, this is not true for all continuous metrics (it is for ball-vel-to-goal but not for vel-to-ball). Independently, each agent can have different preferences for each of the signals and associated discount factors. These quantities are evolved via PBT and thus vary across agents and over time. As a consequence, even when the signal itself is perfectly (anti-)correlated between agents this is almost never true for the resulting reward received by the agents and they may thus acquire different behaviors.\n\nRe. relative importance of hyperparameter adjustments performed by evolution: \n\nThe reviewer raised an important question regarding population-based training. Given that the PBT procedure drives evolution towards agents whose hyper-parameters and model parameters are the most competitive within the current population of agents (in terms of winning the game), a parameter that is irrelevant for the learning progress should not exhibit a consistent trend across experiment replicas (as each hyper-parameter is initialized randomly and then evolved through an evolution procedure that selects, inherits and mutates where mutation applies a random multiplicative perturbation). We concretely observed in our work (Figure 4) that both actor and critic learning rates as well as discount factor and entropy cost exhibit clear trends over the course of training. 
Regarding learning rates specifically, we believe that our PBT procedure re-discovers the commonly employed learning rate annealing schedule for accelerated learning. We have added a new Section E in the appendix comparing the evolution of hyperparameters across three experiments with different seeds: entropy cost and critic learning rates evolve consistently across experiments indicating that performance is more sensitive to these parameters. The critic learning rate in particular decreases over time. Actor learning rate is relatively less consistent across the three experiments, indicating that performance is less sensitive to fine tuning the actor learning rate.", "The paper proposes a new environment - 2vs2 soccer - to study emergence of multi-agent coordinated team behaviors. Learning relies on population-based training of agent's shaped reward mixtures and approach of nash averaging is used for evaluation.\n\nClarity: the paper is well-written and clear. The ablations provided are helpful in understanding how much different introduced components matter, and quantitative and qualitative analysis of resulting behavior is quite nice\n\nOriginality: the individual pieces of this work (PBT, SVG, nash averaging) have been introduced previously, but this paper puts them together in a well-chosen manner.\n\nSignificance: I believe this paper proposes a number of interesting observations (effects of PBT, evaluation, effects of recurrent policies to overcome non-stationarity issues) that I believe would be of value to the part of ICLR community doing research in multi-agent systems. ", "Summary: The authors use competition as a way to train agents in a complex continuous team-based control task: a 2 player soccer game. Agents are paired randomly into a team of 2 and play another team of 2. The key aspect of the proposed algorithm is the use of population based training.\n\nStrong Points\n-\tThe authors propose a convincing methodology for speeding up learning in coordinated MARL.\n-\tThe Nash Averaging approach suggested for evaluating in the presence of cycles is interesting and a useful tool for evaluation when there are no easy baselines\n-\tThe authors do convincing ablation studies to show that the PBT is the most important part of the learning algorithms and does well even when paired with a simple feed forward model\n\nQuestions\n-\tThe authors use reward shaping of the form: “We design shaping reward functions {rj : S × A → R}j=1,...,nr P , weighted so that r(·) := nr j=1 αj rj (·) is the agent’s internal reward and, as in Jaderberg et al.” I’m not sure I follow how this works, without the additional dense shaping in the soccer game the reward is 0/1 depending on if one’s team wins or loses, so won’t one’s rewards always be perfectly correlated with those of one’s teammates and perfectly anticorrelated with those of the other team? Does this only work with the dense shaping (e.g. vel-to-ball)?\n-\tI would like to see which of the PBT controlled hyperparameters actually matter for the increase in training speed. Do the learning rates matter (since they’re also being changed by the Adam optimizer as training goes) or is it about the discount factor/entropy regularizer?\n" ]
[ 6, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2019_BkG8sjR5Km", "BJl-oTGeaX", "rygaShhcn7", "Skx1dt70hX", "iclr_2019_BkG8sjR5Km", "iclr_2019_BkG8sjR5Km" ]
iclr_2019_BkMiWhR5K7
Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors
We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less than the current state-of-the-art. The code for reproducing our work is available at https://git.io/fAjOJ.
accepted-poster-papers
This paper is on the problem of adversarial example generation in the setting where the predictor is only accessible via function evaluations, with no gradients available. The associated problem can be cast as a black-box optimization problem wherein finite differences and related gradient estimation techniques can be used. This setting appears to be pervasive. The reviewers agree that the paper is well written and that the proposed bandit optimization-based algorithm provides a nice framework in which to integrate priors, resulting in impressive empirical improvements.
train
[ "BJx1HlJJpQ", "S1eCfrzsA7", "H1gTyuSq0Q", "BkglK_U5C7", "SJg2WDBcR7", "rkgBbCf90X", "B1xIAu-qhm", "BJg_v6iOCQ", "SJeCGji_A7", "SyglTvjdR7", "rklvTWsO07", "rke9oWZuam", "rJgK3ebOpX", "B1ead6lupQ", "B1lDhBt52X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper formulates the black-box adversarial attack as a gradient estimation\nproblem, and provide some theoretical analysis to show the optimality of an\nexisting gradient estimation method (Neural Evolution Strategies) for black-box\nattacks.\n\nThis paper also proposes two additional methods to reduce the number of queries\nin black-box attack, by exploiting the spacial and temporal correlations in\ngradients. They consider these techniques as priors to gradients, and a bandit\noptimization based method is proposed to update these priors. \n\nThe ideas used in this paper are not entirely new. For example, the main\ngradient estimation method is the same as NES (Ilyas et al. 2017);\ndata-dependent priors using spatially local similarities was used in Chen et\nal. 2017. But I have no concern with this and the nice thing of this paper is \nto put these tricks under an unified theoretical framework, which I really \nappreciate.\n\nExperiments on black-box attacks to Inception-v3 model show that the proposed\nbandit based attack can significantly reduces the number of queries (2-4 times\nfewer) when compared with NES. \n\nOverall, the paper is well written and ideas are well presented.\nI have a few concerns:\n\n1) In Figure 2, the authors show that there are strong correlations between the\ngradients of current and previous steps. Such correlation heavily depends on\nthe selection of step size. Imagine that the step size is sufficiently large,\nsuch that when we arrive at a new point for the next iteration, the\noptimization landscape is sufficiently changed and the new gradient is vastly\ndifferent than the previous one. On the other hand, when using a very small\nstep-size close to 0, gradients between consecutive steps will be almost the\nsame. By changing step-size I can show any degree of correlation. I am not\nsure if the improvement of Bandit_T comes from a specific selection of\nstep-size. More empirical evidence on this need to be shown - for example, run\nBandit_T and NES with different step sizes and observe the number of queries\nrequired.\n\n2) This paper did not compare with many other recent works which claim to\nreduce query numbers significantly in black-box attack. For example, [1]\nproposes \"random feature grouping\" and use PCA for reducing queries, and [2]\nuses a good gradient estimator with autoencoder. I believe the proposed method\ncan beat them, but the authors should include at least one more baseline to \nconvince the readers that the proposed method is indeed a state-of-the-art.\n\n3) Additionally, the results in this paper are only shown on a single model\n(Inception-v3), and it is hard to compare the results directly with many other\nrecent works. I suggest adding at least two more models for comparison (most\nblack-box attack papers also include MNIST and CIFAR, which should be easy to\nadd quickly). These numbers can be put in appendix.\n\nOverall, this is a great paper, offering good insights on black-box adversarial\nattack and provide some interesting theoretical analysis. However currently it\nis still missing some important experimental results as mentioned above, and\nnot ready to be published as a high quality conference paper. 
I conditionally accept this paper as long as sufficient experiments can be added during the discussion period.\n\n\n[1] Exploring the Space of Black-box Attacks on Deep Neural Networks, by Arjun Nitin Bhagoji, Warren He, Bo Li and Dawn Song, https://arxiv.org/abs/1712.09491 (conference version accepted by ECCV 2018)\n\n[2] AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks, by Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng, https://arxiv.org/abs/1805.11770\n\n==========================================\n\nAfter the discussion period, the authors provided better evidence to support the conclusions in this paper and fixed bugs in the experiments. The paper looks much better than before. Thus I increased my rating.", "Thanks for fixing this bug in the experiments. The results look much more reasonable now. I will increase my rating.", "Also note that due to the time constraint in getting a revision in, we actually only compared Tu et al. to our second-best method, Bandits_T. We are running a comparison with Bandits_{TD} (both time and data priors) and will revise (if time permits) and/or report back the results.\n\n[EDIT: we have now done so, see comment above]", "We have updated the paper again (specifically, the comparison with Tu et al.) to reflect experiments we have now run with both the time and data priors (Bandits_{TD}). At 100% success rate with the same experimental design, our method now uses over 6 times fewer queries.\n\n", "First of all, we would like to sincerely thank the reviewer for their continually detailed comments and thorough review---it has been a great help in improving the manuscript.\n\nUpon checking the code, we realized that (as the reviewer suggested) we had accidentally reproduced the _targeted_ attacks in the baseline code repository. To account for this, we modified our code to work for targeted attacks, and properly replicated the experimental setup, choosing the correct \ell_2 perturbation bound, and random target classes as in Tu et al. (except for the fact that we use the prepackaged Inception-v3 classifier rather than the downloaded one from Tu et al.). We don't tune our hyperparameters at all, and use the same ones that we used for untargeted attacks. \n\nOur method achieves the same success rate with over 3 times the query efficiency at 100% success rate (note that this is a higher success rate than Tu et al. achieve at the same l2 perturbation bound, since there the authors only bound the mean and not the max), still establishing a significant improvement. We have uploaded a revision reflecting these changes. ", "Dear Paper1206 Authors,\n\nThank you for adding these new results. Figure 7 now shows the cosine similarity under different step sizes, which looks convincing. The newly added experiments on different models (ResNet-50, VGG-16) and different datasets (CIFAR and ImageNet), as well as the comparisons to other state-of-the-art methods, make this paper look much stronger than before.\n\nI have a concern regarding the comparison with Tu et al. (2018). The 100-fold reduction looks too good to be true. Can you confirm that you performed the attack under the same setting? E.g., do you run attacks with the same target labels for both methods, or run untargeted attacks for both? 
I think it is better to double check this.\n\nI am willing to increase my rating to 7 as long as the above concern can be addressed.\n\nThanks,\nPaper1206 AnonReviewer1\n", "UPDATE:\n\nI've read the revised version of this paper; I think the concerns have been clarified.\n\n-------\n\nThis paper proposes to employ a bandit-optimization-based approach for the generation of adversarial examples in the loss-accessible black-box setting. The authors examine the feasibility of using the step-wise and spatial dependence of the image gradients as prior information for the estimation of the true gradients. The experimental results show that the proposed method outperforms the natural evolution strategies (NES) method by a large margin.\n\nAlthough I think this paper is a decent paper that deserves acceptance, there are several concerns:\n\n1. Since the bound given in Theorem 1 is related to the square root of k/d, I wonder if the right-hand side could become \"vanishingly small\" in cases such as k=10000 and d=268203. I wish the authors could explain more about the significance of this theorem, or provide numerical results (which could be hard).\n\n2. Indeed, I am not sure if Section 2.4 is closely related to the main topic of this paper; these theoretical results do not seem helpful in convincing the readers about the idea of gradient priors. Also, the length of the paper is one of the reasons for the rating.\n\n3. In the experimental results, what is the difference between one \"query\" and one \"iteration\"? It looks like, in one iteration, Algorithm 2 queries twice?", "Thank you again for the review. We have now posted a revision of our paper, and the summary comment above details all of the changes we've made in response to reviewer comments, including several additional experiments and comparisons with other methods. \n\nTo highlight the changes that are most relevant to your review:\n\n1) We now provide an illustration of the bound in Appendix A in the relevant query regimes\n\n2 and 3) We have clarified some points in the paper based on reviewer comments and added significantly more experimental results---we hope that these results further justify the use of the full 10 pages.", "We have addressed the above comments in our revision, please see the main comment for more details. Thank you again for the review and suggestions.", "We have addressed comments (1), (2), and (3) in our revision (details are in the main comment above). To address the raised points directly:\n\n(1) is now addressed in Figure 7 in Appendix B.3, which shows how the time-dependent trend decays with the step size---even at high step sizes the trend persists. Specifically, we plot a graph identical to Figure 2 but for many different step sizes, from norm around 0.03 all the way to 4.0.\n\n(2) Appendix G now shows a comparison with Tu et al. ([2] in the original review). See our main comment above for a summary of the results.\n\n(3) We now include results from ImageNet and CIFAR, with Inception-v3, ResNet, and VGG16 in the appendices (more details in our main comment above).\n\nThank you again for the detailed review and the useful suggestions. \n\n\n", "We thank all the reviewers again for the helpful responses and revision suggestions. We have posted a revision that we believe addresses all the reviewer comments. 
\n\nIn addition to adding the suggested edits to the paper for clarity, we have now compared our approach with several datasets, baselines, and classifiers, and established a significant margin over state-of-the-art methods. Specifically, we have made the following updates:\n\n—————\nQuantifying time-dependent prior\n—————\nWe include a graph (in the omitted figures appendix) showing that the successive correlation prior (aka the time-dependent prior) holds true even up to very large step sizes. Specifically, we plot a graph identical to Figure 2 but for many different step sizes, from norm around 0.03 all the way to 4.0.\n\n—————\nOther threat models and datasets\n—————\nWe have added an Appendix F corresponding to ImageNet results for VGG16 and ResNet50 classifiers (along with Inceptionv3 copied from the main text for reference). Our methods still outperform NES on these benchmarks, often by a larger margin than shown for Inception-v3 in Table 1.\n\nWe have added an Appendix E corresponding to a comparison of our methods and NES in the CIFAR l-infinity threat model (for L2, we could not find a reasonable maximum \\epsilon from recent literature) with VGG16, Resnet50, and Inceptionv3 networks. \n\n————- \nComparison with another baseline\n————-\n\nEfficiency compared to Tu et al:\n—————\nWe looked into (Tu et al, 2018) and (Bhagoji et al, 2017) as suggested by reviewer 1 to compare with a baseline; we chose to compare with Tu et al (AutoZOOM) since it was (a) released later, (b) uses a more standard classifier than in Bhagoji et al and (c) does not require access to an external set of representative images (unlike Bhagoji et al, which uses this set to find the PCA components). As such, we have added an Appendix comparing our method to that of Tu et al: achieving the same success rate and using the mean perturbation from Tu et al as our maximum perturbation, we achieve a 35-fold reduction in query complexity.\n\nEfficiency compared to Tu et al + fine tuning:\n—————-\n Tu et al also give a “distortion fine-tuning” technique that attempts to reduce the mean perturbation after the attack. This fine-tuning takes around 100,000 queries, and in the best case, after using around 100,000 queries, reduces the mean perturbation to 0.4e-4 per-pixel normalized, which works out to just over 10 (see Figure 3a in Tu et al). In Appendix F, we show that running our attack with this lower distortion budget directly gives a similar success rate, using an average of around 900 queries as opposed to 100,000, giving more than a *100-fold* reduction in query complexity.\n\n————\nBound illustration\n————\n- To illustrate, we give an example of our own \\ell_2 threat model, where Theorem 1 gives us a bound on the performance gap between NES and least squares, in Appendix 1 (after the proofs).\n\n————\nEdits to paper\n————\n- We noticed that our image normalization for generating Table 1 was slightly incorrect, so we have fixed it and rerun the experiment—this has not changed the output significantly, and our methods still beat NES by the same margin of normalized queries. 
However, in the interest of correctness, we have updated the numbers in Table 1 to reflect the experiment run with correct normalization.\n- We have made the pseudocode for the bandits attack clearer, and explicitly noted how the data-dependent prior can be included, as well as justifying the boundary projection step\n- Fixed: \\nabla L —> g^* in Figures\n- Fixed: Section 2.4 sentence (as pointed out by Reviewer 3)", "We thank the reviewer for the detailed comments on the paper. We address the main points below:\n\n1. Typically black-box adversarial attacks are executed in a multi-step fashion, i.e. by using small numbers of queries per gradient estimates, and taking several gradient estimate steps (Ilyas et al, the NES-based attack, for example, uses 50 queries per gradient estimate). While it may be possible to prove tighter bounds, in the 50-query regime with d=268203, the bound is actually rather tight. (Furthermore, during our own preliminary experimentation, least-squares attacks usually performed identically to NES).\n\n2. Section 2.4 is meant to illustrate that without priors, we have essentially hit the limit of query-efficiency in black-box attacks. In particular, NES, which we found to be the current state-of-the-art, actually turns out to be approximately optimal, even from a theoretical perspective. This motivates us to take a new look on adversarial example generation, breaking through this optimality by introducing new information into the problem.\n\nWithout the proof in Section 2.4, one could reasonably hope that there are simply better gradient estimators that we can use as a drop-in replacement for NES. The theorems we prove there instead motivate our bandit optimization-based view. \n\n3. One iteration constitutes two queries (which are used for a variance-reduced gradient estimate via antithetic sampling). In general, the query count refers to queries of the classifier, whereas iteration counts the number of times that we take an estimated gradient step.\n\nWe hope the above points clarify the reviewer's concerns, and thank the reviewer again for the detailed feedback.", "We thank the reviewer for the comments!\n\nWe address the main points below:\n\n> Data dependent prior in pseudocode: Yes it is in fact by choice of d, but we agree this can be made clearer in the pseudocode. We will make sure to describe this more clearly in our final paper.\n\n> Figure 4: We will make sure to update this and be more explicit.\n\n> Figure 4c (low cosine similarity): Remarkably, for black-box attacks, though higher cosine similarity is better, the threshold for a successful adversarial attack (in terms of cosine similarity) is extremely low. In particular, for NES, the cosine similarity (as you mentioned) is almost 0, but the gradient estimates *still* result in a successful attack! We show that using our method leads to significantly better estimates of the gradient, though as one would expect in such a query-deficient domain (100s of queries vs 3*10e5 dimensional images), still pretty poor.\n\nWe will also be sure to address all of the minor comments in our final paper. We thank the reviewer again for the useful comments and suggestions.\n", "Thank you for the detailed comments, we will be sure to make these changes in the final version of the paper. As the reviewer correctly identifies, we consider the theoretical framework of online optimization as a basis for all black-box attacks to be one of our most profound contributions. 
That said, in order to improve the quality of the experimental results, we have addressed and added each suggested experiment. Specifically:\n\n1) We thank the reviewer for raising this---we initially only used the default NES step size (from Ilyas et al) to evaluate the temporal correlation. To give a fuller picture of how this temporal correlation relates to the step size, we have added a new plot in the appendix, which shows the average correlation on a trajectory as a function of the step size. \n\n2) To address this, we have added a table (in the Appendix) which compares our query-efficiency against that of [1] and [2]. It should also be noted, however, that both [1] and [2] can be integrated as \"priors\" on the gradient; in particular, that the gradient lies in some low-dimensional subspace. Our framework gives us a way to formalize these assumptions, and measure how empirically valid they are in order to find better and better black-box attacks.\n\n3) We have also added results on ResNet-50 and VGG-16 on ImageNet, and have also benchmarked our attack on all three classifiers (Inceptionv3, ResNet-50, VGG-16) on CIFAR as well.\n\nWe will be sure to comment again with a revision when the experiments are complete and integrated into the paper. We thank the reviewer again for the valuable suggestions.\n\n", "The paper formalizes the gradient estimation problem in a black-box setting, and proves the equivalence of least squares with NES. It then improves on the state of the art by using priors coupled with a bandit optimization technique.\n\nThe paper is well written. The idea of using priors to improve adversarial gradient attacks is an enticing one. The results seem convincing.\n\nComments:\n- I missed how the data-dependent prior is factored into Algorithms 1-3. Is it by the choice of d? I suggest a clearer explanation.\n- In Fig. 4, I was confused that the loss of the methods is increasing. It took me a minute to realize this is the maximized adversarial loss, and thus higher is better. You may want to spell this out for clarity. I typically associate lower loss with better algorithms.\n- I am confused by Fig. 4c. If I am comparing g to g*, I do expect a high cosine similarity; cos = 1 is the best. Why is the correlation so small, and why is it 0 for NES? You may also want to offer additional insight in the text explaining 4c. \n\nMinor comments:\n- Is Table 1 misplaced?\n- The symbol for \"boundary of set U\" may be confused with a partial derivative symbol\n- first paragraph of 2.4: \"our estimator a sufficiently\". Something missing?\n- \"It is the actions g_t (equal to v_t) which...\": referring to g_t as actions is confusing, although it may be technically correct in the bandit setting\n- Further explain the need for the projection in Algorithm 3, line 7.\n- Fig. 4: refer to the true gradient as g*\n\nCaveat: Although I am well versed in bandits, I am not familiar with the adversarial training and neural network literature. There is a chance I may have misevaluated central concepts of the paper." ]
[ 7, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 5, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "iclr_2019_BkMiWhR5K7", "BkglK_U5C7", "SJg2WDBcR7", "SJg2WDBcR7", "rkgBbCf90X", "B1ead6lupQ", "iclr_2019_BkMiWhR5K7", "B1xIAu-qhm", "B1lDhBt52X", "BJx1HlJJpQ", "iclr_2019_BkMiWhR5K7", "B1xIAu-qhm", "B1lDhBt52X", "BJx1HlJJpQ", "iclr_2019_BkMiWhR5K7" ]
iclr_2019_BkN5UoAqF7
Sample Efficient Imitation Learning for Continuous Control
The goal of imitation learning (IL) is to enable a learner to imitate expert behavior given expert demonstrations. Recently, generative adversarial imitation learning (GAIL) has shown significant progress on IL for complex continuous tasks. However, GAIL and its extensions require a large number of environment interactions during training. In real-world environments, the more an IL method requires the learner to interact with the environment for better imitation, the more training time it requires, and the more damage it causes to the environments and the learner itself. We believe that IL algorithms could be more applicable to real-world problems if the number of interactions could be reduced. In this paper, we propose a model-free IL algorithm for continuous control. Our algorithm is made up of mainly three changes to the existing adversarial imitation learning (AIL) methods – (a) adopting the off-policy actor-critic (Off-PAC) algorithm to optimize the learner policy, (b) estimating the state-action value using off-policy samples without learning reward functions, and (c) representing the stochastic policy function so that its outputs are bounded. Experimental results show that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions.
accepted-poster-papers
The paper proposes a simple method for improving the sample efficiency of GAIL, essentially a way of turning inverse reinforcement learning into classification. As reviewers noted, the method is based on a simple idea with potentially broad applicability. Concerns were raised about the multiple components of the system and what each contributed, and about missing pointers to the literature. A missing baseline was also noted: initializing GAIL with behaviour cloning, a setting suggested, though not tried, in previous works. The authors did, however, attempt this setting and found it to hurt, not help, performance. I find this surprising and would urge the authors to validate that this isn't merely an uninteresting artifact of the setup; however, I commend the authors for trying it and don't believe that a surprising result in this regard is a barrier to publication. As several reviewers did not provide feedback on revisions addressing their concerns, this Area Chair was left to determine to a large degree whether or not reviewer concerns were in fact addressed. I thank AnonReviewer4 for revisiting their review towards the end of the period, and concur with them that many of the concerns raised by reviewers have indeed been adequately dealt with.
train
[ "rye5EKn1pm", "B1lQQqme6X", "BkgMjKRznX", "Sklwhthhnm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an imitation learning algorithm that achieves competitive results with GAIL, while requiring significantly fewer interactions with the environment.\n\nI like the method proposed in this paper. It seems similar to ideas in this concurrent submission: https://openreview.net/forum?id=B1excoAqKQ\n\nHowever, the paper is a bit difficult to read. The proposed method is made up of several changes compared to the baselines (e.g. using Q-learning without IRL instead of IRL, using off-policy learning, using conditioning to obtain a stochastic policy) but motivation for each component is presented late within the paper. The terminology used to describe these components is a bit confusing. Also some math is presented without intuitive descriptions.\n\nI’d like to see more ablations performed: there are three main changes compared to GAIL, but an ablation is only performed for the stochastic policy. It would be interesting to tease out what is more important, off-policy learning, or bypassing IRL.", "Summary/contributions:\n\nThe primary aim of this paper is to improve the sample efficiency of GAIL (Ho et al. 2016). The claimed contributions can be summarized by consisting of 1) replacing TRPO (which was used in the original paper) with a off-policy RL with a modified reward, 2) using a policy parameterizing where the noise is used as an input rather than at the output. While conceptually simple, this paper contributes a method that shows improved sample efficiency on a series of benchmark mujoco tasks, which has practical implications for real world environments. \n\nPros:\n- a simple idea with good empirical results that would be of interest to the community\n\nCons:\n- (extremely) unclear presentation which hinders the message of the paper.\n- the novelty of the approach is somewhat limited\n\nJustification for score:\nI gave my rating based upon the following considerations. The approach in this paper makes sense from a practical perspective and presents strong results. However, the experiments in the paper do not clearly identify which components of their method lead to their improved performance (i.e., an ablation on their stated contributions). The writing is also extremely poor. The paper makes use of non-standard notation (in relation to the prior work which it builds on) and unusual terminology. Overall however, I am on the fence about this paper, since I recognize the good results presented in this paper, in addition to the timely nature of the idea (there are at least two concurrent submissions that I am aware of that are similar).\n\nOther:\n- I would appreciate if the related work discussed prior off-policy methods that use demonstrations (e.g Hester et al. 2017)\n- The paper has a large number of ungrammatical sentences and unidiomatic expressions. ", "This paper proposes a new method to imitate expert efficiently. The paper first proposes a way to compute reward function from expert demonstration and uses the log probability to represent this reward function. Then they find a form of bellman equation that can optimize the reward stably. After the 'Q learning without IRL', an off-policy RL off-pac is applied. So this paper achieves comparable results to GAIL but uses much less data amount. 
\n\nclarity:\nThis paper is clearly written.\n\noriginality:\nThis paper is original.\n\npros:\nComparable performance with GAIL.\nBetter performance than Behavioral Cloning.\nNew way of using demonstrations.\n\ncons:\nAlthough both the method and the experiments look promising, there is a very simple yet competitive baseline missing. This baseline is also mentioned in the original GAIL paper: you initialize GAIL with BC, and then train GAIL. That's the baseline needed for a fair comparison.\n", "The paper proposes a method for imitation learning via inverse reinforcement learning based on a specific modeling of the reward. The reward is modeled as the log probability that a state-action pair belongs to the expert policy. It models this distribution as a Bernoulli one and thus reduces the IRL problem to a classification task. The overall method also uses an off-policy algorithm to learn the value function of the current agent policy to improve sample efficiency. The method is tested on a set of continuous control tasks such as walker, hopper or humanoid. \n\nI think the paper has several flaws. First, I found the paper not very well written and organized. It is hard to read. It uses some terminology in a way that is different from the rest of the literature (such as Q-learning as learning the Q-function of the expert policy instead of using the optimal Bellman operator (even if the expert is supposed to be optimal)). I also think that the related work section is missing a lot of important refs, because it really focuses on recent papers while imitation learning has a long history. \n\nYet, my main concern is that the proposed method seems to reduce to a classification problem and is likely to suffer from the same issues as the supervised learning method (AKA behavior cloning). It probably overfits a lot, and there is nothing in the experiments that shows how robust the method is to perturbations. In a discrete world, this method would ideally place a reward of 1 in every state visited by the expert and 0 elsewhere, which is very likely to overfit and result in unstable behaviors in the presence of noise etc. I would like to see experiments showing robustness. \n\nThe experiments are also a bit strange since the learning is stopped early for the proposed method. Is it because the learning is unstable?\n\n\n" ]
[ 7, 5, 5, 5 ]
[ 5, 4, 5, 5 ]
[ "iclr_2019_BkN5UoAqF7", "iclr_2019_BkN5UoAqF7", "iclr_2019_BkN5UoAqF7", "iclr_2019_BkN5UoAqF7" ]
iclr_2019_Bke4KsA5FX
Generative Code Modeling with Graphs
Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.
accepted-poster-papers
This paper presents an interesting method for code generation using a graph-based generative approach. Empirical evaluation shows that the method outperforms relevant baselines (PHOG). There is consensus among reviewers that the method is novel and worth acceptance to ICLR.
train
[ "S1x0nNKgCm", "rJgMmnfxC7", "SyxeYOdu3Q", "HJl5-iMxAX", "Byen_drhpX", "Ske3KQBnT7", "rkxpzWG36m", "H1eHUfG36m", "HJeYe4p9p7", "Skly0QT9pm", "S1gsoQ69Tm", "S1ggSfa567", "S1gbGzT5aX", "rygKSlpcp7", "rkeutuqqnm", "Hyl3dbvq2X", "SJeXntTY37" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "The new figure 2 is indeed much clearer. Thanks!", "Looking forward to revisions", "The paper proposes a code completion task that given the rest of a program, predicts the content of an expression. This task has similarity to code completion tasks in the code editor of an IDE. The paper proposes an interesting problem, but the paper would benefit if writing and evaluation are significantly improved.\n\nThe work builds on prior research by Allamanis et al. 2018b that performs such completions of single variables by picking from the variables in the scopes. The difference here is that portions of parse trees are predicted as opposed to a single variables, where the algorithm from the prior research is used to predict single variables.\n\nWriting-wise the paper is hard to read on the technical part with many unclear details and this portion needs a good amount of extra explanations. The Epsilon set includes triples which are not described and need understanding equation (2). The first element of this triple is an edge label <edge>($a$, $v$) where $a$ is an AST and $v$ is a node. Thus, edges of the graph end up between entire ASTs and nodes? While I can see how could this make sense, there is certainly lack of explanation going on here. Overall, this part is hard to parse and time-consuming to understand except at high level. Furthermore, the text has many functions without signatures and they seem to be used before they are defined (e.g. getRepresentation).\n\nTechnically, the approach also seems very similar to N3NN by Parisotto et al, ICLR 2017. There should be more elaboration on what is new here. Otherwise, the novelty of the paper really is just combining this work with Allamanis et al. 2018b.\n\nIn terms of evaluation, the task seems to be on a different set of expressions than the one explained in the exposition. How many expressions where there in the evaluation programs and how many were chosen to evaluate on and based on what criteria. It seems from the exposition that expressions with field accessed and function calls are not possible to be generated, but then some completions show method calls. How much of the full task is actually solved? In particular, several of the cited prior works solve specific problems like constants that are ignored here.\n\nThe evaluation is mostly an ablation studies of the proposed approach by removing edges from the final idea. \nBesides this, the paper also introduces a new dataset for showcasing the technique and does not report sizes and running times, essentially not answering basic questions like what is the trade-off between the different techniques. Comparison to actual prior works on similar tasks is also lacking (some TODO is left in the paper), but there is the claim that existing neural techniques such as seq2seq perform \"substantially worse\". I guess the authors have extra experiments not included for lack of space or that the evaluation was not ready at submission time.\n", "Thank you again for your valuable and detailed feedback. We will add the content of the additional answers we've given in the comments here to the next revision of the paper. Concretely, we will provide more details on (1) the selection of samples in our dataset as well as how we infer the grammar (this will clarify the issue of method calls as well), (2) the exact meaning of lastUse in Alg. 2, with a note on the relationship to Allamanis et al. 2018 (3) the relation to R3NN. 
Are there any other open questions you had that we overlooked?\n\nFinally, as many of the points you raised in your initial review have been clarified / resolved, would you consider raising your rating for the updated version of our paper?", "lastToken, lastUse, lastSibling and parent return unique nodes. While you are right that lastUse could be understood to return all locations in which a variable may have been used in an execution (which would indeed require several edges), we mean the lexically last occurrence of the node, which is uniquely determined. This is a simplification of the approach of Allamanis et al. (2018), though we are not stating this explicitly in our submission. We will clarify this in the next revision.\n\n[The actual source code used to compute is posted anonymously on http://paste.debian.net/hidden/7f6ba717/ now]\n\n\n\"inheritedAttr\" returns the node corresponding to the inherited attribute of a node, e.g., for node 10 in step 8 of Fig. 2, inheritedAttr(10) would return 0. We will clarify this in the next revision as well.\n\n\nThank you for your very detailed questions, they do help as us a lot to identify parts of the paper that are insufficiently precise.", "On choosing expressions:\n\nWe are greedily picking the largest allowed expression from the ASTs that we consider. So for example, from \"if ((boolVar || x > y) && UserDefinedFoo(x + y + z - 1))\", we select \"boolVar || x > y\" and \"x + y + z - 1\" and no other subexpressions.\n\n\nOn vocabulary size:\nIn the graph-expansion setting, there is no classical decoder vocabulary. There is a grammar that we infer from the expressions observed in the training data. This yields rules such as \"|Expr| -> ! |Expr|\", \"|Expr| -> |Expr| + |Expr|\", \"|Expr| -> |Expr|.Equals(|Expr|\", where |Expr| is a non-terminal. That inferred grammar has 222 expressions in total for our dataset, and includes the built-in functions applicable to the datatypes we support.\n\nSome non-terminals are treated specially, as discussed in the paper. Concretely, |Variable| is expanded using pickVariable from Eq. 4, and |${Type}Literal| is expanded using pickLiteral from Eq. 5. The vocabularies used in pickLiteral have size 50, i.e., we pick from the 50 most common integer/character/string literals observed in the training data.\nThe copying part of the pickLiteral has access to all tokens in the context that are not language keywords (\"for\", \"public\", etc). Anecdotally, we can report that this makes no significant difference -- this masking of keywords/nonterminal nodes in the context was disabled by a bug for some experimental runs without negative effects.\n\n\nOn generating functions:\nSee above - the limited number of functions have dedicated grammar rules, i.e., \"|Expr|.Equals(|Expr|)\" is treated analogous to generating \"|Expr| + |Expr|\".\n\n\nOn determining \"user-definedness\" of methods:\nAs we only support methods, there is a straightforward check. When we observe \"var.Method(${args})\", we check if \"var\" is of an allowed type; this implies that Method is implemented in the type and not by the user. If the arguments ${args} are also in our fragment of the language, we include the full expression in the dataset; the inferrence of grammar rules from the observed ASTs then yields a rule |Expr| -> |Expr|.Method(...)\".\n\nThis is actually not completely correct, as C# has an extension mechanism by which methods can be added to existing type. 
However we found this to be seldomly used on the types we are restricting ourselves to, though this leads to a handful of user-defined extension methods occuring in our grammar (e.g., there is \"|Expr| -> |Expr|.VirtualPathToDbPath()\")", "- choosing expressions:\n\nI think part of the question is answered to other reviewers about size of expressions. But for x > y there are 3 expressions that match the description. x, y and x < y. I guess you only take x<y. What is the vocabulary size?\n\n- functions and constants:\n\nAre functions generated with pickLiteral ? There is no special rule for choosing a built-in function in the generation process, yet these functions are in the dataset. Is there a separate vocabulary for functions and other literals? When copying from the context, do you only include functions or all literals? When picking the dataset, how do you decide if a function is user-defined?\n\nThanks for improving the experiments.", "- confusing functions $F(a, v)$ and edges $(u, label, v)$, which are not the same thing\n\nThank you for the clarification. The problem with $F(a, v)$ being a function is that it is not clear if these functions return a single node (as not all edges have out-degree 1). For example $lastUse(a, v)$ should probably return a set of nodes, because of loops and ifs. However, the new exposition is much better for this.\n\nWhat does inheritedAttr return?\n\n- R3NNs\n\nThank you for the clarification. Indeed the proposed idea in this paper is much nicer than R3NN.", "[Second part of reply, as we were over 5000 chars]\n\n> In terms of evaluation, the task seems to be on a different set of \n> expressions than the one explained in the exposition. How many \n> expressions where there in the evaluation programs and how many were \n> chosen to evaluate on and based on what criteria.\n\nThis is discussed in the first paragraph of Sect. 5:\n- \"all expressions of the fragment that we are considering (i.e.,\n restricted to numeric, Boolean and string types, or arrays of such\n values; and not using any user-defined functions)\"\n- \"343974 samples overall [...] ~100k samples generated from 114 projects\n into a 'test-only' [...] remaining data we split into\n training-validation-test sets (60-20-20)\"\n\nWe are not sure what additional information you are asking for here. Could you please elaborate?\n\n> It seems from the exposition that expressions with field accessed and \n> function calls are not possible to be generated, but then some \n> completions show method calls.\n\nWe exclude _user-defined_ functions but allow the built-in functions (and fields) of the considered data types, which primarily include string manipulation/tests (\"Substring\", \"IndexOf\", etc.) and generic functions such as \"Equals\" or the \"Length\" field of arrays. These built-in function calls are added as new productions in the underlying C# expression grammar.\n\n> In particular, several of the cited prior works solve specific \n> problems like constants that are ignored here.\n\nWe do handle constants (i) by generation from a vocabulary and (ii) by copying from context (cf. \"Choosing Productions, Variables & Literals\" and equation (5)); what other problem do you have in mind here?\n\n> The evaluation is mostly an ablation studies of the proposed approach \n> by removing edges from the final idea. 
Besides this, the paper also \n> introduces a new dataset for showcasing the technique and does not \n> report sizes and running times, essentially not answering basic \n> questions like what is the trade-off between the different techniques.\n\nWe will update the paper to include additional statistics about the experiments (for example, how many epochs were needed to train to convergence, how long an epoch takes on our dataset). Are there any specific statistics that you are interested in besides runtime?\n\n> Comparison to actual prior works on similar tasks is also lacking \n> (some TODO is left in the paper), but there is the claim that existing \n> neural techniques such as seq2seq perform \"substantially worse\".\n\nA seq2seq baseline achieves 21.8% accuracy (28.1% accuracy in the 5 most probable results returned by beam search) on the test dataset. The perplexity is very high 87.5, primarily driven by uncertainty about generating variables in generated expressions. On the test-only dataset, these are 10.8% accuracy (16.8% @5) and perplexity 130.5.\nWe have also updated the paper with the PHOG results.", "Thanks for the thorough review. We will try to improve the writing to make the parts of the paper you found hard to follow easier to read.\n\n> The work builds on prior research by Allamanis et al. 2018b that \n> performs such completions of single variables by picking from the \n> variables in the scopes. The difference here is that portions of parse \n> trees are predicted as opposed to a single variables, where the \n> algorithm from the prior research is used to predict single variables.\n\nWe want to point out that this is not quite precise -- the model from Allamanis et al. 2018b applies a much more complex analysis to identify the correct variable, introducing speculative data flow edges (i.e., \"how would the graph look like if a certain variable were used in this location\"). Our method is much more simple, and is more akin to using a pointer network to select a variable available in scope.\n\n> Writing-wise the paper is hard to read on the technical part with many \n> unclear details and this portion needs a good amount of extra \n> explanations. The Epsilon set includes triples which are not described \n> and need understanding equation (2). The first element of this triple \n> is an edge label <edge>($a$, $v$) where $a$ is an AST and $v$ is a \n> node.\n\nYou seem to be confusing functions $F(a, v)$ and edges $(u, label, v)$, which are not the same thing and should not be used interchangeably. A function like “parent(a, v)” returns a node from the partial AST $a$ (in this case, the parent node of $v$). An edge is a triple of “(node, label, node)”, explicitly defined in the paragraph right above Eq. (2). The edge labels and function names are (generally) not shared.\n\nWe apologize for this confusion. We have updated the paper with a new Notation paragraph that formally defines the constituents of edges triples and functions.\n\n> Thus, edges of the graph end up between entire ASTs and nodes?\n\nNo, edges are always between nodes. The functions like parent(a, v) return a /node/ from the partial AST $a$ (e.g. in this case, the parent node of $v$), and should not be confused with edge types.\n\n> Furthermore, the text has many functions without signatures and they \n> seem to be used before they are defined (e.g. 
getRepresentation).\n\nWe indeed stripped the text of explicit signatures for space reasons (as we felt they were implicitly defined anyway), but we remedied this somewhat in the new revision. We have added a Notation paragraph to explain the functions used, which we hope is sufficient. If you feel more context is needed, we can also include explicit signatures everywhere.\n\n> Technically, the approach also seems very similar to N3NN by Parisotto \n> et al, ICLR 2017. There should be more elaboration on what is new \n> here.\n\n[We assume this was a typo, and you refer to R3NNs] The core difference is that R3NNs use only the tree structure. While their up-then-down recursion scheme allows information sharing between different sibling subtrees in principle, no explicit domain knowledge is integrated to directly connect relevant parts of the tree. Our core contribution is to show how to integrate richer domain knowledge directly into the model.\n\nUsing an R3NN in a generative procedure also implies a quadratic computational cost, as each partial tree is traversed twice at each expansion step (summing up to roughly \\sum_{1 \\leq i \\leq V} 2i = V^2 + V), whereas our sequential graph propagation requires only a linear pass over all nodes in our graph, where each node in the expansion tree is visited at most twice (once for inherited and once for synthesized attributes).\n\n[On a side note, the authors of the R3NN paper have communicated to us privately that training the model was extremely hard, requiring careful tuning of hyperparameters to ensure convergence to a reasonable state.\nIn contrast, our model required almost no hyperparameter tuning to get good results, and we only did a cursory exploration to create the experiments in the paper]", "Thank you for the update. Indeed, the explanations and the notation are much better now.", "Thank you for your kind review!\n\n> The authors have a qualitative evaluation section describing the\n> differences in errors made by various methods. Making this more\n> quantitative by categorizing the errors and computing their frequency\n> would be quite interesting.\n\nWe thought about this as well, but we found it hard to automatically categorize errors beyond considering syntax, type and non-typing semantic errors. We have not reported the numbers for syntax errors in this paper, as all models produce syntactically valid expressions in over 99% of the cases. If you have ideas for metrics that are effectively computable, we are happy to provide additional experimental data. \n", "We thank you for your careful review and kind comments. We hope that we can further improve our submission with your feedback.\n\n> 1, I think it would be great to provide more statistics of the\n> proposed dataset, e.g., the average number of tokens, the average size\n> of ASTs. \n\nAs a reminder, there are 344k samples in the dataset overall. There are on average 4.32 (stddev 3.80) tokens per expression to generate [2 tokens: 88k, 3 tokens: 116k, 4 tokens: 38k, 5 tokens: 32k, 6 tokens: 21k, 7 tokens or more: 49k]. The generated trees on average have 3.69 (stddev 3.06) production steps. The dataset is clearly dominated by simple expressions such as “x > y” and “x[y]” (we have filtered out single-variable expressions), but longer expressions are often included as well (cf. Sect. 2 for more details on the selection process). 
We can also report that we have successfully extended the model to generate whole blocks of statements in a different research project.\n\n> 2, Given the dynamic nature of the graph generation process, I am\n> curious about the efficiency of the proposed method. It would be great\n> to provide some run time information. Also, since recurrent networks\n> are heavily used throughout the model, I wonder how difficult the\n> training process is.\n\nRegarding performance: Training is relatively efficient, as we know the target expansion graph and can thus compute the representations of all nodes in the expansion graph in one go. While that computation is relatively easy to parallelize, its length is given by the longest path in the target expansion graph. We currently cap this at 50 during training (which excludes only a few examples in our dataset). Combined with the computationally relatively expensive GGNN-based encoder, training on a K80 processes around 25 samples/s, compared to about 60 samples/s just for the encoder. \n\nAs we needed to implement beam search at test time, we have decided to entirely forego batching (and, indeed, GPU usage) at test time, and have essentially implemented Alg. 1 from the paper directly in Python, pruning back the set of beams after each expansion step. For this, “getRepresentation” is a lazy implementation of Eq. (2), computing node representations by message passing “on demand” (an illustrative sketch of this idea appears after this record’s reviews). We made no effort to optimize this implementation, instead aiming for simplicity to avoid bugs. An implementation in a dynamic computation graph framework such as TF Eager or PyTorch should be able to significantly outperform our code.\n\nWe will report precise runtime statistics in the supplementary material in our next revision. We also plan to release our implementation.\n\n> 3, It would be great to also compare the log likelihood on the test\n> set.\n\nThe perplexity shown in Table 1 is directly proportional to the log likelihood on the test set, albeit normalized per token. Could you please elaborate on what you had in mind and how this differs from the results in Table 1?\n\n> 4, It is unclear from the paper whether the authors use a pre-trained\n> GGNN as encoder or train the encoder end-to-end with the decoder from\n> scratch.\n\nThe full network is trained end-to-end with no pretraining. We will clarify this in the text.\n\n> 5, It would be great to improve figure 2 as it is not easy to read.\n> Maybe draw another graph to illustrate the temporal evolution of AST?\n\nWe’re sorry that this figure is not legible. Given the suggested page limit we wanted to make this explanation as concise as possible. In the new version, we redrew this figure as a sequence of multiple minifigures that show the evolution of the propagation.\n(This has extended the content length beyond the recommended 8 pages, but we agree with you that clarity of this illustration is more important.)\n", "We have updated our submission taking some of the reviewers' feedback into account and hope that this improves its clarity. Primarily, we have made the following changes:\n- Experiments: We have included results for the PHOG and seq2seq baselines in the paper.\n- Notation: We have slightly improved the notation in Alg. 2 and Eq. (2) and added a paragraph giving an overview of the notation used in Sect. 3.\n- Visualization of tree expansion: We have replaced Fig. 
2 by a step-by-step version that should be easier to follow for the reader.\n\nIn the next revision, we plan to reflect the remaining feedback and make the following changes:\n- Statistics about the dataset and (runtime) performance of decoders.\n- Experiments with a graph2seq baseline model.\n\n", "In this paper, the authors propose a conditional generative model which predicts the missing expression given the surrounding code snippet. The authors represent programs as graphs and use some off-the-shelf encoder to obtain representations for all nodes. Inspired by attribute grammars, the authors augment every node in the AST with two new nodes which contain inherited and synthesized information. Based on GGNN, a grammar-driven decoder is further proposed to sequentially generate the AST and the corresponding program. The authors also propose a large dataset which is built from open-source projects. Experimental results on this dataset show that the proposed method achieves better predictive performance compared to several recent works. \n\nStrength:\n\n1, The problem this paper tries to tackle, i.e., building generative models of code, is very challenging and of great significance. \n\n2, The overall model is a novel and successful attempt to incorporate the structure information of the program into neural networks. I think it will be inspiring for other machine learning based programming applications.\n\n3, The results are very promising and impressive, especially given the large size of the proposed dataset. For example, the top 5 accuracy of predicting the correct expression on unseen projects is 57%.\n\nWeakness:\n\n1, I think it would be great to provide more statistics of the proposed dataset, e.g., the average number of tokens, the average size of ASTs. \n\n2, Given the dynamic nature of the graph generation process, I am curious about the efficiency of the proposed method. It would be great to provide some run time information. Also, since recurrent networks are heavily used throughout the model, I wonder how difficult the training process is. \n\n3, It would be great to also compare the log likelihood on the test set.\n\n4, It is unclear from the paper whether the authors use a pre-trained GGNN as encoder or train the encoder end-to-end with the decoder from scratch.\n\n5, It would be great to improve figure 2 as it is not easy to read. Maybe draw another graph to illustrate the temporal evolution of AST?\n\nOverall, I think this paper has made great progress towards neural modelling of programs and recommend it to be accepted for ICLR.\n", "The paper introduces a 'code generation as hole completion' task and associated dataset, ExprGen. The authors proposed a novel extension of AST code generation which uses what they call Neural Attribute Grammars. They show the proposed method does well on this task, compared to ablations of their model (which are similar to previous AST approaches).\n\nThe task and dataset are interesting, and the comparison of the proposed method to baselines seems thorough. \n\n*Details to Improve*\nThe authors have a qualitative evaluation section describing the differences in errors made by various methods. Making this more quantitative by categorizing the errors and computing their frequency would be quite interesting.", "The authors of the non-neural PHOG model have now run additional experiments on the dataset used in our paper. 
Note that their model is a language model and thus only takes code \"left of\" the hole to fill into account, and that their framework is fairly generic and does not have special modeling of which variables are in scope etc. Hence, the results of their model are only a lower bound of what their model could achieve on this task with suitable extensions; e.g., it would be possible to extend their formalism to also take code after the hole to fill into account.\n\nBearing these limitations in mind, their results (i.e., their row in Table 1) are as follows:\nOn the \"Test\" dataset:\n Acc@1: 34.8%\n Acc@5: 42.9%\nOn the \"Test-only\" dataset:\n Acc@1: 28.0%\n Acc@5: 37.3%" ]
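The rebuttal in this record describes "getRepresentation" as a lazy implementation of Eq. (2) that computes node representations by message passing on demand during beam search. Since Eq. (2) itself is not reproduced here, the following Python sketch only illustrates the general idea under stated assumptions: a node's vector is derived from its predecessors' vectors, computed recursively on first request and memoized so each node is visited at most once. The toy graph, the mean-aggregation rule, and all names are hypothetical, not the authors' actual implementation.

import numpy as np

def make_lazy_representation(edges, init):
    # On-demand node representations: a node's vector is its initial
    # embedding plus the mean of its predecessors' vectors (an assumed
    # aggregation rule), computed only when first requested, then cached.
    cache = {}

    def get_representation(v):
        if v in cache:
            return cache[v]
        cache[v] = init[v]  # provisional value; also breaks cycles
        preds = [u for (u, w) in edges if w == v]
        if preds:
            msgs = [get_representation(u) for u in preds]
            cache[v] = init[v] + np.mean(msgs, axis=0)
        return cache[v]

    return get_representation

# Hypothetical 3-node expansion graph 0 -> 1 -> 2; only queried nodes are computed.
init = {v: np.full(4, float(v)) for v in range(3)}
rep = make_lazy_representation([(0, 1), (1, 2)], init)
print(rep(2))  # prints [3. 3. 3. 3.]

The cache is the point of the laziness: under this scheme, only nodes actually reached by surviving beams would ever be computed, and none more than once.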
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, -1 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "S1gbGzT5aX", "HJl5-iMxAX", "iclr_2019_Bke4KsA5FX", "SyxeYOdu3Q", "H1eHUfG36m", "rkxpzWG36m", "HJeYe4p9p7", "Skly0QT9pm", "SyxeYOdu3Q", "SyxeYOdu3Q", "rygKSlpcp7", "Hyl3dbvq2X", "rkeutuqqnm", "iclr_2019_Bke4KsA5FX", "iclr_2019_Bke4KsA5FX", "iclr_2019_Bke4KsA5FX", "iclr_2019_Bke4KsA5FX" ]
iclr_2019_BkeStsCcKQ
Critical Learning Periods in Deep Networks
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of "Information Plasticity". Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
accepted-poster-papers
Irrespective of their taste for comparisons of neural networks to biological organisms, all reviewers agree that the empirical observations in this paper are quite interesting and well presented. While some reviewers note that the paper is not making theoretical contributions, the empirical results in themselves are intriguing enough to be of interest to ICLR audiences.
train
[ "rJxHyW8rRX", "rJlxyjwX07", "Bkgba5DmCQ", "H1ebz8D7Cm", "ryguIWkf6Q", "SJg8cOOka7", "HJgH4ifFi7", "SJlyF5rTnQ", "BJeHWCM6hQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We are thankful to the reviewer for their positive assessment of our paper. In fact, we share the same sentiment, as we articulate in the Conclusion, that one should resist the temptation to build too much on structural correspondences between such diverse systems. By showing these data we mostly wanted to emphasize how our reasoning is inspired by a reflection on the neurobiology of visual systems, and how such paradigms could be employed to better understand DNNs, since both systems share similar information processing goals.", ">> In neuroscience the opening of the critical period window if thought to be mechanistically mediated by the maturation of inhibition. Is that view compatible with the results presented in this paper? This is sort of complementary to the FIM analysis, since is mostly about net average input to a neuron, i.e. about the information contained in the activations, rather than the weights.\n\nWe thank the reviewer for the insightful comment. Opening and closing of critical periods in neuronal networks have indeed been shown to be regulated by inhibitory (mostly GABAergic, but not exclusively) neuronal populations, which provide the critical balance between competing pathways (such as in ocular dominance) shaping the network in its mature form (Hensch, Curr. Top. Dev. Biol., 2005). While the CNN architectures we have tested in our study do not have direct inhibitory connections between elements of the CNN, we can speculate that \"diffuse\" inhibitory effects could emerge naturally during network optimization in order to make the inference more robust, leading to the effective \"pruning\" of certain connections, as mirrored by the FIM trace (Figure 4). It should also be noted here the connection existing between the decrease of the information in the weights (e.g., by \"pruning\") and the loss of information in the corresponding activations.\n\nIn addition we have the fact that only datasets which provide robust \"stimulation\" of the CNNs exhibit critical-period-like behavior, while being fed noise as the input is a deficit that the network promptly recovers from (Figure 2, left). An analogous phenomenon has been demonstrated in kittens, with dark rearing causing prolonged plasticity and delayed critical period inception (and closure), but, remarkably, this behavioral evidence has been tied to the decreased inhibitory GABAergic tone in the relevant circuits (e.g. Chen et al., Mol. Brain Res. 2001), which, by not providing the necessary competitive balance, lengthens the plastic window of visual development.", "We are thankful to the reviewer for the feedback and the many insightful suggestions. We have added the suggested experiments to the revised version of the paper (in particular Figure 3, Figure 8), and also discuss some of the points more in detail below.\n\n>> Presumably, early training on blurred images prevents the initial conv filters from learning to discriminate high-frequency components (first of all, is this true?). The crucial phenomenon pointed out by the authors is that, even after removing the blur, the lower convolutions aren't able to recover and learn the high-frequency components.\n\nWe share the same intuition: We added to Figure 8 in the Appendix a visualization of the first-layer filters for networks with and without a deficit, and with the deficit removed after the end of the critical period. 
Figure 8 qualitatively shows that if high-resolution stimuli are not available before the critical period, the network does not manage to extract high-frequency features in the first layer. Unfortunately, the filters of the architecture we use are small (3x3), making the analysis more difficult: We are considering alternate experiments to test this hypothesis indirectly using responses to sinusoidal gratings to obtain clearer results.\n\n>> In fact, the high FIM trace in the latest layers could be due to the fact that they're trying to compensate for the lack of appropriate low-level feature extractors by composing low-frequency filters so as to \"build\" high-frequency ones. If this makes sense, one would assume that freezing the last layers and only maintaining plasticity in the lower ones could be a way of \"reopening\" the critical period. Is that indeed the case?\n\nThis is a very interesting hypothesis: We tried to test it as suggested, freezing layers 3-6 of AllCNN while leaving layers 0-3 and the final classifier free to change. We observe that upon deficit removal the network error increases (since the data distribution changes), only to revert to the performance with deficit soon after. This may be because freezing the upper layers makes new information extracted by the lower layers invisible to the classifier, and therefore does not promote learning of new features. Rather, the lower layers may adapt to blur their inputs as before, to fit the responses expected by the upper layers. However, we agree that finding ways to reopen critical periods (either by augmenting the data with more \"stimulating\" experiences, as is sometimes done in neuroscience, see e.g. Knudsen, J. Cogn. Neurosci., 2004, or by changing the training procedure) is an intriguing question.\n\n>> The authors show that their main results are robust to changes in the learning rate annealing schedule. However, it is not clear how changing the optimizer might affect the presence of the critical period. What would happen for instance using Adam or another optimization procedure that relies on the normalization of the gradient?\n\nWe have conducted the suggested experiment and show in Figure 3 (Bottom Right) the result of training a ResNet with Adam, following the same experimental setup as Figure 1, and confirming that Adam also follows a similar trend.\n\n>> On a related note, the authors point out the importance of forgetting, in particular as the main mechanism behind the second learning phase. They also point out that the deficit in learning the task after sensory deprivation is accompanied by large FIM trace in the last layers. What would happen in the presence of a standard regularizer like weight decay?\n\nWe fully agree on the importance of weight decay for critical periods, and in the revised version of the paper we have added new experiments that corroborate it (Figure 3, bottom left). We observe that training in the same setup as Figure 1, but without weight decay, leads to a sharper and noticeably shorter critical period. Gradually increasing the value of weight decay leads to more prolonged critical periods, up to the point where the network eventually stops training properly altogether.", "We thank the reviewer for their feedback and suggestions. We have updated the paper accordingly, and address some of the points in more detail below:\n\n>> I was disappointed to see Tishby's result (2017) only remotely discussed; an earlier work than Tishby's is by Montavon et al., 2011 in JMLR. 
In this work, too, properties of successive compression and dimensionality reduction are discussed, perhaps the starting point of quantitative analysis of various DNNs. \n\nWe preferred not to elaborate at length on the connections with Shwartz-Ziv and Tishby's results, since the relationship between the FIM of the weights (which we use in our paper) and the Shannon information of the activations, used by Tishby, is non-trivial and has already been discussed in more detail by other authors. However, given how important this aspect is and since it has also been discussed by the other reviewers, we have included in the revised version an extended discussion on it, which hopefully will also make the manuscript more self-contained.\n\nConcerning Montavon's paper, that is indeed an oversight on our part; we have added the paper to the revised discussion, and thank the reviewer for pointing it out.\n\n>> To this point, the paper presents no theoretical contribution, rather empirical findings only, that may or may not be ubiquitous in DNN learning systems. The latter point may be worthwhile to discuss and analyse. \nOverall, the paper is interesting with its nice empirical studies but stays somewhat superficial. \n\nThe empirical findings are observed across the most commonly used architectures and optimization algorithms, as we also confirm with the new experiments in Fig. 3. But it is true that it will take much more experimentation to assess whether they are truly ubiquitous and how they may affect different kinds of data. On the theoretical side, the analysis of the transient, irreversible properties of the learning process using the Fisher information in the weights is not only novel, but also different from other theoretical analyses, such as the study of flat minima, which focuses on the asymptotic behavior of the optimization (see the last paragraph of the Discussion). In particular, our analysis suggests that crossing bottlenecks in the loss landscape, as opposed to convergence to critical points, may play a fundamental role in characterizing the final behavior of the network. This aspect has, until now, been largely ignored and we are hopeful it may be fruitfully integrated in the current understanding of deep networks using, for example, tools from non-equilibrium dynamics, where such studies are common.\n\nAlthough we agree on the need for an analytical model, we tried to avoid the pitfalls of prematurely settling on a particular abstraction of the problem in order to paint a clearer picture, both through empirical experiments and by establishing connections with the most recent theories in deep learning, and yet providing a novel approach where the Fisher Information becomes one of the central quantities to consider.\n\n>> To learn more, a simpler toy model may be worthwhile to study. \n\nWe fully agree. In this paper, we focused on testing our hypotheses on current state-of-the-art models and relatively complex datasets, in order to understand what the key aspects are that need to be captured by any simplified model. Now that this is established, and shown to be of practical relevance, given the widespread practice of fine-tuning, we can and will focus on simpler models that perhaps are also tractable analytically.\n\n", "Let's be frank: I have never been a fan of comparing real brains with back-prop trained multilayer neural networks that have little to do with real neurons. For instance, I am unmoved when Figure 1 compares multilayer network simulations with experimental data on actual kittens. 
More precisely, I see such comparisons as cheap shots.\n\nHowever, after forgetting about the kitten, I can see lots of good things in this paper. The artificial neural network experiments designed by the authors show interesting phenomena in a manner that is amenable to replication. The experiments about the varied effects of different kinds of deficits are particularly interesting and could inspire other researchers in creating mathematical models for these striking differences. The authors also correlate these effects with the two phases they observe in the variations of the trace of the Fisher information matrix. This is reminiscent of Tishby's bottleneck view on neural networks, but different in interesting ways. To start with, the trace of the Fisher information matrix is much easier to estimate than Tishby's mutual information between patterns, labels, and layer activations. It also might represent something of a different nature, in ways that I do not understand at this point.\n\nIn addition, the paper is very well written, the comments are well thought out, and the experiments seem easy to replicate.\n\nGiven all these qualities, I'll gladly take the kitten as well.\n", "The authors analyze the learning dynamics in deep neural networks and identify an intriguing phenomenon that reflects what in biological learning is known as a critical period: a relatively short time window early in post-natal development where organisms become particularly sensitive to particular changes in experience. The importance of critical periods in biology is due to the fact that specific types of perturbations to the input statistics can cause deficits in performance which can be permanent in the sense that later training cannot rescue them.\n\nThe authors did a great job illustrating the parallelism between critical periods in biological neural systems and the analogous phenomenon in artificial deep neural networks. Essentially, they showed that blurring the input samples of the cifar10 dataset during the initial phase of training had an effect that is very reminiscent of the result of sensory deprivation during the critical periods of visual learning in mammals, resulting in long-term impairments in visual object recognition that persist even if blurring is removed later in training. The authors go as far as characterizing the effects of the length of the \"sensory deprivation\" window and its onset during training, and comparing the results to classic neuroscience monocular deprivation experiments in kittens, pointing out very striking phenomenological similarities.\n\nNext, the authors establish a connection between critical periods in deep neural networks and the amount of information that the weights of the trained model contain about the task by looking at the Fisher Information Matrix (FIM). With this method they obtain a host of interesting insights. One insight is that there are two phases in learning: an initial one where the trace of the FIM grows together with a rapid increase in classification accuracy, and a second one where accuracy keeps slightly increasing, but the Fisher Information trace globally decreases. 
They then go into detail and look at how this quantity evolves within individual layers of the deep learning architecture, revealing that the deficit caused by the blurring perturbation during the early epochs of training is accompanied by larger FIM trace in the last layers of the architecture at the expense of the intermediate layers.\nBesides the fact that deep neural networks exhibit critical periods, another important result of this work is the demonstration that pretraining, if done inappropriately, can actually be deleterious to the performance of the network.\n\nThis paper is insightful and interesting. The conceptual and experimental part of the paper is very clearly presented, and the methodology is very appropriate to tease apart some of the mechanisms underlying the basic phenomenological observations. Here are some detailed questions meant to elucidate some points that are still unclear.\n\n- Presumably, early training on blurred images prevents the initial conv filters from learning to discriminate high-frequency components (first of all, is this true?). The crucial phenomenon pointed out by the authors is that, even after removing the blur, the lower convolutions aren't able to recover and learn the high-frequency components. In fact, the high FIM trace in the latest layers could be due to the fact that they're trying to compensate for the lack of appropriate low-level feature extractors by composing low-frequency filters so as to \"build\" high-frequency ones. If this makes sense, one would assume that freezing the last layers and only maintaining plasticity in the lower ones could be a way of \"reopening\" the critical period. Is that indeed the case?\n- The authors show that their main results are robust to changes in the learning rate annealing schedule. However, it is not clear how changing the optimizer might affect the presence of the critical period. What would happen for instance using Adam or another optimization procedure that relies on the normalization of the gradient?\n- On a related note, the authors point out the importance of forgetting, in particular as the main mechanism behind the second learning phase. They also point out that the deficit in learning the task after sensory deprivation is accompanied by large FIM trace in the last layers. What would happen in the presence of a standard regularizer like weight decay? Assuming that large FIM trace in the last layers is correlated with large weights, that might mitigate the negative effect of early sensory deprivation.\n- In neuroscience the opening of the critical period window is thought to be mechanistically mediated by the maturation of inhibition. Is that view compatible with the results presented in this paper? This is sort of complementary to the FIM analysis, since it is mostly about net average input to a neuron, i.e. about the information contained in the activations, rather than the weights.", "The paper is interesting and I like it. It draws parallels from biological learning and the well-known critical learning phases in biological systems to artificial neural network learning. \nA series of empirical simulation experiments that all aim to disturb the learning process of the DNN and to artificially create criticality are presented. They provide food for thought; in order to introduce some quantitative results, the authors use the well-known Fisher Information to measure the changes. 
So far so good and interesting.\nI was disappointed to see Tishby's result (2017) only remotely discussed; an earlier work than Tishby's is by Montavon et al., 2011 in JMLR. In this work, too, properties of successive compression and dimensionality reduction are discussed, perhaps the starting point of quantitative analysis of various DNNs. \n\nTo this point, the paper presents no theoretical contribution, rather empirical findings only, that may or may not be ubiquitous in DNN learning systems. The latter point may be worthwhile to discuss and analyse. \nOverall, the paper is interesting with its nice empirical studies but stays somewhat superficial. To learn more, a simpler toy model may be worthwhile to study. \n\n", "The phenomena observed in our study of the Fisher Information of the weights, and especially their connections with irreversible changes in the connectivity of Deep Networks during the early phases of training, cannot be derived from the results of Shwartz-Ziv and Tishby concerning the Shannon mutual information of the activations. In fact, to the best of our knowledge, we are the first to show the relationship between changes of Fisher Information and irreversible effects of optimizing a Deep Network. It is however true that some of the results by Shwartz-Ziv and Tishby are related to, in fact implied by, our observations, which therefore provide further and independent corroboration of their claims. \n\nIn particular, Shwartz-Ziv & Tishby report changes in the Shannon mutual information of the activations (not of the weights) during training. These are, however, not observed to be associated with any kind of irreversible changes. In fact, the existence of critical (irreversible) phases of learning has not been observed, let alone studied, by https://arxiv.org/abs/1703.00810 or by anyone else to our knowledge. Furthermore, note that changes in mutual information of the activations are always observed during training, whereas critical learning periods are present only for some specific types of deficits. Therefore, Shwartz-Ziv & Tishby's framework and results on the different phases of information in the activations during training cannot explain the existence and the observed phenomenology of critical periods.\n\nOn the other hand, thanks to the introduction of the Fisher Information of the weights, and by exploiting its relationship to the network connectivity, we can empirically characterize critical-period-inducing deficits as precisely those that severely alter the connectivity of the network, and suggest a theoretical explanation for these phenomena (see end of Section 3). To the best of our knowledge, we are the first to compute and track these changes of the Fisher information of the weights during training of a state-of-the-art, modern deep network, and in particular, nobody has shown plots like those in Figure 1, Figure 2, Figure 3, Figure 5.\n\nHowever, even if discussing different quantities, Shwartz-Ziv and Tishby's results are indeed related to the plots we show in Figure 4, as we also discuss in Section 4 (page 7, second paragraph). The non-trivial connection between the two can be derived from the bound on information introduced by Achille and Soatto (https://arxiv.org/abs/1706.01350, JMLR 2018): As we describe on Page 7, they show that reduction of the information in the weights implies information reduction in the activations (but not vice-versa). 
In this sense, our results can also serve to corroborate and expand, using an independent framework, the experimental evidence on the existence of multiple phases of learning shown by Shwartz-Ziv and Tishby.", "I wonder if there is any novelty in your experiment about Fisher Information. Many of the phenomena in your experiments have been studied in https://arxiv.org/abs/1703.00810 " ]
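The quantity debated in the exchange above, the trace of the Fisher Information of the weights, can be estimated without forming the full matrix: since F = E[grad_w log p(y|x,w) grad_w log p(y|x,w)^T], its trace equals the expected squared gradient norm, with labels y sampled from the model's own predictive distribution. The following PyTorch sketch shows this Monte Carlo estimate under those assumptions; the toy classifier and random inputs are placeholders, not the paper's networks or its exact estimator.

import torch
import torch.nn as nn

def fim_trace(model, inputs, n_samples=1):
    # Monte Carlo estimate of tr(F) for a classifier:
    # tr(F) = E_x E_{y ~ p(y|x,w)} || grad_w log p(y|x,w) ||^2.
    total = 0.0
    for x in inputs:
        dist = torch.distributions.Categorical(logits=model(x.unsqueeze(0)))
        for _ in range(n_samples):
            y = dist.sample()                 # label drawn from the model itself
            loss = -dist.log_prob(y).sum()
            grads = torch.autograd.grad(loss, model.parameters(),
                                        retain_graph=True)
            total += sum(g.pow(2).sum().item() for g in grads)
    return total / (len(inputs) * n_samples)

# Toy usage on a hypothetical classifier and random inputs.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))
print(fim_trace(model, torch.randn(32, 8)))

Restricting the sum to a single layer's parameters and tracking it over training epochs would give the kind of layer-wise FIM-trace curves this discussion refers to.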
[ -1, -1, -1, -1, 9, 8, 6, -1, -1 ]
[ -1, -1, -1, -1, 4, 4, 5, -1, -1 ]
[ "ryguIWkf6Q", "Bkgba5DmCQ", "SJg8cOOka7", "HJgH4ifFi7", "iclr_2019_BkeStsCcKQ", "iclr_2019_BkeStsCcKQ", "iclr_2019_BkeStsCcKQ", "BJeHWCM6hQ", "iclr_2019_BkeStsCcKQ" ]
iclr_2019_BkeU5j0ctQ
CEM-RL: Combining evolutionary and gradient-based methods for policy search
Deep neuroevolution and deep reinforcement learning (deep RL) algorithms are two popular approaches to policy search. The former is widely applicable and rather stable, but suffers from low sample efficiency. By contrast, the latter is more sample efficient, but the most sample efficient variants are also rather unstable and highly sensitive to hyper-parameter setting. So far, these families of methods have mostly been compared as competing tools. However, an emerging approach consists in combining them so as to get the best of both worlds. Two previously existing combinations use either an ad hoc evolutionary algorithm or a goal exploration process together with the Deep Deterministic Policy Gradient (DDPG) algorithm, a sample efficient off-policy deep RL algorithm. In this paper, we propose a different combination scheme using the simple cross-entropy method (CEM) and Twin Delayed Deep Deterministic policy gradient (TD3), another off-policy deep RL algorithm which improves over DDPG. We evaluate the resulting method, CEM-RL, on a set of benchmarks classically used in deep RL. We show that CEM-RL benefits from several advantages over its competitors and offers a satisfactory trade-off between performance and sample efficiency.
accepted-poster-papers
This paper combines two different types of existing optimization methods, CEM/CMA-ES and DDPG/TD3, for policy optimization. The approach resembles ERL but demonstrates better performance on a variety of continuous control benchmarks. Although I feel the novelty of the paper is limited, the promising results provided may justify the acceptance of the paper.
test
[ "H1g8kU290X", "Ske_YvI527", "Ske3D7Jqh7", "SJeaaoUYAX", "rJgh-YoWAQ", "SyefCujZRm", "rkxM5djb0Q", "r1lJmuibAQ", "BJgACvi-CQ", "Syev33W527" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The rebuttal provided by the authors is convincing.", "The contributions of this paper are in the domain of policy search, where the authors combine evolutionary and gradient-based methods. Particularly, they propose a combination approach based on cross-entropy method (CEM) and TD3 as an alternative to existing combinations using either a standard evolutionary algorithm or a goal exploration process in tandem with the DDPG algorithm. Then, they show that CEM-RL has several advantages compared to its competitors and provides a satisfactory trade-off between performance and sample efficiency.\n\nThe authors evaluate the resulting algorithm, CEM-RL, using a set of benchmarks well established in deep RL, and they show that CEM-RL benefits from several advantages over its competitors and offers a satisfactory trade-off between performance and sample efficiency. It is a pity to see that the authors provide acronyms without explicitly explaining them such as DDPG and TD3, and this right from the abstract.\n\nThe parer is in general interesting, however the clarity of the paper is hindered by the existence of several typos, and the writing in certain passages can be improved. Example of typos include “an surrogate gradient”, “\"an hybrid algorithm”, “most fit individuals are used ” and so on… \n\nIn the related work the authors present the connection between their work and contribution to the state of the art in a detailed manner. Similarly, in section 3 the authors provide an extensive background allowing to understand their proposed method.\n\nIn equation 1, 2 the updates of \\mu_new and \\sigma_new uses \\lambda_i, however the authors provide common choices for \\lambda without any justification or references.\n\nThe proposed method is clearly explained and seems convincing. However the theoretical contribution is poor. And the experiment uses a very classical benchmark providing simulated data.\n\n1. In the experimental study, the authors present the value of their tuning parameters (learning rate, target rate, discount rate…) at the initialisation phase without any justifications. And the experiments are limited to simulated data obtained from MUJOCO physics engine - a very classical benchmark. \n2. Although the experiments are detailed and interesting they support poor theoretical developments and use a very classical benchmark\n", "The paper presents a combination of evolutionary search methods (CEM) and deep reinforcement learning methods (TD3). The CEM algorithm is used to learn a Diagional Gaussian distribution over the parametes of the policy. The population is sampled from the distribution. Half of the population is updated by the TD3 gradient before evaluating the samples. For filling the replay buffer of TD3, all state action samples from all members of the population are used. The algorithm is compared against the plane variants of CEM and TD3 as well as against the evoluationary RL (ERL) algorithm. Results are promising with a negative result on the swimmer_v2 task.\n\nThe paper is well written and easy to understand. While the presented ideas are well motivated and it is certainly a good idea to combine deep RL and evoluationary search, novelty of the approach is limited as the setup is quite similar to the ERL algorithm (which is still on archive and not published, but still...). See below for more comments:\n- While there seems to be a consistent improvement over TD3, this improvement is in some cases small (e,g. ants). 
\n- We are learning a value function for each of the first half of the population. However, the value function from the previous individual is used to initialize the learning of the current value function. Does this cause some issues, e.g., do we need to set the number of steps so high that the initialization does not matter so much any more? Or would it make more sense to reset the value function to some \"mean value function\" after every individual?\n- The importance mixing does not seem to provide a better performance and could therefore be shortened in the paper\n\n", "The revised version addresses the issues I raised. Thank you.\n\nI was already rating the paper positively, so my rating is unchanged.", "We thank the reviewer for raising useful points which helped us a lot in improving the paper.\n\nThe main point of the reviewer is that the novelty of our approach is limited with respect to the Evolutionary RL (ERL) algorithm, and that improvement is sometimes small. These remarks helped us realize that we had to better highlight the differences between our approach and ERL, both in terms of concepts and performance. We did so by replacing Figure 1, which was contrasting CEM-RL to CEM, with a figure directly contrasting CEM-RL to ERL. We also added Figure 6 which better highlights the properties of the algorithms and we performed several additional studies, described either in the main text or in appendices.\n\nBy the way, the ERL paper is now published at NIPS, but that was not yet the case when we submitted ours. We updated the corresponding reference.\n\nThe reviewer seems to consider that each actor in our CEM-RL algorithm comes with its own critic (the reviewer says value function), which would raise a value function initialization issue. Actually, this is not the case: there is a single TD3 critic over the whole process, and gradient steps are applied to all the selected actors from that single critic. This has been clarified in the text by insisting on the uniqueness of this critic.\n\nWe agree with the reviewer that the importance mixing did not provide the sample efficiency improvement we expected, and so far we can only provide putative explanations of why. Nevertheless, we believe this mechanism still has some potential and is currently overlooked by most deep neuroevolution researchers, so we decided to keep the importance mixing study in Appendix B rather than just removing it.
Results show that CEM-TD3 actually outperforms this multiple-actor TD3, thus the CEM part indeed brings a performance improvement.\n\nAbout replacing the ReLU non-linearity in DDPG and TD3 prior work with tanh, we spotted that we could get much better results on several environments with the latter. This explanation is now clearly mentioned in the paper, and motivates a future work direction which consists in using \"neural architecture search\" for RL problems, the performance of algorithms being highly dependent on such architecture details.\n\nFinally, to keep our paper shorter than the hard page limit for ICLR while addressing all the reviewers' points, we had to move several studies into appendices, starting with the importance mixing study.", "We thank the reviewer for his/her positive evaluation of our paper and for raising many very useful points which helped us get to a clearer picture of our contribution. A few of these points deserve discussion beyond the changes made in the paper.\n\nDue to a mistake on page 2, we got the reviewer confused into believing we are using importance sampling while we are using importance mixing instead. This has been fixed.\n\nThe reviewer mentions it may be possible to construct counter-examples where the gradient updates will prevent convergence. This is a very important point. There are many RL problems (see e.g. Continuous Mountain Car, Colas et al. at ICML 2018) where at some point the gradient computed by the critic is deceptive, i.e. it drives the policy parameters in a wrong direction. In that case, applying that gradient to CEM actors as we do in CEM-RL is counter-productive. But the fact that we only apply this gradient to half the population means that CEM-RL should nevertheless overcome this issue: the actors which did not receive a gradient step will be selected and the population will continue improving. However, admittedly, in this very specific context, CEM-RL is behaving as a CEM with only half a population, thus it is less efficient than the standard CEM. Besides, ERL resists the same issue even better than our approach: if the actor generated by DDPG does not perform better than the evolutionary population due to a deceptive gradient issue, then this actor is just ignored, and the evolutionary part behaves as usual, without any loss in performance. This deceptive gradient issue certainly explains why CEM is the best approach on Swimmer. Finally, it may also happen that the RL part does not bring benefit just because the current critic is wrong and provides an inadequate gradient, in a non-deceptive gradient case. All the above points have now been made much clearer in the new version of the paper; in particular, we added an appendix dedicated to the swimmer benchmark.\n\nThe reviewer also raises doubts about the claim that the method of Khadka & Tumer (2018) cannot be extended to use CEM. On second thought, the reviewer is absolutely right. As the reviewer says, in both this work and Khadka & Tumer, the RL updates lead to policies that may differ a lot from the search distribution, and there is no guarantee in this work that the TD3 updates result in policies close to the starting point.\nBut if the RL actor shows good enough performance, this does not prevent us from computing a new covariance matrix which includes it. 
The corresponding ellipsoid in the search space may be very large, leading to a widely spread next generation, but the process should tend to converge again towards a population of actors where evolutionary and RL actors are closer to each other.\n\nA consequence of this reconsideration is that one could definitely build an ERL algorithm where the evolutionary part is replaced by CEM. We corrected the paper according to this new insight. Unfortunately, we did not find enough time to implement and test this algorithm during the rebuttal stage, but we now mention this possibility as an interesting avenue for future work.\n\nDespite the very interesting points above, the reviewer is wrong when saying that the main distinction between our approach and the ERL approach is that only in ours the information flow is from ES to RL and vice-versa. Actually, in ERL, if the RL actor added to the population performs well, it will steer the whole evolutionary population in the right direction just by generating offspring, so RL and ES also benefit from each other. 
In particular, following the suggestion of Reviewer 2, we moved the presentation and experimental study of importance mixing to Appendix B, leaving more room for comparison to ERL.\n", "Gradient-free evolutionary search methods for Reinforcement Learning are typically very stable, but scale poorly with the number of parameters when optimizing highly-parametrized policies (e.g. neural networks). Meanwhile, gradient-based deep RL methods, such as DDPG, are often sample efficient, particularly in the off-policy setting when, unlike evolutionary search methods, they can continue to use previous experience to estimate values. However, these approaches can also be unstable.\n\nThis work combines the well-known CEM search with TD3 (an improved variant of DDPG). The key idea of this work is that in each generation of CEM, 1/2 the individuals are improved using TD3 (i.e. the RL gradient). This method is made more practical by using a replay buffer so experience from previous generations is used for the TD3 updates, and importance sampling is used to improve the efficiency of CEM.\n\nThis work shows, on some simple control tasks, that this method appears to result in much stronger performance compared with CEM, and small improvements over TD3 alone. It also typically out-performs ERL.\n\nIntuitively, it seems like it may be possible to construct counter-examples where the gradient updates will prevent convergence. Issues of convergence seem like they deserve some discussion here and potentially could be examined empirically (is CEM-TD3 converging in the swimmer?).\n\nThe justification that the method of Khadka & Tumer (2018) cannot be extended to use CEM, since the RL policies do not comply with the covariance matrix, is unclear to me. In Algorithm 1, step 20, the covariance matrix is updated after the RL step, so regardless of how the RL policies are generated, the search distribution at the next iteration includes them. In both this work and Khadka & Tumer, the RL updates lead to policies that differ from the search distribution (indeed that is the point), and there is no guarantee in this work that the TD3 updates result in policies close to the starting point. It seems like the more important distinction is that, in this approach, the information flows both from ES to RL and vice-versa, rather than just from RL to ES.\n\nOne view of this method would be that it is an ensemble method for learning the policy [e.g. similar to Osband et al., 2016 for DQN]. This could be discussed and a relevant control would be to keep a population (ensemble) of policies, but only update using RL while sharing experience across all actors. This would isolate the ensemble effect from the evolutionary search.\n\nMinor issues:\n\n- The ReLU non-linearity in DDPG and TD3 prior work is replaced with tanh. This change is noted, but it would be useful to make at least a brief (i.e. one sentence) comment on the motivation for this change.\n\n- The paper is over the hard page limit for ICLR so needs to be edited to reduce the length.\n\nOsband I, Blundell C, Pritzel A, Van Roy B. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems 2016 (pp. 4026-4034).
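To make the scheme debated in this record concrete, here is a compact sketch of a CEM-RL-style loop over flat policy parameter vectors: sample a population from a diagonal Gaussian, apply RL gradient steps to half of it (a stand-in for the TD3 updates that share a single critic and replay buffer), evaluate everyone, then refit mean and variance on the elites with log-decreasing weights lambda_i, one common CEM/CMA-ES choice (cf. Hansen's tutorial). The fitness function and rl_step below are hypothetical placeholders; this is an illustrative skeleton under those assumptions, not the authors' code.

import numpy as np

def cem_rl(fitness, rl_step, dim, pop=10, n_elite=5, iters=100, seed=0):
    # Skeleton of a CEM-RL-style loop over flat policy parameter vectors.
    rng = np.random.default_rng(seed)
    mu, sigma2 = np.zeros(dim), np.ones(dim)
    lam = np.log(n_elite + 1) - np.log(np.arange(1, n_elite + 1))
    lam /= lam.sum()                           # log-decreasing elite weights
    for _ in range(iters):
        thetas = mu + np.sqrt(sigma2) * rng.standard_normal((pop, dim))
        for i in range(pop // 2):              # half the population gets RL steps
            thetas[i] = rl_step(thetas[i])
        scores = np.array([fitness(t) for t in thetas])
        elite = thetas[np.argsort(-scores)[:n_elite]]   # best first
        mu_old = mu
        mu = lam @ elite
        sigma2 = lam @ (elite - mu_old) ** 2   # diagonal covariance refit
    return mu

# Toy check on a quadratic surrogate "return" (no environment; rl_step disabled):
print(np.round(cem_rl(lambda t: -np.sum((t - 3.0) ** 2),
                      rl_step=lambda t: t, dim=5), 2))

With rl_step as the identity, this reduces to plain CEM and converges toward the optimum of the toy objective; with a real gradient-based rl_step, half the sampled actors would be pulled along the critic's gradient before selection, which is the combination under discussion.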
[ -1, 6, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "Ske_YvI527", "iclr_2019_BkeU5j0ctQ", "iclr_2019_BkeU5j0ctQ", "SyefCujZRm", "Ske3D7Jqh7", "rkxM5djb0Q", "Syev33W527", "Ske_YvI527", "iclr_2019_BkeU5j0ctQ", "iclr_2019_BkeU5j0ctQ" ]
iclr_2019_BkedznAqKQ
LanczosNet: Multi-Scale Deep Graph Convolutional Networks
We propose Lanczos network (LanczosNet) which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximated computation of matrix power but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning as well as learning node embeddings. We show the connection between our LanczosNet and graph based manifold learning, especially diffusion maps. We benchmark our model against 8 recent deep graph networks on citation datasets and QM8 quantum chemistry dataset. Experimental results show that our model achieves the state-of-the-art performance in most tasks.
accepted-poster-papers
The reviewers unanimously agreed that the paper was a significant advance in the field of machine learning on graph-structured inputs. They commented particularly on the quality of the research idea, and its depth of development. The results shared by the researchers are compelling, and they also report optimal hyperparameters, a welcome practice when describing experiments and results. A small drawback the reviewers highlighted is the breadth of the content in the paper, which gave the impression of a slight lack of focus. Overall, the paper is a clear advance, and I recommend it for acceptance.
train
[ "SJghoMfW0Q", "SklJrmGZA7", "BJeBk7f-0Q", "r1eh_ffWAm", "S1lEn5RRhQ", "ryxJEZ4Rhm", "r1llrOIv2Q" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the comments! We have not tried Arnoldi algorithm since we only deal with undirected graphs in the current applications which have symmetric graph Laplacians. Unlike Lanczos algorithm which has error bounds and monotonic convergence properties, Arnoldi algorithm is not well understood since eigenvalues of non-symmetric matrix may be complex and/or badly conditioned. Nonetheless, efficient implementation of Arnoldi algorithm exists. We will explore it in the future.", "Thanks for the comments! We will improve the writing and make the main contributions more clear.", "Thanks for the careful reading and the constructive comments! We will improve the writing and make the paper more accessible in terms of main contributions. Additionally, we would like to clarify a few raised questions as below.\n\nQ1: What gets fundamentally different from polynomial filters proposed in other graph convnets architectures?\n\nA1: We mainly compare with the Chebyshev polynomial filter since it is the most frequently used and also has the nice orthogonality property. \n\nFirst, Chebyshev polynomial filters can be regarded as a special case of our learnable spectral filters. The expansion of the Chebyshev recursion manifests that the filtering lies in a Krylov subspace of which the eigenbasis can be achieved by Lanczos algorithm. Therefore, recovering Chebyshev polynomial filters reduces to recovering the specific coefficients of polynomials which can be achieved by a multi-layer perceptron (MLP) due to its universal approximation power.\n\nSecond, we decouple the order of polynomial and the number of eigenbasis which is not the case for Chebyshev polynomial. Recall that computing K-th order Chebyshev polynomial, i.e., finding K basis vectors, requires running the recursion K times. However, we can run the Lanczos algorithm for M steps, e.g., M < K, to get M basis vectors. Then we can easily get the K-th order polynomial by directly raising the K-th power of Ritz values.\n\nWe will discuss more on this difference in our later version.\n\nQ2: What happens when the graph change? Do the learned features make sense on different graphs? And if yes, why? If not, the authors should be more explicit in their presentation.\n\nA2: Like many other graph convolutional networks, learnable parameters of our model do not depend on any graph specific quantities, like the number of nodes or edges, thus permitting generalization over different graphs. Moreover, in our QM8 experiments, different molecules are indeed different graphs. Therefore, the experimental results empirically verify that our learned features can generalize to different graphs. In terms of why they generalize, we currently do not have a satisfying answer as it requires deep understanding of the data distribution, model expressiveness and non-trivial inequality techniques for proving a useful generalization bound. Intuitively, the successful generalization may be due to the fact that our model does capture some patterns of sub-graphs within the molecules. These patterns may frequently appear in different molecules and determine the physical and chemical properties which link to the final predicted energy. We will improve our presentation regarding to this point.\n\nQ3: What is the complexity of the proposed methods? that should be minimally discussed (at least), as it is part of the key motivations for the proposed algorithms.\n\nA3: It is hard to describe the overall time complexity in a concise manner as it requires lengthy notation. 
For the Lanczos algorithm alone, assuming the graph has N nodes, the most computationally expensive operation of our Algorithm 1 is the matrix-vector product in line 4, which generally costs O(N^2) per step. If we further assume the algorithm runs for K steps, then the overall time complexity is O(KN^2). It is economical since a single graph convolution operation in any graph convnet is also generally O(N^2). In contrast, the eigendecomposition is generally O(N^3). We will discuss this in a later version.\n\nQ4: How is the learning done in 3.2? If there is any learning at all? (btw, S below Eq (6) is a poor notation choice, as S is used earlier for something else).\n\nA4: For the spectral filter, the learning is done via learning the MLP which maps the Ritz values R to R_hat, i.e., f as described above Eq. (5). S below Eq (6) is actually in a different font style. We will change the notation to improve the presentation. \n\nQ5: The results are not very impressive - they are good, but not stellar, and could benefit from showing an explicit tradeoff in terms of complexity too?\n\nA5: We have partially updated the experimental results by adding spectral filters in a layer-wise manner. Please refer to our common response. We will also show the run-time in a later version to contrast these methods. \n", "We thank all the reviewers for the careful reading and the constructive comments. During the rebuttal period, we extended our current model by adding spectral filters for multiple layers, whereas only the first layer contains spectral filters in the submitted version. We show the average results over 3 runs with different random initializations on QM8 below. Note that experiments with our AdaLanczosNet are still ongoing. We will update this in a later version of our paper.\n\n----------------------------------------------------------------\nMethods | Validation MAE | Test MAE |\n----------------------------------------------------------------\nGCN-FP | 15.06 +- 0.04 | 14.80 +- 0.09 |\n----------------------------------------------------------------\nGGNN | 12.94 +- 0.05 | 12.67 +- 0.22 |\n----------------------------------------------------------------\nDCNN | 10.14 +- 0.05 | 9.97 +- 0.09 |\n----------------------------------------------------------------\nChebyNet | 10.24 +- 0.06 | 10.07 +- 0.09 |\n----------------------------------------------------------------\nGCN | 11.68 +- 0.09 | 11.41 +- 0.10 |\n----------------------------------------------------------------\nMPNN | 11.16 +- 0.13 | 11.08 +- 0.11 |\n----------------------------------------------------------------\nGraphSAGE | 13.19 +- 0.04 | 12.95 +- 0.11 |\n----------------------------------------------------------------\nGAT | 11.39 +- 0.09 | 11.02 +- 0.06 |\n----------------------------------------------------------------\nLanczosNet | 9.65 +- 0.19 | 9.58 +- 0.14 |\n----------------------------------------------------------------\n", "The paper under review builds useful insights and novel methods for graph convolutional networks, based on the Lanczos algorithm for efficient computations involving the graph Laplacian matrices induced by the neighbor edge structure of graph networks.\n\nWhile previous work [35] has explored the Lanczos algorithm from numerical linear algebra as a means to accelerate computations in graph convolutional networks, the current paper goes further by:\n(1) exploring in significantly more depth the low-rank decomposition underlying the Lanczos algorithm.\n(2) learning the spectral filter (beyond the Chebyshev 
design) and potentially also the graph kernel and node embedding.\n(3) drawing interesting connections with graph diffusion methods, which naturally arise from the matrix power computation inherent to the Lanczos iteration.\n\nThe paper includes a systematic evaluation of the proposed approach and comparison with existing methods on two tasks: semi-supervised learning in citation networks and molecule property prediction from interactions in atom networks. The main advantage of the proposed method, as illustrated in particular by the experimental results in the citation network domain, is its ability to generalize well in the presence of a small amount of training data, which the authors attribute to its efficient capturing of both short- and long-range interactions.\n\nIn terms of presentation quality, the paper is clearly written, the proposed methods are well explained, and the notation is consistent.\n\nOverall, a good paper.\n\nMinor comment:\npage 3, footnote: \"When faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm.\": I was wondering if the authors have tried that? I think that the Arnoldi algorithm for non-symmetric matrices is significantly less stable than its Lanczos counterpart for symmetric matrices.", "This paper proposes to use the Lanczos algorithm to get approximate decompositions of the graph Laplacian, which would facilitate the computation and learning of spectral features in graph convnets. It further proposes an extension with backpropagation through the Lanczos algorithm, in order to train end-to-end models. \n\nOverall, the idea of using the Lanczos algorithm to bypass the computation of the eigendecomposition, and thus simplify filtering operations in graph signal processing, is not new [e.g., 35]. However, using this algorithm in the framework of graph convnets is new, and certainly interesting. The authors seem to claim that their method permits learning spectral filters, which other methods could not do - this is not completely true and should probably be rephrased more clearly: many graph convnets actually learn features. \n\nThe construction and presentation of the algorithms are generally clear and fairly complete. A few things that could be clarified are the following:\n\n- in the spectral filters of Eq (4), what is fundamentally different from polynomial filters proposed in other graph convnet architectures?\n- what happens when the graph changes? Do the learned features make sense on different graphs? And if yes, why? If not, the authors should be more explicit in their presentation\n- what is the complexity of the proposed methods? That should be at least minimally discussed, as it is part of the key motivations for the proposed algorithms\n- how is the learning done in 3.2? If there is any learning at all? (btw, S below Eq (6) is a poor notation choice, as S is used earlier for something else)\n- the results are not very impressive - they are good, but not stellar, and could benefit from showing an explicit tradeoff in terms of complexity too?\n\nThe discussion in the related work, and the analogy with manifold learning, are interesting. However, that probably brings us to one of the main issues with the paper - the authors are obviously very knowledgeable in graph convnets, graph signal processing, and optimisation. However, there are really too many things in this paper, which leads to numerous shortcuts and, at times, confusion. 
Given the page limits, not everything can be treated with the level of detail it deserves. It might be good to consider trimming the paper down to its core aspects for the next version. \n\n\n\n", "The authors propose a novel method for learning graph convolutional networks. The core idea is to use the Lanczos algorithm to obtain a low-rank approximation of the graph Laplacian. The authors propose two ways to include the Lanczos algorithm. First, as a preprocessing step where the algorithm is applied once on the input graph and the resulting approximation is fixed during learning. Second, by including a differentiable version of the algorithm into an end-to-end trainable model. \n\nThe proposed method is novel and achieves good results on a set of experiments. \n\nThe authors discuss related work in a thorough and meaningful manner. \n\nThere is not much to criticize. This is a very good paper. The almost 10 pages are perhaps a bit excessive considering there was an (informal) 8-page limit. It might make sense to provide a more accessible discussion of the method and Theorem 1, and move some of the more detailed/technical parts on pages 4, 5, and 6 to an appendix. \n" ]
[ -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, 3, 5, 4 ]
[ "S1lEn5RRhQ", "r1llrOIv2Q", "ryxJEZ4Rhm", "iclr_2019_BkedznAqKQ", "iclr_2019_BkedznAqKQ", "iclr_2019_BkedznAqKQ", "iclr_2019_BkedznAqKQ" ]
iclr_2019_BkfbpsAcF7
Excessive Invariance Causes Adversarial Vulnerability
Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from epsilon-adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an information-theoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities.
accepted-poster-papers
This paper studies the roots of the existence of adversarial examples from a new perspective. This perspective is quite interesting and thought-provoking. However, some of the contributions rely on fairly restrictive assumptions and/or are not properly evaluated. Still, overall, this paper should be a valuable addition to the program.
val
[ "rkl8B7OVJ4", "HklfNANAh7", "ByeDB22aRQ", "Bkeye2iT0X", "r1e1SFU_2m", "SyeLf8916Q", "Byx23FoQaQ", "ByeqLhs7TX", "BklM_OiQ6m", "HJeo0yhma7", "B1xjee27aX" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "We were glad to see your positive feedback.\n\nIndeed we agree some open questions (summarized below in point (II)) remain. Yet, we hope that our efforts to prove the underlying principles of our objective sparks future analysis how/when our optimality assumptions (discussed below in point (I)) can be achieved and why the objective succeeds in our current setting.\nThat being said, as pointed out above, the objective function itself is one out of 4 major contributions and therefore this analysis would be out of scope for the presented work.\n\nThank you once again for the constructive discussion!\n\n------------------------------\n(I) Optimality assumptions:\n- Lemma 8 (i) (Appendix A): CE- and MLE-term is Maximum Likelihood under a factorized prior p(z_s, z_n) = p(z_s) p(z_n). In the optimum, it thus holds I(z_s; z_n) = 0 as I(z_s; z_n) = KL(p(z_s, z_n) || p(z_s) p(z_n)).\nFurthermore, in the optimum I(y; z_s) = H(y) = const.\n- Lemma 8 (ii) and (iii) (Appendix A): If the lower bound is tight (nuisance classifier can decode all information about y in z_n), we minimize I(y; z_n) provably.\n------------------------------\n(II) Achieving optimality / Possible alternatives:\n- Connecting independence of z_n and z_s with the model architecture: Due to information preservation, bijective networks are particularly suitable for our task, but other architectures could be considered.\n- Tightness of lower bounds: How tight are lower bounds given by a nuisance classifier or alternative lower bounds by the MINE estimator (Belghazi et al. 2018)?\n- Lack of alternatives: As I(y; z_n) is bounded (Remark 9, Appendix A.2), non-trivial (smaller than H(y)) upper bounds on I(y; z_n) are difficult and to the best of our knowledge, we are not aware of any.", "This paper studies a new perspective on why adversarial examples exist in machine learning -- instead of seeing adversarial examples as the result of a classifier being sensitive to changes in irrelevant information (aka nuisance), the authors see them as the result of a classifier being invariant to changes in relevant (aka semantic) information. They show how to efficiently find such adversarial examples in bijective networks. Moreover, they propose to modify the training objective so that the bijective networks could be more robust to such attacks.\n\nPros:\n -- clarity is good (except for a few places, e.g. no definition of F(x)_i in Definition 1; Page 6 \"three ways forward\" item 3: I(y;z_n|z_s) = I(y;z_s) should be I(y;z_n|z_s) = I(y;z_n).)\n -- the idea is original to the best of my knowledge\n -- the mathematical motivation is sound\n -- Figure 6 seems to show that the proposed defense works on MNIST (However, would you provide more details on how you interpolated z_n? Moreover, what do the images generated with z_s from one input and z_n from another input look like (in your method)?)\n\nCons:\n -- scope: as all the presented problems and solutions assume bijective mapping, I wonder how is it relevant to the traditional perspective of adversarial attack and defense? It seems to me that the contribution of this paper is identifying a problem of bijective networks and then proposing a solution, thus its significance is restricted.\n -- method: while the mathematical motivation is sound, I'm not sure if the proposed training objective can achieve that goal. To elaborate, I see problems with both terms added in the proposed loss function:\n (a.) 
for the objective of maximizing the cross entropy of the nuisance classifier, it is possible that I(y;z_n) is not reduced, but rather the information about y is encoded in a way that the nuisance classifier is not able to decode, similar to what happens in a one-way function (for example, see https://en.wikipedia.org/wiki/Cryptographic_hash_function ). In the MNIST experiments, the nuisance classifier is a three-layer MLP, which may be too weak and susceptible to information concealing.\n (b.) for the objective of maximizing the likelihood of a factorized model of p(z_s, z_n), I don't see how optimizing it would reduce I(z_s; z_n). In general, even if z_s and z_n are strongly correlated, one can still fit such a factorized model. This only ensures that I(Z_s; Z_n) = 0 for Z_s, Z_n *sampled from the model*, but does not necessarily reduce I(z_s; z_n) for z_s, z_n *used to train the model*. The discrepancy between p(Z_s, Z_n) and p(z_s, z_n) could be huge, in which case one has the model misspecification problem, which is another topic.\n (c.) a side question: why is the MLE objective using likelihood rather than log likelihood? Since the two cross entropy losses are similar to log likelihood, I feel there is a mismatch here.\n\n----------------------------------------\nAFTER REBUTTAL:\n\nThanks for your reply to my comments. The new revision has improved clarity and provided new supporting evidence. I would like to raise my rating to 6.\n\nThat being said, (as you agreed) the link from the conceptual goal to the proposed objective has mostly empirical support. Therefore, I hope it may encourage future investigation on when and why the proposed objective is successful in achieving the conceptual goal.", "Dear Authors,\n\nThanks for your reply to my comments. The new revision has improved clarity and provided new supporting evidence.\n\nThat being said, (as you agreed) the link from the conceptual goal to the proposed objective has mostly empirical support. Therefore, I hope it may encourage future investigation on when and why the proposed objective is successful in achieving the conceptual goal.\n\nBest,", "Dear Reviewer2,\n\nwe would be most grateful if you could let us know if there are any further concerns you have after considering the thoroughly revised manuscript, added experiments and answers above.", "The paper focuses on adversarial vulnerability of neural networks, and more specifically on perturbation-based versus invariance-based adversarial examples and how using bijective networks (with so-called metameric sampling) may help overcome issues related to invariance. The approach is used to get around insufficiencies of cross-entropy-based information-maximization, as illustrated in experiments where the proposed variation on CE outperforms CE. \n\nWhile I am not a neural network expert, I felt that the ideas developed in the paper are worthwhile and should eventually lead to useful contributions and be published. This being said, I did not find the paper in its present form to be fit for publication in a high-tier conference or journal. The main reason for this is the imbalance between the somewhat heavy and overly commented first four pages (especially in Section 2) contrasting with the surprisingly moderate level of detail when it comes to bijective networks, supposedly the heart of the actual original contribution. To me, this is severely affecting the overall quality of the paper. 
The contents of sections 3 and 4 seem relevant, but I struggled to find out what precisely the main contribution is in the end, probably because of the lack of detail on bijective networks mentioned before. Again, I am not an expert, and I will indicate that in the system of course, but while I cannot completely judge all aspects of the technical relevance and the originality of the approach, I am fairly convinced that the paper deserves to be substantially revised before it can be accepted for publication. \n\nEdit: After paper additions I am changing my score to a 6. ", "This paper explores adversarial examples by investigating an invertible neural network. They begin by first correctly pointing out limitations with the commonly adopted \"l_p adversarial example\" definition in the literature. The main idea involves looking at the preimage of different embeddings in the final layer of an invertible neural network. By training a classifier on top of the final embedding of the invertible network, the authors are able to partition the final embedding into a set of \"semantic variables\", which are the components the classifier uses for classification, and a set of \"nuisance variables\", which are the complement of the logit variables. This partition allows the authors to define entire subspaces of adversarial images by holding the logit variables fixed and varying the nuisance variables, and applying the inverse to these modified embeddings. The authors are able to find many incorrectly classified images with this inversion technique. The authors then define a new loss which minimizes the mutual information between the nuisance variables and the predicted label. \n\nI found the ideas in this paper quite interesting and novel. Starting with the toy problem of adversarial spheres is great, and it's convincing that the inversion technique can be used to find errors on this dataset even when the classification accuracy is (empirically) 100%. The resulting adversarial images generated by applying their technique are also quite interesting, and this is a cool and interesting way to study the robustness of networks in non-iid settings.\n\nThe main weakness concerns the evaluation of their proposed new training objective, and I have a few suggestions as to how to strengthen this evaluation. It would be very convincing to me if the authors could show that their new training objective increases robustness to distributional shift. A potential benchmark for distributional shift could be https://arxiv.org/abs/1807.01697 (or just picking a subset of these image corruptions). If the proposed objective shows improvement on this benchmark (or a related one) then this would be a solid contribution.\n\nOne question I have for the authors is how typical the behavior in Figure 4 is. For any fixing of the logits, are all/most metameric samples classifiable by a human oracle? That is, do you ever get garbage images from this sampling process? Adding a collection of random samples to the Appendix to demonstrate typical behavior could help demonstrate this.\n\nEdit: After paper additions I am changing my score to a 7. 
", "Dear Reviewers, we thank you very much for helping us to substantially improve the manuscript.\n\nWe have addressed all raised concerns either with additional experiments and results, with additional discussions in the manuscript or through other aspects of our revision.\n\nWe were delighted to see the positive reaction by all reviewers to our developed ideas and your suggestions and concerns greatly improved the paper. The new distribution shift experiments, as well as new results and discussion of non-bijective networks and their relationship to bijective ones, significantly increase the practical relevance of the work. \n\nGiven the tension between the positive comments to most of our contributions, the ratings and the fact that the main concerns are related to our proposed solution, we would like to point out that the developed training objective is only one out of four major contributions of the paper.\n\nWe list our updated contributions here again for clarity:\n\n1 - We introduce an alternative viewpoint on adversarial examples, one of the major failures in modern machine learning algorithms, give a formal definition of it and show its practical relevance for commonly used architectures in the updated experiments and discussion.\n\n2 - We build a competitive bijective ImageNet/MNIST classifier to tractably compute such adversarial examples exactly. Based on this, we provide what may be the first analytic adversarial attack method in the literature.\n\n3 - We prove that a major reason for invariance-based vulnerability is the commonly used cross-entropy objective and show from an information-theoretic viewpoint what may be done to overcome this.\n\n4 - We put our theoretical results into practice: based on bijective networks we introduce a practically useful loss and illustrate as a proof-of-concept that it largely overcomes the problem of excessive invariance, making it a promising way forward. Additionally, we have now included more quantitative experiments showing robustness to adversarial distribution shifts on a newly introduced benchmark.\n\nIn the revision we have:\n\n-- Thoroughly revised and updated the whole manuscript to make all of our contributions more clear and incorporate all raised concerns.\n-- Updated figures and descriptions and moved large parts of section 2 to the appendix to improve clarity. \n-- Added an adversarial distribution shift benchmark to stress test our proposed objective and show its effectiveness in challenging settings.\n-- Added new results on non-bijective networks for the metameric samples and the distribution shift experiments to show non-bijective networks have the same issues as the bijective networks we use. \n-- Added a discussion on the relationship between ResNets and RevNet-type networks, providing evidence that they are closely related. 
\n-- Added additional references from the literature providing evidence of false excessive invariance in non-bijective architectures.\n-- Added a random batch of metameric samples to the appendix, to showcase the consistency of our results.\n\nPlease let us know if you have any more questions or if there is anything else we can do to make you reconsider your rating.\n\nThank you once again for your effort.", "\n--------------------------------------------------------\n\nWe thank you very much for acknowledging our work as interesting and novel, as well as for the appreciation of our developed methodologies.\n\nWe answer your questions below.\n\n--------------------------------------------------------\n\nQ: Does the new training objective increase robustness to distributional shift?\n--\nThank you for raising this point. To shed light on the effect of our loss under adversarial distribution shifts, we have added new experiments on a dataset we introduce to precisely test our claims. We term the dataset shiftMNIST and designed it such that it follows distribution shifts D_Adv of the form we assumed for Theorem 6.\n\nOur results reveal that our proposed loss does indeed reduce the errors under challenging distribution shifts by up to 38% compared to cross-entropy trained ResNets and RevNets, highlighting the efficacy of our proposed objective. \n\nFurther, the results also show once again how badly standard networks can fail, even though in one task only a single pixel is removed, leaving the image semantics almost entirely unchanged. The results are one more piece of evidence for the insufficiency of cross-entropy-based information maximization and the excessive invariance it may lead to in practice. \n\nWe sincerely thank you for bringing this up.\n\n--------------------------------------------------------\n\nQ: What is the typical behavior of samples shown in Figure 4?\n--\nThe metameric samples shown are representative and we have observed similar quality throughout the whole validation set, sometimes with slight colored artifacts though. We have added a large batch of metameric samples to the appendix to give the reader a better idea about their typical behavior. \n\n--------------------------------------------------------\n\nWe believe your review has substantially improved the manuscript; thank you.", "\n--------------------------------------------------------\n\nWe thank you very much for acknowledging our work as appealing and our contributions as publication-worthy.\nWe also thank you for your thoughts and comments on the structure of the manuscript.\n\n--------------------------------------------------------\n\nQ: Overly commented first pages, imbalanced with sections 3 and 4.\n\nWe have done our best to fix this and have substantially revised the paper. We removed large portions of section 2 and added them to the appendix, added additional details about bijective networks, re-structured sections 3 and 4, and added another experiment to emphasise our main contributions more. 
Finally, we have adjusted the abstract and contributions in the introduction accordingly.\n\n--------------------------------------------------------\n\nQ: Lacking detail on bijective networks.\n\nThe main components we are using are based on Real-NVP[1]/Glow[2] and iRevNet[3] networks, which are widely known and cited in the paper, so we decided not to put too much focus on their details.\nHowever, in the revision we have added some additional details; for instance, we have added figure 3, which explains the architecture we are using.\n\n[1] Dinh, Laurent, Jascha Sohl-Dickstein, and Samy Bengio. \"Density estimation using Real NVP.\" \n[2] Kingma, Diederik P., and Prafulla Dhariwal. \"Glow: Generative flow with invertible 1x1 convolutions.\"\n[3] Jacobsen, Jörn-Henrik, Arnold Smeulders, and Edouard Oyallon. \"i-RevNet: Deep Invertible Networks.\"\n--------------------------------------------------------\n\nPlease let us know if you have any more comments or concerns!\n\nThank you once again.", "\n--------------------------------------------------------\n\nWe are glad that you find most of our major contributions original, interesting, clear and mathematically sound.\nWe also thank you for your thoughtful questions and comments; we address them below.\n\n--------------------------------------------------------\n\nQ: How are findings related to non-bijective networks?\n--\nThank you for bringing this up; we have revised the manuscript to answer this important question very clearly to show that our identified problems, analysis and conclusions are not limited to bijective networks.\nWe summarize below.\n\n----------\n-- Our identified problem of excessive invariance occurs in many other networks as well.\n----------\n\nWe have added results on the gradient-based equivalent of our analytic metameric sampling attack to the paper. We match the logit vector of one image with the logits of another image via gradient-based optimization with no norm-based restriction on the input. We do so on an ImageNet-trained state-of-the-art ResNet-154 and see that the problem we have identified in bijective nets is the same here, if not worse, as the metameric samples look even cleaner. Qualitative results are added to figure 5.\n\nBesides that, multiple papers have observed excessive invariance. On the adversarial spheres problem [1], for instance, the authors show that their quadratic network does almost perfectly well while ignoring up to 60% of *semantically meaningful* input dimensions. Another line of work has also shown that similar behavior can appear in ReLU networks as well [2].\n\nWe have also added an additional set of experiments to the revised manuscript that shows how cross-entropy trained ResNets fail badly under distribution shifts that exploit their excessive invariance, giving another piece of evidence that our findings are not limited to bijective networks, but applicable to the most successful deep network architecture around as well.\n\n----------\n-- There is a close relationship between bijective nets and SOTA architectures.\n----------\n\nBijective networks are closely related to ResNets, which are in fact provably bijective under mild assumptions, as shown by a recent publication [3]. Further, it has been shown that ResNets and RevNet-type networks differ from one another only in their dimension-splitting scheme [4]. 
And finally, bijective iRevNets have been shown to have many progressive properties equivalent to those of ResNets throughout the layers of their learned representation [5].\n\nIn summary, there is ample evidence that bijective RevNet-type networks are not the reason for the problems we observe, but are rather extremely similar to ResNets, the de facto state-of-the-art architecture, while providing a powerful framework to study and combat problems like excessive invariance.\n\n[1] Gilmer, Justin, et al. \"Adversarial spheres.\" \n[2] Behrmann, Jens, et al. \"Analysis of Invariance and Robustness via Invertibility of ReLU-Networks.\"\n[3] Behrmann, Jens, David Duvenaud, and Jörn-Henrik Jacobsen. \"Invertible Residual Networks.\"\n[4] Grathwohl, Will, et al. \"FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models.\"\n[5] Jacobsen, Jörn-Henrik, Arnold Smeulders, and Edouard Oyallon. \"i-RevNet: Deep Invertible Networks.\"", "\n---------------------------------------------------------\n\nQ: Can the training objective achieve its goal?\n\n----------\n-- (a) The nuisance classifier is not powerful enough to decode y from z_n.\n----------\n\nThis is indeed a common problem when formulating a bound this way, and it is the same problem GANs face. However, in practice, GANs often work and we also find that the nuisance classifier does indeed do its job; one could even validate this post-hoc by training a more powerful nuisance classifier to confirm it.\n\nAdditionally, we also have metameric sampling as a validation method. If the information about the class is only hidden in z_n, but not removed, then metameric sampling would reveal this. Replacing z_n of one category with a z_n from another category would then change the category of the reconstruction, but we see that this is not happening when applying our loss. Thus, we conclude that the objective is successful, albeit with its challenges.\n\n----------\n-- (b) The factorial maximum likelihood objective does not lead to independence.\n----------\n\nWe agree that there is no guarantee that the loss will lead to full independence, but it does encourage it at least.\nOn the other hand, our evaluation method (metameric sampling) is not based on samples from the model but is based on the activations of real data points. Thus, according to your argumentation, this sampling method would reveal strong dependencies between the subspaces. In practice, we see this is not the case, as shown in figure 7 on the right, where combinations of z_n from one class and z_s from another do indeed lead to a change of nuisance/style in the original image, but not to a change of category. Empirically, this means our objective was successful and most of the label information has been removed from z_n.\n\nTo further analyze the objective, we have added another experiment to assess if it can successfully defend against targeted distribution shifts as considered in Theorem 6.\nWe introduce a new dataset termed shiftMNIST, which augments MNIST with additional highly predictive features at train time and removes or randomizes those features at test time, while leaving the digits themselves as the stable predictive variable.\n\nOur experiments reveal that the baseline cross-entropy trained ResNet and fiRevNet fail badly on these problems, while our proposed loss reduces the error under such distribution shifts by up to 38%. 
This provides more evidence that our proposed objective does achieve its goal in practice.\n\n----------\n\nIn summary, we do agree that the lower bound and the maximum likelihood objectives have their respective issues, and we added some discussion on this to the manuscript. However, in practice, the metameric samples and our additional distribution shift experiments show that the loss does, in fact, work as intended, making it a promising way forward.\n\n---------------------------------------------------------\n\nQ: What do the images generated with z_s from one input and z_n from another input look like (in your method)?\n--\nThose images (the metameric samples) are already shown in the last row in the top block of figure 7; we have adapted the figure and added some more description to it to make everything clearer.\nIn the baseline, the metameric samples are adversarial examples, meaning one can turn any image into any class without changing the logits at all. With our objective (shown on the right side), this is no longer possible, as keeping z_s fixed and exchanging z_n only affects the style of the image, not its class-specific content. The objective has achieved its goal and successfully defended against the metameric sampling attack.\n\n---------------------------------------------------------\n\nMinor:\nWe have fixed the typos and added the log to the MLE objective; thank you.\n\n---------------------------------------------------------\n\nThank you once again for the detailed review; we were able to significantly improve the manuscript based on it.\nWe have revised multiple parts, added new experiments and added discussions to answer your concerns.\n\nWe hope we were able to answer everything to your satisfaction; please let us know if there are any more open points.\n\nThank you once again!" ]
[ -1, 6, -1, -1, 6, 7, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 2, 4, -1, -1, -1, -1, -1 ]
[ "ByeDB22aRQ", "iclr_2019_BkfbpsAcF7", "Bkeye2iT0X", "B1xjee27aX", "iclr_2019_BkfbpsAcF7", "iclr_2019_BkfbpsAcF7", "iclr_2019_BkfbpsAcF7", "SyeLf8916Q", "r1e1SFU_2m", "HklfNANAh7", "HJeo0yhma7" ]
iclr_2019_Bkg2viA5FQ
Hindsight policy gradients
A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.
accepted-poster-papers
The paper generalizes the concept of "hindsight", i.e. the recycling of data from trajectories in a goal-based system based on the goal state actually achieved, to policy gradient methods. This was an interesting paper in that it scored quite highly despite all three reviewers mentioning incrementality or a relative lack of novelty. Although the authors naturally took some exception to this, AC personally believes that properly executed, contributions that seem quite straightforward in hindsight (pun partly intended) can be valuable in moving the field forward: a clean and didactic presentation of theory backed by well-designed and extensive empirical investigation (both of which are adjectives used by reviewers to describe the empirical work in this paper) can be as valuable, or moreso, than a poorly executed but higher-novelty works. To quote AnonReviewer3, "HPG is almost certainly going to end up being a widely used addition to the RL toolbox". Feedback from reviewers prompted extensive discussion and a direct comparison with Hindsight Experience Replay which reviewers agreed added significant value to the manuscript, earning it a post-rebuttal unanimous rating of 7. It is therefore my pleasure to recommend acceptance.
test
[ "Hyx3e4Pc3X", "BJxW_kWc3Q", "B1enK-g7Am", "SkgYmj3gCQ", "HklK6CgeCm", "rygTDRlx0X", "B1l7mFA06m", "BJlHmDRC6Q", "HJeizrR0pQ", "HJxEna5ETQ", "SyefN6lMcQ", "BkxBfwLbcQ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "The authors present HPG, which applies the hindsight formulation already applied to off-policy RL algorithms (hindsight experience replay, HER, Andrychowicz et al., 2017) to policy gradients.\nBecause the idea is not new, and formulating HPG from PG is so straightforward (simply tie the dynamical model over goals), the work seems incremental. Also, going off policy in PG is known to be quite unstable, and so I'm not sure that simply using the well known approach of normalized importance weights is in practice enough to make this a widely useful algorithm for hindsight RL.\n\n\nEvaluation 3/5 How does HPG compare to HER? The only common experiment appears to be bit-flipping, which it appears (looking back at the HER paper, no reference to HER performance in this paper) to signifcantly underperform HER. In general I think that the justification for proposing HPG and possible advantages over HER need to be discussed: why should we generalize what is considered an on-policy algorithm like PG to handle hindsight, when HER seems ideally suited for such scenarios? Why not design an experiment that showcases the advantages of HPG over HER? \nClarity 4/5 Generally well explained.\nSignificance 3/5 The importance of HPG relative to off-policy variants of hindsight is not clear. Are normalized importance weights, a well established variance reduction technique, enough to make HPG highly effective? Do we really want to be running separate policies for all goals? With the practical need to do goal sub-sampling, is HPG really a strong algorithm (e.g. compared to HER)? Why does HPG degrade later in training sometimes when a baseline is added? This is strange, and warrants further investigation.\nOriginality 2/5 More straightforward extension of previous work based on current presentation. \n\nOverall I feel that HPG is a more straightforward extention of previous work, and is not (yet at least) adequately justified in the paper (i.e. over HER). Furthermore, the experiments seem very preliminary, and the paper needs further maturation (i.e. more discussion about and experimental comparision with previous work, stronger experiments and justification).\nRating 5/10 Weak Reject\nConfidence 4/5\n\nUpdated Review: \n\nThe authors have updated the appendix with new results, comparing against HER, and provided detailed responses to all of my concerns: thank you authors.\n\nWhile not all of my concerns have been addressed (see below), the new results and discussion that have been added to the paper make me much more comfortable with recommending acceptance. The formuation, while straightforward and not without limitations, has been shown in preliminary experiments to be effective. While many important details (e.g. robust baselines and ultimate performance) still need to be worked out, HPG is almost certainly going to end up being a widely used addition to the RL toolbox. Good paper, recommend acceptance.\n\nEvaluation/Clarity/Originality/Significance: 3.5/4/3/4\n\nRemaining concerns: \n- The poor performance of the baselines may indeed be due to lack of hindsight, but this should really be debugged and addressed by the final version of the paper.\n- Results throughout the paper are shown for only the first 100 evaluation steps. In many of the figures the baselines are still improving and are highly competitive... 
some extended results should be included in the final version of the paper (at least in the appendix).\n- As pointed out, it is difficult to compare the HER results directly, and it is fair to initially avoid confounding factors, but Polyak-averaging and temporal difference target clipping are important optimization tricks. I think it would strengthen the paper to optimize both the PG and DQN based methods and provide additional results to get a better idea of where things stand on these and/or possibly a more complicated set of tasks.\n\n\n", "Following recent work on Hindsight Experience Replay (Andrychowicz et al. 2017), the authors extend the idea to policy gradient methods. They formally describe the goal-conditioned policy gradient setup and derive the extensions of the classical policy gradient estimators. Their key insight to deriving a computationally efficient estimator is that for many situations, only a small number of goals will be \"active\" in a single trajectory. Then, they conduct extensive experiments on a range of problems and show that their approach leads to improvements in sample efficiency for goal-conditioned tasks.\n\nAlthough the technical novelty of the paper is not high (many of the estimators follow straightforwardly from previous results, however, the goal subsampling idea is a nice contribution), the paper is well written, the topic is of great interest, and the experiments are extensive and insightful. I expect that this will serve as a nice reference paper in the future, and launching point for future work. \n\nThe only major issue I have is that there is no comparison to HER. I think it would greatly strengthen the paper to have a comparison with HER. I don't think it diminishes their contributions if HER outperforms HPG, so I hope the authors can add that.\n\nComments:\n\nIn Sec 6.1, it seems surprising that GCPG+B underperforms GCPG. I understand that HPG+B may underperform HPG, but usually for PG methods a baseline helps. Do you understand what's going on here?\n\nIn Sec 6.2, it would be helpful to plot the average return of the optimal policy for comparison (otherwise, it's hard to know if the performance is good or bad). Also, do you have any explanations for why HPG does poorly on the four rooms?\n\n====\n\nRaising my score after the authors responded to my questions and added the HER results.", "We have updated the paper to include an empirical comparison between hindsight policy gradients and hindsight experience replay. This comparison is presented in Appendix E.3.7 (Pgs. 36-38).", "Thank you for your clarifications.\n\nThat the baseline is so poor is still surprising. A simple running average baseline (that takes into account the number of remaining steps) should do no worse than the original estimator.\n\nLook forward to the addition of the HER results. I will raise my score with the updated version of the paper.\n", "Regardless of the results of the direct empirical comparison with hindsight experience replay that we will provide, several facts justify our work. Firstly, policy gradient approaches constitute one of the most important classes of model-free reinforcement learning methods, which by itself warrants studying how they can benefit from hindsight. Our empirical results show that our approach is more than just theoretically interesting. 
Because such an approach is complementary to previous work, note that it is entirely possible to train a critic by hindsight experience replay while training an actor that employs hindsight policy gradients. Secondly, although hindsight experience replay does not require a correction analogous to importance sampling, indiscriminately adding hindsight transitions to the replay buffer is problematic, which has mostly been tackled by heuristics (see Andrychowicz et al. (2017), Sec. 4.5). In contrast, our approach seems to benefit from incorporating all available information about goals at every update, which also avoids the need for a memory-costly replay buffer. The practical need for active goal subsampling in longer episodes seems to lead to a natural trade-off between computational efficiency and sample efficiency. Note that many successful model-free reinforcement learning algorithms rely on an approximation to a principled formulation, and our experiments suggest that subsampling active goals is such an example.\n\nAlthough the bit flipping and FetchPush environments used in our evaluation are similar to environments found in the work of Andrychowicz et al. (2017), our results are not directly comparable to theirs, mainly due to differences in the evaluation protocol. The most significant of such differences is that Andrychowicz et al. (2017) are only concerned with whether a goal was achieved during an episode, whereas we are also concerned with whether a goal was achieved quickly.\n\nRegarding the fact that policies sometimes degrade during training with HPG+B, first note that this phenomenon is only observed in bit flipping environments. Also note that GCPG+B presents instability in the only bit flipping environment where it does not perform poorly (batch size 16, k = 8). After careful investigation, we are convinced that the main cause of this issue is the fact that the value function baseline is still very poorly fit by the time that the policy exhibits desirable behavior. In all likelihood, this is due to the fact that the value function baseline is not trained using hindsight, which is also consistent with the fact that the instability is observed precisely in the most extreme examples of sparse-reward environments. Although our preliminary experiments in using hindsight to fit a value function baseline have been successful, this may be accomplished in several ways, and requires a careful study of its own.\n\nBesides the additional experimental content to be released before the rebuttal deadline, we can easily improve the justification for our approach by including the arguments presented above in the final version of the paper. We hope that these changes will allow you to reconsider the rating given to our submission.\n\n(Part 2/2)", "We thank the reviewer for suggesting the work of Nachum et al. (2018). Their work seems related to the work of Levy et al. (2017), which applies hindsight experience replay in a hierarchical reinforcement learning approach.\n\nWe address your detailed comments in order of appearance:\n* We note that $p(s_{t+1} \\mid s_t, a_t)$ may be different from $p(s_{t'+1} \\mid s_{t'}, a_{t'})$ even if $s_{t+1} = s_{t'+1}$, $s_{t} = s_{t'}$, and $a_{t} = a_{t'}$, as long as $t \\neq t'$. 
Following the convention briefly introduced in the first paragraph of Section 2, $p(s_{t+1} \\mid s_{t}, a_{t})$ refers to the conditional probability of $S_{t+1} = s_{t+1}$, whereas $p(s_{t'+1} \\mid s_{t'}, a_{t'})$ refers to the conditional probability of $S_{t'+1} = s_{t'+1}$. Because $S_{t+1}$ and $S_{t'+1}$ are distinct random variables when $t \\neq t'$, our task formulation allows the probability of a state transition given an action to change across time steps within an episode. This is reminiscent of the notation employed by Bishop (2013). Our notation would also allow a different policy for each time step, which is why we have noted that the same policy is used to make decisions at every time step. We understand that this may be confusing at first, but we have been very careful in our choice of notation to allow for conciseness and rigour.\n* As discussed previously, we believe that Section 3 is important to make the paper as a whole more accessible.\n* Because estimators are random variables, they are indeed written as functions of other random variables (instead of their realizations). In your example, $G^{(i)} = g$ refers to conditioning the random variable $G^{(i)}$ on the realization $g$, whereas a probability conditioned on $G^{(i)}$ will be a function of a random variable (itself a random variable). In any case, the reviewer seems to have interpreted the expression correctly. \n* That is correct. Note that interacting with the environment after a particular goal has been achieved would provide an HPG agent with additional information about alternative goals.\n* As discussed previously, we have already investigated the instability of value function baselines in bit flipping environments.\n\nWe hope that these clarifications and the additional experimental content to be released before the rebuttal deadline will allow you to reconsider the rating given to our submission.\n\n(Part 2/2)", "Thank you very much for the time that you have dedicated to evaluate our work. We are glad that you believe that our work is high quality, that it will be useful to the community, that our paper is well written, that our theoretical contributions are solid, that our experiments are well designed, that our experimental analysis is rigorous, that our hyperparameter sensitivity and ablation analyses are valuable, and that our method appears highly effective.\n\nRegarding your brief summary of our work, although this is probably clear, we would like to emphasize that restricting attention to active goals does not affect the HPG estimator (a remarkable property), while subsampling active goals increases computational efficiency likely at a cost in sample efficiency. Therefore, both strategies are not intended to reduce variance.\n\nWe completely understand your interest in a direct comparison with hindsight experience replay, although we are glad that you agree that our contribution would not be diminished if hindsight experience replay were more sample efficient at this stage. Because this comparison was a common request among reviewers, we are currently working on it. We will provide an updated version of the paper including the corresponding results before the end of the rebuttal period (ideally by 21/11).\n\nNonetheless, we would like to briefly explain why we did not include such a comparison in the current version of the paper. Firstly, hindsight experience replay is an approach that can be applied to any reinforcement learning technique that relies on experience replay. 
Besides the choices required to implement hindsight experience replay itself (such as the goal sampling strategy and number of hindsight transitions per observed transition), each of these techniques potentially has several important hyperparameters. Instead of comparing HPG to one of these techniques, we preferred to focus on a rigorous comparison with GCPG, its most natural counterpart. The similarities between both methods allow for a highly systematic comparison that minimizes confounding factors. Secondly, note that we have not used tricks that are known to increase the performance of policy gradient methods (e.g., entropy bonuses, reward scaling, learning rate annealing, simple statistical baselines), once again in order to avoid introducing confounding factors. Because hindsight experience replay is directly applicable to state-of-the-art techniques, this would lead to an unbalanced comparison. Finally, it should be clear that our work can probably benefit from being extended to state-of-the-art policy gradient approaches. However, once again, such extensions are likely to introduce confounding factors that we would prefer to avoid in our fundamental work.\n\nWe plan on conducting and presenting experiments on environments with continuous action spaces in future work.\n\nRegarding the fact that policies sometimes degrade during training with HPG+B, first note that this phenomenon is only observed in bit flipping environments. Also note that GCPG+B presents instability in the only bit flipping environment where it does not perform poorly (batch size 16, k = 8). After careful investigation, we are convinced that the main cause of this issue is the fact that the value function baseline is still very poorly fit by the time that the policy exhibits desirable behavior. In all likelihood, this is due to the fact that the value function baseline is not trained using hindsight, which is also consistent with the fact that the instability is observed precisely in the the most extreme examples of sparse-reward environments. Although our preliminary experiments in using hindsight to fit a value function baseline have been successful, this may be accomplished in several ways, and requires a careful study of its own.\n\nAlthough we agree that Section 2 and Section 3 could be abridged in favour of results presented in the Appendices, we also believe that these two sections make the paper more accessible. For instance, Section 4 certainly benefits from the previous presentation of goal-conditional counterparts of well known results (Section 3) using our notation (Section 2).\n\nWe disagree with the claim that our paper is somewhat less novel because hindsight experience replay may be applied to the deep deterministic policy gradient (DDPG). Firstly, the correction mechanism based on importance sampling that we propose is radically different in comparison to the approach based on experience replay. Secondly, and more importantly, the deep deterministic policy gradient is not in the same class of policy gradient algorithms that we consider, which contains important state-of-the-art algorithms. Note that DDPG requires a critic that is differentiable with respect to the choice of action by the actor (for any state). Consequently, the method only applies to environments with continuous action spaces.\n\n(Part 1/2)\n", "Thank you very much for the time that you have dedicated to evaluate our work. 
We are glad that you found our ideas generally well explained.\n\nRegarding your summary of our contribution, although we agree that hindsight is not an original idea in reinforcement learning, it was introduced only recently and has attracted significant interest, as evidenced by the fact that the work of Andrychowicz et al. (2017) has received more than one hundred citations in less than two years, when it first appeared as a technical report. \n\nWhile we agree that tying the dynamics model over goals is straightforward, that is just one of many steps required to derive our approach, which has importance sampling at its core. Importance sampling is indeed the natural choice to enable using off-policy data in policy gradients. Nonetheless, the exact formulation of the hindsight policy gradient, its relationships with value functions, and the feasibility of the corresponding estimators are only clear in hindsight. For instance, note that we are able to derive an estimator that can be effectively computed for environments of interest even though it seems to require an expectation over all possible goals. Although apparently simple by analogy, several results require proofs that are elementary but involved (for an example, see Theorem 4.2). Our technical approach to hindsight is radically different from previous work, which is why we strongly disagree with the claim that our work is incremental.\n\nThe reviewer is correct in noting that employing importance sampling to compute gradients can in general be unstable, which motivates the empirical study presented in Section 6 and the supplementary empirical study of likelihood ratios presented in Appendix E.3.6. We believe that our experiments on a diverse selection of sparse-reward environments conclusively answer the question of whether weighted importance sampling is effective. In addition to such substantial empirical evidence, it is crucial to note that we apply importance sampling in a very specific setting, leading to estimators that have remarkable properties that differentiate them from previous estimators for off-policy learning. We mention several of these properties in Section 5, in the paragraph before the last.\n\nOn a related subject, we vehemently disagree with the claim that our experiments are preliminary. Note that Reviewer #2 refers to our experiments as extensive and Reviewer #4 believes that our experiments are well designed and that our analysis is thorough and rigorous.\n\nWe completely understand your interest in a direct comparison with hindsight experience replay. Because this comparison was a common request among reviewers, we are currently working on it. We will provide an updated version of the paper including the corresponding results before the end of the rebuttal period (ideally by 21/11). \n\nNonetheless, we would like to briefly explain why we did not include such a comparison in the current version of the paper. Firstly, hindsight experience replay is an approach that can be applied to any reinforcement learning technique that relies on experience replay. Besides the choices required to implement hindsight experience replay itself (such as the goal sampling strategy and number of hindsight transitions per observed transition), each of these techniques potentially has several important hyperparameters. Instead of comparing HPG to one of these techniques, we preferred to focus on a rigorous comparison with GCPG, its most natural counterpart. 
The similarities between both methods allow for a highly systematic comparison that minimizes confounding factors. Secondly, note that we have not used tricks that are known to increase the performance of policy gradient methods (e.g., entropy bonuses, reward scaling, learning rate annealing, simple statistical baselines), once again in order to avoid introducing confounding factors. Because hindsight experience replay is directly applicable to state-of-the-art techniques, this would lead to an unbalanced comparison. Finally, it should be clear that our work can probably benefit from being extended to state-of-the-art policy gradient approaches. However, once again, such extensions are likely to introduce confounding factors that we would prefer to avoid in our fundamental work.\n\n(Part 1/2)", "Thank you very much for the time that you have dedicated to evaluate our work. We are glad that you believe that our paper is well written and of great interest, that our experiments are extensive and insightful, and that our contribution has the potential to become a reference and starting point for future work. \n\nYou are absolutely correct in noting that the fact that only active goals need to be considered is crucial to the feasibility of the proposed estimators. This result is very specific to this application of importance sampling, which also leads to other remarkable properties (as discussed in Section 5). However, we disagree with the claim that the technical novelty of our paper is not high. Firstly, our technical approach to hindsight is radically different from previous work. Secondly, the exact formulation of the hindsight policy gradient, its relationships with value functions, and the feasibility of the corresponding estimators are only clear in hindsight. Finally, although apparently simple by analogy, several results require proofs that are elementary but involved (for an example, see Theorem 4.2).\n\nIt is indeed very interesting that including a value function baseline seems more harmful than helpful according to our experiments. After careful investigation, we have concluded that the value function baseline is often poorly fit by the time that the policy exhibits desirable behavior, which is probably due to the fact that the value function baseline is not trained using hindsight. This is particularly evident in the bit flipping environments, the most extreme examples of sparse-reward environments that we consider, where both HPG+B and GCPG+B exhibit unstable behavior (although GCPG+B only ever reaches a good performance for k=8 and a batch of size 16). Although our preliminary experiments in using hindsight to fit a value function baseline have been successful, this may be accomplished in several ways, and requires a careful study of its own.\n\nWe believe that the poor performance of every technique in the four rooms environment could be addressed by well-known policy gradient tricks (e.g., entropy bonuses, reward scaling, learning rate annealing, simple statistical baselines), which we have avoided in order to reduce confounding factors in our experiments. The stark state information and the layout that offers a single door between adjacent rooms make this environment surprisingly difficult, but it is probably within reach of agents trained with either HPG or GCPG. Indeed, plotting the average return of the optimal policy would be helpful for inspecting results. 
We can easily include that in the final version of the paper.\n\nWe completely understand your interest in a direct comparison with hindsight experience replay, although we are glad that you agree that our contribution would not be diminished if hindsight experience replay were more sample efficient at this stage. Because this comparison was a common request among reviewers, we are currently working on it. We will provide an updated version of the paper including the corresponding results before the end of the rebuttal period (ideally by 21/11). \n\nNonetheless, we would like to briefly explain why we did not include such a comparison in the current version of the paper. Firstly, hindsight experience replay is an approach that can be applied to any reinforcement learning technique that relies on experience replay. Besides the choices required to implement hindsight experience replay itself (such as the goal sampling strategy and number of hindsight transitions per observed transition), each of these techniques potentially has several important hyperparameters. Instead of comparing HPG to one of these techniques, we preferred to focus on a rigorous comparison with GCPG, its most natural counterpart. The similarities between both methods allow for a highly systematic comparison that minimizes confounding factors. Secondly, note that we have not used tricks that are known to increase the performance of policy gradient methods, once again in order to avoid introducing confounding factors. Because hindsight experience replay is directly applicable to state-of-the-art techniques, this would lead to an unbalanced comparison. Finally, it should be clear that our work can probably benefit from being extended to state-of-the-art policy gradient approaches. However, once again, such extensions are likely to introduce confounding factors that we would prefer to avoid in our fundamental work.\n\nWe hope that these clarifications and the additional experimental content to be released before the rebuttal deadline will allow you to reconsider the rating given to our submission.", "This paper extends the work of Hindsight Experience Replay to (goal-conditioned) policy gradient methods. Hindsight, which allows one to learn policies conditioned on some goal g, from off-policy experience generated by following goal g’, is cast in the framework of importance sampling. The authors show how one can simply rewrite the goal-conditioned policy gradient by first sampling a trajectory, conditioned on some goal $g’$ and then computing the closed form gradient in expectation over all goals. This gradient is unbiased if the rewards are off-policy corrected along the generated trajectories. While this naive formulation is found to be unstable , the authors propose a simple normalized importance sampling formulation which appears to work well in practice. To further reduce variance and computational costs, the authors also propose goal subsampling mechanisms, which sample goals which are likely along the generated trajectories. The method is evaluated on the same bit-flipping environment as [1], and a variety of discrete environments (grid worlds, Ms. Pac-Man, simulated robot arm) where the method appears highly effective. Unfortunately for reasons which remain unclear, hindsight policy gradients with value baselines appear unstable.\n\nQuality:\nThis paper scores high wrt. quality. 
The theoretical contributions of the method are solid, the experiments are well designed and highlight the efficacy of the method, as well as areas for improvement. In particular, I commend the authors for the rigorous analysis (bootstrapped error estimates, separate seeds for hyper-parameters and reporting test error, etc.), including the additional results found in the appendix (sensitivity and ablative analyses). That being said, the paper could benefit from experiments in the continuous control domain and a direct head-to-head comparison with HER. While I do not anticipate the proposed method to outperform HER in terms of data-efficiency (due to the use of replay) the comparison would still be informative to the reader.\n\nClarity:\nThe paper is well written and easy to follow. If anything, the authors could have abridged sections 2 and 3 in favor of other material found in the Appendix, as goal-conditioned policy gradients (and variants) are straightforward generalizations of standard policy gradient methods.\n\nOriginality:\nNovelty is somewhat low for the paper as Hindsight Experience Replay already presented a very similar off-goal-correction mechanism for actor-critic methods (DDPG). The method is also very similar to [2], the connection to which should also be discussed.\n\nSignificance:\nDespite the low novelty, I do believe there is value in framing “hindsight” as importance sampling in goal-conditioned policy gradients. This combined with the clear presentation and thorough analysis in my opinion warrants publication and will certainly prove useful to the community. Significance could be improved further should the paper feature a more prominent discussion / comparison to HER, along with a fix for the instabilities which occur when using their method in conjunction with a value baseline.\n\n[1] Hindsight Experience Replay. Marcin Andrychowicz et al.\n[2] Data-Efficient Hierarchical Reinforcement Learning. Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine.\n\nDetailed Comments:\n* Section 2: “this formulation allows the probability of a state transition given an action to change across time-steps within an episode”. I do not understand this statement, as $p(s_{t+1} \\mid s_t, a_t)$ is the same transition distribution found in standard MDPs, and appears stationary wrt. time.\n* Theorems 3.1 - 3.1 (and equations). A bit lengthy and superfluous. Consider condensing the material.\n* Section 5: I found the change in notation (from lower to upper-case) somewhat jarring. Also, the notation used for empirical samples from the mini-batch is confusing. If $A^{(i)}_t$ is meant to be the action at time-step $t$ for the $i$-th trajectory in the minibatch, then what does $G^{(i)} = g$ mean? I realize this means evaluating the probability by setting the goal state to $g$, but this is confusing especially when other probabilities are evaluated conditioned on $G^{(i)}$ directly.\n* Section 6: “Which would often require the agent to act after the end of an episode”. Do you mean that most episodes have length T’ < T, and as such we would “waste time” generating longer trajectories?\n* RE: Baseline instabilities. Plotting the loss function for the value function could shed light on the instability.\n", "Thank you for your interest. Note that Equation 6 involves rewards computed for every possible goal, not only for an original goal. Therefore, as long as there is an (alternative) goal for which a trajectory obtains a non-zero reward, the corresponding term will probably be non-zero. 
Section 5 details how the corresponding estimator can be computed efficiently.", "If I understand this correctly, if the rewards are sparse, i.e., if the goal is reached then the reward is 1 and otherwise 0, wouldn't your gradient be 0 most of the time in Equation 6? If that is the case, what is the need for the importance sampling then?" ]
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2019_Bkg2viA5FQ", "iclr_2019_Bkg2viA5FQ", "iclr_2019_Bkg2viA5FQ", "HJeizrR0pQ", "BJlHmDRC6Q", "B1l7mFA06m", "HJxEna5ETQ", "Hyx3e4Pc3X", "BJxW_kWc3Q", "iclr_2019_Bkg2viA5FQ", "BkxBfwLbcQ", "iclr_2019_Bkg2viA5FQ" ]
iclr_2019_Bkg3g2R9FX
Adaptive Gradient Methods with Dynamic Bound of Learning Rate
Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound .
accepted-poster-papers
The paper was found to be well-written and to convey an interesting idea. However, the AC notices a large body of clarifications that were provided to the reviewers (regarding the theory, experiments, and setting in general) that need to be well addressed in the paper.
train
[ "Bke-32cM1N", "ryl12OxuAX", "BylLNcbdAX", "S1eizWWuCQ", "BJgABOx_C7", "S1lEtdgdAQ", "SJef8P3thQ", "BkeFPweF3m", "rkg0-SM-3m", "rkg7oagJn7", "Skx8lvomim", "H1glD5ZAcm", "Byg-RFWA5X", "Hklkd3A25X", "SklPiVVncm", "r1lJqtvjcm", "HkxJNxD55X", "S1goTjL5qX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "author", "public", "author", "public" ]
[ "I thank the reviewers for their response, and I keep my score.", "\n[About details and extra experiments you asked for]\n\n>>> Am I correct in saying that with t=100 (i.e., the 100th iteration), the \\eta s constrain the learning rates to be in a tight bound around 0.1? If beta=0.9, then \\eta_l(1) = 0.1 - 0.1 / (0.1*100+1) = 0.091. After t=1000 iterations, \\eta_l becomes 0.099. Again, are the good results coincidental with the fact that SGD with learning rate 0.1 works well for this setup? In the scheme of the 200 epochs of training (equaling almost 100-150k iterations), if \\eta s are almost 0.099 / 0.10099, for over 99% of the training, we're only doing SGD with learning rate 0.1.\n\nActually, we used \\beta_1=0.99 in our experiments. Therefore \\eta_l comes to 0.091 at t=1000 rather than t=100, and it is about 10 epochs. Also, as mentioned above, we test larger \\beta and the performance are similar (see Figure 5).\n\n>>> Along the same lines, what learning rates on the grid were chosen for each of the problems? \n\nThe settings are:\nSGD(M): \\alpha=0.1 for MNIST/CIFAR10, \\alpha=10 for PTB; momentum=0.9\nAdam, AMSBound: \\alpha=0.001, \\beta_1=0.99, \\beta_2=0.999\nAdaGrad: \\alpha=0.01\n\nWe only provided the grid search sets of hyperparameters due to the page limit before. \nWe will soon add a section in the appendix to illustrate the specific settings of hyperparameters for all the optimizers.\n\n>>> Does the setup still work if SGD needs a small step size and we still have \\eta converge to 1? A VGG-11 without batch normalization typically needs a smaller learning rate than usual; could you try the algorithms on that?\n\nYes. We add an experiment according to your suggestion (VGG-11 without batch normalization on CIFAR-10, using AdaBound/AMSBound, SGD, and other baselines). The best step size for SGD is 0.01 and AdaBound with \\alpha*=1 still have similar performance with the best-tuned SGD (see this anonymous link to the results: https://github.com/AgentAnonymous/X/blob/master/vgg_test.pdf ) . \n\n>>> Can the authors plot the evolution of learning rate of the algorithm over time? You could pick the min/median/max of the learning rates and plot them against epochs in the same way as accuracy. This would be a good meta-result to show how gradual the transition from Adam to SGD is.\n\nWe conduct an experiment as you suggested, the results are placed in Appendix H.\nFor short, we can see that the learning rates increase rapidly in the early stage of training, then after a few epochs its max/median values gradually decrease over time, and finally converge to the final step size. The increasing at the beginning is due to the property of the exponential moving average of \\phi_t of Adam, while the gradually decreasing indicates the transition from Adam to SGD.\n", "\nThanks for your comments.\n\n>>> There is not much novelty in Theorems 1,2,3 since similar results already appeared in Reddi et al.\n\nWe argue that Reddi et al. (2018) did not prove 「for all the initial learning rates」, Adam has bad behavior, and this condition is important for showing the necessity of our idea of restricting the actual learning rates. That’s why we complete the proof with this weaker assumption. We would not claim the theoretical analysis as our main contribution in this paper, but it is a necessary part that serves for our actual main contribution「proposing the idea of an optimization algorithm that can gradually transform from adaptive methods to SGD(M), combining both of their advantages」. 
All the other parts in the paper, including the preliminary empirical study, theoretical proofs, experiments, and further analysis, serve this main contribution.\n\n>>> Also, the theoretical part does not demonstrate the benefit of the clipping idea. Concretely, the regret bounds seem to be similar to the bounds of AMSBound. Ideally, I would like to see an analysis that discusses a situation where AdaGrad/AMSBound fail or perform really bad, yet the clipped versions do well.\n\nFirst, the names of our newly proposed methods are AdaBound and AMSBound. I guess you mean AMSGrad in your suggestion?\nActually, it is easy to use a setting similar to that of Wilson et al. (2017) to show that AdaGrad/Adam achieve really bad performance while our methods do well. But I don’t think it is very meaningful, since it is only a bunch of examples. As also mentioned by review 2, the average performance of the algorithms is what really matters. But due to its difficulty, most similar works on optimizers tend to use experiments to support their arguments and lack theoretical proofs for this part.\n", "\nThanks for your comments!\n\nWe deeply agree that the average performance of different algorithms is very important in practice. But as also mentioned in the reply to anonymous comments before (on 11.12), our understanding of the generalization behavior of deep neural networks is still very shallow by now. It is a big challenge to investigate it from a theoretical perspective. Actually, the theoretical analysis in most recent related work is still under strong or particular assumptions. I believe that if one could produce a convincing theoretical proof without strong assumptions, that work would be worth an individual publication.\n\nWe are conducting more experiments on larger datasets such as CIFAR-100 and on more tasks in other fields, and the results are very positive too. We will add the results and analysis in the final revision if there is space left in the paper.\n\nWe want to argue that the use of diag() is necessary since \\phi_t is a matrix rather than a vector. Also, $g$ is not a vector but $g_t$ is, and $g_{t,i}$ is a coordinate. \nIt is true that the expression $x_i$ might be ambiguous without context: 1) $x$ is a vector and it means the i-th coordinate of $x$, or 2) $x$ is not a vector and $x_i$ is a vector at time $i$. But since $x$ cannot both be and not be a vector at the same time, it is clear in a specific context. This kind of notation is also used in many other works. We re-checked the math expressions in our paper and think they are OK.\n", "\nThanks for your questions and suggestions. We separate the questions into 3 parts (bound functions, contributions, and extra details & experiments) and post the responses below. We hope they can address your questions.\n\n[About bound functions]\n\nWe want to clarify the following facts about the bound functions:\n1. The convergence speed (indicated by \\beta in the current settings) and the convergence target (indicated by \\alpha*) exert minor impacts on the performance of AdaBound.\n2. In other words, AdaBound is not sensitive to the form of the bound functions, and therefore we don’t have to waste much time fine-tuning the hyperparameters, especially compared with SGD(M).\n3. Moreover, even a not carefully fine-tuned AdaBound can beat SGD(M) with the optimal step size.\n\nWe conducted the empirical study in Appendix G in order to illustrate the above points. But as you have raised a few questions about the bound function, it seems that our original experiments are not enough. 
We expand the experiments in an attempt to give more evidence to support the above statements and hope this can answer some questions you mentioned.\n\n>>> I'm somewhat confused by the formulation of \\eta_u and \\eta_l. The way it is set up (end of Section 4), the final learning rate for the algorithm converges to 0.1 as t goes to infinity. In the Appendix, the authors show results also with final convergence to 1. Are the results coincidental with the fact that SGD works well with those learning rates? It is a bit odd that we indirectly encode the final learning rate of the algorithm into the \\eta s.\n\n(Note: SGD and SGDM have similar performance in our experiments. Here we directly use SGD to generally indicate SGD or SGDM.)\nIt is not a coincidence. SGD is very sensitive to the step size. \\alpha=0.1 is the best setting, and other settings have large performance gaps compared with the optimal one (see Figure 6a). But AdaBound has stable performance across different final step sizes (see Figure 6b). Moreover, for all the step sizes, AdaBound outperforms SGD (see Figure 7).\n\n>>> Can you try experimenting with/suggesting trajectories for \\eta which converge to the SGD stepsize more slowly?\n\nWe further tested \\beta for {1-1/10, 1-1/50, 1-1/100, 1-1/500, 1-1/1000}, which translates to slower convergence speeds of the bound functions. Their performances are really close (see Figure 5).\n\n>>> Similarly, can you suggest ways to automate the choice for the \\eta^\\star? It seems that the 0.1 in the numerator is an additional hyperparameter that still might need tuning?\n\nIn the current form of the bound functions, yes, it is an additional hyperparameter. But as illustrated by the experiments, AdaBound is very robust and not sensitive to hyperparameters (we can randomly use \\alpha from 0.001 to 1 and still get stable and good performance). I think in practice we can somehow treat it as “no need of tuning”, and 0.1 can be a default setting.\n", "\n[About contributions]\n\n>>> Is it correct that a careful theoretical analysis of this framework is what stands as the authors' major contribution?\n\nWe want to clarify that our main contribution is: “proposing the idea of an optimization algorithm that can gradually transform from adaptive methods to SGD(M), combining both of their advantages”.\nAll the other parts in the paper, including the preliminary empirical study, theoretical proofs, experiments, and further analysis, serve the main contribution. Since Wilson et al. (2017), many researchers have been devoted to finding a way to train as fast as Adam and as good as SGD. Many of them failed, and some of them present overly complicated algorithms.\nThe purpose of this paper is to tell other researchers that such an interesting, simple and direct approach can achieve surprisingly good and robust performance. Note that “bound functions on learning rates” is only one particular way to conduct a “gradual transformation from Adam to SGD”. There might be other ways that can work too, such as a well-designed decay. We think that publicizing it now, with several baseline experiments and a basic theoretical proof, so as to stimulate other people's research, can benefit the research community.\n\n>>> The core observation of extreme learning rates and the proposal of clipping the updates is not novel; \n\nWe are not the first to propose clipping of learning rates. But we would argue that no one has given a clear observation of the existence of extreme learning rates before. Wilson et al. 
(2017) first mentioned that extreme learning rates may cause bad performance, but it is just an assumption. Keskar & Socher (2017)’s preliminary experiment can be seen as indirect evidence. As far as we know, we are the first to directly show that both extremely large and small learning rates exist in the final stage of training.\n\n>>> Keskar and Socher (which the authors cite for other claims) motivates their setup with the same idea (Section 2 of their paper). I feel that the authors should clarify what they are proposing as novel. \n\nWe will clarify that the idea of learning rate clipping has been proposed by Keskar & Socher (2017). \nEven if they had not mentioned the idea of clipping learning rates, we wouldn’t claim it as our new contribution. Actually, clipping is really common in practice and in many frameworks’ APIs. The difference is that we usually use it on gradients. We have also mentioned the above facts in Section 4.\nAlso, we want to clarify again that our main contribution is the idea of “gradual transformation from Adam to SGD”, and clipping is just one particular way of implementing it.\nIt should also be mentioned that this part of Keskar & Socher (2017) is preliminary. They did not give a thorough discussion of clipping or extreme learning rates.\n", "This paper presents new variants of ADAM and AMSGrad that bound the gradients above and below to avoid potential negative effects on generalization of excessively large and small gradients; and the paper demonstrates the effectiveness on a few commonly used machine learning test cases. The paper also presents detailed proofs that there exists a convex optimization problem for which the ADAM regret does not converge to zero.\n\nThis paper is very well written and easy to read. For that I thank the authors for their hard work. I also believe that their approach to bounding is well structured in that it converges to SGD in the infinite limit and allows the algorithm to get the best of both worlds - faster convergence and better generalization. The authors' experimental results support the value of their proposed algorithms. In sum, this is an important result that I believe will be of interest to a wide audience at ICLR.\n\nThe proofs in the paper, although impressive, are not very compelling for the point that the authors want to get across. The fact that such cases of poor performance can exist says nothing about the average performance of the algorithms, which in practice is what really matters.\n\nThe paper could be improved by including more and larger data sets. For example, the authors ran on CIFAR-10. They could have done CIFAR-100, for example, to get more believable results.\n\nThe authors add a useful section on notation, but go on to abuse it a bit. This could be improved. Specifically, they use an \"i\" subscript to indicate the i-th coordinate of a vector and then in Table 1 sum over t using i as a subscript. Also, superscripts on vectors are said to be element-wise powers. If so, why is a diag() operation required? Either make the outer product explicit, or get rid of the diag().", "The authors introduce AdaBound, a method that starts off as Adam but eventually transitions to SGD. The motivation is to benefit from the rapid training process of Adam in the beginning and the improved convergence of SGD at the end. The authors do so by clipping the weight updates of Adam in a dynamic way. They show numerical results and theoretical guarantees. 
The numerical results are presented on CIFAR-10 and PTB while the theoretical results are shown under assumptions similar to AMSGrad (& using similar proof strategies). As it stands, I have some foundational concerns about the paper and believe that it needs significant improvement before it can be published. I request the authors to please let me know if I misunderstood any aspect of the algorithm; I will adjust my rating promptly. I detail my key criticisms below:\n\n- I'm somewhat confused by the formulation of \\eta_u and \\eta_l. The way it is set up (end of Section 4), the final learning rate for the algorithm converges to 0.1 as t goes to infinity. In the Appendix, the authors show results also with final convergence to 1. Are the results coincidental with the fact that SGD works well with those learning rates? It is a bit odd that we indirectly encode the final learning rate of the algorithm into the \\eta s. \n\n- Am I correct in saying that with t=100 (i.e., the 100th iteration), the \\eta s constrain the learning rates to be in a tight bound around 0.1? If beta=0.9, then \\eta_l(100) = 0.1 - 0.1 / (0.1*100+1) = 0.091. After t=1000 iterations, \\eta_l becomes 0.099. Again, are the good results coincidental with the fact that SGD with learning rate 0.1 works well for this setup? In the scheme of the 200 epochs of training (equaling almost 100-150k iterations), if \\eta s are almost 0.099 / 0.10099, for over 99% of the training, we're only doing SGD with learning rate 0.1. \n\n- Along the same lines, what learning rates on the grid were chosen for each of the problems? Does the setup still work if SGD needs a small step size and we still have \\eta converge to 1? A VGG-11 without batch normalization typically needs a smaller learning rate than usual; could you try the algorithms on that? \n\n- Can the authors plot the evolution of the learning rate of the algorithm over time? You could pick the min/median/max of the learning rates and plot them against epochs in the same way as accuracy. This would be a good meta-result to show how gradual the transition from Adam to SGD is. \n \n- The core observation of extreme learning rates and the proposal of clipping the updates is not novel; Keskar and Socher (which the authors cite for other claims) motivates their setup with the same idea (Section 2 of their paper). I feel that the authors should clarify what they are proposing as novel. Is it correct that a careful theoretical analysis of this framework is what stands as the authors' major contribution?\n\n- Can you try experimenting with/suggesting trajectories for \\eta which converge to the SGD stepsize more slowly? \n\n- Similarly, can you suggest ways to automate the choice for the \\eta^\\star? It seems that the 0.1 in the numerator is an additional hyperparameter that still might need tuning? \n", "*Summary:\nThe paper explores variants of popular adaptive optimization methods.\nThe idea is to clip the magnitude of the gradients from above and below in order to prevent too aggressive/conservative updates.\nThe authors provide a regret bound for this algorithm in the online convex setting and perform several illustrative experiments.\n\n\n*Significance:\n-There is not much novelty in Theorems 1,2,3 since similar results already appeared in Reddi et al.\n\n-Also, the theoretical part does not demonstrate the benefit of the clipping idea. 
Concretely, the regret bounds seem to be similar to the bounds of AMSBound.\nIdeally, I would like to see an analysis that discusses a situation where AdaGrad/AMSBound fail or perform really badly, yet the clipped versions do well.\n\n-The experimental part on the other hand is impressive, and the results illustrate the usefulness of the clipping idea.\n\n*Clarity:\nThe idea and motivation are very clear and so are the experiments.\n\n\n*Presentation:\nThe presentation is mostly good.\n\nSummary of review:\nThe paper suggests a simple idea to avoid extreme behaviour of the learning rate in standard adaptive methods. The theory is not so satisfying, since it does not illustrate the benefit of the method over standard adaptive methods. The experiments are more thorough and illustrate the applicability of the method.\n\n", "Sorry for the late response. It's been a bit busy in the past few days. Thanks for your comments; we present our responses below.\n\n1. We didn't pay much attention to the smoothness of the learning curve before, and the analysis mainly focuses on training speed w.r.t. the advantage of adaptive methods. But, actually, we also mentioned that the learning curve of our framework is smoother than that of SGD in the experiment on PTB (in para. 1, section 5.4, page 8). We would agree with your opinion that the smoothness is also important. We are pleased to add some more discussion on this point in the next revision.\n\n2. That's interesting. It is a good engineering question what particular bound function is best (simplicity, efficiency, effectiveness, etc.) in production. As for this paper, it is more about showing the potential of a novel framework and stimulating others' research. It would be a direction of future work to investigate whether there is a simpler bound function that guarantees the performance.\n\n3. I am afraid that the method in Keskar et al. (2017) seems not directly applicable to our algorithm. Introducing automation is meaningful. But it is not a very easy task, IMO. We may think about this point carefully in future work.", "Dear authors,\n\nInteresting work! My coauthors and I have suffered from the poor generalization of Adam in many of our production systems for a long time. We have to use SGD for better performance, but I do HATE fine-tuning the hyperparameters of SGD again and again!\n\nI noticed that there have been many newly proposed optimizers claiming they are better than Adam. I once tried some of them and was disappointed to find that they bring no improvement, only more hyperparameters! I suspect that the more and more complicated design of optimizers is not the right way, and that there must be a simple way to build an optimizer as fast as Adam while as good as SGD.\n\nThat's why this paper really attracts me. The idea of gradually transforming Adam to SGD is really simple but looks intuitive and reasonable. It makes sense to me. The algorithm is also well-presented. I am surprised that you also provide convincing proofs about the algorithm --- I had thought you would just construct some empirical studies w/o theoretical analysis.\n\nI have a few questions about the paper and personal thoughts on future work. I hope they will be useful to the authors. Feel free to leave them as is if they are not correct. :D\n\n- Besides rapid training, the smoothness of the learning curve is another advantage of adaptive methods. Personally, I think it might be more important. 
When trying to train new models, we often do not know in advance whether they can converge. A common approach is training a few epochs and making a preliminary decision on what to do next based on the trend of the learning curve in the early stage. Sharp fluctuation of the loss is common when using SGD, which makes it hard to estimate the trend of the learning curve quickly. Is your framework able to keep this strength of Adam? What is your take on this?\n\n- I tried AdaBound on CIFAR-10 by myself. It is interesting that I have used simpler bound functions (linear functions and a piecewise constant function) and still got very good performance. As you also mentioned that the convergence speed of the bound functions is not very important, I suggest you choose simpler ones (Occam's Razor).\n\n- I am thinking about whether we may use an approach like that in Keskar et al. (2017) to determine the final step size automatically. I haven't thought through carefully whether it is possible. What is your opinion?\n\nThanks in advance for your time, and I hope this paper gets accepted!", ">>> You mention \"Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD\". Given that the paper only contains a few empirical results (on some important and common tasks) and no theoretical proof in that respect, I find it to be a misleading statement.\n\nHonestly, maybe I don't exactly get your point. We said \"experimental results show\", not \"theoretical proofs show\" or something like that. If you had said \"your experiments are not enough\", I could understand, and we may add some additional experiments on other tasks if reasonable. But we don't think \"no theoretical proof in that respect\" is a valid point for criticizing the statement as misleading or overclaiming.\n\nIn addition, it should be mentioned that our understanding of the generalization behavior of deep neural networks is still very shallow by now. It is a big challenge to investigate it from a theoretical perspective. A summary of recent achievements can be found here (http://ruder.io/deep-learning-optimization-2017/), and we can see that their theoretical analysis is still under strong or particular assumptions. That's why most similar works on optimizers tend to use experiments to support their arguments.\n\nAs for the richness of our experiments, the tasks in our paper include several popular ones in the CV and NLP areas; the models include a simple perceptron, deep CNN models and RNN models. We give a brief comparison to some recent works for a fair judgment. \n\n- [1] does not propose novel algorithms or frameworks as we do. Their main contribution is empirically showing that the minima found by adaptive learning rate methods generally perform worse compared to those found by SGD, and providing some possible causes. The richness of our experiments is similar to theirs. Personally, I think the amount of experiments in this work is at an average level among similar works, as far as I know.\n- The experiments in [2] are very limited, as the authors also state that the experiments are \"preliminary\".\n- [3] conducts more experiments than other similar works. But there is no theoretical analysis, which is important in such kinds of works.\n- [4] (posted on arXiv and also a submission to ICLR19) only conducts experiments on image classification tasks. 
As it is known that the gap between Adam and SGD on this task is notable, while on some NLP tasks like machine translation well-tuned Adam may even outperform SGD ([6]), it is not enough to just test on this single task.\n- The experiments in [5] (posted on arXiv and also a submission to ICLR19) are even more limited than those of [2], only a toy model on MNIST.\n\nTherefore, we argue that our experiments have already shown the potential of our proposed framework. Future papers by other researchers are a more appropriate home for additional experiments on other tasks. We think publicizing now with the set of baselines that we have already included, so as to stimulate others' research, is more effective than us delaying publication and presentation of this work.\n\n-----\n[1] Wilson, A.C., Roelofs, R., Stern, M., Srebro, N., & Recht, B. (2017). The Marginal Value of Adaptive Gradient Methods in Machine Learning. NIPS.\n[2] Sashank J.R., Satyen K., & Sanjiv K. (2018). On the Convergence of Adam and Beyond. ICLR.\n[3] Keskar, N.S., & Socher, R. (2017). Improving Generalization Performance by Switching from Adam to SGD. CoRR, abs/1712.07628.\n[4] Chen, J., & Gu, Q. (2018). Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks. CoRR, abs/1806.06763.\n[5] Chen, X., Liu, S., Sun, R., & Hong, M. (2018). On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization. CoRR, abs/1808.02941.\n[6] Denkowski, M.J., & Neubig, G. (2017). Stronger Baselines for Trustable Results in Neural Machine Translation. NMT@ACL.", "Thanks for your interest.\n\nI respond point by point below.\n\n>>> \"they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates.\" As far as I am aware the issue with the convergence analysis of exponentiated squared gradient averaging algorithms like ADAM and RMSPROP does not extend to ADAGRAD. So, ADAGRAD is indeed guaranteed to converge given the right assumptions.\n\nA more precise expression of that sentence should be \"Adam, RMSprop, AdaGrad, and other adaptive methods are observed to generalize poorly ..., and some of them (i.e. Adam) even fail to converge ...\", which is summarized from [1][2][3] and Section 3 in our paper. We didn't notice that the original sentence might be misunderstood. We will use a more precise way to summarize the phenomenon. However, although AdaGrad is theoretically guaranteed to converge, it is well-accepted that in practice the convergence is too slow at the end of training due to its accumulation of second order momentum. As we usually use a limited number of epochs or limited time in a training job, it may fail to achieve the \"theoretical convergence\". Therefore, maybe we can say \"may be hard to converge\" to summarize. :D\n\n>>> In the rest of the paper, the experiments and arguments mainly consist of ADAM and not adaptive methods in general. So I think the distinction between adaptive methods in general and adaptive methods like ADAM and RMSPROP with respect to convergence guarantees should be made clearer.\n\nThe main purpose of this paper is to introduce a novel framework that can combine the advantages of adaptive methods and SGD(M). The framework applies to Adam as well as AdaGrad and other adaptive methods. As mentioned above, the weaknesses of adaptive methods are held in common, and combining them with SGD can help overcome the problems. 
Therefore, we don't think it is necessary to distinguish particular adaptive methods everywhere in the paper. We run experiments mainly on Adam because of its popularity. According to your comments, we would consider adding more experiments on other adaptive methods like AdaGrad.\n\n>>> I am not sure I understand but could you please clarify how AMSGRAD helps in the generalization of ADAM. From my understanding, it only solved the convergence issue by ensuring that the problematic quantity in the proof is PSD.\n\nI guess we understand \"generalization\" differently. If you regard \"generalization error\", in a narrow sense, as how large the gap between training and testing error is, then I agree that AMSGrad only solves the convergence issue. But broadly speaking, \"generalization error\" is a measure of how accurate a method is on unseen data (see https://en.wikipedia.org/wiki/Generalization_error). It depends not only on handling overfitting but also on the convergence results on the training data. Therefore, attempts at solving the convergence issue can also help the generalization in a broad sense.\n\n>>> The experiments in Wilson et al. (2017) give proper evidence of the gap between SGD and Adaptive methods in overparameterized settings. To show that this method overcomes it, I think you need a stronger argument than what you have shown.\n\nWe would first argue that the experiments in Wilson et al. (2017), including a few common tasks in CV and NLP, are not much different from ours and those in other recent similar works. While their artificial example before the experiment section does use an overparameterized setting, they never claim it is the main cause of poor generalization. It is a necessary but not sufficient condition. Indeed, the poor generalization is mainly caused by the property of the carefully constructed particular task. In other words, it is highly problem-dependent. The actual statement of Wilson et al. (2017) is \n\n** When a problem has multiple global minima, different algorithms can find entirely different solutions when initialized from the same point. In addition, we construct an example where adaptive gradient methods find a solution which has worse out-of-sample error than SGD. **\n\nTherefore, no one can affirm that there are no examples where adaptive methods find a better solution than SGD. The above are just examples, and there are infinitely many examples. We don't think it is meaningful to show our framework can perform well on that particular one, even though it is not hard.", "Hi,\n\nI have three main questions for you. It would be great if you could help clarify them.\n\n1. You mention the following about ADAGRAD along with ADAM and RMSPROP - \"they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates.\". As far as I am aware the issue with the convergence analysis of exponentiated squared gradient averaging algorithms like ADAM and RMSPROP does not extend to ADAGRAD. So, ADAGRAD is indeed guaranteed to converge given the right assumptions. In the rest of the paper, the experiments and arguments mainly consist of ADAM and not adaptive methods in general. So I think the distinction between adaptive methods in general and adaptive methods like ADAM and RMSPROP with respect to convergence guarantees should be made clearer.\n\n2. I am not sure I understand but could you please clarify how AMSGRAD helps in the generalization of ADAM. 
From my understanding, it only solved the convergence issue by ensuring that the problematic quantity in the proof is PSD.\n\n3. You mention \"Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD\". Given that the paper only contains a few empirical results (on some important and common tasks) and no theoretical proof in that respect, I find it to be a misleading statement. The experiments in Wilson et al. (2017) give proper evidence of the gap between SGD and Adaptive methods in overparameterized settings. To show that this method overcomes it, I think you need a stronger argument than what you have shown.\n", "Hi Hyesst,\n\nThanks for your interest. \n\n1. You are absolutely right! Thanks for your correction! It should be $\\beta_1$ in the upper bound function at the end of Section 4.\n\n2. Yes, we used DenseNet-121. We will add this information in the next revision.\n\nThank you very much for your comments and suggestions.", "Hi, thanks for the nice paper. The way of combining the adaptive methods and SGD proposed in the paper is really interesting, but I found some small typos or mistakes. They are all minor and do not much affect the understanding of the paper, but I think a clarification on them would be fine.\n\nFirst, the upper bound function at the end of Section 4 and Appendix G does not converge to 0.1. I believe it is a typo: there is a redundant \"1\" in the denominator and the correct expression should be $0.1 + \\frac{0.1}{(1-\\beta)t}$. Also, I guess you missed the subscripts of $\\beta$ in the functions in Section 4. Maybe it should be $\\beta_1$ or $\\beta_2$, I guess.\n\nSecond, how many layers do you use in DenseNet? You provide the source code you used for DenseNet, and it is DenseNet-121 in the code. However, I suggest mentioning the number of layers directly in the paper. It is an important hyperparameter of a deep CNN network.", "Thank you for your interest!\n\nHonestly, the code is a little bit messy currently. We are cleaning up the code for release these days.\nIf you can't wait to have a try, it is easy to implement the algorithm by making some minor changes to the optimizers in PyTorch. Take AdaBound/AMSBound as an example: we just modify the source code of Adam (https://github.com/pytorch/pytorch/blob/master/torch/optim/adam.py). Specifically, we use the torch.clamp(x, l, r) function, which can constrain x between l and r element-wise, to perform the clip operation mentioned in the paper. You can also make similar changes to other optimizers such as AdaDelta and RMSprop.\n\nThe code for the experiments in the paper, as mentioned in the footnote on page 6, is obtained from https://github.com/kuangliu/pytorch-cifar and https://github.com/salesforce/awd-lstm-lm.\n\nWe would be happy if you share your results on your own research using our methods.", "Hi! I am interested in the algorithm you proposed and want to try it in my research. Could you provide an implementation of the algorithm? Or, if it is not convenient in the review period, could you give a brief instruction on how to implement it? \nGood luck and I hope your paper gets accepted. :-)" ]
[ -1, -1, -1, -1, -1, -1, 7, 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BylLNcbdAX", "S1lEtdgdAQ", "rkg0-SM-3m", "SJef8P3thQ", "BkeFPweF3m", "BJgABOx_C7", "iclr_2019_Bkg3g2R9FX", "iclr_2019_Bkg3g2R9FX", "iclr_2019_Bkg3g2R9FX", "Skx8lvomim", "iclr_2019_Bkg3g2R9FX", "Byg-RFWA5X", "Hklkd3A25X", "iclr_2019_Bkg3g2R9FX", "r1lJqtvjcm", "iclr_2019_Bkg3g2R9FX", "S1goTjL5qX", "iclr_2019_Bkg3g2R9FX" ]
iclr_2019_Bkg6RiCqY7
Decoupled Weight Decay Regularization
L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is \emph{not} the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it ``weight decay'' in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by \emph{decoupling} the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at \url{https://github.com/loshchil/AdamW-and-SGDW}
accepted-poster-papers
Evaluating this paper is somewhat awkward because it has already been through multiple reviewing cycles, and in the meantime, the trick has already become widely adopted and inspired interesting follow-up work. Much of the paper is devoted to reviewing this follow-up work. I think it's clearly time for this to be made part of the published literature, so I recommend acceptance. (And all reviewers are in agreement that the paper ought to be accepted.) The paper proposes, in the context of Adam, to apply literal weight decay in place of L2 regularization. An impressively thorough set of experiments are given to demonstrate the improved generalization performance, as well as a decoupling of the hyperparameters. Previous versions of the paper suffered from a lack of theoretical justification for the proposed method. Ordinarily, in such cases, one would worry that the improved results could be due to some sort of experimental confound. But AdamW has been validated by so many other groups on a range of domains that the improvement is well established. And other researchers have offered possible explanations for the improvement.
train
[ "Bkx6qDs50m", "rJxk4OZ5AQ", "HJlCOfb90X", "HylQ0bbqR7", "rJl_LZZcA7", "B1xxyZZqCm", "rkeDkABcnm", "rJlYWZMYhm", "rkgKJ4AXhX" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your positive evaluation! We have fixed the typo and updated the paper. ", "1) This completely clears up my concern.\n\n2) It seems that we largely share the same opinion here. After some more reflection, I think that this proposition does bring some good to the paper by attempting to formalize the relationship between L2 regularization and weight decay in adaptive methods and agree with your comments in that regard.\n\n3) Thank you for the response and clarification. I am glad that this was brought into the discussion!\n\n\n----\n\nNotation comment: Thank you for making this change -- I think it is much clearer now. With a quick pass through, everything looks consistent.\n\nReadability: The new supp figures are easy to read. I guess there is no easy fix for Figure 4, but I still consider this a minor issue.\n\n\nNew minor issue:\n\nIn Supp figure 4 explanation in appendix: \"and to an even greater improvements of test error\". Improvements should not be plural.\n\n-------\n\nTo summarize, I think that the revised paper improves on many of the minor issues I had with the original paper. I am still a little unconvinced by the theoretical justification but I feel that the empirical results and some of the formal analysis makes up for this. I hope that this paper is accepted.", "We thank all reviewers for their positive evaluation and their valuable comments. We've uploaded a revision to address the issues raised and individually reply to the reviewers' concerns. We kindly ask you to update your rating if our replies clarified your concerns. \nThank you again for your reviews!\n", "Thank you for the positive and detailed review and your questions and comments. We reply to them below. \n\n ***\n\n“It would be great to see more experimental results on other tasks to have a better understanding of this problem.”\n\nPlease see Section 4.5 where we now mention additional applications of our decoupled weight decay.\n\n ***\n\n“But it is not clear to me why the hyperparameters w and \\alpha are decoupled in the proposed methods? For example, in Line 6 of Alg. 1, g_t is a function of w, and later in Line 8, g_t is coupled with \\alpha which naturally introduces a term w \\alpha into m_t. So both w and \\alpha are still coupled together in the proposed algorithm. If this is the case why the authors still call w and \\alpha decoupled?”\n\nWe believe this question is due to a point of confusion: please note that in Algorithm 1 we use two different colors to define the one difference between SGD and SGDW: the text with the purple background in Line 6 is only active for SGD, and the part with the green background in Line 9 is only active for SGDW. \nTo reply in detail, please note that we modified our notation as proposed by AnonReviewer3 to denote weights as \\theta (in line with the original Adam paper) and the weight decay hyperparameter as \\lambda. In order to avoid possible confusions, we first repeat your question with the new notation applied: \n\n“But it is not clear to me why the hyperparameters w and \\alpha are decoupled in the proposed methods? For example, in Line 6 of Alg. 1, g_t is a function of \\lambda, and later in Line 8, g_t is coupled with \\alpha which naturally introduces a term \\lambda \\alpha into m_t. So both \\lambda and \\alpha are still coupled together in the proposed algorithm. 
If this is the case why the authors still call \\lambda and \\alpha decoupled?”\n\nYou described the original SGD with L2 regularization when you mention “in Line 6 of Alg. 1, g_t is a function of \\lambda, and later in Line 8, g_t is coupled with \\alpha which naturally introduces a term \\lambda \\alpha into m_t”. It is true that this means that *in the original SGD* \\alpha and \\lambda are coupled. In contrast, SGDW does not have \\lambda in Line 6 but has it in Line 9, where \\lambda and \\alpha are decoupled. Please note that Algorithm 1 uses two different colors to define specifics of SGD and SGDW when regularization is applied. We hypothesize that this confusion might have been caused by a black&white printout (in which the original colors were not easy to tell apart) and to avoid this confusion in the future we changed the colors to also be clearly distinguishable when printed in black&white. \n\n\nThanks again for your review! We would kindly ask you to please consider updating your rating if this reply clarified your concerns, in particular about decoupling.\n", "Thank you for the positive and detailed review and your questions and comments. We reply to them below. \n\n ***\n\n“1) The authors emphasize the fact that L2 regularization and weight decay are not the same for different optimizers and claim that this goes against the belief of some practitioners. In my experience, most practitioners would not be surprised by this observation itself. The second observation made by the authors, that L2 regularization is not effective in Adam, is the more interesting (and perhaps surprising) observation.” \n\nIn our experience, practitioners often have the equivalence result for SGD in mind and other cases such as Adam tend to remain unnoticed until they actively think about them. In reply to your comment, we’ve toned down the wording from \n\n“Contrary to a belief which seems popular among some practitioners, the two techniques are not equivalent. For SGD, they can be made equivalent by a reparameterization of the weight decay factor based on the learning rate; this is not the case for Adam.”\n\nto\n\n“The two techniques can be made equivalent for SGD by a reparameterization of the weight decay factor based on the learning rate; however, as is often overlooked, this is not the case for Adam.” \n\n ***\n\n“2) I am not convinced of the importance of Proposition 3. In practice, adaptive methods will have a preconditioner which depends locally on the parameters. I understand the motivation from the previous paragraph but felt that the formal result added little.”\n\nWe agree that the relevance of Proposition 3 does not derive from its immediate applicability to practical adaptive gradient algorithms. Nevertheless, we still believe the proposition to be useful since (for this simple special case of a fixed preconditioner) it provides a precise equivalence between decoupled weight decay and standard L2 regularization with a scaled regularizer and thus provides an intuitive explanation for what decoupled weight decay does: parameters with a small preconditioner (which in practice would be caused by typically large gradients in this dimension) are regularized relatively more than they would be with L2 regularization; specifically, the regularization is proportional to the inverse of the root of the preconditioner. 
\nFor adaptive gradient algorithms with changing preconditioner matrices (which includes all popular cases) there is no 1-to-1 equivalence to a fixed L2 regularization, but we can still use the intuition from the proposition to think about what loss function is being optimized in each step.\n\n ***\n\n\"However, the last paragraph of Section 3 seems to utilize the Bernstein-von Mises theorem to promote the idea that with large datasets the prior distribution is unimportant (and is ignored). I am not sure that I follow this argument. For example, this claim seems to be completely independent of the optimization algorithm used and moreover Propositions 1,2, and 3 are independent of the data distribution. I suspect that this confusion is due to a misunderstanding on my part and would appreciate clarification from the authors.\"\n\nThank you for this comment! After a closer analysis of the argument by Aitchison described in the last paragraph of Section 3, we are also less convinced about it: due to the equivalence of L2 regularization and weight decay in SGD settings (our Proposition 1), one should not expect them to scale differently with the dataset size, as the argument would suggest. In order to avoid possible confusion, we decided to remove that paragraph entirely. Thank you for raising concerns about it. \n\n ***\n\n“I find the notation in the paper confusing in general. x is used to denote weights, and w to denote hyperparameters (e.g. w' for L2 regularization scale and w for weight decay scale). I don't see why it wouldn't be preferable to use the more standard W for weights, x for inputs, and lambda for hparams”\n\nThanks, this is a good point. In response, we’ve now modified our notation to denote weights as \\theta (in line with the original Adam paper) and the weight decay hyperparameter as \\lambda.\n\n ***\n\n“Figure 4: it is difficult to distinguish between Adam and SGDWR (especially left).”\n\nWe added SuppFigure 5 and SuppFigure 6 which should improve readability of Figure 4. \n\n\nThanks again for your review!\n", "Thank you for the positive detailed review and your questions and comments. We reply to them below. \n\n ***\n\n“It would also be interesting to see results on architectures other than ResNet. In section 4.5 the authors claim that the proposed idea was used in different settings by many authors. So, I would recommend to elaborate on this section in the final version of the paper.”\n\nPlease see Section 4.5 where we now mention additional applications of our decoupled weight decay and AdamW.\n\n ***\n\n“1. One of the main advantages of Adam is the speed of convergence. Does AdamW or AdamWR converge faster than the corresponding SGD method? Figure 4 is not quite representative since it contains an experiment with a very large number of training epochs.”\n\nTo address this question, we added SuppFigure 5, SuppFigure 6 and the following text in the supplementary material:\n\n“SuppFigure 5 and SuppFigure 6 are the equivalents of Figure 4 in the main paper but supplemented with training loss curves in its bottom row. The results show that Adam and its variants with decoupled weight decay converge faster (in terms of training loss) on CIFAR-10 than the corresponding SGD variants (the difference for ImageNet32x32 is small). As is discussed in the main paper, when the same values of training loss are considered, AdamW demonstrates better values of test error than Adam. 
Interestingly, SuppFigure 5 and SuppFigure 6 show that restart variants AdamWR and SGDWR also demonstrate better generalization than AdamW and SGDW, respectively. ”\n\nWhile in the paper we noted that restarts help to obtain better anytime performance, we didn’t pay attention to the fact that they also show better test errors for the same levels of training errors; this observation was made while answering this question, thank you.\n\n ***\n\n“2. While AdamWR delivers much better test accuracy than Adam, it is still slightly worse than SGDWR method.”\n\n\nWe agree that some difference is still present on CIFAR-10 while the two are almost indistinguishable on ImageNet32x32. The largest part of the difference between SGD and Adam was linked to weight decay and L2 regularization, but we believe that this is not the case anymore for SGDW and AdamW. We tend to believe that the issues often cited for “adaptive methods may converge to sharp local optima” are present and we hope that our findings on weight decay regularization will complement new methods which attempt to address these issues. \n\n ***\n\n“I would also recommend to change scale of y-axis, Figure 4, right. Since 0.5% percent difference can be significant for state-of-the-art classification results.”\n\nThanks, we agree and modified Figure 4, right, in response to this question. The new version includes a subfigure which better shows the very last epochs. \n\n\nThanks again for your review!\n", "In this paper, the authors investigate a very simple but still very interesting idea of decoupling weight decay and gradient step. It is a well known problem that Adam optimization method leads to worse generalization and stronger overfitting than SGD with momentum on classification tasks despite its faster convergence. The authors tried to find a reason for such behavior. They noticed that while SGD with L2 regularization is equivalent to SGD with weight decay, it is not the case for adaptive methods, such as Adam. The main contributions include the following:\n1. Improvement of Adam method via decoupling weight decay and optimization step and using warm restarts. The authors thoroughly investigated the proposed idea on different learning rate schedules and different datasets. It would also be interesting to see results on architectures other than ResNet. In section 4.5 the authors claim that the proposed idea was used in different settings by many authors. So, I would recommend to elaborate on this section in the final version of the paper.\n2. Reducing sensitivity of SGD to weight decay parameter. The authors noticed that the optimal weight decay parameter depends on the number of training epochs, therefore they proposed a functional form of dependency between weight decay and the number of batch passes. \n\nI also have the following concerns:\n1. One of the main advantages of Adam is the speed of convergence. Does AdamW or AdamWR converge faster than the corresponding SGD method? Figure 4 is not quite representative since it contains an experiment with a very large number of training epochs.\n2. While AdamWR delivers much better test accuracy than Adam, it is still slightly worse than SGDWR method.\n\nI would also recommend to change scale of y-axis, Figure 4, right. Since 0.5% percent difference can be significant for state-of-the-art classification results.\n\n\nOverall, the paper is written clearly and organized well. It contains a lot of experiments and proposes an explanation of the observed phenomena. 
While the idea is very simple, the experimental results show its efficiency.\n", "This review has been somewhat challenging to complete. As the authors write, this work has already been impactful and motivated a great deal of further research. The empirical evaluation is convincing and the results have been reproduced and further studied by others. A moderate amount of space in the paper (Section 3, Section 4.5) is used to refer to work motivated by the paper itself. While I do not take issue with this I believe it should be considered for the final decision (in the sense that disentangling the contributions of the authors and related work becomes tricky). With this said, I continue with my review.\n\nPaper summary: The authors observe that L2 regularization is not effective when using the Adam optimizer. By replacing L2 regularization with decoupled weight decay the authors are able to close the generalization gap between SGD and Adam and make Adam more robust to hyperparameter settings. The empirical evaluation is comprehensive and convincing.\n\nDetailed comments:\n\n1) The authors emphasize the fact that L2 regularization and weight decay are not the same for different optimizers and claim that this goes against the belief of some practitioners. In my experience, most practitioners would not be surprised by this observation itself. The second observation made by the authors, that L2 regularization is not effective in Adam, is the more interesting (and perhaps surprising) observation.\n\n2) I am not convinced of the importance of Proposition 3. In practice, adaptive methods will have a preconditioner which depends locally on the parameters. I understand the motivation from the previous paragraph but felt that the formal result added little.\n\n3) Section 3 introduced the Bayesian filtering perspective of stochastic optimization. The authors share the observation of Aitchison, 2018 that decoupled weight decay can be recovered in this framework. My interpretation is that this observation is important _because_ of the empirical observations in this paper and does not necessarily provide theoretical support for the approach. However, the last paragraph of Section 3 seems to utilize the Bernstein-von Mises theorem to promote the idea that with large datasets the prior distribution is unimportant (and is ignored). I am not sure that I follow this argument. For example, this claim seems to be completely independent of the optimization algorithm used and moreover Propositions 1,2, and 3 are independent of the data distribution. I suspect that this confusion is due to a misunderstanding on my part and would appreciate clarification from the authors.\n\n4) The empirical evaluation in this paper is very strong and these practical techniques have already been adopted by the community in addition to spurring novel research. The empirical observation broadly explores two directions: decoupled weight decay leads to separable hyperparameter search spaces (meaning optimization is less sensitive to hyperparameters), and decoupled weight decay gives improved generalization (and training performance). Both claims are explored throughly with strong evidence given for the improvement due to AdamW.\n\nOverall, I find this paper to be presented well and with convincing empirical results. I feel that the theoretical justification for decoupling weight decay are a little weak, and believe that other work is moving towards better explanations then the ones presented in this paper [1,2,3]. 
Despite this, I believe that this paper should be accepted.\n\n\nMinor comments:\n\n- I find the notation in the paper confusing in general. x is used to denote weights, and w to denote hyperparameters (e.g. w' for L2 regularization scale and w for weight decay scale). I don't see why it wouldn't be preferable to use the more standard W for weights, x for inputs, and lambda for hparams.\n- Figure 4: it is difficult to distinguish between Adam and SGDWR (especially left).\n\n\n\nClarity: The paper is well written and clear. I find the notation confusing in places, but is consistent throughout.\n\nOriginality: This paper presents original findings but occasionally relies on work motivated by itself to convince the reader of its importance. I do not think that this subtracts from the value of the work.\n\nSignificance: The work is clearly significant. Even without knowing that practitioners have adopted the techniques presented in this work, the paper clearly distinguishes itself with strong empirical results.", "This paper first identifies an inequivalence between L2 regularization and the original weight decay in adaptive stochastic gradient methods, e.g., the Adam method, and then proposes two decoupled variants, SGDW and AdamW, respective. The authors also cited a recent work to provide a justification of their proposed update rules from the perspective of Bayesian filtering. To demonstrate the effectiveness of both methods, experiments on CIFAR10 and ImageNet32x32 are conducted to compare with the original methods. Results show that the proposed methods consistently lead to faster convergence. Overall the paper is well written and easy to follow, with enough details describing the experimental settings. \n\nFirst of all I appreciate the authors pointing out that weight decay is not equal to L2 regularization in general. This is evident once the original definition of weight decay is given. The main motivation comes from the argument that instead of using L2 regularization, weight decay should be used in adaptive gradient methods. The Bayesian filtering interpretation helps to justify the proposed method. But it is not clear to me why the hyperparameters w and \\alpha are decoupled in the proposed methods? For example, in Line 6 of Alg. 1, g_t is a function of w, and later in Line 8, g_t is coupled with \\alpha which naturally introduces a term w \\alpha into m_t. So both w and \\alpha are still coupled together in the proposed algorithm. If this is the case why the authors still call w and \\alpha decoupled? \n\nTo me the most interesting result is Proposition 3 where the authors show that weight decay actually corresponds to preconditioned L2 regularization. This helps to explain what's the algorithmic difference between these two methods in adaptive gradient methods, and provides an intuitive insight on why weight decay may lead to better results compared with the vanilla L2 regularization. \n\nExperiments on image recognition tasks basically confirm the authors' claims. However, as the authors have already pointed out, it is better to have more thorough experiments on other kinds of tasks, e.g., in text classification, etc. If the improvement does come from the difference between weight decay vs L2, then I would also expect the same improvement on other tasks. It would be great to see more experimental results on other tasks to have a better understanding of this problem. So far it is not clear whether the same improvement holds in general or not. \n" ]
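The exchange above hinges on where the decay term \lambda\theta enters the update. The sketch below contrasts the two variants in plain NumPy; it is our own minimal illustration, not the authors' code, and details such as the bias-correction form and whether the decay is scaled by the learning rate or by a separate schedule multiplier are assumptions that vary across implementations.

```python
import numpy as np

def adam_l2_step(theta, grad, m, v, t,
                 alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, lam=1e-2):
    """Adam with L2 regularization: lam * theta is folded into the gradient,
    so the decay passes through the moment estimates and is rescaled
    per-coordinate by the adaptive preconditioner."""
    g = grad + lam * theta                      # coupled: decay enters g_t
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def adamw_step(theta, grad, m, v, t,
               alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, lam=1e-2):
    """AdamW (decoupled): the moments see only the loss gradient; the decay
    is applied directly to theta and never touches the preconditioner."""
    g = grad                                    # decoupled: no lam in g_t
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps) - alpha * lam * theta
    return theta, m, v
```

The only difference is the placement of `lam * theta`: in the first variant it is effectively divided by `sqrt(v_hat)`, so weights with large historical gradients are decayed less, which is exactly the coupling the rebuttal describes.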
[ -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "rJxk4OZ5AQ", "rJl_LZZcA7", "iclr_2019_Bkg6RiCqY7", "rkgKJ4AXhX", "rJlYWZMYhm", "rkeDkABcnm", "iclr_2019_Bkg6RiCqY7", "iclr_2019_Bkg6RiCqY7", "iclr_2019_Bkg6RiCqY7" ]
iclr_2019_Bkg8jjC9KQ
Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile
Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence. We first show that ordinary, “vanilla” MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an “extra-gradient” step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets).
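The abstract's central contrast (vanilla gradient steps failing on bilinear problems while the extra-gradient step converges) can be reproduced in a few lines. The toy problem below is our own illustration, not an experiment from the paper: min_x max_y f(x, y) = x*y, whose unique saddle point is the origin.

```python
import numpy as np

def g(z):
    """Gradient field for simultaneous play on f(x, y) = x * y:
    descend in x, ascend in y, i.e. (df/dx, -df/dy) = (y, -x)."""
    x, y = z
    return np.array([y, -x])

eta = 0.1
z_gd = np.array([1.0, 1.0])     # vanilla simultaneous gradient descent/ascent
z_eg = np.array([1.0, 1.0])     # extra-gradient
for _ in range(200):
    z_gd = z_gd - eta * g(z_gd)
    z_lead = z_eg - eta * g(z_eg)   # exploratory "extra" step
    z_eg = z_eg - eta * g(z_lead)   # update using the gradient at the lead point

print(np.linalg.norm(z_gd))  # ~3.8: each vanilla step inflates the norm by sqrt(1 + eta^2)
print(np.linalg.norm(z_eg))  # ~0.52: the extra-gradient iterate spirals in to (0, 0)
```

On this rotation-like vector field the vanilla iterate diverges for any constant step size, while the extra-gradient update contracts by a factor of sqrt(1 - eta^2 + eta^4) per step.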
accepted-poster-papers
This paper investigates the use of the extragradient step for solving saddle-point problems with non-monotone stochastic variational inequalities, motivated by GANs. The authors propose an assumption weaker than/different from the pseudo-monotonicity of the variational inequality for their convergence analysis (that they call "coherence"). Interestingly, they are able to show (asymptotic) last-iterate convergence for the extragradient algorithm in this case (in contrast to standard results, which normally require averaging of the iterates for the stochastic *and* monotone variational inequality, such as the cited work by Gidel et al.). The authors also describe an interesting difference between the gradient method without the extragradient step (mirror descent) vs. with it (which they call optimistic mirror descent). R2 thought the coherence condition was too related to the notion of pseudo-monotonicity, for which one could easily extend previously known convergence results for stochastic variational inequalities. The AC thinks that this point was well answered by the authors' rebuttal and in their revision: the conditions are sufficiently different, and while there is still much to do to analyze non-monotone variational inequalities under realistic assumptions, this paper makes some non-trivial and interesting steps in this direction. The AC thus sides with expert reviewer R1 and recommends acceptance.
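For reference, the coherence condition that the meta-review and the reviews below keep returning to can be stated compactly. This is our paraphrase of how the condition is described in the abstract and rebuttal (with X the feasible region, X* the solution set of the saddle-point problem (SP), and g the associated gradient field), not a verbatim quote of the paper's Definition 2.1.

```latex
% Coherence: every saddle point solves the (Minty) variational inequality
\[
  \langle g(x),\, x - x^{\ast} \rangle \ \ge\ 0
  \qquad \text{for all } x \in \mathcal{X} \text{ and all } x^{\ast} \in \mathcal{X}^{\ast},
\]
% with strict coherence additionally requiring that equality holds only at solutions:
\[
  \langle g(x),\, x - x^{\ast} \rangle \ =\ 0
  \quad \Longleftrightarrow \quad x \in \mathcal{X}^{\ast}.
\]
```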
test
[ "r1x7Pw_4yE", "HJxro3I4kV", "rkxGKGDY6m", "r1lVGZvtTm", "SkxF31vKpm", "HkxnI7tn3Q", "H1xUfVr9hX", "SyeTm7oIhm" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Many thanks for the extra round of feedback and the encouraging remarks! We reply to the points you raised below:\n\n1. Regarding the example of a coherent problem with a general convex solution set.\n\nAgain, for simplicity, focus on the optimization case, i.e., the minimization of a function f:X->R (X convex). In this case, letting X* = argmin f, and writing g(x) for the (sub)gradient of f, the (strict) coherence requirement takes the form:\n 
- <g(x),x-x*>≥0 for all x in X and all x* in X*.\n - Equality holds above if and only if x lies in X*.\n\nNow, fix some convex subset C of X, and let f(x) = dist(x,C)^2 (where dist denotes the standard Euclidean setwise distance). By construction, f is convex (though not strictly so) and X*=C. Convexity guarantees the first requirement of coherence. For the second, note that g(x) is a multiple of x - proj_C(x) so, for any x* in X*, the product <g(x),x-x*> vanishes only if x lies itself in C (since C=X*).\n\nOf course, the above function is convex, but if we perturb f away from C = X* in an appropriate way, non-convex examples can also be constructed (though there are diminishing returns regarding the simplicity of the resulting example).\n\n[NB: just to avoid any misunderstanding, the above concerns the definition of coherence as presented in the *original* version of the paper; the current version includes examples with non-convex solution sets like x^2 y^2 as we outlined in our first reply.]\n\n\n2. Thanks for the pointer to Chen and Rockafellar, it looks very promising for future study! The reviewer's suggestion seems very plausible but the devil is often in the details, so we would need more time in order to provide a more definitive reply.\n\n\nWe cannot revise the paper at this time, but we'd of course be happy to do so along the lines above if accepted.", "Thank you for you detailed answer.\n\n\"[We can provide a concrete example if the referee finds this useful]\" would love to.\n\nRegarding 3. I would like to say that the strict coherence assumption is an extension of the strict monotonicity assumption with which you can also prove last iterate converge. Nemirovski, Nesterov, Juditski focus on general monotonicity (the equivalent of you general coherence with which you do not prove any last iterate convergence result)\nAn interesting point I would like to make is that Last iterate convergence have been proven in the literature under the *strong* monotonicity assumption see for instance [Chen et al. 1997] (the Forward-backward algorithm is a generalization of the MD algorithm). Maybe you could have convergence rate under a *strong* coherence assumption (but also raising the question to what extend *strong* coherence assumption is realistic)\n\n\nChen, George HG, and R. Tyrrell Rockafellar. \"Convergence rates in forward--backward splitting.\" SIAM Journal on Optimization 7.2 (1997): 421-444.\n\n", "We thank the reviewer for their constructive remarks! We reply point-by-point below:\n\n1.\tTo be sure, coherence does not cover all GAN problems: GANs can be so complex that we feel that any endeavor to account for all problems would be chimeric (at least, given our current level of understanding of the GAN landscape). Being fully aware of this, our goal in this paper was simply to provide concrete theoretical evidence that the inclusion of an extra-gradient step can help resolve many of the problems that arise in practice (and, in particular, cycling and oscillatory mode collapses). In this regard, our paper tackles a significantly wider framework than the 2018 ICLR paper of Daskalakis et al. which only addressed bilinear models.\n\nFurthermore, we would like to point out that Corollaries 3.2 and 3.3 are only *sufficient* conditions for coherence. To make an analogy with convex analysis, in practice, when trying to determine whether a given function is convex, one of the standard techniques is to show that its Hessian matrix is diagonally dominant - and, hence, positive-semidefinite. 
Obviously, this is just a sufficient condition, but it is still useful in practice. We view Corollaries 3.2 and 3.2 in a similar light: they show that our results cover a wide array of cases of practical (and theoretical) interest, without attempting to be exhaustive.\n\n\n2.\tRegarding the relation with pseudo-monotonicity: despite any formal similarities, we would like to point out that coherence and pseudo-monotonicity can be quite different. As an example, take the objective function (2.2) in our paper: for x_1 = 1/2, we get f(1/2,x_2) = (x_2^2 - 2)^2 (4 + 5x_2^2) / 16, which has *two* well-separated maximizers, i.e. it is not even quasi-concave - implying in turn that (2.2) is not pseudo-monotone (it is, in fact, multi-modal in x_2).\n\nMoreover, as we pointed in our reply to Reviewer 2, the version of coherence that we presented was the simplest possible one (and we did so for reasons of clarity and ease of presentation). Our definition can be weakened substantially by considering the following definition of \"weak coherence\":\n\nDefinition: We say that f is weakly coherent if:\n(i) There exists a solution p of (SP) that satisfies (VI).\n(ii) Every solution x* of (SP) satisfies (VI) locally, i.e., g(x) (x - x*) ≥ 0 for all x sufficiently close to x*.\n\nAs we pointed out in our reply to Reviewer 2, under this *weaker* definition of coherence, the solution set of (SP) need no longer be convex, thus making the difference with pseudo-monotone problems even more pronounced. As a very simple example, consider the case where Player 1 controls x,y in [-1,1], and the objective function is f(x,y) = x^2 y^2, i.e., Player 2 has no impact in the game (just for simplicity). In this case, the solution set of the problem is the cross-shaped set X* = {(x,y) : x=0 or y=0}, which is non-convex - in stark contrast to the convex structure of the solution set of pseudo-monotone problems.\n\nWe will update our manuscript accordingly as soon as possible to make this change!\n\nWe will also include a detailed discussion of the paper by Noor et al. - we were not aware of it, and we thank the reviewer for bringing it to our attention.\n\n\n3.\tRegarding the integration of Adam in our proof technique: we agree with the reviewer that this is a worthwhile extension, but not one that can be properly undertaken without completely changing the structure of the paper and its focus. Adam has a very specific update structure and requires the introduction of significant machinery to handle theoretically, so we do not see how it can be done without greatly shifting the scope and balance of our treatment and analysis.", "We thank the reviewer for their in-depth remarks and positive evaluation! We reply point-by-point below:\n\n\n1.\tRegarding the structure of the solution set of a coherent problem: we agree that this structural question can be investigated further but, given space constraints, we are concerned that this might potentially dilute the focus of the paper. Nevertheless, we would like to take advantage of the openreview format to answer in detail the referee's questions regarding the solution set of a coherent problem:\n- As the referee already points out, uniqueness can be easily taken care of by considering the constant function: the solution set of this problem is the entire feasible region, though the problem is null coherent [and vacuously strictly coherent if we interpret Definition 2.1 to hold for the empty set in the case of strict coherence.] 
More interesting examples with a zeroed-out direction also exist: for instance, the problem f(x_1,x_2) = x_1^2 is strictly coherent, but its solution set is an affine space.\n- Whether the solution set is an affine space intersected with the set of constraints: in the current formulation, it can be shown that the solution set of a coherent problem is a convex space, though not necessarily one obtained as the intersection of an affine set with the feasible region. [We can provide a concrete example if the referee finds this useful]\n- However, as we state in the paper, the definition of coherence can be weakened substantially, and our results still go through. Specifically, consider the following definition of \"weak coherence\":\n\nDefinition: We say that f is weakly coherent if:\n(i) There exists a solution p of (SP) that satisfies (VI).\n(ii) Every solution x* of (SP) satisfies (VI) locally, i.e., g(x) (x - x*) ≥ 0 for all x sufficiently close to x*.\n\nUnder this *weaker* definition of coherence, the solution set of (SP) need no longer be convex! To see this, consider a very simple optimization example where Player 1 controls x,y in [-1,1], and the objective function is f(x,y) = x^2 y^2 (i.e., Player 2 has no impact in the game, just for simplicity). In this case, the solution set of the problem is the cross-shaped set X* = {(x,y) : x=0 or y=0}, which is non-convex!\n\nWe chose to focus on the case where the solutions of (SP) and (VI) coincide for simplicity and clarity of presentation; however, we will update our manuscript accordingly as soon as possible to make this change!\n\n\n2.\tIndeed, the results are only asymptotic - but, as the reviewer states, we know of virtually no other results at this level of generality, and the analysis has to start somewhere. We agree that getting rates is an important problem, but we believe that all this cannot be addressed within a single paper.\n\n\n3.\tRegarding the similarity of proof techniques with MD/OMD: we would like to point out that conventional MD/OMD proof techniques are typically quite different as they focus on the convergence of the so-called \"ergodic average\" of the sequence of iterates (see e.g., the cited literature by Nemirovski, Nesterov, Juditski et al., and many others). Averaging techniques rely crucially on the problem being convex-concave and cannot be used in a non-monotone setting; as a result, we took a completely different approach relying on a quasi-Fejér analysis inspired by recent work on Bregman proximal methods in operator theory.\n\n\n4.\tWe concur that our results can be extended to non-zero-sum games, this is a great observation! Again, we did not make this link explicit in our paper for simplicity, but we will definitely update our manuscript accordingly.\n\n\n5.\tRegarding the name \"optimistic mirror descent\". In the original NIPS 2013 paper of Rakhlin and Sridharan, the authors present two variants of OMD: one is essentially the mirror-prox algorithm of Nemirovski (2004), and the other is a \"momentum\"-like variant which was further studied by Daskalakis et al. in their recent 2018 ICLR paper. Regrettably, there is a fair bit of confusion in the literature regarding what \"optimistic\" descent is: personally, we have a strong preference for the original \"mirror-prox\" terminology of Nemirovski (after all, in saddle-point problems, the method is *not* a descent method). 
However, we used the OMD terminology of Rakhlin and Sridharan because it seems to be more easily recognizable in the GAN community.\n\n\n6. Minor comments: We will take care of those, thanks!", "We thank the reviewer for their positive and encouraging feedback! We also feel that the inclusion of an extra-gradient step can greatly enhance the stability of GAN training methods, and can provide further key insights.", "This paper is trying to find a saddle-point of a Lagrangian using mirror descent. Mirror descent based methods use Bregman divergence to encode the convexity and smoothness of objective function beyond the euclidean structure. The main contribution of this paper is adding an extra gradient step to the standard MD, i.e., step 5 in Algorithm 2 as well as stochastic versions. Numerical experiments support their results.", "This work provides the converge proof of the last iterates of two stochastic methods (almost surely) that the author called mirror descent and optimistic mirror descent under an assumption weaker than monotonicity called coherence. \nRoughly, the definition of coherence is the equivalence between being a saddle point and the solution of the Minty variational inequality. \n\nOverall, I think that this paper try to tackle an interesting problem which is to prove convergence of saddle point algorithms under weaker assumption than monotonicity of the operator.\n\nHowever, I have some concerns: \n\n- I think that the properties of coherent saddle point could be more investigated. For instance is the set of coherent saddle point connected ? It would be very relevant for GANs. You claim that \"neither strict, nor null coherence imply a unique solution to (SP),\" but I do not see any proof of that statement (both provided examples have a unique SP). I agree that you can set $g$ to $0$ in some directions to get an affine space a of saddle points but is there examples where the set of solution is not an affine space (intersected with the constraints) ? \n- First of all the results are only asymptotic. (I agree that it can be mitigated saying that there is (almost) no results on non-monotone VI and it is a first step to try to handle non-convexity of the objective functions.)\n- One big pro of this work might have been new proof techniques to handle non-monotonicity in variational inequalities but the coherence assumption looks like to be the weakest condition to use the standard proof technique of convergence of the (MD) and (OMD). Nevertheless, this work is still interesting since it handles in a subtle way stochasticity (I did not have time to check Theorem 2.18 [Hall & Heyde 1980], I would be good to repeat it in the appendix for self-completeness)\n- This work could be easily extended to non zero-sum games which is crucial in practice since most of the state of the art GANs (such as WGAN with gradient penalty or non saturating GAN) are non zero-sum games. \n- Are you sure of the use of the denomination Optimistic mirror descent ? What you are presenting is the extragradient method. These two methods are slightly different, If you look at (5) in (Daskalaki et al., 2018) you'll notice that the updates are slightly different from you (OMD), particularly (OMD) require two gradient computations per iteration whereas (5) in (Daskalaki et al., 2018) requires only one. (it just requires to memorize the previous gradient)\n\nMinor comment: \n- For saddle point (and more generally variational inequalities) Mirror descent is no longer a descent algorithm. 
The name used by the literature is mirror-prox method (see Juditsky's paper) \n- in (C.1) U_n is not defined anywhere but I guess it is $\\hat g_n - g(X_n)$.\n- Some cited paper are published paper but cited as arXiv paper. \n- Lemma D.1 could be extended to the case (\\sigma \\neq 0) but the additional noise term might be hard to handle to get a result similar as Thm 4.1\nfor $\\sigma \\neq 0$.", "Prons: \nThis paper provides an optimistic mirror descent algorithm to solving minmax optimization problem. Its global convergence is guaranteed under the coherence property. The experimental results are promising.\n\nCons: \n1.\tThe coherence property is still a strong assumption. The sufficient conditions provided in Corollary 3.2 and 3.3 to guarantee coherence property are too specific to cover existing GAN models. \n\n2.\tThe current theoretical contribution seems incrementally. From the perspective of operator theory, the coherence property is highly related to the pseudo-monotone property. Extragradient method to solve the pseudo-monotone VIP has already existed in the literature [1]. The proposed OMD can be simply regarded a stochastic extension of [1] and simultaneously generalize the European distance in [1] to Bregman distance. \n\n3.\tThe integrating of Adam and OMD in the experiments is very interesting. To match the experiments, we highly recommend the authors to show the convergence of OMD + Adam with or without coherence condition, rather than requiring a diminishing learning rate.\n\n[1] Noor, Muhammad Aslam, et al. \"Extragradient methods for solving nonconvex variational inequalities.\" Journal of Computational and Applied Mathematics 235.9 (2011): 3104-3108.\n" ]
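Reviewer 2's terminological point (that the paper's (OMD) is the two-call extragradient method, while update (5) of Daskalakis et al. needs a single gradient evaluation plus a stored past gradient) is easy to make concrete. The loop below is our own sketch of the one-call "optimistic" update on the same bilinear toy field used in the earlier sketch; it illustrates the distinction and is not code from either paper.

```python
import numpy as np

def g(z):
    """Bilinear toy field for min_x max_y x * y, as in the earlier sketch."""
    x, y = z
    return np.array([y, -x])

eta = 0.1
z = np.array([1.0, 1.0])
g_prev = g(z)                  # the stored past gradient replaces the extra call
for _ in range(200):
    g_curr = g(z)                           # single gradient evaluation per step
    z = z - eta * (2.0 * g_curr - g_prev)   # optimistic / past-extra-gradient update
    g_prev = g_curr

print(np.linalg.norm(z))       # ~0.5: this variant also contracts to (0, 0)
```

Both variants converge on this example; the practical difference is one oracle call per iteration instead of two, which is precisely the distinction the reviewer draws.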
[ -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "HJxro3I4kV", "r1lVGZvtTm", "SyeTm7oIhm", "H1xUfVr9hX", "HkxnI7tn3Q", "iclr_2019_Bkg8jjC9KQ", "iclr_2019_Bkg8jjC9KQ", "iclr_2019_Bkg8jjC9KQ" ]